arXiv:2309.13443 | Early-Exit with Class Exclusion for Efficient Inference of Neural Networks
Authors: Jingcun Wang, Bing Li, Grace Li Zhang
Published: 2023-09-23 | Link: http://arxiv.org/abs/2309.13443v2

# Early Classification for Dynamic Inference of Neural Networks
###### Abstract
Deep neural networks (DNNs) have been successfully applied in various fields. In DNNs, a large number of multiply-accumulate (MAC) operations are required to be performed, posing critical challenges in applying them in resource-constrained platforms, e.g., edge devices. Dynamic neural networks have been introduced to allow a structural adaption, e.g., early-exit, according to different inputs to reduce the computational cost of DNNs. Existing early-exit techniques deploy classifiers at intermediate layers of DNNs to make a classification decision as early as possible. However, the learned features at early layers might not be sufficient to exclude all the irrelevant classes and decide the correct class, leading to suboptimal results. To address this challenge, in this paper, we propose a class-based early-exit for dynamic inference. Instead of pushing DNNs to make a dynamic decision at intermediate layers, we take advantage of the learned features in these layers to exclude as many irrelevant classes as possible, so that later layers only have to determine the target class among the remaining classes. When only one class remains at a layer, this class is the corresponding classification result. To realize this class-based exclusion, we assign each class with a classifier at intermediate layers and train the network together with these classifiers. Afterwards, an exclusion strategy is deployed to eliminate irrelevant classes at early layers. Experimental results demonstrate the computational cost of DNNs in inference can be reduced significantly with the proposed early-exit technique.
## I Introduction
In the past decade, deep neural networks (DNNs) have achieved remarkable breakthroughs in various fields, e.g., image classification [1] and object detection [2]. In DNNs, a large number of floating-point operations (FLOPs), mainly consisting of multiply-accumulate (MAC) operations, need to be executed. For example, ResNet50 [1] requires 4.1G MAC operations and around 8.2G FLOPs to predict a classification result for a \(224\times 224\) image. This tremendous computational cost poses critical challenges in applying DNNs in resource-constrained hardware platforms, e.g., edge devices.
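For intuition on where these counts come from, the MACs of a single convolutional layer follow directly from its shape, and FLOPs \(\approx\) 2\(\times\) MACs since each MAC comprises one multiplication and one addition. A minimal sketch, with layer dimensions chosen purely for illustration:

```python
def conv2d_macs(c_in, c_out, kernel, h_out, w_out):
    """MACs of a standard 2D convolution: one multiply-accumulate
    per kernel weight per output element."""
    return c_out * c_in * kernel * kernel * h_out * w_out

# Illustrative only: a single 3x3, 256-to-256 conv on a 56x56 feature map.
macs = conv2d_macs(256, 256, 3, 56, 56)
print(f"{macs / 1e9:.2f} G MACs, {2 * macs / 1e9:.2f} G FLOPs")
```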
To reduce the computational cost of executing DNNs, various techniques have been introduced at the software level. For example, pruning [3][4] reduces the number of MAC operations and FLOPs by removing unnecessary weights. Quantization [5][6] approximates floating-point MAC operations with fixed-point ones to reduce their computational complexity. Knowledge distillation [7] transfers the knowledge of a large DNN model into a compact model with fewer MAC operations. Efficient DNN architectures such as MobileNet [8] and SparseNN [9] have also been introduced to improve computational efficiency. Neural architecture search [10] explores more efficient neural network architectures.
The previous solutions described above assume the structure of a DNN model is static, indicating that all input data need to flow to the end of the DNN. Such a static structure leads to a low computational efficiency in processing different inputs. To address this issue, previous work developed dynamic neural networks [11], attempting to adapt their structures, e.g., depth, according to different inputs. Neural networks with early-exit at their intermediate layers are a representative type of dynamic neural networks. Such an early-exit dynamic network incorporates multiple output branches, also called exit points, at intermediate layers of the network. In inference, if the feature maps at an exit point are sufficient to make a correct classification, the corresponding classifier determines the class with the largest probability and the inference terminates to reduce computational cost.
Previous work has explored early-exit dynamic neural networks with various techniques. For example, BranchyNet [12] manually inserts two exit points at pre-defined intermediate layers of DNNs. MSDNet [13] uses early-exit to enable an anytime/budgeted classification for DNNs. [14] enhances the accuracy of early-exit dynamic networks by applying gradient equilibrium and one-for-all knowledge distillation. EPNet [15] develops a lightweight early-exit structure and determines the exit policy using a Markov decision process. Shallow-Deep Networks [16] exploit early-exit to address the overthinking problem in neural networks. [17] proposes a sample weighting technique to enhance the accuracy of classifiers at each exit point. EENet [18] uses a multi-objective training to fine-tune an early-exit policy to enhance the computational efficiency.
In the previous early-exit work described above, once the learned features at a layer are not able to decide the correct class, all the intermediate computation results are discarded and the exit condition is evaluated anew at the next layer. Accordingly, some of the computation performed at early layers may be wasted, which lowers computational efficiency. To overcome this problem, we propose a class-based early-exit strategy for dynamic inference in which the learned features in early layers are exploited to exclude as many irrelevant classes as possible, so that the target class can be rapidly revealed before the last layer is reached. _A class is defined as an image category, e.g., cat. An excluded class means that it can no longer be considered as the correct class_. The key contributions of this work are summarized as follows:
* A novel class-based early-exit framework for dynamic inference is proposed, where the correct classification result of an input is determined by ruling out all the irrelevant classes at intermediate layers of DNNs. An early decision can be made when only one class remains.
* To exclude irrelevant classes at intermediate layers of DNNs, the learned features in these layers are compressed and used to develop a class-exclusion decision network for each individual class.
* To maximize the number of classes that can be excluded, the training of a DNN is adjusted by modifying the cost function to consider the accuracy of the class-exclusion networks.
* To maintain a high classification accuracy while reducing the computational cost of DNNs, a class-exclusion strategy in inference is determined by a search algorithm.
* Experimental results demonstrate that the number of FLOPs of DNNs can be reduced by up to 33.06% while maintaining a high inference accuracy. Compared with the previous early-exit technique, the class-based early-exit framework achieves better computational efficiency.
The rest of this paper is organized as follows. Section II describes the background and motivation of this work. Section III explains the proposed class-based early-exit framework for dynamic inference. Sections IV and V present the experimental results and conclusions, respectively.
## II Background and Motivation
### _Background_
To overcome the drawback of static DNN architectures, which incur the same computational cost for all input data, dynamic neural networks have been introduced to dynamically adjust their structures conditioned on each input. For example, some inputs are much easier to differentiate from others, so it may be possible to spend less computation on inference for such "easy" inputs and more computation on "hard" inputs.
Early-exit, which allows "easy" inputs to be classified and exit at shallow layers without executing deeper layers, is one strategy to realize dynamic neural networks. An early-exit dynamic neural network consists of two parts, a backbone network and multiple output branches, also called exit points, as illustrated in Fig. 1(a). The backbone can be any neural network, such as AlexNet [19] or ResNet [1]. An exit point typically consists of a simple classification neural network that takes compressed feature maps from the backbone as inputs and uses the SoftMax function as its output activation function to generate prediction probabilities. If the feature maps at an exit point are sufficient to make a correct classification, the corresponding classifier determines the class with the largest probability, and the inference terminates, as shown in Fig. 1(a).
Previous work has developed various early-exit policies. For example, BranchyNet [12] evaluates the entropy of classifier outputs at each exit point. If the maximum/minimum entropy is larger/smaller than a predefined value, the classification is made at this layer by selecting the class with the highest probability. MSDNet [13] uses either a latency restriction or a computation budget as the exit policy. Once the specified latency is reached, the classification result in the previous layer is determined as the final result. Shallow-Deep Networks [16] and [14] use a probability threshold as the exit criterion. Once the maximum probability of classifier outputs is higher than the threshold, early-exit is triggered. EPNet [15] uses a controller to decide whether the outputs of classifiers at intermediate layers are confident enough to make a decision. EENet [18] trains an additional neural network to determine whether an input can exit or not at an intermediate layer.
### _Motivation_
Previous early-exit techniques aim to push DNNs to make a classification decision as early as possible. However, the learned features at early layers might not be sufficient to exclude all the irrelevant classes and decide the correct class. In this case, all the intermediate computation results are discarded and the exit condition is evaluated anew at the next layer, leading to wasted computation.
To overcome the drawback of the previous early-exit techniques, we take advantage of the learned features at early layers to exclude as many irrelevant classes as possible, so that later layers only have to determine the target class among the remaining classes. Excluding irrelevant classes with the learned features at intermediate layers is viable, since such features can already differentiate between classes. For example, the feature of a wheel learned at one intermediate layer can directly determine that an input does not belong to the classes cat and dog. As more and more irrelevant classes are excluded, the classification decision becomes clear, facilitating a rapid early exit. Once only one class remains, the classification result is obtained naturally.
Fig. 1(b) illustrates the concept of the class-based early-exit for dynamic inference. Different from the traditional early-exit in Fig. 1(a), the irrelevant classes are excluded by exploiting the learned feature maps in the first two layers. Since there is only one remaining class after class exclusion at the second layer, the remaining class is thus the target class. According to this figure, the class-based early-exit strategy has good potential to exit earlier than the traditional early-exit techniques, leading to better computational efficiency.

Fig. 1: (a) Traditional early-exit strategy. (b) The proposed class-exclusion early-exit strategy.
## III The Proposed Class-Based Early-Exit Framework
In this section, we will introduce the proposed class-based early-exit framework in detail. Section III-A describes the construction of class-exclusion neural networks for ruling out irrelevant classes at each intermediate layer. Afterwards, the class-exclusion aware training is introduced in Section III-B to enhance both the inference accuracy of the backbone network and the exclusion accuracy of class-exclusion networks. The class-exclusion strategy for dynamic inference is then explained in Section III-C. To determine the optimal threshold in the class-exclusion strategy, a search algorithm is proposed in Section III-D.
### _Construction of Class-Exclusion Neural Networks for Excluding Irrelevant Classes_
To exclude irrelevant classes at intermediate layers, we exploit the learned features in these layers to construct class-exclusion neural networks. Since the decision to exclude one class should not depend on the other classes, we assign an individual class-exclusion network to each class to determine whether this class can be ruled out for an input, as illustrated in Fig. 2. To reduce the computational cost incurred by the class-exclusion networks, a simple fully-connected layer is used as the structure of such a network.
For the inputs of each class-exclusion network at an intermediate layer, the feature maps at this layer can be flattened and used as the inputs. However, the large size and the large number of feature maps at an intermediate layer lead to a high computational cost. To address this issue, the feature maps at an intermediate layer are compressed with global average pooling operations, as shown in Fig. 2.
For the output of each class-exclusion network, the Sigmoid function in Equation 1 is used as the activation function instead of SoftMax:
\[\delta(x)=\frac{1}{1+e^{-x}} \tag{1}\]
where \(x\) is the output of the corresponding neuron in the class-exclusion network. Sigmoid function allows each class to make a class-exclusion decision independently, while with SoftMax function, the probability of a class inevitably affects the probabilities of the remaining classes. The criterion for excluding irrelevant classes based on the output of the Sigmoid activation function will be explained later.
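A minimal PyTorch sketch of one such exit point follows, assuming the structure described above (global average pooling followed by a single fully-connected layer with Sigmoid outputs). Since the per-class exclusion networks share only the pooled features, they can be written as one linear layer with an independent output neuron per class; the layer sizes are illustrative, not taken from the released code:

```python
import torch
import torch.nn as nn

class ClassExclusionHead(nn.Module):
    """One exit point: global average pooling compresses the feature maps,
    and an independent Sigmoid output per class decides whether that class
    can be excluded."""
    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # compress feature maps
        self.fc = nn.Linear(in_channels, num_classes)  # one neuron per class

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        x = self.pool(feats).flatten(1)
        return torch.sigmoid(self.fc(x))  # independent per-class probabilities

# Example: feature maps from an intermediate layer, 10 classes (CIFAR10).
head = ClassExclusionHead(in_channels=128, num_classes=10)
probs = head(torch.randn(1, 128, 16, 16))  # shape (1, 10), each in (0, 1)
```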
### _Class-Exclusion Aware Training_
To adjust the backbone neural network to exclude as many irrelevant classes as possible at intermediate layers, we introduce a class-exclusion aware training technique. Specifically, when training the backbone network, the accuracy of each class-exclusion network is also considered. Accordingly, we modify the original cost function for training the backbone network as follows
\[L=L_{CE}+\sum_{i=1}^{N}\sum_{j=1}^{M}L_{ij} \tag{2}\]
where \(L_{CE}\) is the original cross entropy cost function. \(N\) is the number of exit points in the backbone neural network. \(M\) is the number of classes for classification at each early-exit point. For example, assume that a backbone network has 5 convolutional layers to classify 10 classes and each convolutional layer has an early-exit point. In this case, \(N\) is 5 and \(M\) is 10.
\(L_{ij}\) in Equation 2 is the cost function for the \(j\)th class-exclusion network in the \(i\)th early-exit point. Since the Sigmoid function is used as the activation function in each exit point to avoid class interference, binary cross-entropy is used as the cost function as follows
\[L_{ij}=-\alpha\frac{1}{N-i+1}\left(\hat{y}_{ij}\log(y_{ij})+(1-\hat{y}_{ij})\log(1-y_{ij})\right) \tag{3}\]
where \(N\) is the number of exit points, same as that in Equation 2. \(\hat{y}_{ij}\) and \(y_{ij}\) are the true label and the output of the class-exclusion network, respectively. \(\alpha\) is a scaling factor used to prevent the vanishing gradient problem. Different exit points have different coefficients \(\frac{1}{N-i+1}\), with the coefficients of earlier layers being smaller than those of later layers. Such a coefficient setting is used to alleviate the gradient imbalance issue [14].
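A sketch of the resulting training objective, combining Equations 2 and 3, is shown below; averaging the binary cross-entropy over the batch and the exact placement of \(\alpha\) are assumptions of this sketch, not details confirmed by the paper:

```python
import torch.nn.functional as F

def class_exclusion_loss(logits, exit_probs, target, alpha):
    """Equations 2 and 3: cross-entropy on the final output plus
    depth-weighted binary cross-entropy at every exit point.

    exit_probs: list of N tensors (batch, M) of Sigmoid outputs, one per exit.
    """
    loss = F.cross_entropy(logits, target)  # L_CE
    n = len(exit_probs)                     # N exit points
    for i, probs in enumerate(exit_probs, start=1):
        y_hat = F.one_hot(target, probs.shape[1]).float()  # true labels
        bce = F.binary_cross_entropy(probs, y_hat, reduction="sum")
        bce = bce / probs.shape[0]          # mean over the batch
        # coefficient 1/(N-i+1): earlier exits get smaller weight
        loss = loss + alpha / (n - i + 1) * bce
    return loss
```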
### _Class-Exclusion Strategy for Dynamic Inference_
After the class-exclusion aware training, a class-exclusion strategy should be developed to rule out irrelevant classes at intermediate layers in inference. An intuitive idea is to set a probability threshold for each exit point: any class whose probability generated by its class-exclusion network is less than this threshold is excluded. However, such a static threshold cannot balance the probability differences generated by class-exclusion networks for "easy" inputs and "hard" inputs. For example, the output probabilities of class-exclusion networks for an "easy" input tend to be higher than those for a "hard" input. When a large threshold is set, the target class of the "hard" input may be excluded unexpectedly due to its small output probability. When a low threshold is set, no irrelevant classes can be excluded for the "easy" input.
Even though the output probabilities of class-exclusion networks for a "hard" input tend to be small, the ratio of the probabilities of the potential target classes to those of the non-target classes is large. Accordingly, it is possible to exploit the relative magnitude of the probabilities generated by class-exclusion networks to exclude irrelevant classes.

Fig. 2: Construction of class-exclusion neural networks for four classes.
The exploration of relative probability magnitudes realizes a dynamic threshold for class exclusion. Specifically, the maximum probability over all classes generated by the class-exclusion networks, denoted as \(x\), is identified at an intermediate layer. Afterwards, a class-exclusion coefficient, denoted as \(\beta\) and determined by a search algorithm described in the next subsection, is used to rule out irrelevant classes. Any class whose probability output generated by its class-exclusion network is smaller than \(\beta x\) will be excluded.
To avoid the target class being excluded unexpectedly at an intermediate layer, the class with the maximum probability generated by the class-exclusion network in the next layer is identified. If this class was excluded in a previous layer, it is recovered and put back into the remaining classes in the next layer for further class exclusion. Figure 3 illustrates an example of class exclusion based on a dynamic threshold, where the truck class is excluded in the first layer and put back into the remaining classes in the second layer.
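A minimal sketch of this inference-time strategy for a single input is given below; the per-exit Sigmoid outputs and coefficients \(\beta\) are as described above, while the fallback when no early exit triggers is an assumption of this sketch:

```python
import torch

def exclude_classes(exit_probs, betas):
    """Dynamic-threshold class exclusion (sketch).

    exit_probs: list of 1-D tensors, Sigmoid outputs of each exit point.
    betas: class-exclusion coefficient beta for each exit point.
    Returns the predicted class index and the exit index used.
    """
    remaining = torch.ones(exit_probs[0].numel(), dtype=torch.bool)
    for i, (probs, beta) in enumerate(zip(exit_probs, betas)):
        top = int(probs.argmax())
        remaining[top] = True                    # recover if excluded earlier
        remaining &= probs >= beta * probs[top]  # dynamic threshold beta * x
        if int(remaining.sum()) == 1:            # one class left: early exit
            return int(remaining.nonzero()), i
    # no early exit: decide among the remaining classes at the last exit
    masked = exit_probs[-1].masked_fill(~remaining, float("-inf"))
    return int(masked.argmax()), len(exit_probs) - 1
```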
### _Class-Exclusion Criterion Search_
To determine an optimal class-exclusion coefficient \(\beta\) for each exit point, we develop a search algorithm to balance computational cost and inference accuracy. The search starts by initializing the class-exclusion coefficients of all exit points to 0. Afterwards, the layers are ranked by their number of MAC operations, and the class-exclusion coefficient of the layer with the largest number of MAC operations is searched first, since such a layer tends to have more computing resources and thus a better class-exclusion capability than the other layers. After the optimal \(\beta\) for the exit point of this layer is determined, \(\beta\) is fixed for this layer and the search for the next layer starts. The search process continues until \(\beta\) in the layer with the smallest number of MAC operations is determined.
To search for an optimal \(\beta\) for an exit point, \(\beta\) is increased from 0 in steps of 0.01, and the accuracy of the class-based early-exit network is evaluated with this value in each iteration. When the accuracy degradation exceeds a specified value, the previous \(\beta\) value is set as the class-exclusion coefficient for this exit point. The pseudocode of this class-exclusion criterion search is shown in Algorithm 1.
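A compact sketch of this greedy search is shown below; the helper `evaluate_accuracy` and the bound `max_drop` are hypothetical names for the validation-accuracy evaluation and the user-specified degradation limit:

```python
def search_betas(exits_by_macs, evaluate_accuracy, base_acc, max_drop=0.01):
    """Greedy class-exclusion criterion search (sketch of Algorithm 1):
    fix beta for exit points in descending order of layer MAC count,
    growing each beta in 0.01 steps until accuracy drops too much.

    exits_by_macs: exit indices sorted by layer MAC count, largest first.
    evaluate_accuracy: maps a {exit: beta} dict to validation accuracy.
    """
    betas = {e: 0.0 for e in exits_by_macs}
    for exit_idx in exits_by_macs:
        beta = 0.0
        while beta + 0.01 <= 1.0:
            trial = dict(betas)
            trial[exit_idx] = beta + 0.01
            if base_acc - evaluate_accuracy(trial) > max_drop:
                break            # too much degradation: keep the previous beta
            beta += 0.01
        betas[exit_idx] = beta   # fix beta for this exit, move to the next
    return betas
```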
## IV Experimental Results
To demonstrate the effectiveness of the proposed class-based early-exit framework in reducing computational cost, three neural networks, AlexNet [19], VGGSmall [20] and ResNet50 [1], were trained and evaluated on CIFAR10 [21] and CIFAR100 [21]. For all neural networks, each convolutional layer has one exit point. For ResNet50, each layer consists of multiple residual blocks. The coefficient \(\alpha\) in the class-exclusion aware cost function was set to 36 for neural networks on CIFAR10 and 396 on CIFAR100. AlexNet was initialized by transferring the knowledge of an AlexNet pre-trained on ImageNet [22], with an initial learning rate of 0.001 and 50 training epochs. VGGSmall and ResNet50 were trained from scratch with an initial learning rate of 0.001 for 200 epochs. All experiments were conducted on an NVIDIA A100 80GB GPU.
Table I demonstrates the performance of the proposed early-exit framework. The neural networks used and their corresponding datasets are shown in the first column. The second and third columns are the inference accuracy without and with the proposed framework, respectively. According to these two columns, there is only a slight accuracy loss with the proposed framework. To verify the reduction of computational cost in executing neural networks, we compared the average number of FLOPs required to perform inference for an input with and without the proposed method. The FLOPs were evaluated by adding the numbers of multiplications, accumulations and operations incurred by average pooling. The results are shown in the last three columns. According to these results, the average computational cost in inference can be reduced significantly.

Fig. 3: Class exclusion based on a dynamic threshold. According to the dynamic threshold in the first layer, the truck class should be ruled out. This class is recovered and put back to the remaining classes in the second layer since its probability is the maximum one among all classes.

TABLE I: Overall performance of the proposed framework

| NN-Dataset | Acc. Ori. | Acc. Pro. | FLOPs Ori. (G) | FLOPs Pro. (G) | Red. |
|---|---|---|---|---|---|
| AlexNet-CIFAR10 | 90.54% | 89.34% | 1.4386 | 1.0375 | 27.88% |
| VGGSmall-CIFAR10 | 93.89% | 91.93% | 1.5463 | 1.0668 | 31.00% |
| VGGSmall-CIFAR100 | 72.19% | 71.11% | 1.5463 | 1.1535 | 25.40% |
| ResNet50-CIFAR100 | 76.46% | 74.39% | 2.6008 | 1.7411 | 33.06% |
To demonstrate the effectiveness of the proposed early-exit framework, the number of images that can be classified at each exit point is evaluated. The results are shown in Figure 4, where the total number of test images is 10000 for each combination of neural network and dataset. The x-axis shows the layer index. It can be clearly seen that inputs tend not to be classified at the early layers of a neural network, since the low-level features at such layers are not sufficient to achieve accurate classifications. For example, for AlexNet-CIFAR10 and VGGSmall-CIFAR10, no inputs can be classified after the first layer. More inputs tend to be classified in the middle and later layers, since such layers learn more complex features which are sufficient for accurate classification.
To show which types of classes can be classified at each exit point, we evaluated the percentage of each class that can exit early with respect to all exited images. The result is illustrated in Figure 5, where the x-axis is the layer index and the y-axis is the class label, e.g., 0 corresponding to airplane. Dark green indicates a large percentage while light green represents a small percentage. According to this figure, some classes tend to exit earlier than the others. For example, in VGGSmall-CIFAR10, most input images that can be classified at the 3rd layer are from class 1 (automobile) and class 9 (truck). At the exit point of the sixth layer, more classified images are from class 3 (cat) and class 5 (dog). This difference suggests that classifying images of cats and dogs is more difficult than classifying images of automobiles and trucks.
To verify that the proposed method can exclude irrelevant classes at intermediate layers, we evaluated the average number of excluded classes at each intermediate layer. The results are illustrated in Figure 6, where the number of excluded classes at a later exit point includes the number of excluded classes at the previous exit points. According to this figure, it is clearly seen that although a neural network cannot make a decision at the shallow layers, it can still exclude some irrelevant classes. For example, in VGGSmall-CIFAR100, at the end of the first exit point, 75 classes can already be excluded dynamically. It can also be observed that the ability of neural networks to exclude classes improves significantly after going through a few layers. For example, in AlexNet-CIFAR10, at the first exit point, the neural network can hardly exclude any classes. At the second exit point, it can exclude more than six classes. In addition, the ability of neural networks to exclude classes grows very slowly in later layers, indicating that a neural network has to spend more effort to distinguish similar classes.

Fig. 4: The number of input images that can be classified at each intermediate exit point of the neural networks. The x-axis represents the layer index and also the exit point index.

Fig. 5: Percentage of classes that can exit early with respect to the exited images. For CIFAR10, classes 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9 are airplane, automobile, bird, cat, deer, dog, frog, horse, ship and truck, respectively.

Fig. 6: The average number of excluded classes at each exit point of intermediate layers in the neural networks. The x-axis represents the layer index and also the exit point index.

Fig. 7: Percentage of classes that are excluded in each intermediate layer. For CIFAR10, classes 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9 are airplane, automobile, bird, cat, deer, dog, frog, horse, ship and truck, respectively.
Figure 7 shows the percentage of each excluded class with respect to all excluded classes at each exit point, where a dark color represents a large percentage and a light color represents a small percentage. According to this figure, some classes are easier to exclude than others. For example, in VGGSmall-CIFAR10, most of the excluded classes at the first and second exit points come from class 8 (ship), class 6 (frog) and class 1 (automobile). This observation indicates that the features of these classes differ significantly from those of the other classes, so that they can be classified more easily at early layers. Figure 5 also confirms this, where the images of these three classes tend to be classified at exit point 3.
To demonstrate the advantages of the proposed method over the previous confidence-based early-exit strategy, we compared the accuracy and the remaining numbers of MAC operations and FLOPs between the proposed method and the previous method [12]. For a fair comparison, we adapted the method in [12] by adding an early exit at each layer without revising the original network structures. The relative comparison results are shown in Figure 8, where the red, blue and green bars represent the original neural network, the proposed method and the confidence-based early exit, respectively. Under almost the same accuracy, the remaining numbers of MAC operations and FLOPs with the proposed method are smaller than those of the previous confidence-based early-exit method.
## V Conclusion
In this paper, we have proposed a class-based early-exit to realize dynamic inference for DNNs and reduce their computational cost. Specifically, we take advantage of the learned features at intermediate layers to exclude as many irrelevant classes as possible, so that later layers only have to determine the final class among the remaining classes. Once only one class remains at a layer, this class is the corresponding classification result. Experimental results demonstrate that the FLOPs of DNNs in inference can be reduced by up to 33.06%.
---

arXiv:2309.03888 | Full L- and M-band high resolution spectroscopy of the S CrA binary disks with VLT-CRIRES+
Authors: Sierra L. Grant, Giulio Bettoni, Andrea Banzatti, Ewine F. van Dishoeck, Sean Brittain, Davide Fedele, Thomas Henning, Carlo Manara, Dmitry Semenov, Emma Whelan
Published: 2023-09-07 | Link: http://arxiv.org/abs/2309.03888v2

# Full L- and M-band high resolution spectroscopy of the S CrA binary disks with VLT-CRIRES+
###### Abstract
Context: The Cryogenic IR echelle Spectrometer (CRIRES) instrument at the Very Large Telescope (VLT) was in operation from 2006 to 2014. Great strides in characterizing the inner regions of protoplanetary disks were made using CRIRES observations in the L- and M-band at this time. The upgraded instrument, CRIRES+, became available in 2021 and covers a larger wavelength range simultaneously.
Aims: Here we present new CRIRES+ Science Verification data of the binary system S Coronae Australis (S CrA). We aim to characterize the upgraded CRIRES+ instrument for disk studies and provide new insight into the gas in the inner disk of the S CrA N and S systems.
Methods: We analyze the CRIRES+ data taken in all available L- and M-band settings, providing spectral coverage from 2.9 to 5.5 \(\mu\)m.
Results: We detect emission from \({}^{12}\)CO (v=1-0, v=2-1, and v=3-2), \({}^{13}\)CO (v=1-0), hydrogen recombination lines, OH, and H\({}_{2}\)O in the S CrA N disk. In the fainter S CrA S system, only the \({}^{12}\)CO v=1-0 and the hydrogen recombination lines are detected. The \({}^{12}\)CO v=1-0 emission in S CrA N and S shows two velocity components, a broad component coming from \(\sim\)0.1 au in S CrA N and \(\sim\)0.03 au in S CrA S and a narrow component coming from \(\sim\)3 au in S CrA N and \(\sim\)5 au in S CrA S. We fit local thermodynamic equilibrium slab models to the rotation diagrams of the two S CrA N velocity components and find that they have similar column densities (\(\sim\)1-7\(\times\)10\({}^{17}\) cm\({}^{-2}\)), but that the broad component is coming from a hotter and narrower region.
Conclusions: Two filter settings, M4211 and M4368, provide sufficient wavelength coverage for characterizing CO and H\({}_{2}\)O at \(\sim\)5 \(\mu\)m, in particular covering low- and high-\(J\) lines. CRIRES+ provides spectral coverage and resolution that are crucial complements to low-resolution observations, such as those with JWST, where multiple velocity components cannot be distinguished.
## 1 Introduction
The inner 10 au of protoplanetary disks are regions of high temperature and density, where snowlines of abundant molecules (H\({}_{2}\)O and CO\({}_{2}\)) and dust sublimation contribute to the conditions and chemistry. These regions may be the birthplaces of planets, whose properties and composition will be impacted by the conditions in the disk.
High spectral resolution observations of infrared gas tracers are crucial probes of the kinematics and structure of the inner disk. Previous observations in the L- (\(\sim\)3.5 \(\mu\)m) and M-band (\(\sim\)4.7 \(\mu\)m) have shown that these wavelengths offer a unique view into the inner 10 au of disks, tracing the gas both inside and outside the dust sublimation radius, including from disk winds (see Banzatti et al. 2022, 2023 and references therein). The molecular tracers include CO and H\({}_{2}\)O (e.g., Najita et al. 2003; Blake & Boogert 2004; Brown et al. 2013; Banzatti & Pontoppidan 2015; Banzatti et al. 2017, 2022), and OH (Fedele et al. 2011; Brittain et al. 2016), whereas atomic tracers include hydrogen recombination lines, which can be used to determine the accretion rate (Salyk et al. 2013). The high spectral resolution that can be obtained for the emission from these tracers allows for kinematic analysis, including resolving multiple velocity components. This kinematic information can be useful in interpreting lower spectral resolution data, like that from JWST-NIRSpec and MIRI (Banzatti et al. 2023).
Most high spectral resolution L- and M-band studies of disks to-date have been done with VLT-CRIRES, Keck-NIRSPEC, and IRTF-iSHELL (see Table 1 of Banzatti et al. 2022 for an overview). In particular, VLT-CRIRES and Keck-NIRSPEC observations in the 2000s and early 2010s provided great new insight into the gas conditions and structure in the inner 10 au of protoplanetary disks, most powerfully by using the CO fundamental emission lines as a gas diagnostic. These results showed a wide array of line shapes, from the double-peaked line profiles that are typically associated with gas in Keplerian motion, to triangular line shapes associated with disk plus wind emission, and multiple absorption components with a range of blueshifts associated with absorption from a wind (e.g., Bast et al. 2011; Herczeg et al. 2011; Brown et al. 2013; Banzatti et al. 2022). Upgraded
Keck and VLT instruments became available recently (Martin et al., 2018; Dorn et al., 2023). With these high spectral resolution spectrographs available, now is a unique time to characterize the inner regions of protoplanetary disks. This is particularly true as observations by the James Webb Space Telescope (JWST) have larger spectral coverage but lower spectral resolution, and the gas in the inner \(\sim\)10 au of disks is inaccessible to facilities like ALMA.
In this work, we present CRIRES+ Science Verification observations of the S CrA binary system using all of the available L- and M-band filter settings, taken in 2021. The S CrA binary system is composed of two pre-main-sequence stars: S CrA N, a K7 star with a stellar mass of 0.7 \(M_{\odot}\), and S CrA S, an M1 star with a stellar mass of 0.45 \(M_{\odot}\) (Sullivan et al., 2019). The system is still somewhat embedded and the disks are borderline between Class I and II objects. The binary nature of this system complicates the distance determination from Gaia observations; therefore, we adopt a distance of 150 pc, as used by Sullivan et al. (2019). The binary separation in this system is 1.″3, such that both stars can be placed on the CRIRES slit at the same time. This system was also observed using CRIRES prior to the upgrade (oCRIRES), on 2007 Apr 22, 2007 Sep 3, and 2008 Aug 9, as presented in Brown et al. (2013). The binary nature of this system, paired with previous oCRIRES data, makes S CrA an excellent test case for the upgraded CRIRES+.
We aim to provide new insight into the gas composition and conditions in the inner regions of the S CrA disks and characterize the upgraded CRIRES+ instrument with the goal of informing future observations, particularly with respect to spectral settings, to maximize efficiency and scientific output. The observations and data reduction are described in Section 2. The results are presented in Section 3 and are discussed, with an eye towards future observations and using CRIRES+ to complement other observations, in Section 4.
## 2 Observations and data reduction
### Observations
The VLT-CRIRES+ observations that we present here were taken during the Science Verification phase as part of Program 107.22T7 (PI A. Bosman). The data were taken on 18 and 19 September 2021. All L- and M-band spectral settings (seven settings in L-band and nine in M-band) were utilized, which, given the upgraded cross-dispersed nature of CRIRES+, provides full coverage from \(\sim\)2.9-5.5 \(\mu\)m. The slit was placed such that both components of the S CrA binary system were on the slit, using the 0.″2 (\(R\sim\)80000) slit width (Figure 1). Two position angles were observed in the L3340 and M4368 filters, with an offset of 180°. Each filter was observed with an integration time of 10 seconds for three exposures per nodding position and with two ABBA nodding patterns for a total integration time of 240 seconds per filter per position angle. The nod throw was 5 arcseconds along the slit. In addition to S CrA, a telluric standard star, \(\lambda\) Aquilae (\(\lambda\) Aql, a B9V star with a V-band magnitude of 3.43), was observed for telluric correction with an integration of 10 seconds per exposure, two exposures per nod, and two ABBA nodding cycles for a total integration time of 160 seconds.
### Data reduction and calibration
We utilized the ESO CR2RES EsoReflex pipeline, version 1.0.5 (Footnote 1), for the initial data reduction of the 2D images (see Figure 1). The default spectral extraction in the CR2RES pipeline can handle arbitrary slit functions by simply taking all of the flux along the slit. For S CrA, this would result in a single, combined spectrum for both components of the binary. To instead extract each component of the binary separately, we use the EsoRex reduction method on the reduced combined A and B frame images. To do this, we use the slit_frac keyword in the cr2res_util_extract function to specify the regions where the spectra for each star should be extracted. This is shown in Figure 1. This produced an A nodding position and a B nodding position spectrum for each star, in each chip, for all of the observed filters. The A nod spectra of S CrA N, before telluric correction, are shown in Figure 2 to highlight the intrinsic spectral shapes due to the blaze function.
Footnote 1: https://www.eso.org/sci/software/pipelines/crires/crires-pipe-recipes.html
The CRIRES+ slit is tilted on the detector projection, therefore to combine the extracted A and B nod position spectra, we first determine the offset between a given A and B nod pair using the Astropy Specutils template_correlation program. If a good correlation was not found, we identified the shift between the A and B frame spectra by eye. After finding the shift between the spectra, we shift the B spectra to align with the A spectra before combining. This was done for each detector, in each order, for all filters, for S CrA N, S CrA S, and the telluric standard star separately.
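As a rough illustration of this alignment step, the pixel shift between a nod pair can be estimated by cross-correlation. The sketch below uses plain NumPy as a stand-in for the Specutils template-correlation step described above:

```python
import numpy as np

def nod_shift_pixels(spec_a, spec_b, max_lag=20):
    """Estimate the pixel shift between A and B nod spectra by
    cross-correlating their normalized fluxes."""
    a = np.nan_to_num((spec_a - np.nanmean(spec_a)) / np.nanstd(spec_a))
    b = np.nan_to_num((spec_b - np.nanmean(spec_b)) / np.nanstd(spec_b))
    lags = np.arange(-max_lag, max_lag + 1)
    cc = [np.sum(a[max(0, -l):len(a) - max(0, l)] *
                 b[max(0, l):len(b) - max(0, -l)]) for l in lags]
    return lags[int(np.argmax(cc))]  # shift to apply to B before combining
```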
There is no lamp available in the L- and M-band for wavelength calibration. Therefore, telluric sky lines were used to wavelength calibrate the spectra, as was done for oCRIRES data. To do this, we use the telluric-correction tool Molecfit (Kausch et al., 2015; Smette et al., 2015), which performs synthetic modeling of telluric lines. The best-fit Molecfit model provides a wavelength solution that is free from any overall wavelength shifts or stretches/compressions of the spectra due to the slit tilt. Therefore, even if the spectra are not telluric-corrected using Molecfit (see below), using Molecfit is still necessary to get a proper wavelength calibration in the L- and M-band.

Figure 1: An example of the detector image, showing the A (red) and B (blue) nod tracks for S CrA N (bottom track) and S CrA S (top track). The slit is illustrated in the inset on the left, although in reality the slit is tilted. The small grey boxes correspond to the regions where the spectra were extracted for each source and each nod position.
The correction of telluric sky lines can be done with Molecfit or using a telluric standard star, if the standard star was observed close in time and airmass (typically with a difference in airmass up to 0.2; Ulmer-Moll et al. 2019) and in similar observing conditions. For the S CrA observations, \(\lambda\) Aql was observed as the telluric standard. The L-band observations were taken on the night of 18 September 2021 and the difference in airmass between S CrA and \(\lambda\) Aql was greater than 0.2 (S CrA had an airmass of 1.245 to 1.494 while \(\lambda\) Aql had an airmass of 1.857 to 2.881). The M-band data, taken the following night, were observed within an airmass difference of \(\sim\)0.2 (S CrA had an airmass of 1.023 to 1.047 and \(\lambda\) Aql had an airmass of 1.104 to 1.268). Therefore, the L-band data are telluric corrected using Molecfit and the M-band data are corrected using the telluric standard, but we additionally correct the M-band data with Molecfit as a comparison (Figure 3). Although the telluric star was not used to telluric correct the L-band data, we still use these data to correct for instrumental effects (blaze function, fringing, etc.). Finally, early A-type and late B-type stars, which are largely featureless and therefore useful for correcting telluric lines, can have photospheric absorption in the hydrogen lines. We fit this absorption for \(\lambda\) Aql and remove it before using those spectra to telluric correct the science target spectra. We find that higher quality data are achieved when using a telluric standard star to correct for telluric lines and instrumental effects.
WISE Band 2 photometry is typically used for flux calibration of M-band data; however, the resolution of WISE is not sufficient to distinguish each component of the binary. Sullivan et al. (2019) provide photometry for S CrA N and S. From those data, we estimate the 4.7 \(\mu\)m flux of S CrA N to be \(\sim\)2.5 Jy and that of S CrA S to be \(\sim\)1 Jy; however, these fluxes are uncertain. Therefore, normalized spectra are preferentially shown in this paper.
All of the spectral settings were combined into a single spectrum for each target, taking the weighted average in regions where multiple filters overlapped. The final L- and M-band spectra, after combining all filters in each band, are presented in the Appendix in Figures A.1 and A.2, respectively, for S CrA N, and in Figures A.3 and A.4 for S CrA S. The data from oCRIRES are shown for comparison and the coverage of each CRIRES+ L- and M-band spectral setting is shown for reference. The combined spectra are available for both targets on spexodisks.com.
## 3 Results
### Signal-to-noise ratio
We estimated the signal-to-noise (\(S/N\)) ratio from the final spectra by identifying a region of continuum in each detector, order, and filter. We then estimated the \(S/N\) in each of the spectra by determining the dispersion in the selected region. All filters were observed with the same integration time (240 total seconds), therefore the dispersion in \(S/N\) ratio observed depends on the sensitivity of the filter, proximity of the continuum region to strong and/or numerous telluric features, and the flux of the sources which changes as a function of wavelength from 3 to 5.5 \(\mu\)m. For S CrA N, the highest \(S/N\) of \(\sim\)100 is achieved from 3.5 to 4.0 \(\mu\)m. The highest \(S/N\) red-ward of the strong break at 4.3 \(\mu\)m due to telluric absorption, is \(\sim\)70 around 4.6 \(\mu\)m.
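The estimate amounts to the ratio of the mean continuum flux to its dispersion in a line-free region, e.g. (the index range is illustrative):

```python
import numpy as np

def estimate_snr(flux, cont_slice):
    """S/N from a line-free continuum region: mean flux over its dispersion."""
    cont = flux[cont_slice]
    return np.nanmean(cont) / np.nanstd(cont)

# snr = estimate_snr(spectrum, slice(1200, 1300))
```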
In comparison to the oCRIRES data of S CrA N, after taking into account the difference in integration time between observations, we find a sensitivity increase of \(\sim\)10%. The integration time for a given filter in the new CRIRES+ data are shorter than the oCRIRES data, resulting in overall lower \(S/N\), despite the increase in instrument sensitivity between observations.
### Line detections
Across the L- and M-band spectra numerous ro-vibrational emission lines of H\({}_{2}\)O, OH and CO and recombination lines of atomic hydrogen were detected (Figure A.1, A.2, A.3, and A.4), particularly in S CrA N, where the brighter nature of this source resulted in higher \(S/N\) spectra.
#### 3.2.1 Hydrogen
We detect hydrogen recombination lines in the Brackett and Pfund series in both S CrA N and S (S CrA N shown in Figure 4). Br\(\alpha\) has broad wings. Pf\(\gamma\) and Pf\(\delta\) show an additional blue-shifted emission component. Pf\(\beta\) is blended with the \({}^{12}\)CO v=2-1 R(08) line. Pf\(\epsilon\) is blended with an OH doublet. In the future, with larger samples, the relationship between the Br\(\alpha\) line flux and the accretion luminosity should be explored, as was done by Salyk et al. (2013) for the Pf\(\beta\) line. Br\(\alpha\) is a stronger feature and lies in a cleaner part of the spectrum than Pf\(\beta\).
Figure 2: S CrA N CRIRES+ reduced spectra in all available L- and M-band settings. No telluric correction or correction for the blaze function has been done, resulting in the bell shapes. The coverage of observations with oCRIRES for the S CrA system is shown in black. An example of an individual order showing the three chips/detectors is shown in the upper right. Note that between 3.6 and 4.2 \(\mu\)m L- and M-band filters overlap.
#### 3.2.2 OH
Several OH lines are detected in the L-band around 2.93 \(\mu\)m in the S CrA N spectrum, also seen in the oCRIRES data analyzed in Banzatti et al. (2017).
#### 3.2.3 H\({}_{2}\)O
In the L-band of S CrA N, we detect the same water lines previously observed by Banzatti et al. (2017). Given the low \(S/N\) of the spectrum, we did not analyze the L-band water lines and limit ourselves to highlighting the detections. In the M-band we detected H\({}_{2}\)O emission lines from the ro-vibrational \(R\)-branch of the bending mode (\(v_{2}\)), as observed in other disks (Banzatti et al., 2022, 2023). These M-band lines are faint, at the \(\sim\)5% continuum level. The lines are fairly broad, as shown in the last panel of Figure 5. No H\({}_{2}\)O emission is detected in the S CrA S spectrum.
#### 3.2.4 CO
In S CrA N, we detect \({}^{12}\)CO emission lines in the v=1-0, v=2-1 and v=3-2 ro-vibrational branches. \({}^{13}\)CO is instead detected only in the v=1-0 branch. In S CrA S, only the \({}^{12}\)CO v=1-0 lines are detected. We analyzed these lines following the procedure described in Banzatti et al. (2022). Two velocity components are detected in the \({}^{12}\)CO v=1-0 and v=2-1 lines. In the v=3-2 lines only a broad component is observed, while in the \({}^{13}\)CO v=1-0 lines only the narrow component is present. The broad component (BC) has a full width at half maximum (FWHM) of 72 km s\({}^{-1}\) and the narrow component (NC) has a FWHM of 14 km s\({}^{-1}\). In S CrA S, the BC and NC v=1-0 FWHM are 126 km s\({}^{-1}\) and 10 km s\({}^{-1}\), respectively, and multiple absorption components are detected at different blue-shifts, especially in the low \(J\) lines. These velocity components are discussed in terms of emitting radii in Section 4.1.
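For illustration, the decomposition into broad and narrow components can be sketched as a two-Gaussian fit to a stacked line profile; the mock data, initial guesses, and shared centroid below are assumptions of this sketch, while the actual analysis follows the procedure of Banzatti et al. (2022):

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(v, a_bc, fwhm_bc, a_nc, fwhm_nc, v0):
    """Broad + narrow Gaussian components sharing a common centroid v0."""
    s_bc, s_nc = fwhm_bc / 2.355, fwhm_nc / 2.355
    return (a_bc * np.exp(-0.5 * ((v - v0) / s_bc) ** 2) +
            a_nc * np.exp(-0.5 * ((v - v0) / s_nc) ** 2))

velocity = np.linspace(-150, 150, 601)                     # km/s
flux = two_gaussians(velocity, 0.2, 72.0, 0.5, 14.0, 0.0)  # mock BC + NC
flux += np.random.normal(0.0, 0.01, velocity.size)         # mock noise
popt, _ = curve_fit(two_gaussians, velocity, flux,
                    p0=[0.1, 60.0, 0.4, 20.0, 0.0])
print(f"BC FWHM = {popt[1]:.1f} km/s, NC FWHM = {popt[3]:.1f} km/s")
```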
The stacked CO and M-band H\({}_{2}\)O line profiles are presented in Figure 5.
### Other species
The full spectral range in these observations, from \(\sim\)2.9 to 5.5 \(\mu\)m, covers many additional molecular transitions, including those from H\({}_{2}\), C\({}_{2}\)H\({}_{2}\), HCN, NH\({}_{3}\), CH\({}_{4}\), and SO\({}_{2}\). We do not detect any of these additional features.
### Rotational diagrams
With sufficient coverage of multiple ro-vibrational lines, it is possible to use a rotational diagram as a diagnostic tool to study the temperature, density, and excitation processes of the gas producing the observed spectrum. Rotational diagrams, also known as Boltzmann diagrams, show the line flux versus the excitation energy or upper-level quantum number. The quantity on the y-axis is

\[y=\ln\left(\frac{4\pi F_{ul}}{h\nu_{ul}g_{u}A_{ul}}\right) \tag{1}\]

where \(F_{ul}\) is the flux of the line at frequency \(\nu_{ul}\), produced by the transition from the upper level \(u\), which has statistical weight \(g_{u}\), to the lower level \(l\), and \(A_{ul}\) is the Einstein coefficient associated with the transition. If the gas is in local thermodynamic equilibrium (LTE) and the lines are optically thin, the rotational diagram results in a straight line with a slope given by \(-1/T_{\rm kin}\) and an intercept proportional to the total mass of the gas. Different combinations of gas temperature and density can result in a deviation from LTE, whereas optical depth effects can produce distinct curvatures in the rotational diagram (e.g., Goldsmith & Langer 1999). Moreover, different excitation processes such as UV- and IR-pumping can introduce a deviation from a simple straight line in the rotational diagram up to \(E_{\rm up}\sim 7000\) K (Thi et al. 2013). Thus, line coverage up to high \(E_{\rm up}\) values is necessary to identify such deviations.

Figure 4: The hydrogen recombination lines detected in the S CrA N spectra. A \({}^{12}\)CO v=2-1 line is blended with the Pf\(\beta\) feature.

Figure 3: Comparison between selected spectral regions of the S CrA N spectrum corrected with Molecfit (top) and corrected using the telluric standard star (bottom). The order tilts seen in Figure 2 are not corrected for in the Molecfit version; however, when using a telluric standard star, the order tilts are removed during the telluric correction. Wavelength regions where the telluric lines are particularly deep have been removed. The Br\(\alpha\) and \({}^{12}\)CO v=1-0 lines are labeled. The spectrum corrected with the telluric standard star has higher signal-to-noise, and instrumental effects, like the order tilts, are removed.
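For reference, the y-values of Equation 1 and the optically thin LTE slope can be computed as in the sketch below (CGS units assumed); this covers only the straight-line limit, not the full slab model with optical depth used in the fits that follow:

```python
import numpy as np

H_CGS = 6.62607015e-27  # Planck constant [erg s]

def rotation_diagram_y(flux, nu, g_u, a_ul):
    """y = ln(4*pi*F_ul / (h*nu_ul*g_u*A_ul)) per line (Equation 1);
    fluxes in erg s^-1 cm^-2, frequencies in Hz."""
    return np.log(4.0 * np.pi * flux / (H_CGS * nu * g_u * a_ul))

def fit_tkin(e_up_K, y):
    """Optically thin LTE: a straight line with slope -1/T_kin."""
    slope, intercept = np.polyfit(e_up_K, y, 1)
    return -1.0 / slope, intercept
```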
The rotation diagram comparing the v=1-0 and v=2-1 broad and narrow components in S CrA N is presented in Figure 6. By being able to distinguish these two velocity components, the rotation diagram can then be used to determine the conditions probed by these two different velocity components (which correspond to different radii in the disk, see Section 4.1). For each component, we fit the rotational diagram of the v=1-0 lines with a single-slab model in LTE, following the fit procedure of Banzatti et al. (2012), after flux calibrating the spectrum as discussed in Section 2.2. The model depends on only three parameters, namely the kinetic temperature of the gas \(T_{\rm kin}\), the column density \(N_{\rm col}\), and the emitting area \(A\). For each model, the line fluxes are calculated and compared to the measured line fluxes by calculating the \(\chi^{2}\). The best-fit parameters are then found by a \(\chi^{2}\) minimization.
The temperature and column density of the best-fit model for the BC lines are \(T_{\rm kin,BC}=1830\) K and \(N_{\rm col,BC}=7.2\times 10^{17}\) cm\({}^{-2}\), while for the NC lines they are \(T_{\rm kin,NC}=640\) K and \(N_{\rm col,NC}=1.6\times 10^{17}\) cm\({}^{-2}\). The emitting area is 0.13 au\({}^{2}\) for the broad component and 2.46 au\({}^{2}\) for the narrow component. While these emitting areas are fairly uncertain due to the uncertainty in the absolute flux calibration (see Section 2.2), the relative difference between the velocity components is unaffected by the flux calibration. Therefore, it is clear that the broad component has a much smaller emitting area than the narrow component.
The v=2-1 BC lines are generally well-reproduced, with the exception of the high energy lines at \(E_{\rm up}\gtrsim 10^{4}\) K. These results suggest that the BC emission is close to LTE, both rotationally and vibrationally. This is also seen in the vibrational diagram for \({}^{12}\)CO in S CrA N, done using the broad component of the R(09) line, which is shown in Figure 7. This is not done for the narrow component, as there is no narrow component to the R(09) line in the v=2-1 and v=3-2 transitions. A linear fit to the points results in a vibrational temperature of 2400 K, higher than the rotational temperature of 1830 K, as has been found in Herbig Ae/Be stars (e.g., Brittain et al. 2007; van der Plas et al. 2015; Banzatti et al. 2022) and even in some T Tauri stars (e.g., Bast et al. 2011). This may be evidence of UV pumping, which could then be expected to be present in S CrA N. In the case of the NC emission, the best-fit model is not able to reproduce the v=2-1 rotational diagram well. This suggests that the NC emission is rotationally but not vibrationally thermalized. The gas producing the NC emission could have a lower density, below the critical density of the CO lines and lower than that of the gas traced by the BC.
In previous disk samples observed with high spectral resolution in the M-band, most targets did not have coverage up to the high \(E_{\rm up}\) values that we cover here (e.g., Najita et al. 2003, Blake & Boogert 2004, Brown et al. 2013). Recently, the iSHELL spectrograph (Rayner et al. 2016, 2022) at the NASA Infrared Telescope Facility (IRTF) has made it possible to get complete coverage of the v=1-0 lines up to \(E_{\rm up}\sim\)10,000 K, thanks to its coverage of the entire M-band (from 4.52 to 5.24 \(\mu\)m) in one single shot. This has allowed for the most complete coverage of the rotational diagram to-date, revealing the full shape of the curvatures associated with different gas conditions (Banzatti et al., 2022).

Figure 5: The stacked \({}^{12}\)CO, \({}^{13}\)CO, and H\({}_{2}\)O line profiles for S CrA N (top) and S CrA S (bottom). Blank portions correspond to masking due to telluric lines in the case of the first two panels and the masking of another feature in the case of the H\({}_{2}\)O line profile in S CrA N.
As CRIRES+ does not cover the entire M-band in a single spectral setting, it is desirable to determine which filters cover enough CO lines for a robust analysis while maximizing observing efficiency. This is shown in Figure 8, where the v=1-0 rotational diagram is plotted in the right panel. The upper energy coverage of filters M4211 and M4368 is indicated, in comparison with one filter from oCRIRES, which had a shorter simultaneous wavelength coverage. This demonstrates that even with just two settings it is possible to sufficiently cover the v=1-0 rotational diagram up to high energy levels (\(E_{\rm up}\sim 8000\) K). M4318 covers a similar range of CO lines as the M4368 filter; however, M4368 also covers the Br\(\alpha\) line at 4.05 \(\mu\)m, which offers an isolated line for determining the simultaneous accretion rate, and H\({}_{2}\)O lines at 5 \(\mu\)m.
## 4 Discussion
### CO emitting radii
With sufficient spectral resolution, Keplerian broadening of lines can be observed and used to determine the emitting radius of the gas. Under the assumption of Keplerian rotation, the emitting radius \(R\) depends on the line width \(\Delta\)v through
\[R=M_{*}G\left(\frac{\sin i}{\Delta\mathrm{v}}\right)^{2} \tag{2}\]
where \(M_{*}\) is the mass of the central star and \(i\) is the inclination of the disk. Additionally, as in S CrA N, if multiple velocity components are observed in the lines, an emitting radius can be determined for each component. We determine the CO emitting radius for the broad and narrow velocity components in S CrA N and S using inner disk inclinations from VLTI-Gravity (Gravity Collaboration et al., 2021).
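A short sketch of Equation 2 with astropy units is given below; interpreting \(\Delta\)v as half of the measured FWHM and the inclination value used are assumptions of this sketch, not values from VLTI-Gravity:

```python
import numpy as np
import astropy.units as u
import astropy.constants as const

def keplerian_radius(m_star, incl_deg, fwhm):
    """Emitting radius from Equation 2, R = G M_* (sin i / dv)^2, taking
    dv as the half width at half maximum of the line."""
    dv = 0.5 * fwhm
    r = const.G * m_star * (np.sin(np.radians(incl_deg)) / dv) ** 2
    return r.to(u.au)

# S CrA N broad component (FWHM = 72 km/s); the 30 deg inclination is
# a placeholder, which gives roughly 0.1 au as quoted in the text.
print(keplerian_radius(0.7 * u.M_sun, 30.0, 72 * u.km / u.s))
```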
The CO radii for the broad and narrow components are presented in Figure 9. The emitting radii can be put into context using information on the disk structure from other methods, for instance from near-infrared interferometry or from the dust sublimation radius, estimated given the stellar luminosity. For S CrA N and S, the broad component is coming from inside the dust disk rim, as determined from VLTI-Gravity observations (Gravity Collaboration et al., 2021) and as found in other highly accreting disks (Figure 12 in Banzatti et al., 2022). The narrow component is coming from farther out, beyond \(\sim\)1 au in both disks.
The two velocity components seen in S CrA N and S are not uncommon in protoplanetary disks (Banzatti et al., 2022). Having
Figure 6: Rotational diagrams of \({}^{12}\)CO v=1-0 (left) and \({}^{12}\)CO v=2-1 (right) for S CrA N. The broad and narrow components are shown in red and blue, respectively. The data are shown as the points and the best-fit model is shown in the lines. The model is fit for \({}^{12}\)CO v=1-0 and predicted for the v=2-1 transitions. The two model lines correspond to the \(P\)- and \(R\)-branch lines.
Figure 7: Vibrational diagram of the \({}^{12}\)CO broad component using the R(09) line in S CrA N. The slope indicates a vibrational temperature of 2400 K.
high spectral resolution means that not only can the gas properties be studied separately for these components, but it also provides a map of the inner disk structure. Paired with information on the dust distribution (e.g., from interferometry or a determination of the dust sublimation radius), a more complete picture of the inner 10 au of protoplanetary disks can be accessed. This is useful information on its own; however, it can also be a crucial complement to low spectral resolution data, like that from JWST-MIRI, where the inferred emitting area depends on the model and on whether the gas is optically thin or optically thick, and the actual emitting radius is unconstrained. In this way, high spectral resolution spectroscopy is a uniquely powerful tool.
### Lessons learned for JWST observations
The high spectral resolution and large spectral coverage of these CRIRES+ observations allow for a detailed characterization of the CO gas in the S CrA disks. The spectral resolution allows for the extraction of multiple velocity components, which come from different regions in the disk, and the large coverage of the CO lines allows for access to the full CO ro-vibrational ladder for each velocity component. It is useful then to consider how this information can be used and what it would mean if these observational features were not available. This is particularly useful to consider as JWST is providing great insight into disk chemistry, but lacks the spectral resolution to distinguish many line blends, let alone distinguish different velocity components in a single feature. Additionally, JWST-MIRI MRS observations lack full coverage of the whole CO ladder. In Figure 8 we show what the \({}^{12}\)CO rotational diagram looks like with CRIRES+ and simulate what this rotational diagram would look like given the spectral resolution and coverage of JWST-MIRI MRS. To do this, we bin the CRIRES+ spectrum to the resolution of MIRI at 5 \(\mu\)m (thereby losing the ability to distinguish between velocity components), take only the spectrum long-ward of the MIRI MRS wavelength limit of 4.9 \(\mu\)m, and re-extract the fluxes for the covered CO lines. The curvature of the rotational diagram around upper energy levels of \(\sim\)4000 K is lost, resulting in non-convergence of the models. A simple straight-line fit to the rotational diagram, assuming optically thin emission, results in a very high temperature model (between \(\sim\)3000 K and 10000 K), despite the fact that the broad component, which dominates the flux, is actually better fit by a lower temperature and higher column density when the full energy level coverage is considered in the fit.
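A minimal sketch of this degradation step is given below, assuming a uniform wavelength grid and taking \(R\sim 3500\) as a rough stand-in for the MIRI MRS resolving power near 5 \(\mu\)m (the exact value is wavelength-dependent).

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def degrade_resolution(wave, flux, r_in=80000.0, r_out=3500.0):
    """Smooth a spectrum from resolving power r_in down to r_out.

    Assumes `wave` is a uniform grid; the broadening kernel FWHM is the
    quadrature difference of the two instrumental profiles.
    """
    lam0 = np.mean(wave)
    fwhm = lam0 * np.sqrt(1.0 / r_out**2 - 1.0 / r_in**2)
    sigma_pix = fwhm / (2.355 * np.median(np.diff(wave)))  # FWHM -> sigma, in pixels
    return gaussian_filter1d(flux, sigma_pix)
```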
JWST-NIRSpec observations would provide broader coverage of the CO ladder, including avoiding the gap in ground-based data caused by the strong telluric absorption short-ward of 4.5 \(\mu\)m, which would allow higher \(R\)-branch lines to be detected. However, the lower spectral resolution of JWST-NIRSpec relative to MIRI will make for even more severe line blending. Observations from VLT-CRIRES+ and other high spectral resolution ground-based spectrographs are crucial complements to observations from facilities like JWST.
### Comparison with other sources
Large samples of disks have been studied using CO ro-vibrational emission, including in Herczeg et al. (2011), Brown et al. (2013), and Banzatti et al. (2022). These surveys have covered disks at different evolutionary stages and in different stellar mass regimes. Much has been learned about how the inner disk gas depends on these stellar and evolutionary properties. Brown et al. (2013) find that the CO line profile complexity decreases with increasing evolutionary state due to the disappearance of ice features and foreground absorption. However, those authors find that the underlying emission is quite similar between Class I and Class II disks. No ice absorption is observed in the L- or M-band in either S CrA N or S, but some
Figure 8: Rotational diagrams of the \({}^{12}\)CO v=1-0 transitions. On the left is a simulation of the S CrA N CRIRES+ data with the spectral resolution and wavelength coverage of JWST MIRI-MRS, which does not have the resolution to distinguish between different kinematic components and only covers high-\(J\) lines. Three optically thin models with different temperatures are shown for illustration; the 4700 K model is the best linear-fit result. On the right is what is observed with VLT-CRIRES+. Only detected transitions are shown. The lines correspond to a linear fit to the simulated MIRI data (left) and the best-fit LTE model to the CRIRES+ data (right). In order to constrain the gas temperature and column density, sufficient upper energy level coverage is needed. In the right panel, we indicate the coverage of two M-band filters of CRIRES+ (black) that provide sufficient coverage to characterize the CO gas. The coverage of one of the oCRIRES filters is shown in grey. The upgrades to CRIRES+ result in greater energy level coverage with a single filter.
CO absorption is present in the low-\(J\) lines of S CrA N and in the low- to intermediate-\(J\) lines of S CrA S, indicative of some envelope/cloud absorption (Herczeg et al. 2011). This agrees with analysis of the spectral energy distributions, which determines that these are flat-spectrum objects (Sullivan et al. 2019) and may therefore still be embedded in their natal envelopes.
## 5 Summary and Conclusions
This work presents VLT-CRIRES+ high-resolution (\(R\)=80000) spectroscopy of the binary system S CrA. A full spectral scan in the L- and M-band filters allows for a detailed characterization of the upgraded instrument for studying protoplanetary disks and a unique look into this binary system. For the characterization of the upgraded CRIRES+ instrument, we find:
* Standard telluric star observations, taken close in airmass and time to the target observations, are still ideal for removing both telluric lines and instrumental shapes from the science data.
* The sensitivity of CRIRES+ is \(\sim\)10% better than oCRIRES, determined by comparing previous observations of this binary system to the data presented here.
* We find that with only two CRIRES+ spectral settings, the CO rotational diagram can be sufficiently covered. M4211 and M4368 provide good CO energy level coverage, and M4368 additionally covers the accretion-tracing Br\(\alpha\) line at 4.05 \(\mu\)m and H\({}_{2}\)O lines at 5 \(\mu\)m. Many more lines were covered simultaneously with CRIRES+, compared to the smaller spectral coverage of oCRIRES, which allows for a more detailed analysis of the CO emission lines.
For the S CrA binary system, we find:
* Atomic (hydrogen recombination) and molecular (OH, H\({}_{2}\)O, and CO) lines are observed in the spectrum of S CrA N. In the fainter S CrA S disk, only the \({}^{12}\)CO v=1-0 and hydrogen recombination lines are detected.
* The high spectral resolution of CRIRES+ (in particular with the 0.\({}^{\prime\prime}\)2 slit) allows for the detection of multiple velocity components in the CO lines observed in these disks. A narrow/low velocity component and a broad/high velocity component are detected in both disks.
* Rotation diagrams are important tools for determining the gas temperature and column density, and for identifying optical depth effects and/or excitation processes. For S CrA N, we find that the broad and narrow velocity components of the CO emission come from gas with similar column densities (\(\sim\)1-7\(\times\)10\({}^{17}\) cm\({}^{-2}\)), but different temperatures (1830 K and 640 K for the BC and NC, respectively) and emitting areas (0.13 au\({}^{2}\) and 2.46 au\({}^{2}\), respectively).
* There is a clear separation in the CO emitting radii between the broad component (\(\sim\)0.1 au and \(\sim\)0.03 au in S CrA N and S, respectively) and the narrow component (\(\sim\)3 au and \(\sim\)5 au, respectively). The broad components are coming from within the dust-free inner region, where the inner dust disk radii were determined from VLTI-Gravity observations. The narrow component may be tracing a wind at larger radii, in agreement with the emitting area distinction found in fitting the rotational diagrams.
We are now in a time when several high spectral resolution (\(R\gtrsim\)75000) spectrographs capable of observing in the L- and M-band are online, and the future for such observations is bright, in particular with E-ELT METIS achieving first light in the late 2020s. In the years before METIS, VLT-CRIRES+ is the only telescope-instrument combination that can observe targets in this wavelength range at this resolution at declinations below \(\sim\)-40\({}^{\circ}\). VLT-CRIRES+ therefore has great potential for continuing the work of oCRIRES and producing new, cutting-edge results.
###### Acknowledgements.
We thank Alexis Lavani and the ESO User Support team for their help in reducing this very early CRIRES+ data. This work has been funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 325594231, FOR 2634/2. T. H. and D.S. acknowledge support from the European
Figure 9: The emitting radii of the CO lines (colors) compared to the dust continuum radius determined from VLTI-Gravity (black circle; Gravity Collaboration et al. 2021, values corrected for a different adopted distance). S CrA N is presented in the top panel and S CrA S in the bottom panel. For both stars, the broad component is coming from the inner, dust-free region, while the narrow component is coming from larger disk radii.
Research Council under the Horizon 2020 Framework Program via the ERC Advanced Grant Origins 832428 (PI: Th. Henning).
## Appendix A Full spectra of S CrA N and S
The full L- and M-band spectra for S CrA N are shown in Figures 10 and 11, respectively, and the S CrA S L- and M-band spectra are shown in Figures 12 and 13, respectively. The oCRIRES data for each binary component are also shown for comparison.
|
2309.11254 | Searching for heavy neutral leptons through exotic Higgs decays at the
ILC | In this study we investigate the feasibility of detecting heavy neutral
leptons ($N_d$) through exotic Higgs decays at the proposed International
Linear Collider (ILC), specifically in the channel of $e^+ e^- \to qq~ H$ with
$H\to \nu N_d \to \nu~lW \to \nu l~qq$. Analyses based on full detector
simulations of the ILD are performed at the center-of-mass energy of 250 GeV
for two different beam polarization schemes with a total integrated luminosity
of 2 $\mathrm{ab}^{-1}$. A range of heavy neutral lepton masses between the $Z$
boson and Higgs boson masses are studied. The $2\sigma$ significance reach for
the joint branching ratio of $BR(H\to\nu N_d)\cdot BR(N_d\to lW)$ is about
0.1\%, nearly independent of the heavy neutral lepton masses, while the
$5\sigma$ discovery is possible at a branching ratio of $0.3\%$. Interpreting
these results in terms of constraints on the mixing parameters
$|\varepsilon_{id}|^2$ between SM neutrinos and the heavy neutral lepton, it is
expected to have a factor of 10 improvement from current constraints. | Simon Thor, Masaya Ishino, Junping Tian | 2023-09-20T12:26:46Z | http://arxiv.org/abs/2309.11254v3 | # Searching for dark neutrinos through exotic Higgs decays at the ILC
###### Abstract
In this study we investigate the feasibility of detecting heavy dark neutrinos (\(N_{d}\)) through exotic Higgs decays at the proposed International Linear Collider (ILC), specifically in the channel of \(e^{+}e^{-}\to qq\ H\) with \(H\rightarrow\nu N_{d}\rightarrow\nu\ lW\rightarrow\nu l\ qq\). Analyses based on full detector simulations of the ILD are performed at the center-of-mass energy of 250 GeV for two different beam polarization schemes with a total integrated luminosity of 2 ab\({}^{-1}\). A range of dark neutrino masses between the \(Z\) boson and Higgs boson masses are studied. The \(2\sigma\) significance reach for the joint branching ratio of \(BR(H\rightarrow\nu N_{d})\cdot BR(N_{d}\to lW)\) is about 0.1%, nearly independent of the dark neutrino masses, while the \(5\sigma\) discovery is possible at a branching ratio of 0.3%. Interpreting these results in terms of constraints on the mixing parameters \(|\varepsilon_{id}|^{2}\) between SM neutrinos and the dark neutrino, it is expected to have a factor of 10 improvement from current constraints.
## I Introduction
The discovery of the Higgs boson at the Large Hadron Collider (LHC) [1; 2] marked a monumental milestone in the field of particle physics, confirming the existence of the Higgs field and adding the final puzzle piece to the Standard Model. However, there are still open questions in particle physics that cannot be answered by the Standard Model, including the existence of dark matter, the matter-antimatter asymmetry, and more. Physics beyond the Standard Model (BSM) is therefore a necessity. Despite this, no clear signs of BSM physics have so far been found. However, the Higgs boson, with its unique properties and as the
least understood particle in the Standard Model, holds great potential for being a portal to explore BSM physics. By measuring the Higgs boson properties precisely, it is possible that BSM physics could be discovered [3].
In this paper, we investigate the sensitivity of the International Linear Collider (ILC) [4] in detecting dark neutrinos as an exotic decay product of the Higgs boson, motivated by the model proposed in [5] to explain the matter-antimatter asymmetry problem. Our study leverages full detector simulations of the International Large Detector (ILD) [6]. We consider a range of dark neutrino masses and branching ratios. In the subsequent sections of this paper, we discuss the theoretical framework and models considered, outline the details of the accelerator and the detector, present the methodology employed, present the results of our analysis, and conclude with a summary of our findings and their implications.
### Theoretical framework
In [5], baryogenesis is achieved by a model that adds a dark sector to the SM, where the first-order phase transition as well as CP-violation happen only in the dark sector and the asymmetry is converted to the SM baryon asymmetry by employing a renormalizable neutrino portal Yukawa interaction,
\[\Delta\mathcal{L}_{Y}=-y_{i\alpha}\bar{l}_{i}N_{\alpha}\tilde{H}+c.c., \tag{1}\]
where \(N_{\alpha}\) (\(\alpha=u,d\)) are the two singlet dark neutrinos, \(\tilde{H}=i\sigma_{2}H^{*}\), where \(H\) is the SM Higgs doublet, \(l_{i}\) (\(i=e,\mu,\tau\)) are the SM lepton doublets, and \(y_{i\alpha}\) are the corresponding Yukawa coupling constants. This Yukawa interaction generates a mixing between the SM neutrinos and the dark neutrinos, with corresponding mixing parameters \(\varepsilon_{iu}\) and \(\varepsilon_{id}\), which determine the coupling strength between the dark neutrinos and the SM Higgs, \(W\), and \(Z\) bosons. In our study we focus only on the search for \(N_{d}\), which is expected to have a mass around the electroweak scale and is thus accessible at the ILC, while the mass of \(N_{u}\) is expected to be much higher.
The two free parameters in this model that are relevant for this study are thus the dark neutrino mass \(m_{N}\) (for \(N_{d}\)) and the mixing parameter \(\varepsilon_{id}\). At colliders \(N_{d}\) can be produced directly, as shown in Figure 1 for a representative Feynman diagram at \(e^{+}e^{-}\). This would be the major method to search for the dark neutrino when it is heavy [7].
When its mass is below the \(Z\) or \(W\) mass, it is possible to search for it via \(Z\) or \(W\) decays. When the dark neutrino is just heavier than the \(Z\) but lighter than the Higgs, an interesting method is enabled by searching for the dark neutrino in exotic Higgs decays. This is exactly the range of dark neutrino masses that we focus on in this study. The corresponding decay widths in terms of the two free parameters are given below:
\[\Gamma(H\rightarrow\bar{\nu}_{j}N_{d}) = \left(\frac{|\varepsilon_{jd}|m_{N}}{v}\right)^{2}\beta_{f}(m_{H},m_{N})^{2}\frac{m_{H}}{4\pi}\equiv C_{H}|\varepsilon_{jd}|^{2} \tag{2}\] \[\Gamma(N_{d}\to l_{i}^{-}W^{+}) = \frac{(|\varepsilon_{id}|g)^{2}}{32\pi}\beta_{f}(m_{N},m_{W})^{2}\frac{m_{N}^{3}}{m_{W}^{2}}\left(1+2\left(\frac{m_{W}}{m_{N}}\right)^{2}\right)\equiv C_{W}|\varepsilon_{id}|^{2} \tag{3}\] \[\Gamma(N_{d}\rightarrow\nu_{i}Z) = \frac{(|\varepsilon_{id}|g)^{2}}{64\pi}\beta_{f}(m_{N},m_{Z})^{2}\frac{m_{N}^{3}}{m_{W}^{2}}\left(1+2\left(\frac{m_{Z}}{m_{N}}\right)^{2}\right)\equiv C_{Z}|\varepsilon_{id}|^{2} \tag{4}\]
Here, \(v\) is the vacuum expectation value of 246 GeV, \(\beta_{f}(M,m)=1-(m/M)^{2}\), \(g\) is the electroweak \(SU(2)\) coupling constant, \(m_{W}\) is the \(W\) boson mass, \(m_{Z}\) is the \(Z\) boson mass, and \(i,j\) indicate the lepton flavor. The equations are slightly modified versions of the ones given in [8]. We have defined \(C_{H},C_{W},C_{Z}\) as the product of all terms that depend on the dark neutrino mass rather than on the mixing parameter. The charge-conjugate decay mode \(H\rightarrow\nu_{j}\bar{N}_{d}\) is of course also possible, with the corresponding decays \(\bar{N}_{d}\to l_{i}^{+}W^{-}\) and \(\bar{N}_{d}\rightarrow\bar{\nu}_{i}Z\).
Given the above theoretical basis, we are ready to define our signal process at the ILC. We will take advantage of the leading Higgs production channel (the Higgs-strahlung process) \(e^{+}e^{-}\to ZH\)
Figure 1: Direct production of dark neutrinos through \(e^{+}e^{-}\) collisions. It is also possible to have the same diagram but with swapped SM and dark neutrinos and exchanging
and look for the Higgs exotic decay mode \(H\to\bar{\nu}N_{d}\). We will concentrate on the dominant decay channel where \(Z\to q\bar{q}\) and \(N_{d}\to l^{-}W^{+}\to l\ q\bar{q}\). The charge-conjugate channel is also targeted as part of our signal process. The Feynman diagram of this signal process is shown in Figure 2 (left). The observable will be the event rate of the signal process, which is the product of the \(e^{+}e^{-}\to ZH\) cross section (\(\sigma_{ZH}\)) and the decay branching ratios (\(BR\)) of \(Z\to q\bar{q}\), \(H\to\nu N_{d}\), \(N_{d}\to l^{-}W^{+}\), and \(W\to q\bar{q}\). With \(\sigma_{ZH}\), \(BR(Z\to q\bar{q})\), and \(BR(W\to q\bar{q})\) precisely measured in other processes at the ILC, the observable here essentially becomes a joint branching ratio of the \(H\) and \(N_{d}\) decays, \(BR(H\to\bar{\nu}N_{d})\cdot BR(N_{d}\to l^{-}W^{+})\), which can be computed as a function of the two free parameters \(m_{N}\) and \(\varepsilon_{id}\) as follows (using Equations 2, 3, and 4):
\[\sum_{j}BR(H\to\bar{\nu}_{j}N_{d})\cdot BR(N_{d}\to l_{i}^{-}W^{+})\] \[=\frac{\sum_{j}\Gamma(H\to\bar{\nu}_{j}N_{d})}{\Gamma_{SM}+\sum_{ j}\Gamma(H\to\bar{\nu}_{j}N_{d})}\frac{\Gamma(N_{d}\to l_{i}^{-}W^{+})}{ \sum_{k}(\Gamma(N_{d}\to l_{k}^{-}W^{+})+\Gamma(N_{d}\to\nu_{k}Z))}\] \[=\frac{\sum_{j}|\varepsilon_{jd}|^{2}C_{H}}{\Gamma_{SM}+\sum_{j}| \varepsilon_{jd}|^{2}C_{H}}\frac{C_{W}|\varepsilon_{id}|^{2}}{\sum_{j}(| \varepsilon_{jd}|^{2}C_{W}+|\varepsilon_{jd}|^{2}C_{Z})}=\frac{C_{H}}{\Gamma_ {SM}+\sum_{j}|\varepsilon_{jd}|^{2}C_{H}}\frac{C_{W}|\varepsilon_{id}|^{2}}{C_ {W}+C_{Z}}\] \[\approx\frac{C_{H}}{\Gamma_{SM}}\frac{C_{W}}{C_{W}+C_{Z}}| \varepsilon_{id}|^{2}, \tag{5}\]
where the approximation in the last step holds when the dark neutrino contribution to the Higgs decay width is far smaller than the Higgs SM decay width \(\Gamma_{SM}\). Equation 5 shows that the observable defined above for the channel with \(l_{i}\) is proportional to the
Figure 2: Feynman diagrams for the signal (left) and the main background (right) in this study.
corresponding mixing parameter squared, \(|\varepsilon_{id}|^{2}\). More explicitly, when the charged lepton in the final state is an \(e\), the observable depends simply on \(\varepsilon_{ed}\) instead of on a combination of \(\varepsilon_{ed}\), \(\varepsilon_{\mu d}\), and \(\varepsilon_{\tau d}\). Numerically, for our mass range of interest \(m_{Z}<m_{N}<m_{H}\), \(\frac{C_{W}}{C_{W}+C_{Z}}\) is \(O(1)\) (\(>75\%\), closer to 1 for smaller \(m_{N}\)), and \(\frac{C_{H}}{\Gamma_{SM}}\) is \(O(10)\).
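To make these numbers concrete, the sketch below evaluates \(C_{H}\), \(C_{W}\), \(C_{Z}\) and the joint branching ratio of Equation 5 for single-flavor mixing; the SM Higgs width \(\Gamma_{SM}\approx 4.1\) MeV and \(g\approx 0.65\) are assumed inputs, not values quoted in this paper.

```python
import numpy as np

v, g = 246.0, 0.65                 # vev [GeV] and SU(2) coupling (assumed)
mH, mW, mZ = 125.0, 80.4, 91.187   # boson masses [GeV]
GAMMA_SM = 4.1e-3                  # SM Higgs width [GeV] (assumed)

def beta(M, m):
    return 1.0 - (m / M) ** 2

def joint_br(mN, eps2):
    """BR(H -> nu Nd) * BR(Nd -> lW) from Eqs. (2)-(5), single-flavor mixing."""
    C_H = (mN / v) ** 2 * beta(mH, mN) ** 2 * mH / (4 * np.pi)
    C_W = g**2 / (32 * np.pi) * beta(mN, mW) ** 2 * mN**3 / mW**2 \
        * (1 + 2 * (mW / mN) ** 2)
    C_Z = g**2 / (64 * np.pi) * beta(mN, mZ) ** 2 * mN**3 / mW**2 \
        * (1 + 2 * (mZ / mN) ** 2)
    return C_H * eps2 / (GAMMA_SM + C_H * eps2) * C_W / (C_W + C_Z)

print(joint_br(100.0, 1e-4))  # ~0.5% for these inputs
```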
#### ii.1.1 Current constraints
It is worth summarizing here the current constraints on the two free parameters of the above model. First of all, as given in [5], based on tests of lepton universality [9], \(|\varepsilon_{id}|^{2}\) is typically constrained to \(|\varepsilon_{id}|^{2}\lesssim 10^{-3}\). This constraint is independent of the dark neutrino mass. In the mass range below \(m_{Z}\), the constraint from \(Z\) decays comes from DELPHI at LEP1 [10]. Newer searches by the ATLAS and CMS collaborations have instead investigated decays \(W\to lN_{d}\), which put similar or slightly stronger constraints on the mixing parameters than the DELPHI search [11; 12].
For short-lived heavy neutral leptons with \(m(N_{d})\) between 5 and 50 GeV, the constraint can be as strong as \(\varepsilon\lesssim 2\times 10^{-5}\), while it is weaker for \(m_{N}>50\) GeV. The constraints from searches for heavy neutral leptons at the LHC experiments [13; 14] can also be cast into the parameters defined here; notably, in the mass range well below \(m_{W}\) (\(\lesssim 10\) GeV), the limit on \(|\varepsilon_{id}|^{2}\) can be as strong as \(10^{-6}\) if the heavy neutrino is long-lived and therefore has a displaced vertex.
For the mass range of interest in this study, there is no stronger limit yet from the current LHC experiments. According to [8], the measurement of the Higgs total width at the LHC can set an indirect limit, which is very weak at the moment; a stronger limit might be possible using the Higgs exotic decay with a leptonic final state, \(H\to\nu N_{d}\to ll\nu\nu\), based on DELPHES fast detector simulation.
It is interesting to note another method which can give an indirect limit. At parton level, the SM Higgs decay to \(WW^{*}\) can give exactly the same final states as the Higgs exotic decay to \(\nu N_{d}\), as illustrated in Figure 2 (right). Thus the measurement of the \(H\to WW^{*}\) branching ratio can provide a constraint on \(BR(H\to\nu N_{d})\) if the event selection efficiency for \(H\to\nu N_{d}\) events is not zero in the \(H\to WW^{*}\) analysis. The current measurement of the branching ratio is \(BR(H\to WW^{*})=25.7\pm 2.5\%\) [15]. Assuming that all \(H\to\nu N_{d}\) decays contribute to the \(H\to WW^{*}\) decay channel, the \(2\sigma\) limit on the branching ratio of \(H\to\nu N_{d}\) is 5%.
This constraint on the branching ratio is overly optimistic but is nevertheless included as a comparison.
All of the above mentioned constraints are shown in the results section and compared to the ILC constraints.
## II Simulation Framework
### International Linear Collider
The International Linear Collider (ILC) is a proposed future linear \(e^{+}e^{-}\) collider. One of the main goals of the collider is to be a Higgs factory, i.e., to produce Higgs bosons and perform precision measurements of the Higgs boson's properties. The hope is that some of these properties will deviate from SM predictions and will therefore be a hint of BSM physics, as explained earlier. The center-of-mass energy is planned to be 250 GeV at the start, with the possibility of extending the accelerator and thus increasing the collision energy in later stages. At 250 GeV, the cross section for the Higgs-strahlung process, i.e., \(e^{+}e^{-}\to ZH\) (see the Feynman diagrams in Figure 2), reaches its maximum value, making this a suitable center-of-mass energy for measurements of the Higgs. The latest overviews of the ILC can be found in the ILC documents for Snowmass 2022 [3] and the European Strategy Update for HEP 2020 [16]. Our study assumes a total integrated luminosity of 2 ab\({}^{-1}\) at the ILC at \(\sqrt{s}=250\) GeV, equally shared between two beam polarization schemes: \(P(e^{-},e^{+})=(-0.8,+0.3)\) and \(P(e^{-},e^{+})=(+0.8,-0.3)\) [17].
### The ILD concept
Our full detector simulation is based on the ILD which is one of the two proposed detectors for the ILC. It has a hybrid tracking system and highly granular calorimeters optimized for Particle Flow reconstruction. It has been developed to be optimized for precision measurements of the Higgs boson, the weak gauge bosons and the top-quark, as well as for direct searches of new particles [6]. The subdetectors relevant for this study are briefly described below.
The vertexing system consists of three double layers of silicon pixel detectors and is located closest to the interaction point. Each layer has a spatial resolution of 3 \(\mu\)m and
a timing resolution per layer of 2-4 \(\mu\)s or potentially lower. The hybrid tracking system consists of both a silicon strip detector and a time projection chamber (TPC). The silicon detectors are placed before and after the TPC. The whole tracking system is located outside the vertexing system. The TPC allows for up to 220 three-dimensional points and a spatial resolution lower than 100 \(\mu\)m for each hit and can also identify particle types from the \(dE/dx\) energy loss. The silicon strips enable further improvements to the track momentum resolution. With this setup, charged particle momenta can be measured down to \(\frac{\delta p_{t}}{p_{t}^{2}}=2\times 10^{-5}\) GeV\({}^{-1}\).
The electromagnetic calorimeter (ECAL), located outside the tracking system, is a sampling calorimeter made of silicon and tungsten with finely segmented pads of (\(5\times 5\)) mm\({}^{2}\). A shower can be sampled up to 30 times, with a timing resolution of 100 ps or better. The hadronic calorimeter is located outside the ECAL and is scintillator-based by default, consisting of (\(3\times 3\)) cm\({}^{2}\) tiles. High-performance, high-resolution calorimeters are crucial for particle flow, as every particle in each jet is separated and reconstructed.
Outside of the calorimeters there is a superconducting solenoid with a magnetic field of 3.5 T. An iron return yoke located outside the coil works as a muon identification system. The muon detector is scintillator-based and is mostly located in the inner half of the iron yoke.
### Software
This study utilizes the software package iLCSoft v02-02 [18] to conduct simulations and reconstructions. The parameters of the incoming beams are simulated using the GUINEA-PIG package [19; 20]. The beam spectrum, including beamstrahlung and initial state radiation (ISR), is taken into account. In line with the current ILC design, the beam crossing angle of 14 mrad is taken into consideration. For the generation of Monte Carlo (MC) samples of the signal and SM background events, the WHIZARD 2.8.5 event generator [21; 22] is employed. The signal events were generated by employing the UFO model developed in [7], with 6 different values of the dark neutrino mass, \(m_{N}=95,100,105,110,115,120\) GeV. The parton shower and hadronization model is adopted from PYTHIA 6.4 [23]. To simulate the detector response, the generated events are passed through the ILD simulation [6] (model version ILD_l5_v02) implemented with the DD4hep [24; 25] software package,
which is based on Geant4 [26; 27; 28]. Event reconstruction is performed using the Marlin [29] framework. The PandoraPFA [30] algorithm is specifically employed for calorimeter clustering and the analysis of track and calorimeter information, following the particle flow approach.
The samples used for the background are all SM processes (excluding ones with the Higgs boson) where two, four, and six fermions are produced. Additionally, processes where two quarks and one Higgs boson are produced (almost exclusively \(e^{+}e^{-}\to ZH\to q\bar{q}H\)) are included as background, including all possible SM decay modes of the Higgs. Each background can also be separated into leptonic (only leptons in the final state), semileptonic (leptons and hadrons in the final state) or hadronic (only hadrons in the final state) decay channels.
The three-fermion final state, in which the incoming electron or positron interacts with a photon, was also investigated as a background. However, all simulated electron-photon samples were rejected by the applied cuts (described below) and are therefore not included in the background. The cross sections for the background processes are given in Table 1.
The analysis for applying cuts was done with the ROOT C++/Python framework 6.28 [32] and Jupyter notebooks [33]. The machine learning model was developed with TMVA
\begin{table}
\begin{tabular}{l|l|r|r} Process & Abbreviation & Cross section \((-0.8,+0.3)\) [fb] & Cross section \((+0.8,-0.3)\) [fb] \\ \hline
2 fermion leptonic & 2f\_l & 13 000 & 10 300 \\
2 fermion hadronic & 2f\_h & 77 300 & 45 700 \\
4 fermion leptonic & 4f\_l & 10 400 & 6 110 \\
4 fermion semileptonic & 4f\_sl & 19 200 & 2 840 \\
4 fermion hadronic & 4f\_h & 16 800 & 1 570 \\
6 fermion & 6f & 1.28 & 0.26 \\
\(e^{+}e^{-}\to q\bar{q}H\) & qqh & 208 & 140 \\
Signal (BR = 1\%) & Signal & 1.396 & 0.941 \\
\end{tabular}
\end{table}
Table 1: Cross sections of the various background processes [31]. For example, β2 fermion hadronicβ means that there are two hadronic fermions produced in the final state, i.e., all SM processes (excluding processes involving the Higgs) of \(e^{+}e^{-}\to q\bar{q}\). The signal cross section shows the case when the branching ratio is 1%.
[34].
## III Analysis
For each polarization scheme and each dark neutrino mass, the event selection and cuts applied to reduce the background were done in three stages: pre-selection, rectangular cuts, and a machine learning cut, explained in detail in the following subsections. As a preview of the analysis procedure, a cut flow table of all the cuts applied and the number of events that pass each cut, separated into the different background categories, is shown in Table 2. The table is an example of the cuts for a dark neutrino mass of 100 GeV, a joint branching ratio of 1%, a beam polarization of \((+0.8,-0.3)\), and an integrated luminosity of 1 ab\({}^{-1}\).
\begin{table}
\begin{tabular}{l|r|r|r|r|r|r|r|r|r|r} Cut & Signal & Background & \(\sigma\) & 2f\_l & 2f\_h & 4f\_l & 4f\_sl & 4f\_h & 6f & qqh \\ \hline
No cuts & 941 & 66651497 & 0.12 & 10314870 & 45672588 & 6114301 & 2839022 & 1570051 & 260 & 140405 \\
Pre-selection & 831 & 12565351 & 0.23 & 5696748 & 979693 & 4109167 & 1739683 & 22431 & 194 & 17434 \\
cut 1 & 769 & 1287215 & 0.68 & 70332 & 146740 & 897907 & 149918 & 15416 & 142 & 6759 \\
cut 2 & 722 & 1025729 & 0.71 & 61382 & 49161 & 785129 & 120467 & 4506 & 132 & 4952 \\
cut 3 & 708 & 434591 & 1.07 & 44787 & 22077 & 293992 & 67433 & 2031 & 74 & 4197 \\
cut 4 & 665 & 24666 & 4.18 & 399 & 4093 & 1176 & 13462 & 1687 & 72 & 3777 \\
cut 5 & 583 & 6919 & 6.73 & 0 & 1151 & 0 & 1234 & 1384 & 55 & 3094 \\
cut 6 & 574 & 4487 & 8.07 & 0 & 544 & 0 & 666 & 648 & 19 & 2611 \\
MVA cut & 434 & 1162 & 10.87 & 0 & 52 & 0 & 26 & 79 & 6 & 999 \\
\end{tabular}
\end{table}
Table 2: Cut flow table for a dark neutrino mass of 100 GeV, a joint branching ratio of 1%, beam polarization of \((+0.8,-0.3)\) and an integrated luminosity of 1 ab\({}^{-1}\). The numbered cuts represent rectangular cuts, which are explained in more detail below. The column named \(\sigma\) is the signal significance in units of standard deviation \(\sigma\).
### Pre-selection
The final state of signal events consists of one charged lepton, one neutrino and four jets. The pre-selection is applied to reconstruct the basic information of the leptons and jets, as well as to properly pair them into \(W\), \(Z\), \(N_{d}\) and Higgs in each event, based on the signal characteristics. The pre-selection will supply the necessary information for the next cuts. The procedure of pre-selection is briefly explained here.
Isolated leptons were first identified in each event using a pre-trained neural network via the IsolatedLeptonTagging algorithm in iLCSoft [29]. In this algorithm, the isolated leptons are required to have a momentum of at least 5 GeV. The neural network gives a numerical output for each particle of the event, usually between 0 and 1 (though it can give a higher value, even up to 2), with a higher value meaning that a particle is more likely to be an isolated lepton. If the particle is a muon, the isolated lepton finder output is required to be greater than 0.5, whereas if it is an electron, it is required to be greater than 0.2. Only a loose cut is applied on this parameter in the pre-selection, so as not to remove signal and background events too early. Each event is then required to have at least one isolated lepton according to these criteria. For the signal, typically only one lepton fulfills this requirement, but there are occasionally (\(<1\%\)) 2 or more leptons; in that case, the highest-energy lepton is chosen as the isolated lepton.
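A minimal sketch of this selection logic is given below; the particle attribute names (`momentum`, `nn_output`, etc.) are illustrative stand-ins, not the actual iLCSoft data model.

```python
def select_isolated_lepton(particles):
    """Return the isolated-lepton candidate, or None to reject the event.

    Attribute names are hypothetical placeholders, not the iLCSoft API.
    """
    candidates = [
        p for p in particles
        if p.momentum >= 5.0                            # GeV, tagger requirement
        and ((p.is_muon and p.nn_output > 0.5)          # muon working point
             or (p.is_electron and p.nn_output > 0.2))  # electron working point
    ]
    if not candidates:
        return None                                     # event fails pre-selection
    # If two or more leptons pass (<1% of signal events), keep the most energetic.
    return max(candidates, key=lambda p: p.energy)
```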
The remaining particles in the event are clustered into four jets using the Durham jet clustering algorithm [35]. If this fails, the event is also rejected. The four jets are then paired with each other and classified as \(W\) jets or \(Z\) jets based on which pairing minimizes
\[\chi^{2}=\left(\frac{m_{W}-m_{12,jet}}{\Delta m_{W,jet}}\right)^{2}+\left( \frac{m_{Z}-m_{34,jet}}{\Delta m_{Z,jet}}\right)^{2}. \tag{6}\]
Here, \(m_{W}=80.4\) GeV and \(m_{Z}=91.187\) GeV are the W and Z boson masses, and \(m_{12,jet},\ m_{34,jet}\) are the masses reconstructed from a given pairing of the jets and their 4-momenta. \(\Delta m_{W,jet}=5.3\) GeV and \(\Delta m_{Z,jet}=6.8\) GeV represent the mass resolutions of the \(W\) and \(Z\) bosons, calculated by using MC truth information to identify which of the reconstructed jets originate from the \(W\) or \(Z\) [36]. The mass distributions are shown in Figure 3 for \(e^{+}e^{-}\to ZH\to q\bar{q}\ WW^{*}\to q\bar{q}\ q\bar{q}\ l\nu\) events. Since this process is the dominant background, has the same final state, and has very similar kinematics to the dark neutrino signal, the \(W\) and \(Z\) mass resolutions are similar to those of signal events.
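A sketch of this pairing step is shown below, assuming 4-vector objects that support addition and expose a `mass` attribute (as provided by, e.g., the `vector` package); it illustrates Equation 6 rather than reproducing the actual analysis code.

```python
from itertools import combinations

M_W, M_Z = 80.4, 91.187   # GeV
DM_W, DM_Z = 5.3, 6.8     # mass resolutions entering Eq. (6), GeV

def pair_jets(jets):
    """Pick the W/Z assignment of four jets that minimizes the chi^2 of Eq. (6)."""
    best = None
    for w_idx in combinations(range(4), 2):      # 6 possible W-jet pairs
        z_idx = tuple(k for k in range(4) if k not in w_idx)
        m_w = (jets[w_idx[0]] + jets[w_idx[1]]).mass
        m_z = (jets[z_idx[0]] + jets[z_idx[1]]).mass
        chi2 = ((M_W - m_w) / DM_W) ** 2 + ((M_Z - m_z) / DM_Z) ** 2
        if best is None or chi2 < best[0]:
            best = (chi2, w_idx, z_idx)
    return best  # (chi2, W-jet indices, Z-jet indices)
```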
After the jet clustering and jet pairing, the \(W\) and the charged lepton are grouped to form the reconstructed \(N_{d}\). The missing 4-momentum, calculated as the 4-momentum of the initial state minus the total 4-momentum of all reconstructed visible particles in the final state, is taken as the 4-momentum of the \(\nu\). The Higgs is then reconstructed as the sum of the 4-momenta of \(N_{d}\) and \(\nu\). After the pre-selection, most of the hadronic background events are rejected, as they do not have an isolated lepton, and large portions of the leptonic backgrounds are also reduced, as they fail the 4-jet reconstruction; details are shown in Table 2.
### Rectangular cuts
For the rectangular cuts, various observables were identified by comparing their distributions between signal and background events. Cut values were optimized to maximize the final signal significance after the rectangular cuts and the machine learning cut.
The first cut applied was a combination of missing energy and lepton energy. This was mainly to reduce the leptonic and semileptonic backgrounds, which have high energy leptons or neutrinos. As an example, the 2D distributions of the lepton energy (\(E_{lep}\)) and missing energy (\(E_{mis}\)) for the 4f_sl background and signal are shown in Figure 4. The plot (and all subsequent plots in this section) are for a beam polarization of \((+0.8,-0.3)\), dark neutrino
Figure 3: Reconstructed mass distributions of the W and Z bosons, by pairing jets using MC truth information. The smooth lines are curve fits to the histograms.
mass of 100 GeV, and a branching ratio of 1% for the signal. The cut (cut 1) was set to \(E_{lep}/50\,\mathrm{GeV}+E_{mis}/100\,\mathrm{GeV}<1\). The results of all cuts shown in the figures below are the ones listed in Table 2.
The second cut applied was on the isolated lepton finder output, which is required to be greater than 0.6 (cut 2). This cut is tighter than the loose cut applied by default in the isolated lepton finder algorithm in the pre-selection. It mainly further suppresses the hadronic backgrounds (2f_h and 4f_h), in which a lepton from a jet was mistagged as an isolated lepton. The distributions for the signal and background, as well as the cut value, are shown in Figure 5. Note that the events shown are only those passing the previous cut(s); this applies to all the upcoming similar plots in this section.
Next, the 4-jet combined invariant mass is required to be between 160 and 220 GeV (cut 3), which reduces both the hadronic (in the high-mass region) and the semi-leptonic/leptonic (in the low-mass region) backgrounds. The distributions of the 4-jet invariant mass are shown in Figure 6 for the signal and the total background events.
The fourth cut applied is on the Durham jet distance \(y_{4\to 3}\), calculated as
\[y_{4\to 3}=\min_{i,j}\left\{\frac{2\min\{E_{i},E_{j}\}^{2}(1-\cos\theta_{ij})}{ E_{vis}^{2}}\right\}, \tag{7}\]
Figure 4: 2D distributions of the signal and background for isolated lepton energy (\(x\)-axis) and missing energy (\(y\)-axis) for a beam polarization of \((+0.8,-0.3)\) and a dark neutrino mass of 100 GeV. The background distribution shown is the 4f_sl background, where the red line shows the cut that is applied. The number of events shown for the signal is for a branching ratio of 1% but the distribution does not change for other branching ratios.
where \(E_{i}\) is the energy of jet \(i\), \(\theta_{ij}\) is the angle between jets \(i\) and \(j\), and \(E_{vis}\) is the total energy of the four jets. This jet distance is used in the jet clustering: the pair of jets that gives the smallest jet distance is combined, hence the use of min in the equation above. This clustering is performed repeatedly until there are only four jets left, and \(y_{4\to 3}\) is the minimum jet distance at this stage. Requiring \(y_{4\to 3}>0.004\) (cut 4) therefore helps filter out semi-leptonic and leptonic background events, as shown in Figure 7.
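For reference, a direct implementation of Equation 7 for four jets given as \((E,p_{x},p_{y},p_{z})\) tuples; this is a sketch of the textbook definition, not the clustering code used in the analysis.

```python
import numpy as np

def y_4_to_3(jets):
    """Durham distance y_{4->3} of Eq. (7) for four (E, px, py, pz) jets."""
    e_vis = sum(j[0] for j in jets)
    y_min = np.inf
    for i in range(len(jets)):
        for j in range(i + 1, len(jets)):
            e_i, e_j = jets[i][0], jets[j][0]
            p_i, p_j = np.array(jets[i][1:]), np.array(jets[j][1:])
            cos_ij = p_i @ p_j / (np.linalg.norm(p_i) * np.linalg.norm(p_j))
            y_min = min(y_min, 2 * min(e_i, e_j) ** 2 * (1 - cos_ij) / e_vis**2)
    return y_min
```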
The fifth cut applied is that any jet in the event must have at least four particles (cut 5). This cut further reduces the semi-leptonic and leptonic backgrounds, as shown in Figure 8.
The final rectangular cut is on the missing momentum. This is highly dependent on the dark neutrino mass and a different cut value is therefore used for each mass point. The missing momentum for each mass point is shown in Figure 9 (left). Both lower and upper bound cuts are used to reduce all types of backgrounds. The signal and background distributions can be seen in Figure 9, right plot. In this case, for a dark neutrino mass of 100 GeV, the missing momentum is required to be between 10 and 45 GeV (cut 6).
As shown in Table 2, after all the rectangular cuts the background is dominated by the
Figure 5: The distributions of the isolated lepton finder output for the signal and background, where different colors indicate different background processes. The same beam polarization and dark neutrino mass is shown as in Figure 4. The distributions are properly normalized according to the corresponding cross sections. The red vertical line shows the cut value and the grey region is the region that is rejected. A logarithmic scale is used on the \(y\)-axis.
irreducible \(qqh\) background, followed by the remaining 4-fermion hadronic and semi-leptonic backgrounds. The signal-over-background ratio (\(S/B\)) is already improved by 4 orders of magnitude, from \(1/10^{5}\) at the beginning to \(1/10\). The same cut variables are used for both beam polarizations, but the cut values are optimized separately, as the background and signal differ between the two beam polarizations. Generally, the cuts for \((+0.8,-0.3)\) are looser, as the background is lower. The two cuts that are highly dependent on the dark neutrino mass, specifically the first and last rectangular cuts, were optimized and tuned for each mass point.
Figure 6: The distributions of 4-jet invariant mass for the signal and background. The figure format is the same as in Figure 5.
Figure 7: The distributions of the jet distance \(y_{4\to 3}\) for the signal and background. The figure format is the same as in Figure 5.
### Machine learning
The signal and background events that passed the rectangular cuts were further filtered through a boosted decision tree (BDT) using the TMVA framework [34]. One BDT was trained for each mass point and beam polarization. In total, 13 input parameters were passed to the BDT. The input parameters are listed below.
Figure 8: The distributions of the smallest number of particles in the jets of each event, for the signal and background. The figure format is the same as in Figure 5. The histogram bins are centered at each integer.
Figure 9: The distributions of the missing momentum. The left plot shows each dark neutrino mass. The right plot shows the signal and background for a dark neutrino mass of 100 GeV in the same format as Figure 5.
* The lepton and missing energies
* 4-jet combined momentum
* The angle between the lepton and the closest jet
* \(\cos\theta_{l},\cos\theta_{\nu},\cos\theta_{Z},\cos\theta_{N_{d}}\), where \(\theta\) is the production angle (in the lab frame) of the particle indicated by the subscript. The particles are the isolated lepton, reconstructed neutrino, Z boson and dark neutrino, respectively
* The cosine of the lepton helicity angle in the dark neutrino rest frame
* The reconstructed Higgs, \(Z\) boson, and \(W\) boson masses
* The corrected reconstructed dark neutrino mass
The formula for the corrected reconstructed dark neutrino mass is \(m(N_{d})-m_{W}+m_{W_{0}}\), where \(m_{W}\) is the reconstructed \(W\) boson mass and \(m_{W_{0}}=80.4\) GeV is the true central value of the \(W\) boson mass. This corrected dark neutrino mass has a better resolution than the one directly reconstructed from the lepton and the two jets from the \(W\), since the largest uncertainty, which comes from the jet reconstruction, can mostly cancel out. Distributions of some example input parameters for the signal and background are shown in Figure 10; the rest of the input parameters to the BDT can be found in the appendix. One example of the BDT output distributions is shown in Figure 11, where it is validated that the BDT was not overfitted: the BDT output distribution is nearly identical for the training (dots) and test (filled) events. The Kolmogorov-Smirnov test also gives a high value (above 0.05), which quantitatively shows that the distributions are similar to each other. A final cut is applied to the BDT output, which further improves the \(S/B\) by a factor of 3, as shown in Table 2; after the final cut, the background contribution is completely dominated by the \(qqh\) process.
It is worth pointing out that, by comparing the truth and reconstructed information, we found that the jet clustering and the jet pairing used to distinguish jets from the \(Z\) and jets from the \(W\) are not perfect. Instead, jet constituents can originate from both the \(Z\) and \(W\) bosons, which results in a bias in the jet momentum. This is discussed in further detail in the appendix.
## IV Results
The analysis above, illustrated mostly for one benchmark scenario, is carried out for each of the six values of the dark neutrino mass from 95 GeV to 120 GeV, each of the two beam polarization schemes, and each of the two lepton channels (\(e\) and \(\mu\)). The final BDT cut that maximizes the signal significance also depends on the value of the joint branching ratio, as shown in Figure 12 for a range of branching ratios from 0.01% to 10%. The final significance for each branching ratio and dark neutrino mass is then calculated by combining the contributions from the two beam polarization schemes and the two lepton channels. This combination is calculated
Figure 10: Distributions of a few input parameters to the BDT for the signal and background events, in the same format as, e.g., Figure 5, except that no grey region is indicated. As in the previous figures, a dark neutrino mass of 100 GeV and a beam polarization of \((+0.8,-0.3)\) are shown. From top left to bottom right, first row first: missing energy, cosine of the lepton production angle in the lab frame, the Higgs mass, and the corrected dark neutrino mass.
as
\[\sigma_{final}=\sqrt{\sum_{i}\sigma_{i}^{2}} \tag{8}\]
where \(i\) iterates over the four \((2\cdot 2)\) possible combinations of lepton channels and beam polarizations.
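In code, the combination is a simple quadrature sum; the per-channel significances below are placeholders, not the values obtained in this analysis.

```python
import numpy as np

# Placeholder per-channel significances: (e, mu) x two beam polarizations.
sigmas = np.array([3.1, 2.4, 5.0, 4.2])
sigma_final = np.sqrt(np.sum(sigmas**2))  # Eq. (8)
print(f"Combined significance: {sigma_final:.1f} sigma")  # 7.6 sigma here
```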
To better visualize the remaining signal and background events, we separately trained a BDT without the corrected dark neutrino mass as an input variable, applied a cut on that BDT output, and then plotted the distributions of the corrected dark neutrino mass, as shown in Figure 13. If the branching ratio is as large as 1%, a sharp resonance from the dark neutrino signal events is clearly visible on top of the background events.
We are now ready to give our model-independent final result in terms of the signal significance for the dark neutrino search at the ILC250, as a function of the dark neutrino mass and the joint branching ratio \(BR(H\rightarrow\bar{\nu}N_{d})\cdot BR(N_{d}\to lW)\). The result is shown in Figure 14, where the red line indicates the exclusion limit at \(2\sigma\) significance and the green line indicates the discovery potential at \(5\sigma\) significance. We can see in Figure 14 that the significance is almost identical regardless of the dark neutrino mass, with a slight decrease close to
Figure 11: The distributions of BDT output for a BDT trained on the signal and background events when the dark neutrino mass is 100 GeV and the beam polarization is \((+0.8,-0.3)\). The filled histograms show the signal and background events for the test data, while the dotted histograms show the training data.
100-105 GeV (see Figure 13). This means that it is more difficult for the BDT to discriminate the signal from the background, since the mass peaks overlap. As a consequence, the significance is highest at 120 GeV, which is farthest from the background mass peak. In conclusion, the exclusion limit (discovery potential) for the joint branching
Figure 12: Examples of significance curves as a function of the BDT cut value for different values of the branching ratio. All plots are for a dark neutrino mass of 95 GeV, with the top and bottom rows corresponding to \((-0.8,+0.3)\) and \((+0.8,-0.3)\) beam polarization, respectively. The left (right) column of plots shows the electron (muon) channel. Red dots indicate the locations of maximum significance for different branching ratios, where the corresponding final cut is applied. The significance curves for other dark neutrino masses and beam polarizations typically have a similar shape.
Figure 13: Examples of mass distributions after all cuts, with a branching ratio of 1%. The final BDT cut does not use the neutrino mass as input for this figure. The order of the plots is the same as in Figure 12. The backgrounds are categorized based on whether a process results in 2 leptons (2f_l), 2 quarks (2f_h), 4 leptons (4f_l), 2 leptons and 2 quarks (4f_sl), 4 quarks (4f_h), 6 fermions (6f), or 2 quarks and a Higgs boson (qqh). The signal is marked in blue.
ratio is about 0.1% (0.3%), nearly independent of the dark neutrino mass within the mass range between \(m_{Z}\) and \(m_{H}\). For one study of the High-Luminosity LHC (HL-LHC) with the same final state as the one investigated here, it was found that a branching ratio of \(BR(H\to N_{d}\nu)=1\%\) would give a 0.4\(\sigma\) signal significance [37]. In comparison, for this study we expect the signal significance to be around 10\(\sigma\) (since \(BR(N_{d}\to lW)\approx 80\%\)), an improvement in significance by a factor of 25 for the ILC compared to the HL-LHC.
The model-independent results can be cast into constraints on the two free model parameters, the dark neutrino mass \(m_{N}\) and the mixing parameter \(|\varepsilon_{id}|^{2}\), as introduced in Section I, using
Figure 14: Exclusion plot as a function of branching ratio \(BR(H\rightarrow\nu N_{d})BR(N_{d}\to lW)\) (\(x\) axis) and dark neutrino mass (\(y\) axis). The color indicates the significance of detecting this channel. The red curve indicates where the significance is 2\(\sigma\), i.e., the exclusion limit, whereas the green line shows the 5\(\sigma\) limit, i.e., the limit for discovery. Every parameter value to the right of the red (green) line can be excluded (discovered). Note that a log scale is used on the \(x\) and \(z\) axis. The significance is interpolated in between the tested mass points and branching ratios and may not be completely accurate.
Figure 15: The exclusion curves for the mixing angle between the dark neutrino and the electron neutrino (left plot) and the muon neutrino (right plot). The \(x\)-axis is the dark neutrino mass while the \(y\)-axis is the squared amplitude of the mixing angle. Different constraints have different colors, with the black line showing searches by DELPHI at LEP, the orange line showing the ATLAS trilepton search, the blue line corresponding to the CMS trilepton search, the green region corresponding to measurements of the SM neutrino mixing angles (i.e., electroweak precision data), the purple region showing the results from [8], and the gray line showing the region where the \(BR(H\to\nu N_{d})>0.05\). The red region shows the new results from this study using simulated ILC data. Note that the \(y\)-axis is a logarithmic scale. The contours are linearly interpolated between each simulated mass point (for this study, it is 95 - 120 GeV, 5 GeV apart).
Equation 5. Note that in this interpretation, the branching ratio \(BR(H\to\nu\bar{N}_{d}+\bar{\nu}N_{d})=2BR(H\to\nu\bar{N}_{d})\) is used, i.e., a factor of 2 multiplies the branching ratio in Equation 5. The limits are calculated separately for the electron and muon channels. The constraints from this study, together with the constraints imposed by the previous studies mentioned above, are shown in Figure 15. The exclusion limits from current constraints are taken from [38; 39; 11]. We can see that an exclusion limit on \(|\varepsilon_{id}|^{2}\) down to \(10^{-4}\) can be reached at the ILC250 in the dark neutrino mass region between the \(Z\) mass and the Higgs mass, which is about 1 order of magnitude improvement over the current constraint.
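The translation from a branching-ratio limit to a mixing-parameter limit can be sketched as follows, using the small-mixing approximation of Equation 5 together with the factor of 2 for the charge-conjugate mode; \(C_{H}\), \(C_{W}\), and \(C_{Z}\) can be computed as in the earlier sketch, and \(\Gamma_{SM}\approx 4.1\) MeV is again an assumed input.

```python
def eps2_limit(br_limit, C_H, C_W, C_Z, gamma_sm=4.1e-3):
    """Invert 2*BR ~ 2*(C_H/Gamma_SM)*(C_W/(C_W+C_Z))*|eps|^2 for |eps|^2.

    Valid in the small-mixing regime, where the Nd contribution to the
    Higgs width is negligible; gamma_sm is an assumed SM Higgs width [GeV].
    """
    return br_limit / (2.0 * (C_H / gamma_sm) * (C_W / (C_W + C_Z)))
```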
### Discussion for potential improvement
As mentioned earlier, one source of error in this analysis is the jet clustering and jet pairing. This was evident, e.g., in the distribution of the lepton helicity angle in the dark neutrino rest frame (see Figure 16). Improving this by using better jet clustering algorithms and/or a more sophisticated method for jet pairing could improve the measurements of the jet 4-momenta, which in turn would improve the precision of the \(W\) and \(Z\) boson reconstruction, and consequently of the dark neutrino and Higgs boson reconstruction. It is therefore potentially of great help to improve jet clustering algorithms for future experiments and analyses that focus on precision measurements.
Another potential improvement to this study concerns the correction of the dark neutrino mass. Currently, a simple correction was used, subtracting the reconstructed \(W\) boson mass and adding a constant, which evidently worked well for this study, since the \(W\) boson 4-momentum measurement was the main source of error. However, one could also employ techniques such as kinematic fitting to correct errors in the \(W\) boson 4-momentum more accurately, which could improve the dark neutrino mass resolution even further. This is also closely related to the jet clustering error mentioned above.
To further improve the sensitivity to the dark neutrino, other decay channels can be investigated. This includes the \(\tau\) decay channel (i.e., \(H\to\nu_{\tau}N_{d}\to\nu_{\tau}\tau q\bar{q}\)) as well as the two other \(W\) and \(Z\) boson decay channels. However, the other two decay channels are likely to provide little sensitivity. The semi-leptonic decay, i.e., a leptonic decay of the \(W\) or \(Z\) boson and a hadronic decay of the other, would likely be difficult to discriminate from the background. This is because the final state contains two neutrinos, which makes it difficult to reconstruct the \(W\) and/or \(Z\) boson and thus the dark neutrino and Higgs. Additionally, there are two jets in the final state, which has a large background. The fully leptonic decay, with three neutrinos and three leptons, will likely have a very small background; however, reconstructing the dark neutrino or any of the \(W\), \(Z\), and \(H\) bosons will be almost impossible. Still, this decay channel was more sensitive than the hadronic decay channel at the LHC [8], though this does not mean that the same will hold for the ILC, since the background characteristics are different. It is therefore entirely possible that this and other decay channels would make sizable contributions to the significance.
## V Summary
The sensitivity of the ILC for detecting heavy dark neutrinos through exotic Higgs decays was investigated. The \(e^{+}e^{-}\to ZH\to q\bar{q}\ \nu N_{d}\to q\bar{q}\ \nu\ lW\to q\bar{q}\ \nu l\ q\bar{q}\) channel was studied, to our knowledge for the first time based on full detector simulations. The analysis was performed at a 250 GeV center-of-mass energy for two different beam polarization schemes, and for six dark neutrino masses between \(m_{Z}\) and \(m_{H}\). The SM backgrounds with 2-, 4-, and 6-fermion final states, as well as \(q\bar{q}H\) processes, were all considered in the analysis. Events were filtered through a pre-selection, a set of rectangular cuts, and finally a machine learning cut. The cuts were optimized separately for all combinations of beam polarizations and dark neutrino masses, though the pre-selection was the same for all of them. The background contribution turns out to be very important, and only after all the dedicated event selection cuts could the signal-over-background ratio be improved from \(O(1/10^{5})\) to \(O(1)\). For all simulated masses, the dominant background after all cuts was the SM Higgs decay process \(H\to WW^{*}\to q\bar{q}l\nu\). The final significance achieved was around \(2\sigma\) for a branching ratio of \(BR(H\to\nu N_{d})\cdot BR(N_{d}\to lW)=0.1\%\), while \(5\sigma\) is reachable at a branching ratio of \(0.3\%\). The significance is almost unchanged for different masses. For a branching ratio of \(BR(H\to\nu N_{d})=1\%\), the significance is roughly \(10\sigma\), which is 25 times greater than the expected performance of the HL-LHC. Interpreting these results for dark neutrino models results in constraints on the mixing parameter \(|\varepsilon_{id}|^{2}\) between the SM neutrinos and the dark neutrino down to the level of \(10^{-4}\), which is a factor of 10 improvement over previous constraints.
## Acknowledgement
We would like to thank Teresa Nunez, Alberto Ruiz and Kiyotomo Kawagoe for constructive discussions and suggestions during the ILD internal review process. We also would like to thank Hitoshi Murayama and Robert McGehee for providing helpful theory guidance, the ILD Monte Carlo Team, in particular Hiroaki Ono and Ryo Yonamine, for producing the common SM background samples, Krzysztof Mekala and Jurgen Reuter for pointing out the proper UFO model file for Whizard to generate signal events, and Kay Hidaka, Daniel Jeans, Shigeki Matsumoto, Satoshi Shirai, and Taikan Suehara for discussions at various local meetings. This research was supported by the Sweden-Japan foundation and "Insamlingsstiftelsen för
internationellt studentutbyte vid KTH".
|
2309.06728 | Leveraging Foundation models for Unsupervised Audio-Visual Segmentation | Audio-Visual Segmentation (AVS) aims to precisely outline audible objects in
a visual scene at the pixel level. Existing AVS methods require fine-grained
annotations of audio-mask pairs in supervised learning fashion. This limits
their scalability since it is time consuming and tedious to acquire such
cross-modality pixel level labels. To overcome this obstacle, in this work we
introduce unsupervised audio-visual segmentation with no need for task-specific
data annotations and model training. For tackling this newly proposed problem,
we formulate a novel Cross-Modality Semantic Filtering (CMSF) approach to
accurately associate the underlying audio-mask pairs by leveraging the
off-the-shelf multi-modal foundation models (e.g., detection [1], open-world
segmentation [2] and multi-modal alignment [3]). Guiding the proposal
generation by either audio or visual cues, we design two training-free
variants: AT-GDINO-SAM and OWOD-BIND. Extensive experiments on the AVS-Bench
dataset show that our unsupervised approach can perform well in comparison to
prior art supervised counterparts across complex scenarios with multiple
auditory objects. Particularly, in situations where existing supervised AVS
methods struggle with overlapping foreground objects, our models still excel in
accurately segmenting overlapped auditory objects. Our code will be publicly
released. | Swapnil Bhosale, Haosen Yang, Diptesh Kanojia, Xiatian Zhu | 2023-09-13T05:05:47Z | http://arxiv.org/abs/2309.06728v1 | # Leveraging Foundation Models for Unsupervised Audio-Visual Segmentation
###### Abstract
Audio-Visual Segmentation (AVS) aims to precisely outline audible objects in a visual scene at the pixel level. Existing AVS methods require fine-grained annotations of audio-mask pairs in a supervised learning fashion. This limits their scalability, since it is time-consuming and tedious to acquire such cross-modality pixel-level labels. To overcome this obstacle, in this work we introduce _unsupervised audio-visual segmentation_ with no need for task-specific data annotations and model training. For tackling this newly proposed problem, we formulate a novel _Cross-Modality Semantic Filtering_ (CMSF) approach to accurately associate the underlying audio-mask pairs by leveraging off-the-shelf multi-modal foundation models (e.g., detection [1], open-world segmentation [2] and multi-modal alignment [3]). Guiding the proposal generation by either audio or visual cues, we design two training-free variants: AT-GDINO-SAM and OWOD-BIND. Extensive experiments on the AVS-Bench dataset show that our unsupervised approach can perform well in comparison to prior-art supervised counterparts across complex scenarios with multiple auditory objects. Particularly, in situations where existing supervised AVS methods struggle with overlapping foreground objects, our models still excel in accurately segmenting overlapped auditory objects. Our code will be publicly released.
Swapnil Bhosale, Haosen Yang, Diptesh Kanojia, Xiatian Zhu
University of Surrey, UK
_Keywords:_ Audio-Visual Segmentation, Foundation Models, Cross-Modal, Unsupervised, Open World
## 1 Introduction
In the realm of audio-visual segmentation (AVS), the challenge lies in delineating sounding objects that align with audio cues present within video sequences. Historically, the intersection of audio-visual signals has been studied through the lens of self-supervised learning [4, 5]. However, this methodology exhibits constraints, especially in real-world applications that necessitate precise segmentations, such as video surveillance, integrated video editing, and advanced robotics. A recent study [6] approached the AVS problem using supervised learning and developed a manually annotated video dataset, emphasizing pixel-level segmentations of audio-responsive objects. Yet, extensions of this method [7, 8, 9, 10, 11, 12, 13] are not without their shortcomings: (1) substantial manual audio-mask annotation overheads, (2) a dataset spanning a narrow range of categories, and (3) intensive training requisites.
As the landscape of foundational frameworks within a specific domain expands, as exemplified by works like [2, 14, 3], a compelling question arises: can their accumulated insights be coalesced to enhance the AVS task? To answer this, we present _Cross-Modality Semantic Filtering (CMSF)_, an advanced cascade of foundational models that employs _an untrained_ approach. This approach harmoniously integrates knowledge from diverse pre-training paradigms, setting a new benchmark for a resilient and adaptive AVS methodology. In particular, we propose two mechanisms under CMSF that bootstrap foundation models for individual modalities and related tasks such as segmentation, audio tagging and grounding. In each mechanism we alternate between the audio and visual modalities, wherein one is exploited to generate proposal masks while the other is used as guidance to condition/filter the segmentation mask or bounding-box proposals. As shown in Fig. 1, our first mechanism, AT-GDINO-SAM, utilizes audio cues to generate audio tags and eventually proposal segmentation masks. Along the same lines, we also propose OWOD-BIND, which utilizes visual cues to generate proposal bounding boxes that are further filtered based on their cosine similarity with the audio counterpart, both projected into the shared latent space learned by the ImageBIND model [3].
### Supervised AVS approaches
Current methods for addressing the AVS task involve training a fully supervised model responsible for identifying all pixels associated with audible visual elements within a scene. The current AVS benchmark, TPAVI [6], takes a different approach by employing multiple attention fusion modules, establishing a direct link between the encoder and decoder to learn the alignment between audio and visual components. Another approach, AV-SAM [15], makes use of pixel-level audio-visual fusion from SAM's pre-trained image encoder. The amalgamated cross-modal features are then directed into SAM's prompt encoder and mask decoder, generating audio-visual segmentation masks.
We contend that the supervised training paradigm for AVS necessitates resource-intensive annotations. Moreover, the effectiveness of existing AVS systems, especially when confronted with multiple audible objects, is limited by scene diversity as well as by the quantity and quality of annotated audio masks. To address these challenges, we introduce an alternative approach that obviates the requirement for audio-mask pairs by capitalizing on _an untrained_ methodology that incorporates insights from established foundational models.
## 2 Methodology
Given a video sequence \(\{I_{t}\}_{t=1}^{T}\in\mathcal{R}^{H\times W}\) with \(T\) non-overlapping continuous image frames, where \(H\) and \(W\) denote the height and width of each frame, along with an audio sequence \(\{A_{t}\}_{t=1}^{T}\) with the same temporal resolution as \(I\), the goal of an AVS system is to generate segmentation masks \(\{G_{t}\}\in\mathcal{R}^{H\times W}\). The masks are interpreted as pixel-wise labels which highlight the sounding object in \(I_{t}\) that is the most probable source of the sound in \(A_{t}\). In contrast to existing AVS approaches, which utilize a supervised training pipeline incorporating pre-annotated audio-mask pairs, our work is motivated towards _an untrained_ pipeline for AVS which exploits publicly available models pre-trained on foundational tasks, namely audio tagging, object detection and segmentation. These models have been trained in the open-set setting, with the latter two supporting zero-shot inference, and are hence aligned with our use case.
### Pre-trained Models
**Audio Tagging (AT):** An Audio Tagging (AT) model assigns descriptive tags to audio clips, representing features like musical instruments, environmental sounds, and more. Current research in AT has shifted from hand-crafted to end-to-end models, evolving from CNNs [16] to convolution-free attention models like the Audio Spectrogram Transformer (AST) [1]. We adopt AST to generate audio tags for the AVS task, utilizing its capacity to capture global context in audio sequences with multiple events. Trained on Audioset [14], AST detects diverse polyphonic audio events across 521 categories like animal sounds, instruments, and human activity.
**Open World Object Detector (OWOD):** Traditional object detection treats unfamiliar objects as background, while open-world detection allows models to handle unknown objects during training and inference [17, 18]. [19] introduced the open-world detector (ORE) with 80 COCO [20] classes split into 4 tasks, adding 20 classes to known categories in each. [21] extended this using class-agnostic proposals from a Multiscale Attention ViT [22] with late fusion (MAVL). In our work, we use this model to generate class-agnostic object proposals that are further processed by the segmentation network.
**Segment-Anything Model (SAM):** The Segment-Anything model [2], trained on a vast dataset of 1 billion masks, is renowned for its prompt-based zero-shot segmentation capabilities as well as its versatility as a foundation model to merge cross-modal representations using multimodal prompts [23]. SAM's core includes a prompt encoder capable of handling varied prompt types like points, bounding boxes, and text. We extend SAM's segmentation capabilities by leveraging visual prompts from OWOD to generate pixel-level masks.
### Cross-Modality Semantic Filtering (CMSF)
The core of the AVS task is associating audio content with the spatial arrangement of objects. This involves processing cross-modal input cues, acoustic and visual, wherein one is used to generate proposals while the other is used to filter/condition the available proposals.
**Generating proposals using audio cues: AT-GDINO-SAM** Each audio sequence \(A_{t}\) is padded to a maximum of 960 ms and processed by a pre-trained AST model, yielding audio tags \(\{AT_{t}\}_{i=1}^{C_{a}}\), which are ranked based on their probability scores across the 521 generic classes from Audioset. Relevant audio tags are filtered using an empirically determined threshold \(\tau_{AT}\) and forwarded to a pre-trained GroundingDINO model [14], generating bounding boxes in image frame \(I_{t}\). These boxes serve as visual prompts for SAM [2], producing binary masks (see Fig. 1).
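To make the cascade concrete, a minimal sketch is given below; `tag_audio`, `ground_boxes` and `segment_boxes` are hypothetical wrappers around the pre-trained AST, GroundingDINO and SAM models (our naming, not those libraries' APIs), and \(\tau_{AT}\) matches the value reported in Section 3:

```python
import numpy as np

def at_gdino_sam(frame, audio, tag_audio, ground_boxes, segment_boxes,
                 tau_at=0.5):
    """frame: HxWx3 image; audio: waveform padded to at most 960 ms."""
    tags, scores = tag_audio(audio)                  # AST over 521 classes
    kept = [t for t, s in zip(tags, scores) if s >= tau_at]
    if not kept:
        return np.zeros(frame.shape[:2], dtype=bool)
    boxes = ground_boxes(frame, " . ".join(kept))    # text-grounded boxes
    masks = segment_boxes(frame, boxes)              # SAM with box prompts
    if len(masks) == 0:
        return np.zeros(frame.shape[:2], dtype=bool)
    return np.any(np.stack(masks), axis=0)           # union of binary masks
```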
**Generating proposals using visual cues: OWOD-BIND** Given an image frame \(I_{t}\), we use OWOD to generate \(C_{v}\) bounding-box proposals \(\{BB_{i}\}_{i=1}^{C_{v}}\). These are filtered by objectness score [21] using a threshold \(\tau_{BB}\). To link boxes and acoustic cues, both modalities need to be embedded in a shared latent space with common semantics. Towards this end, we utilize ImageBIND's [3] latents and extract image and audio embeddings, which are used to rank the proposals by cosine similarity with the audio embedding. Bounding boxes scoring above \(\tau_{BIND}\) form the final mask.
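The box filtering can be sketched as follows; `detect`, `embed_image`, `embed_audio` and `crop` are hypothetical wrappers around the MAVL detector and ImageBIND (the two thresholds match Section 3; everything else is our naming):

```python
import torch
import torch.nn.functional as F

def owod_bind(frame, audio, detect, embed_image, embed_audio, crop,
              tau_bb=0.5, tau_bind=0.7):
    boxes, objectness = detect(frame)              # class-agnostic proposals
    boxes = boxes[objectness >= tau_bb]            # objectness filter
    if len(boxes) == 0:
        return boxes
    a = F.normalize(embed_audio(audio), dim=-1)    # shared-space audio vector
    v = torch.stack([embed_image(crop(frame, b)) for b in boxes])
    sims = F.normalize(v, dim=-1) @ a              # cosine similarity per box
    return boxes[sims >= tau_bind]                 # boxes forming the final mask
```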
As an alternative approach, instead of relying on OWOD to generate bounding-box proposals, we can position single-point input prompts in a grid across the image. From each point, SAM can predict multiple masks. These masks are refined and filtered for quality, employing non-maximal suppression (NMS) to remove duplicates. We term this approach SAM-BIND.
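For SAM-BIND's proposal stage, the grid-prompted masks can be obtained with the public `segment_anything` package's automatic mask generator (the checkpoint path below is a placeholder, and `frame` is assumed to be an HxWx3 uint8 array):

```python
from segment_anything import SamAutomaticMaskGenerator, sam_model_registry

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")   # path assumed
generator = SamAutomaticMaskGenerator(sam, box_nms_thresh=0.5)  # NMS IoU = 0.5
records = generator.generate(frame)        # grid of single-point prompts
proposals = [r["segmentation"] for r in records]   # binary HxW masks
# Each proposal is then scored against the audio embedding exactly as in the
# OWOD-BIND sketch above (ImageBIND cosine similarity with threshold tau_BIND).
```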
## 3 Experiments and Results
**Dataset:** To assess our proposed methods, we use the recently released AVSBench dataset [6]. AVSBench contains videos from YouTube, each split into 5 distinct clips. Each clip is paired with a spatial segmentation mask indicating the audible object. AVSBench includes two subsets, differing in the number of audible objects: the semi-supervised Single Source Segmentation (S4) and the fully supervised Multiple Sound Source Segmentation (MS3).
**Metrics:** Motivated towards an unsupervised AVS approach, we evaluate our proposed pipelines on the S4 and MS3 test splits without using any audio-mask pairs from the train/validation splits. Following existing supervised methods, we use average Intersection over Union (\(M_{IOU}\)) and \(F_{score}\) as metrics. A higher \(M_{IOU}\) implies better region similarity, and an elevated \(F_{score}\) indicates improved contour accuracy.
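For reference, the two per-frame metrics can be sketched as below; the \(F_{score}\) here uses the \(\beta^{2}=0.3\) weighting common in segmentation work, which is our assumption rather than a detail stated above:

```python
import numpy as np

def miou_fscore(pred, gt, beta2=0.3, eps=1e-8):
    """pred, gt: binary HxW masks; returns (IoU, F-score)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    iou = inter / (np.logical_or(pred, gt).sum() + eps)
    precision = inter / (pred.sum() + eps)
    recall = inter / (gt.sum() + eps)
    f = (1 + beta2) * precision * recall / (beta2 * precision + recall + eps)
    return iou, f
```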
**Implementation Details:** We resize all image frames to 224 \(\times\) 224. For all our evaluation results we set \(\tau_{AT}\) to 0.5 for AT-GDINO-SAM, and \(\tau_{BB}\) and \(\tau_{BIND}\) to 0.5 and 0.7, respectively, for OWOD-BIND. In SAM-BIND, we use an IoU threshold of 0.5 for NMS.
### Quantitative Comparison
We present our primary AVSBench results on the test set for both S4 and MS3 in Table 1. We can observe that both OWOD-BIND and SAM-BIND outperform AT-GDINO-SAM in terms of both \(M_{IOU}\) and \(F_{score}\) by a significant margin. It should be noted that AST achieves a mean average precision (mAP) of only 0.485 on the Audioset dataset, and hence the generated audio tags are highly prone to errors. Additionally, we believe that although AST's training data, i.e., Audioset, follows a generic ontology, many rare events such as "lawn mower", "tabla", etc. are under-represented, and the model is hence unable to cope with open-set inference. Between OWOD-BIND and SAM-BIND, we can observe that OWOD-BIND achieves an absolute improvement of over 0.06 and 0.08 in terms of \(M_{IOU}\) and \(F_{score}\), respectively, under the MS3 setting. A similar trend is observed when comparing both approaches under the S4 setting, with a 0.16 improvement in both \(M_{IOU}\) and \(F_{score}\).
**Comparison with supervised AVS approaches:** We propose _an untrained_ paradigm without any use of audio-mask pairs for fine-tuning existing foundation models. We fall short of the AVS-Benchmark by 0.20 in terms of both \(M_{IOU}\) and \(F_{score}\) when evaluating under the MS3 setting, where the latter benefits from its effective audio-visual contrastive loss design and trainable parameters. Upon careful inspection of the generated masks, we observe that beyond the lack of trainable parameters, the proposed approach performs poorly primarily for the following reasons: (a) the inability of SAM to perform localization as precise as that depicted in the annotated ground-truth masks. In Fig. 2(a), OWOD-BIND is able to delineate the piano from other objects {human, portrait, etc.}; however, AVSBench requires solely the keys of the piano to be highlighted. (b) SAM performs over-segmentation, particularly when the objects of interest are zoomed in. Fig. 2(b) shows that SAM tends to segment the white keys apart from the black keys, while AVSBench expects the entire piano, including all the keys, to be highlighted under one single object mask. (c) OWOD-BIND performs segmentation of isolated frames without capturing the temporal aspect across consecutive frames and hence lacks propagated activity detection. In Fig. 2(c), the input audio \(A_{t}\) consists of human speech; however, OWOD-BIND fails to distinguish between the speaking and non-speaking humans, since ImageBIND associates each detected person with the human speech present in \(A_{t}\).
| Approach | S4 \(M_{IOU}\) | S4 \(F_{score}\) | MS3 \(M_{IOU}\) | MS3 \(F_{score}\) |
| --- | --- | --- | --- | --- |
| **AVSBench** [6] | 0.78 | 0.87 | 0.54 | 0.64 |
| **AV-SAM** [15] | 0.40 | 0.56 | - | - |
| AT-GDINO-SAM | 0.38 | 0.46 | 0.25 | 0.29 |
| SAM-BIND | 0.42 | 0.51 | 0.28 | 0.36 |
| OWOD-BIND | **0.58** | **0.67** | **0.34** | **0.44** |

Table 1: Performance comparison of our proposed _untrained_ pipelines on the AVSBench test split. **Bolded rows at the top**: methods using an explicit audio-visual mechanism, i.e., supervised learning.
Figure 1: **Overview of CMSF**: Bounding box proposals are generated using a cascade of two pre-trained models: Audio Tagging and GroundingDINO. Favourable boxes are passed as input to SAM to yield segmentation masks.
### Qualitative Comparison
We demonstrate the qualitative comparison of our proposed approach with the AVS-Benchmark in terms of the quality of the generated masks. We observe that although OWOD-BIND yields a lower average \(M_{IOU}\) and \(F_{score}\) than the AVS-Benchmark, it still recovers finer details, especially when segmenting non-box-like objects such as humans. In Fig. 3, we highlight that despite lacking an explicit audio-visual fusion mechanism, our proposed approach is able to alternate among multiple sounding objects in the foreground {_human, piano_}. It can be observed that the OWOD foundation model helps generate continuous masks despite the presence of multiple granular background objects. The AVS-Benchmark partially localizes the sounding object but generates discontinuous masks, troubled by the presence of spatially overlapping foreground objects {_dog, baby_}.
## 4 Conclusion
In this study, we approach Audio-Visual Segmentation (AVS) unconventionally, using unsupervised learning and insights from foundational multimodal models. We present a novel Cross-Modality Semantic Filtering paradigm that calls for a training-free approach to AVS. This marks the first AVS exploration without pre-annotated audio-mask pairs. Our approach achieves accurate segmentation of overlapping masks without explicit audio-visual fusion. We validate the efficacy of open-world foundation models in precisely distinguishing auditory elements within visual contexts, when compared to existing supervised methods. Future work involves integrating temporal context and mitigating the SAM model's over-segmentation via stricter audio guidance, all while maintaining an unsupervised framework.
Figure 3: Qualitative comparisons with AVSBench and ground-truth segmentation masks. OWOD-BIND produces more precise segmentation of overlapping objects without explicit audio-visual fusion.
Figure 2: Unsuccessful outcomes: (a) Imprecise localization (b) Over-segmentation (c) Improper activity detection. |
2308.16612 | Neural Gradient Regularizer | Owing to its significant success, the prior imposed on gradient maps has
consistently been a subject of great interest in the field of image processing.
Total variation (TV), one of the most representative regularizers, is known for
its ability to capture the intrinsic sparsity prior underlying gradient maps.
Nonetheless, TV and its variants often underestimate the gradient maps, leading
to the weakening of edges and details whose gradients should not be zero in the
original image (i.e., image structures is not describable by sparse priors of
gradient maps). Recently, total deep variation (TDV) has been introduced,
assuming the sparsity of feature maps, which provides a flexible regularization
learned from large-scale datasets for a specific task. However, TDV requires to
retrain the network with image/task variations, limiting its versatility. To
alleviate this issue, in this paper, we propose a neural gradient regularizer
(NGR) that expresses the gradient map as the output of a neural network. Unlike
existing methods, NGR does not rely on any subjective sparsity or other prior
assumptions on image gradient maps, thereby avoiding the underestimation of
gradient maps. NGR is applicable to various image types and different image
processing tasks, functioning in a zero-shot learning fashion, making it a
versatile and plug-and-play regularizer. Extensive experimental results
demonstrate the superior performance of NGR over state-of-the-art counterparts
for a range of different tasks, further validating its effectiveness and
versatility. | Shuang Xu, Yifan Wang, Zixiang Zhao, Jiangjun Peng, Xiangyong Cao, Deyu Meng, Yulun Zhang, Radu Timofte, Luc Van Gool | 2023-08-31T10:19:23Z | http://arxiv.org/abs/2308.16612v2 | # Neural Gradient Regularizer
###### Abstract
Owing to its significant success, the prior imposed on gradient maps has consistently been a subject of great interest in the field of image processing. Total variation (TV), one of the most representative regularizers, is known for its ability to capture the intrinsic sparsity prior underlying gradient maps. Nonetheless, TV and its variants often underestimate the gradient maps, leading to the weakening of edges and details whose gradients should not be zero in the original image (i.e., image structures are not describable by sparse priors on gradient maps). Recently, total deep variation (TDV) has been introduced, assuming the sparsity of feature maps, which provides a flexible regularization learned from large-scale datasets for a specific task. However, TDV requires retraining the network when the image type or task varies, limiting its versatility. To alleviate this issue, in this paper, we propose a neural gradient regularizer (NGR) that expresses the gradient map as the output of a neural network. Unlike existing methods, NGR does not rely on any subjective sparsity or other prior assumptions on image gradient maps, thereby avoiding the underestimation of gradient maps. NGR is applicable to various image types and different image processing tasks, functioning in a zero-shot learning fashion, making it a versatile and plug-and-play regularizer. Extensive experimental results demonstrate the superior performance of NGR over state-of-the-art counterparts for a range of different tasks, further validating its effectiveness and versatility.
Deep image prior, unsupervised deep learning, low-level vision, total variation
## I Introduction
An image restoration task generally aims at recovering a high-quality image \(\mathbf{\mathcal{X}}\) from its corrupted observation \(\mathbf{\mathcal{Y}}\), by solving the following optimization problem [1]:
\[\min_{\mathbf{\mathcal{X}}\in\mathbb{R}^{H\times W\times C}}L(\mathbf{\mathcal{X}}, \mathbf{\mathcal{Y}})+\lambda R(\mathbf{\mathcal{X}}), \tag{1}\]
where \(H\), \(W\) and \(C\) denote the height, width and the number of channels, respectively. \(L(\mathbf{\mathcal{X}},\mathbf{\mathcal{Y}})\) denotes the data fidelity term measuring the difference between \(\mathbf{\mathcal{X}}\) and \(\mathbf{\mathcal{Y}}\), \(R(\mathbf{\mathcal{X}})\) denotes the regularizer term encoding the prior knowledge imposed on the recovered image, and \(\lambda\) is a hyper-parameter. Over the past few decades, remarkable progress has been made in data prior modeling [2, 3, 4], with image gradients being a focal point of the image restoration research field [5, 6, 7, 8].
Based on the observation that adjacent pixel values vary smoothly, total variation (TV) [9] was devised to characterize this phenomenon. It is defined as \(\mathrm{TV}(\mathbf{\mathcal{X}})=\|\nabla_{h}\mathbf{\mathcal{X}}\|_{1}+\|\nabla_{v}\mathbf{\mathcal{X}}\|_{1}\), where \(\nabla_{h}\) and \(\nabla_{v}\) denote the gradient operators along the horizontal and vertical axes, respectively. For brevity, TV can be compactly expressed as
\[\mathrm{TV}(\mathbf{\mathcal{X}})=\|\nabla\mathbf{\mathcal{X}}\|_{1}, \tag{2}\]
where \(\nabla=(\nabla_{h},\nabla_{v})\). TV, with its widespread visual applications [10, 11, 12], has been one of the most classic image gradient regularizers. Several noteworthy directions for enhancing TV are summarized as follows (a minimal code sketch of TV and its non-convex variant is given after the list).
(i) Non-convex Total Variation: According to the definition of TV, it is the \(\ell_{1}\)-norm of gradients, implicitly assuming that gradients follow the Laplacian distribution. However, histograms of gradients in natural images reveal that this assumption is too rough to be always correct, and instead, the hyper-Laplacian distribution should be more properly considered [13]. Therefore, the \(\ell_{p}\)-norm (\(0<p<1\)) of gradients is a more appropriate choice, leading to a non-convex TV formulation expressed as \(\mathrm{TV}_{\ell_{p}}(\mathbf{\mathcal{X}})=\|\nabla\mathbf{\mathcal{X}}\|_{p}^{p}\). Moreover, numerous studies have also validated the superior performance of other non-convex norm-based TV regularizers [14, 15].
(ii) Directional Total Variation: Since conventional TV norms consider only horizontal and vertical neighbor information, edges in other directions are inevitably weakened, resulting in suboptimal and blurry image recovery in specific image processing tasks, such as rain streak removal [16] and seismic noise attenuation [17]. To address this issue, DTV [18] can be employed, defined as \(\mathrm{DTV}(\mathbf{\mathcal{X}})=\|\mathbf{k}_{\theta}\otimes\mathbf{\mathcal{X}}\|_{1}\), where \(\mathbf{k}_{\theta}\) is a convolutional kernel modeling directional gradients and \(\otimes\) denotes the convolution operator.
(iii) 3D Total Variation: When focusing on the smoothness along the channel axis in 3D images, an additional term can be incorporated. Specifically, \(\mathrm{TV}_{\mathrm{3D}}(\mathbf{\mathcal{X}})=\mathrm{TV}(\mathbf{\mathcal{X}})+\|\nabla_{t}\mathbf{\mathcal{X}}\|_{1}\), where \(\nabla_{t}\) denotes the gradient operator along the channel axis. To better characterize the sparsity of gradient maps along the three axes, the enhanced 3D TV applies the sparsity measure not to the gradient maps themselves but to their subspace basis maps along all channels [19]. More recently, the correlated TV (CTV) was proposed to better model channel smoothness by imposing a nuclear norm on spatial gradient maps, i.e., \(\mathrm{CTV}(\mathbf{\mathcal{X}})=\|\nabla\mathbf{\mathcal{X}}\|_{*}\)[20]. Tensor-CTV (t-CTV) extends this idea by using a tensor nuclear norm [21].
(iv) High-order Total Variation (HOTV): Traditional TV imposes norms on first-order gradients, leading to undesirable "stair effects" (i.e., resulting in piecewise constant images) [22]. Second-order TV, defined as \(\|\nabla^{2}\mathbf{\mathcal{X}}\|_{1}\), maintains the good properties of traditional TV near true edges and penalizes the formation of incorrect edges in areas that ought to maintain smoothness, potentially mitigating stair effects [23]. Furthermore, high-order TV preserves edges in the resultant deformation and is more robust to outlier noise [24].
(v) Total Generalized Variation (TGV): TGV, a state-of-the-art improvement over TV, also aims to mitigate stair effects by leveraging high-order gradients. Generally, a \(k\)-order TGV involves gradients of orders \(i=1,2,\cdots,k\), and its kernel consists of polynomials with degrees less than \(k\). To be more specific, 2-order TGV is defined as follows [25]:
\[\mathrm{TGV}_{\mathbf{\lambda}}^{2}(\mathbf{\mathcal{X}})=\min_{\mathbf{\mathcal{W}}} \lambda_{1}\|\nabla\mathbf{\mathcal{X}}-\mathbf{\mathcal{W}}\|_{1}+\lambda_{0}\| \mathcal{E}\mathbf{\mathcal{W}}\|_{1}, \tag{3}\]
where \(\mathcal{E}=0.5(\nabla+\nabla^{\mathrm{T}})\) is the symmetrized gradient operator, and \(\mathbf{\lambda}=(\lambda_{0},\lambda_{1})\) represents the hyper-parameters. It has been reported that 2-order TGV prefers piecewise linear reconstructions rather than piecewise constant ones, thereby preventing stair effects while still retaining the edges that should be present in the restored image [26].
(vi) Total Deep Variation (TDV): The aforementioned variants of TV can essentially be formulated as
\[g_{\mathrm{TV}}(\mathbf{\mathcal{X}})=f(\mathbf{k}\otimes\mathbf{\mathcal{X}}), \tag{4}\]
where \(f(\cdot)\) denotes a certain norm imposed on the transformed image \(\mathbf{k}\otimes\mathbf{\mathcal{X}}\). However, both \(f(\cdot)\) and \(\mathbf{k}\) are fixed and required to be manually pre-specified. TDV makes Eq. (4) learnable and is given by [27, 28]
\[\mathrm{TDV}(\mathbf{\mathcal{X}})=\sum_{(i,j)}\mathbf{\mathcal{F}}_{ij}=\sum_{(i,j)} [\mathbf{w}\otimes\mathrm{CUNet}(\mathbf{k}\otimes\mathbf{\mathcal{X}})]_{ij} \tag{5}\]
where \((i,j)\) indexes the pixel coordinates, \(\mathbf{k}\) is a zero-mean convolutional kernel, \(\mathrm{CUNet}\) represents the cascade of three U-shaped networks, and \(\mathbf{w}\) is a \(1\times 1\) convolutional kernel imposed on the output of CUNet to generate a one-channel feature map \(\mathbf{\mathcal{F}}_{ij}\). The parameters \(\mathbf{k}\), \(\mathbf{w}\) and the weights in CUNet are learnable. From the definition of TDV, it is clear that TDV is more flexible than \(g_{\mathrm{TV}}\), since CUNet can explore multi-scale priors and representative features.
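As a concrete reference for Eq. (2) and variant (i) above, anisotropic TV reduces to a few lines of PyTorch (a minimal sketch using forward differences; \(p=1\) recovers Eq. (2), while \(0<p<1\) gives \(\mathrm{TV}_{\ell_{p}}\)):

```python
import torch

def tv(x, p=1.0):
    """Anisotropic total variation of a 2-D image tensor x."""
    dh = x[1:, :] - x[:-1, :]   # differences between vertical neighbours
    dv = x[:, 1:] - x[:, :-1]   # differences between horizontal neighbours
    return (dh.abs() ** p).sum() + (dv.abs() ** p).sum()
```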
A brief overview of the recent advancements in gradient prior modeling is provided above. Subsequently, we delve into their inherent limitations.
(i) The first four kinds of TV variants primarily aim to better characterize the specific sparsity prior of gradient maps, which is equivalent to local smoothness. TV and its variants measure the distance between gradient maps and zero. While these methods yield commendable results for smooth regions when minimizing the TV regularizer, it is crucial to note that they also tend to weaken edges and details whose gradients should not be zero. In essence, they consistently underestimate the intrinsic complex priors, beyond sparsity, underlying the gradient maps of general real-world images.
(ii) Regarding TDV, it is a black-box regularizer, the behavior of which is not easily comprehensible. The fundamental question of why minimizing the sum of \(\mathbf{\mathcal{F}}_{ij}\) leads to successful image restoration remains unanswered. Consequently, it remains unclear as to what type of prior is being modeled by TDV.
(iii) Furthermore, the performance enhancement of TDV stems from training on a large-scale paired dataset for a specific task. Consequently, TDV could not be readily employed as an off-the-shelf regularizer in a plug-and-play manner, making it challenging to directly apply the learned TDV when the task or data changes. Even when they are unchanged, the generalization capability of TDV remains obscure. In both scenarios, retraining TDV becomes necessary. More critically, TDV cannot be trained when paired datasets are unavailable or difficult to collect, such as in the case of electron microscope images.
Fig. 1: The mechanism of sparse gradient priors (left) that minimizes a certain norm of gradient/feature maps, and the proposed neural gradient regularizer (right) that encourages an untrained neural network to predict the gradient maps of a high-quality image.
Based on these analyses, the ideal gradient regularizer should possess the following attributes: good performance, plug-and-play capability, and minimal reliance on training with large-scale paired datasets. In response to this burgeoning need, we present a neural gradient regularizer (NGR) in this paper. As depicted in Fig. 1, NGR seeks to recover the gradient map from a degraded image using a neural network, in a zero-shot learning fashion. This method does not rely on the assumption of sparse gradients, thereby distinguishing it from the existing techniques under Eq. (4). In comparison to previous gradient regularizers, NGR exhibits an evident performance enhancement. We also apply NGR to multiple image processing tasks, and experimental results validate the widespread superiority of NGR over existing methods.
The remainder of this article is structured as follows: Section II presents the NGR. Section III reports the outcomes of numerical experiments. Finally, Section IV summarizes the findings of this study.
## II Methods
### _Neural gradient regularizer_
The success of TV can be attributed to the fact that most entries of the gradient maps approach zero. However, the downside of minimizing TV is that it underestimates the gradients of edges. Intuitively, incorporating more comprehensive information from gradient maps could potentially lead to better results. For instance, CTV and t-CTV, which simultaneously model local smoothness and low-rankness, aid in recovering finer details beyond what TV can achieve. Let us consider an ideal scenario where we have access to all the information of the ground-truth gradient maps, denoted as \(\mathbf{\mathcal{G}}=(\mathbf{\mathcal{G}}_{h},\mathbf{\mathcal{G}}_{v},\mathbf{\mathcal{G}}_{t})\). It is reasonable to constrain the gradient map of the restored image to be equal to the ground-truth gradient map. Consequently, the model can be formulated as
\[\min_{\mathbf{\mathcal{X}}}L(\mathbf{\mathcal{X}},\mathbf{\mathcal{Y}}),\quad\text{s.t.} \ \nabla\mathbf{\mathcal{X}}=\mathbf{\mathcal{G}},\]
where \(\nabla=(\nabla_{h},\nabla_{v},\nabla_{t})\) represents the gradient operators along the three axes.
In practice, the ground-truth gradient map \(\mathbf{\mathcal{G}}\) is often inaccessible, so we need to estimate it. Similar to the working mechanism of TDV, one plausible solution is to train a neural network that maps an observed corrupted image \(\mathbf{\mathcal{Y}}\) to its clean gradient map \(\mathbf{\mathcal{G}}\) for a specific task. However, it should be noted that this approach is effective only when the task and image type remain fixed. For instance, a network trained for the denoising task may not perform well for the inpainting task. Moreover, if the network is trained on RGB images, it cannot handle test data consisting of hyperspectral images (HSIs) or multispectral images (MSIs) with different channel numbers. In summary, this solution falls short of meeting the plug-and-play requirement.
To address this issue, we propose an estimation approach for gradient maps in a zero-shot learning fashion. Partially inspired by the deep image prior (DIP), the proposed neural gradient regularizer (NGR) encourages an untrained neural network \(f_{\Theta}(\cdot)=\big{(}f_{\Theta_{h}}(\cdot),f_{\Theta_{v}}(\cdot),f_{\Theta_{t}}(\cdot)\big{)}\) to predict the gradient maps along the three axes of a high-quality image. This can be expressed as
\[\min_{\mathbf{\mathcal{X}},\Theta}L(\mathbf{\mathcal{X}},\mathbf{\mathcal{Y}}),\quad \text{s.t.}\nabla_{i}\mathbf{\mathcal{X}}=f_{\Theta_{i}}(\mathbf{\mathcal{G}}^{(0)}),i\in\{h,v,t\}, \tag{6}\]
where \(\Theta=(\Theta_{h},\Theta_{v},\Theta_{t})\) denotes the collection of all learnable parameters in \(f_{\Theta}(\cdot)\), \(\mathbf{\mathcal{G}}^{(0)}\) is a randomly sampled variable, and \(\nabla_{i}\mathbf{\mathcal{X}}\) represents the gradient of \(\mathbf{\mathcal{X}}\) along its \(i\)th axis (\(i\in\{h,v,t\}\)). To solve this optimization problem, we rewrite the constraint equation as a term in the objective function:
\[\min_{\mathbf{\mathcal{X}},\Theta}L(\mathbf{\mathcal{X}},\mathbf{\mathcal{Y}})+\sum_{i\in \{h,v,t\}}\frac{\lambda_{i}}{2}\|\nabla_{i}\mathbf{\mathcal{X}}-f_{\Theta_{i}}( \mathbf{\mathcal{G}}^{(0)})\|_{2}^{2}, \tag{7}\]
where \(\lambda_{i}\) is a hyper-parameter that controls the penalty strength for each axis (\(i\in\{h,v,t\}\)).
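For concreteness, the penalty in Eq. (7) can be sketched in PyTorch as follows (a minimal illustration; the forward-difference operator assumes periodic boundaries, and all function names are ours):

```python
import torch

def fdiff(x, ax):
    """Forward difference along axis `ax`, with periodic wrap-around."""
    return torch.roll(x, -1, dims=ax) - x

def ngr_penalty(X, grads_pred, lambdas):
    """sum_i (lambda_i / 2) * ||grad_i(X) - f_{Theta_i}(G^(0))||_2^2, Eq. (7).

    grads_pred: the three network outputs, one per axis of X;
    lambdas: per-axis weights, ordered to match the axes of X."""
    return sum(0.5 * lam * ((fdiff(X, ax) - g) ** 2).sum()
               for ax, (lam, g) in enumerate(zip(lambdas, grads_pred)))
```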
Eq. (7) does not impose any manually pre-defined assumptions on the image processing tasks or image types. The solution to Eq. (7) presented in the following section will illustrate that NGR is not reliant on large-scale dataset training. Consequently, NGR could be conveniently used as a flexible plug-and-play gradient modeling tool.
**Remark:** Readers familiar with DIP [29, 30] might observe similarities between DIP and our proposed NGR, as both utilize untrained neural networks to embody prior knowledge and function in a zero-shot learning paradigm. However, several critical distinctions exist. Notably, DIP models the image prior rather than the gradient-map prior. Moreover, the spectral bias theory [31, 32] reveals that DIP learns low-frequency information more rapidly than high-frequency information, resulting in its relative inadequacy in capturing details and textures; this inherent mechanism thus hinders DIP's capability to reconstruct them. Conversely, considering that the observed image itself contains useful low-frequency structures, NGR is designed to better focus on recovering high-frequency information, thereby obviating the need to model low-frequency information, which could potentially improve the restoration quality of image details and textures. As will be demonstrated in the subsequent subsection, during training, NGR leverages Eq. (16) to merge the low-frequency information provided by the observed image with the high-frequency information recovered by the network. Most significantly, the results exhibited in Section III substantiate that NGR surpasses DIP by a gain of 3dB in terms of PSNR for the inpainting task across several RGB and video datasets. This empirical evidence further validates the superiority of NGR over DIP.
### _Solution to NGR regularized image inpainting_
NGR is potentially applicable to a wide range of image processing tasks. As a case in point, we demonstrate how to address Eq. (7) in the context of image inpainting, which involves restoring missing entries given the observed pixels, whose indices are denoted as \(\Omega\). Mathematically, image inpainting entails estimating the observed image, formulated as a tensor \(\mathbf{\mathcal{X}}\), subject to the constraint \(\mathcal{P}_{\Omega}(\mathbf{\mathcal{X}})=\mathcal{P}_{\Omega}(\mathbf{\mathcal{Y}})\), where \(\mathcal{P}(\cdot)\) denotes the projection operator. In order to decouple \(\mathcal{P}_{\Omega}(\cdot)\) and \(\mathbf{\mathcal{X}}\), an auxiliary variable \(\mathbf{\mathcal{K}}\) is introduced to compensate for unobserved entries, leading to the revised constraint
\(\mathcal{P}_{\Omega}(\mathbf{\mathcal{X}}+\mathbf{\mathcal{K}})=\mathcal{P}_{\Omega}(\mathbf{ \mathcal{Y}})\), thereby facilitating the solution of Eq. (7). By combining NGR, an image inpainting problem can be addressed using the following optimization problem:
\[\begin{cases}\min_{\mathbf{\mathcal{X}},\mathbf{\mathcal{K}},\Theta}&\delta \mathbf{\mathcal{K}},\Omega+\sum_{i\in\{h,v,t\}}\frac{\lambda_{i}}{2}\|\nabla_{i} \mathbf{\mathcal{X}}-f_{\Theta_{i}}(\mathbf{\mathcal{G}}^{(0)})\|_{2}^{2},\\ \mathrm{s.t.}&\mathcal{P}_{\Omega}(\mathbf{\mathcal{X}}+\mathbf{\mathcal{K}})= \mathcal{P}_{\Omega}(\mathbf{\mathcal{Y}}),\end{cases} \tag{8}\]
where \(\delta_{\mathbf{\mathcal{K}},\Omega}\) is an indicator function that constrains \(\mathbf{\mathcal{K}}\) to be in the complement of \(\Omega\), defined as:
\[\delta_{\mathbf{\mathcal{K}},\Omega}=\begin{cases}0,&\mathcal{P}_{\Omega}(\mathbf{ \mathcal{K}})=0,\\ +\infty,&\text{otherwise}\.\end{cases} \tag{9}\]
This constraint forces the entries of \(\mathbf{\mathcal{X}}\) to coincide exactly with those of \(\mathbf{\mathcal{Y}}\) on the observed components, while allowing them to be flexibly valued on the missing ones. To solve this optimization problem, the Alternating Direction Method of Multipliers (ADMM) can be readily employed. The original problem is recast as a minimization of the augmented Lagrangian function:
\[\begin{split}\min_{\mathbf{\mathcal{X}},\mathbf{\mathcal{K}},\mathbf{ \mathcal{A}},\Theta}&\left\{\delta_{\mathbf{\mathcal{K}},\Omega}+\sum_{i \in\{h,v,t\}}\frac{\lambda_{i}}{2}\|\nabla_{i}\mathbf{\mathcal{X}}-f_{\Theta_{i}} (\mathbf{\mathcal{G}}^{(0)})\|_{2}^{2}\right.\\ &\left.+\frac{\mu}{2}\|\mathcal{P}_{\Omega}(\mathbf{\mathcal{Y}})-\mathbf{ \mathcal{X}}-\mathbf{\mathcal{K}}+\frac{\mathbf{\Lambda}}{\mu}\|_{2}^{2}\right\}, \end{split} \tag{10}\]
where \(\mathbf{\Lambda}\) is the Lagrangian multiplier and \(\mu\) is a hyperparameter. The unknown variables are optimized iteratively.
(1) Updating \(\Theta\): By fixing the variables other than \(\Theta\), we derive the objective function for \(\Theta\). The optimization problem can be formulated as follows:
\[\min_{\Theta}\sum_{i\in\{h,v,t\}}\frac{\lambda_{i}}{2}\|\nabla_{i}\mathbf{ \mathcal{X}}-f_{\Theta_{i}}(\mathbf{\mathcal{G}}^{(0)})\|_{2}^{2}. \tag{11}\]
Given that \(f_{\Theta_{i}}(\cdot)\) is a neural network composed of nonlinear operators, we can easily employ the Adam optimizer to solve this optimization problem.
(2) Updating \(\mathbf{\mathcal{K}}\): The optimization of \(\mathbf{\mathcal{K}}\) is straightforward, and we provide the expression as follows:
\[\mathbf{\mathcal{K}}=\mathcal{P}_{\Omega}(\mathbf{\mathcal{Y}})-\mathbf{\mathcal{X}}+ \frac{\Lambda}{\mu},\quad\text{where}\quad\mathcal{P}_{\Omega}(\mathbf{\mathcal{ K}})=0. \tag{12}\]
(3) Updating \(\mathbf{\mathcal{X}}\): We update \(\mathbf{\mathcal{X}}\) by solving the following optimization problem:
\[\begin{split}\min_{\mathbf{\mathcal{X}}}&\left\{\sum_{i \in\{h,v,t\}}\frac{\lambda_{i}}{2}\|\nabla_{i}\mathbf{\mathcal{X}}-f_{\Theta_{i}} (\mathbf{\mathcal{G}}^{(0)})\|_{2}^{2}+\right.\\ &\left.\frac{\mu}{2}\|\mathcal{P}_{\Omega}(\mathbf{\mathcal{Y}})- \mathbf{\mathcal{X}}-\mathbf{\mathcal{K}}+\frac{\mathbf{\Lambda}}{\mu}\|_{2}^{2}\right\}, \end{split} \tag{13}\]
Taking the derivative of this objective function with respect to \(\mathbf{\mathcal{X}}\) and setting it to zero lead to the following equation:
\[\sum_{i\in\{h,v,t\}}\lambda_{i}\nabla_{i}^{\mathrm{T}}(\nabla_{i}\mathbf{ \mathcal{X}}-f_{\Theta_{i}}(\mathbf{\mathcal{G}}^{(0)}))=\mu(\mathcal{P}_{\Omega} (\mathbf{\mathcal{Y}})-\mathbf{\mathcal{X}}-\mathbf{\mathcal{K}})+\mathbf{\Lambda}. \tag{14}\]
After simple calculations, we obtain a linear system:
\[\begin{split}&\left(\mu+\sum_{i\in\{h,v,t\}}\lambda_{i}\nabla_{i }^{\mathrm{T}}\nabla_{i}\right)\mathbf{\mathcal{X}}=\\ &\sum_{i\in\{h,v,t\}}\lambda_{i}\nabla_{i}^{\mathrm{T}}f_{\Theta _{i}}(\mathbf{\mathcal{G}}^{(0)})+\mu(\mathcal{P}_{\Omega}(\mathbf{\mathcal{Y}})-\mathbf{ \mathcal{K}})+\mathbf{\Lambda}.\end{split} \tag{15}\]
Here, \(\nabla_{i}^{\mathrm{T}}\) denotes the transposed operator of \(\nabla_{i}\), and we denote the right-hand side of Eq. (15) as \(\mathbf{\mathcal{R}}\). The closed-form solution can be deduced using the following expression:
\[\mathbf{\mathcal{X}}=\mathcal{F}^{-1}\left(\frac{\mathcal{F}\left(\mathbf{\mathcal{R}}\right)}{\mu\mathbf{1}+\sum_{i\in\{h,v,t\}}\lambda_{i}\left|\mathcal{F}\left(\nabla_{i}\right)\right|^{2}}\right), \tag{16}\]
In Eq. (16), \(|\cdot|^{2}\) represents the element-wise square operator, and \(\mathcal{F}\left(\cdot\right)\) and \(\mathcal{F}^{-1}\left(\cdot\right)\) denote the Fourier transform and its inverse, respectively.
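Under the periodic-boundary assumption, the linear system in Eq. (15) is diagonalized by the FFT, so Eq. (16) amounts to a pointwise division in the frequency domain. A minimal PyTorch sketch (function names are ours):

```python
import torch

def grad_otf_sq(shape, ax):
    """|F(grad_ax)|^2: squared transfer function of the forward-difference
    kernel [-1, 1] along axis `ax`, zero-padded to `shape` (periodic BCs)."""
    k = torch.zeros(shape)
    k.view(-1)[0] = -1.0                  # kernel origin
    idx = [0] * len(shape)
    idx[ax] = 1
    k[tuple(idx)] = 1.0                   # the shifted-by-one tap
    return torch.abs(torch.fft.fftn(k)) ** 2

def solve_x(R, lambdas, mu):
    """Closed-form X-update of Eq. (16)."""
    denom = mu + sum(lam * grad_otf_sq(R.shape, ax)
                     for ax, lam in enumerate(lambdas))
    return torch.real(torch.fft.ifftn(torch.fft.fftn(R) / denom))
```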
(4) Update \(\mathbf{\Lambda}\): According to general ADMM principles, the multipliers are updated by
\[\mathbf{\Lambda}=\mathbf{\Lambda}+\mu(\mathcal{P}_{\Omega}(\mathbf{\mathcal{Y}})-\mathbf{ \mathcal{X}}-\mathbf{\mathcal{K}}). \tag{17}\]
Algorithm 1 outlines the overall workflow. Taking the observed image \(\mathbf{\mathcal{Y}}\), the observation set \(\Omega\), and the hyper-parameters \(\lambda_{i}(i=1,2,3)\) as input, the algorithm yields the restored image \(\mathbf{\mathcal{X}}\). At step 1, \(\mathbf{\mathcal{G}}^{(0)}\) is initialized by sampling from a uniform distribution. Notably, this algorithm dispenses with the need for training the network on large-scale datasets, thereby enabling NGR to operate in a zero-shot learning paradigm. We also apply NGR to the image denoising problem; please refer to the supplementary document for details.
```
Input:  Y, Omega, lambda_i (i = 1, 2, 3)
Output: X
1: Initialize G^(0).
2: while not converged do
3:   Update Theta by applying the Adam optimizer to Eq. (11);
4:   Update K by Eq. (12);
5:   Update X by Eq. (16);
6:   Update Lambda by Eq. (17).
7: end while
```
**Algorithm 1** NGR regularized image inpainting
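Putting the pieces together, a compact sketch of Algorithm 1 might look as follows; it reuses `fdiff` and `solve_x` from the sketches above, `fdiff_T` is the adjoint operator (backward difference), and the network `net`, its noise input `G0`, the mask convention, and all default values are our placeholders rather than the exact settings used in the experiments:

```python
import torch

def fdiff_T(x, ax):
    """Adjoint of fdiff: backward difference with periodic wrap-around."""
    return torch.roll(x, 1, dims=ax) - x

def ngr_inpaint(Y, mask, net, G0, lambdas, mu=0.01, iters=3000, lr=1e-3):
    """Y: observed tensor; mask: 1 on observed entries (Omega), 0 elsewhere;
    net(G0) is assumed to return the three predicted gradient maps."""
    X, Lam = (Y * mask).clone(), torch.zeros_like(Y)
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(iters):
        # (1) Theta-update, Eq. (11): fit predicted gradients to grad(X)
        loss = sum(0.5 * lam * ((fdiff(X, ax) - g) ** 2).sum()
                   for ax, (lam, g) in enumerate(zip(lambdas, net(G0))))
        opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():
            G = [g.detach() for g in net(G0)]
            # (2) K-update, Eq. (12): supported only off Omega
            K = (Y * mask - X + Lam / mu) * (1 - mask)
            # (3) X-update: right-hand side of Eq. (15), then FFT solve, Eq. (16)
            R = sum(lam * fdiff_T(g, ax)
                    for ax, (lam, g) in enumerate(zip(lambdas, G)))
            R = R + mu * (Y * mask - K) + Lam
            X = solve_x(R, lambdas, mu)
            # (4) multiplier update, Eq. (17)
            Lam = Lam + mu * (Y * mask - X - K)
    return X
```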
### _Connection to previous works_
Connection to TGV: By introducing an auxiliary variable \(\mathbf{\mathcal{W}}\), Eq. (7) can be equivalently reformulated as
\[\begin{split}\min_{\mathbf{\mathcal{X}},\Theta}&L(\mathbf{\mathcal{X}},\mathbf{\mathcal{Y}})+\sum_{i\in\{h,v,t\}}\frac{\lambda_{i}}{2}\|\nabla_{i}\mathbf{\mathcal{X}}-\mathbf{\mathcal{W}}_{i}\|_{2}^{2},\\ \mathrm{s.t.}\quad&\mathbf{\mathcal{W}}_{i}=f_{\Theta_{i}}(\mathbf{\mathcal{G}}^{(0)}).\end{split} \tag{18}\]
Conversely, the TGV regularized problem can be written as
\[\min_{\mathbf{\mathcal{W}}}L(\mathbf{\mathcal{X}},\mathbf{\mathcal{Y}})+\lambda_{1}\|\nabla \mathbf{\mathcal{X}}-\mathbf{\mathcal{W}}\|_{1}+\lambda_{0}\|\mathcal{E}\mathbf{\mathcal{W}} \|_{1}. \tag{19}\]
It is observed that there exists a strong connection between TGV and NGR, as both minimize the distance between the gradient map and the auxiliary variable \(\mathbf{\mathcal{W}}\). However, TGV and NGR impose distinct constraints on \(\mathbf{\mathcal{W}}\). While TGV promotes a manually specified sparsity prior on \(\mathcal{E}\mathbf{\mathcal{W}}\), NGR restricts \(\mathbf{\mathcal{W}}\) to be the output of a neural network that can automatically extract intrinsic prior structures underlying the gradient map, facilitating a possibly more flexible and complex representation capability for expressing such information.
Connection to TDV: Both TDV and NGR share the common goal of characterizing gradient priors using neural networks; namely, they are data-driven regularizers. Nonetheless, they exhibit several distinctions. Firstly, Kobler et al. [27] employed a gradient flow to minimize the energy functional regularized by TDV, where the training process was described as a mean-field optimal control problem, necessitating supervised learning of the network on large-scale datasets for a fixed and specific task. Consequently, TDV can be interpreted as a discriminative prior, and it must be retrained if the task or data varies. In contrast, NGR operates within a zero-shot learning paradigm, thus potentially serving as a versatile and plug-and-play regularizer. Secondly, from the definition displayed in Eq. (5), the TDV approach primarily focuses on minimizing the sum of a feature map obtained by a neural network, making it difficult to comprehend which prior is encoded in the regularizer and why minimizing this regularizer leads to good performance. In contrast, as validated by Section III-C1 and Fig. 11, NGR gradually generates a refined gradient map of a high-quality image, thereby facilitating an understanding of the behavior of NGR.
## III Experiments
This study mainly focuses on visual data inpainting and denoising tasks to test the performance of NGR as well as other related SOTA methods. Specifically, we adopt the same network architecture as proposed by Ulyanov et al. in [29], which utilizes a U-Net with skip connections. It is important to note that, in contrast to other network architectures (discussed in Section III-C3), NGR exhibits better robustness to the choice of backbone, mainly attributable to its gradient-prediction principle.
We employ peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) as the evaluation metrics for all experiments. For HSI data, we additionally incorporate the spectral angle mapper (SAM) and erreur relative globale adimensionnelle de synthèse (ERGAS) metrics to more accurately assess the quality of channel-wise restoration. All experiments are conducted on a server equipped with Python 3.9.0, PyTorch 2.0.0, and Nvidia GeForce RTX 2080Ti GPUs.
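For completeness, PSNR and the spectral angle mapper can be computed as sketched below (SSIM and ERGAS are omitted; the unit peak value is an assumption):

```python
import numpy as np

def psnr(x, y, peak=1.0):
    """Peak signal-to-noise ratio for images scaled to [0, peak]."""
    return 10.0 * np.log10(peak ** 2 / np.mean((x - y) ** 2))

def sam_degrees(x, y, eps=1e-8):
    """Mean spectral angle, in degrees, between two (C, H, W) cubes."""
    num = (x * y).sum(axis=0)
    den = np.linalg.norm(x, axis=0) * np.linalg.norm(y, axis=0) + eps
    return np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0))).mean()
```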
Fig. 3: GT, observed RGB image, text inpainting visual results and corresponding PSNR values by methods for comparison on "291000" from the BSDS100 dataset.
Fig. 2: GT, observed RGB image, inpainting visual results and corresponding PSNR values by methods for comparison on "148089" (SR = 10%) from the BSDS100 dataset.
Our source code is publicly available at [https://github.com/yyfz/Neural-Gradient-Regularizer](https://github.com/yyfz/Neural-Gradient-Regularizer), facilitating reproducibility and further development of the proposed method.
### _Visual data inpainting_
We first demonstrate the applicability of the proposed NGR to inpainting tasks involving diverse visual data types, encompassing RGB images, videos, and HSIs. Furthermore, we verify the effectiveness of NGR in addressing a specific image inpainting task, namely multi-temporal MSI decloud.
The comparison methods include: high accuracy low-rank tensor completion (HaLRTC) [33], smooth PARAFAC tensor completion with TV (SPC-TV) [34], tensor nuclear norm minimization using the fast Fourier transform (TNN-FFT) [35], tensor nuclear norm minimization using the discrete cosine transform (TNN-DCT) [36], low-rank tensor completion with TV (LRTC-TV) [37], tensor correlated TV (t-CTV) [21], deep image prior (DIP) [29, 38] and TV-SSTV constrained deep image prior (S2DIP) [39]. All compared methods are run using their official implementations with the recommended hyper-parameters.
Footnote 1: [https://www.cs.rochester.edu/u/liju/publications.html](https://www.cs.rochester.edu/u/liju/publications.html)
Footnote 2: [https://ieeexplore.ieee.org/document/7502115/media/media](https://ieeexplore.ieee.org/document/7502115/media/media)
Footnote 3: [https://github.com/canyila/tensor-completion-tensor-recovery](https://github.com/canyila/tensor-completion-tensor-recovery)
Footnote 4: [https://github.com/canyila/Tensor-robust-PCA-and-tensorcompletion-under-linear-transform](https://github.com/canyila/Tensor-robust-PCA-and-tensorcompletion-under-linear-transform)
Footnote 5: [https://github.com/zhaoxile/Tensor-completion-using-total-variation-and-low-rank-matrix-factorization](https://github.com/zhaoxile/Tensor-completion-using-total-variation-and-low-rank-matrix-factorization)
Footnote 6: [https://github.com/wanghahaling7/Guaranteed-Tensor-Recovery-Fused-Low-rankless-and-Smoothness](https://github.com/wanghahaling7/Guaranteed-Tensor-Recovery-Fused-Low-rankless-and-Smoothness)
Footnote 7: [https://github.com/DmitryUyanov/deep-image-prior](https://github.com/DmitryUyanov/deep-image-prior),
Footnote 8: [https://github.com/accessem/deep-hs-prior](https://github.com/accessem/deep-hs-prior)
Footnote 9: [https://github.com/YisiL.cov/S2DP](https://github.com/YisiL.cov/S2DP)
#### III-A1 RGB images inpainting
Three color image datasets are used to verify the performance of NGR: BSDS100 [40], Set5 [41] and five color images from the USC-SIPI database. The evaluation encompasses three cases with sampling rates (SR) of 50%, 30% and 10%, respectively.
Fig. 4: GT, observed video and inpainting visual results, with corresponding PSNR values, by methods for comparison on the claire dataset (SR = 15%). The 178th frame is selected for display.
It can be observed that NGR achieves superior performance in RGB image inpainting, as evidenced by the metrics listed in Table I. NGR outperforms SPC-TV and LRTC-TV, even though they combine the TV regularizer with a low-rank prior. As a state-of-the-art TV variant, t-CTV achieves performance competitive with NGR on the BSDS100 dataset, but with a significantly lower PSNR value and a higher SSIM value. On the other datasets, however, t-CTV underperforms NGR in terms of both PSNR and SSIM, indicating the robust and consistent performance of NGR across different datasets.
The superiority of the proposed method is exemplified by its better performance in edge preservation, as displayed in Fig. 2. t-CTV and S2DIP generate evident artifacts, while NGR faithfully retains the edge information. Additionally, these methods are applied to a more difficult task, text inpainting. NGR achieves the highest PSNR among all competing methods, as shown in Fig. 3. Most methods leave faint text imprints around the horse, and only S2DIP and NGR provide fine restorations without text imprints. Notably, S2DIP utilizes a combination of three regularizers (i.e., DIP, TV and SSTV) to achieve this, while NGR employs only one automatically learned regularizer.
#### III-A2 Videos inpainting
Eight widely used videos are selected, and the SR is chosen as 20%, 15% and 10%. As shown in Table II, NGR achieves the highest PSNR and SSIM among all compared methods, with a consistent performance gain of 0.2dB-0.4dB over t-CTV in terms of PSNR. The visual results presented in Fig. 4 demonstrate that NGR performs best in restoring facial details and structures, while other methods either fail to achieve complete restoration or result in local blurriness.
Footnote 10: [http://trnace.as.asu.edu/yuw/index.html](http://trnace.as.asu.edu/yuw/index.html)
#### III-A3 HSIs inpainting
Next, the experiment is conducted on two typical HSI datasets, Pavia Centre (PA) and Washington D.C. (WDC), with the SR configured as 10%, 7.5% and 5%, respectively. Additionally, remote sensing HSI data is often corrupted by deadlines, where entire rows/columns of pixels are missing. Deadline removal is a more challenging task, since there is no additional information available for the missing regions, which are contiguous blocks absent in all channels.
Fig. 5: GT, observed HSI, inpainting visual results and corresponding PSNR values by methods for comparison on PA. The pseudo-color images composed of the 47th, 20th and 14th bands are selected for display.
Therefore, particular attention needs to be paid to deadline removal.
The metrics reported in Table III indicate that NGR outperforms all other competing methods. Only NGR achieves a high-precision restoration, with the PSNR exceeding 41dB and the SSIM surpassing 0.99 for cases with randomly missing pixels. As for deadline removal, the recovered images displayed in Fig. 5 demonstrate that conventional methods such as TV regularization, low-rank regularization, or their combination are insufficient to recover from such severely corrupted HSI data. On the other hand, approaches based on correlated TV and untrained neural networks manage to achieve satisfactory image restoration, but NGR attains the best results through precise prediction of the gradient maps.
#### III-A4 Multi-temporal MSIs decloud
Multi-temporal MSIs decloud is a special inpainting task for remote sensing, wherein cloud-free images of different timestamps are utilized to facilitate the restoration of cloud-affected images. Suppose there are MSIs captured at \(T\) timestamps denoted as \(\mathbf{\mathcal{M}}_{t}\in\mathbb{R}^{B\times H\times W}(t=1,2,\cdots,T)\), with the assumption that \(\mathbf{\mathcal{M}}_{1}\) is a cloudy MSI, while the others are cloud-free MSIs. These MSIs are concatenated along the channel dimension to construct a multi-temporal MSI, denoted as \(\mathbf{\mathcal{Y}}=[\mathbf{\mathcal{M}}_{1},\mathbf{\mathcal{M}}_{2},\cdots,\mathbf{ \mathcal{M}}_{T}]\in\mathbb{R}^{BT\times H\times W}\). Generally speaking, it aims to recover the missing values in cloudy regions in \(\mathbf{\mathcal{M}}_{1}\).
The experimental setup remains consistent with the aforementioned configurations. The experimental data include Forish Mountain with 8 bands and 4 timestamps, Forish Farmland with 8 bands and 4 timestamps, and Beijing with 6 bands and 4 timestamps. Three real cloud masks of varying sizes (small, medium and large) are selected from the WHU cloud dataset [42]. Table IV unequivocally demonstrates the superior performance of NGR over other methods across all evaluation metrics. Additionally, the visual results depicted in Fig. 6 illustrate the remarkable capability of NGR in reconstructing fine details from multi-temporal images.
#### III-A5 Brief summary
In this section, we have conducted a comprehensive analysis to compare the performance of different competing methods on the inpainting problem with various types of visual data, encompassing RGB images, videos, HSIs, and multi-temporal MSIs. The results demonstrate that the NGR regularizer consistently achieves good performance across all data types, in contrast to previous state-of-the-art methods that are applicable only to a limited range of data types. This highlights the versatility of NGR and implies that it is generally usable for a broad range of data types, thereby solidifying its potential for widespread applicability in the field of image processing.
### _Visual data denoising_
For the denoising task, we compare our method with TV regularized low-rank matrix factorization (LRTV) [12], TV regularized low-rank tensor decomposition (LRTDTV) [43], correlated TV (CTV) [20], t-CTV [21], fast hyperspectral mixed noise removal (FastHyMix) [44], DIP [29, 38], S2DIP [39] and the 3D quasi-recurrent and transformer based network (TRQ3D) [45] for HSI denoising. VBM3D [46], VBM4D [47], CTV, t-CTV, DIP and S2DIP are used for the video denoising comparison.
Fig. 8: GT, observed video frame, denoising visual results and corresponding PSNR values by methods for comparison on the akiyo dataset (\(\sigma\) = 0.15). The 277th frame is selected for display.
Fig. 7: The visual results on real-world data, France (the first row) and China (the second row), obtained by several representative methods.
#### III-B1 Videos denoising
Eight widely used videos are selected for this study. Four levels of Gaussian noise, characterized by \(\sigma\) = 0.05, \(\sigma\) = 0.1, \(\sigma\) = 0.15, and \(\sigma\) = 0.2, are considered.
As depicted in Table V, NGR outperforms all comparison methods under low-intensity Gaussian noise. When high-intensity Gaussian noise is present, NGR achieves the second-highest PSNR, slightly lower than S2DIP; nonetheless, NGR attains the highest SSIM value in this scenario. The high PSNR values of S2DIP under high-intensity Gaussian noise can be attributed to its more extensive use of proper priors on this data, such as local smoothness (via the additional TV and SSTV prior terms). However, NGR's accurate estimation of gradient maps results in higher SSIM values, indicating that NGR exhibits relatively stronger restoration capabilities in terms of preserving structural information. Furthermore, S2DIP exhibits unsatisfactory performance under low-intensity Gaussian noise, thereby indicating that NGR is more robust to varying noise intensities.
As illustrated in Fig. 8, VBM3D and VBM4D are incapable of recovering the text in the background. The results obtained by CTV and S2DIP exhibit local blurriness in the foreground. Conversely, NGR effectively restores both the foreground and background of videos corrupted by strong noise, yielding relatively clear restorations. This demonstrates the superior performance of NGR on the video denoising task.
#### V-B2 HSI denoising
In order to demonstrate the effectiveness of NGR for the HSI denoising task, we again employ the PA and WDC datasets. Considering the intricate nature of noise components in HSIs, we focus on three distinct scenarios for the simulated data: (1) Independently and identically distributed (i.i.d.) Gaussian noise. In this scenario, the image cube is contaminated by i.i.d. Gaussian noise with an intensity of \(\sigma=0.1\). (2) Weak mixed noise. Each band is corrupted by Gaussian noise with an intensity randomly selected from 0.1 to 0.4, and impulse noise with a ratio of 0.1. Additionally, 20% of the bands are corrupted by deadlines and stripes, with 35 instances of each type of noise. (3) Strong mixed noise. The intensity of the impulse noise is increased to 0.25, and 50% of the bands are affected by deadlines and stripes. Through these simulations, we aim to comprehensively evaluate the performance of NGR in addressing various noise types and intensities commonly encountered in HSIs.
Quantitative results are presented in Table VI. In terms of Gaussian noise, NGR exhibits superior performance in comparison to traditional low-rank based methods. Although the metrics for the Gaussian scenarios are slightly lower than those of S2DIP, NGR achieves superior recovery results in most competing cases. Such performance superiority is even more evident when dealing with mixed noise, especially against the deep learning methods FastHyMix and TRQ3D. This demonstrates that NGR excels in complex noise removal and accurately restores degraded images. Visual results on the WDC dataset are presented in Fig. 9. These results are consistent with our analysis, indicating that the gradient map estimation provided by NGR aids in the recovery from severe degradation.
In addition to the simulated experiments, we also employ two real-world datasets with severe degradation caused by stripes and deadlines, namely Baoqing and Wuhan, both acquired by the GaoFen-5 satellite. Fig. 10 illustrates the denoised images obtained by several representative methods. It is observed that both LRTDTV and CTV struggle to handle the dense stripes and deadlines effectively. Although DIP and S2DIP can remove most of the stripes and deadlines, they do so at the expense of evident detail loss and color distortion. Conversely, NGR emerges as the best performer on the real-world data, demonstrating its effectiveness in tackling such challenging scenarios.
#### V-B3 Brief summary
In the context of image denoising, it has been demonstrated that NGR is capable of simultaneously eliminating noise and preserving textures in corrupted images. While S2DIP may achieve better metrics in certain instances, its performance relies heavily on additional priors. Considering that NGR depends only on the deep gradient prior automatically extracted from the data, and that the images denoised by NGR consistently attain better visual quality, as depicted in the figures, it is reasonable to say that NGR is effective and potentially useful in more general scenarios.
### _Discussion_
#### V-C1 Analysis of gradient estimation process
The robust edge-preserving capability of NGR could possibly be attributed to its efficiency in extracting gradients with strong self-similarity. Leveraging the inductive bias (also known as the deep prior) of neural networks, NGR iteratively refines gradient details, such as edges and textures, enabling it to effectively capture fine-grained gradient maps.
Fig. 11 illustrates the gradient estimation process for the same task shown in Fig. 2. At the \(i\)-th iteration, based on Eq. 11, the deep network can automatically extract fine details from \(\mathbf{\mathcal{X}}^{(i)}\) and estimate a refined gradient map \(\mathbf{\mathcal{G}}^{(i+1)}\) by exploiting the strong self-similarity of gradients. By employing the gradient map \(\mathbf{\mathcal{G}}^{(i+1)}\) estimated by the network, a more refined \(\mathbf{\mathcal{X}}^{(i+1)}\) tends to be obtained by using Eq. 16. This iterative process continues until a proper fine-grained gradient map is obtained. This coarse-to-fine gradient estimation process is crucial to the superior performance of NGR.
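For concreteness, the following is a minimal PyTorch sketch of this alternating loop for inpainting. The helper `finite_diff`, the surrogate objective, and the plain Adam update standing in for the exact ADMM step of Eq. 16 are our simplifications, not the paper's algorithm; `f_theta` denotes the untrained gradient-estimation network.

```python
import torch

def finite_diff(x):
    # Forward differences along height and width, concatenated on channels;
    # a stand-in for the h/v gradient operators used in the paper.
    gh = torch.roll(x, -1, dims=2) - x
    gv = torch.roll(x, -1, dims=3) - x
    return torch.cat([gh, gv], dim=1)

def ngr_inpaint(y, mask, f_theta, mu=16.0, lr=1e-3, n_iters=5000):
    # y: observed image (N, C, H, W); mask: 1 on observed pixels, 0 otherwise.
    x = y.clone().requires_grad_(True)
    opt = torch.optim.Adam(list(f_theta.parameters()) + [x], lr=lr)
    for _ in range(n_iters):
        g = f_theta(x.detach())             # refined gradient map, cf. Eq. 11
        loss = (finite_diff(x) - g).abs().sum() \
             + mu * (mask * (x - y)).pow(2).sum()  # fidelity on observed pixels
        opt.zero_grad()
        loss.backward()
        opt.step()
    return x.detach()
```

In the paper the image update is an exact ADMM step; the plain gradient steps above merely convey the same coarse-to-fine alternation between the network's gradient estimate and the image.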
#### V-C2 Sensitivity to hyperparameters
We then discuss the effects of hyperparameters on NGR. For the inpainting task on the Set5 dataset (SR = 10%), these hyperparameters include \(\lambda_{i}(i\in\{h,v,t\})\) and \(\mu\) in Eq. 10. The hyperparameter \(\lambda_{i}\) is designed to control the weights on the \(h,v\), and \(t\) directions. As aforementioned, we always set \(\lambda_{h}\) and \(\lambda_{v}\) to 1, while regarding \(\lambda_{t}\) as the sole hyperparameter to control the channel direction. \(\mu\) is a hyperparameter from the ADMM algorithm, which is used to control the fidelity term.
To comprehensively analyze the effects of different hyperparameters on the performance of our method, we vary the value of each hyperparameter while keeping the others fixed, and report the corresponding results. As shown in Fig. 12, our method is relatively robust: NGR maintains good performance across a wide range of hyperparameter values. Notably, NGR achieves a PSNR higher than 29dB across the range of \(\mu\) values from \(2^{2}\) to \(2^{8}\). This demonstrates that NGR is easily applicable in real-world scenarios, making it a practical and robust choice for inpainting tasks.

Fig. 10: The visual results on real-world data, Baoqing (the first row) and Wuhan (the second row), obtained by several representative methods.

Fig. 9: GT, observed HSI, denoising visual results and corresponding PSNR values for the compared methods on WDC (strong mixed noise). Pseudo-color images composed of the 67-th, 56-th and 45-th bands are displayed.
#### V-C3 Influences of backbones
The gradients estimated by NGR are represented by neural networks \(f_{\Theta}(\cdot)\). The experiments conducted above have demonstrated that NGR outperforms other methods based on untrained neural networks (namely, DIP and S2DIP). We now discuss whether NGR exhibits higher robustness to model backbones compared to these methods. In our study, we employed the following networks: ResNet [48], U-Net [49], and a convolutional network composed of ten convolution-batch normalization-ReLU blocks.
The results are presented in Fig. 13. The performance of DIP, marked in blue, shows significant variations under different architectures; with some backbones it even loses its ability to recover the image. On the other hand, S2DIP, as a result of incorporating TV, SSTV and DIP, exhibits improved robustness compared to DIP. However, the red markers representing NGR consistently demonstrate strong robustness to different backbone architectures. Regardless of the backbone network used, NGR consistently achieves good performance. This indicates that NGR is capable of adapting and maintaining its effectiveness across different model backbones.
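For reference, the plain convolutional backbone mentioned above can be sketched as follows. The channel width (64) and the final projection layer are our guesses; the paper only specifies the ten convolution-batch normalization-ReLU blocks.

```python
import torch.nn as nn

def conv_bn_relu_backbone(in_ch=3, out_ch=6, width=64, n_blocks=10):
    # Ten Conv-BN-ReLU blocks followed by a projection to the gradient-map
    # channels (out_ch = 2 * in_ch for the h/v gradient components).
    layers, ch = [], in_ch
    for _ in range(n_blocks):
        layers += [nn.Conv2d(ch, width, kernel_size=3, padding=1),
                   nn.BatchNorm2d(width),
                   nn.ReLU(inplace=True)]
        ch = width
    layers.append(nn.Conv2d(ch, out_ch, kernel_size=3, padding=1))
    return nn.Sequential(*layers)

# Usage with the inpainting sketch above:
# f_theta = conv_bn_relu_backbone(in_ch=3, out_ch=6)
```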
#### V-C4 Comparison with other regularizers
To demonstrate the advantage over other regularizers, we apply TV, TGV, HOTV, \(\text{TV}_{l_{p}}(p=1/2)\), CTV, and t-CTV to the PA dataset with Case 3, which represents one of the most challenging scenarios for the HSI denoising task. For a fair comparison, all regularizers are equipped with an \(\ell_{1}\)-norm based data fidelity term, and the hyperparameters are tuned by grid search. TV, TGV, HOTV and \(\text{TV}_{l_{p}}(p=1/2)\) belong to the sparse gradient prior category, but Table VII shows that these TV variants do not result in significant improvements over the vanilla TV. CTV fuses sparsity and low-rank priors, achieving a 5.13dB gain over TV in terms of PSNR. This gain stems from CTV taking into account additional prior knowledge (i.e., low-rankness). The proposed NGR further improves over CTV by 0.71dB using the gradient estimation technique. In the future, it is promising to investigate the combination of sparse or low-rank priors with NGR, which may lead to even better results.
#### V-C5 Transferring capability
An intriguing question is whether knowledge learned from one image can be transferred to other images. As is well established, the knowledge learned by a neural network is inherently embedded within its weights. Consequently, we conducted an experiment in which the NGR model was trained with distinct initializations. In the first trial, the weights were initialized randomly, and NGR was trained directly on the test image. In the second trial, the weights were pretrained on another image, after which NGR was finetuned on the test image.

Fig. 11: NGR's gradient estimation process on "291000" (SR = 10%) from the BSDS100 dataset. (a)-(e) denote the estimated gradient maps at steps 1k, 2k,..., 5k. (f) is the final output of NGR.

Fig. 12: The quantitative performance of NGR on the Set5 dataset (SR = 10%) with different values of the hyperparameters (\(\lambda_{t}\) and \(\mu\)).

Fig. 13: The quantitative performance of NGR on the Set5 dataset (SR = 50%) for the inpainting task and PA (strong mixed noise) for the denoising task with different backbones.
The PSNR curves visualized in Fig. 14 reveal that pretrained weights facilitate the attainment of higher PSNR values than random weights, with the gap diminishing after 8000 iterations. Ultimately, after 11000 iterations, the PSNR values for pretrained and random weights reached 31.71dB and 31.54dB, respectively. This outcome corroborates that utilizing pretrained weights yields superior performance.
In summary, this experiment unequivocally demonstrates that the knowledge learned by NGR from one image can indeed be transferred and applied to other images.
## VI Conclusion
In this research, we have proposed a novel approach for gradient prior modeling, referred to as the neural gradient regularizer (NGR). Distinct from conventional manually pre-designed sparse gradient priors, such as total variation and its variants, NGR does not depend on the gradient sparsity assumption. Instead, it is capable of automatically extracting the intrinsic priors underlying gradient maps in an entirely data-driven and zero-shot learning manner. In particular, owing to its representation by a neural network, more flexible and complex priors of gradient maps, beyond the sparsity prior of conventional TV methods, can be derived from data and further enhance the performance of image restoration tasks. Our comprehensive experimental evaluation demonstrates that the versatile NGR is applicable to a wide range of image processing tasks and data types, exhibiting superior performance compared to state-of-the-art methods. The effectiveness of the proposed method has thus been substantiated.
|
2308.00070 | Phase transition during inflation and the gravitational wave signal at
pulsar timing arrays | Gravitational wave signal offers a promising window into the dynamics of the
early universe. The recent results from pulsar timing arrays (PTAs) could be
the first glimpse of such new physics. In particular, they could point to new
details during the inflation, which can not be probed by other means. We
explore the possibility that the new results could come from the secondary
gravitational wave sourced by curvature perturbations, generated by a
first-order phase transition during the inflation. Based on the results of a
field-theoretic lattice simulation of the phase transition process, we show
that the gravitational wave signal generated through this mechanism can account
for the new results from the PTAs. We analyze the spectral shape of the signal
in detail. Future observations can use such information to distinguish the
gravitational wave signal considered here from other possible sources. | Haipeng An, Boye Su, Hanwen Tai, Lian-Tao Wang, Chen Yang | 2023-07-31T18:46:16Z | http://arxiv.org/abs/2308.00070v1 | # Phase transition during inflation and the gravitational wave signal
###### Abstract
Gravitational wave signal offers a promising window into the dynamics of the early universe. The recent results from pulsar timing arrays (PTAs) could be the first glimpse of such new physics. In particular, they could point to new details during the inflation, which can not be probed by other means. We explore the possibility that the new results could come from the secondary gravitational wave sourced by curvature perturbations, generated by a first-order phase transition during the inflation. Based on the results of a field-theoretic lattice simulation of the phase transition process, we show that the gravitational wave signal generated through this mechanism can account for the new results from the PTAs. We analyze the spectral shape of the signal in detail. Future observations can use such information to distinguish the gravitational wave signal considered here from other possible sources.
## I Introduction
Despite the enormous progress in our knowledge about the early universe, large gaps still remain. The gravitational wave (GW) signal offers a new window into early-universe epochs and dynamics that cannot be probed by other means. PTA collaborations have recently released further evidence that the previously observed common-spectrum process has the Hellings-Downs angular correlation [1; 2; 3; 4; 5; 6; 7]. This indicates the existence of a gravitational wave (GW) background in the nano-Hz frequency range [8].
GW signals in this frequency range can be produced by low-scale phase transitions in the radiation domination (RD) era [9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27], supermassive black hole mergers [28; 29; 30; 31; 32; 33; 34; 35; 36], and topological defects, such as cosmic strings and domain walls [37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51]. At the same time, such a signal can be generated during inflation, approximately 15 e-folds after the CMB modes exit the horizon. Various possible mechanisms have been studied [52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73]. New physics scenarios during inflation as a possible origin for the observed signal have been proposed [74; 75; 76; 77; 78; 79]. In this work, we focus on the possibility of the source being a first-order phase transition during inflation [80; 81; 82]. The bubble collision process during the phase transition can produce GWs. Such a so-called primary GW signal is suppressed by \((H_{\rm inf}/\beta)^{5}\) and \((L/\rho_{\rm inf})^{2}\), where \(H_{\rm inf}\) is the Hubble expansion rate, \(\beta\) is the phase transition rate, \(L\) is the latent heat density and \(\rho_{\rm inf}\) is the total energy density of the universe during inflation. At the same time, the phase transition process is also a source of curvature perturbation. Such curvature perturbation, after inflation, can generate the so-called secondary GWs [83], see also [84; 85]. Compared to the primary GWs, the secondary GWs can be naturally enhanced by the slow-roll parameter and thus can give rise to the signal observed by the PTAs.
In this work, we perform a field-theoretic simulation of the bubble nucleation and collision process, with a lattice of size \(1000\times 1000\times 1000\), to calculate induced curvature perturbation. Based on these results, we can predict both the strength and the spectral shape of the secondary GW signal. In the absence of a combination of the data sets from different PTAs, we choose the results from the NANOGrav collaboration [1] as a benchmark for comparison. We show that both the size and the shape of the observed signal by NANOGrav [1; 5] in the region with the frequency \(f<\left(1~{}{\rm year}\right)^{-1}\) can be well fit by the secondary GWs produced by first-order phase transition produced during inflation.
## II The model
In this work, we model the spectator sector with a single real scalar field, \(\sigma\), coupled to the inflaton field \(\phi\). The Lagrangian of \(\phi\) and \(\sigma\) is
\[\mathcal{L}=-\frac{1}{2}g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi-\frac{ 1}{2}g^{\mu\nu}\partial_{\mu}\sigma\partial_{\nu}\sigma-V(\phi,\sigma). \tag{1}\]
For the convenience of later discussions, we decompose \(V(\phi,\sigma)\) as
\[V(\phi,\sigma)=V_{0}(\phi)+V_{1}(\phi,\sigma)\, \tag{2}\]
where \(V_{0}(\phi)=V(\phi,0)\).
The inflaton field \(\phi\) can be decomposed as \(\phi_{0}+\delta\phi\), where \(\phi_{0}\) is the homogeneous part, and \(\delta\phi\) is the perturbation. The crucial part in the Lagrangian (1) is that
the mass of the \(\sigma\) field is \(\phi_{0}\) dependent, with
\[m_{\sigma}^{2}=(c_{m}\phi_{0}^{2}-m^{2}). \tag{3}\]
In this scenario, the evolution of \(\phi_{0}\) will change the shape of the potential and thus trigger a first-order phase transition. The general framework for such a scenario can be found in [81; 82]. The details of the specific model used in the numerical simulation in this work are presented in the appendix. The time scale of the phase transition is determined by \(t\sim\beta^{-1},\ \beta=-dS_{4}/dt\), where \(S_{4}\) is the bounce action between the false and true vacuums, and \(t\) is the physical time. The value of \(\beta\) depends on the details of the models in the inflaton and spectator sectors. For the class of models considered here, \(\beta/H_{\rm inf}\sim{\cal O}(10)\)[81; 82], \(H_{\rm inf}\) is the Hubble parameter during inflation. \(\beta^{-1}\) is the typical size of the bubbles.
Through this work, we use the Newtonian gauge, and thus the metric perturbation can be written as
\[g_{00}\!=\!-a^{2}(1+2\Phi),g_{0i}\!=\!0,g_{ij}\!=\!a^{2}[\delta_{ ij}(1-2\Psi)+h_{ij}]\,, \tag{4}\]
and during inflation, we have \(a=-1/H_{\rm inf}\tau\) with \(\tau\) the conformal time. In radiation domination, we have \(a=a_{R}^{2}H_{R}\tau\), where \(a_{R}\) and \(H_{R}\) are the scale factor and the Hubble parameter at reheating. In this work, we assume de Sitter inflation with instantaneous reheating. Therefore, we have \(H_{R}=H_{\rm inf}\).
## III Primary and secondary GWs
In our setup, there are two periods that classical GWs can be copiously produced. In both cases, the GWs satisfy the differential equation,
\[h_{ij}^{\rm TT}{}^{\prime\prime}+2{\cal H}h_{ij}^{\rm TT}{}^{ \prime}-\nabla^{2}h_{ij}^{\rm TT}=16\pi G{\cal T}_{ij}\, \tag{5}\]
where "TT" denotes the transverse and traceless components, \({\cal H}=a^{\prime}/a\), and \({\cal T}_{ij}\) is the source of GWs.
During inflation, the main contribution to the GWs is from the TT components of the energy-momentum tensor. The energy-momentum tensor is composed of \(\sigma\) and \(\delta\phi\), where \(\delta\phi\) is induced by the back reaction from the phase transition as discussed later. We call both of these contributions primary GWs. After being produced, the primary GWs will exit the horizon, and their field strength will be frozen to fixed values. The primary GWs will oscillate again once they reenter the horizon.
In addition to the primary GWs, the phase transition will also induce scalar curvature perturbation leading to secondary GWs after inflation. In particular, with the Newtonian gauge in Eq. (4), the source \({\cal T}_{ij}\) is composed of terms quadratic in \(\Phi\)[83; 84].
In summary, the source term can be written as
\[{\cal T}_{ij}=\left\{\begin{array}{ll}\left[\partial_{i}\sigma \partial_{j}\sigma+\partial_{i}\delta\phi\partial_{j}\delta\phi\right]^{\rm TT }\,&\mbox{primary}\\ -M_{\rm pl}^{2}\left[4\Phi\partial_{i}\partial_{j}\Phi+2\partial_{i}\Phi \partial_{j}\Phi-\frac{2}{{\cal H}^{2}-{\cal H}^{\prime}}\partial_{i}(\Phi^{ \prime}+{\cal H}\Phi)\partial_{j}(\Phi^{\prime}+{\cal H}\Phi)\right]^{\rm TT }\,&\mbox{secondary}\end{array}\right. \tag{6}\]
## IV Generation of the curvature perturbation
From the Lagrangian (1), we can derive the equation of motion for the Fourier modes of the inflaton perturbation,
\[\delta\tilde{\phi}_{\bf q}^{\prime\prime}-\frac{2}{\tau}\delta \tilde{\phi}_{\bf q}^{\prime}+\left(q^{2}+\frac{1}{H_{\rm inf}^{2}\tau^{2}} \,\frac{\partial^{2}V_{0}}{\partial\tilde{\phi}_{0}^{2}}\right)\delta\tilde{ \phi}_{\bf q}={\cal S}_{\bf q}\, \tag{7}\]
where the source \({\cal S}_{\bf q}\) is
\[{\cal S}_{\bf q} = -\frac{1}{H_{\rm inf}^{2}\tau^{2}}\left[\frac{\partial V_{1}}{ \partial\phi}\right]_{\bf q}-\left\{\frac{2\tilde{\Phi}_{\bf q}}{H_{\rm inf}^{ 2}\tau^{2}}\left(\frac{\partial V_{0}}{\partial\phi_{0}}+\left[\frac{\partial V _{1}}{\partial\phi}\right]_{0}\right)\right. \tag{8}\] \[\left.+\frac{\dot{\phi}_{0}}{H_{\rm inf}\tau}\left(3\tilde{\Psi}_{ \bf q}^{\prime}+\tilde{\Phi}_{\bf q}^{\prime}\right)\right\}\.\]
Here the symbol \([\dots]_{\bf q}\) denotes the Fourier mode with comoving momentum \({\bf q}\), and \(\dot{\phi}_{0}\) denotes \(d\phi_{0}/dt\). There are two source terms on the right-hand side of Eq. (7), in which the first term is from the direct interaction between \(\phi\) and \(\sigma\), whereas the second term is purely gravitational. In the case of polynomial interaction \(c_{m}\phi^{2}\sigma^{2}\), we have
\[\left[\frac{\partial V_{1}}{\partial\phi}\right]_{\bf q}=c_{m}[ \phi\sigma^{2}]_{\bf q}\approx c_{m}\phi_{0}[\sigma^{2}]_{\bf q}. \tag{9}\]
Due to the Einstein equations, \(\Phi\) and \(\Psi\) satisfy the differential equation
\[\tilde{\Psi}_{\bf q}^{\prime}-\frac{\tilde{\Phi}_{\bf q}}{\tau}=-4 \pi G_{N}\left(\frac{\dot{\phi}_{0}\delta\tilde{\phi}_{\bf q}}{H_{\rm inf}\tau} +\left[\frac{\partial_{i}}{\partial^{2}}(\sigma^{\prime}\partial_{i}\sigma) \right]_{\bf q}\right). \tag{10}\]
From the energy-momentum conservation, we have
\[\tilde{\Phi}_{\bf q}-\tilde{\Psi}_{\bf q}=-8\pi G_{N}\tilde{\pi}_{ \bf q}^{S}/(H_{\rm inf}^{2}\tau^{2})\, \tag{11}\]
where \(\pi^{S}\) is the anisotropic inertia
\[\tilde{\pi}_{\bf q}^{S}=-\frac{3}{2}H_{\rm inf}^{2}\tau^{2}q_{i}q _{j}q^{-4}\left[(\partial_{i}\sigma\partial_{j}\sigma)^{\rm TL}\right]_{\bf q}\, \tag{12}\]
where the superscript TL refers to the traceless part.
During slow-roll inflation, the _mass term_, \(\partial^{2}V_{0}/\partial\phi_{0}^{2}\), in Eq. (7) is negligible. The solution of \(\delta\tilde{\phi}_{\bf q}\) can be written as
\[\delta\tilde{\phi}_{\bf q}(\tau)=\int_{-\infty}^{\tau}d\tau^{\prime}\tilde{G}( \tau,\tau^{\prime},q){\cal S}_{\bf q}(\tau^{\prime})\, \tag{13}\]
where the Green's function
\[\tilde{G}(\tau,\tau^{\prime};q) = -\frac{1}{q^{2}\tau^{\prime 2}}(\tau-\tau^{\prime})\cos q(\tau- \tau^{\prime}) \tag{14}\] \[+\frac{1}{q^{3}\tau^{\prime 2}}[(1+q^{2}\tau\tau^{\prime})\sin q (\tau-\tau^{\prime})]\.\]
After the phase transition, the universe returns to single-field inflation. Thus, the gauge-invariant quantity,
\[\zeta_{\bf q}=-\tilde{\Psi}_{\bf q}-\frac{H_{\rm inf}\delta\tilde{\phi}_{\bf q }}{\dot{\phi}_{0}}\, \tag{15}\]
is conserved when evolving outside of the horizon.
Several e-folds after the phase transition, the \(\Phi\) contribution to \(\zeta\) becomes negligible since its direct source from the \(\sigma\) field quickly redshifts away. Hence, we have
\[\zeta_{\bf q}=-\frac{H_{\rm inf}}{\dot{\phi}_{0}q}\int_{-\infty}^{0}d\tau^{ \prime}{\cal K}(q\tau^{\prime}){\cal S}_{\bf q}(\tau^{\prime})\, \tag{16}\]
where the integral kernel is
\[{\cal K}(\eta)=\frac{1}{\eta}\left(\cos\eta-\frac{\sin\eta}{\eta}\right). \tag{17}\]
## V The signal
In this work, we use a \(1000\times 1000\times 1000\) lattice to simulate the phase transition process in de Sitter space and numerically solve Eqs. (7), (10) and (11) to calculate the various contributions to \(\zeta_{\bf q}\). The details of the simulation are presented in the appendix. The solid curves in Fig. 1 show the numerical results of \(\Delta_{\zeta}^{2}\), the spectrum of the induced curvature perturbation for \(\beta/H_{\rm inf}=4\), \(5\), \(10\) and \(20\), respectively. \(\Delta_{\zeta}^{2}\) is defined as
\[\Delta_{\zeta}^{2}(q)=\frac{q^{3}}{2\pi^{2}}P_{\zeta}(q)=\frac{q^{3}}{2\pi^{2} }\langle\zeta_{\bf q}\zeta_{\bf q^{\prime}}^{\star}\rangle^{\prime}\, \tag{18}\]
where \(\langle\cdots\rangle^{\prime}\) denotes the correlation function without the delta function. From Fig. 1, we can see that \(\Delta_{\zeta}^{2}\) grows as \(q^{3}\) in the IR region (left of the peak) and drops as \(q^{-6}\) in the UV region (on the far right). By comparing the peak values for the curves of different \(\beta/H_{\rm inf}\) in Fig. 1, we can also conclude that \(\Delta_{\zeta}^{2}\propto(H_{\rm inf}/\beta)^{3}\). 1
Footnote 1: The \(\beta\)-dependence of \(\zeta_{\bf q}\) is not a function of \(\zeta_{\bf q}\).
To further illustrate the physics, we now derive an approximate formula for the spectrum of the scalar perturbation. As shown in the appendix, in the parameter space we are interested in, the gravitationally induced contribution to the curvature perturbation is smaller than the direct contribution. Hence, in the qualitative analysis, we focus on the contribution from the direct source (the first term in \({\cal S}_{\bf q}\)). From Eqs. (9) and (16), we see that \(\Delta_{\zeta}^{2}\) must be proportional to \(1/\epsilon\), where \(\epsilon=\dot{\phi}_{0}^{2}/(2H_{\rm inf}^{2}M_{\rm pl}^{2})\) is the slow-roll parameter after the phase transition. Here we assume that the fields in the spectator sector are heavy, and that after the phase transition, the rolling of the scalar fields is still mostly in the direction of the original inflaton, \(\phi_{0}\). Then \(\zeta_{\bf q}\) can be estimated as
\[\zeta_{\bf q}\approx\frac{H_{\rm inf}}{\dot{\phi}_{0}}\int\frac{d \tau^{\prime}}{q^{2}\tau^{\prime}}\left(\cos q\tau^{\prime}-\frac{\sin q\tau^{ \prime}}{q\tau^{\prime}}\right)\frac{c_{m}\phi_{0}[\sigma^{2}(\tau^{\prime})] _{\bf q}}{H_{\rm inf}^{2}\tau^{\prime 2}}. \tag{19}\]
For the first-order phase transition to complete, we require its duration \(\sim\beta^{-1}<H_{\rm inf}^{-1}\). However, even after the phase transition, the \(\sigma\) field still oscillates and continues producing \(\zeta\). Since \(\beta<m_{\sigma}\), the oscillations are matter-like and redshift as \(a^{-3}\). Thus, the \(\zeta\) production fades within a couple of e-folds after the phase transition. Indeed, Fig. A2 in the appendix shows that most of the induced curvature perturbations are produced between one and two e-folds after the phase transition starts.
Next, we consider the spectrum of scalar perturbation. We begin with the modes with \(q_{\rm phys}<H_{\rm inf}\), or equivalently \(q\tau^{\prime}<1\). In this regime, we can Taylor expand the cosine and sine in the integrand of Eq. 19 and get
\[\zeta_{\bf q}\approx\frac{1}{3\dot{\phi}_{0}}\int dt^{\prime}c_{m}\phi_{0}[ \sigma^{2}(\tau^{\prime})]_{\bf q}\, \tag{20}\]
where \(dt^{\prime}=a(\tau^{\prime})d\tau^{\prime}\). Since the typical scale of the bubble size is \(\beta^{-1}\) and \(H_{\rm inf}<\beta\), we expect the correlation \(\langle[\sigma^{2}]_{\bf q}[\sigma^{2}]_{\bf q^{\prime}}\rangle^{\prime}\) is insensitive to \({\bf q}\). Since the term \(c_{m}\phi_{0}^{2}\sigma^{2}\) triggers the phase transition in the \(\sigma\) sector, we also expect \(c_{m}\phi_{0}^{2}\sigma^{2}\sim L\). Hence, we have,

\[\int\!dt^{\prime}\!\int\!dt^{\prime\prime}\!c_{m}^{2}\phi_{0}^{2} \langle[\sigma^{2}(\tau^{\prime})]_{\bf q}[\sigma^{2}(\tau^{\prime\prime})]_{ \bf q^{\prime}}\rangle^{\prime}\sim\frac{L^{2}}{H_{\rm inf}^{2}a_{*}^{6}} \!\!\left(\frac{2\pi}{\beta}\right)^{3}\, \tag{21}\]

Figure 1: Power spectrum of the induced curvature perturbation, \(\Delta_{\zeta}^{2}(k)\), for different choices of parameters. The solid curves are the results of numerical simulation, and the dashed curves are based on the empirical formula Eq. (23). The wiggles in the curves for \(\beta/H_{\rm inf}=10\) and \(20\) are the residual of the oscillation pattern in the integral kernel (17), which also gives the oscillatory pattern in the primary GW spectrum as discussed in [81; 82]. Large curvature perturbations will lead to primordial black hole (PBH) production, and the relevant constraint is indicated by the grey dotted line [86].
where \(a_{*}\) is the scale factor at the time of the phase transition. The factor \(H_{\rm inf}^{-2}\) is from the integral of the physical time duration, and the factor \((2\pi/\beta)^{3}\) is from dimensional analysis. The factor \(a_{*}^{6}\) appears in the denominator because the Fourier transformation is in comoving space. Combining (20) and (21), we have, in the region \(q_{\rm phys}<H_{\rm inf}\),
\[\Delta_{\zeta}^{2}(q)=\frac{{\cal A}}{\epsilon}\left(\frac{M_{ \rm pl}}{\phi_{0}}\right)^{2}\left(\frac{H_{\rm inf}}{\beta}\right)^{3}\left( \frac{L}{\rho_{\rm inf}}\right)^{2}\left(\frac{q_{\rm phys}}{H_{\rm inf}} \right)^{3}\, \tag{22}\]
where \({\cal A}\) collects all the numerical factors. Eq. (22) explains the IR behavior, the \((H_{\rm inf}/\beta)^{3}\) dependence, and the \((L/\rho_{\rm inf})^{2}\) dependence of \(\Delta_{\zeta}^{2}\) shown in Fig. 1.
The \(q^{3}\) growth of \(\Delta_{\zeta}^{2}\) stops when \(q_{\rm phys}\) reaches \(H\). In the region that \(q_{\rm phys}\gtrsim H_{\rm inf}\), the sine and cosine in (19) are order one, whereas \(\langle[\sigma^{2}]_{\bf q}[\sigma^{2}]_{\bf q^{\prime}}^{*}\rangle^{\prime}\) is still insensitive to the change of \(q\). Therefore, by counting the power of \(q\) in this region, we get \(\Delta_{\zeta}^{2}\) drops as \(q^{-1}\).
As shown in the appendix, the production of the curvature perturbation lasts for a couple of e-folds, during which the typical physical momenta of the bubbles are redshifted; we thus expect the correlation \(\langle[\sigma^{2}]_{\bf q}[\sigma^{2}]_{\bf q^{\prime}}\rangle^{\prime}\) to drop significantly with \(q\) for \(q_{\rm phys}\) larger than some value between \(H\) and \(\beta\). From the numerical simulation, as shown in Fig. 1, \(\Delta_{\zeta}^{2}\) drops as \(q^{-6}\) in the UV region.
In summary, we arrive at an empirical formula for the spectrum of the induced curvature perturbation,
\[\Delta_{\zeta}^{2}(q)=A_{\rm ref}{\cal F}\left(\frac{q_{\rm phys} }{H_{\rm inf}}\right)\, \tag{23}\]
where
\[A_{\rm ref} = \frac{24}{\epsilon}\left(\frac{M_{\rm pl}}{\phi_{0}}\right)^{2} \left(\frac{H_{\rm inf}}{\beta}\right)^{3}\left(\frac{L}{\rho_{\rm inf}}\right) ^{2}\, \tag{24}\] \[{\cal F}(x) = \frac{x^{3}}{1+(\alpha_{1}x)^{4}+(\alpha_{2}x)^{9}}\ . \tag{25}\]
From the numerical results, we obtain \(\alpha_{1}=0.31\), while \(\alpha_{2}\) depends mildly on \(\beta/H_{\rm inf}\) and equals \(0.143\), \(0.17\), \(0.2\), and \(0.2\) for \(\beta/H_{\rm inf}=4,5,10\) and \(20\), respectively.
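Since Eqs. (23)-(25) are fully explicit, the empirical spectrum is easy to evaluate numerically. The sketch below uses the fitted constants quoted above and, as an example, the parameter values used for Fig. A2 of the appendix; it is an illustration of the formulas, not of the lattice simulation itself.

```python
import numpy as np

ALPHA2 = {4: 0.143, 5: 0.17, 10: 0.2, 20: 0.2}  # fitted values quoted above

def shape_F(x, alpha1=0.31, alpha2=0.2):
    """Empirical shape function of Eq. (25)."""
    return x**3 / (1.0 + (alpha1 * x)**4 + (alpha2 * x)**9)

def delta_zeta_sq(q_over_H, beta_over_H, L_over_rho, Mpl_over_phi0, eps):
    """Empirical curvature spectrum of Eqs. (23)-(24)."""
    A_ref = 24.0 / eps * Mpl_over_phi0**2 / beta_over_H**3 * L_over_rho**2
    return A_ref * shape_F(q_over_H, alpha2=ALPHA2.get(beta_over_H, 0.2))

x = np.logspace(-1, 2, 400)
spec = delta_zeta_sq(x, 5, 1e-2, 1.0, 6e-3)  # parameters of Fig. A2
print(f"peak at q_phys/H ~ {x[spec.argmax()]:.1f}, peak value {spec.max():.3f}")
```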
In a general inflation model, \(\phi_{0}\) and \(\epsilon\) can be independent parameters. For single field inflation, if we consider the case that \(\partial V_{1}/\partial\phi\) dominates the evolution of \(\phi_{0}\) after the phase transition, we have
\[\epsilon=\frac{\dot{\phi}_{0}^{2}}{2M_{\rm pl}^{2}H_{\rm inf}^{2} }\sim\frac{(V_{1})^{2}}{6\rho_{\rm inf}H_{\rm inf}^{2}}\, \tag{26}\]
and, assuming \(V_{1}\) is a polynomial,
\[\frac{M_{\rm pl}^{2}}{\phi_{0}^{2}}\sim\left(\frac{M_{\rm pl}}{ V_{1}/V_{1}^{\prime}}\right)^{2}\sim\left(\frac{M_{\rm pl}}{L/V_{1}^{\prime}} \right)^{2}. \tag{27}\]
Thus, without fine-tuning, the peak value of \(\Delta_{\zeta}^{2}\) is
\[\Delta_{\zeta}^{2}(q)\approx 3.6\times\left(\frac{H_{\rm inf}}{\beta} \right)^{3}{\cal F}\left(\frac{q_{\rm phys}}{H_{\rm inf}}\right)\, \tag{28}\]
Thus, for \(H_{\rm inf}/\beta\sim 0.1\), it is natural to expect the peak value of \(\Delta_{\zeta}^{2}\) to be around \(0.01\).
## VI Production of secondary GW
To produce GWs that can account for the PTA results, the curvature perturbations need to reenter the horizon well before the matter-radiation equality. Thus, we have, after inflation,
\[\tilde{\Phi}_{\bf q}(\tau)=-\frac{2}{3}T(q\tau)\zeta_{\bf q}\, \tag{29}\]
where \(T(q\tau)\) is the RD transfer function. We then use Eqs. (5) and (6) to calculate the secondary GW. Following the standard procedure [84], we can get the spectrum function for the secondary GWs,
\[\Omega_{\rm GW}^{(2)}(f)=\Omega_{R}A_{\rm ref}^{2}{\cal F}_{2} \left(\frac{q_{\rm phys}}{H_{\rm inf}}\right), \tag{30}\]
where \(\Omega_{R}\) is the radiation energy fraction of the universe. The form factor \({\cal F}_{2}\) collects the transfer functions and Green's functions. As shown in the appendix, the peak value of \({\cal F}_{2}\) is about \(200\). In the IR region, we have
\[{\cal F}_{2}^{\rm IR}(x)\approx x^{3}\left(\frac{6}{5}\log^{2}x+ \frac{16}{25}\log x+\frac{28}{125}\right). \tag{31}\]
It is the logarithmic structure in \({\cal F}_{2}\) that slows down the rise of the spectrum and thus fits the IR part of the NANOGrav observation data, as shown in Fig. 2.
The relation between comoving momentum \(q\) and today's frequency \(f\) is
\[f=\frac{q}{2\pi a_{0}}=f_{\rm ref}\times\frac{q_{\rm phys}}{H_{ \rm inf}}\, \tag{32}\]
where
\[f_{\rm ref}=10^{-9}\ {\rm Hz}\times e^{40-N_{e}}\left(\frac{H_{\rm inf}}{10^{14}\ {\rm GeV}} \right)^{1/2}\, \tag{33}\]
where \(N_{e}\) is the number of e-folds the phase transition happened before the end of inflation. Here we assume the reheating process finished within one e-fold. Thus, if the phase transition is the reason for the PTA signals, it happened at about \(40\) e-folds before the end of inflation.
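The map from comoving modes to observed frequencies, Eqs. (32)-(33), is likewise immediate to evaluate; for instance, the spectral peak near \(q_{\rm phys}/H_{\rm inf}\sim\) a few lands in the nano-Hz band for \(N_{e}\approx 40\):

```python
import numpy as np

def f_today_Hz(q_phys_over_H, N_e, H_inf_GeV=1e14):
    """Observed frequency of a mode, via Eqs. (32)-(33)."""
    f_ref = 1e-9 * np.exp(40.0 - N_e) * np.sqrt(H_inf_GeV / 1e14)
    return f_ref * q_phys_over_H

print(f_today_Hz(4.0, N_e=40))  # ~4e-9 Hz, in the PTA band
```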
## Summary and Outlook
The main result of this work is shown in Fig. 2. As a benchmark, we compare with data from the NANOGrav collaboration; the observations from the other PTA collaborations are in broad agreement. The secondary GW signal considered in this work has the right magnitude to account for the observation, and it can also fit the spectral shape of the observed data, in particular in the region \(f<3\times 10^{-8}\) Hz. Hence, we conclude that the mechanism of GW production studied in this paper provides a promising explanation for the observations made by the PTA collaborations.
The observed signal in the higher frequency range \(f>3\times 10^{-8}\) Hz seems to indicate an even higher amplitude. Additional data and the combination of the data from all PTA collaborations can shed more light on this region. In our scenario, we could, in principle, consider slightly later phase transitions (smaller \(N_{e}\)) to push the signal peak towards higher frequencies. We can also adjust other parameters, such as \(L\), \(\epsilon\), and \(\beta\), to achieve a higher amplitude. However, as is already evident in Fig. 1, higher amplitudes of the curvature perturbation will inevitably lead to copious production of primordial black holes and be in tension with observations. This is a generic limit for any mechanism of secondary GW production.
We expect significant PBH production in the region of large curvature perturbations considered in this work, even though it has not been excluded yet. This could offer a correlated signal to verify the secondary GW production mechanism. We leave a detailed study of this question for future work.
_Acknowledgment-_ The work of HA is supported in part by the National Key R&D Program of China under Grants No. 2021YFC2203100 and No. 2017YFA0402204, the NSFC under Grant No. 11975134, and the Tsinghua University Dushi Program No. 53120200422. The work of LTW is supported by the DOE grant DE-SC0013642.
**Appendix**
In this appendix, we present the details of the phase transition used in the numerical simulation, the lattice simulation method, and the analysis of the form factor \(\mathcal{F}_{2}\).
## Details of the numerical simulation
Here we present the detailed simulation of the evolution of the spectator field \(\sigma\) and the inflaton field \(\phi\) during the first-order phase transition.
### The phase transition model
In our simulation, the potential in the spectator sector is
\[V_{1}\left(\phi,\sigma\right)=-\frac{1}{2}(m^{2}-c_{m}\phi^{2})\sigma^{2}+ \frac{1}{3}c_{3}\sigma^{3}+\frac{1}{4}\lambda\sigma^{4}. \tag{1}\]
\(V_{1}\) has two non-degenerate local minima, \(\sigma_{\rm fl}\) and \(\sigma_{\rm tr}\). In the early stage of inflation, when \(c_{m}\phi_{0}^{2}>m^{2}\), \(\sigma=0\) is the preferred vacuum. As \(\phi_{0}\) becomes smaller during slow-roll inflation, the phase transition happens when \(c_{m}\phi_{0}^{2}<m^{2}\).

Figure 2: Differential spectra of the secondary GW induced by a first-order phase transition during inflation, for the parameters shown in the plot. Two choices of the model parameters are shown as examples. They give the right amplitude to account for the data set collected by the NANOGrav collaboration, and they also fit the spectral shape of the observed data, in particular in the region \(f<3\times 10^{-8}\) Hz. As a comparison, the corresponding primary GWs are also shown by the dashed curves. In the interesting parameter region, the magnitude of the primary GWs is smaller than that of the secondary GWs by a few orders of magnitude.
The cubic term in the potential \(V_{1}\) provides a barrier between the true and false vacua, and thus the phase transition is first-order. The bubble nucleation process is described by the bounce solution, \(\sigma_{\rm b}\)[87; 88], which satisfies the Euclidean field equation,
\[\frac{{\rm d}^{2}\sigma_{\rm b}}{{\rm d}r^{2}}+\frac{3}{r}\frac{{\rm d}\sigma_ {\rm b}}{{\rm d}r}=\frac{{\rm d}V_{1}}{{\rm d}\sigma_{\rm b}}\, \tag{10}\]
where in Euclidean space \(r=(t^{2}+{\bf x}^{2})^{1/2}\) and \(dr^{2}=dt^{2}+d{\bf x}^{2}\). In Eq. (10), the Hubble expansion is ignored, since the typical energy scale of the spectator sector in this study is much larger than \(H_{\rm inf}\). The boundary conditions for the Euclidean equation of motion (10) are
\[\sigma_{\rm b}\left(\infty\right)=\sigma_{\rm fl},\ \left.\frac{{\rm d}\sigma_{ \rm b}}{{\rm d}r}\right|_{r=0}=0\, \tag{11}\]
and then the bubble nucleation rate per unit physical volume can be written as
\[\frac{\Gamma}{V_{\rm phys}}\sim m^{4}{\rm e}^{-S_{\rm b}}, \tag{12}\]
where the Euclidean bounce action \(S_{\rm b}\) is

\[S_{\rm b}=2\pi^{2}\int_{0}^{\infty}{\rm d}r\,r^{3}\left[\frac{1}{2}\left(\frac{{\rm d}\sigma_{\rm b}}{{\rm d}r}\right)^{2}+V_{1}\left(\phi_{0},\sigma_{\rm b}\right)-V_{1}\left(\phi_{0},\sigma_{\rm fl}\right)\right]. \tag{13}\]
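As an aside, the bounce profile and action can be obtained with the standard undershoot-overshoot shooting method. The sketch below works in the rescaled variables introduced later in this appendix (where the overall \(1/\lambda\) is scaled out of the field equation, so the value printed here should be divided by \(\lambda=2\) to recover \(S_{\rm b}\)), with the paper's values \(\phi_{0}=1.03\) and \(c_{3}=1\); the integration ranges and tolerances are our choices and may need tuning, e.g. in the thin-wall regime.

```python
import numpy as np
from scipy.integrate import solve_ivp

PHI0, C3 = 1.03, 1.0   # simulation values from this appendix

def dU(s):
    # dV1/dsigma in the rescaled variables: -(1 - phi0^2) s + c3 s^2 + s^3
    return (PHI0**2 - 1.0) * s + C3 * s**2 + s**3

def shoot(s0, r_max=400.0):
    # Bounce ODE: d^2 s/dr^2 + (3/r) ds/dr = dU/ds, with ds/dr(0) = 0.
    rhs = lambda r, y: [y[1], dU(y[0]) - 3.0 * y[1] / r]
    return solve_ivp(rhs, (1e-6, r_max), [s0, 0.0],
                     rtol=1e-10, atol=1e-12, dense_output=True)

# True vacuum and barrier: roots of s^2 + C3*s + (PHI0^2 - 1) = 0.
disc = np.sqrt(C3**2 - 4.0 * (PHI0**2 - 1.0))
s_true, s_bar = 0.5 * (-C3 - disc), 0.5 * (-C3 + disc)

lo, hi = s_true * (1.0 - 1e-12), s_bar    # overshoot / undershoot brackets
for _ in range(60):                       # bisect on the release point s(0)
    mid = 0.5 * (lo + hi)
    if shoot(mid).y[0].max() > 1e-3:      # crossed sigma = 0: overshoot
        lo = mid
    else:
        hi = mid

# Bounce action in the rescaled units; integrate to just past the wall,
# since the late-time over/undershoot tail is a finite-precision artifact.
sol = shoot(lo)
r = np.linspace(1e-6, 80.0, 20000)
s, ds = sol.sol(r)
U = 0.5 * (PHI0**2 - 1.0) * s**2 + C3 * s**3 / 3.0 + 0.25 * s**4
print("S_b (rescaled units) =", 2 * np.pi**2 * np.trapz(r**3 * (0.5 * ds**2 + U), r))
```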
The initial configuration of the bubbles in Minkowski space is then given by the analytic continuation of the bounce solution to Minkowski space, \(\sigma_{\rm b}\left(r\right)\to\sigma_{\rm b}\left(\sqrt{a^{2}\left({\bf x}^{2 }-\tau^{2}\right)}\right)\). Then we use the classical field equation,
\[\sigma^{\prime\prime}+2{\cal H}\sigma^{\prime}-\nabla^{2}\sigma+a^{2}\frac{{ \rm d}V_{1}}{{\rm d}\sigma}=0\, \tag{14}\]
to calculate the evolution of the \(\sigma\) field.
### Lattice simulation
We discretize the space using a cubic lattice with \(N\) grids per spatial dimension. The \(N^{3}\) points are labeled as
\[{\bf n}=\left(n_{1},n_{2},n_{3}\right)\,\ {\rm with}\ n_{i}=0,1,\ldots,N-1,\ i=1,2,3. \tag{15}\]
In the simulation, a field \(f\left({\bf x}\right)\) defined on the continuum space with comoving coordinate \({\bf x}\) is converted to a field \(f\left({\bf n}\right)\) defined on the lattice by sampling \(f\left({\bf x}\right)\) at \({\bf x}={\bf n}\delta x\). Note that in our simulation, the lattice is defined in the comoving coordinate system. For convenience, we set \(m=1\) hereafter. In this unit, the Hubble constant \(H_{\rm inf}\) has the value \(0.01\). The physical size of the lattice spacing \(\delta x\) is set to be \(0.6\) at the beginning of the simulation. We verified that \(0.6\) is small enough to give accurate secondary GW signals by comparing the results between \(\delta x=0.6\) and \(\delta x=0.3\) with the same total volume.
In our simulation, \(N=1183\), thus the size of the whole space is \(7H_{\rm inf}^{-1}\). In the simulation, we use periodic boundary conditions in the three spatial directions, so that \(f\left({\bf n}+{\bf e}_{i}N\right)=f\left({\bf n}\right)\), where \({\bf e}_{i}\) denotes one of the unit vectors in the three spatial dimensions.
Note that the finite volume of the cubic lattice implies an IR cut-off of momenta
\[\delta k=\frac{2\pi}{N\delta x}\, \tag{16}\]
and therefore the momenta must be discretized. The finite spatial volume and the discretization require discrete Fourier transformation,
\[f\left(\tilde{\bf n}\right) = \sum_{\bf n}\left(\delta x\right)^{3}f\left({\bf n}\right){\rm e }^{-\frac{{\rm i}2\pi}{N}{\bf n}\cdot\tilde{\bf n}},\] \[f\left({\bf n}\right) = \sum_{\tilde{\bf n}}\left(\frac{\delta k}{2\pi}\right)^{3}f\left( \tilde{\bf n}\right){\rm e}^{\frac{{\rm i}2\pi}{N}\tilde{\bf n}\cdot{\bf n}}\, \tag{17}\]
where the momenta are also periodic, so practically we choose
\[\tilde{\bf n}=\left(\tilde{n}_{1},\tilde{n}_{2},\tilde{n}_{3}\right)\,\ {\rm with }\ \tilde{n}_{i}=-\frac{N-1}{2},\ldots,\frac{N-1}{2}. \tag{18}\]
and the corresponding comoving momenta are
\[{\bf k}=\left(k_{1},k_{2},k_{3}\right),\ {\rm with}\ k_{i}=-\frac{N-1}{N}\frac{ \pi}{\delta x},\ldots,\frac{N-1}{N}\frac{\pi}{\delta x}. \tag{19}\]
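In practice, this momentum grid is exactly what `np.fft.fftfreq` produces, up to ordering. A sketch (with a reduced lattice size for illustration; the paper's \(N=1183\) grid would require several gigabytes):

```python
import numpy as np

N, dx = 128, 0.6             # demo size; the simulation uses N = 1183
dk = 2 * np.pi / (N * dx)    # IR cutoff of the momenta

# Angular comoving momenta along one axis, in FFT ordering; for odd N
# these cover the symmetric range quoted in the text.
k1 = 2 * np.pi * np.fft.fftfreq(N, d=dx)
KX, KY, KZ = np.meshgrid(k1, k1, k1, indexing="ij", sparse=True)
K2 = KX**2 + KY**2 + KZ**2   # |k|^2 on the grid, e.g. for spectral binning
```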
For the time evolution, we set the scale factor \(a\left(\tau_{\star}\right)=1\) at the starting time of the simulation, so the conformal time is \(\tau_{\star}=-H_{\rm inf}^{-1}\). The temporal step is chosen differently in each e-fold. In practice, the phase transition completes within at most 5 e-folds, and we split the first e-fold into 1000 steps, the second into 500 steps, the third into 250 steps, the fourth into 125 steps, and the last into 60 steps.
To facilitate the numerical simulation, we redefine the fields and parameters in the theory as
\[\sigma\to\frac{\sigma}{\sqrt{\lambda}},\ \phi\to\frac{\phi}{\sqrt{c_{m}}},\ c_{3} \to\sqrt{\lambda}c_{3}. \tag{20}\]
Then Eq. (19) becomes
\[V_{1}\left(\phi,\sigma\right)=\frac{1}{\lambda}\left[-\frac{1}{2}\left(1-\phi ^{2}\right)\sigma^{2}+\frac{1}{3}c_{3}\sigma^{3}+\frac{1}{4}\sigma^{4} \right]\, \tag{21}\]
and the field equation Eq.(10) can be rewritten as
\[\sigma^{\prime\prime}+2\mathcal{H}\sigma^{\prime}-\nabla^{2}\sigma+a^{2}\left[- \left(1-\phi^{2}\right)\sigma+c_{3}\sigma^{2}+\sigma^{3}\right]=0. \tag{12}\]
To ensure the phase transition can take place and complete in acceptable time, the action \(S_{\mathrm{b}}\) should not be too large, which we keep under 20. Under this requirement and the slow-roll condition, we set the quartic interaction coefficient \(\lambda=2\), the cubic coefficient \(c_{3}=1\), the initial value of the inflaton \(\phi_{0}\left(\tau_{0}\right)=1.03\), and then tune the initial velocity of the inflaton \(\dot{\phi}_{0}\left(\tau_{0}\right)\) to adjust the value of \(\beta\).
The starting time of the simulation, \(t_{\star}\), is defined as the moment when there is roughly one bubble nucleated per Hubble volume, which means
\[\frac{\Gamma(t_{\star})}{V_{\mathrm{phys}}}\simeq H_{\mathrm{inf}}^{4}. \tag{13}\]
During the phase transition, the bubble nucleation rate can be parameterized as
\[\Gamma(t)\simeq\Gamma(t_{\star})\mathrm{e}^{\beta(t-t_{\star})}, \tag{14}\]
where
\[\beta\equiv-\left.\frac{\mathrm{d}S_{b}}{\mathrm{d}t}\right|_{t=t_{\star}}. \tag{15}\]
Here \(t\) denotes the physical time. The initial condition of \(\sigma\) is chosen as
\[\sigma=\sigma_{\mathrm{fi}},\quad\sigma^{\prime}=0. \tag{16}\]
We simulate the phase transition process using the classical equation of motion of \(\sigma\). The field is evolved by the same numerical integrator used in [89; 90] (the details can also be found in the appendix of [91]). In the case of this work, the Hamiltonian that governs the evolution of the system is
\[\mathcal{H}=\int\mathrm{d}^{3}\mathbf{x}\left[\frac{1}{2a^{2}}\pi^{2}+\frac{a^{2}}{2}(\nabla\sigma)^{2}+a^{4}V_{1}\left(\phi,\sigma\right)\right], \tag{17}\]
where \(\pi\equiv a^{2}\sigma^{\prime}\).
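The paper evolves \(\sigma\) with the symplectic integrator of [89; 90]; as a rough stand-in, a kick-drift-kick leapfrog step for this Hamiltonian could look as follows. This is our simplification; in particular, where the time-dependent scale factor is evaluated within a step is a choice.

```python
import numpy as np

H_INF = 0.01   # Hubble rate in units m = 1

def laplacian(s, dx):
    # Nearest-neighbour comoving Laplacian with periodic boundaries.
    out = -6.0 * s
    for ax in range(3):
        out += np.roll(s, 1, axis=ax) + np.roll(s, -1, axis=ax)
    return out / dx**2

def dU(s, phi0, c3=1.0):
    # dV1/dsigma in the rescaled variables of the field equation above.
    return -(1.0 - phi0**2) * s + c3 * s**2 + s**3

def leapfrog_step(sigma, pi, tau, dtau, dx, phi0):
    # With pi = a^2 sigma', the Hamiltonian gives
    # pi' = a^2 lap(sigma) - a^4 dU(sigma) and sigma' = pi / a^2,
    # where a = -1/(H tau) in de Sitter space.
    a = -1.0 / (H_INF * tau)
    pi = pi + 0.5 * dtau * (a**2 * laplacian(sigma, dx) - a**4 * dU(sigma, phi0))
    a_mid = -1.0 / (H_INF * (tau + 0.5 * dtau))
    sigma = sigma + dtau * pi / a_mid**2
    a = -1.0 / (H_INF * (tau + dtau))
    pi = pi + 0.5 * dtau * (a**2 * laplacian(sigma, dx) - a**4 * dU(sigma, phi0))
    return sigma, pi, tau + dtau
```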
During the simulation, bubbles are generated in the region still occupied by the false vacuum at the beginning of each temporal step. The number density of the bubbles generated at each temporal step is calculated from the nucleation rate per unit lattice volume, and the positions of the bubble centers are chosen randomly.
At each temporal step, the probability for a bubble to be produced on each site follows the binomial distribution with the probability,
\[p=\frac{\Gamma(\tau)}{V_{\mathrm{phys}}}\delta x^{3}\delta\tau a^{4}(\tau). \tag{18}\]
Thus, in practice, at each temporal step, we generate random numbers that obey the binomial distribution with probability \(p\) at the sites still occupied by the false vacuum, to decide whether a true-vacuum bubble is generated at each such site. The profile of the bubbles is determined by the bounce solution \(\sigma_{b}\).
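The per-site sampling just described might look as follows; `insert_bubble`, which stamps the (Minkowski-continued) bounce profile around a chosen center, is left abstract here.

```python
import numpy as np

rng = np.random.default_rng(0)

def Gamma_over_V(t, Gamma_star, beta, t_star):
    # Nucleation rate per physical volume, Eq. (14) of this appendix.
    return Gamma_star * np.exp(beta * (t - t_star))

def nucleate_step(sigma, in_false_vac, rate, a, dx, dtau, insert_bubble):
    # One temporal step: a Bernoulli trial on every false-vacuum site with
    # the probability p of Eq. (18); total counts are then binomial.
    p = rate * dx**3 * dtau * a**4
    hits = (rng.random(sigma.shape) < p) & in_false_vac
    for site in np.argwhere(hits):
        insert_bubble(sigma, tuple(site))
    return sigma
```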
After nucleation, the bubbles expand rapidly, then collide with each other and finally occupy the whole space. Fig. A1 shows the evolution of the ratio \(\sigma_{0}/\sigma_{\mathrm{tr}}\), where \(\sigma_{0}\) is the mean value of the \(\sigma\) field. This value reflects the occupation fraction of the true vacuum, since the false vacuum of our system is set to be \(\sigma_{\mathrm{fl}}=0\). Fig. A1 shows that the bubble collision process finishes within a couple of \(\beta^{-1}\).
With the \(\sigma\) field configuration at each temporal step, we calculate \(\delta\phi\), \(\Psi\) and \(\Phi\) by solving the differential equations (7) and (10) together with the conditions (8), (11) and (12).
Notice that \(\delta\phi\) contributes to the source term of \(\Psi\) in Eq. (10), so we cannot evaluate the integral in Eq. (13) directly using the Green's function method described in the main text for the qualitative analysis. In the numerical simulation, the differential equations are solved iteratively. In detail, at each temporal step, we use \(\sigma\), \(\delta\phi\), \(\Psi\) and \(\Phi\) from the previous step to calculate the source functions (8), (10) and (12), and we use these sources to calculate the evolution of \(\delta\phi\), \(\Psi\), and \(\Phi\) in the current step. Carrying out this calculation step by step yields the full evolution of \(\delta\phi\), \(\Psi\), and \(\Phi\).
After the phase transition, we calculate the curvature perturbation \(\zeta\) using Eq. (15), and the power spectrum \(\Delta_{\zeta}^{2}\). Fig. A2 shows the accumulated contributions to \(\Delta_{\zeta}^{2}\), where for each curve the \(\tau^{\prime}\) integral runs from \(\tau_{\star}\) to the value indicated in the figure. The parameters are chosen to be \(\beta/H=5\), \(L/\rho_{\mathrm{inf}}=10^{-2}\), \(M_{\mathrm{pl}}/\phi_{0}=1,\epsilon=6\times 10^{-3}\). We can see that the induced curvature perturbations are mostly produced between one and two e-folds after the phase transition. Therefore, the physical duration of the production of the curvature perturbation is about \(H^{-1}\).
Fig. A3 shows the induced curvature spectrum \(\Delta_{\zeta}^{2}\) for different choices of \(\beta/H\), together with the pure gravitational contributions shown as dashed curves. We can see that the gravitational contributions are negligible compared to the direct contributions.
## Appendix B Details of the shape function \(\mathcal{F}_{2}\)
Since the curvature perturbations reenter the horizon well before the matter-radiation equality, the GWs observed today are mainly produced during the radiation-dominated era. Thus the energy fraction of GWs can be written as [84]
\[\begin{split}\Omega_{\text{GW}}(k)&=\Omega_{\text{ rad}}\frac{k^{3}}{6}\int_{0}^{\infty}dk_{1}\int_{-1}^{1}d\mu\frac{k_{1}^{3}}{k_{ 2}^{3}}\left(1-\mu^{2}\right)^{2}\\ &\cdot\overline{I^{2}}(k,k_{1},k_{2})\mathcal{P}_{\zeta}\left(k_ {1}\right)\mathcal{P}_{\zeta}\left(k_{2}\right),\end{split}\] (A21)
where \(\overline{I^{2}}(k,k_{1},k_{2})\) is the integration kernel
\[\begin{split}&\overline{I^{2}}(k,k_{1},k_{2})\ =\ \frac{1}{2}\left(\frac{3(k_{1}^{2}+k_{2}^{2}-3k^{2})}{4k_{1}^{3}k_{2}^{3}}\right) ^{2}\\ &\times\Bigg{\{}\pi^{2}\left(k_{1}^{2}+k_{2}^{2}-3k^{2}\right)^ {2}\theta\left(k_{1}+k_{2}-\sqrt{3}k\right)\\ &+\left[-4k_{1}k_{2}+\left(k_{1}^{2}+k_{2}^{2}-3k^{2}\right) \log\left|\frac{3k^{2}-(k_{1}+k_{2})^{2}}{3k^{2}-(k_{1}-k_{2})^{2}}\right| \right]^{2}\Bigg{\}}.\end{split}\] (A22)
with \(\mathbf{k}=\mathbf{k}_{1}+\mathbf{k}_{2}\) and \(\mu=\frac{\mathbf{k}\cdot\mathbf{k}_{1}}{kk_{1}}\). It is convenient to introduce a new variable \(v=k_{1}/k\), then we have
\[\begin{split}\Omega_{\text{GW}}(k)=&\Omega_{\text{ rad}}\int_{0}^{\infty}dv\int_{-1}^{1}d\mu\left(1-\mu^{2}\right)^{2}\mathcal{K}(v, \mu)\\ &\cdot\mathcal{P}_{\zeta}\left(vk\right)\mathcal{P}_{\zeta}\left( \sqrt{v^{2}+1-2v\mu}k\right),\end{split}\] (A23)
where
\[\mathcal{K}(v,\mu)\equiv\frac{v^{3}}{6}\frac{\overline{I^{2}}\left(1,v,\sqrt{ v^{2}+1-2v\mu}\right)}{(v^{2}+1-2v\mu)^{3/2}}.\] (A24)
Notice that the asymptotic behaviors of \(\mathcal{K}(v,\mu)\) are well approximated by
\[\mathcal{K}(v,\mu)\simeq\begin{cases}v^{3}/3&v\ll 1\\ 3v^{-4}\log^{2}v&v\gg 1\end{cases},\] (A25)
which is independent of \(\mu\). The shape function \(\mathcal{F}_{2}\) defined in the main text can be calculated by
\[\begin{split}\mathcal{F}_{2}(x)=&\int_{0}^{\infty}dv \int_{-1}^{1}d\mu\left(1-\mu^{2}\right)^{2}\mathcal{K}(v,\mu)\\ &\cdot\mathcal{F}\left(vx\right)\mathcal{F}\left(\sqrt{v^{2}+1-2v \mu}x\right),\end{split}\] (A26)
where
\[\mathcal{F}(x)=\frac{x^{3}}{1+(\alpha_{1}x)^{4}+(\alpha_{2}x)^{9}}\] (A27)
is the form factor for \(\Delta_{\zeta}^{2}\). We assume \(\mathcal{F}(x)\) has a maximum value \(\mathcal{F}_{\text{max}}\) with coordinate \(x_{\text{max}}\), and it can be approximated as
\[\mathcal{F}(x)\simeq\begin{cases}\mathcal{F}_{\text{max}}(x/x_{\text{max}})^ {3}&x<x_{\text{max}}\\ \mathcal{F}_{\text{max}}(x/x_{\text{max}})^{-6}&x\gg x_{\text{max}}\end{cases}\] (A28)
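For reference, \(\mathcal{F}_{2}\) in Eq. (A26) can also be evaluated directly by numerical quadrature. The sketch below uses the shape-function constants for \(\beta/H_{\rm inf}=5\) and a finite cutoff on \(v\) (our choices); near the logarithmic singularities of the kernel, the quadrature may emit warnings, although the integral is convergent.

```python
import numpy as np
from scipy.integrate import dblquad

A1, A2 = 0.31, 0.17   # shape-function constants for beta/H_inf = 5

def F(x):
    return x**3 / (1.0 + (A1 * x)**4 + (A2 * x)**9)   # Eq. (A27)

def I2bar(k, k1, k2):
    # Oscillation-averaged radiation-era kernel, Eq. (A22).
    s = k1**2 + k2**2 - 3.0 * k**2
    pref = 0.5 * (3.0 * s / (4.0 * k1**3 * k2**3))**2
    log_term = np.log(abs((3*k**2 - (k1 + k2)**2) / (3*k**2 - (k1 - k2)**2)))
    resonance = np.pi**2 * s**2 * (k1 + k2 > np.sqrt(3.0) * k)
    return pref * (resonance + (-4.0 * k1 * k2 + s * log_term)**2)

def F2(x, v_max=100.0):
    # Eq. (A26); the integrand decays fast, so a finite v cutoff suffices.
    def integrand(mu, v):
        u = np.sqrt(v**2 + 1.0 - 2.0 * v * mu)
        K = v**3 / 6.0 * I2bar(1.0, v, u) / u**3      # Eq. (A24)
        return (1.0 - mu**2)**2 * K * F(v * x) * F(u * x)
    val, _ = dblquad(integrand, 0.0, v_max, -1.0, 1.0)
    return val

print(F2(4.0))   # near the peak; the text quotes a maximum of about 200
```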
Figure A3: The power spectra of the curvature perturbation for \(\beta=5H_{\text{inf}}\) (blue), \(\beta=10H_{\text{inf}}\) (orange) and \(\beta=20H_{\text{inf}}\) (green). The dotted lines show the contribution from the direct coupling and the dashed lines show the pure gravitational contribution, while the full results are shown by the solid lines. We can see that the solid and dotted lines almost overlap.
Now we consider the asymptotic behaviors of \(\mathcal{F}_{2}\). The \(v\) integral is dominated around \(vx\simeq x_{\rm max}\), i.e. where \(\mathcal{F}(x)\) achieves its maximum value, since the descent of \(\mathcal{F}(x)\) is steeper than that of \(\mathcal{K}(v,\mu)\). For small \(x\), \(v\simeq x_{\rm max}/x\gg 1\), \(\sqrt{v^{2}+1-2v\mu}\simeq v\), and the integral becomes
\[\mathcal{F}_{2}(x)\simeq \mathcal{F}_{\rm max}^{2}\Bigg{\{}\int_{0}^{x_{\rm max}/x}dv3v^{- 4}\log^{2}v\cdot(vx/x_{\rm max})^{6} \tag{103}\] \[+\int_{x_{\rm max}/x}^{\infty}dv3v^{-4}\log^{2}v\cdot(vx/x_{\rm max })^{-12}\Bigg{\}}\] \[\simeq \mathcal{F}_{\rm max}^{2}(x/x_{\rm max})^{3}\log^{2}(x/x_{\rm max }),\]
where the \(\mu\) integral factorizes and gives an \(\mathcal{O}(1)\) number. For large \(x\), \(v\simeq x_{\rm max}/x\ll 1\), \(\sqrt{v^{2}+1-2v\mu}\simeq 1\). Similarly, carrying out the integration we get
\[\mathcal{F}_{2}(x)\simeq \mathcal{F}_{\rm max}^{2}\Bigg{\{}\int_{0}^{x_{\rm max}/x}dvv^{3}/ 3\cdot(vx/x_{\rm max})^{3}(x/x_{\rm max})^{-6} \tag{104}\] \[+\int_{x_{\rm max}/x}^{\infty}dvv^{3}/3\cdot(vx/x_{\rm max})^{-6 }(x/x_{\rm max})^{-6}\Bigg{\}}\] \[\simeq \mathcal{F}_{\rm max}^{2}(x/x_{\rm max})^{-10}.\]
In the parameter space we are interested in, we have \(\mathcal{F}_{\rm max}\simeq\mathcal{O}(10)\), so the peak value of \(\mathcal{F}_{2}(x)\) is roughly \(\mathcal{O}(100)\). The shape of \(\mathcal{F}_{2}(x)\) for \(\beta/H_{\rm inf}=4,5,10,20\) is shown in Fig. A4. We can see a shoulder to the left of the peak, which comes from the contribution of the logarithmic function; this shoulder significantly flattens the slope in that region.
|
2309.08705 | Torsion Vanishing for Some Shimura Varieties | We generalize the torsion vanishing results of Caraiani-Scholze and
Koshikawa. Our results apply to the cohomology of general Shimura varieties
$(\mathbf{G},X)$ of PEL type $A$ or $C$, localized at a suitable maximal ideal
$\mathfrak{m}$ in the spherical Hecke algebra at primes $p$ such that
$\mathbf{G}_{\mathbb{Q}_{p}}$ is a group for which we know the Fargues-Scholze
local Langlands correspondence is the semi-simplification of a suitably nice
local Langlands correspondence. This is accomplished by combining Koshikawa's
technique, the theory of geometric Eisenstein series over the Fargues-Fontaine
curve, the work of Santos describing the structure of the fibers of the
minimally and toroidally compactified Hodge-Tate period morphism for general
PEL type Shimura varieties of type $A$ or $C$, and ideas developed by Zhang on
comparing Hecke correspondences on the moduli stack of $G$-bundles with the
cohomology of Shimura varieties. In the process, we also establish a
description of the generic part of the cohomology that bears resemblance to the
work of Xiao-Zhu. Moreover, we also construct a filtration on the compactly
supported cohomology that differs from Mantovan's filtration in the case that
the Shimura variety is non-compact, allowing us to circumvent some of the
circumlocutions taken by Caraiani-Scholze. Our method showcases a very general
strategy for proving such torsion vanishing results, and should bear even more
fruit once the inputs are generalized. Motivated by this, we formulate an even
more general torsion vanishing conjecture. | Linus Hamann, Si Ying Lee | 2023-09-15T18:52:53Z | http://arxiv.org/abs/2309.08705v3 | # Torsion vanishing for some Shimura varieties
###### Abstract.
We generalize the torsion vanishing results of [13, 14, 15, 16]. Our results apply to the cohomology of general Shimura varieties \((\mathbf{G},X)\) of PEL type \(A\) or \(C\), localized at a suitable maximal ideal \(\mathfrak{m}\) in the spherical Hecke algebra at primes \(p\) such that \(\mathbf{G}_{\mathbb{Q}_{p}}\) is a group for which we know the Fargues-Scholze local Langlands correspondence is the semi-simplification of a suitably nice local Langlands correspondence, as shown in [13, 12, 14, 15]. This is accomplished by combining Koshikawa's technique [16], the theory of geometric Eisenstein series over the Fargues-Fontaine curve [12], the work of Santos [17] describing the structure of the fibers of the minimally and toroidally compactified Hodge-Tate period morphism for general PEL type Shimura varieties of type \(A\) or \(C\), and ideas developed by Zhang [18] on comparing Hecke correspondences on the moduli stack of \(G\)-bundles with the cohomology of Shimura varieties. In the process, we also establish a description of the generic part of the cohomology that bears resemblance to the work of Xiao-Zhu [19]. Moreover, we also construct a filtration on the compactly supported cohomology that differs from Mantovan's filtration in the case that the Shimura variety is non-compact, allowing us to circumvent some of the circumlocutions taken in [13, 14]. Our method showcases a very general strategy for proving such torsion vanishing results, and should bear even more fruit once the inputs are generalized. Motivated by this, we formulate an even more general torsion vanishing conjecture (Conjecture 6.6).
###### Contents
* 1 Introduction
* 2 Preliminaries on Shimura Varieties
* 2.1 Shimura Varieties
* 2.2 Igusa Varieties
* 3 Mantovan's Formula and the Hodge-Tate Period Morphism
* 4 The Local Results
* 4.1 The Spectral Action
* 4.2 Perverse \(t\)-exactness
* 4.3 Verification of additional assumptions
* 5 The Proof of Theorems 1.15 and 1.17
* 5.1 Proof of Theorems 1.15 and 1.17
* 5.2 Proof of Corollary 1.19
* 6 Conjectures and Concluding Remarks
* 6.1 Relationship to Xiao-Zhu
* 6.2 A General Torsion Vanishing Conjecture
* A Spectral Decomposition of Sheaves on \(\mathrm{Bun}_{G}\), by David Hansen
## 1. Introduction
Let \(\mathbf{G}\) be a connected reductive group over \(\mathbb{Q}\) admitting a Shimura datum \((\mathbf{G},X)\), and let \(\mathbb{A}\) (resp. \(\mathbb{A}_{f}\)) denote the adeles (resp. finite adeles) of \(\mathbb{Q}\). Fix a prime number \(p>0\) and let \(G:=\mathbf{G}_{\mathbb{Q}_{p}}\) be the base-change to \(\mathbb{Q}_{p}\). We will assume that \(G\) is unramified so that there exists a hyperspecial subgroup \(K_{p}^{\mathrm{hs}}\subset G(\mathbb{Q}_{p})\) and a Borel \(B\) surjecting onto a maximal torus \(T\) which we now fix. We consider the open compact subgroup \(K:=K^{p}K_{p}^{\mathrm{hs}}\subset\mathbf{G}(\mathbb{A}_{f})\), where \(K^{p}\subset\mathbf{G}(\mathbb{A}_{f}^{p})\) denotes a sufficiently small level away from \(p\). Let \(\mathrm{Sh}(\mathbf{G},X)_{K}\) denote the corresponding Shimura variety defined over the reflex field \(E\). Given a prime \(p\neq\ell\), we will be interested in understanding the \(\ell\)-torsion cohomology groups
\[R\Gamma_{c}(\mathrm{Sh}(\mathbf{G},X)_{K,\overline{E}},\overline{\mathbb{F} }_{\ell})\]
and
\[R\Gamma(\mathrm{Sh}(\mathbf{G},X)_{K,\overline{E}},\overline{\mathbb{F}}_{ \ell}).\]
In particular, since the level at \(p\) is hyperspecial, the unramified Hecke algebra
\[H_{K_{p}^{\mathrm{hs}}}:=\overline{\mathbb{F}}_{\ell}[K_{p}^{\mathrm{hs}} \backslash G(\mathbb{Q}_{p})/K_{p}^{\mathrm{hs}}]\]
will act on these complexes via the right action. Given a maximal ideal \(\mathfrak{m}\subset H_{K_{p}^{\mathrm{hs}}}\), we can localize both of these cohomology groups at \(\mathfrak{m}\). We will be interested in describing this localization. To do this, we recall that, given such a maximal ideal \(\mathfrak{m}\subset H_{K_{p}^{\mathrm{hs}}}\), this defines an unramified \(L\)-parameter
\[\phi_{\mathfrak{m}}:W_{\mathbb{Q}_{p}}\to{}^{L}G(\overline{\mathbb{F}}_{\ell})\]
specified by a semisimple element \(\phi_{\mathfrak{m}}(\mathrm{Frob}_{\mathbb{Q}_{p}})\). In particular, if \(T\) denotes the maximal torus of \(G\) then \(\phi_{\mathfrak{m}}\) is induced from a parameter \(\phi_{\mathfrak{m}}^{T}:W_{\mathbb{Q}_{p}}\to{}^{L}T(\overline{\mathbb{F}}_{ \ell})\subset{}^{L}G(\overline{\mathbb{F}}_{\ell})\) factoring through the \(L\)-group of the maximal torus. Now, recall that the irreducible representations of \({}^{L}T\) correspond to the \(\Gamma\)-orbits \(\mathbb{X}_{*}(T_{\overline{\mathbb{Q}}_{p}})/\Gamma\) of geometric dominant cocharacters of \(G\). We have the following definition.
**Definition 1.1**.: [10, Definition 1.4] Given a toral \(L\)-parameter \(\phi_{T}:W_{\mathbb{Q}_{p}}\to{}^{L}T(\overline{\mathbb{F}}_{\ell})\), we say that \(\phi_{T}\) is generic if, for all \(\alpha\in\mathbb{X}_{*}(T_{\overline{\mathbb{Q}}_{p}})/\Gamma\) corresponding to a \(\Gamma\)-orbit of coroots, we have that the complex \(R\Gamma(W_{\mathbb{Q}_{p}},\alpha\circ\phi_{T})\) is trivial. Similarly, we say that \(\mathfrak{m}\) is generic if \(\phi_{\mathfrak{m}}^{T}\) is a generic toral parameter.
If \(G=\mathrm{GL}_{n}\) then this coincides with the notion of decomposed generic considered in [11, Definition I.9]. We set \(d=\dim(\mathrm{Sh}(\mathbf{G},X)_{K})\). Motivated by [11, Theorem 1.1] and [11, Theorem 1.1], we make the following conjecture.
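To unwind Definition 1.1 in an example (a standard computation with Weil group cohomology, included only for orientation): for an unramified character \(\chi:W_{\mathbb{Q}_{p}}\to\overline{\mathbb{F}}_{\ell}^{\times}\), the complex \(R\Gamma(W_{\mathbb{Q}_{p}},\chi)\) is trivial if and only if \(\chi(\mathrm{Frob})\neq 1\) (which kills \(H^{0}\) and the unramified part of \(H^{1}\)) and \(\chi\) is not the mod \(\ell\) cyclotomic character (which kills \(H^{2}\) and the ramified part of \(H^{1}\)). Since the mod \(\ell\) cyclotomic character sends \(\mathrm{Frob}\) to \(p^{\pm 1}\) (depending on normalization) and the coroots occur in pairs \(\pm\alpha\), for \(G=\mathrm{GL}_{n}\) genericity of \(\mathfrak{m}\) amounts to asking that the Satake parameters \(\alpha_{1},\dots,\alpha_{n}\) of \(\phi_{\mathfrak{m}}(\mathrm{Frob}_{\mathbb{Q}_{p}})\) satisfy
\[\frac{\alpha_{i}}{\alpha_{j}}\notin\{1,p\}\subset\overline{\mathbb{F}}_{\ell}\quad\text{for all }i\neq j,\]
recovering the decomposed generic condition of loc. cit.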
**Conjecture 1.2**.: _Let \((\mathbf{G},X)\) be a Shimura datum such that \(G=\mathbf{G}_{\mathbb{Q}_{p}}\) is unramified and \(K=K_{p}K^{p}\) is a sufficiently small level with \(K_{p}=K_{p}^{\mathrm{hs}}\) hyperspecial. If \(\mathfrak{m}\subset H_{K_{p}^{\mathrm{hs}}}\) is a generic maximal ideal then the cohomology of \(R\Gamma(\mathrm{Sh}(\mathbf{G},X)_{K,\overline{E}},\overline{\mathbb{F}}_{ \ell})_{\mathfrak{m}}\) (resp. \(R\Gamma_{c}(\mathrm{Sh}(\mathbf{G},X)_{K,\overline{E}},\overline{\mathbb{F}}_{ \ell})_{\mathfrak{m}}\)) is concentrated in degrees \(d\leq i\leq 2d\) (resp. \(0\leq i\leq d\))._
We first recall the motivating situation of Caraiani-Scholze [11, 12]. Let \(F/\mathbb{Q}\) be a CM field, and let \((B,*,V,\langle\cdot,\cdot\rangle)\) be a PEL datum with \(B\) a central simple \(F\)-algebra and \(V\) a non-zero finite type left \(B\)-module. Let \((\mathbf{G},X)\) denote the Shimura datum attached to it, where \(\mathbf{G}\) is a connected reductive group over \(\mathbb{Q}\) defined by the \(B\)-linear automorphisms of \(V\) preserving the choice of pairing \(\langle\cdot,\cdot\rangle\). We have the following result.
**Theorem 1.3**.: _[_11, 12_, 13_]_ _Assume that \((\mathbf{G},X)\) is a PEL type Shimura datum of type \(A\). If the prime \(p\) splits completely in \(F\) then Conjecture 1.2 is true._
_Remark 1.4_.: Koshikawa proved this under the assumption that \(B=F\) and \(V=F^{2n}\), and the global unitary group \(\mathbf{G}\) is quasi-split, as well as in the case when \(p\) is split in \(F\) and the Shimura variety is compact. These additional assumptions were removed in the PhD thesis of Santos [12].
_Remark 1.5_.: Caraiani-Scholze actually proved a slightly different result. More precisely, let \(S\) be a set of finite places not containing \(p\) such that \(\mathbf{G}\) is unramified and \(K^{p}\) is hyperspecial away from \(S\). Consider a maximal ideal \(\mathfrak{m}\subset\mathbb{T}^{S}\) in the spherical Hecke algebra such that \(\mathfrak{m}\) is generic at \(p\). Caraiani-Scholze show that the localization at \(\mathfrak{m}^{p}\subset\mathbb{T}^{S\cup\{p\}}\) is concentrated in the relevant degrees.
_Remark 1.6_.: In the case of Harris-Taylor Shimura varieties, there is also work of Boyer [1], which describes the localization at non-generic maximal ideals.
_Remark 1.7_.: We believe that Conjecture 1.2 is true under the weaker hypothesis that \(H^{2}(W_{\mathbb{Q}_{p}},\alpha\circ\phi_{T})\) is trivial for all \(\Gamma\)-orbits of coroots \(\alpha\), as is shown in [10, 11, 12] in their particular case. However, the theory of geometric Eisenstein series which we will invoke in this paper becomes more complicated in this case (See the discussion around [10, Conjecture 1.29]), and so a proof of this stronger statement using our methods would require a deeper understanding of geometric Eisenstein series when this assumption is dropped (cf. Remark 6.8).
Caraiani-Scholze [10, 10] proved their results under some small restrictions, which Koshikawa [12] was able to remove by using compatibility of the Fargues-Scholze local Langlands correspondence with the semi-simplification of the Harris-Taylor correspondence for \(\mathrm{GL}_{n}\). In the process, Koshikawa exhibited a much more flexible method for proving Theorem 1.3. The goal of the current paper is to expand the scope of Koshikawa's technique, motivated by work of the first author on geometric Eisenstein series in the Fargues-Fontaine setting [10]. We then carry the strategy out in some particular cases using work on local-global compatibility of the Fargues-Scholze local Langlands correspondence beyond the case of \(\mathrm{GL}_{n}\).
One of the basic ingredients is the perspective on Mantovan's product formula provided by the Hodge-Tate period morphism. To explain this, we let \(\mu\in\mathbb{X}_{*}(T_{\overline{\mathbb{Q}}_{p}})^{+}\) denote the minuscule geometric dominant cocharacter of \(G\) determined by the Hodge cocharacter of \(X\) and an isomorphism \(j:\mathbb{C}\simeq\overline{\mathbb{Q}}_{p}\) which we fix from now on. We consider the Kottwitz set \(B(G)\) and with it the subset \(B(G,\mu)\subset B(G)\) of \(\mu\)-admissible elements. Let \(\mathfrak{p}|p\) be the prime dividing \(p\) in the reflex field \(E\), induced by the embedding \(\overline{\mathbb{Q}}\to\overline{\mathbb{Q}}_{p}\) given by the isomorphism \(j\). We let \(E_{\mathfrak{p}}\) be the completion at \(\mathfrak{p}\), \(C:=\hat{\overline{E}}_{\mathfrak{p}}\) be the completion of the algebraic closure, and \(\breve{E}_{\mathfrak{p}}\) be the compositum of \(E_{\mathfrak{p}}\) with the completion of the maximal unramified extension of \(\mathbb{Q}_{p}\). We recall that, attached to each element \(b\in B(G,\mu)\), there exists a diamond
\[\mathrm{Sht}(G,b,\mu)_{\infty}\to\mathrm{Spd}(\breve{E}_{\mathfrak{p}})\]
parametrizing modifications
\[\mathcal{E}_{b}\dashrightarrow\mathcal{E}_{0}\]
of meromorphy \(\mu\) between the \(G\)-bundle \(\mathcal{E}_{b}\) corresponding to \(b\) and the trivial \(G\)-bundle. This space has an action by \(G(\mathbb{Q}_{p})=\mathrm{Aut}(\mathcal{E}_{0})\) and \(J_{b}(\mathbb{Q}_{p})\subset\mathrm{Aut}(\mathcal{E}_{b})\), where \(J_{b}\) is the \(\sigma\)-centralizer of \(b\). This allows us to consider the quotients
\[\mathrm{Sht}(G,b,\mu)_{\infty}/\underline{K_{p}}\to\mathrm{Spd}(\breve{E}_{ \mathfrak{p}})\]
for varying compact open subgroups \(K_{p}\subset G(\mathbb{Q}_{p})\). In certain cases, these quotients are representable by rigid analytic varieties called local Shimura varieties, but they are always representable as diamonds. We can consider the compactly supported cohomology
\[R\Gamma_{c}(\mathrm{Sht}(G,b,\mu)_{\infty,C}/\underline{K_{p}^{\mathrm{hs}}},\overline{\mathbb{F}}_{\ell})\]
at hyperspecial level with torsion coefficients. This has an action of \(W_{E_{\mathfrak{p}}}\times J_{b}(\mathbb{Q}_{p})\times H_{K_{p}^{\mathrm{hs}}}\). Now, the Mantovan product formula tells us that if we look at \(R\Gamma(\mathrm{Sh}(\mathbf{G},X)_{K,\overline{E}},\overline{\mathbb{F}}_{ \ell})\) then this should always admit a filtration in the derived category whose graded pieces are
\[R\Gamma_{c}(\mathrm{Sht}(G,b,\mu)_{\infty,C}/\underline{K_{p}^{\mathrm{hs}}},\overline{\mathbb{F}}_{\ell}(d_{b}))[2d_{b}]\otimes^{\mathbb{L}}_{\mathcal{H}(J_{b})}R\Gamma(\mathrm{Ig}^{b},\overline{\mathbb{F}}_{\ell})\]
for varying \(b\in B(G,\mu)\), where the objects are as follows.
1. \(\mathrm{Ig}^{b}\) is the perfect Igusa variety attached to an element \(b\in B(G,\mu)\) in the \(\mu\)-admissible locus inside \(B(G)\) and \(d_{b}:=\dim(\mathrm{Ig}^{b})=\langle 2\rho_{G},\nu_{b}\rangle\), where \(\rho_{G}\) is the half sum of all positive roots and \(\nu_{b}\) is the slope cocharacter of \(b\).
2. \(\mathcal{H}(J_{b}):=C_{c}^{\infty}(J_{b}(\mathbb{Q}_{p}),\overline{\mathbb{F} }_{\ell})\) is the usual smooth Hecke algebra.
3. \(\overline{\mathbb{F}}_{\ell}(d_{b})\) is the sheaf on \(\mathrm{Sht}(G,b,\mu)_{\infty,C}/\underline{K_{p}^{\mathrm{hs}}}\) with trivial Weil group action and \(J_{b}(\mathbb{Q}_{p})\) action as defined in [13, Lemma 7.4].
Such a filtration should always exist, but is not currently proven in general. In the case that the Shimura datum \((\mathbf{G},X)\) is PEL of type \(A\) or \(C\), a modern proof of this result can be found in [13, Theorem 7.1].
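To make these objects concrete, consider the simplest case (a standard example; the labels \(b_{\mathrm{ord}},b_{\mathrm{basic}}\) are ours): the modular curve, where \(G=\mathrm{GL}_{2}\), \(\mu=(1,0)\) and \(d=1\). Then \(B(G,\mu)=\{b_{\mathrm{ord}},b_{\mathrm{basic}}\}\). For the ordinary element we have \(\nu_{b}=(1,0)\), so
\[d_{b}=\langle 2\rho_{G},\nu_{b}\rangle=\langle e_{1}-e_{2},(1,0)\rangle=1,\]
\(J_{b}(\mathbb{Q}_{p})=\mathbb{Q}_{p}^{\times}\times\mathbb{Q}_{p}^{\times}\), and \(\mathrm{Ig}^{b}\) is the perfection of the one-dimensional ordinary Igusa tower. For the basic (supersingular) element we have \(\nu_{b}=(\tfrac{1}{2},\tfrac{1}{2})\), so \(d_{b}=0\), \(J_{b}(\mathbb{Q}_{p})=D^{\times}\) for \(D\) the quaternion division algebra over \(\mathbb{Q}_{p}\), and \(\mathrm{Ig}^{b}\) is a profinite set. The filtration above thus has exactly two graded pieces.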
This filtration on the complex \(R\Gamma(\mathrm{Sh}(\mathbf{G},X)_{K,\overline{E}},\overline{\mathbb{F}}_{\ell})\) allows us to roughly split the verification of Conjecture 1.2 into two parts.
1. Controlling the cohomology of the shtuka spaces \(R\Gamma_{c}(\mathrm{Sht}(G,b,\mu)_{\infty,C}/\underline{K_{p}^{\mathrm{hs}}},\overline{\mathbb{F}}_{\ell}(d_{b}))_{\mathfrak{m}}\).
2. Controlling the cohomology of the Igusa varieties \(R\Gamma(\mathrm{Ig}^{b},\overline{\mathbb{F}}_{\ell})\).
We first discuss point (1). One of the key observations underlying Koshikawa's method was that the cohomology of the space \(\mathrm{Sht}(G,b,\mu)_{\infty}\) computes the action of a Hecke operator \(T_{\mu}\) corresponding to \(\mu\) on \(\mathrm{Bun}_{G}\), the moduli stack of \(G\)-bundles on the Fargues-Fontaine curve. The Hecke operators commute with the action of the excursion algebra on \(\mathrm{Bun}_{G}\), and the action of the excursion algebra on a smooth irreducible representation \(\rho\), viewed as a sheaf on \(\mathrm{Bun}_{G}\), determines the Fargues-Scholze parameter of \(\rho\). It follows that \(R\Gamma_{c}(\mathrm{Sht}(G,b,\mu)_{\infty,C}/\underline{K_{p}^{\mathrm{hs}}},\overline{\mathbb{F}}_{\ell}(d_{b}))_{\mathfrak{m}}\), as a complex of smooth \(J_{b}(\mathbb{Q}_{p})\)-modules, will have irreducible constituents \(\rho\) with Fargues-Scholze parameter \(\phi_{\rho}^{\mathrm{FS}}\) equal to \(\phi_{\mathfrak{m}}\) as conjugacy classes of parameters. When \(\mathbf{G}_{\mathbb{Q}_{p}}=G\) is a product of \(\mathrm{GL}_{n}\)s as in Theorem 1.3 (by the assumption that \(p\) splits in \(F\)), it follows from the work of Hansen-Kaletha-Weinstein [10, Theorem 1.0.3] that the Fargues-Scholze correspondence for \(J_{b}(\mathbb{Q}_{p})\) with rational coefficients agrees with the semi-simplification of the Harris-Taylor correspondence, where we recall that \(J_{b}\) is a product of inner forms of \(\mathrm{GL}_{n}\)s in this case. In particular, using that \(\mathfrak{m}\) is generic, it follows that \(\phi_{\rho}^{\mathrm{FS}}=\phi_{\mathfrak{m}}\) must lift to a \(\overline{\mathbb{Z}}_{\ell}\)-parameter which is also generic in the analogous sense, and genericity implies that the lift cannot come from the semi-simplification of a parameter with non-trivial monodromy. Using this, one can deduce that such a \(\rho\) only exists if the group \(J_{b}\) is quasi-split. In this particular case (\(G\) a product of \(\mathrm{GL}_{n}\)s), this can only happen if \(b\in B(G,\mu)\) corresponds to the ordinary element.
This argument of Koshikawa was formalized and generalized further in work of the first author [10]. In particular, it was noted that, for a general quasi-split \(G\) and \(\mathfrak{m}\) generic, the cohomology \(R\Gamma_{c}(\mathrm{Sht}(G,b,\mu)_{\infty,C}/K_{p}^{\mathrm{hs}},\overline{\mathbb{F}}_{\ell}(d_{b}))_{\mathfrak{m}}\) will only be non-trivial if \(b\in B(G,\mu)_{\mathrm{un}}:=B(G)_{\mathrm{un}}\cap B(G,\mu)\), where \(B(G)_{\mathrm{un}}\) is the set of unramified elements, i.e. those lying in the image of the map \(B(T)\to B(G)\), assuming that the Fargues-Scholze local Langlands correspondence has certain expected properties (Assumption 4.4). These unramified elements will be precisely the elements for which \(J_{b}\) is quasi-split. The set \(B(G,\mu)_{\mathrm{un}}\) corresponds to Weyl group orbits of weights in the representation \(V_{\mu}\) of \(\hat{G}\) restricted to \(\hat{G}^{\Gamma}\). In particular, if \(G\) is split then, since \(\mu\) is minuscule, \(B(G,\mu)_{\mathrm{un}}\) consists of only one element, corresponding to the unique Weyl group orbit of the highest weight. This is the situation occurring in the previous paragraph. Moreover, the contribution of the cohomology of this shtuka space is easily understood, and the problem completely reduces to controlling the cohomology of \(\mathrm{Ig}^{b}\) when \(b\in B(G,\mu)_{\mathrm{un}}\) is the \(\mu\)-ordinary element. However, if \(G\) is not split then the restriction of \(V_{\mu}\) to \(\hat{G}^{\Gamma}\) may have multiple Weyl group orbits of weights. In particular, one needs to control the cohomology groups
\[R\Gamma_{c}(\mathrm{Sht}(G,b,\mu)_{\infty,C}/K_{p}^{\mathrm{hs}},\overline{ \mathbb{F}}_{\ell}(d_{b}))_{\mathfrak{m}}\]
for all possible \(b\in B(G,\mu)_{\text{un}}\). This makes the situation much more complicated; in fact, for non-split \(G\), the basic element could be unramified, and in this case the Igusa variety is just a profinite set, hence the problem of torsion vanishing for the contribution of the basic locus is completely reduced to controlling the generic part of the torsion cohomology of the local shtuka space attached to the basic element.
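To illustrate this phenomenon (a standard example, with labels ours; cf. the Hilbert modular case of [15]): take \(G=\mathrm{Res}_{L/\mathbb{Q}_{p}}(\mathrm{GL}_{2})\) with \(L/\mathbb{Q}_{p}\) unramified quadratic, so that \(\hat{G}=\mathrm{GL}_{2}\times\mathrm{GL}_{2}\) with \(\Gamma\) acting through the swap of the two factors, \(\hat{G}^{\Gamma}=\mathrm{GL}_{2}\) diagonally embedded, and \(V_{\mu}=\mathrm{std}\boxtimes\mathrm{std}\) for the relevant \(\mu\). The weights of
\[V_{\mu}|_{\hat{G}^{\Gamma}}\simeq\mathrm{std}\otimes\mathrm{std}\]
are \((2,0),(1,1),(1,1),(0,2)\), which form two Weyl group orbits, so \(B(G,\mu)_{\mathrm{un}}\) has two elements: the \(\mu\)-ordinary element and the basic element. In particular, the basic locus contributes, its Igusa variety is profinite, and the torsion vanishing problem for this contribution reduces to controlling the generic part of the cohomology of the basic local shtuka space, exactly as described above.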
Such control of the cohomology of shtuka spaces with torsion coefficients for these more general situations was attained in [1]. In order to understand this, it is helpful to move away from the language of isotypic parts of shtuka spaces and consider the action of Hecke operators on \(\operatorname{D}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell})\), the derived category of etale \(\overline{\mathbb{F}}_{\ell}\)-sheaves on \(\operatorname{Bun}_{G}\). Since we are interested in cohomology localized at a generic maximal ideal \(\mathfrak{m}\), we construct in Appendix A a full subcategory \(\operatorname{D}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell})_{\phi_{\mathfrak{m}}}\subset\operatorname{D}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell})\) together with an idempotent localization map \((-)_{\phi_{\mathfrak{m}}}:\operatorname{D}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell})\rightarrow\operatorname{D}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell})_{\phi_{\mathfrak{m}}}\) such that, on smooth irreducible representations, the localization map is either an isomorphism or \(0\) depending on whether the representation has Fargues-Scholze parameter conjugate to \(\phi_{\mathfrak{m}}\) or not (Lemma 4.2 (1)). We let \(\operatorname{D}^{\operatorname{ULA}}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell})\) denote the full subcategory of ULA objects, where we recall by [13, Theorem V.7.1] that this is equivalent to insisting that the restrictions to all the HN-strata indexed by \(b\in B(G)\) are valued in the full subcategories \(\operatorname{D}^{\operatorname{adm}}(J_{b}(\mathbb{Q}_{p}),\overline{\mathbb{F}}_{\ell})\) of admissible complexes (i.e. the invariants under every open compact \(K\subset J_{b}(\mathbb{Q}_{p})\) form a perfect complex). Using the results of [1], we show, under various technical hypotheses including the genericity of \(\mathfrak{m}\), that one has a direct sum decomposition:
\[\operatorname{D}^{\operatorname{ULA}}(\operatorname{Bun}_{G},\overline{ \mathbb{F}}_{\ell})_{\phi_{\mathfrak{m}}}\simeq\bigoplus_{b\in B(G)_{\text{un} }}\operatorname{D}^{\operatorname{adm}}(J_{b}(\mathbb{Q}_{p}),\overline{ \mathbb{F}}_{\ell})_{\phi_{\mathfrak{m}}}.\]
More precisely, we show that the \(!\) and \(*\) push-forwards with respect to the inclusion of HN-strata agree on this sub-category, and so the excision semi-orthogonal decomposition splits on \(\operatorname{D}^{\operatorname{ULA}}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell})_{\phi_{\mathfrak{m}}}\). This decomposition is a refinement of the fact mentioned above that only the shtuka spaces corresponding to the unramified elements \(b\in B(G,\mu)_{\mathrm{un}}\) can contribute to the generic localization of the cohomology of the Shimura variety. The desired control of the shtuka spaces is now in turn encoded in understanding how Hecke operators interact with a perverse \(t\)-structure on \(\operatorname{Bun}_{G}\) after restricting to the localized category \(\operatorname{D}^{\operatorname{ULA}}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell})_{\phi_{\mathfrak{m}}}\).
We recall \(\operatorname{D}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell})\) has an action by Hecke operators. In particular, for each geometric dominant cocharacter \(\mu\), we have a correspondence
where \(\operatorname{Hck}_{G,\leq\mu}\) is the stack parametrizing modifications \(\mathcal{E}_{1}\rightarrow\mathcal{E}_{2}\) of a pair of \(G\)-bundles with meromorphy bounded by \(\mu\) at the closed Cartier divisor defined by the fixed untilt given by \(C\), and \(h_{\mu}^{\rightarrow}\) (resp. \(h_{\mu}^{\leftarrow}\)) remembers \(\mathcal{E}_{1}\) (resp. \(\mathcal{E}_{2}\)). We define
\[T_{\mu}:\operatorname{D}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell}) \rightarrow\operatorname{D}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell })^{BW_{E_{\mu}}}\]
\[A\mapsto h_{\mu*}^{\rightarrow}(h_{\mu}^{\leftarrow*}(A)\otimes^{\mathbb{L} }\mathcal{S}_{\mu})\]
where \(E_{\mu}\) denotes the reflex field of \(\mu\) and \(\mathcal{S}_{\mu}\) is the sheaf on \(\operatorname{Hck}_{G,\leq\mu}\) attached to the highest weight tilting module \(\mathcal{T}_{\mu}\in\operatorname{Rep}_{\overline{\mathbb{F}}_{\ell}}(\hat{G})\) of highest weight \(\mu\) by geometric Satake. The action of Hecke operators commutes with the action of excursion operators and therefore the action of the spectral Bernstein center. Moreover, it preserves the subcategory of ULA objects. It follows that we have an induced map
\[T_{\mu}:\operatorname{D}^{\operatorname{ULA}}(\operatorname{Bun}_{G},\overline {\mathbb{F}}_{\ell})_{\phi_{\mathfrak{m}}}\rightarrow\operatorname{D}^{ \operatorname{ULA}}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell})_{ \phi_{\mathfrak{m}}}^{BW_{E_{\mu}}}\]
on the localized category (See Lemma 4.2 (2)).
We are almost ready to state the result on Hecke operators we will need. To do this, we recall that \(\mathrm{D}(\mathrm{Bun}_{G},\overline{\mathbb{F}}_{\ell})\) has a natural perverse \(t\)-structure, which can be defined as follows. The \(v\)-stack \(\mathrm{Bun}_{G}\) is cohomologically smooth of \(\ell\)-dimension \(0\). Moreover, each one of the HN-strata \(\mathrm{Bun}_{G}^{b}\) is isomorphic to \([*/\mathcal{J}_{b}]\), which is cohomologically smooth of \(\ell\)-dimension \(-d_{b}=-\mathrm{dim}(\mathrm{Ig}^{b})\). Therefore, we can define a perverse \(t\)-structure \({}^{\mathrm{p}}\mathrm{D}^{\geq 0}(\mathrm{Bun}_{G},\overline{\mathbb{F}}_{ \ell})\) (resp. \({}^{\mathrm{p}}\mathrm{D}^{\leq 0}(\mathrm{Bun}_{G},\overline{\mathbb{F}}_{\ell})\)) on \(\mathrm{D}(\mathrm{Bun}_{G},\overline{\mathbb{F}}_{\ell})\) given by insisting that the \(!\) (resp. \(*\)) restrictions to \(\mathrm{Bun}_{G}^{b}\) are concentrated in degrees \(\geq\langle 2\rho_{G},\nu_{b}\rangle\) (resp. \(\leq\langle 2\rho_{G},\nu_{b}\rangle\)). The key result that follows from the work of [1] and various compatibility results for the Fargues-Scholze correspondence is as follows.
**Theorem 1.8**.: _(Corollary 4.24) Let \(\mu\) be a minuscule geometric dominant cocharacter and \(G\) a product of groups satisfying the conditions of Table (1) with \(p\) and \(\ell\) satisfying the corresponding conditions. Then if \(\mathfrak{m}\) is generic the restriction of the Hecke operator_
\[j_{1}^{*}T_{\mu}:\mathrm{D}^{\mathrm{ULA}}(\mathrm{Bun}_{G},\overline{\mathbb{ F}}_{\ell})_{\phi_{\mathfrak{m}}}\to\mathrm{D}^{\mathrm{adm}}(G(\mathbb{Q}_{p}), \overline{\mathbb{F}}_{\ell})_{\phi_{\mathfrak{m}}}^{BW_{E_{\mu}}}\]
_is perverse \(t\)-exact. In particular, it induces maps_
\[j_{1}^{*}T_{\mu}:{}^{\mathrm{p}}\mathrm{D}^{\mathrm{ULA},\geq 0}(\mathrm{Bun}_{G },\overline{\mathbb{F}}_{\ell})_{\phi_{\mathfrak{m}}}\to\mathrm{D}^{\mathrm{adm },\geq 0}(G(\mathbb{Q}_{p}),\overline{\mathbb{F}}_{\ell})_{\phi_{\mathfrak{m}}}^{ BW_{E_{\mu}}}\]
_and_
\[j_{1}^{*}T_{\mu}:{}^{\mathrm{p}}\mathrm{D}^{\mathrm{ULA},\leq 0}(\mathrm{Bun}_{G },\overline{\mathbb{F}}_{\ell})_{\phi_{\mathfrak{m}}}\to\mathrm{D}^{\mathrm{adm },\leq 0}(G(\mathbb{Q}_{p}),\overline{\mathbb{F}}_{\ell})_{\phi_{\mathfrak{m}}}^{ BW_{E_{\mu}}}\]
_on the halves of the perverse \(t\)-structure, where we note that the perverse \(t\)-structure on \(\mathrm{D}(\mathrm{Bun}_{G}^{1},\overline{\mathbb{F}}_{\ell})\simeq\mathrm{D}(G (\mathbb{Q}_{p}),\overline{\mathbb{F}}_{\ell})\) coincides with the usual \(t\)-structure._
Here is the table summarizing our local constraints:
\[\begin{array}{|c|c|c|c|}\hline G&\text{Constraint on }G&\ell&p\\ \hline\mathrm{Res}_{L/\mathbb{Q}_{p}}(\mathrm{GL}_{n})&L/\mathbb{Q}_{p}\text{ unramified}&(\ell,[L:\mathbb{Q}_{p}])=1&\\ \hline\mathrm{Res}_{L/\mathbb{Q}_{p}}(\mathrm{GSp}_{4})&L=\mathbb{Q}_{p}&(\ell,2(p^{4}-1))=1&\\ &L/\mathbb{Q}_{p}\text{ unramified}&(\ell,2[L:\mathbb{Q}_{p}](p^{4[L:\mathbb{Q}_{p}]}-1))=1&p\neq 2\\ \hline\mathrm{Res}_{L/\mathbb{Q}_{p}}(\mathrm{GU}_{2})&L/\mathbb{Q}_{p}\text{ unramified}&(\ell,[L:\mathbb{Q}_{p}])=1&\\ \hline\mathrm{U}_{n}(L/\mathbb{Q}_{p})&n\text{ odd, }L\text{ unramified}&\ell\neq 2&\\ \hline\mathrm{GU}_{n}(L/\mathbb{Q}_{p})&n\text{ odd, }L\text{ unramified}&\ell\neq 2&\\ \hline G(\mathrm{SL}_{2,L})&L/\mathbb{Q}_{p}\text{ unramified}&(\ell,[L:\mathbb{Q}_{p}])=1&\\ \hline G(\mathrm{Sp}_{4,L})&L/\mathbb{Q}_{p}\text{ unramified, }L\neq\mathbb{Q}_{p}&(\ell,2[L:\mathbb{Q}_{p}](p^{4[L:\mathbb{Q}_{p}]}-1))=1&p\neq 2\\ \hline\end{array} \tag{1}\]
The groups \(G(\mathrm{SL}_{2,L})\) and \(G(\mathrm{Sp}_{4,L})\) are the similitude subgroups of \(\mathrm{Res}_{L/\mathbb{Q}_{p}}(\mathrm{GL}_{2})\) (resp. \(\mathrm{Res}_{L/\mathbb{Q}_{p}}(\mathrm{GSp}_{4})\)), i.e. the subgroups of elements whose similitude factor lies in \(\mathbb{Q}_{p}\). We will recall the definition of these groups in §4.3.
_Remark 1.9_.: Assuming the Fargues-Scholze correspondence for \(G\) behaves as expected with rational coefficients, the analysis in [1] allows one to verify this for any \(\mu\) after imposing some additional conditions on the toral parameter \(\phi_{\mathfrak{m}}^{T}\) attached to the maximal ideal \(\mathfrak{m}\) ([1, Condition/Definition 3.6]). However, for the groups considered, we will see that these additional conditions are superfluous and all one needs is genericity, except in the case where \(G=\mathrm{Res}_{L/\mathbb{Q}_{p}}(\mathrm{GSp}_{4})\) or \(G=G(\mathrm{Sp}_{4,L})\) with \(L/\mathbb{Q}_{p}\) non-trivial, where we need an extra banality assumption on the prime \(\ell\). It is conjectured ([1, Conjecture 1.27]) that the results used to establish this theorem should always be true just under the condition that \(\mathfrak{m}\) is generic.
_Remark 1.10_.: We should warn the reader that some of the results of [1], and in particular this consequence, are currently contingent on the proof of some ULA properties of sheaves on the moduli stack of \(B\)-structures [1, Assumption 8.1]. However, this will appear in forthcoming work on geometric Eisenstein series [1].
These local torsion vanishing results would allow us to prove Conjecture 1.2 in several new cases if one could get control over the Igusa varieties \(\mathrm{Ig}^{b}\). In Koshikawa's argument, this is done by using a semi-perversity result proven by Caraiani-Scholze [14, Theorem 4.6.1], which was further generalized in work of Santos [13]. Roughly speaking, we want to show that \(R\Gamma(\mathrm{Ig}^{b},\overline{\mathbb{F}}_{\ell})\) is concentrated in degrees \(\geq d_{b}\), so that the complex of \(J_{b}(\mathbb{Q}_{p})\)-representations \(R\Gamma(\mathrm{Ig}^{b},\overline{\mathbb{F}}_{\ell})\) defines the stalk of a semi-perverse sheaf on \(\mathrm{Bun}_{G}\) at \(b\in B(G)\), to which we can apply the previous result. In the case that the Shimura varieties \(\mathrm{Sh}(\mathbf{G},X)_{K}\) are compact, there is a simpler way of seeing this. In particular, \(\mathrm{Ig}^{b}\) is known to be a perfect affine scheme in this case, and so the desired semi-perversity just follows by applying Artin vanishing and then using Poincaré duality on the global Shimura variety. It turns out that this style of argument can be made to work even in the non-compact case. In [14, 15, 16], the non-compactly supported cohomology \(R\Gamma(\mathrm{Sh}(\mathbf{G},X)_{K},\overline{\mathbb{F}}_{\ell})_{\mathfrak{m}}\) is studied together with its filtration involving \(R\Gamma(\mathrm{Ig}^{b},\overline{\mathbb{F}}_{\ell})\) coming from Mantovan's formula, and shown to be concentrated in degrees \(\geq d\). However, one could also study the compactly supported cohomology \(R\Gamma_{c}(\mathrm{Sh}(\mathbf{G},X)_{K},\overline{\mathbb{F}}_{\ell})_{\mathfrak{m}}\) and show that it is concentrated in degrees \(\leq d\), à la Poincaré duality. To do this, we recall [14, Section 3.3] that, in the non-compact case, the perfect scheme \(\mathrm{Ig}^{b}\) is not affine, but it admits a partial minimal compactification \(g_{b}:\mathrm{Ig}^{b}\hookrightarrow\mathrm{Ig}^{b,*}\) which is affine, as proven in this more general setting of PEL type \(A\) or \(C\) by Santos [13]. We define
\[V_{b}:=R\Gamma_{c-\partial}(\mathrm{Ig}^{b},\overline{\mathbb{F}}_{\ell}):=R \Gamma(\mathrm{Ig}^{b,*},g_{b!}(\overline{\mathbb{F}}_{\ell}))\]
the partially compactly supported cohomology, which is supported in degrees \(\leq d_{b}\) by Artin vanishing (Proposition 3.7). Now, for \(K\subset\mathbf{G}(\mathbb{A}_{f})\) a sufficiently small open compact, we define \(\mathcal{S}(\mathbf{G},X)_{K}:=(\mathrm{Sh}(\mathbf{G},X)_{K}\otimes_{E}E_{\mathfrak{p}})^{\mathrm{ad}}\) to be the adic space over \(\mathrm{Spa}(E_{\mathfrak{p}})\) attached to the Shimura variety. We can define the infinite level perfectoid Shimura varieties \(\mathcal{S}(\mathbf{G},X)_{K^{p}}\) by taking the inverse limit of \(\mathcal{S}(\mathbf{G},X)_{K^{p}K_{p}}\) as \(K_{p}\to\{1\}\). The base-change \(\mathcal{S}(\mathbf{G},X)_{K^{p},C}\) is representable by a perfectoid space if \((\mathbf{G},X)\) is of pre-abelian type, and in general it is a diamond. By the results of [14, 15], we have a Hodge-Tate period map
\[\pi_{\mathrm{HT}}:[\mathcal{S}(\mathbf{G},X)_{K^{p},C}/\underline{G(\mathbb{Q} _{p})}]\to[\mathcal{F}\ell_{G,\mu^{-1}}/\underline{G(\mathbb{Q}_{p})}]\]
recording the Hodge-Tate filtration on the abelian varieties with additional structure that \(\mathcal{S}(\mathbf{G},X)_{K^{p},C}\) parametrizes. Here \(\mathcal{F}\ell_{G,\mu^{-1}}:=(G_{C}/P_{\mu^{-1}})^{\mathrm{ad}}\) is the adic flag variety attached to the parabolic in \(G_{C}\) given by a dominant inverse of \(\mu\) and the dynamical method. We recall that the flag variety \([\mathcal{F}\ell_{G,\mu^{-1}}/\underline{G(\mathbb{Q}_{p})}]\) admits a locally closed stratification \(i_{b}:[\mathcal{F}\ell_{G,\mu^{-1}}^{b}/\underline{G(\mathbb{Q}_{p})}]\hookrightarrow[\mathcal{F}\ell_{G,\mu^{-1}}/\underline{G(\mathbb{Q}_{p})}]\) indexed by \(b\in B(G,\mu)\), given by pulling back the HN-stratification along the natural map \(h^{\leftarrow}:[\mathcal{F}\ell_{G,\mu^{-1}}/\underline{G(\mathbb{Q}_{p})}]\to\mathrm{Bun}_{G}\). We will now impose the following very mild assumption in what follows.
**Assumption 1.11**.: _Write \(\partial\mathrm{Ig}^{b,*}\subset\mathrm{Ig}^{b,*}\) for the closed complement of \(\mathrm{Ig}^{b}\) in \(\mathrm{Ig}^{b,*}\). We assume that \((\mathbf{G},X)\) is a PEL datum of type \(A\) or \(C\) such that, for all \(b\in B(G,\mu)\), the perfect scheme \(\partial\mathrm{Ig}^{b,*}\) is empty or has codimension in \(\mathrm{Ig}^{b,*}\) greater than \(2\)._
_Remark 1.12_.: If \(\mathbf{G}\) is simple then it is easy to show that this assumption will be satisfied if \(\dim(\mathcal{S}(\mathbf{G},X)_{K^{p},C})\geq 2\), by using that the boundary of the partially minimally compactified Igusa varieties is expressible in terms of the Igusa varieties of Shimura varieties attached to Levis of \(\mathbb{Q}\)-rational parabolics of \(\mathbf{G}\), as we will explain in §2.2.1. Moreover, if \(\mathcal{S}(\mathbf{G},X)_{K^{p},C}\) is compact then it is automatic that \(\partial\mathrm{Ig}^{b,*}\) is empty. Therefore, if \(\mathbf{G}\) is simple, this excludes only the cases where \(\dim(\mathcal{S}(\mathbf{G},X)_{K^{p},C})=1\) and \(\mathcal{S}(\mathbf{G},X)_{K^{p},C}\) is non-compact. There are two possibilities: either \((\mathbf{G},X)\) is the Shimura datum attached to the modular curve, or it is the Shimura datum attached to the unitary Shimura curve (See [15, Proposition 1.9]). In the latter case, the connected components are given by modular curves. In these cases, the results of [16] are sufficient to prove Conjecture 1.2.
Now, assuming this, one can show that the stalk of \(R\pi_{\mathrm{HT}!}(\overline{\mathbb{F}}_{\ell})\) at a geometric point \(x:\mathrm{Spa}(C,C^{+})\to\mathcal{F}\ell_{G,\mu^{-1}}\) which lies in the adic Newton stratum \(\mathcal{F}\ell_{G,\mu^{-1}}^{b}\) is given by \(V_{b}\). Moreover, if we write \(h_{b}^{\leftarrow}:[\mathcal{F}\ell_{G,\mu^{-1}}^{b}/\underline{G(\mathbb{Q}_{p})}]\to[\mathrm{Spd}(C)/\mathcal{J}_{b}]\simeq\mathrm{Bun}_{G}^{b}\) for the pullback of \(h^{\leftarrow}\) to \(\mathrm{Bun}_{G}^{b}\) then one can deduce that the complex \(i_{b!}i_{b}^{*}R\pi_{\mathrm{HT}!}(\overline{\mathbb{F}}_{\ell})\) is isomorphic to \(h^{\leftarrow*}j_{b!}(V_{b})\). Therefore, by excision, we deduce that the complex of \(G(\mathbb{Q}_{p})\times W_{E_{\mathfrak{p}}}\)-representations
\[h_{*}^{\rightarrow}R\pi_{\mathrm{HT}!}(\overline{\mathbb{F}}_{\ell})\simeq R\Gamma_{c}(\mathcal{S}_{K^{p},C},\overline{\mathbb{F}}_{\ell})\simeq\mathrm{colim}_{K_{p}\to\{1\}}\,R\Gamma_{c}(\mathcal{S}_{K^{p}K_{p},C},\overline{\mathbb{F}}_{\ell})\simeq\mathrm{colim}_{K_{p}\to\{1\}}\,R\Gamma_{c}(\mathrm{Sh}(\mathbf{G},X)_{K^{p}K_{p},C},\overline{\mathbb{F}}_{\ell})\]
has a filtration with graded pieces isomorphic to \(h_{*}^{\rightarrow}h^{\leftarrow*}(j_{b!}(V_{b}))\) for varying \(b\in B(G,\mu)\), where \(h^{\rightarrow}:[\mathcal{F}\ell_{G,\mu^{-1}}/\underline{G(\mathbb{Q}_{p})}] \to[\mathrm{Spd}(C)/\underline{G(\mathbb{Q}_{p})}]\) is the structure map quotiented by \(G(\mathbb{Q}_{p})\). Here the second isomorphism follows since taking compactly supported cohomology respects taking limits of spaces, and the third isomorphism is a standard comparison result due to Huber [14, Theorem 3.5.13].
Now, via the Bialynicki-Birula isomorphism, the flag variety \([\mathcal{F}\ell_{G,\mu^{-1}}/G(\mathbb{Q}_{p})]\) identifies with an open substack of \(\mathrm{Hck}_{G,\leq\mu}\) for the fixed minuscule \(\mu\). In particular, under this relationship the maps \(h_{\mu}^{\rightarrow}\) and \(h_{\mu}^{\leftarrow}\) identify with \(h^{\rightarrow}\) and \(h^{\leftarrow}\), and therefore we can relate the graded pieces of the excision filtration to Hecke operators. We write
\[R\Gamma_{c}(G,b,\mu):=\mathrm{colim}_{K_{p}\to\{1\}}\,R\Gamma_{c}(\mathrm{ Sht}(G,b,\mu)/\underline{K_{p}},\overline{\mathbb{F}}_{\ell}(d_{b}))\]
for the complex of \(G(\mathbb{Q}_{p})\times J_{b}(\mathbb{Q}_{p})\times W_{E_{\mathfrak{p}}}\)-modules defined by the compactly supported cohomology of this tower. Here \(\overline{\mathbb{F}}_{\ell}(d_{b})\) is the sheaf with \(J_{b}(\mathbb{Q}_{p})\)-action defined as in [11, Lemma 7.4].
We deduce the following variant of the Mantovan product formula for the compactly supported cohomology.
**Theorem 1.13**.: _The complex \(R\Gamma_{c}(\mathcal{S}_{K^{p},C},\overline{\mathbb{F}}_{\ell})\) has a filtration as a complex of \(G(\mathbb{Q}_{p})\times W_{E_{\mathfrak{p}}}\)-representations with graded pieces isomorphic to \(j_{1}^{*}(T_{\mu}j_{b!}(V_{b}))[-d](-\frac{d}{2})\). More specifically, the graded pieces are isomorphic to_
\[(R\Gamma_{c}(G,b,\mu)\otimes^{\mathbb{L}}_{\mathcal{H}(J_{b})}V_{b})[2d_{b}].\]
_as \(G(\mathbb{Q}_{p})\times W_{E_{\mathfrak{p}}}\)-modules._
_Remark 1.14_.: When the Shimura variety is compact, we have that \(R\Gamma_{c-\partial}(\mathrm{I}\mathrm{g}^{b},\overline{\mathbb{F}}_{\ell}) \simeq R\Gamma(\mathrm{I}\mathrm{g}^{b},\overline{\mathbb{F}}_{\ell})\), and this recovers precisely [11, Theorem 7.1].
We now apply our localization functor \((-)_{\phi_{\mathfrak{m}}}:\mathrm{D}(\mathrm{Bun}_{G})\to\mathrm{D}(\mathrm{Bun}_{G})_{\phi_{\mathfrak{m}}}\) for a generic maximal ideal \(\mathfrak{m}\) to get a complex \(R\Gamma_{c}(\mathcal{S}_{K^{p},C},\overline{\mathbb{F}}_{\ell})_{\phi_{\mathfrak{m}}}\in\mathrm{D}^{\mathrm{ULA}}(\mathrm{Bun}_{G},\overline{\mathbb{F}}_{\ell})_{\phi_{\mathfrak{m}}}\), which we view as a sheaf on \(\mathrm{Bun}_{G}\) by \(!\)-extending along the neutral stratum. After applying \(R\Gamma(K_{p}^{\mathrm{hs}},-)\), this agrees with \(R\Gamma_{c}(\mathcal{S}_{K^{p}K_{p}^{\mathrm{hs}},C},\overline{\mathbb{F}}_{\ell})_{\mathfrak{m}}\), the usual localization under the unramified Hecke algebra, which is the object we want to study. This in turn admits a filtration by \(R\Gamma(K_{p}^{\mathrm{hs}},(j_{1}^{*}T_{\mu}j_{b!}(V_{b}))_{\phi_{\mathfrak{m}}})[-d](-\frac{d}{2})\). However, we now know, by the direct sum decomposition of \(\mathrm{D}^{\mathrm{ULA}}(\mathrm{Bun}_{G},\overline{\mathbb{F}}_{\ell})_{\phi_{\mathfrak{m}}}\) described above, that the natural map \(j_{b!}(V_{b})\to j_{b*}(V_{b})\) is an isomorphism after applying \((-)_{\phi_{\mathfrak{m}}}\). Moreover, one only has interesting contributions coming from the unramified elements \(B(G,\mu)_{\mathrm{un}}\). In particular, we can deduce the following Corollary.
**Theorem 1.15**.: _Suppose \((\mathbf{G},X)\) is a PEL datum of type \(A\) or \(C\) such that \(\mathbf{G}_{\mathbb{Q}_{p}}\) is a product of simple groups as in Table (1), with \(p\) and \(\ell\) satisfying the corresponding conditions. Then the complex \(R\Gamma_{c}(\mathcal{S}_{K,C},\overline{\mathbb{F}}_{\ell})_{\mathfrak{m}}\simeq R\Gamma_{c}(\mathrm{Sh}(\mathbf{G},X)_{K,C},\overline{\mathbb{F}}_{\ell})_{\mathfrak{m}}\) breaks up as a direct sum_
\[\bigoplus_{b\in B(G,\mu)_{\mathrm{un}}}\left(R\Gamma_{c}(\mathrm{Sht}(G,b,\mu)_{\infty,C}/\underline{K_{p}^{\mathrm{hs}}},\overline{\mathbb{F}}_{\ell}(d_{b}))_{\mathfrak{m}}\otimes^{\mathbb{L}}_{\mathcal{H}(J_{b})}R\Gamma_{c-\partial}(\mathrm{Ig}^{b},\overline{\mathbb{F}}_{\ell})\right)[2d_{b}]\]
_of \(H_{K_{p}^{\mathrm{hs}}}\times W_{E_{\mathfrak{p}}}\)-modules._
_Remark 1.16_.: As we will explain more in §6.1, in the case that the unique basic element \(b\in B(G,\mu)\) is unramified, the contribution of the corresponding summand to middle degree cohomology serves as a generic fiber analogue of the description of the middle degree cohomology on the special fiber of the integral model at hyperspecial level, as provided in [13, Theorem 1.1.4].
As a consequence, we deduce our main theorem by combining Theorem 1.8 with the fact, provided by Artin vanishing, that \(R\Gamma_{c-\partial}(\mathrm{Ig}^{b},\overline{\mathbb{F}}_{\ell})\in\mathrm{D}^{\leq d_{b}}(J_{b}(\mathbb{Q}_{p}),\overline{\mathbb{F}}_{\ell})\).
**Theorem 1.17**.: _Suppose \((\mathbf{G},X)\) is a PEL datum of type \(A\) or \(C\) such that \(\mathbf{G}_{\mathbb{Q}_{p}}\) is a product of simple groups as in Table (1), with \(p\) and \(\ell\) satisfying the corresponding conditions. Then Conjecture 1.2 is true._
_Remark 1.18_.: This notably allows one to relax the assumption in [11, 12, 13, 14] that the prime \(p\) splits in \(F\), answering a question of Caraiani.
We can also easily deduce the result for some abelian type Shimura varieties, such as Hilbert modular varieties, from the above result, which recovers work of Caraiani-Tamiozzo [15] (See Corollary 5.3).
**Corollary 1.19**.: _Suppose \((\mathbf{G},X)\) is an abelian-type Shimura datum which has an associated PEL-type datum \((\mathbf{G}_{1},X_{1})\) of type \(A\) or \(C\) such that \(\mathbf{G}_{1,\mathbb{Q}_{p}}\) is a product of simple groups as in Table (1) with \(p\) and \(\ell\) satisfying the corresponding conditions. Then Conjecture 1.2 is true._
## Acknowledgements
We would like to thank Ana Caraiani, Jean-François Dat, David Hansen, Naoki Imai, Teruhisa Koshikawa, and Chris Skinner for helpful discussions pertaining to this work. Special thanks go to Mafalda Santos for sharing with us the results of her thesis, Peter Scholze for encouraging us to avoid working with the good reduction locus by directly describing the fibers of the Hodge-Tate period morphism, Matteo Tamiozzo for comments and corrections on an earlier draft, as well as suggestions for the arguments in §5.2, and Mingjia Zhang for very helpful discussions and filling in several gaps in the arguments used in §3, as well as several comments and corrections. This project was carried out while the second author was at the Max Planck Institute for Mathematics in Bonn and she thanks them for their hospitality and financial support.
## Notation
* Fix distinct primes \(\ell\neq p\).
* We write \(\mathbb{Q}_{p}\) for the \(p\)-adic numbers, and \(\breve{\mathbb{Q}}_{p}\) for the completion of the maximal unramified extension with Frobenius \(\sigma\).
* We let \(\overline{\mathbb{F}}_{\ell}\) denote the algebraic closure of the finite field \(\mathbb{F}_{\ell}\). We fix a choice of square root of \(p\) in \(\overline{\mathbb{F}}_{\ell}\) and define all half Tate twists and square roots of the norm character with respect to this choice.
* For \(L/\mathbb{Q}_{p}\) a finite extension, we write \(\breve{L}:=L\breve{\mathbb{Q}}_{p}\) for the compositum of \(L\) with the maximal unramified extension and \(W_{L}\) for the Weil group of \(L\). We let \(\mathrm{WD}_{L}:=W_{L}\times\mathrm{SL}(2,\overline{\mathbb{Q}}_{\ell})\) denote the Weil-Deligne group of \(L\).
* We let \(\mathbb{A}\) (resp. \(\mathbb{A}_{f}\)) denote the adeles (resp. finite adeles) over \(\mathbb{Q}\).
* A pair \((\mathbf{G},X)\) will denote a Shimura datum. We will use \(E\) to denote the reflex field. For \(K\subset\mathbf{G}(\mathbb{A}_{f})\) a sufficiently small open compact, we write \(\mathrm{Sh}(\mathbf{G},X)_{K}\to\mathrm{Spec}\,E\) for the attached Shimura variety of level \(K\).
* We fix an isomorphism \(j:\overline{\mathbb{Q}}_{p}\xrightarrow{\simeq}\mathbb{C}\). The induced embedding \(\overline{\mathbb{Q}}\to\overline{\mathbb{Q}}_{p}\) gives a finite place \(\mathfrak{p}\) of \(E\). We write \(E_{\mathfrak{p}}\) for the completion at \(\mathfrak{p}\).
* We let \(C:=\hat{\overline{E}}_{\mathfrak{p}}\) be the completed algebraic closure of \(E_{\mathfrak{p}}\).
* We use the symbol \(G\) to always denote a connected reductive group over \(\mathbb{Q}_{p}\), usually taken to be \(\mathbf{G}_{\mathbb{Q}_{p}}\). We will always assume that \(G\) is quasi-split with a fixed choice \(T\subset B\subset G\) of maximal torus and Borel, respectively.
* If \(G\) is unramified then we let \(K_{p}^{\mathrm{hs}}\subset G(\mathbb{Q}_{p})\) be a choice of hyperspecial subgroup. We set \(H_{K_{p}^{\mathrm{hs}}}:=\overline{\mathbb{F}}_{\ell}[K_{p}^{\mathrm{hs}} \backslash G(\mathbb{Q}_{p})/K_{p}^{\mathrm{hs}}]\) to be the unramified Hecke algebra with \(\overline{\mathbb{F}}_{\ell}\)-coefficients.
* We let \(\mathbb{X}_{*}(T_{\overline{\mathbb{Q}}_{p}})^{+}\) denote the set of geometric dominant cocharacters of \(G\) and let \(\mathbb{X}_{*}(T_{\overline{\mathbb{Q}}_{p}})^{+}/\Gamma\) denote the set of Galois orbits, where \(\Gamma:=\mathrm{Gal}(\overline{\mathbb{Q}}_{p}/\mathbb{Q}_{p})\).
* Let \(B(G):=G(\breve{\mathbb{Q}}_{p})/(g\sim hg\sigma(h)^{-1})\) denote the Kottwitz set of \(G\).
* For \(b\in B(G)\), we write \(J_{b}\) for the \(\sigma\)-centralizer of \(b\).
* For \(\mu\in\mathbb{X}_{*}(T_{\overline{\mathbb{Q}}_{p}})^{+}\), we let \(B(G,\mu)\) be the \(\mu\)-admissible locus (as defined in [14, Definition 2.3]).
* Let \(\mathrm{Perf}\) denote the category of affinoid perfectoid spaces in characteristic \(p\) over \(*:=\mathrm{Spd}(\overline{\mathbb{F}}_{p})\) endowed with the \(v\)-topology. For a perfectoid space \(S\), let \(\mathrm{Perf}_{S}\) denote the category of affinoid perfectoid spaces over the tilt \(S^{\flat}\).
* For \(S\in\mathrm{Perf}\), let \(X_{S}\) denote the relative schematic Fargues-Fontaine curve over \(S\).
* For \(\mathrm{Spa}\left(F,\mathcal{O}_{F}\right)\in\mathrm{Perf}\) a geometric point, we will often drop the subscript on \(X_{F}\) and just write \(X\) for the associated Fargues-Fontaine curve.
* For \(b\in B(G)\), we write \(\mathcal{E}_{b}\) for the associated \(G\)-bundle on \(X\).
* For \(S\in\mathrm{Perf}\), we let \(\mathcal{E}_{0}\) denote the trivial \(G\)-bundle on \(X_{S}\).
* To a diamond or \(v\)-stack \(X\) over \(*\), we write \(\mathrm{D}(X,\overline{\mathbb{F}}_{\ell})\) for the category of etale \(\overline{\mathbb{F}}_{\ell}\)-sheaves, as defined in [11]. We let \(\mathrm{D}^{\mathrm{ULA}}(X,\overline{\mathbb{F}}_{\ell})\) denote the full subcategory of ULA sheaves over \(*\).
* For an Artin \(v\)-stack \(X\) and \(\Lambda\in\{\overline{\mathbb{F}}_{\ell},\overline{\mathbb{Z}}_{\ell},\overline{\mathbb{Q}}_{\ell}\}\), we write \(\mathrm{D}_{\blacksquare}(X,\Lambda)\) for the condensed \(\infty\)-category of solid \(\Lambda\)-sheaves on \(X\), and write \(\mathrm{D}_{\mathrm{lis}}(X,\Lambda)\subset\mathrm{D}_{\blacksquare}(X,\Lambda)\) for the full sub-category of \(\Lambda\)-lisse-etale sheaves, as defined in [11, Chapter VII].
* If \(X\) is an Artin \(v\)-stack ([11, Definition IV.V.1]) admitting a separated cohomologically smooth surjection \(U\to X\) from a locally spatial diamond \(U\) such that the etale site has a basis with bounded \(\ell\)-cohomological dimension (which will always be the case for our applications) then we will regard it as a condensed \(\infty\)-category via the identification \(\mathrm{D}_{\mathrm{lis}}(X,\overline{\mathbb{F}}_{\ell})\simeq\mathrm{D}(X, \overline{\mathbb{F}}_{\ell})\) when viewed as objects in \(\mathrm{D}_{\blacksquare}(X,\overline{\mathbb{F}}_{\ell})\)[11, Proposition VII.6.6].
* We let \(\hat{G}\) denote the Langlands dual group of \(G\) with fixed splitting \((\hat{T},\hat{B},\{X_{\alpha}\})\).
* If \(E\) denotes the splitting field of \(G\) then the action of \(W_{\mathbb{Q}_{p}}\) on \(\hat{G}\) factors through \(Q:=W_{\mathbb{Q}_{p}}/W_{E}\). We let \({}^{L}G:=\hat{G}\rtimes Q\) denote the \(L\)-group.
* For \(I\) a finite index set, we let \(\mathrm{Rep}_{\overline{\mathbb{F}}_{\ell}}({}^{L}G^{I})\) (resp. \(\mathrm{Rep}_{\overline{\mathbb{F}}_{\ell}}(\hat{G}^{I})\)) denote the category of finite-dimensional algebraic representations of \({}^{L}G^{I}\) (resp. \(\hat{G}^{I}\)) over \(\overline{\mathbb{F}}_{\ell}\).
* For \(\mu\in\mathbb{X}_{*}(T_{\overline{\mathbb{Q}}_{p}})^{+}\), we write \(V_{\mu}\in\mathrm{Rep}_{\overline{\mathbb{F}}_{\ell}}(\hat{G})\) (resp. \(\mathcal{T}_{\mu}\in\mathrm{Rep}_{\overline{\mathbb{F}}_{\ell}}(\hat{G})\)) for the usual highest weight representation (resp. highest weight tilting module, as in [10]) of highest weight \(\mu\).
* To any condensed \(\infty\)-category \(\mathcal{C}\), we write \(\mathcal{C}^{BW^{I}_{\mathbb{Q}_{p}}}\) for the category of objects with continuous \(W^{I}_{\mathbb{Q}_{p}}\)-action, as defined in [11, Section IX.1].
* For any separated \(v\)-stack \(X\to\mathrm{Spa}(K,\mathcal{O}_{K})\), where \(K\) is a non-archimedean field, we write \(\overline{X}\) for the canonical compactification of \(X\) with respect to the structure map ([11, Proposition 18.6], [12, Theorem 5.15]).
* For a reductive group \(H/\mathbb{Q}_{p}\), we write \(\mathrm{D}(H(\mathbb{Q}_{p}),\overline{\mathbb{F}}_{\ell})\) for the unbounded derived category of smooth \(\overline{\mathbb{F}}_{\ell}\)-representations.
* For an analytic adic space \(X\), we will often abuse notation and use \(X\) to also denote the diamond \(X^{\diamond}\) attached to it (as defined in [11, Lecture X]).
## 2. Preliminaries on Shimura Varieties
In this section we will recall some facts about Shimura varieties which we will need later in this paper.
### Shimura Varieties
We will mainly work with the following two types of Shimura varieties.
#### 2.1.1. PEL type A and C
Let \((\mathcal{O}_{B},*,L,\langle\cdot,\cdot\rangle)\) be an integral PEL datum, where \(B\) is a finite-dimensional semisimple \(\mathbb{Q}\)-algebra, \(*\) is a \(\mathbb{Q}\)-linear involution of \(B\) whose fixed field in the center we denote by \(F\), \(\mathcal{O}_{B}\) is a \(*\)-stable \(\mathbb{Z}\)-order of \(B\), \(L\) is a lattice with \(\mathcal{O}_{B}\)-action, and \(\langle\cdot,\cdot\rangle:L\times L\to\mathbb{Z}(1)\) is a non-degenerate alternating form such that \(\langle bv,v^{\prime}\rangle=\langle v,b^{*}v^{\prime}\rangle\) for all \(b\in\mathcal{O}_{B}\) and \(v,v^{\prime}\in L\).
To our integral PEL datum, we have the following group scheme \(\mathbf{G}\) over \(\mathbb{Z}\) whose \(R\)-points, for each \(\mathbb{Z}\)-algebra \(R\), are given by
\[\mathbf{G}(R):=\{(g,r)\in\operatorname{End}_{\mathcal{O}_{B}\otimes_{\mathbb{ Z}}R}(L\otimes_{\mathbb{Z}}R)\times R|\langle gv,gw\rangle=r\langle v,w\rangle \text{ for all }v,w\in L\otimes_{\mathbb{Z}}R\}.\]
The PEL datum is of type A if \((B\otimes_{F}\overline{F},*)\) is isomorphic to \(\operatorname{End}(W)\times\operatorname{End}(W)^{\operatorname{op}}\) with \((a,b)^{*}=(b,a)\) for some vector space \(W\), and of type C if \((B\otimes_{F}\overline{F},*)\) is isomorphic to \(\operatorname{End}(W)\) with \(*\) the adjoint involution with respect to a symmetric bilinear form on \(W\). We will assume from now on that we are in one of these cases.
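For instance (the standard type C example): taking \(B=F=\mathbb{Q}\), \(*=\mathrm{id}\), \(\mathcal{O}_{B}=\mathbb{Z}\), and \(L=\mathbb{Z}^{2n}\) equipped with the standard symplectic pairing, the group scheme above becomes
\[\mathbf{G}=\mathrm{GSp}_{2n}=\{g\in\mathrm{GL}(L)\ |\ \langle gv,gw\rangle=r(g)\langle v,w\rangle\ \text{for a similitude factor }r(g)\},\]
so the associated Shimura varieties are the Siegel modular varieties; for \(n=1\), \(\mathrm{GSp}_{2}=\mathrm{GL}_{2}\) and one recovers the modular curve.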
We now further assume the data is unramified at \(p\); namely, that each term in the decomposition \(B_{\mathbb{Q}_{p}}=\prod_{\mathfrak{p}\mid p}B\otimes_{F}F_{\mathfrak{p}}\) is a matrix algebra over an unramified extension of \(\mathbb{Q}_{p}\). We will thus moreover assume that \(\mathcal{O}_{B}\otimes\mathbb{Z}_{p}\) is a maximal order in \(B_{\mathbb{Q}_{p}}\), and \(L\) is self-dual after localization at \(p\). This can be arranged following [12, Remark 1.3.4.8]. Note that these conditions equivalently ensure that the group \(G\) is unramified.
We will now briefly discuss what conditions we may need to impose on the prime \(p\) so that the form of the local group \(G\) satisfies the conditions in Table (1). Firstly, suppose we are in type A. Then the center \(Z(B)=F_{c}\) is a quadratic imaginary extension of \(F\). Let \(n\) be the \(\mathcal{O}_{B}\)-rank of \(L\). We will need to assume that, for every prime \(\mathfrak{p}\) of \(F\) above \(p\), one of the following holds:
1. \(\mathfrak{p}\) is split in \(F_{c}\); or
2. \(F_{\mathfrak{p}}=\mathbb{Q}_{p}\), and \(n\) is odd.
These conditions imply that \(G\) will be a similitude subgroup of \(\prod_{\mathfrak{p}}G_{\mathfrak{p}}\) where \(G_{\mathfrak{p}}\) is either \(\operatorname{Res}_{F_{\mathfrak{p}}/\mathbb{Q}_{p}}(\mathbb{G}_{m}\times \operatorname{GL}_{n})\) or \(\operatorname{GU}_{n}\) for an odd unitary group over \(\mathbb{Q}_{p^{2}}\).
Now suppose we are in type C. Since the PEL data is unramified at \(p\), we see that \(B\otimes_{F}F_{\mathfrak{p}}\) is indefinite for all primes \(\mathfrak{p}\) of \(F\) above \(p\), and thus \(G\) will be a similitude subgroup of
\[\prod_{\mathfrak{p}}\operatorname{Res}_{F_{\mathfrak{p}}/\mathbb{Q}_{p}}( \operatorname{GSp}_{2n}).\]
Here, we will need to assume that the rank \(n\) of \(L\) as an \(\mathcal{O}_{B}\) lattice is either \(1\) or \(2\) to satisfy the conditions in Table (1).
Both types of Shimura varieties will be moduli spaces of abelian varieties with extra structure, which we now briefly describe. Let \(K^{p}\subset\mathbf{G}(\mathbb{A}_{f}^{p})\) be an open compact subgroup. To such a PEL datum, the Shimura variety \(\mathfrak{S}(\mathbf{G},X)_{K}\) over \(\mathcal{O}_{E_{\mathfrak{p}}}\) is the scheme which represents the functor that associates to each locally Noetherian scheme \(S\) over \(\mathcal{O}_{E_{\mathfrak{p}}}\) the set of isomorphism classes of tuples \((A,\lambda,\iota,\eta^{p})\) consisting of
1. An abelian scheme \(A/S\) of dimension \(n[F:\mathbb{Q}]\), up to prime-to-\(p\) isogeny,
2. A prime-to-\(p\) polarization \(\lambda:A\to A^{\vee}\),
3. An embedding \(\iota:\mathcal{O}_{B}\otimes\mathbb{Z}_{(p)}\hookrightarrow\operatorname{End}(A )\otimes_{\mathbb{Z}}\mathbb{Z}_{(p)}\) of \(\mathbb{Z}_{(p)}\)-algebras such that \[\lambda\circ\iota(b^{*})=\iota(b)^{\vee}\circ\lambda,\]
4. A section \(\eta^{p}\in\Gamma(S,\operatorname{Isom}_{B}(L\otimes_{\mathbb{Z}}\mathbb{A}_{f}^{p},\hat{V}(A)^{p})/K^{p})\), where \(\operatorname{Isom}_{B}(L\otimes_{\mathbb{Z}}\mathbb{A}_{f}^{p},\hat{V}(A)^{p})\) is the étale torsor of isomorphisms mapping \(\langle\cdot,\cdot\rangle\) to an \(\mathbb{A}_{f}^{p,\times}\)-multiple of the pairing on \(\hat{V}(A)^{p}\) defined by the Weil pairing,
satisfying the Kottwitz determinant condition that \(\det(b|\mathrm{Lie}(A))=\det(b|V^{-1,0})\) as polynomial functions on \(\mathcal{O}_{B}\), where \(V=L\otimes\mathbb{Q}\) and \(V_{\mathbb{C}}=V^{-1,0}\oplus V^{0,-1}\) is the Hodge decomposition.
#### 2.1.2. Compactifications
We will now recall some constructions from the theory of toroidal compactifications of PEL type Shimura varieties from [11]. To match the setting in [11], we will moreover assume from now on that the level structure \(K\) is a principal congruence subgroup for some \(N\geq 3\), namely
\[K=K(N)=\{g\in\mathbf{G}(\hat{\mathbb{Z}})|g\equiv 1\pmod{N}\}.\]
We first recall the definition of a split, symplectic and admissible filtration from [11, §5.2.1]. Let \(R\) be a commutative ring.
**Definition 2.1**.: A split, symplectic and admissible filtration on \(L\otimes_{\mathbb{Z}}R\) is a two-step filtration on \(L\otimes_{\mathbb{Z}}R\) by \((\mathcal{O}_{B}\otimes_{\mathbb{Z}}R)\)-submodules, i.e. we have
\[0=Z_{-3}\subset Z_{-2}\subset Z_{-1}\subset L\otimes_{\mathbb{Z}}R,\]
such that if we put \(\mathrm{Gr}_{-i}^{Z}=Z_{-i}/Z_{-i-1}\) for \(0\leq i\leq 2\), and \(\mathrm{Gr}^{Z}=\oplus_{0\leq i\leq 2}\mathrm{Gr}_{-i}^{Z}\), we have
1. \(\mathrm{Gr}_{-i}^{Z}\) is isomorphic to \(M\otimes_{\mathbb{Z}}R\) for some \(\mathcal{O}_{B}\)-lattice \(M\)
2. There is some isomorphism of \(\mathcal{O}_{B}\)-lattices
\[L\otimes_{\mathbb{Z}}R\simeq\mathrm{Gr}^{Z}\]
3. \(Z_{-2}\) and \(Z_{-1}\) are annihilators of each other under the pairing \(\langle,\rangle\) induced from \(L\).
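To illustrate Definition 2.1 in the Siegel case above (an example included for orientation; the symplectic basis \(e_{i},f_{i}\) is our notation): let \(L=\mathbb{Z}^{2n}\) with \(\langle e_{i},f_{j}\rangle=\delta_{ij}\) and \(\langle e_{i},e_{j}\rangle=\langle f_{i},f_{j}\rangle=0\). For each \(0\leq r\leq n\), setting
\[Z_{-2}=\langle e_{1},\dots,e_{r}\rangle\otimes_{\mathbb{Z}}R,\qquad Z_{-1}=\langle e_{1},\dots,e_{n},f_{r+1},\dots,f_{n}\rangle\otimes_{\mathbb{Z}}R\]
gives a split, symplectic and admissible filtration: \(Z_{-2}\) is isotropic with annihilator \(Z_{-1}\) (and vice versa), while \(\mathrm{Gr}_{-2}^{Z}\simeq R^{r}\), \(\mathrm{Gr}_{-1}^{Z}\) carries a symplectic form of rank \(2(n-r)\), and \(\mathrm{Gr}_{0}^{Z}\simeq R^{r}\). These filtrations correspond to the rank \(r\) boundary components of the Siegel modular variety.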
**Definition 2.2**.: Let \(M\) be a finite \(B\)-module. Since \(B\simeq\prod_{i}B_{i}\) where each \(B_{i}\) is simple, we have a decomposition \(M\simeq\bigoplus_{i}M_{i}^{\oplus m_{i}}\), where \(M_{i}\) is the unique simple left \(B_{i}\)-module. We call the tuple \((m_{i})\) the \(B\)-multi-rank of \(M\).
Let \(R=\hat{\mathbb{Z}}\) and suppose that we have a split symplectic admissible filtration \(Z=Z_{\bullet}\) as above.
**Definition 2.3**.: A torus argument \(\Phi\) for \(Z\) is a tuple \(\Phi=(X,Y,\phi,\varphi_{-2},\varphi_{0})\), where
1. \(X\) and \(Y\) are \(\mathcal{O}_{B}\)-lattices of the same \(B\)-multi-rank, and \(\phi:Y\hookrightarrow X\) is an \(\mathcal{O}_{B}\)-linear embedding
2. We have isomorphisms \(\varphi_{-2}:\mathrm{Gr}_{-2}^{Z}\simeq\operatorname{Hom}_{R}(X\otimes_{ \mathbb{Z}}R,R(1))\) and \(\varphi_{0}:\mathrm{Gr}_{0}^{Z}\simeq Y\otimes_{\mathbb{Z}}R\) such that the pairing \(\langle,\rangle_{20}:\mathrm{Gr}_{-2}^{Z}\times\mathrm{Gr}_{0}^{Z}\to R(1)\) is the pullback under these isomorphisms of the pairing \[\langle\cdot,\cdot\rangle^{\phi}:\operatorname{Hom}_{R}(X\otimes R,R(1)) \times(Y\otimes R)\xrightarrow{\operatorname{id}\times\phi}\operatorname{Hom}_ {R}(X\otimes R,R(1))\times(X\otimes R)\to R(1),\] where the last arrow is the tautological pairing.
We thus define a cusp label as a pair \((Z,\Phi)\), where \(Z\) is a split symplectic admissible filtration on \(L\otimes_{\mathbb{Z}}\hat{\mathbb{Z}}\), and \(\Phi\) is a torus argument for \(Z\). Note that this is the generalization of the cusp labels \((Z,X)\) considered in [11, §2.5.2], as for the PEL type A Shimura data they considered, the assumption of principal polarization means we can set \(X=Y\), and the torus argument \(\Phi\) is determined by the \(\mathcal{O}_{F}\)-isomorphism.
There is an action of \(\mathbf{G}(\mathbb{A}_{f})\) on pairs \((Z,\Phi)\), as defined in [11, §5.4.3], and we define a cusp label at level \(K\) to be a \(K\)-orbit of pairs \((Z,\Phi)\).
To each cusp label \((Z,\Phi)\), we can associate a split torus \(E_{\Phi}\) over \(\mathbb{Z}\), as constructed by Lan in [11, §6.4]. Let \(S_{\Phi}=\mathbb{X}^{*}(E_{\Phi})\). Let \(S_{\Phi}^{\vee}:=\operatorname{Hom}_{\mathbb{Z}}(S_{\Phi},\mathbb{Z})\) be the \(\mathbb{Z}\)-dual of \(S_{\Phi}\), and let \((S_{\Phi})_{\mathbb{R}}^{\vee}:=S_{\Phi}^{\vee}\otimes_{\mathbb{Z}}\mathbb{R}\). The \(\mathbb{R}\)-vector space \((S_{\Phi})_{\mathbb{R}}^{\vee}\) is isomorphic to the space of Hermitian pairings \(|\cdot,\cdot|:(Y\otimes\mathbb{R})\times(Y\otimes\mathbb{R})\to\mathcal{O}_{B}\otimes\mathbb{R}\), by sending a Hermitian pairing \(|\cdot,\cdot|\) to the function \(y\otimes\phi(y^{\prime})\mapsto\mathrm{Tr}_{B/\mathbb{Q}}(|y,y^{\prime}|)\) in \(\mathrm{Hom}_{\mathbb{Z}}(S_{\Phi},\mathbb{R})\) (cf. [1, §6.2.5]).
Thus, we have an \(\mathbb{R}\)-vector space \((S_{\Phi})_{\mathbb{R}}^{\vee}\) of Hermitian pairings, and we define \(P_{\Phi}\) to be the subset of \((S_{\Phi})_{\mathbb{R}}^{\vee}\) corresponding to positive semi-definite Hermitian pairings with admissible radicals (see [1, Definition 6.2.5.4] and the subsequent discussion for the precise definition of admissible radical). \(P_{\Phi}\) will be a rational polyhedral cone in \((S_{\Phi})_{\mathbb{R}}^{\vee}\). Moreover, to every torus argument \(\Phi\) we can also associate a stabilizer group \(\Gamma_{\Phi}\). We thus let \(\Sigma_{\Phi}\) be a \(\Gamma_{\Phi}\)-admissible rational polyhedral cone decomposition of \(P_{\Phi}\), as in [1, Definition 6.1.1.14].
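Continuing the Siegel illustration (again, only an example): for the rank \(r\) cusp one may take \(X=Y=\mathbb{Z}^{r}\) with \(\phi=\mathrm{id}\); then \((S_{\Phi})_{\mathbb{R}}^{\vee}\) is identified with the space of symmetric bilinear forms on \(Y\otimes\mathbb{R}=\mathbb{R}^{r}\), \(P_{\Phi}\) with the cone
\[P_{\Phi}\simeq\{Q\in\mathrm{Sym}^{2}(\mathbb{R}^{r})^{\vee}\ |\ Q\text{ positive semi-definite with rational radical}\},\]
and \(\Gamma_{\Phi}\simeq\mathrm{GL}_{r}(\mathbb{Z})\), recovering the classical cones whose admissible decompositions enter the toroidal compactification of \(\mathcal{A}_{g}\).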
From now on, we will assume that we have fixed a compatible choice of admissible smooth rational polyhedral cone decomposition data (rpcd) \(\Sigma\) for \(K\); namely, we have
1. A complete set of representatives \((Z,\Phi)\) of cusp labels at level \(K\),
2. A \(\Gamma_{\Phi}\)-admissible smooth rational polyhedral cone decomposition \(\Sigma_{\Phi}\) for each cusp \((Z,\Phi)\), so that the cone decompositions are pairwise compatible.
The precise definition and proof of existence of such smooth admissible rpcd can be found in [1, §6.3.3.2, §6.6.3.3]. Associated to this admissible smooth rpcd, we have a toroidal compactification of \(\mathfrak{S}(\mathbf{G},X)_{K}\), as in the following theorem of Lan [1, Theorem 6.4.1.1].
**Theorem 2.4**.: _To each compatible choice \(\Sigma=\{\Sigma_{\Phi}\}\) of admissible smooth rational polyhedral cone decomposition data, there is an associated proper smooth algebraic scheme \(\mathfrak{S}(\mathbf{G},X)_{K}^{\mathrm{tor}}\) over \(\mathcal{O}_{E_{\mathfrak{p}}}\) containing \(\mathfrak{S}(\mathbf{G},X)_{K}\) as an open dense subscheme, together with a semiabelian family \(\mathcal{A}\) over \(\mathfrak{S}(\mathbf{G},X)_{K}^{\mathrm{tor}}\). Moreover, we have a stratification_
\[\mathfrak{S}(\mathbf{G},X)_{K}^{\mathrm{tor}}=\coprod_{(\Phi,\sigma)}Z_{(\Phi,\sigma)},\]
_where \((\Phi,\sigma)\) runs over the cusp labels \(\Phi\) together with the cones \(\sigma\in\Sigma_{\Phi}\) of the chosen decomposition._
Finally, observe that there is a cusp label corresponding to taking the filtration \(Z_{-2}=0\), \(Z_{-1}=L\otimes R\), and the torus argument \(X=Y=0\). This trivial cusp label will correspond to the original Shimura variety \(\mathfrak{S}(\mathbf{G},X)_{K}\) in the stratification above.
### Igusa Varieties
We now fix some \(b\in B(G,\mu)\), and consider a geometric point \(x\in\mathfrak{S}(\mathbf{G},X)_{K}(\overline{\mathbb{F}}_{p})\) lying in the Newton stratum for \(b\). This corresponds to some abelian variety \(\mathcal{A}_{x}\), whose \(p\)-divisible group with \(G\)-structure \(\mathbb{X}:=\mathcal{A}_{x}[p^{\infty}]\) is given by \(b\). Up to replacing \(x\) by another element in its isogeny class, we can assume \(\mathbb{X}\) is completely slope divisible. Thus, we can write \(\mathbb{X}=\oplus_{i=1}^{r}\mathbb{X}_{i}\), where the \(\mathbb{X}_{i}\) are isoclinic \(p\)-divisible groups of strictly decreasing slopes.
We consider the following subset
\[\mathscr{C}_{\mathbb{X}}:=\{x\in\mathfrak{S}(\mathbf{G},X)_{K,\overline{ \mathbb{F}}_{p}}:\exists\text{ isomorphism }\rho:\mathcal{A}_{x}[p^{\infty}]\times k(\overline{x})\simeq\mathbb{X} \times k(\overline{x})\text{ preserving $G$-structure}\},\]
where we denote by \(\mathfrak{S}(\mathbf{G},X)_{K,\overline{\mathbb{F}}_{p}}\) the (geometric) special fiber of \(\mathfrak{S}(\mathbf{G},X)_{K}\). This turns out to be a closed subset of \(\mathfrak{S}(\mathbf{G},X)_{K,\overline{\mathbb{F}}_{p}}\), and thus we can give this subset the induced reduced scheme structure. We will continue to denote the associated scheme by \(\mathscr{C}_{\mathbb{X}}\), and it turns out this scheme is smooth.
Let \(\mathscr{G}\) be the \(p\)-divisible group of the restriction to \(\mathscr{C}_{\mathbb{X}}\) of the universal abelian variety over \(\mathfrak{S}(\mathbf{G},X)_{K}\). We further define \(\mathrm{Ig}^{b}\) as the scheme over \(\mathscr{C}_{\mathbb{X}}\) parametrizing, for any perfect \(\mathscr{C}_{\mathbb{X}}\)-scheme \(\mathscr{T}\), isomorphisms \(\mathscr{G}\times_{\mathscr{C}_{\mathbb{X}}}\mathscr{T}\simeq\mathbb{X}\times _{\overline{\mathbb{F}}_{p}}\mathscr{T}\) which preserve \(G\)-structure. Equivalently, we can define \(\mathrm{Ig}^{b}\) as the functor sending an \(\overline{\mathbb{F}}_{p}\)-algebra \(R\) to the set
\[\mathrm{Ig}^{b}(R)=\{(\rho,x):x\in\mathfrak{S}(\mathbf{G},X)_{K}(R),\rho: \mathcal{A}_{x}[p^{\infty}]\overset{\sim}{\to}\mathbb{X}_{R}\text{ preserving $G$-structure}\}. \tag{2}\]
By [1, Corollary 4.3.5], we know that this scheme is perfect, and hence it lifts uniquely to a flat \(p\)-adic formal scheme, which we denote by \(\mathfrak{Ig}^{b}\) over \(\mathrm{Spf}(W(\overline{\mathbb{F}}_{p}))\).
We write \(\mathfrak{Ig}^{b}_{C}\) for the perfectoid space attached to the adic generic fiber of \(\mathfrak{Ig}^{b}\) over \(C\). These spaces are supposed to model the fibers of the Hodge-Tate period morphism, a connection we will elaborate upon in §3.
#### 2.2.1. Compactifications
In order to understand (partial) minimal and toroidal compactifications of \(\mathrm{Ig}^{b}\), we must first consider compactifications of the central leaf \(\mathscr{C}_{\mathbb{X}}\). As in the discussion in [11, §3.1], the central leaf \(\mathscr{C}_{\mathbb{X}}\) is a well-positioned subset of \(\mathfrak{S}(\mathbf{G},X)_{K,\overline{\mathbb{F}}_{p}}\), and thus admits partial toroidal and minimal compactifications, which we will denote by \(\mathscr{C}^{\mathrm{tor}}_{\mathbb{X}}\) and \(\mathscr{C}^{*}_{\mathbb{X}}\) respectively. Moreover, let \(Z\) be a cusp label at level \(K(N)\). This determines a locally closed boundary stratum \(\mathscr{C}_{\mathbb{X},Z}\subset\mathscr{C}^{\mathrm{tor}}_{\mathbb{X}}\).
The Igusa variety \(\mathrm{Ig}^{b}\) over \(\mathscr{C}_{\mathbb{X}}\) extends to a perfect scheme \(\mathrm{Ig}^{b,\mathrm{tor}}\) over \(\mathscr{C}^{\mathrm{tor}}_{\mathbb{X}}\). More precisely, we can define \(\mathrm{Ig}^{b,\mathrm{tor}}\) as follows. Let \(\mathcal{A}\) denote the universal semi-abelian scheme over \(\mathscr{C}^{\mathrm{tor}}_{\mathbb{X}}\). This is the restriction to \(\mathscr{C}^{\mathrm{tor}}_{\mathbb{X}}\) of the universal semi-abelian scheme over \(\mathfrak{S}(\mathbf{G},X)_{K,\overline{\mathbb{F}}_{p}}\). Then, we know from [11, Proposition 3.2.1] that the connected part \(\mathcal{A}[p^{\infty}]^{\circ}\) of \(\mathcal{A}[p^{\infty}]\) is a \(p\)-divisible group. Moreover, if we denote by \(\mathcal{A}[p^{\infty}]^{\mu}\) the multiplicative part, then this is also a \(p\)-divisible group. We thus let \(\mathcal{A}[p^{\infty}]^{(0,1)}=\mathcal{A}[p^{\infty}]^{\circ}/\mathcal{A}[p^{\infty}]^{\mu}\) be the biconnected part. We can similarly define \(\mathbb{X}^{\circ},\mathbb{X}^{(0,1)}\) as the connected and biconnected parts of \(\mathbb{X}\).
Thus, we can define \(\mathrm{Ig}^{b,\mathrm{tor}}\) to be the scheme which, for a perfect \(\mathscr{C}^{\mathrm{tor}}_{\mathbb{X}}\)-scheme \(\mathscr{T}\), parametrizes \(\mathcal{O}_{B}\) -linear isomorphisms \(\rho:\mathcal{A}[p^{\infty}]^{\circ}\times_{\mathscr{C}^{\mathrm{tor}}_{ \mathbb{X}}}\mathscr{T}\xrightarrow{\sim}\mathbb{X}^{\circ}_{\mathscr{T}}\) and a scalar in \(\mathbb{Z}^{\times}_{p}(\mathscr{T})\) such that the induced isomorphism \(\rho^{(0,1)}:\mathcal{A}[p^{\infty}]^{(0,1)}\times_{\mathscr{C}^{\mathrm{tor}}_ {\mathbb{X}}}\mathscr{T}\xrightarrow{\sim}\mathbb{X}^{(0,1)}_{\mathscr{T}}\) obtained by quotienting by the multiplicative parts commutes with the polarizations up to the given element of \(\mathbb{Z}^{\times}_{p}(\mathscr{T})\). Here, \(\mathbb{Z}^{\times}_{p}(\mathscr{T})\) is the set of \(\mathscr{T}\)-points of the group scheme \(\mathbb{Z}^{\times}_{p}\times\mathscr{C}^{\mathrm{tor}}_{\mathbb{X}}\).
Finally, we define the partial minimal compactification \(\mathrm{Ig}^{b,*}\) as the normalization of \(\mathscr{C}^{*}_{\mathbb{X}}\) in \(\mathrm{Ig}^{b}\). Since we have \(\mathrm{Ig}^{b}\subset\mathrm{Ig}^{b,*},\mathrm{Ig}^{b,\mathrm{tor}}\), we will denote the boundaries by \(\partial\mathrm{Ig}^{b,*}\) and \(\partial\mathrm{Ig}^{b,\mathrm{tor}}\), respectively. These schemes are all perfect, and we can lift them to \(p\)-adic formal schemes over \(\mathrm{Spf}(W(\overline{\mathbb{F}}_{p}))\). We similarly denote by \(\mathfrak{Ig}^{b,*}_{C}\), \(\mathfrak{Ig}^{b,\mathrm{tor}}_{C}\), \(\partial\mathfrak{Ig}^{b,*}_{C}\), and \(\partial\mathfrak{Ig}^{b,\mathrm{tor}}_{C}\) the associated perfectoid spaces over \(C\).
#### 2.2.2. Igusa Cusp Labels
In order to understand the boundary components \(\partial\mathrm{Ig}^{b,*}\) and \(\partial\mathrm{Ig}^{b,\mathrm{tor}}\), we will recall the notion of Igusa cusp labels, as in [10, Definition 3.2.19] (which we have slightly modified to match the definition of cusp labels previously introduced). We let \(\mathbb{X}_{b}:=\mathbb{X}\) be the completely slope divisible \(p\)-divisible group attached to \(b\) defined above. We reintroduce \(b\) in the notation to emphasise that all constructions here depend on \(b\). Finally, observe that, since from the moduli problem the polarization on \(\mathcal{A}_{x}\) is prime-to-\(p\), the \(p\)-divisible group \(\mathbb{X}_{b}=\mathcal{A}_{x}[p^{\infty}]\) is principally polarized.
**Definition 2.5**.: We define an Igusa cusp label as a tuple \((Z_{b},Z^{p},X,Y,\phi,\varphi_{0},\varphi_{-2},\tilde{\varphi}_{0},\delta_{b})\) where
1. \(Z_{b}\) is an \(\mathcal{O}_{B}\) -stable filtration of \(\mathbb{X}_{b}\) by \(p\)-divisible subgroups of the form \[0=Z_{b,-3}\subset Z_{b,-2}\subset Z_{b,-1}\subset\mathbb{X}_{b},\] where \(\mathrm{Gr}^{Z_{b}}_{-2}=Z_{b,-2}\) is multiplicative, and \(\mathrm{Gr}^{Z_{b}}_{0}=\mathbb{X}_{b}/Z_{b,-1}\) is etale, and \(Z_{b,-1}\), \(Z_{b,-2}\) are Cartier dual to each other under the principal polarization on \(\mathbb{X}_{b}\).
2. \(\delta_{b}\) is an \(\mathcal{O}_{B}\)-linear isomorphism \[\delta_{b}:\mathrm{Gr}^{Z_{b}}\simeq\mathbb{X}_{b}\]
3. \(Z^{p}\) is an \(\mathcal{O}_{B}\) -stable split, symplectic and admissible filtration \[0=Z^{p}_{-3}\subset Z^{p}_{-2}\subset Z^{p}_{-1}\subset L\otimes_{\mathbb{Z}} \hat{\mathbb{Z}}^{p}\]
4. \(X,Y\) are \(\mathcal{O}_{B}\)-lattices of the same \(B\)-multirank, together with an \(\mathcal{O}_{B}\)-linear embedding \(\phi:Y\hookrightarrow X\), and we have isomorphisms \[\varphi_{0}:Y\otimes_{\mathbb{Z}}\hat{\mathbb{Z}}^{p}\simeq\mathrm{Gr}^{Z^{p}}_ {0}\]
\[\varphi_{-2}:\operatorname{Hom}(X\otimes_{\mathbb{Z}}\hat{\mathbb{Z}}^{p},\hat{ \mathbb{Z}}^{p}(1))\simeq\operatorname{Gr}_{-2}^{Z^{p}}\]
\[\tilde{\varphi}_{0}:Y\otimes(\mathbb{Q}_{p}/\mathbb{Z}_{p})\simeq \operatorname{Gr}_{0}^{Z_{b}}\]
such that the pairing \(\langle,\rangle_{20}:\operatorname{Gr}_{-2}^{Z^{p}}\times\operatorname{Gr}_{0 }^{Z^{p}}\to\hat{\mathbb{Z}}^{p}(1)\) induced from the one on \(L\) is the pullback via \(\varphi_{-2},\varphi_{0}\) of the one defined on \(X,Y\).
There is an action of \(J_{b}(\mathbb{Q}_{p})\times\mathbf{G}(\mathbb{A}_{f}^{p})\) on Igusa cusp labels. If \(K\subset J_{b}(\mathbb{Q}_{p})\times\mathbf{G}(\mathbb{A}_{f}^{p})\) is a compact open subgroup then an Igusa cusp label at level \(K\) is a \(K\)-orbit of Igusa cusp labels. For a general closed subgroup \(H\subset J_{b}(\mathbb{Q}_{p})\times\mathbf{G}(\mathbb{A}_{f}^{p})\), an Igusa cusp label at level \(H\) is a compatible family of Igusa cusp labels at level \(K\) for all \(K\supset H\).
#### 2.2.3. Boundary components
We can decompose the boundary \(\partial\mathrm{Ig}^{b,\mathrm{tor}}\) according to Igusa cusp labels of prime-to-\(p\) level \(K^{p}(N)\), in the following way. For every positive integer \(m\), there is a level \(p^{m}\)-Igusa variety \(\mathrm{Ig}^{b}_{m}\), defined as in [16, Definition 4.3.6], and we let \(\Gamma_{m,b}\) denote the Galois group of the finite etale cover \(\mathrm{Ig}^{b}_{m}\to\mathscr{C}_{\mathbb{X}}\). We also have a toroidal extension of \(\mathrm{Ig}^{b}_{m}\) to a level \(p^{m}\)-Igusa variety \(\mathrm{Ig}^{b,\mathrm{tor}}_{m}\), as defined in [16, Definition 3.2.5]. We let \(\Gamma_{b}(p^{m}):=\ker(\operatorname{Aut}(\mathbb{X})\to\Gamma_{m,b})\), and note that if we let \(K=\Gamma_{b}(p^{m})K^{p}(N)\) then such Igusa cusp labels at level \(K\) have the same data as triples \((Z_{m,b},Z,\Phi)\), where \(Z=(Z,\Phi)\) is a cusp label at level \(K(N)\), and \(Z_{m,b}\) is an \(\mathcal{O}_{B}\)-filtration on \(\mathbb{X}_{b}[p^{m}]\) together with an isomorphism \(X/p^{m}\simeq\operatorname{Gr}_{0}^{Z_{m,b}}\). In particular, we can consider the locally closed boundary stratum \(\mathscr{C}_{\mathbb{X},Z}\), and the \(p^{m}\)-Igusa variety \(\mathrm{Ig}^{b,\mathrm{tor}}_{m,Z}\) which is the preimage of \(\mathscr{C}_{\mathbb{X},Z}\). Moreover, from [14, Theorem 3.2.22], we see that we have a decomposition
\[\mathrm{Ig}^{b,\mathrm{tor}}_{m,Z}=\coprod_{\tilde{Z}_{m}}\mathrm{Ig}^{b,\mathrm{tor}}_{m,\tilde{Z}_{m}},\]
where \(\tilde{Z}_{m}\) denotes Igusa cusp labels of level \(\Gamma_{b}(p^{m})K^{p}(N)\) lying over \(Z\).
Now, let \(\tilde{Z}\) denote an Igusa cusp label of level \(K^{p}(N)\). This is by definition a compatible system \(\{\tilde{Z}_{m}\}\) of Igusa cusp labels for \(\Gamma_{b}(p^{m})K^{p}(N)\), for all positive integers \(m\). We can thus define
\[\mathrm{Ig}^{b,\mathrm{tor}}_{\tilde{Z}}=\lim_{m}\mathrm{Ig}^{b,\mathrm{tor }}_{m,\tilde{Z}_{m}}.\]
For later use, we will want to have a moduli description of points in \(\mathrm{Ig}^{b,\mathrm{tor}}_{\tilde{Z}}\). We first recall some facts about degenerations of abelian schemes, from [13, §3.4] and [16, §2.5.1]. Let \(C^{\prime}\) be a complete algebraically closed nonarchimedean field with ring of integers \(\mathcal{O}_{C^{\prime}}\). Consider a polarized abelian variety \((A,\lambda)\) over \(C^{\prime}\) with \(\mathcal{O}_{B}\)-structure, and a degeneration \(\mathcal{A}\) of \(A\) over \(\mathcal{O}_{C^{\prime}}\). Then this uniquely determines a short exact sequence
\[0\to T\to\mathcal{G}\to\mathcal{B}\to 0\]
where \(T\) is a torus, \(\mathcal{B}\) is an abelian scheme over \(\mathcal{O}_{C^{\prime}}\), and \(\mathcal{G}\) is the Raynaud extension. Let \(X=\mathbb{X}_{*}(T)\), which is a free abelian group of finite rank. The lattice \(X\) has an action of \(\mathcal{O}_{B}\), and so does \(\mathcal{B}\). Then, \(\mathcal{G}\) determines, and is uniquely determined by, an \(\mathcal{O}_{B}\)-linear map \(c:X\to\mathcal{B}^{\vee}\). Similarly, we can consider a degeneration \(\mathcal{A}^{\vee}\) over \(\mathcal{O}_{C^{\prime}}\) of the dual \(A^{\vee}/C^{\prime}\), which gives us a short exact sequence
\[0\to T^{\vee}\to\mathcal{G}^{\vee}\to\mathcal{B}^{\vee}\to 0,\]
and if we similarly let \(Y=\mathbb{X}_{*}(T^{\vee})=\mathbb{X}^{*}(T)\), the extension \(\mathcal{G}\) determines, and is uniquely determined by an \(\mathcal{O}_{B}\)-linear map \(c^{\vee}:Y\to\mathcal{B}\).
Moreover, the polarization \(\mathcal{G}\to\mathcal{G}^{\vee}\) determines and is uniquely determined by the data of the polarization on \(\mathcal{B}\), and an injective \(\mathcal{O}_{B}\)-linear map \(\phi:Y\to X\).
Given \(\tilde{Z}\), an Igusa cusp label of level \(K^{p}(N)\), we want to understand the \((C,\mathcal{O}_{C})\)-valued points of \(\mathfrak{Ig}^{b,\mathrm{tor}}\) which specialize to points in \(\mathrm{Ig}^{b,\mathrm{tor}}_{\tilde{Z}}\). In particular, note that the data of the Igusa cusp
label means we have fixed an \(\mathcal{O}_{B}\)-stable filtration
\[0=Z_{b,-3}\subset Z_{b,-2}\subset Z_{b,-1}\subset\mathbb{X}_{b},\]
as well as an \(\mathcal{O}_{B}\)-linear splitting \(\delta_{b}\) of this filtration. From the proof of [1, Theorem 4.3.10], we see that \((C^{\prime},\mathcal{O}_{C^{\prime}})\)-valued points are hence given by the following data:
1. A polarized abelian variety \(\mathcal{B}\) over \(\mathcal{O}_{C^{\prime}}\) with \(\mathcal{O}_{B}\)-action,
2. An \(\mathcal{O}_{B}\) -linear extension \[0\to T\to\mathcal{G}\to\mathcal{B}\to 0\] where \(\mathbb{X}_{*}(T)=X\); equivalently, an \(\mathcal{O}_{B}\) -linear map \(c:X\to\mathcal{B}^{\vee}\).
3. An \(\mathcal{O}_{B}\)-linear isomorphism \[\rho:\mathcal{G}[p^{\infty}]\simeq Z_{b,-1}\] that is compatible with the identification \(T[p^{\infty}]=Z_{b,-2}\). By the splitting \(\delta_{b}\), this induces a splitting of \[0\to T[p^{\infty}]\to\mathcal{G}[p^{\infty}]\to\mathcal{B}[p^{\infty}]\to 0,\] and in particular \(c\) extends to a map \(c:X[1/p]\to\mathcal{B}^{\vee}\).
4. An \(\mathcal{O}_{B}\) -linear extension \[0\to T^{\vee}\to\mathcal{G}^{\vee}\to\mathcal{B}^{\vee}\to 0\] where \(\mathbb{X}^{*}(T)=Y\). Equivalently, an \(\mathcal{O}_{B}\) -linear map \(c^{\vee}:Y\to\mathcal{B}\). By the splitting \(\delta_{b}\), as well as by duality (using that \(\mathcal{G}[p^{\infty}]\) and \(\mathcal{B}[p^{\infty}]\) will be principally polarized), we have a splitting of \[0\to T^{\vee}[p^{\infty}]\to\mathcal{G}^{\vee}[p^{\infty}]\to\mathcal{B}^{ \vee}[p^{\infty}]\to 0\] and in particular we extend \(c^{\vee}\) to a map \(c^{\vee}:Y[1/p]\to\mathcal{B}\).
5. An \(\mathcal{O}_{C^{\prime}}\) -point of \(\mathcal{P}^{\prime}_{\Sigma_{\Phi}}\) whose special fibre lies in the boundary. Here, we note that away from the boundary we have a torsor \(\mathcal{P}^{\prime}\) over \(\mathcal{O}_{C^{\prime}}\) for the torus \(E_{\Phi}\) with character group \(S_{\Phi}\), parametrizing lifts of \(c^{\vee}\) to \(\iota:Y[1/p]\to\mathcal{G}\). \(\mathcal{P}^{\prime}_{\Sigma_{\Phi}}\supset\mathcal{P}^{\prime}\) is the torus embedding defined by the admissible rational polyhedral cone decomposition \(\Sigma_{\Phi}\) for the cusp \((Z,\Phi)\).
## 3. Mantovan's Formula and the Hodge-Tate Period Morphism
For \(K\subset\mathbf{G}(\mathbb{A}_{f})\) a sufficiently small open compact, we define \(\mathcal{S}_{K}:=(\operatorname{Sh}(\mathbf{G},X)_{K}\otimes_{E}E_{\mathfrak{p}})^{\mathrm{ad}}\) to be the adic space over \(\operatorname{Spa}(E_{\mathfrak{p}})\) attached to the Shimura variety. When \(K=K_{p}^{\operatorname{hs}}K^{p}\) with \(K_{p}^{\operatorname{hs}}\) a hyperspecial subgroup, the space \(\mathcal{S}_{K}\) has a canonical integral model \(\mathfrak{S}_{K}\) over \(\mathcal{O}_{E,\mathfrak{p}}\). Let \(\mathcal{S}_{K}^{\circ}\subset\mathcal{S}_{K}\) be the good reduction locus, i.e. the open subspace of \(\mathcal{S}_{K}\) obtained from the adic generic fiber of the \(p\)-adic completion \(\mathfrak{S}_{K}^{\wedge}\) of the scheme \(\mathfrak{S}_{K}\). We define \(\mathcal{S}_{K^{\prime}}^{\circ}\subset\mathcal{S}_{K^{\prime}}\) for \(K^{\prime}\subset K\) by taking the preimage under the natural map from \(\mathcal{S}_{K^{\prime}}\) to \(\mathcal{S}_{K}\). We also consider the adic spaces \(\mathcal{S}_{K}^{*}\) and \(\mathcal{S}_{K}^{\mathrm{tor}}\) attached to the minimal and toroidal compactifications of the Shimura variety.
Associated to the \(G(\mathbb{R})\)-conjugacy class \(X\), we have a minuscule cocharacter \(\mu\) of \(G_{C}\) which is defined over \(E_{\mathfrak{p}}\). Let \(\mathcal{F}\ell_{G,\mu^{-1}}\) be the flag variety over \(\operatorname{Spa}(C)\) associated to \(\mu^{-1}\) the dominant inverse of \(\mu\). Since \(\mu\) is minuscule, via the Bialynicki-Birula isomorphism, when viewed as a diamond the flag variety \(\mathcal{F}\ell_{G,\mu^{-1}}\) represents the following functor on \(\operatorname{Perf}_{C}\). Given any \(S\in\operatorname{Perf}_{C}\), \(\mathcal{F}\ell_{G,\mu^{-1}}(S)\) is the set of modifications of vector bundles \(\mathcal{E}\dashrightarrow\mathcal{E}_{0}\) of meromorphy \(\mu\) on \(X_{S}\), the relative Fargues-Fontaine curve over \(S\), such that the modification occurs over the untilt of \(S\) corresponding to the map \(S\to\operatorname{Spd}(C)\).
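As a standard illustrative example (not needed in the sequel): for \(G=\mathrm{GL}_{n}\) and the minuscule cocharacter \(\mu=(1,0,\ldots,0)\), the modifications in question have a single rank-one jump, and the flag variety is
\[\mathcal{F}\ell_{G,\mu^{-1}}\simeq\mathbb{P}^{n-1},\]
so for \(n=2\) the Hodge-Tate period morphism below takes values in \(\mathbb{P}^{1}\).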
Let
\[\mathcal{S}_{K^{p}}^{\circ}:=\lim_{K_{p}}\mathcal{S}_{K^{p}K_{p}}^{\circ}\subset\mathcal{S}_{K^{p}}:=\lim_{K_{p}}\mathcal{S}_{K^{p}K_{p}}\subset\mathcal{S}_{K^{p}}^{\mathrm{tor}}:=\lim_{K_{p}}\mathcal{S}_{K^{p}K_{p}}^{\mathrm{tor}}\to\mathcal{S}_{K^{p}}^{*}:=\lim_{K_{p}}\mathcal{S}_{K^{p}K_{p}}^{*}\]
be the associated perfectoid Shimura varieties. We also consider \(\overline{\mathcal{S}}^{\circ}_{K^{p},C}\), the canonical compactification of the good reduction locus. This will be a subspace of \(\mathcal{S}_{K^{p},C}\), since \(\mathcal{S}_{K^{p},C}\) is partially proper. Caraiani-Scholze [11, SS2.1] consider the Hodge-Tate period morphism on \(\mathcal{S}_{K^{p}}\)
\[\pi_{\mathrm{HT}}:\mathcal{S}_{K^{p},C}\to\mathcal{F}\ell_{G,\mu^{-1}},\]
which records the relative position of the Hodge-Tate filtration associated with the \(p\)-divisible group. This extends [11, SS4.1] to a Hodge-Tate period morphism on the minimal compactification
\[\pi^{*}_{\mathrm{HT}}:\mathcal{S}^{*}_{K^{p},C}\to\mathcal{F}\ell_{G,\mu^{-1}}\]
and toroidal compactification
\[\pi^{\mathrm{tor}}_{\mathrm{HT}}:\mathcal{S}^{\mathrm{tor}}_{K^{p},C}\to \mathcal{F}\ell_{G,\mu^{-1}}.\]
We write \(\pi^{\circ}_{\mathrm{HT}}\) for the restriction to the good reduction locus, and \(\overline{\pi}^{\circ}_{\mathrm{HT}}\) for the canonical compactification of \(\pi^{\circ}_{\mathrm{HT}}\), where we note that this again maps to \(\mathcal{F}\ell_{G,\mu^{-1}}\) as this is proper over \(\mathrm{Spa}(C)\).
These maps have the following properties:
1. \(\pi^{*}_{\mathrm{HT}}\) and \(\pi^{\mathrm{tor}}_{\mathrm{HT}}\) are partially proper and qcqs; hence, proper.
2. \(\pi_{\mathrm{HT}}\) and \(\overline{\pi}^{\circ}_{\mathrm{HT}}\) are partially proper, but not always qcqs.
3. \(\pi^{\circ}_{\mathrm{HT}}\) is qcqs, but not partially proper.
With these properties in mind, let us study the fibers of these maps. For our purposes, we will focus on the compactly supported cohomology of \(\mathcal{S}_{K^{p},C}\) and in turn the sheaf \(R\pi_{\mathrm{HT}!}(\overline{\mathbb{F}}_{\ell})\) on \(\mathcal{F}\ell_{G,\mu^{-1}}\).
_Remark 3.1_.: We note that it is always true that the compactly supported cohomology at infinite level is the colimit of the compactly supported cohomology at finite levels, but, for usual cohomology, one needs to assume the spaces are qcqs for this to be true (e.g. the tower defined by the good reduction locus or the minimal/toroidal compactifications).
Our goal is to describe the stalks of \(R\pi_{\mathrm{HT}!}(\overline{\mathbb{F}}_{\ell})\) at a geometric point \(x:\mathrm{Spa}(C,C^{+})\to\mathcal{F}\ell_{G,\mu^{-1}}\). We assume the geometric point \(x\) factors through the adic Newton stratum \(\mathcal{F}\ell_{G,\mu^{-1}}^{b}\) for \(b\in B(G,\mu)\), and choose a completely slope divisible \(p\)-divisible group \(\mathbb{X}_{b}\) over \(\overline{\mathbb{F}}_{p}\) corresponding to \(b\in B(G,\mu)\). Let \(\mathrm{Ig}^{b}\) be the associated perfect Igusa variety as defined in §2.2, with toroidal compactification \(\mathrm{Ig}^{b,\mathrm{tor}}\) and minimal compactification \(\mathrm{Ig}^{b,*}\). Recall that we have associated perfectoid Igusa varieties, \(\mathfrak{Ig}^{b}_{C},\mathfrak{Ig}^{b,\mathrm{tor}}_{C},\mathfrak{Ig}^{b,*}_{C}\), which should model the fibers of \(\overline{\pi}^{\circ}_{\mathrm{HT}}\), \(\pi^{\mathrm{tor}}_{\mathrm{HT}}\), and \(\pi^{*}_{\mathrm{HT}}\), respectively. We let \(\partial\mathfrak{Ig}^{b,*}_{C}\) and \(\partial\mathfrak{Ig}^{b,\mathrm{tor}}_{C}\) be the Zariski closed subspaces attached to the boundaries \(\partial\mathrm{Ig}^{b,*}\) and \(\partial\mathrm{Ig}^{b,\mathrm{tor}}\), respectively.
Let \(g_{b}:\mathrm{Ig}^{b}\hookrightarrow\mathrm{Ig}^{b,*}\) be the natural open immersion of \(\overline{\mathbb{F}}_{p}\)-schemes. We define the partially compactly supported cohomology
\[R\Gamma_{c-\partial}(\mathrm{Ig}^{b},\overline{\mathbb{F}}_{\ell}):=R\Gamma( \mathrm{Ig}^{b,*},g_{b!}(\overline{\mathbb{F}}_{\ell})).\]
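Concretely, since \(g_{b!}\) denotes extension by zero along the open immersion \(g_{b}\), this complex sits in a distinguished triangle for the closed complement \(\partial\mathrm{Ig}^{b,*}=\mathrm{Ig}^{b,*}\setminus\mathrm{Ig}^{b}\):
\[R\Gamma_{c-\partial}(\mathrm{Ig}^{b},\overline{\mathbb{F}}_{\ell})\to R\Gamma(\mathrm{Ig}^{b,*},\overline{\mathbb{F}}_{\ell})\to R\Gamma(\partial\mathrm{Ig}^{b,*},\overline{\mathbb{F}}_{\ell})\xrightarrow{+1}.\]
This is the shape in which the complex will reappear in the proof of Corollary 3.6 below.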
Our goal is to show that this computes the fibers of \(R\pi_{\mathrm{HT}!}(\overline{\mathbb{F}}_{\ell})\) for geometric points in \(\mathcal{F}\ell_{G,\mu^{-1}}^{b}\).
To get a clearer picture of how these spaces interact with each other, we have the following theorem.
**Theorem 3.2**.: _[_11, 12_]_ _There exists a diagram of spaces relating the perfectoid Igusa varieties to the fibers of the Hodge-Tate period morphisms, with maps \(i:\mathfrak{Ig}^{b}_{C}\to(\overline{\pi}^{\circ}_{\mathrm{HT}})^{-1}(x)\), \({}^{*}i:\mathfrak{Ig}^{b,*}_{C}\to(\pi^{*}_{\mathrm{HT}})^{-1}(x)\), and \({}^{\mathrm{tor}}i:\mathfrak{Ig}^{b,\mathrm{tor}}_{C}\to(\pi^{\mathrm{tor}}_{\mathrm{HT}})^{-1}(x)\),_
_where the maps \(i\), \({}^{*}i\), and \({}^{\operatorname{tor}}i\) are open immersions whose image contains all rank \(1\) points. Moreover, the fibers \((\pi_{\operatorname{HT}}^{*})^{-1}(x)\) and \((\pi_{\operatorname{HT}}^{\operatorname{tor}})^{-1}(x)\) are partially proper, so in particular \({}^{*}i\) and \({}^{\operatorname{tor}}i\) are canonical compactifications in the sense of [13, Proposition 18.6]._
Proof.: This theorem in the case where \((\mathbf{G},X)\) is of PEL type \(A\) attached to a globally quasi-split unitary group of even dimension is [13, Theorem 2.7.2, Theorem 4.5.1], and the general case of PEL type \(A\) or \(C\) is proven in [11, Theorems 4.3.10, 4.3.12].
We will also combine this with the following result.
**Theorem 3.3**.: _[_13, 12_]_ _The partially minimally compactified Igusa variety \(\operatorname{Ig}^{b,*}\) is affine; in particular, the attached adic space \(\mathfrak{Ig}^{b,*}_{C}\) is affinoid perfectoid. Moreover, there exists a proper map_
\[\mathfrak{Ig}^{b,\operatorname{tor}}_{C}\to\mathfrak{Ig}^{b,*}_{C}\]
_which induces an isomorphism on global sections._
Proof.: For the affineness, the case of PEL type \(A\) attached to a globally quasi-split unitary group of even dimension is covered by [13, Theorem 1.7] and [13, Lemma 4.5.2]. The general case of PEL type \(A\) or \(C\) is covered in [11, Lemma 3.3.7]. To see the properness, we note that the map
\[\mathfrak{Ig}^{b,\operatorname{tor}}_{C}\to\mathfrak{Ig}^{b,*}_{C}\]
is the one appearing in the Stein factorization described in [13, Proposition 3.3.4] and [11, Proposition 3.3.5]. This also shows that one has an isomorphism on global sections.
We will also need the following Corollary.
**Corollary 3.4**.: _The boundary \(\partial\mathfrak{Ig}^{b,\operatorname{tor}}_{C}\) is quasi-compact, and \(\partial\mathfrak{Ig}^{b,*}_{C}\) is affinoid perfectoid (in particular, quasi-compact)._
Proof.: The fact that \(\mathfrak{Ig}^{b,*}_{C}\) is affinoid perfectoid follows from the previous Theorem. Moreover, \(\partial\mathfrak{Ig}^{b,*}_{C}\) is a Zariski closed subspace, since it arises as the adic generic fiber of a formal model of the perfect closed subscheme \(\partial\operatorname{Ig}^{b,*}\subset\operatorname{Ig}^{b,*}\); a Zariski closed subspace of an affinoid perfectoid space is again affinoid perfectoid. The claim for the toroidal compactification follows since the map \(\mathfrak{Ig}^{b,\operatorname{tor}}_{C}\to\mathfrak{Ig}^{b,*}_{C}\) is proper, and maps the boundary \(\partial\mathfrak{Ig}^{b,\operatorname{tor}}_{C}\) to \(\partial\mathfrak{Ig}^{b,*}_{C}\).
It is natural to wonder how one could describe the fiber of \(\pi_{\operatorname{HT}}\) in terms of the spaces described above. In particular, we now deduce the following Corollary.
**Corollary 3.5**.: _For \(x:\operatorname{Spa}(C,C^{+})\to\mathcal{F}\ell^{b}_{G,\mu^{-1}}\) a geometric point, we have isomorphisms_
\[\pi_{\operatorname{HT}}^{-1}(x)\simeq\overline{\mathfrak{Ig}}^{b,*}_{C}\setminus\overline{\partial\mathfrak{Ig}}^{b,*}_{C}\simeq\overline{\mathfrak{Ig}}^{b,\operatorname{tor}}_{C}\setminus\overline{\partial\mathfrak{Ig}}^{b,\operatorname{tor}}_{C}\]
_induced by the natural open immersions \(\pi_{\operatorname{HT}}^{-1}(x)\hookrightarrow(\pi_{\operatorname{HT}}^{*})^{-1}(x)\simeq\overline{\mathfrak{Ig}}^{b,*}_{C}\) (resp. \(\pi_{\operatorname{HT}}^{-1}(x)\hookrightarrow(\pi_{\operatorname{HT}}^{\operatorname{tor}})^{-1}(x)\simeq\overline{\mathfrak{Ig}}^{b,\operatorname{tor}}_{C}\)), as given by Theorem 3.2. Here \(\overline{\partial\mathfrak{Ig}}^{b,*}_{C}\) (resp. \(\overline{\partial\mathfrak{Ig}}^{b,\operatorname{tor}}_{C}\)) is the Zariski closed subset of \(\overline{\mathfrak{Ig}}^{b,*}_{C}\) (resp. \(\overline{\mathfrak{Ig}}^{b,\operatorname{tor}}_{C}\)) defined by the canonical compactification of the boundaries \(\partial\mathfrak{Ig}^{b,*}_{C}\subset\mathfrak{Ig}^{b,*}_{C}\) (resp. \(\partial\mathfrak{Ig}^{b,\operatorname{tor}}_{C}\subset\mathfrak{Ig}^{b,\operatorname{tor}}_{C}\))._
Proof.: We first establish the claim for the toroidal compactification. We consider the closed immersion
\[\overline{\mathfrak{Ig}}^{b,\operatorname{tor}}_{C}\times_{\mathcal{S}^{\operatorname{tor}}_{K^{p},C}}\partial\mathcal{S}^{\operatorname{tor}}_{K^{p},C}\hookrightarrow\overline{\partial\mathfrak{Ig}}^{b,\operatorname{tor}}_{C}\]
obtained by base-changing the closed immersion \(\partial\mathcal{S}^{\operatorname{tor}}_{K^{p},C}\hookrightarrow\mathcal{S}^{ \operatorname{tor}}_{K^{p},C}\) to the fiber \((\pi_{\operatorname{HT}}^{\operatorname{tor}})^{-1}(x)\) and applying Theorem 3.2. To show the desired claim, it suffices to show this is an isomorphism. Note that both the LHS and RHS are partially proper; therefore, to show this is an isomorphism, it suffices to show it induces an isomorphism on rank \(1\) points. In particular, given \(C^{\prime}/C\) a complete
algebraically closed non-archimedean field, we claim that the natural square of \((C^{\prime},\mathcal{O}_{C^{\prime}})\)-points formed by \(\partial\mathfrak{Ig}^{b,\operatorname{tor}}_{C}\), \(\mathfrak{Ig}^{b,\operatorname{tor}}_{C}\), \(\partial\mathcal{S}^{\operatorname{tor}}_{K^{p},C}\), and \(\mathcal{S}^{\operatorname{tor}}_{K^{p},C}\) is Cartesian.
This can be checked using the moduli interpretation, as in the proof of [15, Theorem 4.4.1]. In particular, given a \(\operatorname{Spa}(C^{\prime},\mathcal{O}_{C^{\prime}})\)-point of \(\mathfrak{Ig}_{C}^{b,\operatorname{tor}}\) specializing to a boundary component indexed by an Igusa cusp label \(\tilde{Z}\) (see §2.2.2 for the definition of Igusa cusp label), the discussion in §2.2.3 shows that it corresponds to the datum of \((\mathcal{B},\mathcal{G},\mathcal{G}^{\vee},\rho,y)\), where
1. \(\mathcal{B}\) is an abelian scheme over \(\mathcal{O}_{C^{\prime}}\) with polarization, and \(\mathcal{O}_{B}\)-structure,
2. \(\mathcal{G}\) is a Raynaud extension \[0\to T\to\mathcal{G}\to\mathcal{B}\to 0\] of \(\mathcal{B}\) by a torus \(T\) with cocharacter group \(X\),
3. \(\mathcal{G}^{\vee}\) is a Raynaud extension \[0\to T^{\vee}\to\mathcal{G}^{\vee}\to\mathcal{B}^{\vee}\to 0\] of \(\mathcal{B}^{\vee}\) by a torus \(T^{\vee}\) with cocharacter group \(Y\),
4. \(\rho\) is an \(\mathcal{O}_{B}\)-linear isomorphism \(\rho:\mathcal{G}[p^{\infty}]\simeq Z_{b,-1}\), extending the isomorphism \(T[p^{\infty}]\simeq Z_{b,-2}\),
5. \(y\in\mathcal{P}_{\Sigma_{\Phi}}(\mathcal{O}_{C^{\prime}})\), where \(\mathcal{P}_{\Sigma_{\Phi}}\) is the toroidal compactification (determined by an admissible rpcd \(\Sigma_{\Phi}\), as in §2.1.2) of a torsor under a torus \(E_{\Phi}/\mathcal{O}_{C^{\prime}}\), whose character group is given by \(S_{\Phi}\), and \(y\) is a point whose special fiber lies in the boundary.
The natural map \(\mathfrak{Ig}^{b,\operatorname{tor}}(C^{\prime},\mathcal{O}_{C^{\prime}})\hookrightarrow\mathcal{S}^{\operatorname{tor}}_{K^{p},C}(C^{\prime},\mathcal{O}_{C^{\prime}})\) is given by forgetting the trivialization \(\rho:\mathcal{G}[p^{\infty}]\simeq Z_{b,-1}\). A point lies in the boundary \(\partial\mathfrak{Ig}_{C}^{b,\operatorname{tor}}\) precisely when \(y\) is a point whose special fiber lies in the boundary of \(\mathcal{P}_{\Sigma_{\Phi}}\) for some Igusa cusp label \(\tilde{Z}\) which is not the trivial one. Let \(Z=(Z,\Phi)\) be the cusp label over which \(\tilde{Z}\) lives. This is then equivalent to the condition that the image lies in the component of \(\mathcal{S}^{\operatorname{tor}}_{K^{p},C}\) indexed by \(Z=(Z,\Phi)\), where \(Z\) is not the trivial cusp label, and thus lies in the boundary \(\partial\mathcal{S}^{\operatorname{tor}}_{K^{p},C}\). The claim follows.
It remains to see the analogous claim for the minimal compactification. This follows easily using Theorem 3.2 and the fact that the proper surjective map \(\mathfrak{Ig}_{C}^{b,\operatorname{tor}}\to\mathfrak{Ig}_{C}^{b,*}\) sends \(\partial\mathfrak{Ig}_{C}^{b,\operatorname{tor}}\) to \(\partial\mathfrak{Ig}_{C}^{b,*}\) by construction.
We have the following Corollary.
**Corollary 3.6**.: _For a geometric point \(x:\operatorname{Spa}(C,C^{+})\to\mathcal{F}\ell_{G,\mu^{-1}}^{b}\), we have an identification:_
\[R\Gamma_{c-\partial}(\operatorname{Ig}^{b},\overline{\mathbb{F}}_{\ell})\simeq R\pi_{\operatorname{HT}!}(\overline{\mathbb{F}}_{\ell})_{x}.\]
Proof.: We have an identification \(\pi_{\operatorname{HT}}^{-1}(x)\simeq\overline{\mathfrak{Ig}}_{C}^{b,*}\setminus\overline{\partial\mathfrak{Ig}}_{C}^{b,*}\) by the previous Corollary, so, by proper base-change, we are tasked with computing the compactly supported cohomology of this space. We note that, by Theorem 3.3, the adic space \(\mathfrak{Ig}_{C}^{b,*}\) is affinoid perfectoid. It follows that the canonical compactification \(\overline{\mathfrak{Ig}}_{C}^{b,*}\simeq(\pi_{\operatorname{HT}}^{*})^{-1}(x)\) is also affinoid perfectoid, by [11, Proposition 18.7 (iv)]. In particular, it is quasi-compact and partially proper, hence proper. It therefore follows by excision1 that we have a distinguished triangle
Footnote 1: One easily checks that the excision sequence is exact on points, and this is sufficient by [11, Proposition 14.3].
\[R\Gamma_{c}(\pi_{\operatorname{HT}}^{-1}(x),\overline{\mathbb{F}}_{\ell}) \to R\Gamma(\overline{\mathfrak{Ig}}_{C}^{b,*},\overline{\mathbb{F}}_{\ell}) \to R\Gamma(\overline{\partial\mathfrak{Ig}}_{C}^{b,*},\overline{\mathbb{F}}_ {\ell})\overset{+1}{\longrightarrow}. \tag{3}\]
Applying Theorem 3.2 again, we know that \(k:\mathfrak{Ig}^{b,*}_{C}\hookrightarrow\overline{\mathfrak{Ig}}_{C}^{b,*}\) is a qcqs open immersion of perfectoid spaces inducing an isomorphism on rank \(1\) points, and it follows that the same is true for the induced map on the Zariski closed subspaces \({}^{\partial}k:\partial\mathfrak{Ig}^{b,*}_{C}\hookrightarrow\overline{\partial\mathfrak{Ig}}_{C}^{b,*}\). Therefore, we can apply [13, Lemma 4.4.2], which tells us that the natural maps
\[\overline{\mathbb{F}}_{\ell}\to k_{*}(\overline{\mathbb{F}}_{\ell})\]
\[\overline{\mathbb{F}}_{\ell}\to{}^{\partial}k_{*}(\overline{\mathbb{F}}_{\ell})\]
are isomorphisms, giving identifications \(R\Gamma(\overline{\mathfrak{Ig}}_{C}^{b,*},\overline{\mathbb{F}}_{\ell}) \simeq R\Gamma(\mathfrak{Ig}_{C}^{b,*},\overline{\mathbb{F}}_{\ell})\) and \(R\Gamma(\partial\overline{\mathfrak{Ig}}_{C}^{b,*},\overline{\mathbb{F}}_{ \ell})\simeq R\Gamma(\partial\mathfrak{Ig}_{C}^{b,*},\overline{\mathbb{F}}_{ \ell})\). Now, by [13, Lemma 4.4.3], we have further identifications of \(R\Gamma(\partial\mathfrak{Ig}_{C}^{b,*},\overline{\mathbb{F}}_{\ell})\) and \(R\Gamma(\mathfrak{Ig}_{C}^{b,*},\overline{\mathbb{F}}_{\ell})\) with the cohomology of the perfect schemes \(\partial\mathrm{Ig}^{b,*}\) and \(\mathrm{Ig}^{b,*}\), respectively. Substituting this into the triangle (3), we get a distinguished triangle
\[R\Gamma_{c}(\pi_{\mathrm{HT}}^{-1}(x),\overline{\mathbb{F}}_{\ell})\to R \Gamma(\mathrm{Ig}^{b,*},\overline{\mathbb{F}}_{\ell})\to R\Gamma(\partial \mathrm{Ig}^{b,*},\overline{\mathbb{F}}_{\ell})\xrightarrow{+1}.\]
By applying quasi-compact base-change [14, Proposition 17.6], and then using that the inclusion \(\partial\mathfrak{Ig}_{C}^{b,*}\subset\mathfrak{Ig}_{C}^{b,*}\) is induced from taking the rigid generic fiber over \(C\) of Witt vectors applied to \(\partial\mathrm{Ig}^{b,*}\subset\mathrm{Ig}^{b,*}\), we identify the last map with the natural restriction map on cohomology. This identifies the first term with precisely the partially compactly supported cohomology, as desired.
We will combine this with the following proposition, which already hints at our expectation that \(R\pi_{\mathrm{HT}!}(\overline{\mathbb{F}}_{\ell})\) is connective in some suitable perverse \(t\)-structure.
**Proposition 3.7**.: _Let \(d_{b}:=\langle 2\rho_{G},\nu_{b}\rangle=\dim(\mathrm{Ig}^{b,*})=\dim(\mathrm{Ig}^{b})\). Then the cohomology of the complex_
\[R\Gamma_{c-\partial}(\mathrm{Ig}^{b},\overline{\mathbb{F}}_{\ell})\simeq R\pi_{\mathrm{HT}!}(\overline{\mathbb{F}}_{\ell})_{x}\]
_is concentrated in degrees \(\leq d_{b}\)._
Proof.: We saw in Theorem 3.3 that \(\mathrm{Ig}^{b,*}\) is an affine scheme. So we would like to apply Artin vanishing; however, \(\mathrm{Ig}^{b,*}\) is also a perfect scheme so in particular not of finite type. To remedy this, consider the pro-etale cover
\[\mathrm{Ig}^{b}\to\mathcal{C}^{\mathrm{perf}}_{\mathbb{X}_{b}},\]
with Galois group \(\mathrm{Aut}(\mathbb{X})(\overline{\mathbb{F}}_{p})\) over the perfection of the central leaf attached to \(\mathbb{X}_{b}\). This is obtained as the perfection of the limit of the finite etale covers
\[\mathrm{Ig}^{b}_{m}\to\mathcal{C}_{\mathbb{X}_{b}},\]
described in §2.2.3, as shown in [13, Proposition 4.3.8]. These spaces are of finite type over \(\overline{\mathbb{F}}_{p}\). We now define \(\mathrm{Ig}^{b,*}_{m}\) to be the normalization of \(\mathcal{C}^{*}_{\mathbb{X}}\) in the finite etale cover \(\mathrm{Ig}^{b}_{m}\to\mathcal{C}_{\mathbb{X}_{b}}\). By [12, Theorem 3.33], \(\mathcal{C}^{*}_{\mathbb{X}}\) is affine; therefore, it follows that \(\mathrm{Ig}^{b,*}_{m}\) is a normal and affine scheme which will be of finite type, since \(\mathrm{Ig}^{b}_{m}\) is. It follows, by definition of \(\mathrm{Ig}^{b,*}\), that it is the perfection of \(\lim_{m\geq 1}\mathrm{Ig}^{b,*}_{m}\). Therefore, since passing to perfections does not change the etale cohomology, we can conclude by combining Artin vanishing with an application of [12, Tag 09QY] to the system of sheaves \(g_{b,m!}(\overline{\mathbb{F}}_{\ell})\), where \(g_{b,m}:\mathrm{Ig}^{b}_{m}\to\mathrm{Ig}^{b,*}_{m}\) is the natural open inclusion at finite level.
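As a sanity check on the numerology (a standard example, not taken from the sources cited above): for the modular curve, i.e. \(G=\mathrm{GL}_{2}\) and \(\mu=(1,0)\), we have \(2\rho_{G}=(1,-1)\), and \(B(G,\mu)\) consists of the ordinary and supersingular elements. Then
\[d_{b}=\langle 2\rho_{G},\nu_{b}\rangle=\begin{cases}1,&\nu_{b}=(1,0)\ \text{(ordinary)},\\ 0,&\nu_{b}=(\tfrac{1}{2},\tfrac{1}{2})\ \text{(supersingular)},\end{cases}\]
matching the dimensions of the corresponding central leaves, over whose perfections \(\mathrm{Ig}^{b}\) is pro-finite etale.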
Now we would like to link this analysis with the semi-perversity of certain sheaves on \(\mathrm{Bun}_{G}\). We consider the Hodge-Tate period morphism
\[\pi_{\mathrm{HT}}:[\mathcal{S}_{K^{p}}/\underline{G(\mathbb{Q}_{p})}]\to[ \mathcal{F}\ell_{G,\mu^{-1}}/\underline{G(\mathbb{Q}_{p})}]\]
quotiented out by \(G(\mathbb{Q}_{p})\). We let \(h^{\to}:[\mathcal{F}\ell_{G,\mu^{-1}}/\underline{G(\mathbb{Q}_{p})}]\to[ \mathrm{Spd}(C)/\underline{G(\mathbb{Q}_{p})}]\simeq\mathrm{Bun}^{1}_{G}\) be the structure map quotiented out by \(G(\mathbb{Q}_{p})\). Note this is a proper map, since \(\overline{\mathcal{F}}\ell_{G,\mu^{-1}}\) is proper over \(\mathrm{Spd}(C)\).
Then we have an identification
\[R\Gamma_{c}(\mathcal{S}_{K^{p},C},\overline{\mathbb{F}}_{\ell})\simeq h_{*}^{\to}R\pi_{\mathrm{HT}!}(\overline{\mathbb{F}}_{\ell}) \tag{4}\]
of \(G(\mathbb{Q}_{p})\)-representations, and this computes the compactly supported torsion cohomology of the Shimura variety.
Similarly, we have a map
\[h^{\leftarrow}:[\mathcal{F}\ell_{G,\mu^{-1}}/\underline{G(\mathbb{Q}_{p})}] \to\mathrm{Bun}_{G}\]
remembering the isomorphism class of the bundle \(\mathcal{E}_{1}\) in the moduli interpretation of \(\mathcal{F}\ell_{G,\mu^{-1}}\) as a diamond. This defines a cohomologically smooth map by [11, Theorem IV.1.19], and the image identifies with the open subset \(B(G,\mu)\subset B(G)\) under the identification \(|\mathrm{Bun}_{G}|\simeq B(G)\) of topological spaces, where \(|\mathrm{Bun}_{G}|\) denotes the underlying topological space of \(\mathrm{Bun}_{G}\) and \(B(G)\) has the topology given by its natural partial ordering [10]. For each \(b\), we have a locally closed Harder-Narasimhan stratum \(j_{b}:\mathrm{Bun}_{G}^{b}\hookrightarrow\mathrm{Bun}_{G}\), and we can define the locally closed subset \([\mathcal{F}\ell_{G,\mu^{-1}}^{b}/\underline{G(\mathbb{Q}_{p})}]\) by pulling back this HN-stratum along \(h^{\leftarrow}\). This defines a locally closed stratification of \([\mathcal{F}\ell_{G,\mu^{-1}}/\underline{G(\mathbb{Q}_{p})}]\). Let \(i_{b}:[\mathcal{F}\ell_{G,\mu^{-1}}^{b}/\underline{G(\mathbb{Q}_{p})}]\hookrightarrow[\mathcal{F}\ell_{G,\mu^{-1}}/\underline{G(\mathbb{Q}_{p})}]\) denote the associated locally closed immersion. We write \(\pi_{\mathrm{HT}}^{b}:[\mathcal{S}_{K^{p},C}^{b}/\underline{G(\mathbb{Q}_{p})}]\to[\mathcal{F}\ell_{G,\mu^{-1}}^{b}/\underline{G(\mathbb{Q}_{p})}]\) (resp. \(\pi_{\mathrm{HT}}^{b,*}:[\mathcal{S}_{K^{p},C}^{b,*}/\underline{G(\mathbb{Q}_{p})}]\to[\mathcal{F}\ell_{G,\mu^{-1}}^{b}/\underline{G(\mathbb{Q}_{p})}]\)) for the pullbacks of \(\pi_{\mathrm{HT}}\) (resp. \(\pi_{\mathrm{HT}}^{*}\)) along \(i_{b}\). On the good reduction locus, we also have an additional stratification coming from pulling back the Newton stratification on the special fiber along the specialization map. There is a rather subtle point that this does not agree with the pullback of the locally closed strata \(\mathcal{F}\ell_{G,\mu^{-1}}^{b}\) (namely, the closure relations are opposite with respect to the partial ordering on \(B(G)\)). We write \(\mathcal{S}_{K^{p},C}^{b,\circ,\mathrm{rd}}\) for these Newton strata coming from the special fiber. There exists a natural map
\[\mathcal{S}_{K^{p},C}^{b,\circ,\mathrm{rd}}\times_{\mathcal{F}\ell_{G,\mu^{-1 }}}\mathcal{F}\ell_{G,\mu^{-1}}^{b}\hookrightarrow\mathcal{S}_{K^{p},C}^{b, \circ}\]
which is a qcqs open immersion containing all rank \(1\) points ([11, Page 68][12, Page 8]). We write \(\pi_{\mathrm{HT}}^{b,\circ}:[(\mathcal{S}_{K^{p},C}^{b,\circ,\mathrm{rd}} \times_{\mathcal{F}\ell_{G,\mu^{-1}}}\mathcal{F}\ell_{G,\mu^{-1}}^{b})/G( \mathbb{Q}_{p})]\to[\mathcal{F}\ell_{G,\mu^{-1}}^{b}/\underline{G(\mathbb{Q}_{ p})}]\) for the Hodge-Tate period map on this locus, and similarly we write \(\overline{\pi}_{\mathrm{HT}}^{b,\circ}:[\overline{\mathcal{S}}_{K^{p},C}^{b, \circ}/\underline{G(\mathbb{Q}_{p})}]\to[\mathcal{F}\ell_{G,\mu^{-1}}^{b}/ \underline{G(\mathbb{Q}_{p})}]\) for the induced map on the canonical compactification, where we note that this agrees with the canonical compactification of \([(\mathcal{S}_{K^{p},C}^{b,\circ,\mathrm{rd}}\times_{\mathcal{F}\ell_{G,\mu^ {-1}}}\mathcal{F}\ell_{G,\mu^{-1}}^{b})/\underline{G(\mathbb{Q}_{p})}]\) by the previous remark on rank \(1\) points and the fact that \(\mathcal{F}\ell_{G,\mu^{-1}}^{b}\) is partially proper2.
Footnote 2: The partial properness of these strata follows directly from the moduli interpretation, since the category of vector bundles on the Fargues-Fontaine curve is insensitive to the ring of definition.
Define the group diamond \(\mathcal{J}_{b}:=\mathrm{Aut}(\mathcal{E}_{b})\), as in [11, Proposition III.5.1]. We have an isomorphism \(\mathrm{Bun}_{G}^{b}\simeq[\mathrm{Spd}(C)/\mathcal{J}_{b}]\), and \(j_{b}:\mathrm{Bun}_{G}^{b}\hookrightarrow\mathrm{Bun}_{G}\) identifies it with the locally closed HN-stratum in \(\mathrm{Bun}_{G}\) defined by \(b\). There is a \(\mathcal{J}_{b}\)-torsor over the adic Newton strata \(\mathcal{F}\ell_{G,\mu^{-1}}^{b}\) given by rigidifying the bundle \(\mathcal{E}\) to be isomorphic to \(\mathcal{E}_{b}\). This gives a map
\[h_{b}^{\leftarrow}:[\mathcal{F}\ell_{G,\mu^{-1}}^{b}/\underline{G(\mathbb{Q}_{ p})}]\to[\mathrm{Spd}(C)/\mathcal{J}_{b}]\simeq\mathrm{Bun}_{G,C}^{b}\]
such that \(h^{\leftarrow}\circ i_{b}=j_{b}\circ h_{b}^{\leftarrow}\).
The perfectoid Igusa variety \(\mathfrak{Ig}_{C}^{b}\) comes equipped with an action of \(\mathcal{J}_{b}\). Namely, using [11, Proposition 4.2.11], we get an action on the trivialization of \(\mathbb{X}_{b}\), as in the moduli description in equation (2). This action extends to the formal model, giving rise to the action of \(\mathcal{J}_{b}\) on the generic fiber. This allows us to form the map of \(v\)-stacks:
\[\pi_{\mathfrak{Ig}}^{b}:[\mathfrak{Ig}_{C}^{b}/\mathcal{J}_{b}]\to[\mathrm{ Spd}(C)/\mathcal{J}_{b}].\]
We would like to say \(\pi_{\mathfrak{Ig}}^{b}\) pulls back to the map \(\pi_{\mathrm{HT}}^{b}\). However, as seen in Corollary 3.6, we need to account for the additional points in the fiber of the Hodge-Tate period morphism that are not seen
by the perfectoid Igusa varieties \(\mathfrak{Ig}_{C}^{b}\). To capture this, we need to show that \(\pi_{\mathfrak{Ig}}^{b}\) also extends to the partial minimal compactification. We have the following.
**Proposition 3.8**.: _[_23_, Corollary 9.43]_ _Assuming 1.11, the action of \(\mathcal{J}_{b}\) on the perfectoid Igusa variety \(\mathfrak{Ig}_{C}^{b}\) extends uniquely to an action on \(\mathfrak{Ig}_{C}^{b,*}\). In particular, by functoriality of the formation of the canonical compactification ([18, Proposition 18.6]), we have a map_
\[\pi_{\mathfrak{Ig}}^{b,*}:[\overline{\mathfrak{Ig}}_{C}^{b,*}/\mathcal{J}_{ b}]\to[\operatorname{Spd}(C)/\mathcal{J}_{b}]\]
_extending \(\pi_{\mathfrak{Ig}}^{b}\). This action preserves the boundary \(\partial\mathfrak{Ig}^{b,*}\), so, in particular, we also get a map_
\[\pi_{\mathfrak{Ig}}^{b,\partial}:[(\overline{\mathfrak{Ig}}_{C}^{b,*} \setminus\overline{\partial\mathfrak{Ig}}_{C}^{b,*})/\mathcal{J}_{b}]\to[ \operatorname{Spd}(C)/\mathcal{J}_{b}]\]
_by restriction._
Proof.: We consider the open immersion
\[g_{b}:\operatorname{Ig}^{b}\to\operatorname{Ig}^{b,*}\]
of perfect schemes, which we claim induces an isomorphism on global sections. To show this, we write \(g_{b}\) as the perfection of the limit of the corresponding maps at finite level
\[g_{b,m}:\operatorname{Ig}_{m}^{b}\hookrightarrow\operatorname{Ig}_{m}^{b,*},\]
as explained in the proof of Proposition 3.7. Under Assumption 1.11, we can apply the algebraic form of Hartogs' principle (see, for example, [15, Proposition III.2.9]) to the open inclusion \(g_{b,m}\) to conclude an isomorphism of global sections via restriction. This gives the corresponding claim for the map \(g_{b}\) of perfect schemes. In particular, we have an isomorphism
\[\mathcal{O}(\operatorname{Ig}^{b})\simeq\mathcal{O}(\operatorname{Ig}^{b,*}) \tag{5}\]
on global sections. Taking generic fibers of the corresponding integral models, we claim that we obtain an isomorphism
\[\mathcal{O}(\mathfrak{Ig}_{C}^{b})\simeq\mathcal{O}(\mathfrak{Ig}_{C}^{b,*}), \tag{6}\]
of global sections. We need to be a bit careful here with analytic sheafification. In particular, for an index set \(I\), we let \(\{U_{i}\}_{i\in I}\) be an affine covering of \(\operatorname{Ig}^{b}\), and compute global sections via the Cech complex
\[0\to\mathcal{O}(\operatorname{Ig}^{b})\to\prod_{i\in I}\mathcal{O}(U_{i})\to \prod_{i,j\in I}\mathcal{O}(U_{i}\cap U_{j})\to\cdots. \tag{7}\]
We let \(\mathfrak{U}_{i}\) be the formal schemes obtained by taking Witt vectors of this affine covering, with adic generic fibers \(\mathfrak{U}_{i,C}\) over \(C\). These form an affinoid perfectoid covering of the adic space \(\mathfrak{Ig}_{C}^{b}\), and, by the acyclicity of affinoid perfectoids [15, Theorem 1.8 (iv)], we have an exact sequence
\[0\to\mathcal{O}(\mathfrak{Ig}_{C}^{b})\to\prod_{i\in I}\mathcal{O}(\mathfrak{ U}_{i,C})\to\prod_{i,j\in I}\mathcal{O}(\mathfrak{U}_{i,C}\cap\mathfrak{U}_{j,C})\to\cdots. \tag{8}\]
It follows that the Cech complex (8) is obtained from the Cech complex (7) by taking Witt vectors followed by taking the completed tensor product with \(C\). Using this, we deduce that the identification (6) follows from the identification (5), as desired.
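Informally, and summarizing the comparison of the two Cech complexes just described, the identification (6) can be written out as
\[\mathcal{O}(\mathfrak{Ig}^{b}_{C})\simeq W(\mathcal{O}(\mathrm{Ig}^{b}))\widehat{\otimes}_{W(\overline{\mathbb{F}}_{p})}C\simeq W(\mathcal{O}(\mathrm{Ig}^{b,*}))\widehat{\otimes}_{W(\overline{\mathbb{F}}_{p})}C\simeq\mathcal{O}(\mathfrak{Ig}^{b,*}_{C}),\]
with the middle isomorphism induced by (5).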
Now \(\mathcal{J}_{b}\) acts on the LHS of (6), as discussed above. Using that \(\mathfrak{Ig}^{b,*}_{C}\) is affinoid perfectoid, this gives the desired action on \(\mathfrak{Ig}^{b,*}_{C}\), and one can see that it preserves the boundary by using the moduli interpretation of the toroidal compactifications, and the description of the \(J_{b}(\mathbb{Q}_{p})\)-action on cusp labels, as discussed in §2.2.2.
Lastly, we will consider the map \(\overline{\pi}^{b}_{\mathfrak{Ig}}:[\overline{\mathfrak{Ig}}^{b}_{C}/\mathcal{J}_{b}]\to[\mathrm{Spd}(C)/\mathcal{J}_{b}]\), given by taking the canonical compactification of \(\pi^{b}_{\mathfrak{Ig}}\), where we note, by [13, Proposition 4.2.22], that the \(v\)-stack \([\mathrm{Spd}(C)/\mathcal{J}_{b}]\) is partially proper over \(\mathrm{Spd}(C)\). We now have the following Proposition.
**Proposition 3.9**.: _The maps constructed above fit into the following Cartesian squares3_
Footnote 3: We emphasize that these are really diagrams of \(v\)-stacks and that all fiber products are formed in this category.
\[\begin{CD}
\left[(\mathcal{S}^{b,\circ,\mathrm{rd}}_{K^{p},C}\times_{\mathcal{F}\ell_{G,\mu^{-1}}}\mathcal{F}\ell^{b}_{G,\mu^{-1}})/\underline{G(\mathbb{Q}_{p})}\right] @>{\pi^{b,\circ}_{\mathrm{HT}}}>> \left[\mathcal{F}\ell^{b}_{G,\mu^{-1}}/\underline{G(\mathbb{Q}_{p})}\right]\\
@V{\mathfrak{h}^{\leftarrow}_{b}}VV @VV{h^{\leftarrow}_{b}}V\\
\left[\mathfrak{Ig}^{b}_{C}/\mathcal{J}_{b}\right] @>{\pi^{b}_{\mathfrak{Ig}}}>> [\mathrm{Spd}(C)/\mathcal{J}_{b}]
\end{CD} \tag{9}\]
_and_
\[\begin{CD}
\left[\overline{\mathcal{S}}^{b,\circ}_{K^{p},C}/\underline{G(\mathbb{Q}_{p})}\right] @>{\overline{\pi}^{b,\circ}_{\mathrm{HT}}}>> \left[\mathcal{F}\ell^{b}_{G,\mu^{-1}}/\underline{G(\mathbb{Q}_{p})}\right]\\
@V{h^{\leftarrow}_{b}}VV @VV{h^{\leftarrow}_{b}}V\\
\left[\overline{\mathfrak{Ig}}^{b}_{C}/\mathcal{J}_{b}\right] @>{\overline{\pi}^{b}_{\mathfrak{Ig}}}>> [\mathrm{Spd}(C)/\mathcal{J}_{b}]
\end{CD} \tag{10}\]
Proof.: Consider the moduli space of local shtukas \(\mathrm{Sht}(G,b,\mu)_{\infty,C}\), as defined in [14, §23]. This represents the functor sending \(S\in\mathrm{Perf}_{C}\) to the set of all pairs \((S^{\#},\alpha)\) where \(S^{\#}\) is the untilt of \(S\) coming from the map \(S\to\mathrm{Spd}(C)\), and \(\alpha\) is a modification \(\mathcal{E}_{b}\dashrightarrow\mathcal{E}_{0}\) with meromorphy along \(S^{\#}\) and bounded by \(\mu\). We have a local Hodge-Tate period morphism
\[\mathrm{Sht}(G,b,\mu)_{\infty,C}\to\mathcal{F}\ell^{b}_{G,\mu^{-1}},\]
which fits into the following Cartesian diagram coming from the definition of \(\mathrm{Sht}(G,b,\mu)_{\infty,C}\).
\[\begin{CD}
\mathrm{Sht}(G,b,\mu)_{\infty,C} @>>> \mathrm{Spd}(C)\\
@VVV @VVV\\
\mathcal{F}\ell^{b}_{G,\mu^{-1}} @>>> [\mathrm{Spd}(C)/\mathcal{J}_{b}]
\end{CD} \tag{11}\]
Let \(\mathrm{Sht}(G,b,\mu)_{\infty,C}\times^{\mathcal{J}_{b}}\mathfrak{Ig}^{b}_{C}\) denote the quotient of \(\mathrm{Sht}(G,b,\mu)_{\infty,C}\times_{C}\mathfrak{Ig}^{b}_{C}\) by \(\{(jx,j^{-1}y):j\in\mathcal{J}_{b},x\in\mathrm{Sht}(G,b,\mu)_{\infty,C},y\in \mathfrak{Ig}^{b}_{C}\}\). To see that the diagram (9) above is Cartesian, observe that (11) implies we have an isomorphism
\[\mathcal{F}\ell^{b}_{G,\mu^{-1}}\times_{[\mathrm{Spd}(C)/\mathcal{J}_{b}]}[ \mathfrak{Ig}^{b}_{C}/\mathcal{J}_{b}]\simeq\mathrm{Sht}(G,b,\mu)_{\infty,C} \times^{\mathcal{J}_{b}}\mathfrak{Ig}^{b}_{C}.\]
Moreover, we see, by [13, Corollary 4.3.19, Lemma 4.3.20], that we have an isomorphism
\[\mathrm{Sht}(G,b,\mu)_{\infty,C}\times_{C}\mathfrak{Ig}^{b}_{C}\simeq\mathcal{ S}^{b,\circ,\mathrm{rd}}_{K^{p}}\times_{\mathcal{F}\ell_{G,\mu^{-1}}}\mathrm{ Sht}(G,b,\mu)_{\infty,C}, \tag{12}\]
and again applying (11) implies that \(\mathcal{S}^{b,\circ,\mathrm{rd}}_{K^{p}}\times_{\mathcal{F}\ell_{G,\mu^{-1}}} \mathcal{F}\ell^{b}_{G,\mu^{-1}}\) is isomorphic to the quotient of \(\mathcal{S}^{b,\circ,\mathrm{rd}}_{K^{p}}\times_{\mathcal{F}\ell_{G,\mu^{-1}}} \mathrm{Sht}(G,b,\mu)_{\infty,C}\) by the action of \(\mathcal{J}_{b}\) (here \(\mathcal{J}_{b}\) acts via the action on the second factor). Thus, we have an isomorphism:
\[\mathrm{Sht}(G,b,\mu)_{\infty,C}\times^{\mathcal{J}_{b}}\mathfrak{Ig}^{b}_{C} \simeq\mathcal{S}^{b,\circ,\mathrm{rd}}_{K^{p}}\times_{\mathcal{F}\ell_{G,\mu^ {-1}}}\mathcal{F}\ell^{b}_{G,\mu^{-1}}.\]
This gives us the Cartesian diagram (9). Now, the natural map
\[\mathcal{S}^{b,\circ,\mathrm{rd}}_{K^{p},C}\times_{\mathcal{F}\ell_{G,\mu^{-1}}} \mathcal{F}\ell^{b}_{G,\mu^{-1}}\hookrightarrow\mathcal{S}^{b,\circ}_{K^{p},C}\]
is a qcqs open immersion, which is an isomorphism on rank \(1\) points. In turn, it induces an isomorphism of canonical compactifications over the partially proper strata \(\mathcal{F}\ell_{G,\mu^{-1}}^{b}\). Therefore, by passing to canonical compactifications over \(\mathcal{F}\ell_{G,\mu^{-1}}^{b}\), we deduce that diagram (10) is also Cartesian.
We now invoke a result of Zhang [23].
**Theorem 3.10**.: _[_23_, Theorem 1.3]_ _Assuming 1.11, for all \(b\in B(G,\mu)\) the Cartesian diagram (10) extends to a Cartesian diagram_
\[\begin{CD}
\left[\mathcal{S}^{b,*}_{K^{p},C}/\underline{G(\mathbb{Q}_{p})}\right] @>{\pi^{b,*}_{\mathrm{HT}}}>> \left[\mathcal{F}\ell^{b}_{G,\mu^{-1}}/\underline{G(\mathbb{Q}_{p})}\right]\\
@VVV @VV{h^{\leftarrow}_{b}}V\\
\left[\overline{\mathfrak{Ig}}^{b,*}_{C}/\mathcal{J}_{b}\right] @>{\pi^{b,*}_{\mathfrak{Ig}}}>> [\mathrm{Spd}(C)/\mathcal{J}_{b}]
\end{CD} \tag{13}\]
_of \(v\)-stacks._
_Remark 3.11_.: In fact Zhang shows a much stronger claim: there exists a series of larger Cartesian diagrams living over \(\operatorname{Bun}_{G,C}\) such that the diagrams (10) and (13) are the base-change along the inclusions \(j_{b}:\operatorname{Bun}_{G,C}^{b}\hookrightarrow\operatorname{Bun}_{G,C}\) of HN-strata, for varying \(b\in B(G,\mu)\).
_Remark 3.12_.: The rough idea behind proving this is to apply a relative Spa construction in the category of diamonds to the horizontal maps of the diagram (13) by invoking Hartogs' principle, as in Proposition 3.8.
We now state the key Corollary that we will need.
**Corollary 3.13**.: _Assuming 1.11, for all \(b\in B(G,\mu)\) we have a Cartesian diagram_
\[\begin{CD}
\left[\mathcal{S}^{b}_{K^{p},C}/\underline{G(\mathbb{Q}_{p})}\right] @>{\pi^{b}_{\mathrm{HT}}}>> \left[\mathcal{F}\ell^{b}_{G,\mu^{-1}}/\underline{G(\mathbb{Q}_{p})}\right]\\
@VVV @VV{h^{\leftarrow}_{b}}V\\
\left[(\overline{\mathfrak{Ig}}^{b,*}_{C}\setminus\overline{\partial\mathfrak{Ig}}^{b,*}_{C})/\mathcal{J}_{b}\right] @>{\pi^{b,\partial}_{\mathfrak{Ig}}}>> [\mathrm{Spd}(C)/\mathcal{J}_{b}]
\end{CD} \tag{14}\]
Proof.: This follows from the Cartesian diagram (13) and Corollary 3.5.
By the Cartesian diagram (14), if we look at the sheaf
\[i_{b}^{*}R\pi_{\operatorname{HT!}}(\overline{\mathbb{F}}_{\ell})\]
we can see that this is canonically identified with
\[h_{b}^{\leftarrow*}R\pi_{\mathfrak{Ig}!}^{b,\partial}(\overline{\mathbb{F}}_{\ell})\]
via proper base change. Moreover, we can identify \(R\pi_{\mathfrak{Ig}!}^{b,\partial}(\overline{\mathbb{F}}_{\ell})\) simply with the complex \(V_{b}:=R\Gamma_{c-\partial}(\operatorname{Ig}^{b},\overline{\mathbb{F}}_{\ell})\) of \(J_{b}(\mathbb{Q}_{p})\)-modules under the identification \(\operatorname{D}(\operatorname{Bun}_{G}^{b},\overline{\mathbb{F}}_{\ell})\simeq\operatorname{D}(J_{b}(\mathbb{Q}_{p}),\overline{\mathbb{F}}_{\ell})\), as in Corollary 3.6. We can further refine this using the following lemma.
**Lemma 3.14**.: _We have isomorphisms_
\[i_{b!}h_{b}^{\leftarrow*}(V_{b})\simeq h^{\leftarrow*}j_{b!}(V_{b})\]
_and_
\[i_{b*}h_{b}^{\leftarrow*}(V_{b})\simeq h^{\leftarrow*}j_{b*}(V_{b})\]
_of sheaves on \(\mathcal{F}\ell_{G,\mu^{-1}}\)._
Proof.: The first isomorphism follows from proper base-change. For the second isomorphism, we note that
\[h^{\leftarrow}:[\mathcal{F}\ell_{G,\mu^{-1}}/\underline{G(\mathbb{Q}_{p})}] \rightarrow\operatorname{Bun}_{G}\]
is cohomologically smooth, separated, and representable in locally spatial diamonds; therefore, the result follows by smooth base-change [11, Proposition 23.16 (2)].
_Remark 3.15_.: In particular, the graded pieces of the filtration
\[R\Gamma([\mathcal{F}\ell_{G,\mu^{-1}}/\underline{G(\mathbb{Q}_{p})}],i_{b!}i_{b}^{*}(R\pi_{\operatorname{HT}!}(\overline{\mathbb{F}}_{\ell})))\]
on the cohomology of the Shimura variety are identified with \(h_{*}^{\rightarrow}h^{\leftarrow*}j_{b!}(V_{b})\in\operatorname{D}(G(\mathbb{Q}_{p}),\overline{\mathbb{F}}_{\ell})\), and similarly for \(R\Gamma([\mathcal{F}\ell_{G,\mu^{-1}}/\underline{G(\mathbb{Q}_{p})}],i_{b*}i_{b}^{*}(R\pi_{\operatorname{HT}!}(\overline{\mathbb{F}}_{\ell})))\) and \(h_{*}^{\rightarrow}h^{\leftarrow*}j_{b*}(V_{b})\).
All in all, we get the following.
**Proposition 3.16**.: _Assuming 1.11, we have a filtration on \(R\Gamma_{c}(\mathcal{S}(\mathbf{G},X)_{K^{p},C},\overline{\mathbb{F}}_{\ell})\) by complexes of smooth representations of \(G(\mathbb{Q}_{p})\), with graded pieces isomorphic to \(h_{*}^{\rightarrow}h^{\leftarrow*}j_{b!}(V_{b})\), where \(V_{b}\simeq R\Gamma_{c-\partial}(\operatorname{Ig}^{b},\overline{\mathbb{F}}_{\ell})\)._
The functor \(h_{*}^{\rightarrow}h^{\leftarrow*}(-)\) appearing on the graded pieces is manifestly related to the action on \(\operatorname{D}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell})\) by Hecke operators. In particular, for each geometric dominant cocharacter \(\mu\in\mathbb{X}_{*}(T_{\overline{\mathbb{Q}}_{p}})^{+}\), we have a correspondence
\[\operatorname{Bun}_{G}\xleftarrow{\;h_{\mu}^{\leftarrow}\;}\operatorname{Hck}_{G,\leq\mu}\xrightarrow{\;h_{\mu}^{\rightarrow}\;}\operatorname{Bun}_{G},\]
where \(\operatorname{Hck}_{G,\leq\mu}\) is the stack parametrizing modifications \(\mathcal{E}_{1}\rightarrow\mathcal{E}_{2}\) of a pair of \(G\)-bundles with meromorphy bounded by \(\mu\) over the fixed untilt defined by \(C\). We define the Hecke operator [11, Section IX.2]
\[T_{\mu}:\operatorname{D}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell}) \rightarrow\operatorname{D}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{ \ell})^{BW_{E_{\mu}}}\]
\[A\mapsto h_{\mu*}^{\rightarrow}(h_{\mu}^{\leftarrow*}(A)\otimes^{\mathbb{ L}}\mathcal{S}_{\mu})\]
where \(E_{\mu}\) is the reflex field of \(\mu\) and \(\mathcal{S}_{\mu}\) is a sheaf on \(\operatorname{Hck}_{G,\leq\mu}\) attached to the highest weight tilting module \(\mathcal{T}_{\mu}\) by geometric Satake4.
Footnote 4: We note that, using [11, Proposition VII.5.2], we can replace the natural push-forward in the category of solid sheaves with the * push-forward in the usual category of Γ©tale \(\overline{\mathbb{F}}_{\ell}\)-sheaves when defining the Hecke operator.
If we now let \(\mu\) be the minuscule cocharacter appearing above, then the Bialynicki-Birula map gives an isomorphism of diamonds between the open locus of \(\operatorname{Hck}_{G,\leq\mu}\) where \(\mathcal{E}_{1}\) is isomorphic to the trivial bundle and \([\mathcal{F}\ell_{G,\mu^{-1}}/\underline{G(\mathbb{Q}_{p})}]\), which identifies \(h_{\mu}^{\rightarrow}\) (resp. \(h_{\mu}^{\leftarrow}\)) with \(h^{\rightarrow}\) (resp. \(h^{\leftarrow}\)). Moreover, this is a cohomologically smooth space of dimension \(d:=\langle 2\rho_{G},\mu\rangle\), and we have an isomorphism \(\mathcal{S}_{\mu}\simeq\overline{\mathbb{F}}_{\ell}[d](\frac{d}{2})\)5. It follows, by proper base-change, that we have an isomorphism
Footnote 5: This is true for the highest weight module \(V_{\mu}\) and this agrees with the highest weight tilting module \(\mathcal{T}_{\mu}\), since \(\mu\) is minuscule.
\[h_{*}^{\rightarrow}h^{\leftarrow*}j_{b!}(V_{b})\simeq j_{1}^{*}T_{\mu}(j_{b!}(V_{b}))[-d](-\frac{d}{2})\]
of \(G(\mathbb{Q}_{p})\times W_{E_{\mathfrak{p}}}\)-modules, where \(1\in B(G)\) is the trivial element. The \(W_{E_{\mathfrak{p}}}\)-equivariance follows since the above Cartesian diagrams all descend to \(\breve{E}_{\mathfrak{p}}\) and are also compatible with the Frobenius descent datum on \(\operatorname{Sht}(G,b,\mu)_{\infty,C}\rightarrow\operatorname{Spd}(\breve{E}_{\mathfrak{p}})\) (this is true for (9) by the results of [11], and all the other diagrams are constructed from this one). This gives us Theorem 1.13.
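For concreteness, here is the numerology in the simplest case (a standard illustration, not drawn from the cited sources): for \(G=\mathrm{GL}_{2}\) and \(\mu=(1,0)\), we have
\[d=\langle 2\rho_{G},\mu\rangle=\langle(1,-1),(1,0)\rangle=1,\qquad\mathcal{S}_{\mu}\simeq\overline{\mathbb{F}}_{\ell}[1](\tfrac{1}{2}),\]
so the graded pieces take the form \(j_{1}^{*}T_{\mu}(j_{b!}(V_{b}))[-1](-\tfrac{1}{2})\), consistent with \(\mathcal{F}\ell_{G,\mu^{-1}}\simeq\mathbb{P}^{1}\) being cohomologically smooth of dimension \(1\).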
**Corollary 3.17**.: _Assuming 1.11, the complex \(R\Gamma_{c}(\mathcal{S}(\mathbf{G},X)_{K^{p},C},\overline{\mathbb{F}}_{\ell})\) has a \(G(\mathbb{Q}_{p})\times W_{E_{\mathfrak{p}}}\)-equivariant filtration with graded pieces given by \(j_{1}^{*}T_{\mu}(j_{b!}(V_{b}))[-d](-\frac{d}{2})\) for varying \(b\in B(G,\mu)\), where \(V_{b}\simeq R\Gamma_{c-\partial}(\operatorname{Ig}^{b},\overline{\mathbb{F}}_{\ell})\)._
_Moreover, we obtain that each graded piece is isomorphic to_
\[(R\Gamma_{c}(G,b,\mu)\otimes_{\mathcal{H}(J_{b})}^{\mathbb{L}}V_{b})[2d_{b}]\]
_as \(G(\mathbb{Q}_{p})\times W_{E_{\mathfrak{p}}}\)-modules. Here_
\[R\Gamma_{c}(G,b,\mu):=\operatorname{colim}_{K_{p}\to\{1\}}R\Gamma_{c}( \operatorname{Sht}(G,b,\mu)_{\infty,C}/\underline{K_{p}},\overline{\mathbb{F} }_{\ell}(d_{b}))\]
_is a complex of \(G(\mathbb{Q}_{p})\times J_{b}(\mathbb{Q}_{p})\times W_{E_{\mathfrak{p}}}\)-modules, where \(\operatorname{Sht}(G,b,\mu)_{\infty,C}\) is as defined above, and \(\overline{\mathbb{F}}_{\ell}(d_{b})\) is the sheaf with trivial Weil group action and \(J_{b}(\mathbb{Q}_{p})\)-action given as in [14, Lemma 7.4]._
Proof.: It remains to explain the description of \(j_{1}^{*}T_{\mu}(j_{b!}(V_{b}))[-d](-\frac{d}{2})\). By applying the second part of [12, Proposition 11.12] and noting that \(\mathcal{S}_{\mu}\simeq\overline{\mathbb{F}}_{\ell}[d](\frac{d}{2})\) since \(\mu\) is minuscule (so that the tilting module \(\mathcal{T}_{\mu}\) agrees with the usual highest weight representation), we obtain that the graded pieces are isomorphic to
\[\operatorname{colim}_{K_{p}\to\{1\}}(R\Gamma_{c}(\operatorname{Sht}(G,b,\mu) _{\infty,C}/\underline{K_{p}},\overline{\mathbb{F}}_{\ell})\otimes_{\mathcal{ H}(J_{b})}^{\mathbb{L}}V_{b}\otimes\kappa^{-1})[2d_{b}]\]
as desired, where \(\kappa\) is the character of \(J_{b}(\mathbb{Q}_{p})\) defined by the action of \(J_{b}(\mathbb{Q}_{p})\) on the compactly supported cohomology of the \(\ell\)-adically contractible group diamond \(\mathcal{J}_{b}^{>0}\), where \(\mathcal{J}_{b}\simeq\mathcal{J}_{b}^{>0}\ltimes J_{b}(\mathbb{Q}_{p})\) is the semi-direct product structure given by allowing \(\operatorname{Aut}(\mathcal{E}_{b})\) to act on its canonical reduction. However, by combining this with [14, Lemma 7.6] and its proof, we can rewrite this as
\[(\operatorname{colim}_{K_{p}\to\{1\}}R\Gamma_{c}(\operatorname{Sht}(G,b,\mu) _{\infty,C}/\underline{K_{p}},\overline{\mathbb{F}}_{\ell}(d_{b}))\otimes_{ \mathcal{H}(J_{b})}^{\mathbb{L}}V_{b})[2d_{b}],\]
as desired.
## 4. The Local Results
### The Spectral Action
Let \(G/\mathbb{Q}_{p}\) be a quasi-split connected reductive group with a choice of Borel \(B\) and maximal torus \(T\) as before. We will work with \(\operatorname{D}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell})\), the derived category of étale \(\overline{\mathbb{F}}_{\ell}\)-sheaves on the moduli stack of \(G\)-bundles. Our goal in this section will be to describe a localization \(\operatorname{D}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell})_{\phi_{\mathfrak{m}}}\subset\operatorname{D}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell})\) for \(\mathfrak{m}\subset H_{K_{p}^{\mathrm{hs}}}\) a maximal ideal of the spherical Hecke algebra attached to a hyperspecial subgroup \(K_{p}^{\mathrm{hs}}\subset G(\mathbb{Q}_{p})\). We will do this in slightly more generality using the spectral action [13, Section X.2]. We assume that \(\ell\) is very good as in [13, Page 33], and consider the moduli stack \(\mathfrak{X}_{\hat{G}}/\operatorname{Spec}\overline{\mathbb{F}}_{\ell}\) of \(\overline{\mathbb{F}}_{\ell}\)-valued Langlands parameters, as defined in [1, 15, 16]. We let \(\operatorname{Perf}(\mathfrak{X}_{\hat{G}})\) denote the derived category of perfect complexes on this stack, and we write \(\operatorname{Perf}(\mathfrak{X}_{\hat{G}})^{BW^{I}_{\mathbb{Q}_{p}}}\) for the derived category of objects with a continuous \(W^{I}_{\mathbb{Q}_{p}}\)-action for a finite index set \(I\), and \(\operatorname{D}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell})^{\omega}\) for the triangulated sub-category of compact objects in \(\operatorname{D}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell})\). By [13, Corollary X.1.3], there exists a \(\overline{\mathbb{F}}_{\ell}\)-linear action
\[\operatorname{Perf}(\mathfrak{X}_{\hat{G}}) \to\operatorname{End}(\operatorname{D}(\operatorname{Bun}_{G}, \overline{\mathbb{F}}_{\ell})^{\omega})\] \[C \mapsto\{A\mapsto C\star A\}\]
which, extending by colimits, gives
\[\operatorname{Ind}(\operatorname{Perf}(\mathfrak{X}_{\hat{G}}))\to\operatorname{ End}(\operatorname{D}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell}))\]
We recall the following basic properties of this action.
1. For \(V=\boxtimes_{i\in I}V_{i}\in\operatorname{Rep}_{\overline{\mathbb{F}}_{\ell}}({}^{L}G^{I})\), there is an attached vector bundle \(C_{V}\in\operatorname{Perf}(\mathfrak{X}_{\hat{G}})^{BW^{I}_{\mathbb{Q}_{p}}}\) whose evaluation at a \(\overline{\mathbb{F}}_{\ell}\)-point of \(\mathfrak{X}_{\hat{G}}\) corresponding to a (not necessarily semi-simple) \(L\)-parameter \(\tilde{\phi}:W_{\mathbb{Q}_{p}}\to{}^{L}G(\overline{\mathbb{F}}_{\ell})\) is the vector space \(V\) with \(W^{I}_{\mathbb{Q}_{p}}\)-action given by \((\boxtimes_{i\in I}V_{i})\circ\tilde{\phi}\). The endomorphism \[C_{V}\star(-):\operatorname{D}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{ \ell})\to\operatorname{D}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell} )^{BW^{I}_{\mathbb{Q}_{p}}}\] is the Hecke operator \(T_{V}\) attached to \(V\).
2. The action is monoidal in the sense that, given \(C_{1},C_{2}\in\operatorname{Perf}(\mathfrak{X}_{\hat{G}})\), we have a natural equivalence of endofunctors \[(C_{1}\otimes^{\mathbb{L}}C_{2})\star(-)\simeq C_{1}\star(C_{2}\star(-)).\]
If \(\phi:W_{\mathbb{Q}_{p}}\to{}^{L}G(\overline{\mathbb{F}}_{\ell})\) is a semi-simple \(L\)-parameter then this defines a closed \(\overline{\mathbb{F}}_{\ell}\)-point \(x\) in the moduli stack of Langlands parameters, which maps to a closed point in the coarse moduli space. We let \(\mathfrak{m}_{\phi}\subset\mathcal{O}_{\mathfrak{X}_{\hat{G}}}(\mathfrak{X}_ {\hat{G}})\) denote the corresponding maximal ideal. We recall that, for all \(f\in\mathcal{O}_{\mathfrak{X}_{\hat{G}}}(\mathfrak{X}_{\hat{G}})\) and \(A\in\operatorname{D}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell})\), one obtains an endomorphism
\[A\simeq\mathcal{O}_{\mathfrak{X}_{\hat{G}}}\star A\to\mathcal{O}_{\mathfrak{X }_{\hat{G}}}\star A\simeq A\]
induced by multiplication by \(f\). Under the description of \(\mathcal{O}_{\mathfrak{X}_{\hat{G}}}(\mathfrak{X}_{\hat{G}})\) in terms of the excursion algebra, this encodes the action of the excursion algebra on \(\operatorname{D}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell})\)[22, Theorem 5.2.1]. More precisely, we recall that, since \(\ell\) is very good [14, Page 33], to any Schur-irreducible \(A\in\operatorname{D}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell})\) we can, by [14, Proposition I.9.3], attach a conjugacy class of semi-simple \(L\)-parameters
\[\phi^{\operatorname{FS}}_{A}:W_{\mathbb{Q}_{p}}\to{}^{L}G(\overline{\mathbb{F }}_{\ell})\]
called the Fargues-Scholze parameter of \(A\). By [14, Theorem VIII.3.6], we have an identification between the ring of global functions \(\mathcal{O}_{\mathfrak{X}_{\hat{G}}}(\mathfrak{X}_{\hat{G}})\) and excursion operators. Since \(A\) is Schur irreducible the endomorphisms corresponding to \(f\in\mathcal{O}_{\mathfrak{X}_{\hat{G}}}(\mathfrak{X}_{\hat{G}})\) determine a non-zero scalar in \(\overline{\mathbb{F}}_{\ell}\) which will be determined by the excursion datum evaluated at the Fargues-Scholze parameter \(\phi^{\operatorname{FS}}_{A}\).
With this in hand, we can make our key definition.
**Definition 4.1**.: We define \(\iota_{\phi}:\operatorname{D}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{ \ell})_{\phi}\hookrightarrow\operatorname{D}(\operatorname{Bun}_{G},\overline{ \mathbb{F}}_{\ell})\) to be the full-subcategory of objects \(A\) for which the endomorphisms \(A\to A\) induced by \(f\in\mathcal{O}_{\mathfrak{X}_{\hat{G}}}\setminus\mathfrak{m}_{\phi}\) are isomorphisms.
It is easy to check that the subcategory \(\operatorname{D}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell})_{\phi} \subset\operatorname{D}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell})\) is preserved under colimits and limits, and therefore, by the \(\infty\)-categorical adjoint functor theorem [13, Corollary 5.5.2.9], there exists a left adjoint to the inclusion \(\iota_{\phi}\), denoted by \(\mathcal{L}_{\phi}\). We define \((-)_{\phi}:=\iota_{\phi}\mathcal{L}_{\phi}(-)\). By the full faithfulness of \(\iota_{\phi}\), this defines an idempotent functor (see Appendix A for details).
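As a heuristic, the reader may keep in mind the classical analogue (not needed in what follows): for a commutative ring \(R\) and a maximal ideal \(\mathfrak{m}\subset R\), the full subcategory of \(R\)-modules on which every \(f\in R\setminus\mathfrak{m}\) acts invertibly is \(\operatorname{Mod}_{R_{\mathfrak{m}}}\), the inclusion admits the left adjoint \(M\mapsto M\otimes_{R}R_{\mathfrak{m}}\), and the composite is idempotent precisely because
\[R_{\mathfrak{m}}\otimes_{R}R_{\mathfrak{m}}\simeq R_{\mathfrak{m}}.\]
The functor \((-)_{\phi}\) plays the same role for the action of \(\mathcal{O}_{\mathfrak{X}_{\hat{G}}}(\mathfrak{X}_{\hat{G}})\) on \(\operatorname{D}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell})\).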
We now have the following key lemma.
**Lemma 4.2**.: _The following is true._
1. _Any Schur irreducible object_ \(A\in\operatorname{D}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell})_{\phi}\) _has Fargues-Scholze parameter equal to_ \(\phi\) _as conjugacy classes of parameters._
2. _Given_ \(V\in\operatorname{Rep}_{\overline{\mathbb{F}}_{\ell}}({}^{L}G^{I})\)_, the Hecke operator_ \(T_{V}:\operatorname{D}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell})\to \operatorname{D}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell})^{BW^{I}_{ \mathbb{Q}_{p}}}\) _takes the subcategory_ \(\operatorname{D}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell})_{\phi}\) _to_ \(\operatorname{D}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell})_{\phi}^{BW^{I }_{\mathbb{Q}_{p}}}\)_, and there is a natural isomorphism_ \(T_{V}((-)_{\phi})\simeq(T_{V}(-))_{\phi}\)_._
3. _Given_ \(A\in\operatorname{D}(G(\mathbb{Q}_{p}),\overline{\mathbb{F}}_{\ell})\subset \operatorname{D}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell})\)_, we have an isomorphism_ \[R\Gamma(K^{\operatorname{hs}}_{p},A)_{\mathfrak{m}}\simeq R\Gamma(K^{ \operatorname{hs}}_{p},A_{\phi_{\mathfrak{m}}}),\] _where the LHS is the usual localization under the smooth Hecke algebra._
4. _If_ \(A\in\operatorname{D_{lis}}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell})\) _is ULA then one has a direct sum decomposition_ \[A\simeq\bigoplus_{\phi}A_{\phi}\] _ranging over all semi-simple_ \(L\)_-parameters._
Proof.: Claims (2) and (4) follow from Proposition A.2 and Proposition A.5, respectively, where for claim (2) we use the relationship between Hecke operators and the spectral action described above.
For (1), this follows since the action of \(\mathcal{O}_{\mathfrak{X}_{\hat{G}}}(\mathfrak{X}_{\hat{G}})\) on \(A\) will factor through the maximal ideal \(\mathfrak{m}_{A}\) defined by the semi-simple \(L\)-parameter \(\phi_{A}^{\mathrm{FS}}\) attached to \(A\) by the above discussion, and therefore \(A\in\operatorname{D}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell})_{\phi}\) forces an equality of maximal ideals: \(\mathfrak{m}_{A}=\mathfrak{m}_{\phi}\).
For (3), we use the arguments in Koshikawa [14, Page 6]. Consider the map
\[\mathcal{O}_{\mathfrak{X}_{\hat{G}}}(\mathfrak{X}_{\hat{G}})\to\operatorname{ End}_{G(\mathbb{Q}_{p})}(\operatorname{cInd}_{K_{p}^{\mathrm{hs}}}^{G( \mathbb{Q}_{p})}(\overline{\mathbb{F}}_{\ell}))\simeq H_{K_{p}^{\mathrm{hs}}} ^{\mathrm{op}}\]
given by the spectral action, where \(\operatorname{cInd}_{K_{p}^{\mathrm{hs}}}^{G(\mathbb{Q}_{p})}(\overline{ \mathbb{F}}_{\ell})\) is regarded as a right \(H_{K_{p}^{\mathrm{hs}}}\)-module. It is shown there that the usual action of the unramified Hecke algebra, composed with the involution \(KhK\mapsto Kh^{-1}K\), gives rise to a map which is compatible with the usual \(L\)-parameters of unramified irreducible representations. In particular, the pullback of the maximal ideal \(\mathfrak{m}\subset H_{K_{p}^{\mathrm{hs}}}\) is given by the maximal ideal \(\mathfrak{m}_{\phi_{\mathfrak{m}}}\subset\mathcal{O}_{\mathfrak{X}_{\hat{G}}}(\mathfrak{X}_{\hat{G}})\). Now, by arguing as in Proposition A.3, we have an identification:
\[R\mathrm{Hom}(\operatorname{cInd}_{K_{p}^{\mathrm{hs}}}^{G(\mathbb{Q}_{p})}( \overline{\mathbb{F}}_{\ell}),A_{\phi_{\mathfrak{m}}})\simeq R\mathrm{Hom}( \operatorname{cInd}_{K_{p}^{\mathrm{hs}}}^{G(\mathbb{Q}_{p})}(\overline{ \mathbb{F}}_{\ell}),A)_{\mathfrak{m}_{\phi_{\mathfrak{m}}}}.\]
Using Frobenius reciprocity, this gives an identification:
\[R\Gamma(K_{p}^{\mathrm{hs}},A_{\phi_{\mathfrak{m}}})\simeq R\Gamma(K_{p}^{ \mathrm{hs}},A)_{\mathfrak{m}_{\phi_{\mathfrak{m}}}},\]
but the RHS identifies with \(R\Gamma(K_{p}^{\mathrm{hs}},A)_{\mathfrak{m}}\), as explained above.
We note the following corollary of this.
**Corollary 4.3**.: _Let \(A\) be a complex of smooth \(G(\mathbb{Q}_{p})\)-representations which is admissible (i.e., \(A^{K}\) is a perfect complex for all compact open \(K\subset G(\mathbb{Q}_{p})\)). We then have a decomposition_
\[A\simeq\bigoplus_{\phi}A_{\phi}\]
_running over semisimple \(L\)-parameters, where any irreducible constituent \(\pi\) of \(A_{\phi}\) has Fargues-Scholze parameter \(\phi_{\pi}^{\mathrm{FS}}\) equal to \(\phi\), as conjugacy classes of parameters._
Proof.: This follows by applying Lemma 4.2 (1) and (4) to the full subcategory \(\operatorname{D}(G(\mathbb{Q}_{p}),\overline{\mathbb{F}}_{\ell})\subset \operatorname{D}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell})\).
Now our goal is to describe the subcategory \(\operatorname{D}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell})_{\phi}\) more explicitly, using the results of [10] in the case that \(\phi\) is induced from a generic toral parameter \(\phi_{T}\). To do this, we will need to have some information about the Fargues-Scholze local Langlands correspondence. First, let us introduce some notation.
We let \(B(G)_{\mathrm{un}}:=\operatorname{Im}(B(T)\to B(G))\). We recall that these are precisely the elements \(b\in B(G)\) such that the \(\sigma\)-centralizer \(J_{b}\) is quasi-split ([10, Lemma 2.12]). In particular, the fixed choice of Borel \(B\subset G\) transfers to a Borel subgroup \(B_{b}\) for all \(b\in B(G)_{\mathrm{un}}\), and \(J_{b}\simeq M_{b}\) under the inner twisting, where \(M_{b}\subset G\) is the Levi subgroup of \(G\) determined by the centralizer of the slope homomorphism of \(b\) in \(G\). We let \(\delta_{P_{b}}\) denote the modulus character of the standard parabolic \(P_{b}\) of \(G\) with Levi factor \(M_{b}\), transferred to \(J_{b}\) under the inner twisting. We set \(W_{b}:=W_{G}/W_{M_{b}}\) to be the quotient of the relative Weyl group of \(G\) by the relative Weyl group of \(M_{b}\). We identify \(W_{b}\) with a choice of representatives \(w\in W_{G}\) of minimal length. We set \(\rho_{b,w}:=i_{B_{b}}^{J_{b}}(\chi^{w})\otimes\delta_{P_{b}}^{-1/2}\), the normalized parabolic induction of \(\chi^{w}\), where \(\chi\) is the character of \(T(\mathbb{Q}_{p})\) attached to a toral parameter \(\phi_{T}\) under local class field theory.
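For orientation, we spell this notation out for \(G=\operatorname{GL}_{2}\) (an illustration only). For non-basic \(b\in B(G)_{\mathrm{un}}\), say with \(\nu_{b}=(a_{1},a_{2})\) for integers \(a_{1}>a_{2}\), we have \(M_{b}=T\simeq J_{b}\), \(P_{b}=B\), and \(W_{b}=W_{G}\simeq S_{2}\), so the inductions collapse to characters:
\[\rho_{b,w}=\chi^{w}\otimes\delta_{B}^{-1/2}|_{T(\mathbb{Q}_{p})},\qquad w\in W_{b}.\]
For \(b=1\) we instead have \(M_{b}=J_{b}=G\), \(W_{b}=\{1\}\), and \(\rho_{1,1}=i_{B}^{G}(\chi)\).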
We will need to assume the following properties of the Fargues-Scholze local Langlands correspondence, as in [10, Assumption 7.5].
**Assumption 4.4**.: _For a connected reductive group \(H/\mathbb{Q}_{p}\), we have_
* _the set_ \(\Pi(H)\) _of smooth irreducible_ \(\overline{\mathbb{Q}}_{\ell}\)_-representations of_ \(H(\mathbb{Q}_{p})\)_,_
* _the set_ \(\Phi(H)\) _of conjugacy classes of continuous maps_ \[\operatorname{WD}_{\mathbb{Q}_{p}}\to{}^{L}H(\overline{\mathbb{Q}}_{\ell})\] _where_ \(\overline{\mathbb{Q}}_{\ell}\) _has the discrete topology,_ \(\operatorname{SL}(2,\overline{\mathbb{Q}}_{\ell})\) _acts via an algebraic representation, and the map respects the action of_ \(W_{\mathbb{Q}_{p}}\) _on_ \({}^{L}H(\overline{\mathbb{Q}}_{\ell})\)_, the_ \(L\)_-group of_ \(H\)_,_
* _the set_ \(\Phi^{\operatorname{ss}}(H)\) _of continuous semi-simple homomorphisms_ \[W_{\mathbb{Q}_{p}}\to{}^{L}H(\overline{\mathbb{Q}}_{\ell}),\]
* _and the semi-simplification map_ \((-)^{\operatorname{ss}}:\Phi(H)\to\Phi^{\operatorname{ss}}(H)\) _defined by precomposition with_ \[W_{\mathbb{Q}_{p}}\to W_{\mathbb{Q}_{p}}\times\operatorname{SL}(2,\overline{ \mathbb{Q}}_{\ell})\] \[g\mapsto(g,\begin{pmatrix}|g|^{1/2}&0\\ 0&|g|^{-1/2}\end{pmatrix}).\]
_Then, we assume, for all \(b\in B(G)\), that there exists a map_
\[\operatorname{LLC}_{b}:\Pi(J_{b})\to\Phi(J_{b})\]
\[\rho\mapsto\phi_{\rho}\]
_satisfying the following properties:_
1. _The diagram_
\[\begin{array}{ccc}\Pi(J_{b})&\xrightarrow{\operatorname{LLC}_{b}}&\Phi(J_{b})\\ &{\searrow}{\scriptstyle\operatorname{LLC}_{b}^{\operatorname{FS}}}&\downarrow{\scriptstyle(-)^{\operatorname{ss}}}\\ &&\Phi^{\operatorname{ss}}(J_{b})\end{array}\]
_commutes, where the diagonal arrow_ \(\operatorname{LLC}_{b}^{\operatorname{FS}}\) _is the Fargues-Scholze local Langlands correspondence for_ \(J_{b}\)_; that is,_ \((\phi_{\rho})^{\operatorname{ss}}=\phi_{\rho}^{\operatorname{FS}}\) _for all_ \(\rho\in\Pi(J_{b})\)_._
2. _Consider_ \(\phi_{\rho}\) _as an element of_ \(\Phi(G)\) _given by composing with the twisted embedding_ \({}^{L}J_{b}(\overline{\mathbb{Q}}_{\ell})\simeq{}^{L}M_{b}(\overline{\mathbb{Q }}_{\ell})\to{}^{L}G(\overline{\mathbb{Q}}_{\ell})\) _(as defined in_ _[_11_, Section IX.7.1]__). Then_ \(\phi_{\rho}\) _factors through the natural embedding_ \({}^{L}T\to{}^{L}G\) _if and only if_ \(b\in B(G)_{\operatorname{un}}\)_._
3. _If_ \(\rho\) _is a representation such that_ \(W_{\mathbb{Q}_{p}}\times\operatorname{SL}(2,\overline{\mathbb{Q}}_{\ell})\to{} ^{L}J_{b}(\overline{\mathbb{Q}}_{\ell})\to{}^{L}G(\overline{\mathbb{Q}}_{\ell})\) _factors through_ \({}^{L}T\)_, where the last map is the twisted embedding then, by (_2_), the element_ \(b\) _is unramified, and we require that_ \(\rho\) _is isomorphic to an irreducible constituent of_ \(\rho_{b,w}\) _for_ \(w\in W_{b}\) _and_ \(\chi\) _the character attached to the induced toral parameter_ \(\phi_{T}\)_._
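As a sanity check on these conditions (purely illustrative), take \(G=\operatorname{GL}_{2}\) and \(b\in B(G)\) basic of slope \(1/2\), so that \(J_{b}=D^{\times}\) for \(D\) the quaternion division algebra over \(\mathbb{Q}_{p}\), and \(\operatorname{LLC}_{b}\) is given by Jacquet-Langlands followed by the local Langlands correspondence for \(\operatorname{GL}_{2}\). Here \(b\notin B(G)_{\mathrm{un}}\), and indeed no \(\phi_{\rho}\) factors through \({}^{L}T\): the parameter of any \(\rho\in\Pi(D^{\times})\) is either irreducible or a twist of the Steinberg parameter
\[\phi_{\operatorname{St}}:W_{\mathbb{Q}_{p}}\times\operatorname{SL}(2,\overline{\mathbb{Q}}_{\ell})\to\operatorname{GL}_{2}(\overline{\mathbb{Q}}_{\ell}),\qquad(g,h)\mapsto h,\]
whose \(\operatorname{SL}(2,\overline{\mathbb{Q}}_{\ell})\)-factor acts non-trivially, so neither factors through the torus, consistent with condition (2).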
The importance of this assumption is that it allows us to deduce the following Proposition.
**Proposition 4.5**.: _Assume 4.4 holds, and let \(\phi\) be induced from a generic parameter \(\phi_{T}\). If \(A\in\operatorname{D}(\operatorname{Bun}_{G}^{b},\overline{\mathbb{F}}_{\ell}) _{\phi}\subset\operatorname{D}(\operatorname{Bun}_{G}^{b},\overline{\mathbb{F} }_{\ell})\simeq\operatorname{D}(J_{b}(\mathbb{Q}_{p}),\overline{\mathbb{F}}_{ \ell})\) is a Schur-irreducible object then \(A\) is non-zero only if \(b\in B(G)_{\operatorname{un}}\), and in this case it must be an irreducible sub-quotient of \(\rho_{b,w}\), for some \(w\in W_{b}\)._
Proof.: This follows from combining the proof of [10, Corollary 7.7] with Lemma 4.2 (1).
Since we want some flexibility in the groups for which we have the above results, we discuss how Assumption 4.4 behaves under central isogenies.
#### 4.1.1. Assumption 4.4 under Central Isogenies
We consider an injective map \(\psi:G^{\prime}\hookrightarrow G\) of connected reductive groups which induces an isomorphism of adjoint and derived groups, and the induced map \(\psi_{\mathrm{Bun}}:B(G^{\prime})\to B(G)\) on the associated Kottwitz sets. We now have the following lemma.
**Lemma 4.6**.: _If \(\psi:G^{\prime}\to G\) is an injective map which induces an isomorphism on adjoint and derived groups, then \(\psi_{\mathrm{Bun}}:B(G^{\prime})\to B(G)\) induces an injection \(J_{b^{\prime}}\to J_{b}\) which itself induces an isomorphism on derived and adjoint groups, for all \(b=\psi_{\mathrm{Bun}}(b^{\prime})\) with \(b^{\prime}\in B(G^{\prime})\)._
Proof.: Since \(\psi\) is an inclusion it easily follows that it induces an inclusion \(J_{b^{\prime}}\to J_{b}\) of \(\sigma\)-centralizers. To see that it induces an isomorphism on derived/adjoint groups, recall that \(J_{b}\) is an inner form of a Levi subgroup \(M_{b}\) of \(G\) given by the centralizer of the slope homomorphism of \(b\). The preimage of \(M_{b}\) under \(\psi\) defines a Levi subgroup \(M_{b^{\prime}}\) of \(G^{\prime}\) which will be the centralizer of the slope homomorphism of \(b^{\prime}\), since \(\psi\) induces an isomorphism on adjoint groups. Moreover, the inner twistings from \(J_{b}\) to \(M_{b}\) and from \(J_{b^{\prime}}\) to \(M_{b^{\prime}}\) are compatible with the inclusion, in the sense that the inclusion \(J_{b^{\prime}}\to J_{b}\) is given by applying the inner twist of \(M_{b}\) to the inclusion \(M_{b^{\prime}}\to M_{b}\). Since the formation of derived/adjoint groups respects inner twists, this reduces us to showing that the map \(M_{b^{\prime}}\to M_{b}\) of Levi subgroups induces an isomorphism on the derived/adjoint groups, and this is clear.
We now consider a map \(\mathrm{LLC}_{\mathrm{Bun}_{G}}:\bigsqcup_{b\in B(G)}\Pi(J_{b})\to\Phi(G)\) determined by components \(\mathrm{LLC}_{b}:\Pi(J_{b})\to\Phi(J_{b})\) and satisfying Assumption 4.4. We wish to define \(\mathrm{LLC}_{\mathrm{Bun}_{G^{\prime}}}:\bigsqcup_{b^{\prime}\in B(G^{\prime})}\Pi(J_{b^{\prime}})\to\Phi(G^{\prime})\) in terms of \(\mathrm{LLC}_{\mathrm{Bun}_{G}}\), and show that it also satisfies Assumption 4.4. To do this, for varying \(b^{\prime}\in B(G^{\prime})\), we define \(\mathrm{LLC}_{b^{\prime}}:\Pi(J_{b^{\prime}})\to\Phi(J_{b^{\prime}})\) to be the correspondence that makes the diagram
\[\begin{array}{ccc}\Pi(J_{b})&\xrightarrow{\mathrm{LLC}_{b}}&\Phi(J_{b})\\ \downarrow&&\downarrow\\ \Pi(J_{b^{\prime}})&\xrightarrow{\mathrm{LLC}_{b^{\prime}}}&\Phi(J_{b^{\prime}})\end{array}\]
commute, where \(b:=\psi_{\mathrm{Bun}}(b^{\prime})\). Here the right vertical arrow is given by composing a parameter \(\phi:\mathrm{WD}_{\mathbb{Q}_{p}}\to{}^{L}J_{b}(\overline{\mathbb{Q}}_{\ell})\) with the induced map \({}^{L}J_{b}\to{}^{L}J_{b^{\prime}}\) on dual groups, while the left vertical arrow is not a map but a correspondence, defined by the subset of \(\Pi(J_{b})\times\Pi(J_{b^{\prime}})\) consisting of pairs \((\pi_{b},\pi_{b^{\prime}})\) such that \(\pi_{b^{\prime}}\) is a constituent of the restriction of \(\pi_{b}\) to \(J_{b^{\prime}}(\mathbb{Q}_{p})\). We will now show that this gives rise to a well-defined map under our assumptions on \(\psi\). Given a representation \(\pi_{b^{\prime}}\in\Pi(J_{b^{\prime}})\), it follows by [1, Lemma 2.3] and the previous Lemma that we can find a lift \(\pi_{b}\in\Pi(J_{b})\) such that \(\pi_{b^{\prime}}\) is an irreducible constituent of \(\pi_{b}|_{J_{b^{\prime}}(\mathbb{Q}_{p})}\). It also follows from [1, Lemma 2.1] and [13, Proposition 2.4, Corollary 2.5] that the set \(\Pi_{\pi_{b}}(J_{b^{\prime}})\) of representations of \(J_{b^{\prime}}\) occurring in the restriction of \(\pi_{b}\) is finite. Now, using the previous Lemma, we have the following.
**Lemma 4.7**.: _[_1_, Lemma 2.4]_ _For the map \(J_{b^{\prime}}\to J_{b}\) of \(\sigma\)-centralizers induced by a map \(\psi\) as above, and \(\pi^{1}_{b},\pi^{2}_{b}\in\Pi(J_{b})\) the following are equivalent._
1. _There exists a character_ \(\chi\in(J_{b}(\mathbb{Q}_{p})/J_{b^{\prime}}(\mathbb{Q}_{p}))^{\vee}\) _such that_ \(\pi^{1}_{b}\simeq\pi^{2}_{b}\otimes\chi\)_, where_ \((-)^{\vee}\) _denotes the Pontryagin dual._
2. \(\Pi_{\pi^{1}_{b}}(J_{b^{\prime}})\cap\Pi_{\pi^{2}_{b}}(J_{b^{\prime}})\neq\emptyset\)__
3. \(\Pi_{\pi^{1}_{b}}(J_{b^{\prime}})=\Pi_{\pi^{2}_{b}}(J_{b^{\prime}})\)_._
Now we can use this to define \(\mathrm{LLC}_{b^{\prime}}:\Pi(J_{b^{\prime}})\to\Phi(J_{b^{\prime}})\) in terms of \(\mathrm{LLC}_{b}:\Pi(J_{b})\to\Phi(J_{b})\) for \(\mathrm{LLC}_{\mathrm{Bun}_{G}}\) satisfying Assumption 4.4. Namely, for \(\pi_{b^{\prime}}\in\Pi(J_{b^{\prime}})\), we let \(\pi_{b}\in\Pi(J_{b})\) be a representation such that \(\pi_{b^{\prime}}\) occurs as an irreducible constituent of \(\pi_{b}|_{J_{b^{\prime}}(\mathbb{Q}_{p})}\). We set \(\phi_{\pi_{b^{\prime}}}\) to be the parameter \(\phi_{\pi_{b}}\) attached to \(\pi_{b}\) under \(\mathrm{LLC}_{b}\) composed with the map \({}^{L}J_{b}\to{}^{L}J_{b^{\prime}}\) on dual groups induced by \(\psi\). By the previous Lemma, any two choices of lifts \(\pi^{1}_{b}\) and \(\pi^{2}_{b}\) of \(\pi_{b^{\prime}}\) will differ by a character twist of \(\chi\in(J_{b}(\mathbb{Q}_{p})/J_{b^{\prime}}(\mathbb{Q}_{p}))^{\vee}\). We note that the Fargues-Scholze local Langlands correspondence is
compatible with character twists [13, Theorem I.9.6 (ii)]. Since \(\operatorname{LLC}_{b}\) is compatible with the Fargues-Scholze local Langlands correspondence after semi-simplification by assumption, it follows that the same is true for \(\operatorname{LLC}_{b}\). Therefore, \(\phi_{\pi_{b}^{1}}\) and \(\phi_{\pi_{b}^{2}}\) differ by a character twist that becomes trivial after composing with \({}^{L}J_{b}\to{}^{L}J_{b^{\prime}}\), and so \(\phi_{\pi_{b^{\prime}}}\) does not depend on the choice of lift. We let \(\operatorname{LLC}_{\operatorname{Bun}_{G^{\prime}}}:\bigsqcup_{b^{\prime}\in B(G^{\prime})}\Pi(J_{b^{\prime}})\to\Phi(G^{\prime})\) be the local Langlands correspondence defined by the \(\operatorname{LLC}_{b^{\prime}}\) for varying \(b^{\prime}\). We now prove that our assumption is compatible with central isogenies.
**Proposition 4.8**.: _Suppose we have an injective map \(\psi:G^{\prime}\hookrightarrow G\) of quasi-split connected reductive groups inducing an isomorphism on adjoint and derived groups. Assume we have a local Langlands correspondence \(\operatorname{LLC}_{\operatorname{Bun}_{G}}:\bigsqcup_{b\in B(G)}\Pi(J_{b}) \to\Phi(G)\) such that Assumption 4.4 holds. If we let \(\operatorname{LLC}_{\operatorname{Bun}_{G^{\prime}}}:\bigsqcup_{b^{\prime} \in B(G^{\prime})}\Pi(J_{b^{\prime}})\to\Phi(G^{\prime})\) be the local Langlands correspondence induced by \(\operatorname{LLC}_{\operatorname{Bun}_{G}}\) and \(\psi\) as above then \(\operatorname{LLC}_{\operatorname{Bun}_{G^{\prime}}}\) satisfies Assumption 4.4 as well._
Proof.: We first note that, since the Fargues-Scholze local Langlands correspondence is compatible with maps \(G^{\prime}\to G\) inducing an isomorphism of adjoint groups [13, Theorem I.9.6 (v)], if Assumption 4.4 (1) holds for \(\operatorname{LLC}_{\operatorname{Bun}_{G}}\) then, by the above construction, it also holds for \(\operatorname{LLC}_{\operatorname{Bun}_{G^{\prime}}}\). Suppose we have \(b^{\prime}\in B(G^{\prime})\) mapping to \(b\in B(G)_{\operatorname{un}}\). Let \(B_{b}\subset J_{b}\) be the corresponding Borel; then, since the map \(J_{b^{\prime}}\to J_{b}\) induces an isomorphism on adjoint groups by Lemma 4.6, it follows that \(B_{b}\cap J_{b^{\prime}}=:B_{b^{\prime}}\subset J_{b^{\prime}}\) is a Borel of \(J_{b^{\prime}}\). In particular, \(b^{\prime}\) must be an unramified element of \(B(G^{\prime})_{\operatorname{un}}\). Now, the map \(J_{b^{\prime}}\to J_{b}\) induces an isomorphism
\[J_{b^{\prime}}/B_{b^{\prime}}\simeq J_{b}/B_{b}.\]
If we let \(T\) be the maximal split torus of \(J_{b}\) then the preimage \(T^{\prime}\) under \(\psi\) is a maximal torus of \(J_{b^{\prime}}\), and the previous isomorphism of flag varieties implies that, given a character \(\chi:T(\mathbb{Q}_{p})\to\overline{\mathbb{Q}}_{\ell}^{*}\), we have an isomorphism:
\[i_{B_{b}}^{J_{b}}(\chi)|_{J_{b^{\prime}}(\mathbb{Q}_{p})}\simeq i_{B_{b^{\prime }}}^{J_{b^{\prime}}}(\chi|_{T^{\prime}(\mathbb{Q}_{p})}).\]
Given \(\pi_{b^{\prime}}\) and a lift \(\pi_{b}\) to \(J_{b}\) then, by definition of \(\phi_{\pi_{b}}\), we have that it is equal to
\[\operatorname{WD}_{\mathbb{Q}_{p}}\xrightarrow{\phi_{\pi_{b}}}{}^{L}J_{b}( \overline{\mathbb{Q}}_{\ell})\to{}^{L}J_{b^{\prime}}(\overline{\mathbb{Q}}_{ \ell})\]
as a conjugacy class of parameters for \(J_{b^{\prime}}\). Therefore, \(\phi_{\pi_{b^{\prime}}}\) factors through \({}^{L}T^{\prime}\) if and only if \(\phi_{\pi_{b}}\) factors through the preimage of \({}^{L}T^{\prime}\) under the map \({}^{L}J_{b}(\overline{\mathbb{Q}}_{\ell})\to{}^{L}J_{b^{\prime}}(\overline{ \mathbb{Q}}_{\ell})\) of \(L\)-groups, but this is precisely \({}^{L}T\), and so Assumption 4.4 (2) holds for \(\operatorname{LLC}_{\operatorname{Bun}_{G^{\prime}}}\). Moreover, by Assumption 4.4 (3) for \(\operatorname{LLC}_{\operatorname{Bun}_{G}}\), we have that, in the above situation, \(\pi_{b}\) is an irreducible sub-quotient of \(i_{B_{b}}^{J_{b}}(\chi^{w})\otimes\delta_{P_{b}}^{-1/2}\), but this implies that \(\pi_{b^{\prime}}\) is an irreducible constituent of the restriction \(i_{B_{b^{\prime}}}^{J_{b^{\prime}}}(\chi^{w}|_{T^{\prime}(\mathbb{Q}_{p})}) \otimes\delta_{P_{b^{\prime}}}^{-1/2}\). From this, it follows that Assumption 4.4 (3) also holds for \(\operatorname{LLC}_{\operatorname{Bun}_{G^{\prime}}}\).
Now that we have shown this compatibility assumption is somewhat flexible, we can state the groups we know to satisfy Assumption 4.4. This result is largely contained in [11, 13, 13, 14], but we also want to consider an additional group \(\operatorname{GU}_{2}\), where we have the following construction of \(\operatorname{LLC}_{\operatorname{Bun}_{\operatorname{GU}_{2}}}\). Recall that \(\operatorname{GU}_{2}/L\) can be written as
\[\operatorname{GU}_{2}:=(\operatorname{GL}_{2}\times\operatorname{Res}_{L^{\prime }/L}\mathbb{G}_{m})/\mathbb{G}_{m},\]
where \(\mathbb{G}_{m}\) is embedded in \(H:=\operatorname{GL}_{2}\times\operatorname{Res}_{L^{\prime}/L}(\mathbb{G}_{m})\) via \(a\mapsto(\operatorname{diag}(a,a),a^{-1})\), and \(L^{\prime}/L\) is an unramified quadratic extension. Let \(\psi:B(H)\to B(\operatorname{GU}_{2})\) and \(\tilde{\psi}:B(H)\to B(\operatorname{GL}_{2})\) be the maps of Kottwitz sets. Given \(b\in B(H)\), let \(b^{\prime}=\psi(b)\) and \(\tilde{b}=\tilde{\psi}(b)\).
**Lemma 4.9**.: _There is a bijection between \(\Pi(J_{b^{\prime}})\) and the set of pairs \((\tilde{\pi},\chi)\) such that \(\tilde{\pi}\in\Pi(J_{\tilde{b}})\) and \(\chi\) is a character of \((L^{\prime})^{\times}\) such that \(\chi|_{L^{\times}}=\omega_{\tilde{\pi}}|_{L^{\times}}\), where \(\omega_{\tilde{\pi}}\) is the central character of \(\tilde{\pi}\)._
Proof.: We will show that we have an isomorphism
\[J_{b^{\prime}}\simeq(J_{\tilde{b}}\times\operatorname{Res}_{L^{\prime}/L}\mathbb{ G}_{m})/\mathbb{G}_{m}\]
of groups over \(L\). In particular, we see that \(J_{b}=J_{\tilde{b}}\times\operatorname{Res}_{L^{\prime}/L}\mathbb{G}_{m}\), and the quotient map \(J_{b}\to J_{b^{\prime}}\) induces an isomorphism on adjoint and derived subgroups of \(J_{b^{\prime}}\). Moreover, by Hilbert's Theorem 90, we have \(H^{1}(L,\mathbb{G}_{m})=0\), and thus we also have a surjection on \(L\)-points, from which the lemma follows. To see this isomorphism, recall that \(J_{b}\) (resp. \(J_{b^{\prime}},J_{\tilde{b}}\)) is an inner form of \(M_{b}\) (resp. \(M_{b^{\prime}},M_{\tilde{b}}\)), the Levi subgroup of \(H\) (resp. \(\operatorname{GU}_{2},\operatorname{GL}_{2}\)) given by the centralizer of the slope homomorphism of \(b\) (resp. \(b^{\prime},\tilde{b}\)). In particular, we see that we have an isomorphism
\[M_{b^{\prime}}\simeq(M_{\tilde{b}}\times\operatorname{Res}_{L^{\prime}/L} \mathbb{G}_{m})/\mathbb{G}_{m}\]
and thus we have a surjective map \(M_{b}\to M_{b^{\prime}}\), since \(M_{b}=M_{\tilde{b}}\times\operatorname{Res}_{L^{\prime}/L}\mathbb{G}_{m}\). Moreover, we see that, under these maps, we have isomorphisms \(M_{b}^{\operatorname{ad}}\simeq M_{b^{\prime}}^{\operatorname{ad}}\simeq M_{\tilde{b}}^{\operatorname{ad}}\), and the class in \(H^{1}(L,M_{b}^{\operatorname{ad}})\) corresponding to \(J_{b}\) is, under this identification, the inner twist inducing \(J_{b^{\prime}}\) and \(J_{\tilde{b}}\). The identification of \(J_{b^{\prime}}\) then follows.
Moreover, we observe that we have an exact sequence of dual groups
\[1\to\widehat{J}_{b^{\prime}}\to\widehat{J}_{b}\to\mathbb{G}_{m}\to 1,\]
where the map \(p:\widehat{J}_{b}\to\mathbb{G}_{m}\) is defined as follows. We can write \(\widehat{J}_{b}=\widehat{J}_{\tilde{b}}\times\mathbb{G}_{m}^{2}\), and we have maps \(\hat{i}_{1}:\widehat{J}_{\tilde{b}}\to\mathbb{G}_{m}\), \(\hat{i}_{2}:\operatorname{Res}_{L^{\prime}/L}\mathbb{G}_{m}=\mathbb{G}_{m}^{2} \to\mathbb{G}_{m}\) induced from the inclusion maps \(i_{1}:\mathbb{G}_{m}\hookrightarrow J_{\tilde{b}}\) and \(i_{2}:\mathbb{G}_{m}\hookrightarrow\operatorname{Res}_{L^{\prime}/L}\mathbb{G} _{m}\), and \(p(g,h)=\hat{i}_{1}(g)\hat{i}_{2}(h)^{-1}\).
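For instance, if \(\tilde{b}\) is basic of even degree, so that \(J_{\tilde{b}}=\operatorname{GL}_{2}\), this map is given explicitly (we record this only to fix the signs) by
\[p:\widehat{J}_{\tilde{b}}\times\mathbb{G}_{m}^{2}\to\mathbb{G}_{m},\qquad(g,(h_{1},h_{2}))\mapsto\det(g)\cdot(h_{1}h_{2})^{-1},\]
since the dual of the central inclusion \(i_{1}:\mathbb{G}_{m}\hookrightarrow\operatorname{GL}_{2}\) is \(\det\), and the dual of the diagonal inclusion \(i_{2}:\mathbb{G}_{m}\hookrightarrow\operatorname{Res}_{L^{\prime}/L}\mathbb{G}_{m}\) is the multiplication map \((h_{1},h_{2})\mapsto h_{1}h_{2}\).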
Now, we want to define \(\operatorname{LLC}_{\operatorname{Bun}_{\operatorname{GU}_{2}}}\) in terms of \(\operatorname{LLC}_{\operatorname{Bun}_{H}}\). More precisely, for any \(b^{\prime}\in B(\operatorname{GU}_{2})\) we define \(\operatorname{LLC}_{b^{\prime}}:\Pi(J_{b^{\prime}})\to\Phi(J_{b^{\prime}})\) in terms of \(\operatorname{LLC}_{b}:\Pi(J_{b})\to\Phi(J_{b})\) for \(\operatorname{LLC}_{\operatorname{Bun}_{H}}\). For \(\pi_{b^{\prime}}=(\tilde{\pi},\chi)\in\Pi(J_{b^{\prime}})\), we consider the image \(\phi=\operatorname{LLC}_{\operatorname{Bun}_{H}}((\tilde{\pi},\chi))\), and we want to show that \(\phi:\operatorname{WD}_{L}\to{}^{L}J_{b}(\overline{\mathbb{Q}}_{\ell})\) factors through \({}^{L}J_{b^{\prime}}(\overline{\mathbb{Q}}_{\ell})\). To see this, from the exact sequence above, it suffices to show that the composition of \(\phi\) with the map \({}^{L}J_{b}(\overline{\mathbb{Q}}_{\ell})\to{}^{L}\mathbb{G}_{m}(\overline{ \mathbb{Q}}_{\ell})\) is trivial. However, we observe that the condition that \(\chi|_{L^{\times}}=\omega_{\tilde{\pi}}|_{L^{\times}}\) exactly implies that this image is trivial, since the composition is the \(L\)-parameter associated with the character \(\omega_{\tilde{\pi}}\chi^{-1}|_{L^{\times}}\). Thus, we have an \(L\)-parameter \(\phi^{\prime}:\operatorname{WD}_{L}\to{}^{L}J_{b^{\prime}}(\overline{\mathbb{Q }}_{\ell})\). We thus define the map \(\operatorname{LLC}_{b^{\prime}}\) to take \(\pi_{b^{\prime}}\) to \(\phi^{\prime}\).
We now have the following theorem about the groups we know to satisfy Assumption 4.4.
**Theorem 4.10**.: _[_10_, 11, 12, 13, 14]_ _Assumption 4.4 is true and \(\ell\) is very good in the following cases._
1. _The group_ \(\operatorname{Res}_{L/\mathbb{Q}_{p}}(\operatorname{GSp}_{4})\)_, with_ \(p>2\) _when_ \([L:\mathbb{Q}_{p}]\geq 2\)_, or with_ \(L=\mathbb{Q}_{p}\) _and_ \(p\) _arbitrary. In both cases, we need to assume that_ \(\ell\nmid 2[L:\mathbb{Q}_{p}]\)_._
2. _The groups_ \(\operatorname{GU}_{n}\) _or_ \(\operatorname{U}_{n}\) _for_ \(n\) _odd and defined with respect to an unramified quadratic extension_ \(E/\mathbb{Q}_{p}\)_, and_ \(\ell\neq 2\)_._
3. _The group_ \(\operatorname{Res}_{L/\mathbb{Q}_{p}}(\operatorname{GU}_{2})\) _defined with respect to an unramified quadratic extension_ \(L^{\prime}/L\)_, and_ \(\ell\) _such that_ \(\ell\nmid[L:\mathbb{Q}_{p}]\)_._
4. _The group_ \(\operatorname{Res}_{L/\mathbb{Q}_{p}}(\operatorname{GL}_{n})\) _for all_ \(p\) _and_ \(\ell\) _such that_ \(\ell\nmid[L:\mathbb{Q}_{p}]\)_._
Proof.: We first address the conditions on \(\ell\). If \(G\) is of type \(A_{n}\) then all \(\ell\) are very good. However, when \(G\) is a unitary group with \(n>2\), we also need to impose the additional assumption that the action of \(W_{\mathbb{Q}_{p}}\) on \(\hat{G}\) is of order prime to \(\ell\), and this gives us the condition that \(\ell\neq 2\). Similarly, this gives rise to the condition that \(\ell\nmid[L:\mathbb{Q}_{p}]\) in all of the cases. If \(G=\operatorname{Res}_{L/\mathbb{Q}_{p}}(\operatorname{GSp}_{4})\) then it is of type \(C\) and we also need to impose the additional condition that \(\ell\neq 2\).
Now, we turn to Assumption 4.4 (1). For \(\operatorname{GL}_{n}\), this follows from [12, Theorem I.9.6] and [13, Theorem 1.0.3], where \(\operatorname{LLC}_{b}\) is given by the Harris-Taylor correspondence. For \(\operatorname{Res}_{L/\mathbb{Q}_{p}}(\operatorname{GSp}_{4})\) and \(L/\mathbb{Q}_{p}\) as described above, this follows from [10, Theorem 1.1], where \(\operatorname{LLC}_{b}\) is given by Harris-Taylor for the non-basic \(b\) and Gan-Takeda [12] and Gan-Tantono [12] for
the basic elements6. For \(\mathrm{U}_{n}\) or \(\mathrm{GU}_{n}\), this is [1, Theorem 1.1], where \(\mathrm{LLC}_{b}\) for \(b\in B(G)\) was constructed by Mok [13] and Kaletha-Minguez-Shin-White [10]. For \(\mathrm{GU}_{2}\) this follows from the compatibility for \(\mathrm{GL}_{2}\), and the fact that the Fargues-Scholze local Langlands correspondence is compatible with taking products as well as maps \(G^{\prime}\to G\) that induces an isomorphism of adjoint groups [11, Theorem I.9.6 (v), (vi)].
Footnote 6: In the current version of [1], the assumption that \(p>2\) is only used to invoke basic uniformization of abelian type Shimura varieties, but when \(L=\mathbb{Q}_{p}\) one can just use Rappoport-Zink uniformization, so this assumption is unnecessary.
Now we explain why Assumption 4.4 (2) is satisfied. We recall that if \(J_{b}\) is a non quasi-split group then the fibers of \(\mathrm{LLC}_{b}\) over an \(L\)-parameter \(\phi:\mathrm{WD}_{\mathbb{Q}_{p}}\to{}^{L}G(\overline{\mathbb{Q}}_{\ell})\) should be empty if \(\phi\) factors through \({}^{L}M(\overline{\mathbb{Q}}_{\ell})\) for a Levi subgroup \(M\subset G\) which does not transfer to a Levi subgroup of \(J_{b}\). Such parameters are called irrelevant, and we expect the fiber to be empty if and only if \(\phi\) is irrelevant [10, Conjecture A.2]. For the Harris-Taylor correspondence, it is known that the fibers over irrelevant parameters are empty by standard properties of the Jacquet-Langlands correspondence. For \(\mathrm{GU}_{n}\) or \(\mathrm{U}_{n}\), odd unitary groups and their Levi subgroups are always quasi-split, so this reduces to the previous case of \(\mathrm{GL}_{n}\) using compatibility of the correspondence with parabolic induction. For \(\mathrm{GSp}_{4}\), one needs to show this for \(\mathrm{LLC}_{\mathrm{GU}_{2}(D)}\), where \(\mathrm{GU}_{2}(D)\) is the unique non-split inner form of \(\mathrm{GSp}_{4}\). Here this follows from the construction of Gan-Tantono (see the discussion before the main Theorem in [12]). For \(\mathrm{GU}_{2}\), we observe that, since \(\mathrm{GU}_{2}\), \(H\), and \(\mathrm{GL}_{2}\) all have the same adjoint group, \(b^{\prime}\in B(\mathrm{GU}_{2})\) is unramified exactly when \(b\in B(H)\) is, with notation as in Lemma 4.9. Now, let \(\tilde{T}\) be a maximal split torus of \(\mathrm{GL}_{2}\), and observe that \(T^{\prime}=(\tilde{T}\times\mathrm{Res}_{L^{\prime}/L}\mathbb{G}_{m})/\mathbb{G}_{m}\) is a maximal torus of \(\mathrm{GU}_{2}\) over \(L\). Given \(\pi_{b^{\prime}}=(\tilde{\pi},\chi)\) and \(\phi_{\pi_{b^{\prime}}}:\mathrm{WD}_{L}\to{}^{L}J_{b^{\prime}}(\overline{\mathbb{Q}}_{\ell})\), we see from the construction that this factors through \({}^{L}T^{\prime}(\overline{\mathbb{Q}}_{\ell})\) exactly when the associated \(L\)-parameter for \(H\), \(\phi_{(\tilde{\pi},\chi)}:\mathrm{WD}_{L}\to{}^{L}J_{b}(\overline{\mathbb{Q}}_{\ell})\), factors through \({}^{L}T(\overline{\mathbb{Q}}_{\ell})\), where \(T=\tilde{T}\times\mathrm{Res}_{L^{\prime}/L}\mathbb{G}_{m}\). Since Assumption 4.4 (2) is clearly compatible with taking products, \(H\) satisfies this assumption, and thus so does \(\mathrm{GU}_{2}\).
Now we explain why Assumption 4.4 (3) is satisfied. First, note that any parameter \(\phi:\mathrm{WD}_{\mathbb{Q}_{p}}\to{}^{L}G(\overline{\mathbb{Q}}_{\ell})\) induced from a toral parameter \(\phi_{T}\) necessarily has trivial monodromy, since \({}^{L}T(\overline{\mathbb{Q}}_{\ell})\) consists only of semi-simple elements. Moreover, since \(J_{b}\) is an inner form of \(M_{b}\), it follows that the set of all distinct conjugacy classes of parameters \(\phi^{\prime}:\mathrm{WD}_{\mathbb{Q}_{p}}\to{}^{L}J_{b}(\overline{\mathbb{Q}}_{\ell})\) which can give rise to \(\phi\) under the twisted embedding \({}^{L}J_{b}(\overline{\mathbb{Q}}_{\ell})\to{}^{L}G(\overline{\mathbb{Q}}_{\ell})\) is parameterized by a set of minimal length representatives of \(W_{b}=W_{G}/W_{M_{b}}\) via conjugating \(\phi^{\prime}\). We expect (see [10, Conjecture A.5]) that the fiber of \(\mathrm{LLC}_{b}\) over such a \(\phi^{\prime}\) inducing \(\phi\) consists of the irreducible constituents of the normalized induction of the \(L\)-packet of \(\phi_{T}^{w}\), which is just \(\chi^{w}\) by local class field theory, for \(w\in W_{b}\). This is indeed true in all the cases we consider (see for example [1, Section 2.3.3] for a discussion in the case of unitary groups; for \(\mathrm{GSp}_{4}\) and its unique non-split inner form \(\mathrm{GU}_{2}(D)\) it follows directly from the construction). We note that the twists by \(\delta_{P_{b}}^{-1/2}\) appear in order to account for the half Tate twists appearing in the definition of the _twisted_ embedding \({}^{L}J_{b}(\overline{\mathbb{Q}}_{\ell})\to{}^{L}G(\overline{\mathbb{Q}}_{\ell})\). For \(\mathrm{GU}_{2}\), we see that when \(b^{\prime}\) is unramified, we have isomorphisms of flag varieties
\[J_{b^{\prime}}/B_{b^{\prime}}\simeq J_{\tilde{b}}/B_{\tilde{b}}\simeq J_{b}/B_ {b}.\]
In the above situation where \(\phi_{\pi_{b^{\prime}}}\) factors through \({}^{L}T^{\prime}\), we see that, since \(H\) and \(\mathrm{GL}_{2}\) satisfy Assumption 4.4 (3), the corresponding representation of \(H(L)\) is of the form \((\tilde{\pi},\chi)\), where \(\tilde{\pi}\) is an irreducible constituent of \(i_{B_{\tilde{b}}}^{J_{\tilde{b}}}(\chi_{1}^{w})\otimes\delta_{P_{\tilde{b}}}^{-1/2}\), for the associated character \(\chi_{1}\) of \(\tilde{T}\). In particular, we see that \(\pi_{b^{\prime}}\) is a constituent of \(i_{B_{b^{\prime}}}^{J_{b^{\prime}}}(\chi_{1}^{w}\otimes\chi)\otimes\delta_{P_{b^{\prime}}}^{-1/2}\), as desired.
We now turn our attention to deriving our desired consequences.
### Perverse \(t\)-exactness
We recall that \(\operatorname{Bun}_{G}^{b}\simeq[*/\mathcal{J}_{b}]\), where \(\mathcal{J}_{b}:=\operatorname{Aut}(\mathcal{E}_{b})\) is the group diamond parameterizing automorphisms of the bundle \(\mathcal{E}_{b}\) attached to \(b\in B(G)\) on \(X\). The diamond \(\mathcal{J}_{b}\) has pure cohomological \(\ell\)-dimension over the base (in the sense of [13, Definition IV.1.17]) equal to \(\langle 2\rho_{G},\nu_{b}\rangle\), where \(\nu_{b}\) is the slope homomorphism of \(b\). Moreover, we have that \(\operatorname{Bun}_{G}\) is cohomologically smooth of pure \(\ell\)-dimension equal to \(0\) over the base. This motivates the following definition.
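Concretely (for illustration), for \(G=\operatorname{GL}_{2}\) and \(b\in B(G)\) with \(\nu_{b}=(a_{1},a_{2})\), \(a_{1}\geq a_{2}\), we have
\[\langle 2\rho_{G},\nu_{b}\rangle=a_{1}-a_{2},\]
so the \(t\)-structure of the following definition asks that \(j_{b}^{*}(A)\) be concentrated in degrees \(\leq a_{1}-a_{2}\); in particular, no shift occurs on the semistable locus, where \(\nu_{b}\) is central.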
**Definition 4.11**.: We define a perverse \(t\)-structure \(({}^{\mathrm{p}}\mathrm{D}^{\leq 0}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell}),{}^{\mathrm{p}}\mathrm{D}^{\geq 0}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell}))\) on \(\operatorname{D}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell})\) such that \(A\in\operatorname{D}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell})\) lies in \({}^{\mathrm{p}}\mathrm{D}^{\leq 0}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell})\) (resp. \({}^{\mathrm{p}}\mathrm{D}^{\geq 0}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell})\)) if and only if, for all \(b\in B(G)\), \(j_{b}^{*}(A)\) (resp. \(j_{b}^{!}(A)\)) sits in cohomological degrees \(\leq\langle 2\rho_{G},\nu_{b}\rangle\) (resp. \(\geq\langle 2\rho_{G},\nu_{b}\rangle\)).
For \(\phi\) induced from a generic toral parameter, we write \(({}^{\mathrm{p}}\mathrm{D}^{\leq 0}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell})_{\phi},{}^{\mathrm{p}}\mathrm{D}^{\geq 0}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell})_{\phi})\) for the restriction of this \(t\)-structure to \(\operatorname{D}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell})_{\phi}\). Let \(\operatorname{Perv}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell})_{\phi}\) denote the heart. We are almost ready to formulate our first big result. To do this, we need the following definition.
**Definition 4.12**.: We say that \(\phi_{T}\) is weakly normalized regular if it is generic and, writing \(\chi\) for the character attached to \(\phi_{T}\) under local class field theory, for all non-trivial \(w\in W_{G}\) we have that
\[\chi\otimes\delta_{B}^{1/2}\not\simeq(\chi\otimes\delta_{B}^{-1/2})^{w} \tag{15}\]
holds. Similarly, we say \(\phi_{T}\) is regular if for all \(w\in W_{G}\) non-trivial we have that \(\chi\not\simeq\chi^{w}\).
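To unwind condition (15) in the simplest case (an illustration only): for \(G=\operatorname{GL}_{2}\) and \(\chi=(\chi_{1},\chi_{2})\) we have \(\delta_{B}(\operatorname{diag}(t_{1},t_{2}))=|t_{1}/t_{2}|\), so for the non-trivial \(w\in W_{G}\)
\[\chi\otimes\delta_{B}^{1/2}=(\chi_{1}|\cdot|^{1/2},\chi_{2}|\cdot|^{-1/2}),\qquad(\chi\otimes\delta_{B}^{-1/2})^{w}=(\chi_{2}|\cdot|^{1/2},\chi_{1}|\cdot|^{-1/2}),\]
and both (15) and regularity reduce to the single condition \(\chi_{1}\not\simeq\chi_{2}\).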
To motivate this, we recall that, since \(\phi_{T}\) is weakly normalized regular, we have by [1, Theorem 10.10] an object \(\operatorname{nEis}(\mathcal{S}_{\phi_{T}})\in\operatorname{Perv}(\operatorname {Bun}_{G},\overline{\mathbb{F}}_{\ell})\), which is a perverse filtered Hecke eigensheaf on \(\operatorname{Bun}_{G}\), assuming 4.4 holds. Moreover, it is supported on the set of unramified elements and, for \(b\in B(G)_{\operatorname{un}}\), its stalks are given by
\[\operatorname{Red}_{b,\phi}^{\operatorname{tw}}:=\bigoplus_{w\in W_{b}}\rho_ {b,w}[-\langle 2\rho_{G},\nu_{b}\rangle],\]
where we recall that \(\rho_{b,w}:=i_{B_{b}}^{J_{b}}(\chi^{w})\otimes\delta_{P_{b}}^{-1/2}\). In particular, by Proposition 4.5 it defines an object in the localized category \(\operatorname{Perv}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell})_{\phi}\). To show the desired perverse \(t\)-exactness property, we would like to use the Hecke eigensheaf property of \(\operatorname{nEis}(\mathcal{S}_{\phi_{T}})\). Given a geometric dominant cocharacter \(\mu\), we consider the highest weight tilting module \(\mathcal{T}_{\mu}\) attached to \(\mu\). We let
\[T_{\mu}:\operatorname{D}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell}) \to\operatorname{D}(\operatorname{Bun}_{G},\overline{\mathbb{F}}_{\ell})^{BW _{E_{\mu}}}\]
be the Hecke operator attached to the representation \(\mathcal{T}_{\mu}\), where \(E_{\mu}\) denotes the reflex field of \(\mu\). The sheaf \(T_{\mu}(\operatorname{nEis}(\mathcal{S}_{\phi_{T}}))\) carries a filtration which, if it splits, guarantees an isomorphism \(\operatorname{nEis}(\mathcal{S}_{\phi_{T}})\boxtimes r_{\mu}\circ\phi\simeq T_{ \mu}(\operatorname{nEis}(\mathcal{S}_{\phi_{T}}))\), and we say that \(\phi_{T}\) is \(\mu\)-regular ([1, Definition 10.11]) if such a splitting exists. Here \(r_{\mu}:\hat{G}\to\operatorname{GL}(\mathcal{T}_{\mu})\) is the map defined by the tilting module \(\mathcal{T}_{\mu}\). The condition of being \(\mu\)-regular is guaranteed by the following stronger condition, using [1, Theorem 1.17].
**Definition 4.13**.: We write \((-)^{\Gamma}:\mathbb{X}_{*}(T_{\overline{\mathbb{Q}}_{p}})\to\mathbb{X}_{*}(T_{ \overline{\mathbb{Q}}_{p}})/\Gamma\) for the natural map from geometric cocharacters to their \(\Gamma\)-orbits. For a toral parameter \(\phi_{T}:W_{\mathbb{Q}_{p}}\to{}^{L}T(\overline{\mathbb{F}}_{\ell})\) and a geometric dominant cocharacter \(\mu\), we say \(\phi_{T}\) is strongly \(\mu\)-regular if the Galois cohomology complexes
\[R\Gamma(W_{\mathbb{Q}_{p}},(\nu-\nu^{\prime})^{\Gamma}\circ\phi_{T})\]
are trivial for \(\nu\),\(\nu^{\prime}\) defining distinct \(\Gamma\)-orbits of weights in the highest weight tilting module \(\mathcal{T}_{\mu}\).
_Remark 4.14_.: In particular, strong \(\mu\)-regularity implies \(\mu\)-regularity, and if we know strong \(\mu\)-regularity then it implies \(\mu^{\prime}\)-regularity for any \(\mathcal{T}_{\mu^{\prime}}\) which occurs as a direct summand of the tensor product \(\mathcal{T}_{\mu}^{\otimes n}\), by [1, Proposition 10.12]. Also, as we will see, strong \(\mu\)-regularity is often implied by genericity for some suitably chosen \(\mu\).
More importantly, we can use this to deduce the following.
**Proposition 4.15**.: _For any \(\phi\) induced from a generic \(\phi_{T}\), assume, for all \(b\in B(G)_{\mathrm{un}}\) and \(w\in W_{b}\), the representations \(\rho_{b,w}\) are semi-simple, and that Assumption 4.4 is true. Then we have a direct sum decomposition_
\[\bigoplus_{b\in B(G)_{\mathrm{un}}}\mathrm{D}^{\mathrm{adm}}(\mathrm{Bun}^{b}_{G },\overline{\mathbb{F}}_{\ell})_{\phi}\simeq\mathrm{D}^{\mathrm{ULA}}( \mathrm{Bun}_{G},\overline{\mathbb{F}}_{\ell})_{\phi},\]
_where \(\mathrm{D}^{\mathrm{adm}}(\mathrm{Bun}^{b}_{G},\overline{\mathbb{F}}_{\ell}) \subset\mathrm{D}(\mathrm{Bun}^{b}_{G},\overline{\mathbb{F}}_{\ell})\simeq \mathrm{D}(J_{b}(\mathbb{Q}_{p}),\overline{\mathbb{F}}_{\ell})\) denotes the subcategory of admissible complexes._
_Moreover, for any \(A\in\mathrm{D}^{\mathrm{ULA}}(\mathrm{Bun}^{b}_{G},\overline{\mathbb{F}}_{ \ell})_{\phi}\simeq\mathrm{D}^{\mathrm{adm}}(J_{b}(\mathbb{Q}_{p}),\overline {\mathbb{F}}_{\ell})_{\phi}\), we have that the \(!\) and \(*\) pushforwards agree with respect to the inclusion \(j_{b}:\mathrm{Bun}^{b}_{G}\to\mathrm{Bun}_{G}\)._
Proof.: The first part of the Proposition follows from the second part. To see this, we use the semi-orthogonal decomposition of \(\mathrm{D}(\mathrm{Bun}_{G},\overline{\mathbb{F}}_{\ell})\) into the categories \(\mathrm{D}(\mathrm{Bun}^{b}_{G},\overline{\mathbb{F}}_{\ell})\simeq\mathrm{D}(J_{b}(\mathbb{Q}_{p}),\overline{\mathbb{F}}_{\ell})\) via the excision spectral sequence. Using that the \(!\) and \(*\)-pushforwards agree for all objects \(A\in\mathrm{D}(\mathrm{Bun}^{b}_{G},\overline{\mathbb{F}}_{\ell})_{\phi}\), we see that the excision spectral sequence degenerates, and the first part of the claim follows. To see the second part, we now use Proposition 4.5 to see that an object \(A\in\mathrm{D}(\mathrm{Bun}_{G},\overline{\mathbb{F}}_{\ell})_{\phi}\) can only be supported on the HN-strata \(\mathrm{Bun}^{b}_{G}\) for \(b\in B(G)_{\mathrm{un}}\), and that the restriction of \(A\) to \(\mathrm{Bun}^{b}_{G}\) has irreducible constituents which are subquotients of the representations \(\rho_{b,w}\) for varying \(w\in W_{b}\). For the representations \(\rho_{b,w}\), we have the following.
**Proposition 4.16**.: _[_10_, Proposition 11.13]_ _For all \(b\in B(G)_{\mathrm{un}}\) and \(w\in W_{b}\), the natural map_
\[j_{b!}(\rho_{b,w})\to Rj_{b*}(\rho_{b,w})\]
_is an isomorphism assuming \(\phi_{T}\) is generic._
So the \(!\) and \(*\) pushforwards agree on the \(\rho_{b,w}\) and, since we are assuming the representations \(\rho_{b,w}\) are semisimple, the statement follows for any constituent of \(\rho_{b,w}\). This is enough to conclude it for any \(A\in\mathrm{D}^{\mathrm{ULA}}(\mathrm{Bun}^{b}_{G},\overline{\mathbb{F}}_{\ell})_{\phi}\simeq\mathrm{D}^{\mathrm{adm}}(J_{b}(\mathbb{Q}_{p}),\overline{\mathbb{F}}_{\ell})_{\phi}\) using the following lemma.
**Lemma 4.17**.: _Assuming 4.4, for \(\phi\) a generic parameter and any \(A\in\mathrm{D}^{\mathrm{adm}}(J_{b}(\mathbb{Q}_{p}),\overline{\mathbb{F}}_{ \ell})_{\phi}\) the cohomology of \(A\) has finite length._
Proof.: By Assumption 4.4, we know that any irreducible constituent of the cohomology of \(A\) is an irreducible constituent of \(\rho_{b,w}\) for some \(w\in W_{b}\). It follows by [20, II.5.13] that there are only finitely many possibilities for the irreducible constituents. Therefore, by choosing \(K\subset J_{b}(\mathbb{Q}_{p})\) a sufficiently small open compact subgroup such that all these representations have an invariant vector, we deduce, since \(A^{K}\) is a perfect complex by assumption, that \(A\) must have finite length cohomology.
We note that the semi-simplicity of \(\rho_{1,1}=i^{G}_{B}(\chi)\) is implied by the conditions discussed above.
**Lemma 4.18**.: _Let \(\phi_{T}:W_{\mathbb{Q}_{p}}\to{}^{L}T(\overline{\mathbb{F}}_{\ell})\) be weakly normalized regular and regular. Suppose there exists a \(\mu\) which is not fixed under any non-trivial \(w\in W_{G}\) and \(\phi_{T}\) is \(\mu\)-regular. Then \(i^{G}_{B}(\chi)\) is irreducible._
Proof.: It follows, by [10, Corollary 11.23] and the assumed \(\mu\)-regularity, that we have an isomorphism \(i^{G}_{B}(\chi)\simeq i^{G}_{B}(\chi^{w})=i^{G}_{B^{w}}(\chi)\) for all \(w\in W_{G}\). Here \(B^{w}\) is the conjugate of \(B\) by \(w\). We write \(r^{G}_{B}\) for the normalized parabolic restriction functor. We recall that we are working with \(\ell\)-modular coefficients in possibly non-banal characteristic so \(i^{G}_{B}(\chi)\) may have cuspidal constituents. In particular, we will need the following lemma.
**Lemma 4.19**.: _Let \(w_{0}\in W_{G}\) be the element of longest length. For a character \(\chi:T(\mathbb{Q}_{p})\to\overline{\mathbb{F}}_{\ell}^{*}\), if we have an isomorphism \(i^{G}_{B}(\chi)\simeq i^{G}_{B}(\chi^{w_{0}})\) of \(G(\mathbb{Q}_{p})\)-modules then any non-zero quotient \(\sigma^{\prime}\) of \(i^{G}_{B}(\chi)\) satisfies \(r^{G}_{B}(\sigma^{\prime})\neq 0\)._
Proof.: We apply second adjointness [1, Corollary 1.3] to the map
\[i^{G}_{B^{w_{0}}}(\chi)\xrightarrow{\simeq}i^{G}_{B}(\chi)\to\sigma^{\prime}\]
to conclude the existence of a non-zero map \(\chi\to r^{G}_{B}(\sigma^{\prime})\), which implies the claim.
Now suppose for the sake of contradiction that \(i^{G}_{B}(\chi)\) is not irreducible. Then there exists an exact sequence
\[0\to\sigma\to i^{G}_{B}(\chi)\to\sigma^{\prime}\to 0.\]
Since parabolic restriction is exact (for example by using second adjointness), we get an exact sequence
\[0\to r^{G}_{B}(\sigma)\to r^{G}_{B}i^{G}_{B}(\chi)\to r^{G}_{B}(\sigma^{\prime} )\to 0.\]
This allows us to conclude an equality of lengths of representations:
\[\ell(r^{G}_{B}(\sigma))+\ell(r^{G}_{B}(\sigma^{\prime}))=\ell(r^{G}_{B}(i^{G}_ {B}(\chi)))\leq|W_{G}|,\]
where the inequality follows from the geometric Lemma [1, Section 2.8]7. By the previous lemma, we conclude that \(\ell(r^{G}_{B}(\sigma))<|W_{G}|\). Now, since we know that \(\sigma\subset i^{G}_{B}(\chi)\simeq i^{G}_{B}(\chi^{w})\) for all \(w\in W_{G}\), Frobenius reciprocity implies that we have non-zero maps \(r^{G}_{B}(\sigma)\to\chi^{w}\) for all \(w\in W_{G}\). This gives a contradiction by the regularity of \(\chi\).
Footnote 7: Note that this bound however fails without taking normalized restriction because of the aforementioned cuspidal constituents of \(i^{G}_{B}(\chi)\) in non-banal characteristic (cf. [1, Page 48]).
We now have the following key claim.
**Theorem 4.20**.: _Let \(\mu\) be a geometric dominant cocharacter. We write_
\[T_{\mu}:\mathrm{D}(\mathrm{Bun}_{G},\overline{\mathbb{F}}_{\ell})\to\mathrm{D}(\mathrm{Bun}_{G},\overline{\mathbb{F}}_{\ell})^{BW_{E_{\mu}}}\]
_for the Hecke operator attached to the highest weight tilting module \(\mathcal{T}_{\mu}\) of highest weight \(\mu\). Then the operator restricted to \(\mathrm{D}^{\mathrm{ULA}}(\mathrm{Bun}_{G},\overline{\mathbb{F}}_{\ell})_{\phi}\) is perverse \(t\)-exact if \(\phi_{T}\) is weakly normalized regular, Assumption 4.4 is true, the \(\rho_{b,w}\) are semi-simple for all \(b\in B(G)_{\mathrm{un}}\) and \(w\in W_{b}\), and \(\phi_{T}\) is \(\mu\)-regular._
Proof.: Using Lemma 4.17, the commutation of Hecke operators with colimits, Proposition 4.5, Proposition 4.15, and semi-simplicity of the representations \(\rho_{b,w}\), we can reduce to showing, for all \(b\in B(G)_{\mathrm{un}}\), that if we consider the complex
\[\mathrm{Red}^{\mathrm{tw}}_{b,\phi}:=\bigoplus_{w\in W_{b}}i^{J_{b}}_{B_{b}}( \chi^{w})\otimes\delta_{P_{b}}^{-1/2}[-\langle 2\rho_{G},\nu_{b}\rangle]\in \mathrm{D}(\mathrm{Bun}^{b}_{G},\overline{\mathbb{F}}_{\ell})_{\phi}\]
then we have a containment
\[T_{\mu}(j_{b!}(\mathrm{Red}^{\mathrm{tw}}_{b,\phi}))\in\mathrm{Perv}(\mathrm{ Bun}_{G},\overline{\mathbb{F}}_{\ell})_{\phi}\]
for the fixed \(\mu\). However, \(\mathrm{Red}^{\mathrm{tw}}_{b,\phi}\) are the stalks of the perverse filtered Hecke eigensheaf \(\mathrm{nEis}(\mathcal{S}_{\phi_{T}})\) and, since \(\phi_{T}\) is \(\mu\)-regular by assumption, we have an isomorphism:
\[T_{\mu}(\mathrm{nEis}(\mathcal{S}_{\phi_{T}}))\simeq\mathrm{nEis}(\mathcal{S} _{\phi_{T}})\boxtimes r_{\mu}\circ\phi\in\mathrm{Perv}(\mathrm{Bun}_{G}, \overline{\mathbb{F}}_{\ell})_{\phi}^{BW_{E_{\mu}}}.\]
This gives the desired claim.
We are almost ready to deduce the result we need for torsion vanishing. To do this, we will first need to discuss when the additional assumptions of weak normalized regularity and \(\mu\)-regularity are superfluous, possibly under certain assumptions on \(\ell\).
### Verification of additional assumptions
We first need the following lemma which will allow us to base change to splitting fields.
**Lemma 4.21**.: _Let \(G\) be a quasi-split connected reductive group with splitting field \(E\). If \(\phi_{T}\) is generic then \(R\Gamma(W_{E},\tilde{\alpha}\circ\phi_{T}|_{W_{E}})\) is trivial for all absolute coroots \(\tilde{\alpha}\in\mathbb{X}_{*}(T_{\overline{\mathbb{Q}}_{p}})\)._
Proof.: We recall that, given a \(\Gamma\)-orbit of positive absolute coroots \(\alpha\in\mathbb{X}_{*}(T_{\overline{\mathbb{Q}}_{p}})^{+}/\Gamma\), if \(E_{\alpha}\) denotes the reflex field of \(\alpha\) then the representation of \({}^{L}T\) defined by \(\alpha\) is given by choosing a representative \(\tilde{\alpha}\in\mathbb{X}_{*}(T_{\overline{\mathbb{Q}}_{p}})^{+}\) of \(\alpha\), and inducing the representation of \(\hat{T}\rtimes W_{E_{\alpha}}/W_{E}\) defined by it to \(W_{\mathbb{Q}_{p}}/W_{E}\). This reduces the claim to Shapiro's Lemma.
Other than the groups listed in Theorem 4.10, there are two more groups of interest to us. We will define them now.
Let \(L/\mathbb{Q}_{p}\) be a finite extension. We have the similitude maps from \(\mathrm{GL}_{n}\) (resp. \(\mathrm{GSp}_{4}\))
\[\nu:\mathrm{Res}_{L/\mathbb{Q}_{p}}\mathrm{GL}_{n}\to\mathrm{Res}_{L/\mathbb{ Q}_{p}}\mathbb{G}_{m}\]
(resp.
\[\nu:\mathrm{Res}_{L/\mathbb{Q}_{p}}\mathrm{GSp}_{4}\to\mathrm{Res}_{L/\mathbb{ Q}_{p}}\mathbb{G}_{m}).\]
We thus define
\[G(\mathrm{SL}_{n,L}):=\mathrm{Res}_{L/\mathbb{Q}_{p}}\mathrm{GL}_{n}\times_{ \nu}\mathbb{G}_{m},\]
\[G(\mathrm{Sp}_{4,L}):=\mathrm{Res}_{L/\mathbb{Q}_{p}}\mathrm{GSp}_{4}\times_{ \nu}\mathbb{G}_{m}.\]
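We note (as an immediate check that the formalism of Section 4.1.1 applies to these groups) that the evident inclusion
\[\psi:G(\mathrm{SL}_{n,L})\hookrightarrow\mathrm{Res}_{L/\mathbb{Q}_{p}}\mathrm{GL}_{n}\]
is of the type considered in Proposition 4.8: both groups have derived group \(\mathrm{Res}_{L/\mathbb{Q}_{p}}\mathrm{SL}_{n}\) and adjoint group \(\mathrm{Res}_{L/\mathbb{Q}_{p}}\mathrm{PGL}_{n}\), and similarly for \(G(\mathrm{Sp}_{4,L})\hookrightarrow\mathrm{Res}_{L/\mathbb{Q}_{p}}\mathrm{GSp}_{4}\).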
**Lemma 4.22**.: _Let \(L/\mathbb{Q}_{p}\) be a finite extension and \(G\) be one of the following groups:_
1. \(\mathrm{Res}_{L/\mathbb{Q}_{p}}\mathrm{U}_{n}\)_,_
2. \(\mathrm{Res}_{L/\mathbb{Q}_{p}}\mathrm{GU}_{n}\)_,_
3. \(\mathrm{Res}_{L/\mathbb{Q}_{p}}\mathrm{GL}_{n}\)_,_
4. \(G(\mathrm{SL}_{n,L})\)_._
_If \(\phi_{T}\) is a generic toral parameter for \(G\) then \(\phi_{T}\) is weakly normalized regular and regular. Moreover, for \((1)-(3)\), \(\phi_{T}\) will be \(\mu\)-regular for all \(\mu\), while, for \((4)\), \(\phi_{T}\) will be \(\mu\)-regular for \(\mu\) of the form \(\prod_{\tau:L\hookrightarrow\overline{\mathbb{Q}}_{p}}\mu^{\prime}\) for \(\mu^{\prime}\) a cocharacter of \(\mathrm{GL}_{n}\)._
Proof.: We establish weak normalized regularity, and omit the proof that \(\phi_{T}\) is regular, as it is strictly easier. For cases \((1)\), \((2)\), and \((3)\), we may assume for simplicity that \(L=\mathbb{Q}_{p}\), the proof in general being essentially the same. If \(G=\mathrm{GL}_{n}\) then this is [1, Lemma 3.10].
We now consider the case of \(G=\mathrm{U}_{n}\) defined with respect to a quadratic extension \(E/\mathbb{Q}_{p}\). Suppose there exists a non-trivial \(w\in W_{G}\) such that we have an isomorphism:
\[\chi\otimes\delta_{B}^{1/2}\simeq(\chi\otimes\delta_{B}^{-1/2})^{w}\]
of characters on \(T(\mathbb{Q}_{p})\). We recall that \(G_{E}\simeq\mathrm{GL}_{n,E}\) where \(E/\mathbb{Q}_{p}\) denotes the quadratic extension defining the unitary group. By the definition of the modulus character in terms of the transformation character of Haar measures, we observe that the precomposition of \(\delta_{B}\) with the Norm map \(T(E)\to T(\mathbb{Q}_{p})\) gives the modulus character on the Borel of \(\mathrm{GL}_{n,E}\). Therefore, by precomposing the previous isomorphism with this norm map, we obtain an analogous relationship of characters on the torus \(T(E)\), which is the maximal torus of \(\mathrm{GL}_{n,E}\). Then Lemma 4.21 reduces us to the \(\mathrm{GL}_{n}\) case.
The case of \(\mathrm{GU}_{n}\) similarly reduces to the \(\mathrm{U}_{n}\) case by setting the coordinate on \(T(\mathbb{Q}_{p})\) corresponding to the similitude factor to be equal to \(1\).
For case \((4)\), let \(d=[L:\mathbb{Q}_{p}]\). Observe that we have an isomorphism \(G(\mathrm{SL}_{n,L})_{L}\simeq H_{L}\), where
\[H=\left\{(g_{i})\in\prod_{L\hookrightarrow\overline{\mathbb{Q}}_{p}}\mathrm{GL }_{n}:\det(g_{i})=\det(g_{j})\ \forall i,j\right\}.\]
Applying Lemma 4.21 again and arguing as for unitary groups, it suffices to work with \(H_{L}\). We assume \(L=\mathbb{Q}_{p}\) for notational simplicity. The maximal torus \(T^{\prime}\) in \(H\) can be identified with
\[\mathbb{G}_{m}^{d}\times\mathbb{G}_{m},\]
via the map \((t_{1},\ldots,t_{d},t)\mapsto(\operatorname{diag}(t_{i},tt_{i}^{-1}))_{i=1,\ldots,d}\). Since \(W_{H}=\prod W_{\operatorname{GL}_{2}}\), consider any element \(w^{\prime}\in W_{H}\), which we assume for notational simplicity is of the form
\[(w,\ldots,w,\operatorname{id},\ldots,\operatorname{id}),\]
where \(w\) is the non-trivial element of the Weyl group of \(\operatorname{GL}_{2}\), and we have \(w\) in the first \(k\) entries, for some integer \(0\leq k\leq d\). The general case follows similarly. Observe that the isomorphism
\[\chi\otimes\delta_{B}^{1/2}\simeq(\chi\otimes\delta_{B}^{-1/2})^{w^{\prime}}\]
becomes
\[\prod_{i=1}^{k}\chi_{i1}(t_{i}^{2}t^{-1})\chi_{i2}(tt_{i}^{-2})\simeq\prod_{i =k+1}^{d}|t_{i}^{-2}t|.\]
If we substitute \(t=x\) and \(t_{i}=x\) for \(i=1,\ldots,k\), while for \(k+1\leq i\leq d\) we set \(t_{i}=1\) if \(i-k\) is odd and \(t_{i}=x\) if \(i-k\) is even, we see that \(\prod_{i=1}^{k}\chi_{i1}(x)\chi_{i2}^{-1}(x)\) is isomorphic to either the trivial representation \(\mathbf{1}\) or \(|\cdot|\), contradicting genericity.
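Explicitly, under this substitution each factor on the left-hand side becomes \(\chi_{i1}(x^{2}x^{-1})\chi_{i2}(xx^{-2})=\chi_{i1}\chi_{i2}^{-1}(x)\), while the right-hand side telescopes:

\[\prod_{i=k+1}^{d}|t_{i}^{-2}t|=\prod_{\substack{k<i\leq d\\ i-k\text{ odd}}}|x|\cdot\prod_{\substack{k<i\leq d\\ i-k\text{ even}}}|x|^{-1}=\begin{cases}|x|&\text{if }d-k\text{ is odd},\\ \mathbf{1}&\text{if }d-k\text{ is even}.\end{cases}\]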
We now show \(\mu\)-regularity. Again, for cases (1), (2), and (3), observe that if \(G\) is of the form \(\operatorname{Res}_{L/\mathbb{Q}_{p}}G^{\prime}\) and \(T^{\prime}\) denotes the maximal torus of \(G^{\prime}\) then we have an isomorphism
\[\mathbb{X}_{*}(T_{\overline{\mathbb{Q}}_{p}})\simeq\prod_{\phi\in \operatorname{Hom}_{\mathbb{Q}_{p}}(L,\overline{L})}\mathbb{X}_{*}(T^{\prime }_{\overline{L}})\]
where \(\overline{L}\) is an algebraic closure of \(L\). Using this, we can without loss of generality assume that \(L=\mathbb{Q}_{p}\). In the case that \(G=\operatorname{GL}_{n},\operatorname{U}_{n}\), or \(\operatorname{GU}_{n}\), this follows as in the proof of [1, Corollary 10.16]. We recall briefly how this goes.
One can consider the geometric dominant cocharacter \(\mu=(1,0,\ldots,0)\) of \(\operatorname{GL}_{n}\). This defines the standard representation \(V_{\operatorname{std}}\) of \(\hat{G}\simeq\operatorname{GL}_{n}\). This cocharacter is in particular minuscule, so the weights form a single Weyl group orbit with representative \((1,0,\ldots,0)\). From here, it easily follows that the differences of the weights appearing in \(V_{\operatorname{std}}\) define coroots of \(G\). In particular, it follows that, if \(\phi_{T}\) is generic then it is strongly \(\mu\)-regular for \(\mu=(1,0,\ldots,0)\) in the sense of Definition 4.13, and this implies that the filtration on \(T_{\mu}(\operatorname{nEis}(\mathcal{S}_{\phi_{T}}))\) splits by [1, Theorem 10.10] for this \(\mu\). Now, the tilting modules \(\mathcal{T}_{\omega_{i}}=\Lambda^{i}(V_{\operatorname{std}})\) attached to the other fundamental coweights \(\omega_{i}=(1^{i},0^{n-i})\) of \(G\) can be realized as direct summands of \(V_{\operatorname{std}}^{\otimes i}\), and it follows that \(\phi_{T}\) is \(\mu\)-regular for \(\mu=\omega_{i}\) by [1, Proposition 10.12]. Since any dominant cocharacter can be written as a linear combination of the fundamental coweights, the claim for any \(\mu\) now follows from [1, Corollary 10.13]. The cases of \(\operatorname{GU}_{n}\) and \(\operatorname{U}_{n}\) follow in a very similar way, using Lemma 4.21.
For case (4), observe that as before, we can base change to \(L\), and since \(H\) is a subgroup of \(\prod\operatorname{GL}_{n}\), all cocharacters \(\mu\) of \(H\) define products of cocharacters for \(\operatorname{GL}_{n}\). Now, consider a cocharacter of the form \(\mu=\prod_{\tau}\mu_{\tau}\), where \(\mu_{\tau^{\prime}}=\mu_{\tau}\) for all \(\tau,\tau^{\prime}\) and \(\mu_{\tau}\) is a cocharacter of \(\operatorname{GL}_{n}\). Note that every dominant minuscule cocharacter of \(H\) will be of this form. This is because a cocharacter \(\mu=\prod_{\tau}\mu_{\tau}\) of \(\prod\operatorname{GL}_{n}\) factors through \(H\) exactly when the composition with the determinant is equal for all \(\tau\), and we see that for \(\mu\) to be minuscule, \(\mu_{\tau}\) must be one of the fundamental coweights \(\omega_{i}\), which, after composing with the determinant, give different characters for \(i\neq j\). Now, we observe that the same argument as above shows that the differences of Weyl conjugates define coroots of \(H\) for the cocharacter \(\mu_{1}=\prod(1,0,\ldots,0)\), while for all other cocharacters of the form \(\mu=\prod\mu^{\prime}\), where \(\mu^{\prime}\) is a fundamental coweight of \(\operatorname{GL}_{n}\), they appear as weights in some tensor power of the highest weight representation corresponding to \(\mu_{1}\). The claim for any \(\mu=\prod\mu^{\prime}\), where \(\mu^{\prime}\) is a dominant cocharacter of \(\operatorname{GL}_{n}\), follows by the same argument as above, using [1, Corollary 10.13].
**Lemma 4.23**.: _Let \(L/\mathbb{Q}_{p}\) be a finite extension, and \(G\) be one of the following groups:_
1. \(\operatorname{Res}_{L/\mathbb{Q}_{p}}\mathrm{GSp}_{4}\)__
2. \(G(\mathrm{Sp}_{4,L})\)_._
_Suppose moreover that \(\ell\neq 2\) and \(\ell\) is banal with respect to \(L\) (i.e. \((q^{4}-1,\ell)=1\), where \(q\) is the size of the residue field of \(L\)). If \(\phi_{T}\) is a generic toral parameter for \(G\) then \(\phi_{T}\) is weakly normalized regular and regular. Moreover, for \((1)\), \(\phi_{T}\) will be \(\mu\)-regular for all \(\mu\), and, for \((2)\), \(\phi_{T}\) will be \(\mu\)-regular for all \(\mu\) of the form \(\prod_{\tau:L\hookrightarrow\overline{\mathbb{Q}}_{p}}\mu^{\prime}\) for \(\mu^{\prime}\) a cocharacter of \(\mathrm{GSp}_{4}\)._
Proof.: We will first establish weak normalized regularity, and again suppress giving the proof that \(\phi_{T}\) is regular as it is strictly easier. Again, for \((1)\), we assume that \(L=\mathbb{Q}_{p}\) for this part with the proof in general being more or less the same. We will show this by contradiction. Suppose on the contrary that there exists some \(w\in W_{G}\) such that we have an isomorphism
\[\chi\otimes\delta_{B}^{1/2}\simeq(\chi\otimes\delta_{B}^{-1/2})^{w}. \tag{16}\]
For case \((1)\), consider the following parametrization of the maximal torus \(T\)
\[a:(\mathbb{Q}_{p}^{*})^{2}\times\mathbb{Q}_{p}^{*}\to T(\mathbb{Q}_{p}) \tag{17}\]
\[(t_{1},t_{2},t)\mapsto\begin{pmatrix}t_{1}&0&0&0\\ 0&t_{2}&0&0\\ 0&0&tt_{2}^{-1}&0\\ 0&0&0&tt_{1}^{-1}\end{pmatrix}\]
as in [14, Page 135]. This allows us to write the character \(\chi:T(\mathbb{Q}_{p})\to\overline{\mathbb{F}}_{\ell}^{*}\) as \(\chi_{1}(t_{1})\chi_{2}(t_{2})\nu(t)\), for characters \(\mathbb{Q}_{p}^{*}\to\overline{\mathbb{F}}_{\ell}^{*}\). Similarly, we can express the modulus character as
\[\delta_{B}(t_{1},t_{2},t)=|t_{1}|^{4}|t_{2}|^{2}|t|^{-3}\]
where \(|\cdot|\) is the norm character. We now check that \((16)\) cannot hold for any of the seven non-trivial elements of the Weyl group.
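Before doing so, let us record where the formula for \(\delta_{B}\) comes from: in the parametrization (17), the positive roots of \(\mathrm{GSp}_{4}\) take the values \(t_{1}/t_{2}\), \(t_{1}t_{2}/t\), \(t_{1}^{2}/t\), and \(t_{2}^{2}/t\), so that

\[\delta_{B}(t_{1},t_{2},t)=\prod_{\alpha>0}|\alpha(t_{1},t_{2},t)|=\left|\frac{t_{1}}{t_{2}}\right|\left|\frac{t_{1}t_{2}}{t}\right|\left|\frac{t_{1}^{2}}{t}\right|\left|\frac{t_{2}^{2}}{t}\right|=|t_{1}|^{4}|t_{2}|^{2}|t|^{-3}.\]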
Consider the Weyl group element corresponding to the transposition:
\[w_{1}:a(t_{1},t_{2},t)\mapsto a(t_{2},t_{1},t)\]
If we consider equation \((16)\) with respect to this element and evaluate on \((t_{1},t_{2},t)=(x,1,x)\), then we obtain the equation
\[\chi_{1}(x)|x|^{2}|x|^{-3/2}\simeq\chi_{2}(x)|x|^{-1}|x|^{3/2}\]
which gives an isomorphism \(\chi_{1}\chi_{2}^{-1}(x)\simeq\mathbf{1}\), contradicting genericity.
Similarly, if we consider the simple Weyl group element
\[w_{2}:a(t_{1},t_{2},t)\mapsto a(t_{1},t_{2}^{-1}t,t)\]
then evaluating equation \((16)\) for this relationship reduces to
\[\chi_{1}(t_{1})\chi_{2}(t_{2})\nu(t)|t_{1}|^{2}|t_{2}||t|^{-3/2}\simeq\chi_{1} (t_{1})\chi_{2}(t_{2}^{-1}t)\nu(t)|t_{1}|^{-2}|t_{2}||t|^{-1}|t|^{3/2}\]
cancelling terms we obtain that
\[\chi_{2}(t)^{-1}\chi_{2}(t_{2})^{2}\simeq|t_{1}|^{-4}|t|^{2}\]
so if we evaluate at \((t_{1},t_{2},t)=(x^{3},x^{2},x^{4})\) then we obtain
\[\mathbf{1}\simeq|x|^{-4}\]
which contradicts the assumption that \((p^{4}-1,\ell)=1\).
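For the record, the substitution gives

\[\chi_{2}(t)^{-1}\chi_{2}(t_{2})^{2}=\chi_{2}(x^{4})^{-1}\chi_{2}(x^{2})^{2}=\mathbf{1},\qquad|t_{1}|^{-4}|t|^{2}=|x^{3}|^{-4}|x^{4}|^{2}=|x|^{-4},\]

so the two sides of the isomorphism indeed reduce to \(\mathbf{1}\simeq|x|^{-4}\).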
Consider now the Weyl group element
\[w_{3}:a(t_{1},t_{2},t)\mapsto a(t_{2}^{-1}t,t_{1},t)\]
if we evaluate equation (16) then we obtain
\[\chi_{1}(t_{1})\chi_{2}(t_{2})\nu(t)|t_{1}|^{2}|t_{2}||t|^{-3/2}\simeq\chi_{1}(t _{2})^{-1}\chi_{1}(t)\chi_{2}(t_{1})\nu(t)|t_{2}|^{2}|t|^{-2}|t_{1}|^{-1}|t|^{3/2}\]
rearranging and cancelling terms we obtain
\[\chi_{1}\chi_{2}^{-1}(t_{1})\chi_{2}\chi_{1}(t_{2})\chi_{1}(t)^{-1}\simeq|t_{1 }|^{-3}|t_{2}||t|\]
so if we evaluate at \((t_{1},t_{2},t)=(1,1,x)\) we obtain that
\[\chi_{1}^{-1}(x)\simeq|x|\]
which contradicts genericity (See [14, Page 167] for the enumeration of the 1-parameter subgroups attached to the coroots in the parametrization (17)). Note that we could also have substituted \((t_{1},t_{2},t)=(x,x,x)\) to obtain
\[\chi_{1}(x)\simeq|x|^{-1}.\]
Consider the reflection
\[w_{4}:a(t_{1},t_{2},t)\mapsto a(t_{1}^{-1}t,t_{2},t)\]
then equation (16) becomes
\[\chi_{1}(t_{1})\chi_{2}(t_{2})\nu(t)|t_{1}|^{2}|t_{2}||t|^{-3/2}\simeq\chi_{1} (t_{1}^{-1}t)\chi_{2}(t_{2})\nu(t)|t_{1}|^{2}|t|^{-2}|t_{2}|^{-1}|t|^{3/2}\]
which gives
\[\chi_{1}(t_{1}^{2})\chi_{1}(t)^{-1}\simeq|t_{2}|^{-2}|t|\]
so if we evaluate at \((t_{1},t_{2},t)=(1,1,x)\), this becomes
\[\chi_{1}(x)^{-1}\simeq|x|\]
which contradicts genericity. Note that we could also have substituted \((t_{1},t_{2},t)=(1,x,x)\) to obtain
\[\chi_{1}(x)^{-1}\simeq|x|^{-1}.\]
Now consider the Weyl group element
\[w_{5}:a(t_{1},t_{2},t)\mapsto a(t_{2},t_{1}^{-1}t,t)\]
then equation (16) becomes
\[\chi_{1}(t_{1})\chi_{2}(t_{2})\nu(t)|t_{1}|^{2}|t_{2}||t|^{-3/2}\simeq\chi_{1 }(t_{2})\chi_{2}(t_{1}^{-1}t)\nu(t)|t_{2}|^{-2}|t_{1}||t|^{-1}|t|^{3/2}\]
which simplifies to
\[\chi_{2}\chi_{1}^{-1}(t_{2})\chi_{1}\chi_{2}(t_{1})\chi_{2}(t)^{-1}\simeq|t_{ 1}|^{-1}|t_{2}|^{-3}|t|^{2}\]
so if we evaluate at \((t_{1},t_{2},t)=(x,1,x)\) then this gives
\[\chi_{1}(x)\simeq|x|\]
which contradicts genericity. Note that we could also have substituted \((t_{1},t_{2},t)=(1,x,x)\) to obtain
\[\chi_{1}^{-1}(x)\simeq|x|^{-1}.\]
Now consider the Weyl group element
\[w_{6}:a(t_{1},t_{2},t)\mapsto a(t_{1}^{-1}t,t_{2}^{-1}t,t)\]
then equation (16) becomes
\[\chi_{1}(t_{1})\chi_{2}(t_{2})\nu(t)|t_{1}|^{2}|t_{2}||t|^{-3/2}\simeq\chi_{1 }(t_{1}^{-1}t)\chi_{2}(t_{2}^{-1}t)\nu(t)|t_{1}|^{2}|t|^{-2}|t_{2}||t|^{-1}|t|^{ 3/2}\]
which simplifies to
\[\chi_{1}^{2}(t_{1})\chi_{2}^{2}(t_{2})\chi_{1}\chi_{2}(t)^{-1}\simeq\mathbf{1}\]
so if we evaluate at \((t_{1},t_{2},t)=(1,1,x)\) then this becomes
\[\chi_{1}\chi_{2}(x)\simeq\mathbf{1}\]
which contradicts genericity.
Now finally we consider
\[w_{7}:a(t_{1},t_{2},t)\mapsto a(t_{2}^{-1}t,t_{1}^{-1}t,t)\]
then equation (16) becomes
\[\chi_{1}(t_{1})\chi_{2}(t_{2})\nu(t)|t_{1}|^{2}|t_{2}||t|^{-3/2}\simeq\chi_{1}( t_{2}^{-1}t)\chi_{2}(t_{1}^{-1}t)\nu(t)|t_{2}|^{2}|t|^{-2}|t_{1}||t|^{-1}|t|^{3/2}\]
which simplifies to
\[\chi_{1}\chi_{2}(t_{1})\chi_{1}\chi_{2}(t_{2})\chi_{1}\chi_{2}(t^{-1})\simeq| t_{1}|^{-1}|t_{2}|\]
which, evaluated at \((t_{1},t_{2},t)=(1,1,x)\), simplifies to
\[\chi_{1}\chi_{2}(x)\simeq\mathbf{1}\]
which contradicts genericity.
This concludes our discussion of weak normalized regularity for \(\operatorname{Res}_{L/\mathbb{Q}_{p}}\operatorname{GSp}_{4}\).
We now turn to the case of \(G(\operatorname{Sp}_{4,L})\). As in the proof of the previous lemma, observe that if we let
\[H=\{(g_{i})\in\prod_{L\hookrightarrow\overline{\mathbb{Q}}_{p}}\operatorname{ GSp}_{4}\text{ such that }\nu(g_{i})=\nu(g_{j}),\forall i,j\}, \tag{18}\]
then we have \(H_{L}\simeq G(\operatorname{Sp}_{4,L})_{L}\). Thus, we may reduce to the case of \(H\). Since \(H\subset\prod\operatorname{GSp}_{4}\), we may also use the parametrization in [14, p. 135] to see that the maximal torus \(T^{\prime}\) is given by a parametrization
\[(\mathbb{Q}_{p}^{*})^{2d}\times\mathbb{Q}_{p}^{*}\to T^{\prime}(\mathbb{Q}_{p}),\]
\[((t_{\tau 1},t_{\tau 2})_{\tau:L\hookrightarrow\overline{\mathbb{Q}}_{p}},t) \mapsto\begin{pmatrix}t_{\tau 1}&0&0&0\\ 0&t_{\tau 2}&0&0\\ 0&0&tt_{\tau 2}^{-1}&0\\ 0&0&0&tt_{\tau 1}^{-1}\end{pmatrix}_{\tau:L\hookrightarrow\overline{\mathbb{Q}}_{p}}\]
where we note that the common similitude factor is the last coordinate \(t\).
Since \(\delta_{B}\) is just the restriction of the modulus character for the Borel of \(\prod\operatorname{GSp}_{4}\) from the torus \(\prod T\) to \(T^{\prime}\), we see that the modulus character is
\[\delta_{B}((t_{\tau 1},t_{\tau 2})_{\tau},t)=|t|^{-3d}\prod_{\tau}|t_{\tau 1}|^{ 4}|t_{\tau 2}|^{2}.\]
Since \(W_{G}=\prod_{\tau}W_{\operatorname{GSp}_{4}}\), consider any element \(w=(w_{\tau})\in W_{G}\), where \(w_{\tau}\in W_{\operatorname{GSp}_{4}}\). Observe that the expression obtained from the isomorphism (16) for \(w=(w_{\tau})\) is simply the product of the corresponding isomorphisms for \(\operatorname{GSp}_{4}\), one for each \(w_{\tau}\). Thus, to argue by contradiction, using the notation of the proof above, when \(w_{\tau}=w_{i}\) for \(i=1,\dots,7\) we should substitute for \(t_{\tau 1},t_{\tau 2},t\) the values considered above, subject to the additional constraint that the similitude factor \(t\) must be equal for all \(\tau\).
We thus have two possibilities: either some \(w_{\tau}\) is the Weyl group element \(w_{2}\), i.e. the reflection
\[w_{2}:a(t_{1},t_{2},t)\mapsto a(t_{1},t_{2}^{-1}t,t)\]
or none of the \(w_{\tau}\) is this element.
In the first situation, suppose that for some \(\tau\), \(w_{\tau}\) is the Weyl reflection \(w_{2}\). If we consider equation (16), evaluated on the element \(t_{\tau 1}=x\), \(t_{\tau 2}=x^{2}\), and \(t_{\tau^{\prime}1}=t_{\tau^{\prime}2}=x^{2}\), \(t=x^{4}\) for all \(\tau^{\prime}\neq\tau\), then it simplifies to
\[\mathbf{1}\simeq|x|^{-4}, \tag{19}\]
since one can check that substituting \(t_{1}=t_{2}=x^{2},t=x^{4}\) into the isomorphism (16) for \(\operatorname{GSp}_{4}\) for all the Weyl elements not equal to \(w_{2}\) above simply gives the isomorphism \(\mathbf{1}\simeq\mathbf{1}\) after simplification, and thus does not matter when taking products. This contradicts the banality assumption that \((p^{4}-1,\ell)=1\).
Now, we suppose we are in the second situation, i.e. no \(w_{\tau}=w_{2}\). For any \(w=(w_{\tau})\), let \(J=\{\tau:L\hookrightarrow\overline{\mathbb{Q}}_{p}\mid w_{\tau}\in\{\operatorname{id},w_{3},w_{4},w_{5}\}\}\) and \(J^{\prime}=\{\tau:L\hookrightarrow\overline{\mathbb{Q}}_{p}\mid w_{\tau}\notin\{\operatorname{id},w_{3},w_{4},w_{5}\}\}\). For some choice of \(((t_{\tau 1},t_{\tau 2})_{\tau},t)\), we see that equation (16) becomes
\[\prod_{\tau\in J^{\prime}}\chi_{\tau 1}\chi_{\tau 2}\prod_{\tau\in J}\chi_{\tau 1 }^{-1}\simeq\mathbf{1} \tag{20}\]
or
\[\prod_{\tau\in J^{\prime}}\chi_{\tau 1}\chi_{\tau 2}\prod_{\tau\in J}\chi_{\tau 1 }^{-1}\simeq|\cdot| \tag{21}\]
which contradicts genericity. Indeed, if we choose some \(((t_{\tau 1},t_{\tau 2})_{\tau},t)\) for each \(w_{\tau}\) such that we derive a contradiction in the case of \(\operatorname{GSp}_{4}\) as above, then the right-hand side of the isomorphism (16) simplifies to \(|\cdot|^{n}\) for some \(n\). If \(n\neq 0,1\), then by changing the values of \((t_{\tau 1},t_{\tau 2})\) for some \(\tau\in J\) to the other choice of substitution, the right-hand side evaluates to \(|\cdot|^{n-2}\), and continuing this process as necessary we arrive at either equation (20) or (21).
We now show \(\mu\)-regularity. As in the proof of the previous lemma, for case (1) it suffices to check the claim when \(G=\operatorname{GSp}_{4}\). In this case the Langlands dual group is given by \(\operatorname{GSpin}_{5}\), which is isomorphic to \(\operatorname{GSp}_{4}\). The spin representation
\[\operatorname{spin}:\operatorname{GSpin}_{5}\to\operatorname{GL}_{4}(V_{ \operatorname{spin}})\]
defines a minuscule highest weight representation, which, under the isomorphism \(\operatorname{GSpin}_{5}\simeq\operatorname{GSp}_{4}\), identifies with the defining representation of \(\operatorname{GSp}_{4}\). From here it is easy to see that the differences of the weights are roots of \(\operatorname{GSp}_{4}\) (= coroots of \(\operatorname{GSpin}_{5}\)), for example by using the parametrization of the maximal torus, as in (17), and the description of the roots in this parametrization provided on [10, Page 167]. Therefore, genericity guarantees strong \(\mu\)-regularity for this representation, which implies \(\mu\)-regularity as before. The other fundamental tilting module of \(\operatorname{GSpin}_{5}\) is given by the defining representation \(\operatorname{GSpin}_{5}\to\operatorname{SO}_{5}\to\operatorname{GL}_{5}\), assuming \(\ell\neq 2\) (See [11, Appendix B]). Moreover, this occurs as a \(5\)-dimensional summand of \(V_{\operatorname{spin}}\otimes V_{\operatorname{spin}}\), and it follows by [11, Proposition 10.12] that \(\mu\)-regularity holds for this representation as well. Therefore, since we know \(\mu\)-regularity for the fundamental coweights, we are now done by [11, Corollary 10.13] as before.
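For instance, in the parametrization (17), the weights of the defining representation of \(\operatorname{GSp}_{4}\) are the four diagonal entries

\[t_{1},\quad t_{2},\quad t/t_{2},\quad t/t_{1},\]

and their pairwise ratios are, up to inversion, exactly \(t_{1}/t_{2}\), \(t_{1}t_{2}/t\), \(t_{1}^{2}/t\), and \(t_{2}^{2}/t\), i.e. the roots of \(\operatorname{GSp}_{4}\).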
Now, for case (2), note that, as in the previous lemma, all cocharacters of \(H\) are of the form \(\mu=\prod_{\tau}\mu_{\tau}\) for some cocharacters \(\mu_{\tau}\) of \(\operatorname{GSp}_{4}\). The argument given above for \(\operatorname{GSp}_{4}\) shows that if we let \(\mu^{\prime}\) be one of the fundamental coweights of \(\operatorname{GSp}_{4}\), then if we take \(\mu=\prod\mu^{\prime}\), (i.e. \(\mu_{\tau}=\mu^{\prime}\) for all \(\tau\)) then we are \(\mu\)-regular for such \(\mu\). Applying [11, Corollary 10.13] again shows that if \(\mu=\prod_{\tau}\mu^{\prime}\), where \(\mu^{\prime}\) is a dominant cocharacter of \(\operatorname{GSp}_{4}\), then \(\phi_{T}\) will be \(\mu\)-regular for such \(\mu\). Note that, as in the case of \(G(\operatorname{SL}_{n})\), every dominant minuscule cocharacter of \(H\) is of this form.
Now let us package our final result in a convenient form. We first consider the following table, summarizing the groups and primes to which our results apply. We have left an entry blank if no constraint is imposed, and we have only listed the groups that appear as local constituents of global groups that admit a Shimura datum and for which \(G\) is unramified.
\[\begin{array}{|c|c|c|c|}
\hline G&\text{Constraint on }G&\ell&p\\
\hline\operatorname{Res}_{L/\mathbb{Q}_{p}}(\operatorname{GL}_{n})&L/\mathbb{Q}_{p}\text{ unramified}&(\ell,[L:\mathbb{Q}_{p}])=1&\\
\hline\operatorname{Res}_{L/\mathbb{Q}_{p}}(\operatorname{GSp}_{4})&L=\mathbb{Q}_{p}&(\ell,2(p^{4}-1))=1&\\
&L/\mathbb{Q}_{p}\text{ unramified}&(\ell,2[L:\mathbb{Q}_{p}](p^{4[L:\mathbb{Q}_{p}]}-1))=1&p\neq 2\\
\hline\operatorname{Res}_{L/\mathbb{Q}_{p}}(\operatorname{GU}_{2})&L/\mathbb{Q}_{p}\text{ unramified}&(\ell,[L:\mathbb{Q}_{p}])=1&\\
\hline\operatorname{U}_{n}(L/\mathbb{Q}_{p})&n\text{ odd, }L\text{ unramified}&\ell\neq 2&\\
\hline\operatorname{GU}_{n}(L/\mathbb{Q}_{p})&n\text{ odd, }L\text{ unramified}&\ell\neq 2&\\
\hline G(\operatorname{SL}_{2,L})&L/\mathbb{Q}_{p}\text{ unramified}&(\ell,[L:\mathbb{Q}_{p}])=1&\\
\hline G(\operatorname{Sp}_{4,L})&L/\mathbb{Q}_{p}\text{ unramified, }L\neq\mathbb{Q}_{p}&(\ell,2[L:\mathbb{Q}_{p}](p^{4[L:\mathbb{Q}_{p}]}-1))=1&p\neq 2\\
\hline\end{array}\tag{22}\]
We now apply Theorem 4.20.
**Corollary 4.24**.: _Assume \(G\) is a product of the groups appearing in Table (22) with \(p\) and \(\ell\) satisfying the corresponding conditions. Then, we have that the natural map_
\[j_{1}^{*}T_{\mu}(-):\operatorname{D}^{\operatorname{ULA}}(\operatorname{Bun}_{ G},\overline{\mathbb{F}}_{\ell})_{\phi_{\mathfrak{m}}}\to\operatorname{D}^{ \operatorname{adm}}(G(\mathbb{Q}_{p}),\overline{\mathbb{F}}_{\ell})_{\phi_{ \mathfrak{m}}}\]
_is exact with respect to the perverse \(t\)-structure on the source and the natural \(t\)-structure (= perverse \(t\)-structure) on the target for all minuscule \(\mu\)._
Proof.: First note that, using the decomposition \(\operatorname{Bun}_{G_{1}\times G_{2}}\simeq\operatorname{Bun}_{G_{1}}\times\operatorname{Bun}_{G_{2}}\), we can assume that \(G\) is isomorphic to one of the groups appearing in Table (22).
Observe that all the groups in Table (22) satisfy Assumption 4.4, where the first five rows follow from Theorem 4.10, and the last two from Proposition 4.8. We now apply Theorem 4.20. To do this, we also need to check that if \(\phi_{T}\) is a generic toral parameter then it is also weakly normalized regular, and \(\mu\)-regular for all minuscule \(\mu\). This follows from Lemma 4.22 and Lemma 4.23.
Lastly, we need to check that the representations \(\rho_{b,w}:=i_{B_{b}}^{J_{b}}(\chi^{w})\otimes\delta_{P_{b}}^{-1/2}\) are semi-simple for all \(b\in B(G)_{\mathrm{un}}\) and \(w\in W_{b}\). We claim that they are in fact irreducible. Recall that \(J_{b}\simeq M_{b}\subset G\), where \(M_{b}\) is a Levi of \(G\). Moreover, we note that any such Levi \(M_{b}\) is a product of groups also appearing in (22). Therefore, the desired irreducibility follows from the \(\mu\)-regularity, weak normalized regularity, and regularity of \(\phi_{T}\) combined with Lemma 4.18. Note that in the case of \(G(\operatorname{SL}_{2,L})\) and \(G(\operatorname{Sp}_{4,L})\), we can always find a cocharacter \(\mu\) of the form \(\prod_{\tau}\mu^{\prime}\) which is not fixed by the Weyl group, since we can simply take any cocharacter \(\mu^{\prime}\) of \(\operatorname{GL}_{2}\) (resp. \(\operatorname{GSp}_{4}\)) which is not fixed by the Weyl group, and let \(\mu\) be the product of these \(\mu^{\prime}\).
We also have the following.
**Corollary 4.25**.: _Assume \(G\) is a product of the groups appearing in Table (22) with \(p\) and \(\ell\) satisfying the corresponding conditions. Then, for \(\mathfrak{m}\subset H^{\operatorname{hs}}_{K_{p}}\) a generic maximal ideal, we have that_
\[\operatorname{D}^{\operatorname{ULA}}(\operatorname{Bun}_{G},\overline{ \mathbb{F}}_{\ell})_{\phi_{\mathfrak{m}}}\simeq\bigoplus_{b\in B(G)_{ \mathrm{un}}}\operatorname{D}^{\operatorname{adm}}(J_{b}(\mathbb{Q}_{p}), \overline{\mathbb{F}}_{\ell})_{\phi_{\mathfrak{m}}}.\]
_Moreover, the \(!\) and \(*\) pushforwards agree for any \(A\in\operatorname{D}^{\operatorname{adm}}(J_{b}(\mathbb{Q}_{p}),\overline{ \mathbb{F}}_{\ell})_{\phi_{\mathfrak{m}}}\simeq\operatorname{D}^{\operatorname {ULA}}(\operatorname{Bun}_{G}^{b},\overline{\mathbb{F}}_{\ell})_{\phi_{ \mathfrak{m}}}\)._
Proof.: This follows from Proposition 4.15, where the semisimplicity of the \(\rho_{b,w}\) follows as in the proof of the previous corollary.
## 5. The Proof of Theorems 1.15 and 1.17
### Proof of Theorems 1.15 and 1.17
Throughout this section, we assume \((\mathbf{G},X)\) is a PEL datum of type \(A\) or \(C\) such that \(\mathbf{G}_{\mathbb{Q}_{p}}\) is a product of simple groups as in Table (1) with \(p\) and \(\ell\) satisfying the corresponding conditions, and that Assumption 1.11 holds.
Proof.: (Theorem 1.15) By Corollary 3.17, the complex \(R\Gamma_{c}(\mathcal{S}(\mathbf{G},X)_{K^{p},C},\overline{\mathbb{F}}_{\ell})\) has a \(G(\mathbb{Q}_{p})\times W_{E}\)-equivariant filtration with graded pieces isomorphic to \(j_{1}^{*}T_{\mu}j_{b!}(V_{b})[-d](-\frac{d}{2})\). The cohomology of the Igusa varieties \(V_{b}\) and of the global Shimura variety is admissible [23, Proposition 8.21], so we can apply the results of the previous section to them. We let \(\mathfrak{m}\) be a generic maximal ideal of the spherical Hecke algebra, and consider the localization
\[(j_{1}^{*}T_{\mu}j_{b!}(V_{b}))_{\phi_{\mathfrak{m}}}[-d](-\frac{d}{2}).\]
This defines a filtration on \(R\Gamma_{c}(\mathcal{S}(\mathbf{G},X)_{K^{p},C},\overline{\mathbb{F}}_{\ell})_{\phi_{\mathfrak{m}}}\), which comes from applying \((-)_{\phi_{\mathfrak{m}}}\) to \(R\Gamma([\mathcal{F}\ell_{G,\mu^{-1}}/\underline{G(\mathbb{Q}_{p})}],i_{b!}i_{b}^{*}(R\pi_{\mathrm{HT}!}(\overline{\mathbb{F}}_{\ell})))\) viewed as a \(G(\mathbb{Q}_{p})\)-representation. Using Corollary 4.25, we see that these graded pieces are also isomorphic to
\[(j_{1}^{*}T_{\mu}j_{b*}(V_{b}))_{\phi_{\mathfrak{m}}}[-d](-\frac{d}{2})\]
via the natural transformation \(j_{b!}\to j_{b*}\), and are trivial for \(b\notin B(G,\mu)_{\mathrm{un}}\). However, using Lemma 3.14 together with Remark 3.15, this implies that the natural map
\[R\Gamma([\mathcal{F}\ell_{G,\mu^{-1}}/\underline{G(\mathbb{Q}_{p})}],i_{b!}i _{b}^{*}(R\pi_{\mathrm{HT}!}(\overline{\mathbb{F}}_{\ell})))_{\phi_{\mathfrak{ m}}}\to R\Gamma([\mathcal{F}\ell_{G,\mu^{-1}}/\underline{G(\mathbb{Q}_{p})}],i_{b*} i_{b}^{*}(R\pi_{\mathrm{HT}!}(\overline{\mathbb{F}}_{\ell})))_{\phi_{ \mathfrak{m}}}\]
is an isomorphism. Therefore, we see that the edge maps in the excision spectral sequence actually degenerate after applying \((-)_{\phi_{\mathfrak{m}}}\), giving us a direct sum decomposition
\[R\Gamma_{c}(\mathcal{S}(\mathbf{G},X)_{K^{p},C},\overline{\mathbb{F}}_{\ell})_{\phi_{\mathfrak{m}}}\simeq\bigoplus_{b\in B(G,\mu)_{\mathrm{un}}}(j_{1}^{*}T_{\mu}j_{b!}(V_{b}))_{\phi_{\mathfrak{m}}}[-d](-\frac{d}{2}).\]
By applying \(R\Gamma(K_{p}^{\mathrm{hs}},-)\) and invoking Lemma 4.2 (3), we obtain that
\[R\Gamma(K_{p}^{\mathrm{hs}},R\Gamma_{c}(\mathcal{S}(\mathbf{G},X)_{K^{p}}, \overline{\mathbb{F}}_{\ell})_{\phi_{\mathfrak{m}}})\simeq R\Gamma_{c}( \mathcal{S}(\mathbf{G},X)_{K^{p}K_{p}^{\mathrm{hs}}},\overline{\mathbb{F}}_{ \ell})_{\mathfrak{m}}\]
has a filtration with graded pieces isomorphic to
\[R\Gamma(K_{p}^{\mathrm{hs}},j_{1}^{*}T_{\mu}j_{b!}(V_{b}))_{\mathfrak{m}}[-d](-\frac{d}{2}).\]
Just as in the proof of Corollary 3.17, we can rewrite this as
\[(R\Gamma_{c}(\mathrm{Sht}(G,b,\mu)_{\infty,C}/\underline{K_{p}^{\mathrm{hs}}},\overline{\mathbb{F}}_{\ell}(d_{b}))_{\mathfrak{m}}\otimes_{\mathcal{H}(J_{b })}^{\mathbb{L}}V_{b})[2d_{b}],\]
as desired.
Proof.: (Theorem 1.17) We recall, by Proposition 3.7, that \(V_{b}\) is a complex of smooth \(J_{b}(\mathbb{Q}_{p})\)-representations concentrated in degrees \(\leq d_{b}\). It follows that \(\bigoplus_{b\in B(G,\mu)}j_{b!}(V_{b})_{\phi_{\mathfrak{m}}}\) lies in \({}^{\mathfrak{p}}{\rm D}^{\leq 0,\mathrm{ULA}}(\mathrm{Bun}_{G},\overline{\mathbb{F}}_{\ell})_{\phi_{\mathfrak{m}}}\), using Proposition A.5. Corollary 4.24 implies that
\[j_{1}^{*}T_{\mu}j_{b!}(V_{b})[-d]_{\phi_{\mathfrak{m}}}\in{\rm D}^{\leq d}(G(\mathbb{Q}_{p}),\overline{\mathbb{F}}_{\ell})_{\phi_{\mathfrak{m}}}\]
after forgetting the Weil group action. Therefore, we conclude that
\[R\Gamma_{c}(\mathcal{S}(\mathbf{G},X)_{K^{p},C},\overline{\mathbb{F}}_{\ell})_ {\phi_{\mathfrak{m}}}\]
is concentrated in degrees \(0\leq i\leq d\). By applying Poincaré duality at finite level and Corollary A.7, this allows us to conclude that the non-compactly supported cohomology
\[R\Gamma(\mathcal{S}(\mathbf{G},X)_{K^{p},C},\overline{\mathbb{F}}_{\ell})_{\phi_ {\mathfrak{m}}^{\vee}}\]
localized at \(\phi_{\mathfrak{m}}^{\vee}\) is concentrated in degrees \(d\leq i\leq 2d\), where we define this to be the colimit over the non-compactly supported cohomology at finite levels (cf. Remark 3.1). Moreover, we note that genericity is preserved under the Chevalley involution, since it just exchanges the roles of positive and negative roots. It therefore follows that
\[R\Gamma(K_{p}^{\mathrm{hs}},R\Gamma(\mathcal{S}(\mathbf{G},X)_{K^{p},C}, \overline{\mathbb{F}}_{\ell})_{\phi_{\mathfrak{m}}^{\vee}})\]
is also concentrated in degrees \(\geq d\), but this is isomorphic to
\[R\Gamma(\mathcal{S}(\mathbf{G},X)_{K^{p}K^{\mathrm{hs}}_{p},C},\overline{\mathbb{F}}_{\ell})_{\mathfrak{m}^{\vee}}\]
by Lemma 4.2 (3). This establishes Theorem 1.17, by applying Poincaré duality to the Shimura variety at finite level again.
### Proof of Corollary 1.19
We would like to obtain the main theorem for Shimura varieties of non-PEL type, especially Hilbert-Siegel modular varieties (attached to \(\mathrm{Res}_{F/\mathbb{Q}}\mathrm{GL}_{2}\) or \(\mathrm{Res}_{F/\mathbb{Q}}\mathrm{GSp}_{4}\)). We will show this in a more general setup, as follows.
Let \((\mathbf{G},X)\), \((\mathbf{G}_{2},X_{2})\) be a pair of abelian type Shimura data such that \(\mathbf{G},\mathbf{G}_{2}\) are centrally isogenous, and we have an isomorphism of derived subgroups
\[\mathbf{G}^{\mathrm{der}}\xrightarrow{\sim}\mathbf{G}_{2}^{\mathrm{der}},\]
as well as of adjoint quotients. Consider the associated Shimura varieties \(\mathrm{Sh}(\mathbf{G},X)_{K}\) and \(\mathrm{Sh}(\mathbf{G}_{2},X_{2})_{K_{2}}\), where we choose the levels \(K,K_{2}\) such that, at \(p\), we have an equality \(K_{p}\cap G^{\mathrm{der}}(\mathbb{Q}_{p})=K_{2,p}\cap G_{2}^{\mathrm{der}}(\mathbb{Q}_{p})\).
We now assume \(K_{p}\) and \(K_{2,p}\) are both hyperspecial. Observe that this implies that \(K_{p}^{\prime}=K_{p}\cap G^{\mathrm{der}}(\mathbb{Q}_{p})\) is also hyperspecial. By the Satake isomorphism, we have an isomorphism of \(\overline{\mathbb{F}}_{\ell}\)-algebras
\[H_{K_{p}}\simeq\overline{\mathbb{F}}_{\ell}[X_{*}(T)]^{W_{G}},\]
and, since \(G,G^{\mathrm{der}}\) have isomorphic adjoint groups, the inclusion of cocharacters \(X_{*}(T^{\prime})\subset X_{*}(T)\) induces an inclusion of Hecke algebras \(H_{K_{p}^{\prime}}^{\prime}\subset H_{K_{p}}\), where \(T^{\prime}\) denotes the torus \(T\cap G^{\mathrm{der}}\), and \(H_{K_{p}^{\prime}}^{\prime}\) denotes the spherical Hecke algebra for \(G^{\mathrm{der}}\). Moreover, given a maximal ideal \(\mathfrak{m}\subset H_{K_{p}}\), then \(\mathfrak{m}^{\prime}=\mathfrak{m}\cap H_{K_{p}^{\prime}}^{\prime}\) is a maximal ideal of \(H_{K_{p}^{\prime}}^{\prime}\).
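For instance (a sketch with \(G=\mathrm{GL}_{2}\) and \(G^{\mathrm{der}}=\mathrm{SL}_{2}\)): writing the group algebra of \(X_{*}(T)\simeq\mathbb{Z}^{2}\) multiplicatively in variables \(u,v\), and that of \(X_{*}(T^{\prime})\simeq\mathbb{Z}\) in the variable \(w=uv^{-1}\), one has

\[H_{K_{p}}\simeq\overline{\mathbb{F}}_{\ell}[u^{\pm 1},v^{\pm 1}]^{S_{2}}=\overline{\mathbb{F}}_{\ell}[u+v,(uv)^{\pm 1}],\qquad H^{\prime}_{K^{\prime}_{p}}\simeq\overline{\mathbb{F}}_{\ell}[w+w^{-1}],\]

and the inclusion \(H^{\prime}_{K^{\prime}_{p}}\subset H_{K_{p}}\) sends \(w+w^{-1}\) to \((u+v)^{2}(uv)^{-1}-2\).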
Fix a connected component \(X^{+}\subset X\). This also fixes an \(X_{2}^{+}\subset X_{2}\), and an isomorphism \(X^{+}\simeq X_{2}^{+}\), since \(\mathbf{G},\mathbf{G}_{2}\) have isomorphic adjoint quotients. For any compact open subgroup \(K\subset\mathbf{G}(\mathbb{A}_{f})\), we let \(\mathrm{Sh}^{+}(\mathbf{G},X)_{K}\) be the geometrically connected component which is the image of \(X^{+}\times 1\). Moreover, we will let
\[\mathrm{Sh}^{+}(\mathbf{G},X)_{K_{p}}=\varprojlim_{K^{p}}\mathrm{Sh}^{+}(\mathbf{G},X)_{K_{p}K^{p}}.\]
Note that since all the transition morphisms are finite etale and hence affine, \(\mathrm{Sh}^{+}(\mathbf{G},X)_{K_{p}}\) is also a qcqs scheme by [16, Lemma 01YX].
Since \(\mathbf{G},\mathbf{G}_{2}\) have isomorphic derived subgroups, this implies that we have an isomorphism of geometric connected components
\[\mathrm{Sh}^{+}(\mathbf{G},X)_{K_{p}}\simeq\mathrm{Sh}^{+}(\mathbf{G}_{2},X_{2})_{K_{2,p}}.\]
Moreover, the action of \(H_{K_{p}^{\prime}}^{\prime}\) on \(\mathrm{Sh}(\mathbf{G},X)_{K_{p}}\) preserves the geometric connected component, since \(\mathrm{Sh}^{+}(\mathbf{G},X)_{K_{p}}\) is simply the connected Shimura variety associated to \((\mathbf{G}^{\mathrm{der}},X^{+})\). Indeed, the set of \(\mathbb{C}\)-points of \(\mathrm{Sh}^{+}(\mathbf{G},X)_{K_{p}}\) is given by
\[\mathbf{G}^{\mathrm{der}}(\mathbb{Q})_{+}^{(p),\mathrm{cl}}\backslash X^{+} \times(\mathbf{G}^{\mathrm{der}}(\mathbb{A}_{f}^{p})),\]
where \(\mathbf{G}^{\mathrm{der}}(\mathbb{Q})_{+}^{(p),\mathrm{cl}}\) denotes the closure in \(\mathbf{G}^{\mathrm{der}}(\mathbb{A}_{f}^{p})\) of \(\mathbf{G}^{\mathrm{der}}(\mathbb{R})_{+}\cap\mathbf{G}^{\mathrm{der}}( \mathbb{Q})\cap K_{p}\), and \(\mathbf{G}^{\mathrm{der}}(\mathbb{R})_{+}\) denotes the preimage of the neutral connected component \(\mathbf{G}^{\mathrm{ad}}(\mathbb{R})^{+}\) in \(\mathbf{G}^{\mathrm{der}}(\mathbb{R})\) under the quotient map.
Following the description of the connected components of Shimura varieties from [13, §2], and using the notation of [16, §3.3], we see that there exist groups \(\mathscr{A}(\mathbf{G})^{\circ},\mathscr{A}(\mathbf{G}_{2}),\mathscr{A}(\mathbf{G})\) such that we have \(\mathbf{G}(\mathbb{A}_{f})\)-equivariant (resp. \(\mathbf{G}_{2}(\mathbb{A}_{f})\)-equivariant) isomorphisms
\[\mathrm{Sh}(\mathbf{G},X)_{K_{p}}\simeq[\mathscr{A}(\mathbf{G})\times\mathrm{ Sh}^{+}(\mathbf{G},X)_{K_{p}}/\mathscr{A}(\mathbf{G})^{\circ}]\]
and
\[\operatorname{Sh}(\mathbf{G}_{2},X_{2})_{K_{2,p}}\simeq[\mathscr{A}(\mathbf{G}_{2} )\times\operatorname{Sh}^{+}(\mathbf{G},X)_{K_{p}}/\mathscr{A}(\mathbf{G})^{ \circ}].\]
Observe that, since \(\mathscr{A}(\mathbf{G})^{\circ}=\mathscr{A}(\mathbf{G}^{\mathrm{der}})^{\circ}\) is a subgroup of \(\mathscr{A}(\mathbf{G}_{2})\), the Shimura variety \(\operatorname{Sh}(\mathbf{G}_{2},X_{2})_{K_{2,p}}\) is simply an (infinite) union of copies of \(\operatorname{Sh}^{+}(\mathbf{G},X)_{K_{p}}\). Moreover, we see that the action of \(H^{\prime}_{K^{\prime}_{p}}\) on the right-hand side of the above isomorphism is given by the action on \(\operatorname{Sh}^{+}(\mathbf{G},X)_{K_{p}}\). In particular, we observe that
\[H^{i}(\operatorname{Sh}^{+}(\mathbf{G},X)_{K_{p}},\overline{\mathbb{F}}_{ \ell})_{\mathfrak{m}^{\prime}}\]
vanishes if and only if
\[H^{i}(\operatorname{Sh}(\mathbf{G}_{2},X_{2})_{K_{2,p}},\overline{\mathbb{F}}_{\ell})_{\mathfrak{m}^{\prime}}\]
does as well. We thus have the following proposition.
**Proposition 5.1**.: _Suppose that \((\mathbf{G},X)\) is of PEL type \(A\) or \(C\) satisfying the conditions in Theorem 1.17. Then, Conjecture 1.2 also holds for \((\mathbf{G}_{2},X_{2})\)._
Proof.: Since Conjecture 1.2 is true for \((\mathbf{G},X)\), we will first show that a maximal ideal \(\mathfrak{m}\) of \(H_{K_{p}}\) is generic if and only if the maximal ideal \(\mathfrak{m}^{\prime}\) of \(H^{\prime}_{K^{\prime}_{p}}\) is generic. To see this, we will reformulate this in terms of \(L\)-parameters. This is equivalent to showing that an \(L\)-parameter
\[\phi:W_{\mathbb{Q}_{p}}\to{}^{L}T(\overline{\mathbb{F}}_{\ell})\]
is generic if and only if the composition \(\phi^{\prime}\) of \(\phi\) with the map \(g:{}^{L}T(\overline{\mathbb{F}}_{\ell})\to{}^{L}T^{\prime}(\overline{\mathbb{F}}_{\ell})\) induced by the inclusion of tori \(T^{\prime}\hookrightarrow T\) is generic. (Here, \(T^{\prime}=G^{\mathrm{der}}\cap T\).) This follows from the observation that any coroot \(\alpha\) factors through \(G^{\mathrm{der}}\), and hence the composition \(\alpha\circ\phi\) is equal to \(\alpha\circ\phi^{\prime}\).
Now, we consider the limit
\[\operatorname{Sh}(\mathbf{G},X):=\varprojlim_{K^{p}}\operatorname{Sh}(\mathbf{G},X)_{K^{p}}.\]
Note that since all schemes appearing in the limit are qcqs, by [10, Theorem 09YQ] we have an isomorphism of cohomology groups
\[H^{i}(\operatorname{Sh}(\mathbf{G},X),\overline{\mathbb{F}}_{\ell})\simeq\varinjlim_{K^{p}}H^{i}(\operatorname{Sh}(\mathbf{G},X)_{K^{p}},\overline{\mathbb{F}}_{\ell}).\]
We now have the following lemma.
**Lemma 5.2**.: _Let \(G^{\prime}\to G\) be a map inducing an isomorphism on adjoint groups, with \(g:{}^{L}G\to{}^{L}G^{\prime}\) the induced map on dual groups. For \(\phi:W_{\mathbb{Q}_{p}}\to{}^{L}G(\overline{\mathbb{F}}_{\ell})\) an \(L\)-parameter and \(A\) an admissible complex of \(G(\mathbb{Q}_{p})\)-modules, there is a natural isomorphism of \(G^{\prime}(\mathbb{Q}_{p})\)-modules_
\[(A|_{G^{\prime}(\mathbb{Q}_{p})})_{\phi^{\prime}}\simeq\bigoplus_{\begin{subarray} {c}\phi\\ \phi^{\prime}=g\circ\phi\end{subarray}}A_{\phi}|_{G^{\prime}(\mathbb{Q}_{p})},\]
_with notation as in Corollary 4.3._
Proof.: By applying Corollary 4.3, we obtain a decomposition
\[A\simeq\bigoplus_{\phi}A_{\phi}\]
of \(A\) as a \(G(\mathbb{Q}_{p})\)-module. We restrict to \(G^{\prime}(\mathbb{Q}_{p})\) and apply the localization map \((-)_{\phi^{\prime}}\). This gives an isomorphism
\[(A|_{G^{\prime}(\mathbb{Q}_{p})})_{\phi^{\prime}}\simeq\bigoplus_{\phi}(A_{ \phi}|_{G^{\prime}(\mathbb{Q}_{p})})_{\phi^{\prime}},\]
where we have used that localization commutes with direct sums, since it is a left adjoint by definition. Now, using the compatibility of the Fargues-Scholze correspondence with central isogenies [12, Theorem IX.6.1], either \(\phi^{\prime}=g\circ\phi\), in which case \(A_{\phi}|_{G^{\prime}(\mathbb{Q}_{p})}\in\mathrm{D}(G^{\prime}(\mathbb{Q}_{p}),\overline{\mathbb{F}}_{\ell})_{\phi^{\prime}}\) and, by the idempotence of the localization map, \((A_{\phi}|_{G^{\prime}(\mathbb{Q}_{p})})_{\phi^{\prime}}=A_{\phi}|_{G^{\prime}(\mathbb{Q}_{p})}\); or \(\phi^{\prime}\neq g\circ\phi\), in which case \((A_{\phi}|_{G^{\prime}(\mathbb{Q}_{p})})_{\phi^{\prime}}=0\). The claim follows.
By the previous lemma applied to \(G^{\mathrm{der}}=G^{\prime}\subset G\), we have a natural decomposition
\[H^{i}(\mathrm{Sh}(\mathbf{G},X)_{K^{p}},\overline{\mathbb{F}}_{\ell})_{\phi^{ \prime}}\simeq\bigoplus_{\begin{subarray}{c}\phi\\ \phi^{\prime}=g\circ\phi\end{subarray}}H^{i}(\mathrm{Sh}(\mathbf{G},X)_{K^{p} },\overline{\mathbb{F}}_{\ell})_{\phi}.\]
Taking the colimit over \(K^{p}\), we obtain
\[H^{i}(\mathrm{Sh}(\mathbf{G},X),\overline{\mathbb{F}}_{\ell})_{\phi^{\prime}} \simeq\bigoplus_{\begin{subarray}{c}\phi\\ \phi^{\prime}=g\circ\phi\end{subarray}}H^{i}(\mathrm{Sh}(\mathbf{G},X), \overline{\mathbb{F}}_{\ell})_{\phi}.\]
Hence, we see that \(H^{i}(\mathrm{Sh}(\mathbf{G},X),\overline{\mathbb{F}}_{\ell})_{\phi^{\prime}}\) vanishes for \(i<d\), since all the \(\phi\) appearing on the right-hand side are generic, and hence we can apply Theorem 1.17 and take colimits to see that all the direct summands vanish. Applying \(R\Gamma(K_{p},-)\) and Lemma 4.2 (3), we see that
\[H^{i}(\mathrm{Sh}(\mathbf{G},X)_{K_{p}},\overline{\mathbb{F}}_{\ell})_{ \mathfrak{m}^{\prime}}\]
vanishes for \(i<d\). Thus, we see that \(H^{i}(\mathrm{Sh}^{+}(\mathbf{G},X)_{K_{p}},\overline{\mathbb{F}}_{\ell})_{ \mathfrak{m}^{\prime}}\) vanishes for \(i<d\), and therefore the same is true for \(H^{i}(\mathrm{Sh}(\mathbf{G}_{2},X_{2})_{K_{2,p}},\overline{\mathbb{F}}_{\ell })_{\mathfrak{m}^{\prime}}\) from the discussion above. By the Hochschild-Serre spectral sequence, we see that for all sufficiently small \(K_{2}^{p}\), \(H^{i}_{c}(\mathrm{Sh}(\mathbf{G}_{2},X_{2})_{K_{2,p}K_{2}^{p}},\overline{ \mathbb{F}}_{\ell})_{\mathfrak{m}^{\prime}}\) vanishes for \(i<d\).
Now, consider a generic maximal ideal \(\mathfrak{m}_{2}\) for the spherical Hecke algebra \(H_{K_{2,p}}\) of \(G_{2}\). This corresponds to a generic maximal ideal \(\mathfrak{m}^{\prime}\) of \(H^{\prime}_{K_{p}^{\prime}}\). It remains to observe that for any finitely-generated \(H_{K_{2,p}}\)-module \(A\), if the localization \(A_{\mathfrak{m}^{\prime}}=0\), then there is some element \(r\in H^{\prime}_{K_{p}^{\prime}}\backslash\mathfrak{m}^{\prime}\) such that \(rA=0\), as sketched below. Granting this, we must have \(A_{\mathfrak{m}_{2}}=0\) as well, since \(H^{\prime}_{K_{p}^{\prime}}\backslash\mathfrak{m}^{\prime}\subset H_{K_{2,p}}\backslash\mathfrak{m}_{2}\).
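Indeed (a sketch of this standard fact): choosing generators \(a_{1},\ldots,a_{k}\) of \(A\) over \(H_{K_{2,p}}\), the vanishing \(A_{\mathfrak{m}^{\prime}}=0\) produces for each \(j\) some \(r_{j}\in H^{\prime}_{K_{p}^{\prime}}\smallsetminus\mathfrak{m}^{\prime}\) with \(r_{j}a_{j}=0\), and then

\[r:=\prod_{j=1}^{k}r_{j}\in H^{\prime}_{K_{p}^{\prime}}\smallsetminus\mathfrak{m}^{\prime}\quad\text{satisfies}\quad rA=0,\]

using that the Hecke algebras in question are commutative and that \(\mathfrak{m}^{\prime}\) is prime.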
As a corollary, we can strengthen results of Caraiani-Tamiozzo [15, Theorem B], who showed torsion vanishing for Hilbert modular varieties under the additional assumption that \(p\) is split in the totally real field \(F\) (though we also note that they showed torsion vanishing under a hypothesis on \(\mathfrak{m}\) which is weaker than the genericity considered here; see Remark 1.7).
**Corollary 5.3**.: _Conjecture 1.2 is true for Hilbert-Siegel Shimura varieties (attached to \(\mathrm{Res}_{F/\mathbb{Q}}\mathrm{GL}_{2}\), \(\mathrm{Res}_{F/\mathbb{Q}}\mathrm{GSp}_{4}\)) and quaternionic Shimura varieties._
Proof.: Observe that for Hilbert-Siegel Shimura varieties (attached to \(\mathrm{Res}_{F/\mathbb{Q}}\mathrm{GL}_{2}\), \(\mathrm{Res}_{F/\mathbb{Q}}\mathrm{GSp}_{4}\)), there is a cover by a PEL-type Shimura variety with local group \(G\) of the form \(G(\mathrm{SL}_{2})\) and \(G(\mathrm{Sp}_{4})\) respectively. For the case of quaternionic Shimura varieties, we can relate their geometric connected components to unitary PEL-type Shimura varieties with local group \(\mathrm{GU}_{2}\), as described in [16, Corollary 3.11]. Therefore, the result follows from Theorem 1.17.
## 6. Conjectures and Concluding Remarks
### Relationship to Xiao-Zhu
Assume that the basic element \(b\in B(G,\mu)\) is unramified, i.e. lies in \(B(G,\mu)_{\mathrm{un}}\) (See [16, Remark 4.2.11] for a classification). Let us look at the middle degree cohomology \(H^{d}(R\Gamma_{c}(\mathcal{S}(\mathbf{G},X)_{K^{p},C},\overline{\mathbb{F}}_{ \ell})_{\phi_{\mathfrak{m}}})\). By Theorem 1.15, it has a summand isomorphic to
\[H^{d}(R\Gamma_{c}(G,b,\mu)\otimes_{\mathcal{H}(J_{b})}^{\mathbb{L}}R\Gamma_{c- \partial}(\mathrm{Ig}^{b},\overline{\mathbb{F}}_{\ell})).\]
To describe this, let \(\mathbf{G}^{\prime}\) be the unique \(\mathbb{Q}\)-inner form of \(\mathbf{G}\) such that \(\mathbf{G}(\mathbb{A}^{p\infty})\simeq\mathbf{G}^{\prime}(\mathbb{A}^{p\infty})\), \(\mathbf{G}^{\prime}(\mathbb{R})\) is compact modulo center, and \(\mathbf{G}^{\prime}_{\mathbb{Q}_{p}}\simeq J_{b}\) (See [11, Proposition 3.1] for the existence).
We write \(C(K^{p}\backslash\mathbf{G}^{\prime}(\mathbb{A}_{f})/\mathbf{G}^{\prime}(\mathbb{Q}),\overline{\mathbb{F}}_{\ell})\) for the set of all continuous functions on the profinite set \(K^{p}\backslash\mathbf{G}^{\prime}(\mathbb{A}_{f})/\mathbf{G}^{\prime}(\mathbb{Q})\). It is easy to show that one has an isomorphism
\[C(K^{p}\backslash\mathbf{G}^{\prime}(\mathbb{A}_{f})/\mathbf{G}^{\prime}( \mathbb{Q}),\overline{\mathbb{F}}_{\ell})\simeq R\Gamma_{c-\partial}(\mathrm{ Ig}^{b},\overline{\mathbb{F}}_{\ell})\]
for example by combining [10, Theorem 3.4] and Corollary 3.6. We let \(V_{\mu}\in\mathrm{Rep}_{\overline{\mathbb{F}}_{\ell}}(\hat{G})\) be the usual highest weight module of highest weight \(\mu\), which in particular agrees with the highest weight tilting module, since \(\mu\) is minuscule. We let \(b_{T}\) denote the unique (since \(b\) is basic) reduction of \(b\in B(G)\) to \(B(T)\), and regard it as an element in \(B(T)\simeq\mathbb{X}^{*}(\hat{T}^{\Gamma})\) in what follows. It should be the case that, under possible additional constraints on \(\mathfrak{m}\) depending on \(\mu\) (See for example [10, Conjecture 1.25] and [11, Definition 1.4.2]), we have an isomorphism
\[C(K^{p}\backslash\mathbf{G}^{\prime}(\mathbb{A}_{f})/\mathbf{G}^{\prime}( \mathbb{Q}),\overline{\mathbb{F}}_{\ell})\otimes^{\mathbb{L}}R\Gamma_{c}( \mathrm{Sht}(G,b,\mu)_{\infty,C}/K^{\mathrm{hs}}_{p},\overline{\mathbb{F}}_{ \ell})_{\mathfrak{m}}\simeq C(K^{p}\backslash\mathbf{G}^{\prime}(\mathbb{A}_{ f})/\mathbf{G}^{\prime}(\mathbb{Q}),\overline{\mathbb{F}}_{\ell})_{ \mathfrak{m}}\otimes V_{\mu}|_{\hat{G}^{\Gamma}}(b_{T})[-d](-\frac{d}{2}) \tag{23}\]
of \(G(\mathbb{Q}_{p})\)-representations\({}^{8}\), where we note that \(J_{b}\simeq G\) if \(b\in B(G,\mu)_{\mathrm{un}}\) since \(b\) is basic, and \(J_{b}\) must be quasi-split since \(b\) is unramified. In particular, by arguing as in Koshikawa [14, Page 6], we know that \(R\Gamma_{c}(\mathrm{Sht}(G,b,\mu)_{\infty,C}/K^{\mathrm{hs}}_{p},\overline{ \mathbb{F}}_{\ell})_{\mathfrak{m}}\) will have irreducible constituents given by the representations of \(J_{b}(\mathbb{Q}_{p})\) with Fargues-Scholze parameter equal to \(\phi_{\mathfrak{m}}\) as conjugacy classes of parameters. Moreover, using that Assumption 4.4 holds for the groups appearing in Table (1), we know by Proposition 4.5 that they have to be constituents of \(i^{G}_{B}(\chi)\), which will also be irreducible under the generic assumption and the constraints appearing in Table (1) (See the proof of Corollary 4.24). Then [10, Conjecture 1.25] would imply that \(R\Gamma_{c}(G,b,\mu)[i^{G}_{B}(\chi)]\simeq i^{G}_{B}(\chi)\otimes V_{\mu}|_{ \hat{G}^{\Gamma}}(b_{T})[-d](\frac{-d}{2})\) as \(G(\mathbb{Q}_{p})\)-modules. Assume \(\ell\) is banal (i.e. coprime to the pro-order of \(K^{\mathrm{hs}}_{p}\)); then passing to \(K^{\mathrm{hs}}_{p}\)-invariants, which is exact under the banality hypothesis, gives us the isomorphism (23).\({}^{9}\)
Footnote 8: One should also be able to describe the Weil group action, as in [10, Conjecture 1.25].
Footnote 9: For this comparison, it would have been more natural to consider an analogue of Theorem 1.15 with \(\overline{\mathbb{Q}}_{\ell}\)-coefficients. This is indeed doable assuming that \(\phi_{\mathfrak{m}}\) admits a \(\overline{\mathbb{Z}}_{\ell}\)-lattice as in [10, Theorem 1.17]. This integrality condition is however an artifact of the theory of solid \(\overline{\mathbb{Q}}_{\ell}\)-sheaves not being properly understood (e.g. excision fails) and should be removable with more technology.
_Remark 6.1_.: If \(B(G,\mu)_{\mathrm{un}}\) consists of only the basic element and the \(\mu\)-ordinary element and \(\phi_{T}\) is strongly \(\mu\)-regular (Definition 4.13) then [10, Conjecture 1.25] is true. In particular, it follows from [10, Theorem 1.27] that the isomorphism (23) can be made unconditional.
We note that this description of the middle degree cohomology on the generic fiber of the Shimura variety at hyperspecial level parallels [11, Theorem 1.14 (1)], describing the middle degree cohomology on the special fiber of the natural integral model.
### A General Torsion Vanishing Conjecture
Consider now a general Shimura datum \((\mathbf{G},X)\). Let \(\Lambda\in\{\overline{\mathbb{Q}}_{\ell},\overline{\mathbb{F}}_{\ell}\}\). If \(\Lambda=\overline{\mathbb{F}}_{\ell}\), assume that \(\ell\) is very good with respect to \(G:=\mathbf{G}_{\mathbb{Q}_{p}}\), as in [11, Page 33]. We can then look at the \(G(\mathbb{Q}_{p})\times W_{E_{\mathfrak{p}}}\)-representation
\[R\Gamma_{c}(\mathcal{S}(\mathbf{G},X)_{K^{p},C},\Lambda)\]
defined by the cohomology at infinite level. By applying Corollary 4.3, we obtain a \(G(\mathbb{Q}_{p})\times W_{E_{\mathfrak{p}}}\)-equivariant decomposition of this
\[R\Gamma_{c}(\mathcal{S}(\mathbf{G},X)_{K^{p},C},\Lambda)=\bigoplus_{\phi}R \Gamma_{c}(\mathcal{S}(\mathbf{G},X)_{K^{p},C},\Lambda)_{\phi}\]
running over semi-simple \(L\)-parameters \(\phi:W_{\mathbb{Q}_{p}}\to{}^{L}G(\Lambda)\). For such a \(\phi\), we let \((\phi_{M},M)\) denote a cuspidal support; i.e., \(M\) is a Levi of \(G\) and \(\phi_{M}:W_{\mathbb{Q}_{p}}\to{}^{L}M(\Lambda)\) is a supercuspidal \(L\)-parameter such that \(\phi\) is obtained from \(\phi_{M}\) by composing with the natural embedding \({}^{L}M(\Lambda)\to{}^{L}G(\Lambda)\). We want to describe the degrees of cohomology in which \(R\Gamma_{c}(\mathcal{S}(\mathbf{G},X)_{K^{p},C},\Lambda)_{\phi}\) sits for suitably nice \(\phi\). The
case where \(\phi\) factors through \(M=T\) is covered by Conjecture 1.2. To go beyond this, we give the following definition.
**Definition 6.2**.: For a semi-simple \(L\)-parameter \(\phi\) with a cuspidal support \((M,\phi_{M})\), we let \(P\) be a parabolic with Levi factor \(M\) and unipotent radical \(N\). We consider the representation \(r\) given by looking at the action of \({}^{L}M\) on the Lie algebra of \({}^{L}N\) via the adjoint action. We say \(\phi\) is of Langlands-Shahidi type if the Galois cohomology groups
\[R\Gamma(W_{\mathbb{Q}_{p}},r\circ\phi_{M})\]
and
\[R\Gamma(W_{\mathbb{Q}_{p}},r\circ\phi_{M}^{\vee})\]
are trivial. Similarly, we say \(\phi\) is of weakly Langlands-Shahidi type if
\[H^{2}(R\Gamma(W_{\mathbb{Q}_{p}},r\circ\phi_{M}))\]
and
\[H^{2}(R\Gamma(W_{\mathbb{Q}_{p}},r\circ\phi_{M}^{\vee}))\]
are trivial.
_Remark 6.3_.: We note that, since we enforced this condition on both \(r\circ\phi_{M}\) and \(r\circ\phi_{M}^{\vee}\), it is independent of the choice of parabolic \(P\) and the choice of cuspidal support. Moreover, it is easy to check that, if \(M=T\), this precisely recovers Definition 1.1.
The terminology of "Langlands-Shahidi type" comes from the fact that the representation \(r\circ\phi_{M}\) is precisely the representation which appears in the description of the constant term of the usual Eisenstein series via the Langlands-Shahidi method. The motivation for this definition comes from considering the behavior of geometric Eisenstein series over the Fargues-Fontaine curve for general parabolics, by making analogies with the classical theory over function fields, as developed in [1, 1]. In particular, this should be the correct definition that guarantees that the eigensheaves \(\mathcal{S}_{\phi}\) on \(\operatorname{Bun}_{G}\) with eigenvalue \(\phi\) are as simple as possible, and that the analysis carried out in [1] generalizes to the non-principal case. This is discussed in more detail in [1, Chapter 3]. In addition, we expect that the consequences derived from the analysis in [1] in the principal case should also generalize. More precisely, we conjecture the following generalization of Proposition 4.15 and Corollary 4.24.
**Conjecture 6.4**.: _Let \(B(G)_{M}:=\operatorname{Im}(B(M)_{\operatorname{basic}}\to B(G))\) be the set of \(M\)-reducible elements, and let \(\phi\) be a semi-simple \(L\)-parameter of Langlands-Shahidi type with cuspidal support \((M,\phi_{M})\). The category \(\operatorname{D}_{\mathrm{lis}}(\operatorname{Bun}_{G},\Lambda)_{\phi}\) of \(\phi\)-local lisse-etale \(\Lambda\)-sheaves (as defined in Appendix A) breaks up as a direct sum_
\[\operatorname{D}_{\mathrm{lis}}(\operatorname{Bun}_{G},\Lambda)_{\phi}\simeq\bigoplus_{b\in B(G)_{M}}\operatorname{D}(\operatorname{Bun}_{G}^{b},\Lambda)_{\phi}\]
_via excision, and the \(!\) and \(*\) pushforwards agree for any smooth irreducible representation \(\rho\) of \(J_{b}(\mathbb{Q}_{p})\) lying in \(\operatorname{D}_{\mathrm{lis}}(\operatorname{Bun}_{G}^{b},\Lambda)_{\phi}\) for \(b\in B(G)_{M}\)._
_Given a tilting module \(V\in\operatorname{Tilt}_{\Lambda}({}^{L}G^{I})\), if \(\phi\) is of weakly Langlands-Shahidi type then the map induced by the associated Hecke operator_
\[T_{V}:\operatorname{D}_{\mathrm{lis}}(\operatorname{Bun}_{G},\Lambda)_{\phi}\to\operatorname{D}_{\mathrm{lis}}(\operatorname{Bun}_{G},\Lambda)_{\phi}^{BW^{I}_{\mathbb{Q}_{p}}}\]
_is perverse \(t\)-exact, where the fact that the Hecke operator preserves this subcategory is proven as in Lemma 4.2 (2)._
_Remark 6.5_.: During the preparation of this manuscript, Hansen formulated similar conjectures with rational coefficients [10]. He refers to Langlands-Shahidi parameters as generous parameters [10, Definition 2.5] and to weakly Langlands-Shahidi parameters as generic semi-simple parameters [10, Section 2.3]. One can show that these two definitions are equivalent. Indeed, note that the Galois cohomology \(H^{1}(R\Gamma(W_{\mathbb{Q}_{p}},r\circ\phi_{M}))\) controls the lifts of a semi-simple parameter \(\phi_{M}:W_{\mathbb{Q}_{p}}\to{}^{L}M(\Lambda)\) to a \({}^{L}P(\Lambda)\)-valued parameter, and that such lifts correspond to finding parameters whose semi-simplification is equal to \(\phi\). Moreover, insisting that \(H^{1}(R\Gamma(W_{\mathbb{Q}_{p}},r\circ\phi_{M}))\) is trivial is equivalent to insisting that \(R\Gamma(W_{\mathbb{Q}_{p}},r\circ\phi_{M})\) is trivial, using local Tate duality and the fact that the Euler-Poincare characteristic of this complex is \(0\), as recalled below. This shows the equivalence of the generous condition with the Langlands-Shahidi type condition, using that the stack of Langlands parameters with rational coefficients is reduced. Lastly, the lifts coming from classes in \(H^{0}(R\Gamma(W_{\mathbb{Q}_{p}},r\circ\phi_{M}))\) will give rise to non-Frobenius-semisimple \(L\)-parameters, allowing one to see that weakly Langlands-Shahidi is equivalent to generic semi-simple.
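For the reader's convenience, the two standard inputs used in the preceding argument are (a sketch, for \(V\) a finite-dimensional representation of \(W_{\mathbb{Q}_{p}}\) over \(\Lambda\), valid since \(\ell\neq p\)) local Tate duality and the vanishing of the Euler-Poincare characteristic:

\[H^{i}(W_{\mathbb{Q}_{p}},V)\simeq H^{2-i}(W_{\mathbb{Q}_{p}},V^{\vee}(1))^{\vee},\qquad\sum_{i=0}^{2}(-1)^{i}\dim_{\Lambda}H^{i}(W_{\mathbb{Q}_{p}},V)=0.\]

In particular, if \(H^{1}\) vanishes then \(\dim H^{0}+\dim H^{2}=0\), so the whole complex vanishes.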
In particular, by combining this with a generalization of Theorem 1.13 to arbitrary Shimura varieties and the analysis carried out in §5, we could deduce the following as a consequence.
**Conjecture 6.6**.: _Let \(\phi\) be a semi-simple \(L\)-parameter of weakly Langlands-Shahidi type with cuspidal support \((M,\phi_{M})\). Then the complex \(R\Gamma_{c}(\mathcal{S}(\mathbf{G},X)_{K^{p},C},\Lambda)_{\phi}\) (resp. \(R\Gamma(\mathcal{S}(\mathbf{G},X)_{K^{p},C},\Lambda)_{\phi}\)) is concentrated in degrees \(0\leq i\leq d\) (resp. \(d\leq i\leq 2d\))._
_Remark 6.7_.: For \((\mathbf{G},X)\) of PEL type \(A\) or \(C\), assuming Assumption 1.11 holds and that \(\phi\) is of Langlands-Shahidi type, we should also obtain a \(W_{E_{\mathfrak{p}}}\times G(\mathbb{Q}_{p})\)-equivariant direct sum decomposition
\[R\Gamma_{c}(\mathcal{S}(\mathbf{G},X)_{K^{p},C},\Lambda)_{\phi}\simeq\bigoplus _{b\in B(G,\mu)_{M}}(R\Gamma_{c}(G,b,\mu)_{\phi}\otimes^{\mathbb{L}}V_{b})[2d _{b}],\]
where \(R\Gamma_{c}(G,b,\mu):=\operatorname{colim}_{K_{p}\to\{1\}}R\Gamma_{c}( \operatorname{Sht}(G,b,\mu)_{\infty,C}/\underline{K_{p}},\Lambda(d_{b}))\) and \(R\Gamma_{c}(G,b,\mu)_{\phi}\) is the projection applied to the complex viewed as a \(G(\mathbb{Q}_{p})\)-representation. This should also generalize once one has appropriate general definitions of \(\operatorname{Ig}^{b}\) and \(\operatorname{Ig}^{b,*}\) so that one can actually define \(V_{b}:=R\Gamma_{c-\partial}(\operatorname{Ig}^{b},\Lambda)\). Under possible additional constraints on \(\phi\), one should also be able to describe the contribution of \(R\Gamma_{c}(G,b,\mu)_{\phi}\) in terms of the decomposition \(V_{\mu}|_{Z(\hat{M}^{\Gamma})}=\mathcal{T}_{\mu}|_{Z(\hat{M}^{\Gamma})}\) for \(b\in B(G)_{M}\) (along the lines of [10, Conjecture 1.25]), as is explained in the toral case in SS6.1. It would be interesting to formulate an optimal conjecture.
_Remark 6.8_.: We believe that this conjecture should be true under just the weakly Langlands-Shahidi condition. However, we strongly suspect that the splitting of the semi-orthogonal decomposition, and in turn the splitting of Mantovan's filtration discussed in the previous Remark, should not hold unless the set \(B(G,\mu)_{M}\) is a singleton. In particular, in [10, Section 2.2] Hansen conjectures the existence of perverse sheaves lying in \(\operatorname{D}_{\mathrm{lis}}(\operatorname{Bun}_{G},\Lambda)_{\phi}\), for which the semi-orthogonal decomposition does not split. Nonetheless, one still expects perverse \(t\)-exactness of Hecke operators to hold in these cases [10, Conjecture 2.32].
## Appendix A Spectral Decomposition of Sheaves on \(\operatorname{Bun}_{G}\), by David Hansen
Let \(G/\mathbb{Q}_{p}\) be a connected reductive group and \(\Lambda/\mathbb{Z}_{\ell}\) an algebraically closed field. If \(\operatorname{char}(\Lambda)\neq 0\), we assume \(\ell\) is very good for \(G\).
Set \(\operatorname{D}(\operatorname{Bun}_{G})=\operatorname{D}_{\mathrm{lis}}(\operatorname{Bun}_{G},\Lambda)\) to be the derived category of lisse-etale \(\Lambda\)-sheaves, regarded as a stable \(\infty\)-category whenever convenient. Let \(\mathfrak{X}_{\hat{G}}=Z^{1}(W_{E},\hat{G})_{\Lambda}/\hat{G}\) be the stack of \(L\)-parameters over \(\Lambda\), and let \(X_{\hat{G}}\) be its coarse moduli space, \(q:\mathfrak{X}_{\hat{G}}\to X_{\hat{G}}\) the natural map. We will regard \(\mathfrak{X}_{\hat{G}}\) as a disjoint union of finite type algebraic stacks over \(\Lambda\), and \(X_{\hat{G}}\) as a disjoint union of finite type affine \(\Lambda\)-schemes. As in [11], we have the spectral action of \(\operatorname{Perf}(\mathfrak{X}_{\hat{G}})\) on \(\operatorname{D}(\operatorname{Bun}_{G})\), and there is a natural map \(\Psi_{G}:\mathcal{O}(\mathfrak{X}_{\hat{G}})=\mathcal{O}(X_{\hat{G}})\to\mathfrak{Z}(\operatorname{D}(\operatorname{Bun}_{G})):=\pi_{0}(\operatorname{End}(\operatorname{id}_{\operatorname{D}(\operatorname{Bun}_{G})}))\), where we recall that
\(Z^{1}(W_{E},\hat{G})_{\Lambda}\) is a disjoint union of affine schemes by [11, Theorem VIII.1.3]. These two structures are compatible (as proven by Zou [10, Theorem 5.2.1]).
By [11, Prop. VIII.3.8], the set of closed points \(X_{\hat{G}}(\Lambda)\) is naturally in bijection with the set of isomorphism classes of semisimple \(L\)-parameters \(\phi:W_{E}\to{}^{L}\!G(\Lambda)\). Let \(\mathfrak{m}_{\phi}\subset\mathcal{O}(X_{\hat{G}})\) be the maximal ideal associated with a given \(\phi\).
**Definition A.1**.: Given any \(\phi\) as above, \(\mathrm{D}(\mathrm{Bun}_{G})_{\phi}\subset\mathrm{D}(\mathrm{Bun}_{G})\) is the full subcategory of sheaves \(A\in\mathrm{D}(\mathrm{Bun}_{G})\) such that for every \(f\in\mathcal{O}(X_{\hat{G}})\smallsetminus\mathfrak{m}_{\phi}\), the map \(A\xrightarrow{\cdot f}A\) is an isomorphism. Here \(\cdot f\) is the endomorphism of \(A\) induced by \(\Psi_{G}\).
We will call objects of \(\mathrm{D}(\mathrm{Bun}_{G})_{\phi}\) \(\phi\)-_local_ sheaves.
By construction, \(\mathrm{D}(\mathrm{Bun}_{G})_{\phi}\) is a full subcategory of \(\mathrm{D}(\mathrm{Bun}_{G})\) stable under arbitrary limits and colimits, and the tautological inclusion functor \(\iota_{\phi}:\mathrm{D}(\mathrm{Bun}_{G})_{\phi}\hookrightarrow\mathrm{D}( \mathrm{Bun}_{G})\) commutes with limits and colimits. By the \(\infty\)-categorical adjoint functor theorem [12, Cor. 5.5.2.9.(2)], it therefore admits a left adjoint \(\mathcal{L}_{\phi}:\mathrm{D}(\mathrm{Bun}_{G})\to\mathrm{D}(\mathrm{Bun}_{G })_{\phi}\).\({}^{10}\) The unit of the adjunction gives a map \(A\to\iota_{\phi}\mathcal{L}_{\phi}A=:A_{\phi}\) functorially in \(A\). Since \(\iota_{\phi}\) is fully faithful, \(\mathcal{L}_{\phi}\iota_{\phi}=\mathrm{id}\), so \((A_{\phi})_{\phi}=A_{\phi}\), i.e. the endofunctor \(A\rightsquigarrow A_{\phi}\) is idempotent. We remark that \(\mathrm{D}(\mathrm{Bun}_{G})_{\phi}\) is a Bousfield localization of \(\mathrm{D}(\mathrm{Bun}_{G})\), and the map \(A\to A_{\phi}\) is the initial map from \(A\) to a \(\phi\)-local sheaf.
Footnote 10: To see that \(\iota_{\phi}\) is accessible, use [12, Prop. 5.4.7.7] together with the fact that \(\iota_{\phi}\) admits a right adjoint, which follows from [12, Cor. 5.5.2.9.(1)].
**Proposition A.2**.: _The full subcategory \(\mathrm{D}(\mathrm{Bun}_{G})_{\phi}\) is preserved by the spectral action, and \(A\rightsquigarrow A_{\phi}\) commutes with the spectral action. Moreover, \(\mathrm{supp}(A_{\phi})\subseteq\mathrm{supp}(A)\)._
Proof.: The first claim is clear, since the spectral action commutes with the action of \(\mathcal{O}(X_{\hat{G}})\). For the remaining claims (and some later arguments), it is useful to give an explicit formula for \(A_{\phi}\). Let \(\mathcal{I}_{\phi}\) be the diagram category whose objects are elements of \(\mathcal{O}(X_{\hat{G}})\smallsetminus\mathfrak{m}_{\phi}\) and where a morphism \(f\to g\) is an element \(h\in\mathcal{O}(X_{\hat{G}})\smallsetminus\mathfrak{m}_{\phi}\) such that \(g=fh\). This is clearly filtered. Let \(F\in\mathrm{Fun}(\mathcal{I}_{\phi},\mathrm{D}(\mathrm{Bun}_{G}))\) be the functor sending \(f\) to \(A\) and sending a morphism \(h\in\mathrm{Mor}(f,g)\) to \(\cdot h\in\mathrm{End}(A)\). Then \(A_{\phi}=\mathrm{colim}_{i\in\mathcal{I}_{\phi}}F(i)\). The remaining claims are now immediate.
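For intuition, note the following commutative-algebra analogue (a heuristic sketch of ours, not part of the argument): for a module \(M\) over a commutative ring \(R\) and a maximal ideal \(\mathfrak{m}\subset R\), the colimit of the same diagram, with objects \(f\in R\smallsetminus\mathfrak{m}\) and constant value \(M\), computes the localization,

\[\operatorname*{colim}_{\mathcal{I}_{\mathfrak{m}}}M\;\cong\;M_{\mathfrak{m}},\qquad[(f,m)]\;\longmapsto\;\frac{m}{f}\,\]

where \((f,m)\sim(fh,hm)\). Proposition A.3 below is the categorified counterpart of this identity.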
To make sense of the next proposition, note that for any \(A,B\in\mathrm{D}(\mathrm{Bun}_{G})\), \(\mathrm{Hom}(B,A)\) is naturally a \(\mathfrak{Z}(\mathrm{D}(\mathrm{Bun}_{G}))\)-module, whence a \(\mathcal{O}(X_{\hat{G}})\)-module.
**Proposition A.3**.: _If \(C\in\mathrm{D}(\mathrm{Bun}_{G})\) is compact, then \(\mathrm{Hom}(C,A_{\phi})\cong\mathrm{Hom}(C,A)_{\mathfrak{m}_{\phi}}\) functorially in \(A\) and \(C\), where the RHS is the usual localization as an \(\mathcal{O}(X_{\hat{G}})\)-module._
Proof.: Notation as in the previous proof, we have
\[\mathrm{Hom}(C,A_{\phi}) \cong\mathrm{Hom}(C,\mathrm{colim}_{i\in\mathcal{I}_{\phi}}F(i))\] \[\cong\mathrm{colim}_{i\in\mathcal{I}_{\phi}}\mathrm{Hom}(C,F(i))\] \[\cong\mathrm{Hom}(C,A)_{\mathfrak{m}_{\phi}}\]
where the second isomorphism follows from the compactness of \(C\) and the third isomorphism is immediate from the definition of \((-)_{\mathfrak{m}_{\phi}}\).
**Proposition A.4**.: _If \(A\) is ULA, then also \(A_{\phi}\) is ULA._
Proof.: Recall from [11, Prop. VII.7.9] that \(B\in\mathrm{D}(\mathrm{Bun}_{G})\) is ULA iff \(R\mathrm{Hom}(C,B)\in\mathrm{Perf}(\Lambda)\) is a perfect complex for all compact objects \(C\in\mathrm{D}(\mathrm{Bun}_{G})\). Now, if \(C\) is compact, \(R\mathrm{Hom}(C,-)\) commutes with filtered colimits, so
\[R\mathrm{Hom}(C,A_{\phi}) \simeq R\mathrm{Hom}(C,\mathrm{colim}_{i\in\mathcal{I}_{\phi}}F(i))\] \[\simeq\mathrm{colim}_{i\in\mathcal{I}_{\phi}}R\mathrm{Hom}(C,F(i))\]
with notation as in the proof of Proposition A.2. Since \(F(i)\simeq A\) for all \(i\), \(\operatorname{colim}_{i\in\mathcal{I}_{\phi}}R\mathrm{Hom}(C,F(i))\) is a filtered colimit of perfect complexes \(P_{i}\) which vanish outside a finite interval independent of \(i\), and with \(\dim_{\Lambda}(H^{j}(P_{i}))\) bounded independently of \(i\). It then easily follows that \(\operatorname{colim}_{i\in\mathcal{I}_{\phi}}R\mathrm{Hom}(C,F(i))\) is perfect, whence the claim.
**Proposition A.5**.: _If \(A\) is ULA, the natural maps \(A\to\prod_{\phi}A_{\phi}\leftarrow\oplus_{\phi}A_{\phi}\) are isomorphisms, where the direct sum and direct product are taken over all semi-simple \(L\)-parameters. In particular, \(A_{\phi}\) is functorially a direct summand of \(A\) for ULA sheaves \(A\), and the functor \((-)_{\phi}\) on ULA sheaves is perverse t-exact._
_Remark A.6_.: The isomorphism \(\oplus_{\phi}A_{\phi}\stackrel{{\sim}}{{\to}}\prod_{\phi}A_{\phi}\) may be surprising at first glance. To put this in context, we remind the reader that if \((\pi_{i})_{i\in I}\) is a collection of admissible smooth \(\Lambda[G(\mathbb{Q}_{p})]\)-modules whose product \(\prod_{i}\pi_{i}\) is admissible, then \(\oplus_{i}\pi_{i}\stackrel{{\sim}}{{\to}}\prod_{i}\pi_{i}\) automatically, because admissibility of \(\prod_{i}\pi_{i}\) implies that for any given compact open subgroup \(K\subset G(\mathbb{Q}_{p})\) we have \(\pi_{i}^{K}=0\) for all but finitely many \(i\). A similar argument occurs in the following proof, which actually shows that if \((A_{i})_{i\in I}\) is any collection of ULA sheaves on \(\mathrm{Bun}_{G}\) whose product \(\prod_{i}A_{i}\) is ULA, then \(\oplus_{i}A_{i}\stackrel{{\sim}}{{\to}}\prod_{i}A_{i}\) automatically.
Proof.: We first show that \(A\to\prod_{\phi}A_{\phi}\) is an isomorphism. Let \(C\) be any compact object. It suffices to prove that the natural map
\[\mathrm{Hom}(C,A)\to\prod_{\phi}\mathrm{Hom}(C,A_{\phi})\cong\mathrm{Hom}(C, \prod_{\phi}A_{\phi})\]
is an isomorphism, since \(\mathrm{D}(\mathrm{Bun}_{G})\) is compactly generated [10, Theorem I.5.1 (iii)]. As in the previous proof, \(R\mathrm{Hom}(C,A)\) is a perfect complex, so \(\mathrm{Hom}(C,A)\) is a finite \(\Lambda\)-vector space. In particular, it is a finite length \(\mathcal{O}(X_{\hat{G}})\)-module supported at a finite set of closed points \(S\subset X_{\hat{G}}(\Lambda)\), so if \(\phi\notin S\) then \(\mathrm{Hom}(C,A_{\phi})=\mathrm{Hom}(C,A)_{\mathfrak{m}_{\phi}}=0\) using Proposition A.3. We then conclude that
\[\mathrm{Hom}(C,A) =\oplus_{\phi\in S}\mathrm{Hom}(C,A)_{\mathfrak{m}_{\phi}}\] \[=\oplus_{\phi\in S}\mathrm{Hom}(C,A_{\phi})\] \[=\prod_{\phi}\mathrm{Hom}(C,A_{\phi})\]
where the first equality follows from general nonsense about finite length modules over commutative rings, the second equality follows from Proposition A.3, and the third equality follows from the vanishing of \(\mathrm{Hom}(C,A_{\phi})\) for all but finitely many \(\phi\). This also shows that \(\mathrm{Hom}(C,\oplus_{\phi}A_{\phi})\cong\oplus_{\phi}\mathrm{Hom}(C,A_{\phi})\to \prod_{\phi}\mathrm{Hom}(C,A_{\phi})\) is an isomorphism (here again the first isomorphism follows from compactness of \(C\)), which implies that \(\oplus_{\phi}A_{\phi}\stackrel{{\sim}}{{\to}}\prod_{\phi}A_{\phi}\) is an isomorphism.
Next, recall the Verdier duality functor \(\mathbb{D}_{\mathrm{Bun}_{G}}\) on \(\mathrm{D}(\mathrm{Bun}_{G})\), which induces an involutive anti-equivalence on the subcategory of ULA sheaves. Recall also that, for any \(A\) and any \(f\in\mathcal{O}(X_{\hat{G}})\), Verdier duality carries the endomorphism \(\cdot f\) of \(A\) to the endomorphism \(\cdot f^{\vee}\) of \(\mathbb{D}_{\mathrm{Bun}_{G}}(A)\), where \(f\mapsto f^{\vee}\) is the involution of \(\mathcal{O}(X_{\hat{G}})\) induced by composition with the Chevalley involution at the level of \(L\)-parameters. Since \(f\in\mathfrak{m}_{\phi}\) iff \(f^{\vee}\in\mathfrak{m}_{\phi^{\vee}}\), we deduce that if \(A\) is \(\phi\)-local then \(\mathbb{D}_{\mathrm{Bun}_{G}}(A)\) is \(\phi^{\vee}\)-local. Using biduality, we also get that if \(A\) is ULA then \(A\) is \(\phi\)-local if and only if \(\mathbb{D}_{\mathrm{Bun}_{G}}(A)\) is \(\phi^{\vee}\)-local.
**Corollary A.7**.: _If \(A\) is ULA, then \(\mathbb{D}_{\mathrm{Bun}_{G}}(A_{\phi})\cong(\mathbb{D}_{\mathrm{Bun}_{G}}(A))_{ \phi^{\vee}}\)._
Proof.: By Proposition A.5 and the remarks preceding its proof, the decomposition \(A=\oplus_{\psi}A_{\psi}\) dualizes to a decomposition
\[\mathbb{D}_{\operatorname{Bun}_{G}}(A)=\prod_{\psi}\mathbb{D}_{\operatorname{Bun }_{G}}(A_{\psi})\cong\oplus_{\psi}\mathbb{D}_{\operatorname{Bun}_{G}}(A_{\psi})\]
where the second isomorphism follows from the discussion in Remark A.6. On the other hand, applying Proposition A.5 directly to \(\mathbb{D}_{\operatorname{Bun}_{G}}(A)\) gives a decomposition
\[\mathbb{D}_{\operatorname{Bun}_{G}}(A)\cong\oplus_{\psi^{\prime}}(\mathbb{D}_ {\operatorname{Bun}_{G}}(A))_{\psi^{\prime}},\]
so comparing these we get a natural isomorphism
\[\oplus_{\psi}\mathbb{D}_{\operatorname{Bun}_{G}}(A_{\psi})\cong\oplus_{\psi^{ \prime}}(\mathbb{D}_{\operatorname{Bun}_{G}}(A))_{\psi^{\prime}}.\]
Applying \((-)_{\phi^{\vee}}\) to both sides, we get \(\mathbb{D}_{\operatorname{Bun}_{G}}(A_{\phi})\) on the left side (using that \(\mathbb{D}_{\operatorname{Bun}_{G}}(A_{\phi})\) is \(\phi^{\vee}\)-local), and \((\mathbb{D}_{\operatorname{Bun}_{G}}(A))_{\phi^{\vee}}\) on the right side. This gives the claim.
|
2309.07436 | What exactly does Bekenstein bound? | The Bekenstein bound posits a maximum entropy for matter with finite energy
confined to a spatial region. It is often interpreted as a fundamental limit on
the information that can be stored by physical objects. In this work, we test
this interpretation by asking whether the Bekenstein bound imposes constraints
on a channel's communication capacity, a context in which information can be
given a mathematically rigorous and operationally meaningful definition. We
study specifically the \emph{Unruh channel} that describes a stationary Alice
exciting different species of free scalar fields to send information to an
accelerating Bob, who is confined to a Rindler wedge and exposed to the noise
of Unruh radiation. We show that the classical and quantum capacities of the
Unruh channel obey the Bekenstein bound that pertains to the decoder Bob. In
contrast, even at high temperatures, the Unruh channel can transmit a
significant number of \emph{zero-bits}, which are quantum communication
resources that can be used for quantum identification and many other primitive
protocols. Therefore, unlike classical bits and qubits, zero-bits and their
associated information processing capability are generally not constrained by
the Bekenstein bound. However, we further show that when both the encoder and
the decoder are restricted, the Bekenstein bound does constrain the channel
capacities, including the zero-bit capacity. | Patrick Hayden, Jinzhao Wang | 2023-09-14T05:37:20Z | http://arxiv.org/abs/2309.07436v2 | # What exactly does Bekenstein bound?
###### Abstract
The Bekenstein bound posits a maximum entropy for matter with finite energy confined to a spacetime region. It is often interpreted as a fundamental limit on the information that can be stored by physical objects. In this work, we test this interpretation by asking whether the Bekenstein bound imposes constraints on a channel's communication capacity, a context in which information can be given a mathematically rigorous and operationally meaningful definition. We first derive a bound on the accessible information and demonstrate that the Bekenstein bound constrains the decoding instead of the encoding. Then we study specifically the _Unruh channel_ that describes a stationary Alice exciting different species of free scalar fields to send information to an accelerating Bob, who is therefore confined to a Rindler wedge and exposed to the noise of Unruh radiation. We show that the classical and quantum capacities of the Unruh channel obey the Bekenstein bound. In contrast, the entanglement-assisted capacity is as large as the input size even at arbitrarily high Unruh temperatures. This reflects that the Bekenstein bound can be violated if we do not properly constrain the decoding operation in accordance with the bound. We further find that the Unruh channel can transmit a significant number of _zero-bits_, which are communication resources that can be used as minimal substitutes for the classical/quantum bits needed for many primitive information processing protocols, such as dense coding and teleportation. We show that the Unruh channel has a large zero-bit capacity even at high temperatures, which underpins the capacity boost with entanglement assistance and allows Alice and Bob to perform quantum identification. Therefore, unlike classical bits and qubits, zero-bits and their associated information processing capability are not constrained by the Bekenstein bound.
## 1 Introduction
Information-theoretic concepts have amply demonstrated their utility for understanding fundamental aspects of physics. One notable example is the Bekenstein bound [1], originally proposed as a way to safeguard the second law of thermodynamics against violations by black holes [2]. The bound asserts that the entropy \(S\) of matter confined within a spatial region of size \(R\) and energy \(E\) is subject to a specific limit,
\[S\leq\lambda RE. \tag{1}\]
where \(\lambda\) is some order-one constant independent of \(G_{N}\). It is one of the first profound theoretical discoveries identifying an intrinsic connection between information and energy,1 and
inspired many subsequent developments. (See [5] for a nice review.) The Bekenstein bound serves as a compelling reminder that information, despite its abstract nature, is necessarily carried by physical systems.
The terms in the formula proposed by Bekenstein were ambiguously defined [6]. The proper formulation and validity of the bound remained elusive and critically debated (see [7] for a review), until Casini successfully proposed a precise version of equation (1) within the framework of quantum field theory [8]. Casini made the observation that the relative entropy between the quantum state of matter, denoted \(\rho\), and the vacuum state, denoted \(\Omega\), can be expanded as the difference between an energy term and an entropy term,2
Footnote 2: Here some UV regularization is assumed to decompose the relative entropy into the two terms.
\[S(\rho_{B}||\Omega_{B})=\delta\langle K_{\Omega_{B}}\rangle_{\rho_{B}}-\delta S (\rho_{B})\geq 0\, \tag{2}\]
where the relative entropy is evaluated with respect to the observables supported on some region \(B\); \(\delta S(\rho_{B})\) is the vacuum subtracted von Neumann entropy of \(\rho_{B}\); and \(\delta\langle K_{\Omega_{B}}\rangle_{\rho_{B}}\) is the modular energy for which the vacuum energy is set to zero. (The modular Hamiltonian \(K_{\rho}\) for any positive operator \(\rho\) is defined as \(K_{\rho}:=-\log\rho\)). Casini's entropy bound then simply follows from the positivity of the relative entropy.
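As a finite-dimensional illustration (a toy sketch of ours, not a QFT computation: the "vacuum" is modeled by a qubit thermal state), the following code verifies the decomposition (2) numerically, checking \(S(\rho||\Omega)=\delta\langle K_{\Omega}\rangle_{\rho}-\delta S(\rho)\geq 0\) for random states:

```python
import numpy as np
from scipy.linalg import logm

def entropy(r):
    ev = np.linalg.eigvalsh(r)
    ev = ev[ev > 1e-12]
    return -np.sum(ev * np.log(ev))            # von Neumann entropy in nats

def rel_entropy(r, s):
    return np.real(np.trace(r @ (logm(r) - logm(s))))

# "Vacuum" Omega: thermal state of H = diag(0, w) at inverse temperature beta
w, beta = 1.0, 2.0
Omega = np.diag(np.exp(-beta * np.array([0.0, w])))
Omega /= np.trace(Omega)
K = -logm(Omega)                                # modular Hamiltonian of Omega

rng = np.random.default_rng(0)
for _ in range(5):
    X = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    rho = X @ X.conj().T
    rho /= np.trace(rho)                        # random full-rank state
    dK = np.real(np.trace((rho - Omega) @ K))   # vacuum-subtracted modular energy
    dS = entropy(rho) - entropy(Omega)          # vacuum-subtracted entropy
    assert abs(rel_entropy(rho, Omega) - (dK - dS)) < 1e-8
    print(f"dS = {dS:+.4f} <= dK = {dK:+.4f}")
```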
Casini's entropy bound is a widely accepted proven version of the Bekenstein bound, but it is not the only viable formulation. A general interpretation of the Bekenstein bound is that it represents a fundamental limit on the information content carried by matter [7; 9]. Bekenstein himself also made efforts to generalize the scope of his original proposition to communication scenarios [10; 11; 12]. However, it is important to note that this interpretation remains folklore rather than established fact, mainly because different operational tasks lead to distinct realizations and characterizations of the notion of _information_, among which the von Neumann entropy is just one instance.
The key insight of Casini's formulation is that the Bekenstein bound can be understood as an information-theoretic statement about state distinguishability, measured by the relative entropy. The bound arises from the fact that excited states are generally not perfectly distinguishable from the vacuum. The Bekenstein bound becomes interesting, meaning it approaches saturation, when the state becomes indistinguishable from the vacuum locally within the region of interest. The distinguishability can be improved by investing energy to better localize the distinguishing features of the states as well as by providing more spatial room to examine their differences.
However, even Casini's entropy bound does not perfectly capture Bekenstein's vision for information being bounded by a quantity that pertains to the spatial extent and energy of a region as in (1). In order to make Casini's entropy bound closer to (1), one needs to adopt the decomposition (2). There are issues with the decomposition both technically and conceptually. The technical issue is that the vacuum regularization is not obviously legitimate for arbitrary regions in a given QFT. Perhaps more importantly, the vacuum-subtracted modular Hamiltonian does not admit the form \(RE\) in general.
One can certainly restrict to scenarios where the technical issue is resolvable. However, there is the further conceptual issue of _operational meaning_. The von Neumann entropy characterizes the optimal rate at which the quantum state of matter can be compressed into a memory [13]. However, the operational interpretation becomes less apparent when the entropy is renormalized by subtracting the vacuum contribution. In particular, both sides of the inequality (2) can be _negative_, making the operational meaning even more obscure.3
Footnote 3: Itβs conceivable that the regularized entropy could potentially be understood as a quantum conditional entropy, which admits negative values and characterizes operational tasks like state merging.
We regard _the Bekenstein bound_ as a general principle that admits different formulations in different operational contexts. Could there be other formulations of it? To facilitate our discussions in this paper, we refer to any bound on an information measure that is derived from the positivity of relative entropy as a _Casini bound_, so as to distinguish it from a _canonical Bekenstein bound_ which has the defining feature of taking the form \(RE\) as in (1).4 They are however not mutually exclusive. In some special cases, such as the Rindler wedge or an interval in conformal field theory, the Casini bound on entropy can be regarded as a canonical Bekenstein bound.
Footnote 4: The precise definitions of the size \(R\) and energy \(E\) could depend on the context.
In fact, Blanco and Casini also proved a canonical Bekenstein bound distinct from (2) [14]. It states that the information associated with a strip region of width \(R\) is bounded by \(RE\) where \(E\) is the global energy. Information in their case is measured in terms of the conditional mutual information between the strip region and the purifier of the global state. While this version of the Bekenstein bound has positive quantities on both sides, the information measure is more contrived and the operational meaning of this bound is also unclear.5
Footnote 5: See [15] for yet another Bekenstein bound that is derived using a similar argument. The operational meaning of their result is unclear either.
Page also formulated an appealing version of the Bekenstein bound [16]. He considered an ensemble of unitaries, supported locally on a finite region of width \(R\), acting on the vacuum state. In such a way, no information can be accessed from the complement region outside. Then he conjectured that the total entropy of this ensemble state should be bounded by \(RE\) where \(E\) is the average energy of the ensemble. Unfortunately, Page himself gave a generic counterexample to his version of the Bekenstein bound.
The failure of Page's proposal highlights a subtle but crucial aspect of the Bekenstein bound: _Given some finite energy allowance, the Bekenstein bound is not about how much information can be stored in a box, but rather the amount of information that can be decoded with operations confined within the box_. We will argue that the failure of Page's formulation originates in the fact that his measure of total entropy implicitly presumes a spatially unconstrained decoding procedure. For the Bekenstein bound to be applicable, we should focus on the constraints on the decoding operations. Casini's entropy bound is a working example as it pertains to state distinguishability with operations confined to the specified region.
### Our contributions
Our goal in this work is to test the scope of the Bekenstein bound as a bound on information from an operational perspective. We would like to know whether the Bekenstein bound admits alternative formulations, such that they have clear operational meanings. Drawing on the lesson from Page, we will focus on the information that can be _decoded_ from a region.
We first derive a Casini bound on the _accessible information_ of an ensemble of states (codewords) that constrains the classical information that can be read out with measurements confined within a region. This is technically achieved by a universal bound on the Holevo information and our bound is mathematically equivalent to Casini's entropy bound (2).
In addition to the operational meaning, the upshot of this Casini bound is that the Holevo information is UV finite, so no vacuum subtraction is needed, in contrast to the von Neumann entropy case (2). Our bound applies to extracting classical information from a confined region, regardless of how the information is encoded in the first place. Using our bound, we further propose a reasonable modification to Page's formulation to incorporate the idea of constrained decoding. The modified version is once again equivalent to Casini's entropy bound. This is to be contrasted with the failure of Page's original proposal, which implies that the Bekenstein bound _cannot apply_ to the encoding alone in a way that disregards the decoding.
Does the Bekenstein bound apply universally in some form to decoding every kind of information? The natural framework to address this question, which incorporates both encoding and decoding, is quantum Shannon theory. The constraints on the encoding and the decoding, as well as the evolution of codeword states, can be modeled together as a quantum channel. We can then formulate the general problem by asking whether various channel capacities are bounded by the energy of the codewords times the size of the receiver's domain. The capacities describe the amount of classical and quantum information transmittable per channel use in the limit of many uses of identical and independent distributed (i.i.d.) channels. Again, a virtue of channel capacities, as compared to the von Neumann entropy, is that they are by themselves UV-finite and operationally meaningful.
Instead of answering the question in full generality, we consider the concrete model of free scalar fields in Rindler spacetime. We consider communication from a stationary Alice, who sees the entire space, to an accelerating Bob, who only sees a portion of space. Alice encodes her messages by exciting a particle from one of several distinct species, and Bob receives these signals and tries to decode Alice's message. The fact that Bob's decoding operations are constrained within the Rindler wedge is equivalent to the presence of Unruh radiation [17; 18; 19] that adds noise to this communication channel. We shall refer to the channel as the _Unruh channel_.
We evaluate its various channel capacities for transmitting classical and quantum information. Our findings reveal that both the quantum and the classical capacities of the communication channel respect the Bekenstein bound \(\beta E\), where \(\beta\) is the inverse temperature
that Bob measures and \(E\) is the energy of the message signals in his frame.
However, we find that the entanglement-assisted classical and quantum capacities are _not_ constrained by the Bekenstein bound. Notably, even in scenarios where Bob is infinitely accelerating, the unassisted classical capacity is negligibly small, and \(\beta E\to 0\), entanglement assistance allows for the transmission of an amount of classical information growing without bound as the number of species grows. The violation of the bound demonstrates the principle that the spatial constraint in the Bekenstein bound should restrict the entire decoding procedure. In this case, entanglement with degrees of freedom Alice never acts on is enhancing Bob's decoding. The violation of the Bekenstein bound can be traced directly to the failure to impose spatial restrictions on those degrees of freedom.
This considerable capacity boost due to entanglement assistance can be attributed to specific quantum communication resources known as _zero-bits_, first studied in the context of quantum identification by Hayden-Winter [20] and later formalized by Hayden-Penington [21]. Zero-bits serve as minimal substitutes for cbits or qubits in various primitive quantum information processing protocols such as teleportation and dense coding. They are well-defined and useful, however, even in the absence of entanglement assistance. Although zero-bits are less powerful than cbits or qubits, they are more robust against noise. We show that the Unruh channel possesses a large zero-bit capacity regardless of the level of background noise caused by the Unruh radiation. Moreover, this capacity increases indefinitely as the number of particle species accessible to Alice grows. In this regard, unlike cbits and qubits, there is no Bekenstein bound that constrains zero-bits and their information-processing capabilities.
_Outline:_ We discuss a Casini bound on accessible information and use it to revise Page's proposal in Section 2. In Section 3, we study the Bekenstein bound on channel capacities and use the Unruh channel as a concrete example. We evaluate its various channel capacities and compare them to the Bekenstein bound. We show that the Unruh channel sends a large number of zero-bits that are not constrained by the Bekenstein bound in Section 4. Section 5 offers some concluding remarks.
## 2 The Bekenstein bound on the accessible information
Consider the task of Bob extracting classical information from an ensemble of quantum states encoded by Alice. Let the random variable describing Alice's message be \(X\), which is distributed according to the law \(p_{X}\). For each message \(x\in\mathcal{X}\), Alice assigns a codeword \(\rho^{x}\) which is a density operator in the operator algebra of the QFT. Bob has access to any measurement operations supported in a region \(B\), and he tries to measure the codewords and figure out the classical information stored by Alice. Note that Alice's encoding was not spatially restricted. Let the random variable describing Bob's outcome be \(Y\). The task is to maximize the mutual information between \(X\) and \(Y\) over decoding measurements. We define the accessible
information as the maximal mutual information,
\[I_{\rm acc}(B,\rho):=\max_{\{\Pi_{y}\}}I(X:Y)_{P_{XY}},\quad P_{XY}:=\sum_{x\in\mathcal{X}}\sum_{y\in\mathcal{Y}}p_{x}\,\mathrm{Tr}\left(\Pi_{y}\rho_{B}^{x}\right)\left|x\right\rangle\!\!\left\langle x\right|_{X}\otimes\left|y\right\rangle\!\!\left\langle y\right|_{Y}\, \tag{2.1}\]
where we use \(\rho\) as a shorthand for the ensemble of codewords \(\{p_{x},\rho_{B}^{x}\}\).
We would like to know whether \(I_{\rm acc}(B,\rho)\) obeys the Bekenstein bound. This quantity does not have a general closed-form formula, but there is a useful upper bound known as the Holevo information [22],
\[I_{\rm acc}(B,\rho)\leq\chi(\{p_{x},\rho_{B}^{x}\}):=\sum_{x\in\mathcal{X}}p_{x}S(\rho_{B}^{x}||\bar{\rho}_{B})\, \tag{2.2}\]
where \(\bar{\rho}_{B}:=\sum_{x\in\mathcal{X}}p_{x}\rho_{B}^{x}\). The relative entropy here is well-defined in QFT using Tomita-Takesaki theory and Araki's definition. The bound will be saturated if and only if the ensemble consists of commuting codewords, such as an orthogonal set.
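For a concrete feel for the gap between the two sides of (2.2), here is a small qubit example of our own (hypothetical codewords, with a crude scan over projective measurements) showing \(I_{\rm acc}\leq\chi\), with a strict gap for non-commuting codewords:

```python
import numpy as np

def S(r):
    ev = np.linalg.eigvalsh(r); ev = ev[ev > 1e-12]
    return -np.sum(ev * np.log(ev))

def proj(theta):                      # pure qubit state in the x-z plane
    v = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return np.outer(v, v)

p = [0.5, 0.5]
codewords = [proj(0.0), proj(np.pi / 3)]     # non-orthogonal, non-commuting
rho_bar = sum(pi * ci for pi, ci in zip(p, codewords))
chi = S(rho_bar) - sum(pi * S(ci) for pi, ci in zip(p, codewords))

def mutual_info(phi):                 # I(X:Y) for a projective measurement at angle phi
    P0 = proj(phi); P1 = np.eye(2) - P0
    Pxy = np.array([[pi * np.trace(Pj @ ci).real for Pj in (P0, P1)]
                    for pi, ci in zip(p, codewords)])
    px, py = Pxy.sum(1), Pxy.sum(0)
    mask = Pxy > 0
    return np.sum(Pxy[mask] * np.log(Pxy[mask] / np.outer(px, py)[mask]))

I_acc = max(mutual_info(phi) for phi in np.linspace(0, np.pi, 400))
print(f"I_acc ~ {I_acc:.4f} nats  <=  chi = {chi:.4f} nats")
```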
### A bound on the Holevo information
We proceed to bound the Holevo information. Note the following identity6 of the Holevo information for any ensemble of states [23],
Footnote 6: This identity holds generally for von Neumann algebras.
\[\chi(\{p_{x},\rho_{B}^{x}\})=\sum_{x\in\mathcal{X}}p_{x}S(\rho_{B}^{x}||\Omega_{B})-S(\bar{\rho}_{B}||\Omega_{B})\, \tag{2.3}\]
where the reference state \(\Omega\) is chosen to be the vacuum.7
Footnote 7: One can pick any appropriate state as the reference depending on the context. For example, for a black hole in equilibrium, the Hartle-Hawking state could be chosen instead.
On the RHS, we have the relative entropy between the mixture \(\bar{\rho}\) and the vacuum \(\Omega\) over the region \(B\), and its positivity is what Casini identified as the Bekenstein bound (2). It follows that
\[I_{\rm acc}(B,\rho)\leq\chi(\{p_{x},\rho_{B}^{x}\})\leq\sum_{x\in\mathcal{X}}p_{x}S(\rho_{B}^{x}||\Omega_{B}). \tag{2.4}\]
The Holevo information and thus the accessible information are bounded by the average relative entropy between the codewords and the vacuum in the region \(B\). This is a Casini bound on the accessible information. Expanding both sides we find
\[S(\bar{\rho}_{B})-\sum_{x\in\mathcal{X}}p_{x}S(\rho_{B}^{x})\leq\langle K_{\Omega_{B}}\rangle_{\bar{\rho}_{B}}-\sum_{x\in\mathcal{X}}p_{x}S(\rho_{B}^{x})=\sum_{x\in\mathcal{X}}p_{x}\left(\langle K_{\Omega_{B}}\rangle_{\rho_{B}^{x}}-S(\rho_{B}^{x})\right). \tag{2.5}\]
Therefore, it is the same as Casini's bound (2) up to the regularization. While the vacuum-subtracted energy may be natural, vacuum-subtracted entropy is more ad hoc. The regularized quantities in our bounds, in contrast, are always well-defined as relative entropies. Bound (2.5) can be viewed as mixing the modular energies, \(\langle K_{\Omega_{B}}\rangle_{\rho_{B}^{x}}-S(\rho_{B}^{x})\), with each renormalized in a
state-dependent way.8 While Casini's entropy bound \(\delta\langle K_{\Omega_{B}}\rangle_{\rho}\) is linear in the quantum state \(\rho\), our bound is linear in \(p_{X}\), which describes different sources of classical information. Most importantly, the Holevo information is operationally meaningful and nonnegative.
Footnote 8: As compared to the natural choice of setting the vacuum modular energy to be zero, a state-dependent renormalization might seem a bit strange. However, note that the codewords are fixed in this operational scenario and the only variable is the classical source distribution \(p_{X}\). It is therefore alright to give a bound for each codeword and then mix them up.
Like Casini's entropy bound, our bound is not directly expressed in terms of physical quantities in arbitrary regions and in any QFT. We also do not know if Casini's entropy bound \(\delta\langle K_{\Omega_{B}}\rangle_{\rho}\) always bounds the Holevo capacity. This holds when every codeword is more entropic than the vacuum, because then the vacuum subtraction is not as tight,
\[S(\rho_{B}^{x})\geq S(\Omega_{B}). \tag{2.6}\]
In fact, this is exactly how Bousso argued for a similar bound on communication in terms of modular energy à la Casini [24]; he considers only signals that are classical enough that the codewords are more entropic than the vacuum.
Though we believe the accessible information is not generally bounded by \(\delta\langle K_{\Omega_{B}}\rangle\), we shall see that it is true in Rindler space, and the bound (2.4) on the Holevo capacity turns out to be slightly tighter than Casini's entropy bound. Because Casini's bound is a canonical Bekenstein bound in this case, ours is as well.
### Page's proposal revisited
Another situation where Casini's entropy bound \(\delta\langle K_{\Omega_{B}}\rangle\) applies to the accessible information is when the information is encoded by local unitaries. This is the scenario that Page conceived. Page let Alice encode her message only via local unitaries supported over the region \(B\). Then Page suggested considering whether the entropy \(S(\bar{\rho})\), where \(\bar{\rho}\) is the global mixture of the codewords, is Bekenstein-bounded by the width \(R\) of the region, times the energy \(E:=\langle H\rangle_{\bar{\rho}}\).
Unfortunately, Page showed that any such Bekenstein bound can be violated by engineering a mixture between the vacuum state and any excited state \(|\psi\rangle=U|\Omega\rangle\). Consider the mixture \(\bar{\rho}=(1-p)\Omega+p\psi\), where \(\psi\) may not be orthogonal to \(\Omega\). No matter the width \(R\) of the region supporting \(U\) or the energy \(E\) of \(\psi\), the entropy \(S(\bar{\rho})\) exceeds \(ER\) for sufficiently small \(p\). This is because the energy of \(\bar{\rho}\) scales linearly with \(p\), whereas \(S(\bar{\rho})\) is controlled by a binary entropy function, which has a divergent derivative at \(p=0\). The gradient diverges as \(-\log p\) at small \(p\). (See Appendix A for more details.)
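The arithmetic is simple enough to tabulate; the sketch below (ours, taking \(\psi\) orthogonal to \(\Omega\) for simplicity and setting \(\lambda=R=E=1\)) shows the binary entropy \(h(p)\) overtaking the linear energy budget \(\lambda R\langle H\rangle_{\bar{\rho}}=\lambda REp\) as \(p\to 0\):

```python
import numpy as np

# Page's counterexample in miniature: rho_bar = (1-p) Omega + p psi.
# Entropy grows like -p log p while the energy budget grows like p*E.
lam, R, E = 1.0, 1.0, 1.0
for p in [1e-1, 1e-3, 1e-6, 1e-9]:
    h = -p * np.log(p) - (1 - p) * np.log(1 - p)   # binary entropy (orthogonal psi)
    print(f"p={p:.0e}: S(rho_bar)={h:.3e}  vs  lam*R*E*p={lam * R * E * p:.3e}")
```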
What kind of information is captured by this (global) entropy that fails to admit a canonical Bekenstein bound? Note that this entropy is the (global) Holevo information of the ensemble because all the states are pure. Therefore, \(S(\bar{\rho})\) upper bounds the globally accessible information \(I_{\rm acc}(BB^{c},\rho)\), and Page's proposal amounts to finding a canonical Bekenstein bound for the globally accessible information.9 For example, suppose that Alice can encode
her message optimally so that the ensemble \(\{\rho^{x}\}\) is an orthonormal set. Then the globally accessible information is exactly given by \(S(\bar{\rho})\).
Let us understand why this proposal fails. We claim the reason is that the decoding is not spatially constrained. The globally accessible information presumes that Bob's measurements are not restricted to act on \(B\) alone. While it's true that Alice's local unitary encoding prevents any leakage of information to \(B^{c}\), meaning Bob can learn nothing from \(B^{c}\) on its own, Bob can nonetheless learn more by accessing \(B^{c}\)_in addition_ to \(B\). That is because he can perform nonlocal measurements jointly on \(BB^{c}\), and the entanglement in the vacuum allows Bob to probe the different correlations created by Alice's local encoding. In contrast, Casini's proposal (2) works because the state distinguishing task is operationally confined to the region \(B\), and the relative entropy depends only on the reduced states on \(B\).
We can try to "fix" Page's proposal by restricting Bob's measurement to the region \(B\) only. Then our bound (4) can be applied to the information locally accessible from \(B\). Moreover, since the encodings are unitary, \(-\sum_{x}p_{x}S(\rho_{B}^{x})=-\sum_{x}p_{x}S(\Omega_{B})=-S(\Omega_{B})\). Therefore, (4) is equivalent to the vacuum subtracted bound of Casini (2). We thus have
\[I_{\rm acc}(B,\rho)\leq\sum_{x}p_{x}S(\rho_{B}^{x}||\Omega_{B})=\delta\langle K_{\Omega_{B}}\rangle_{\bar{\rho}_{B}}. \tag{2.7}\]
This modification to Page's proposal makes it compatible with Casini's entropy bound. In this scenario, both sides of the inequality are well-defined, positive, and operationally meaningful.
In sum, we have a canonical Bekenstein bound on the accessible information, regardless of how the encoding is implemented;11 whereas no Bekenstein bound could apply had one left the decoding spatially unconstrained. The general lesson here is that the Bekenstein bound seems to originate in the spatial constraints on the decoding rather than any spatial constraints on the encoding.
Footnote 11: Note that this does not rule out the possibility that a tighter bound could be obtained when considering the encoding is also spatially constrained.
## 3 The Bekenstein bound for the Unruh channel
The accessible information pertains to classical information. To understand the scope of the Bekenstein bound for quantum information or other operational notions of information decodable from a region, we need to adopt the general framework of quantum Shannon theory. The theory was developed for the primary purpose of analyzing the capacities of quantum communication channels. We can incorporate various energy and spatial constraints in the encoding and decoding into the description of a quantum channel, and use the Shannon theory toolkit to look for appropriate Bekenstein bounds for their channel capacities.
In this language, the Bekenstein bound we obtained in the previous section can in turn be applied to bound the "one-shot" classical capacity of a quantum channel.11 This is because the Holevo information of the output codewords optimized over the input upper bounds the one-shot classical capacity of a quantum channel with outputs restricted to the region \(B\).12
Footnote 11: The terminology is misleading, unfortunately. The one-shot capacity is more properly called the product state capacity of the channel. It is the maximum rate at which bits can be communicated over many uses of the channel provided codewords are not entangled between successive uses of the channel.
Judging from earlier efforts [10; 11; 12; 24], it is challenging to address this problem in generality and with precision. Instead, we study a particular channel, known as the Unruh channel, in Rindler space. This channel models a communication scenario where the sender Alice has access to the entire Minkowski space whereas the receiver Bob only has access to the right Rindler wedge. We will grant Alice the ability to encode her message in an unconstrained way, but Bob's decoding is constrained within the Rindler wedge. A priori, one would expect that the standard Bekenstein bound in Rindler space puts a limit on the capacities of the Unruh channel. As we will see, however, that is not always true. The Bekenstein bound can be violated if we give the decoder access to entanglement without explicitly accounting for it or even without entanglement for some more exotic forms of communication.
The same setup was used by Marolf-Minic-Ross (MMR) [26; 27] to resolve the species problem [6]. The issue of concern was the possibility of increasing the entropy without a corresponding increase in energy by introducing additional particle species, which suggested a possible violation of the Bekenstein bound. The work of MMR was an important precursor to Casini's entropy bound. It will prove to be valuable to revisit their model and explore a different set of information-theoretic measures to test the validity of the Bekenstein bound.
Incidentally, the Unruh channel was also analyzed by Bradler-Hayden-Panangaden (BHP) [28] in studying communication scenarios in the Rindler space. (See also [29; 30]). Their work revealed several useful technical properties of the Unruh channel that will prove beneficial for our calculations. While BHP evaluated the quantum capacity, their primary focus was not the Bekenstein bound. In our study, we will build upon their results and investigate whether the Bekenstein bound is respected by the Unruh channel.
### Communication with the Unruh channel
We consider free scalar fields of \(d\) distinct species in Minkowski spacetime. Alice is a stationary observer who sees the entire spacetime. Bob is an observer in the right Rindler wedge undergoing uniform acceleration \(a\). He experiences Unruh radiation at inverse temperature \(\beta=2\pi/a\), which also measures the proper distance to the Rindler horizon. Alice communicates with Bob by exciting modes in QFT, and Bob receives the noisy signals disrupted by the Unruh radiation.
For simplicity, we restrict Alice's encoding to single-particle states in the _right Unruh mode_\(U^{R}_{i,\beta,\omega}\)[19], which are analytic continuations of the right Rindler modes \(R^{R}_{i,\beta,\omega}\) that are
labeled by the inverse temperature \(\beta\) and the frequency \(\omega\) measured in Bob's frame. The index \(i\in[d]\) labels the particle species. Using the Bogoliubov transformation, we can write the creation operator of \(U^{R}_{i,\beta,\omega}\), denoted \(a^{\dagger}_{iR}\), in terms of the left and right Rindler creation and annihilation operators of \(R^{L}_{i,\beta,\omega}\) and \(R^{R}_{i,\beta,\omega}\), denoted \(b^{\dagger}_{iL}\) and \(b^{\dagger}_{iR}\),
\[a^{\dagger}_{iR}=(1-e^{-\beta\omega})^{-\frac{1}{2}}(b^{\dagger}_{iR}-e^{-\beta\omega/2}b_{iL}). \tag{3.1}\]
Let \({\cal H}_{A}\) denote the \(d\)-dimensional subspace of these single-particle states in distinct species sectors. We restrict Alice's encoding to the single excitation subspace via
\[V^{(d)}:{\cal H}_{A}\rightarrow{\cal H},\quad|i\rangle\mapsto|0\rangle_{1}\otimes\cdots\otimes a^{\dagger}_{iR}|0\rangle_{i}\otimes\cdots\otimes|0\rangle_{d}\, \tag{3.2}\]
where \({\cal H}\) denotes the Hilbert space of quantum fields, \(a^{\dagger}_{iR}\) denotes the creation operator for \(U^{R}_{i,\beta,\omega}\) and the superscript \((d)\) denotes the input dimension. Note that the Unruh modes share the same vacuum as the Minkowski modes, so the vacuum state in the above equation denotes the Minkowski/Unruh vacuum.
The modes propagate to Bob, who perceives the particle content differently than Alice. The Minkowski/Unruh vacuum can be represented using the Rindler Fock basis appropriate to Bob's perspective,
\[|0\rangle=\bigotimes_{i=1}^{d}|0\rangle_{i}=\bigotimes_{i=1}^{d}\bigotimes_{R_{v}}\sum_{N_{i}^{v}=0}^{\infty}\sqrt{1-e^{-\beta\omega_{v}}}e^{-\frac{\beta}{2}\omega_{v}N_{i}^{v}}|N_{i}^{v},N_{i}^{v}\rangle\, \tag{3.3}\]
where \(i\) labels the particle species and \(v\) labels the Rindler modes, and \(|N_{i}^{v},N_{i}^{v}\rangle\) denotes the Fock states of the mode \(R_{v}\) in the \(i^{\rm th}\)-particle sector.
The other modes are simply not relevant in this calculation, so we might as well ignore them. Let each \(|\Omega\rangle_{i}\) be the _projected vacuum_ that only involves a single pair of modes \((R^{L}_{i,\beta,\omega},R^{R}_{i,\beta,\omega})\),
\[|\Omega\rangle_{i}=P|0\rangle_{i}=\sum_{N_{i}=0}^{\infty}\sqrt{1-e^{-\beta\omega}}e^{-\frac{\beta}{2}\omega N_{i}}|N_{i},N_{i}\rangle,\quad|\Omega\rangle=\bigotimes_{i=1}^{d}|\Omega\rangle_{i}. \tag{3.4}\]
where \((N_{i},N_{i})\) is the number of particles in this pair of Rindler modes \((R^{L}_{i,\beta,\omega},R^{R}_{i,\beta,\omega})\).
The remaining vacuum entanglement is among the Fock states of this mode, without the infinite tensor product among different modes. Therefore, we can write down the density matrix of the reduced state in the right (or left) Rindler wedge, which does not exist for the full vacuum.
\[\Omega=\bigotimes_{i=1}^{d}(1-e^{-\beta\omega})e^{-N_{i}\beta\omega}=(1-e^{-\beta\omega})^{d}e^{-N\beta\omega}\,\quad N:=\sum_{i=1}^{d}N_{i}=\sum_{i=1}^{d}b^{\dagger}_{iR}b_{iR}. \tag{3.5}\]
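As a numerical sanity check (a truncated-Fock-space sketch of ours, not from MMR or BHP), one can verify that the projected vacuum (3.4) is annihilated by the right Unruh annihilation operator obtained from (3.1) and that tracing out the left wedge reproduces the thermal state (3.5):

```python
import numpy as np

beta, omega, nmax = 1.0, 1.0, 60
x = np.exp(-beta * omega / 2)
n = np.arange(nmax + 1)

b = np.diag(np.sqrt(n[1:]), k=1)           # truncated annihilation operator
bd = b.conj().T

# |Omega> of (3.4) as a matrix psi[N_L, N_R] of Schmidt coefficients
c = np.sqrt(1 - x**2) * x**n
psi = np.diag(c)

# a_R = (1-x^2)^{-1/2} (b_R - x b_L^dagger); O_L acts as O @ psi, O_R as psi @ O^T
a_R_psi = (psi @ b.T - x * bd @ psi) / np.sqrt(1 - x**2)
print("||a_R|Omega>|| =", np.linalg.norm(a_R_psi))    # ~0: the Unruh vacuum condition

rho_R = psi.T @ psi                         # reduced state on the right wedge
nbar = float(np.trace(rho_R @ np.diag(n)))
print("<N> =", nbar, " vs  1/(e^{bw}-1) =", 1 / np.expm1(beta * omega))
```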
Since Bob is immersed in the Unruh radiation, the message he receives from Alice is degraded by the noise. This thermal noise fundamentally originates from tracing out the left Rindler wedge \(L\). We refer to this noisy communication channel as the _Unruh channel_,
\[{\cal N}_{d,\beta,\omega}:={\rm Tr}_{L}\circ{\cal V}^{(d)}:{\cal P}({\cal H}_{A})\rightarrow{\cal P}({\cal H}_{B})\, \tag{3.6}\]
where \(\mathcal{V}^{(d)}(\,\cdot\,):=V^{(d)}\cdot\,V^{(d)\dagger}\). We have identified the Hilbert space of the right Rindler wedge \(\mathcal{H}_{R}\) as Bob's Hilbert space \(\mathcal{H}_{B}\).13
Footnote 13: Note that the quantum field theory Hilbert space \(\mathcal{H}\) is generally nonfactorizable. However, when restricted to the subspace of particles in a single mode, the restricted Hilbert space \(\mathcal{H}\) does factorize into \(\mathcal{H}=\mathcal{H}_{L}\otimes\mathcal{H}_{R}\).
We will study the Unruh channels, parameterized by their input dimension \(d\), or equivalently, the number of particle species that Alice has access to. They are further parameterized by the inverse temperature that determines the noise and the energy \(\omega\) of the Unruh mode. We shall henceforth omit the superscripts and subscripts in \(\mathcal{N}_{d,\beta,\omega}\) for notational clarity.
Before proceeding, let us briefly discuss the physicality of this communication channel. Unruh modes are often used to approximate Minkowski modes in the so-called single-mode approximation [31], which is sometimes problematic [32]. We assume Alice can prepare the Unruh modes directly rather than use them as an approximation. They are convenient because their Bogoliubov transformation to the Rindler modes is easier to analyze than the one for Minkowski modes. However, the Unruh modes are less well-behaved than the Minkowski modes. For an inertial observer, they are non-monochromatic, ill-localized, and rapidly oscillating near the Rindler horizon. Hence, they are much more difficult to prepare physically.
Another caveat is that we would need Alice to operate nonlocally because we are considering single particle states that cannot be prepared with quantum channels that act locally (without post-selection). Likewise, for Bob to decode the message in the Rindler wedge, he might also need to implement measurements/operations that are nonlocal across the wedge. In this sense, the Unruh channel describes an idealized communication scenario that might not be physically realizable.
However, to study the questions of principle we are concerned with here, the Unruh channel is still a useful model for testing certain operational aspects of the Bekenstein bound. We grant Alice nonlocal operations to prepare globally distinguishable codewords while constraining Bob to the Rindler wedge. We would like to know if the Bekenstein bound limits how much information Bob can decode.14
Footnote 14: There is a related study of information transfer using Unruh modes [33], where general mixtures of the left and right Unruh modes are studied. The analysis is not carried out in the framework of Shannon theory but similar quantities like Holevo information and coherent information are computed. It would be interesting to extend our analysis to those cases as well.
### A lightning review of channel capacities
We now review some basics about channel capacities. In particular, we will study the classical capacity, quantum capacity, and entanglement-assisted capacities. See [34] for a comprehensive review.
In the classical case, the message is modeled as a discrete random variable \(X\) valued in the alphabet \(\mathcal{X}\). Alice encodes her message in a joint quantum state \(\rho_{x}\) that is input to a channel \(\mathcal{N}:\mathcal{P}(\mathcal{H}_{A})\rightarrow\mathcal{P}(\mathcal{H}_ {B})\), where we use \(\mathcal{P}(\mathcal{H})\) to denote positive operators on the Hilbert space \(\mathcal{H}\). Then Bob decodes Alice's transmitted message by measuring the output state using his POVM \(\{E_{x}\}_{x\in\mathcal{X}}\) to generate the output random variable \(Y\). We say the protocol \((\{\rho_{x}\},\{E_{x}\})\)
is a \((k,\varepsilon)\)-code if \(|\mathcal{X}|=k\) and the transmission error is bounded by \(\varepsilon\),
\[\Pr(X\neq Y\,|\,X=x)=\text{Tr}\left[(1-E_{x})\,\mathcal{N}^{\otimes n}(\rho_{x})\right]\leq\varepsilon,\quad\forall x. \tag{3.7}\]
Now consider Alice making \(n\) uses of the Unruh channel to send a classical message. Let \(k_{\varepsilon}\) denote the largest message size given an error \(\varepsilon\),
\[k_{\varepsilon}(\mathcal{N}):=\max\{k\in\mathbb{N}\ |\ \exists\text{ a }(k, \varepsilon)\text{-code for }\mathcal{N}\}\, \tag{3.8}\]
where we say \(r\) is an achievable rate for \(\mathcal{N}\) if \(\forall\varepsilon>0\), \(k_{\varepsilon}(\mathcal{N}^{\otimes n})\geq rn\) for large enough \(n\).
The classical capacity \(C(\mathcal{N})\) is defined as the supremum over achievable rates,
\[C(\mathcal{N}):=\lim_{\varepsilon\to 0}\lim_{n\to\infty}\frac{1}{n}\lfloor \log k_{\varepsilon}(\mathcal{N}^{\otimes n})\rfloor. \tag{3.9}\]
The famous Holevo-Schumacher-Westmoreland theorem [35; 36; 37] demonstrates that the classical capacity is given by the regularized Holevo capacity,
\[C(\mathcal{N})=\lim_{n\to\infty}\frac{1}{n}\chi(\mathcal{N}^{\otimes n})\, \tag{3.10}\]
where the Holevo capacity is defined as
\[\chi(\mathcal{N}):=\max_{\{p_{x},\psi_{x}\}}\left[S\left(\sum_{x}p_{x}\mathcal{N}(\psi_{x})\right)-\sum_{x}p_{x}S(\mathcal{N}(\psi_{x}))\right]\, \tag{3.11}\]
where the maximization is over the encoded ensemble of pure states \(\{\psi_{x}\}_{x\in\mathcal{X}}\) and random variables \(X\).15
Footnote 15: As in the case for the Holevo information (2.2), the Holevo capacity has an alternative formulation in terms of the relative entropy [38; 39], \(\chi(\mathcal{N})=\min_{\sigma}\max_{\rho}S(\rho\circ\mathcal{N}^{\star}||\sigma)\). This Heisenberg-picture formula is manifestly well defined for subalgebras in QFT.
Now consider Alice sending quantum information to Bob instead of classical messages. We demand that the entanglement between the quantum message and any reference purification be preserved by the communication. This extra requirement implies that the quantum capacity can never exceed the classical capacity. Let Alice's encoding channel be \(\mathcal{E}:\mathcal{P}(\mathcal{H}_{X})\to\mathcal{P}(\mathcal{H}_{A})\), and Bob's decoding channel be \(\mathcal{D}:\mathcal{P}(\mathcal{H}_{B})\to\mathcal{P}(\mathcal{H}_{Y})\). We say the protocol \((\mathcal{E},\mathcal{D})\) is a \((k,\varepsilon)\)-quantum code if \(\dim\mathcal{H}_{X}=k\) and
\[||\mathcal{D}\circ\mathcal{N}\circ\mathcal{E}-\mathcal{I}||_{\diamond}\leq \varepsilon\, \tag{3.12}\]
where the diamond norm of a channel \(\mathcal{M}_{A\to B}\) is defined as \(||\mathcal{M}||_{\diamond}:=\max_{\psi_{AR}}||(\mathcal{I}\otimes\mathcal{M})(\psi)||_{1}\). As for the classical capacity, we can then define the quantum capacity as the supremum over all achievable rates,
\[Q(\mathcal{N}):=\lim_{\varepsilon\to 0}\lim_{n\to\infty}\frac{1}{n}\lfloor\log k_{\varepsilon}^{q}(\mathcal{N}^{\otimes n})\rfloor\, \tag{3.13}\]
where \(k_{\varepsilon}^{q}:=\max\{k\in\mathbb{N}\ |\ \exists\text{ a }(k, \varepsilon)\text{-quantum code for }\mathcal{N}\}\).
The coherent information \(I_{c}(\mathcal{N})\) for a channel \(\mathcal{N}\) is defined as
\[I_{c}(\mathcal{N}):=\max_{\psi\in\mathcal{H}_{A}\otimes\mathcal{H}_{A^{\prime}}}I(A\rangle B)_{\mathcal{N}(\psi)}\, \tag{3.14}\]
where \(\psi\in\mathcal{H}_{A}\otimes\mathcal{H}_{A^{\prime}}\), \(\mathcal{N}(\psi)=\mathcal{N}_{A\to B}(\psi)\in\mathcal{P}(\mathcal{H}_{A^{\prime}}\otimes\mathcal{H}_{B})\) and \(I(A\rangle B)_{\mathcal{N}(\psi)}:=-H(A^{\prime}|B)\).
The Lloyd-Shor-Devetak theorem [40; 41; 42] shows that the quantum capacity is given by the regularized coherent information,
\[Q(\mathcal{N})=\lim_{n\to\infty}\frac{1}{n}I_{c}(\mathcal{N}^{\otimes n}). \tag{3.15}\]
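As a worked finite-dimensional example (ours; the qubit amplitude damping channel with damping \(\gamma\), a standard degradable channel, not the Unruh channel), the sketch below evaluates the coherent information (3.14) and the entanglement-assisted formula (3.17) introduced just below, using the facts that diagonal inputs suffice by symmetry and that \(S(A^{\prime}B)=S(E)\) by purity of \(\psi_{A^{\prime}BE}\):

```python
import numpy as np

def S(r):
    ev = np.linalg.eigvalsh(r); ev = ev[ev > 1e-12]
    return -np.sum(ev * np.log(ev))

def h(q):                                   # binary entropy in nats
    return 0.0 if q in (0.0, 1.0) else -q * np.log(q) - (1 - q) * np.log(1 - q)

def Ic_and_CE(gamma, q):
    # Input diag(1-q, q): output and environment states are diagonal
    rho_B = np.diag([1 - (1 - gamma) * q, (1 - gamma) * q])
    rho_E = np.diag([1 - gamma * q, gamma * q])
    Ic = S(rho_B) - S(rho_E)                # I(A>B) = S(B) - S(A'B)
    return Ic, h(q) + Ic                    # S(A') + I(A>B), cf. (3.17)

gamma = 0.2                                 # degradable regime gamma <= 1/2
Ics, CEs = zip(*(Ic_and_CE(gamma, q) for q in np.linspace(0, 1, 1001)))
print(f"Q ~ {max(Ics):.4f} nats,  C_E ~ {max(CEs):.4f} nats")
```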
We are also interested in the classical and quantum capacities of the Unruh channel when Alice and Bob are given _free entanglement_ as resources. In that setting, Alice encodes her messages into her half of a set of shared Bell pairs and sends the encoded halves to Bob through the channel. Bob will try to decode Alice's messages with the help of his half of the Bell pairs. Since entanglement can only help them achieve better communication rates, the entanglement-assisted capacities are generally higher than without the assistance. The capacities are defined in a similar way, so we won't repeat the details. We quote two facts here. Bennett et al [43; 44] showed that the entanglement-assisted classical capacity is always given by a single-letter formula,
\[C_{E}(\mathcal{N}):=\max_{\psi}I(\psi;\mathcal{N}):=\max_{\psi\in\mathcal{H}_ {A}\otimes\mathcal{H}_{A^{\prime}}}\left(S(A)_{\psi}+S(B)_{\mathcal{I}\otimes \mathcal{N}_{A\to B}(\psi)}-S(A^{\prime}B)_{\mathcal{I}\otimes\mathcal{N}_{A \to B}(\psi)}\right)\, \tag{3.16}\]
where the channel mutual information \(I(\psi;\mathcal{N})\) is basically the mutual information between the output \(B\) and the reference \(A^{\prime}\). Note that \(C_{E}(\mathcal{N})\) is also equal to
\[C_{E}(\mathcal{N})=\max_{\psi\in\mathcal{H}_{A}\otimes\mathcal{H}_{A^{\prime}}}\left(S(A^{\prime})_{\psi}+I(A\rangle B)_{\mathcal{N}(\psi)}\right). \tag{3.17}\]
The second fact is that with entanglement assistance, the classical capacity is twice the quantum capacity, \(C_{E}=2Q_{E}\), so it's sufficient to focus here on the classical capacity.
A variant of the Unruh channel has been studied by BHP, where the messages are encoded in distinguishable modes instead of in different sectors of particle species [28]. Despite the physical difference, these two versions of Unruh channels are mathematically identical. (See Definition 11 in BHP for a more detailed description of the Unruh channel). We shall henceforth not distinguish them. Using group-theoretic tools, BHP established several useful properties of the Unruh channel. We list a few of them that will be useful later.
* _Covariance._ A covariant channel has its input and output transform covariantly under the same unitary group. More precisely, we say a channel \(\mathcal{M}:\mathcal{P}(\mathcal{H}_{A})\to\mathcal{P}(\mathcal{H}_{B})\) is \(G\)-covariant with respect to representations \(r_{1}:G\to\mathrm{GL}(\mathcal{H}_{A}),r_{2}:G\to\mathrm{GL}(\mathcal{H}_{B})\), if \(\mathcal{M}(r_{1}(g)\rho r_{1}(g)^{\dagger})=r_{2}(g)\mathcal{M}(\rho)r_{2}(g)^{\dagger},\ \forall g\in G,\rho\in\mathcal{P}(\mathcal{H}_{A})\). The claim is that the Unruh channel is SU(\(d\))-covariant with respect to the fundamental representation \(U(g)\) in the input space and the isometrically mapped representation \(VU(g)V^{\dagger}\) in the output space.
To see why, consider an arbitrary unitary \(U\) in the fundamental representation and a rotated input state \(U\rho U^{\dagger}\). We have \(\mathcal{V}(U\rho U^{\dagger})=(VUV^{\dagger})\mathcal{V}(\rho)(VUV^{\dagger})^{ \dagger}.\) The encoded unitary \(VUV^{\dagger}\) only acts by shuffling the tensor factors of different species, without acting within each factor at all. Hence, \(VUV^{\dagger}\) commutes with tracing out the left Rindler wedge, which acts on each individual factor. The \(\mathrm{SU}(d)\)-covariance follows.
* _Degradability._ The Unruh channel is transpose degradable, meaning that, up to a transpose (\(\mathcal{T}\)), the complement channel equals the primary channel concatenated with some other channel, \[\exists\mathcal{E},\text{ s.t. }\mathcal{T}\circ\mathcal{N}^{c}=\mathcal{E}\circ\mathcal{N}\.\] (3.18) Physically, (transpose) degradability means the information that leaks to the environment can always be simulated from the primary channel. In this sense, it's never the case that "more information" flows to the left wedge. This is quite intuitive for the Unruh channel as the wave packet is well localized on the right Rindler wedge. Since quantum information cannot be cloned, the complementary channel of a degradable channel has zero quantum capacity. The quantum capacity is generally superadditive, but it is additive for degradable channels [45]. We therefore have a single-letter formula \[Q(\mathcal{N})=\max_{\psi\in\mathcal{H}_{A}\otimes\mathcal{H}_{A^{\prime}}}I(A\rangle B)_{\mathcal{N}(\psi)}\.\] (3.19)
* _Hadamard._ The Unruh channel is a Hadamard channel [46],16 meaning that its complement channel is entanglement-breaking.17 Hadamard channels are a special class of degradable channels.18 The classical capacity is generally superadditive [48], but it is additive for Hadamard channels [49]. We thus have a single-letter formula \[C(\mathcal{N})=\chi(\mathcal{N})\.\] (3.20)
Footnote 16: The channel action in the Choi representation takes the form of a Hadamard product.
Footnote 17: Entanglement-breaking channels output separable states for any inputs entangled with a reference [47]. They have additive classical capacities and zero quantum capacity. Here \(\mathcal{N}^{c}\) being entanglement-breaking follows from the fact that its transpose \(\mathcal{T}\circ\mathcal{N}^{c}\) is also a quantum channel.
Footnote 18: Hence, in fact, the Unruh channel is both degradable and transpose degradable.
### Bounding the output entropy
As mentioned earlier, free scalar fields of multiple species in Rindler space were used by MMR to resolve the species problem. In terms of the setup we have introduced, they computed the output entropy of the Unruh channel for maximally mixed states of increasing dimension corresponding to the species number \(d\). They found that the entropy saturated as \(d\) was increased, just as the Bekenstein bound itself approached saturation, thereby avoiding the expected violation of the bound. The entropy saturation effect provided a resolution to the species problem.
Historically, MMR's calculation was an important precursor to Casini's general proof of the Bekenstein bound in Rindler space. In order to find the appropriate Bekenstein bound for the Unruh channel, it's instructive to review the main result of MMR in light of Casini's entropy bound. We measure all the entropies and capacities in units of nats instead of bits and use the natural logarithm by default.
Consider the encoded maximally mixed state \(\pi:=\frac{1}{d}\sum_{i=1}^{d}|i\rangle\!\langle i|\),
\[\mathcal{N}(\pi)=\frac{1}{d}\sum_{i=1}^{d}\rho_{i}=\frac{e^{\beta\omega}-1}{d }N\Omega\, \tag{3.21}\]
where \(N\) is the total number operator, and \(\Omega\) is the right Rindler wedge reduced density matrix of the projected vacuum (3.5).
Its entropy was evaluated approximately by MMR, then numerically by Marolf, and later analytically by Casini and BHP. Here we quote the formula from BHP:
\[H(d):=S(\mathcal{N}(\pi))=S(\Omega)+\log d-\log(e^{\beta\omega}-1)+\frac{\beta\omega}{1-e^{-\beta\omega}}\\ -\frac{(1-e^{-\beta\omega})^{d+1}}{d}\sum_{k=1}^{\infty}\binom{d+k-1}{k}\,k\log k\,e^{-\beta\omega(k-1)}. \tag{3.22}\]
We will refer to this entropy quantity several times, so it will be convenient to define it as a function of the input dimension, \(H(d)\). In the formula, \(S(\Omega)\) denotes the von Neumann entropy of the vacuum (right) density operator (3.5),
\[S(\Omega)=d\left(\frac{\beta\omega}{e^{\beta\omega}-1}-\log(1-e^{-\beta\omega })\right). \tag{3.23}\]
We can make the following approximations for the entropy of \(\mathcal{N}(\pi)\),
\[\delta S:=S(\mathcal{N}(\pi))-S(\Omega)\stackrel{{\beta\omega \gg 1}}{{\approx}}\begin{cases}\log d,&\log d\leq\beta\omega\\ \beta\omega,&\log d\geq\beta\omega\.\end{cases} \tag{3.24}\]
The vacuum-subtracted entropy \(\delta S\) captures the entropy of the logical information. We see that \(\delta S\) increases with \(\log d\) until it saturates at \(\beta\omega\). \(\delta S\) is thus constrained by the Bekenstein bound. That is the main takeaway from MMR's work.
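This saturation effect is easy to reproduce numerically. The following sketch (ours, not MMR's; the series cutoff `kmax` is an assumption) implements (3.22)–(3.23) directly, handling the binomial in log space to avoid overflow:

```python
# A minimal numerical check of the saturation effect, implementing (3.22)
# and (3.23); the series cutoff kmax is an assumption of this sketch.
import math

def delta_S(d, bw, kmax=2000):
    """Vacuum-subtracted entropy H(d) - S(Omega) at bw = beta*omega."""
    q = math.exp(-bw)
    series = sum(
        math.exp(math.lgamma(d + k) - math.lgamma(k + 1) - math.lgamma(d)
                 + math.log(k) + math.log(math.log(k)) - bw * (k - 1))
        for k in range(2, kmax))                 # the k = 1 term vanishes
    return (math.log(d) - math.log(math.expm1(bw)) + bw / (1.0 - q)
            - (1.0 - q) ** (d + 1) / d * series)

bw = 3.0
bound = bw / (1.0 - math.exp(-bw))               # beta * delta E, eq. (3.26)
for d in (2, 8, 32, 128, 512):
    print(f"d = {d:4d}:  dS = {delta_S(d, bw):.4f}  <=  {bound:.4f}")
# dS tracks log d while log d << bw and levels off near bw once log d >> bw.
```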
To work out the Bekenstein bound à la Casini, we evaluate the relative entropy [8],
\[S(\mathcal{N}(\pi)||\Omega)=\frac{\beta\omega}{1-e^{-\beta\omega}}-\delta S\, \tag{3.25}\]
and it follows that
\[\delta S\leq\frac{\beta\omega}{1-e^{-\beta\omega}}\, \tag{3.26}\]
which is now an exact bound that doesn't involve the approximation \(\beta\omega\gg 1\) as above. The RHS is the expectation value of the modular Hamiltonian
\[\langle\delta K_{\Omega}\rangle_{\mathcal{N}(\pi)}=\frac{\beta\omega}{1-e^{- \beta\omega}}. \tag{3.27}\]
In this case, it also matches with \(\beta\) times the expectation value of the Hamiltonian, \(H=\omega N\), so the energy contained in the mode reads
\[\delta E:=\text{Tr}(\mathcal{N}(\pi)-\Omega)\omega N=\frac{\omega}{1-e^{-\beta \omega}}. \tag{3.28}\]
Thus, we obtain \(\delta S\leq\beta\delta E\). This can also be understood as the statement that the (coarse-grained) thermal entropy is larger than the (fine-grained) information-theoretic entropy. The same result also follows from Landauer's principle.
We'd like to show that the Bekenstein bound applies to all states. Because the von Neumann entropy function is concave and the Unruh channel is SU(\(d\))-covariant, the maximal output entropy is achieved for the maximally mixed input state. We therefore find that
\[\forall\rho,\quad S(\mathcal{N}(\rho))\leq S(\mathcal{N}(\pi))\leq\beta \delta E=\frac{\beta\omega}{1-e^{-\beta\omega}}. \tag{3.29}\]
We have established that \(\beta\delta E\) is the appropriate Bekenstein bound for the Unruh channel as far as the output entropy is concerned. We shall now study the extent to which the Bekenstein bound actually constrains the ability of Alice to communicate with Bob.
### Bounding the channel capacities
Now let us evaluate these capacities for the Unruh channel and see if they too are constrained by the Bekenstein bound. Unlike the von Neumann entropy, the channel capacities are often expressed as differences between two entropic quantities so the UV divergences cancel out. We therefore do not need to impose the vacuum subtraction by hand. We shall compare the results against both Casini's entropy bound of the vacuum-subtracted modular energy \(\beta\delta E\) as well as the general bound we had in Section 2.1.
_Classical capacity._ To find the maximizer for the Holevo capacity (3.11), note that the second term is independent of the message random variable \(X\) and the ensemble \(\{\psi_{x}\}\): because \(\mathcal{N}\) is covariant, \(S(\mathcal{N}(\psi_{x}))=S(\mathcal{N}(\psi_{y})),\ \forall x,y\), so \(\sum_{x}p_{x}S(\mathcal{N}(\psi_{x}))=S(\mathcal{N}(\psi_{x_{0}}))\) for any fixed \(x_{0}\).
We only need to maximize the first term \(S\left(\sum_{x}p_{x}\mathcal{N}(\psi_{x})\right)\). As we have argued, because of the channel covariance and the concavity of von Neumann entropy, the maximizer is the maximally mixed state \(\pi\). Likewise, the Holevo capacity is maximized by a uniformly distributed ensemble over computational basis states, \(\{1/d,|i\rangle\!\langle i|\}_{i=1}^{d}\). We get
\[\chi(\mathcal{N})=S(\mathcal{N}(\pi))-\frac{1}{d}\sum_{i}S(\mathcal{N}(|i \rangle\!\langle i|))=S(\mathcal{N}(\pi))-S(\mathcal{N}(|i\rangle\!\langle i |))\, \tag{3.30}\]
where the second equality follows from covariance, \(S(\mathcal{N}(|i\rangle\!\langle i|))=S(\mathcal{N}(|j\rangle\!\langle j|)),\ \forall i,j\).
The first term is \(H(d)\), given by (3.22). For the second term, we have
\[\mathcal{N}(|i\rangle\!\langle i|)=(e^{\beta\omega}-1)N_{i}\Omega\, \tag{3.31}\]
where \(N_{i}\) is the number operator for the \(i^{\rm th}\)-particle. The entropy evaluates to
\[S({\cal N}(|i\rangle\!\langle i|))=H(1)+\frac{d-1}{d}S(\Omega)\, \tag{3.32}\]
and so we find
\[\chi({\cal N})=H(d)-H(1)-\frac{d-1}{d}S(\Omega). \tag{3.33}\]
To estimate its value, let's rewrite it as
\[\chi({\cal N})=H(d)-S(\Omega)+S(\Omega)/d-H(1)=\delta S+\left[S(\Omega)/d-H(1 )\right]. \tag{3.34}\]
The second term is some order-one quantity \(-\gamma\leq\left[S(\Omega)/d-H(1)\right]\leq 0\) where \(\gamma\approx 0.577\) is Euler's constant. (See Appendix B.) So we have
\[\chi({\cal N})\leq\delta S\leq\beta\delta E. \tag{3.35}\]
We conclude that the classical capacity obeys the Bekenstein bound. Interestingly, even in the infinite temperature limit (\(\beta\to 0\)), the classical capacity is small but nonzero,
\[\chi({\cal N})\rightarrow(1-\gamma)\approx 0.42. \tag{3.36}\]
Note that this does not violate the Bekenstein bound as \(\beta\delta E\to 1\) in this limit. We can also verify the Casini bound we derived on the accessible information (2.4). It gives a slightly tighter bound than the canonical Bekenstein bound \(\beta\delta E\) because of (2.6) and
\[S({\cal N}(|i\rangle\!\langle i|))-S(\Omega)=H(1)-S(\Omega)/d\geq 0. \tag{3.37}\]
_Quantum capacity._ To evaluate the quantum capacity, we need to calculate the maximal coherent information (3.19). Since the Unruh channel is both covariant and transpose-degradable, the maximum is once again achieved by the maximally mixed state \(\pi\). The quantum capacity is simply given by \(S({\cal N}(\pi))-S({\cal N}^{c}(\pi))\). This has been evaluated by BHP. The first term is given in (3.22) and the second term is given in Appendix B. The quantum capacity is [28]
\[Q({\cal N})=S({\cal N}(\pi))-S({\cal N}^{c}(\pi))=\frac{(1-e^{-\beta\omega})^{d+1}}{d}\sum_{k=1}^{\infty}{d+k-1\choose k}ke^{-\beta\omega(k-1)}\log\frac{d+k-1}{k}. \tag{3.38}\]
We can rewrite the expression as
\[Q({\cal N})=\delta S+\left[S(\Omega)-S({\cal N}^{c}(\pi))\right]\, \tag{3.39}\]
where the second term in the bracket is negative and lower bounded by \(-1\) (see Appendix B), so we have
\[Q({\cal N})\leq\delta S\leq\beta\delta E=\frac{\beta\omega}{1-e^{-\beta\omega }}. \tag{3.40}\]
We conclude that the quantum capacity obeys the Bekenstein bound. In fact, we already know this because the quantum capacity is upper bounded by the classical capacity, which obeys the Bekenstein bound. Unlike the classical capacity, \(Q({\cal N})\) drops to zero as \(\beta\to 0\). Note that \(C({\cal N})\) is not much larger than \(Q({\cal N})\): both lie within one nat below \(\delta S\). We have
\[Q({\cal N})\leq C({\cal N})\leq Q({\cal N})+1. \tag{3.41}\]
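These inequalities are straightforward to verify numerically. The sketch below (ours; the truncation cutoffs are an assumption) evaluates \(\chi\) from (3.33) and \(Q\) from (3.38) and checks them against the Bekenstein bound:

```python
# Numerical check of (3.35) and (3.41), implementing (3.22), (3.23), (3.33)
# and (3.38); the series cutoff kmax is an assumption of this sketch.
import math

def S0(d, bw):
    """Vacuum entropy S(Omega), eq. (3.23)."""
    return d * (bw / math.expm1(bw) - math.log(1 - math.exp(-bw)))

def H(d, bw, kmax=2000):
    """Output entropy S(N(pi)), eq. (3.22); series handled in log space."""
    q = math.exp(-bw)
    s = sum(math.exp(math.lgamma(d + k) - math.lgamma(k + 1) - math.lgamma(d)
                     + math.log(k) + math.log(math.log(k)) - bw * (k - 1))
            for k in range(2, kmax))
    return (S0(d, bw) + math.log(d) - math.log(math.expm1(bw))
            + bw / (1 - q) - (1 - q) ** (d + 1) / d * s)

def Q(d, bw, kmax=2000):
    """Quantum capacity, eq. (3.38)."""
    q = math.exp(-bw)
    s = sum(math.exp(math.lgamma(d + k) - math.lgamma(k + 1) - math.lgamma(d)
                     + math.log(k) - bw * (k - 1)) * math.log((d + k - 1) / k)
            for k in range(1, kmax))
    return (1 - q) ** (d + 1) / d * s

bw, d = 2.0, 16
chi = H(d, bw) - H(1, bw) - (d - 1) / d * S0(d, bw)  # eq. (3.33)
bound = bw / (1 - math.exp(-bw))                     # Bekenstein bound
q_cap = Q(d, bw)
print(f"Q = {q_cap:.4f}  <=  C = {chi:.4f}  <=  Q + 1 = {q_cap + 1:.4f}")
assert q_cap <= chi <= q_cap + 1 and chi <= bound    # eqs. (3.35), (3.41)
```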
_Entanglement-assisted capacity._ For the entanglement-assisted classical capacity (3.17), it is clear that both terms are maximized by the maximally entangled state. We thus obtain
\[C_{E}({\cal N})=\log d+Q({\cal N})\geq\log d+\delta S-1. \tag{3.42}\]
This is a rather simple calculation, but physically a quite striking result. Recall that the classical capacity of the Unruh channel is almost negligible at the infinite temperature limit. Nonetheless, with entanglement assistance, the capacity is at least \(\log d\), which is extensive in the number of input qubits. Indeed, in that limit, the entanglement-assisted capacity of the channel almost matches that of a noiseless classical channel on \(d\) letters despite it being almost useless for communication without entanglement assistance!
Entanglement assistance can _at most_ boost the capacity of any channel up by \(\log d\), i.e., \(C_{E}\leq\log d+C\). Since \(C({\cal N})-Q({\cal N})\leq 1\), the entanglement assistance is _nearly optimal_. Such a big boost in classical capacity was not known in the literature for any physically motivated channels.19
Figure 1: **Capacities of the Unruh channel.** The classical capacity \(C({\cal N})\) in blue and the quantum capacity \(Q({\cal N})\) in orange, plotted against the channel parameters \(\beta\omega\) and \(\log d\) respectively. They both satisfy the Bekenstein bound \(\beta\delta E=\beta\omega/(1-e^{-\beta\omega})\), plotted as the dashed line. However, the entanglement-assisted classical capacity \(C_{E}({\cal N})\) in green does violate the Bekenstein bound. On the left figure, \(C({\cal N})\) and \(Q({\cal N})\) asymptote to the maximum \(\log d\) and \(C_{E}({\cal N})\) asymptotes to the maximum \(2\log d\). We have \(C({\cal N})\approx Q({\cal N})\approx\min\{\log d,\beta\delta E\}\).
The same kind of capacity boost also exists for the entanglement-assisted quantum capacity \(Q_{E}=C_{E}/2\geq\frac{1}{2}\log d\). However, for quantum capacities, such a separation is more common. For example, the noiseless classical bit channel has zero quantum capacity, but with entanglement assistance, we have \(Q_{E}=\frac{1}{2}\log d\).
Footnote 19: Bennett et al. showed that the ratio \(C_{E}/C\) can be large for the depolarizing channel in the large noise limit [43], but both capacities also tend to zero. The case of the Unruh channel is more surprising because both capacities remain finite.
Note that \(C_{E}(\mathcal{N})\) is not bounded above by \(\beta\delta E\). Bob ends up receiving more bits than the Bekenstein bound allows. This happens because we have allowed Bob to decode the message using his share of the additional entanglement resources that the Bekenstein bound does not take into account. This violation concurs with what we concluded from Page's proposal. If we forbid Bob from using his halves of the Bell pairs in decoding, then Alice running her dense coding protocol with her shares is of no value for enhancing the classical communication. It follows that the best achievable rate will just be the classical capacity \(C(\mathcal{N})\) and the Bekenstein bound will be respected.
Therefore, if we allow Bob to decode with his share of the auxiliary system, we also need to account for the contributions from the Bell pairs. To ensure that the entanglement is a free resource between Alice and Bob, we can assume that the Bell pairs are kept in some cavities held by Alice and Bob.20 The degrees of freedom are described by some auxiliary Hilbert space \(\mathcal{H}^{\prime}_{A}\otimes\mathcal{H}^{\prime}_{B}\) independent from the free fields in the Rindler space. Since the protocol consumes at most \(\log d\) Bell states per channel use, we then need to increase the Bekenstein bound by \(\log d\).21
Footnote 20: The cavities can be made to preserve the prepared entanglement under acceleration [50; 51; 52].
Footnote 21: A quick way to see this \(\log d\) increase is to look at Casini's entropy bound. When \(\log d\) Bell pairs \(\Phi_{AB}^{\otimes\log d}\) are included, the relevant bound is \(S(\rho\otimes\Phi_{B}^{\otimes\log d}||\Omega\otimes\Phi_{B}^{\otimes\log d})\geq 0\), which is equivalent to \(\langle\delta K_{\Omega}\rangle_{\rho}+\log d\geq\delta S(\rho)+\log d\).
## 4 No Bekenstein bound for zero-bits
A puzzle concerning the large capacity boost from entanglement assistance remains. We need to bear in mind that free entanglement alone does not allow Alice to communicate with Bob. On the other hand, the Unruh channel becomes so noisy at high temperatures that no qubits and only a negligible fraction of a classical bit can be sent, regardless of how many particle species are at Alice's disposal. What is then the communication resource transmitted by the Unruh channel that allows Alice and Bob to utilize the entanglement and achieve the capacity of at least \(\log d\)?
It is instructive to see explicitly how well we can distinguish two noisy codewords \(\mathcal{N}(|i\rangle\!\langle i|)\), \(\mathcal{N}(|j\rangle\!\langle j|)\) for any orthogonal \(|i\rangle,|j\rangle\). We measure the distinguishability using fidelity. The smaller the fidelity is, the more distinguishable the two states are. Using (3.31), we obtain
\[F(\mathcal{N}(|i\rangle\!\langle i|),\mathcal{N}(|j\rangle\! \langle j|))=(1-e^{-\beta\omega})^{2}(e^{\beta\omega}-1)\,\mathrm{Li}_{-\frac{ 1}{2}}(e^{-\beta\omega})^{2}\stackrel{{\beta\omega\gg 1}}{{ \approx}}e^{-\beta\omega}\, \tag{4.1}\]
where we approximate the polylogarithm function for a small argument by the leading term in the expansion. We conclude that the fidelity is exponentially suppressed in the Bekenstein bound.22 We therefore say the Unruh channel is able to preserve geometry when \(\beta\omega\gg 1\).
Footnote 22: Note that this error, however small, is not zero, so the error can accumulate to set a limit on how well one can distinguish a large ensemble of states. It is therefore expected that it's increasingly difficult to distinguish all the members of a larger ensemble of noisy codewords. This intuitively explains why the classical capacity also obeys the Bekenstein bound.
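The fidelity formula (4.1) is simple to evaluate; the sketch below assumes the `mpmath` library for the polylogarithm \(\mathrm{Li}_{-1/2}\):

```python
# Numerical check of eq. (4.1): the codeword fidelity approaches e^{-bw}
# for bw = beta*omega >> 1, i.e. the noisy codewords stay distinguishable.
import math
import mpmath

def fidelity(bw):
    q = math.exp(-bw)
    return (1 - q) ** 2 * math.expm1(bw) * float(mpmath.polylog(-0.5, q)) ** 2

for bw in (2.0, 4.0, 8.0):
    print(f"bw = {bw}:  F = {fidelity(bw):.3e}   e^-bw = {math.exp(-bw):.3e}")
```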
Hayden-Winter realized that geometry preservation implies that the noisy channel still maintains some amount of quantum coherence that can be utilized [20]. This feature can be operationalized in the task of _quantum identification_ (QID), where Bob is supposed to simulate measurements that test whether the state sent by Alice is an arbitrary pure state. More precisely, we say that \(\mathcal{E}:\mathcal{P}(\mathcal{H}_{M})\rightarrow\mathcal{P}(\mathcal{H}_ {A})\) is an \(\varepsilon\)-QID code for a channel \(\mathcal{M}:\mathcal{P}(\mathcal{H}_{A})\rightarrow\mathcal{P}(\mathcal{H}_ {B})\), if for any \(\psi\in\mathcal{H}_{M}\) that Alice encodes and any projective measurement \(\{\ket{\varphi}\!\!\bra{\varphi},1-\ket{\varphi}\!\!\bra{\varphi}\}\) that Bob wants to simulate on his output end, there exists a binary POVM \(\{D_{\varphi},1-D_{\varphi}\}\) that Bob can implement such that it approximately simulates the ideal measurements,
\[\left|\text{Tr}\big(\mathcal{M}\circ\mathcal{E}(\psi)\cdot D_{\varphi}\big)-\left|\langle\varphi|\psi\rangle\right|^{2}\right|\leq\varepsilon. \tag{4.2}\]
It is the universality that makes the task challenging: the same QID code must work for all Alice message states and all Bob projective measurements. The claim is that a channel that preserves geometry well can also be used for quantum identification, and vice versa.
Hayden-Penington (HP) [21] later formalized geometry preservation by referring to this quantum information transmitted by the channel as a _zero-dit_ with error \(\varepsilon\), where \(d\) is the input dimension. A zero-dit is a one-shot notion associated with an error quantifier. It's more elegant and conceptually useful to make a further abstraction by introducing the _zero-bit_ as an asymptotic communication resource. Heuristically, a zero-dit corresponds to \(\log_{2}d\) zero-bits23 when \(d\rightarrow\infty,\varepsilon\to 0\).
Footnote 23: Note that while we measure capacities in nats, we switch to \(\log_{2}\) when we refer to the number of zero-bits. Otherwise, it would be \(\log d\) "zero-nats".
Let us use the Unruh channel as a concrete example. To obtain a larger input dimension, we use the channel \(n\) times, but still only want to decode a two-dimensional subspace. The repetition helps to exponentially suppress the error. By encoding into a random subspace of the typical subspace of the input, it is possible to use \(\mathcal{N}^{\otimes n}\) to send a zero-\(\mathrm{d}^{n-o(n)}\)it with error decreasing exponentially with \(n\)[20; 21].24 In the asymptotic limit \(n\rightarrow\infty\), the error vanishes and we say the Unruh channel sends \(\log_{2}d\)_zero-bits_ per channel. As compared to the zero-dit, the notion of zero-bit gets rid of any error quantifiers and therefore proves to be very useful as a building block for communication protocols.
Footnote 24: The optimal zero-bit code is especially simple in this case because the coherent information is positive at all temperatures, eliminating the need for some complications found in the general case [20; 21].
The argument above implies that the Unruh channel can at least achieve a communication rate of \(\log_{2}d\) zero-bits per channel use. We can define the _zero-bit capacity_ for a channel as the supremum of achievable rates, denoted \(Q_{0}\). This capacity is also known as the _quantum
identification capacity_, which measures the optimal asymptotic achievable rate at which the Unruh channel can support QID with vanishing error. HP showed that the zero-bit capacity admits a general formula that single-letterizes for any (transpose) degradable channel, such as the Unruh channel. The single-letter formula reads
\[Q_{0}(\mathcal{N})=\sup_{\psi\in\mathcal{H}_{A}\otimes\mathcal{H}_{A^{\prime}}}\left[I(\psi;\mathcal{N})\quad\text{s.t.}\quad I(A\rangle B)_{\mathcal{N}(\psi)}>0\right]\,, \tag{4.3}\]
where \(I(\psi;\mathcal{N})\) is the channel mutual information, as in the formula for \(C_{E}\) (3.17), and the coherent information is as in the formula for \(Q\) (3.19). Note that \(C_{E}\geq Q_{0}\) because they have the same formula except that \(Q_{0}\) has an extra condition demanding strictly positive coherent information. For the Unruh channel, both \(C_{E}(\mathcal{N})\) and \(Q(\mathcal{N})\) are evaluated at a maximally entangled state \(\psi=\Phi_{AA^{\prime}}\), so we know that the supremum for \(Q_{0}\) is achieved also at the same state, provided the strict positivity constraint is satisfied there. Fortunately, it is indeed satisfied because we have shown that \(Q(\mathcal{N})>0\) for all parameters \(\beta,\omega\) of the channel. We therefore have (see Fig 2)
\[Q_{0}(\mathcal{N})=C_{E}(\mathcal{N})=\log d+Q(\mathcal{N}). \tag{4.4}\]
Let us elaborate on this result. The zero-bit may sound like a mildly useful communication resource. However, besides its use in quantum identification, we can substitute the classical/quantum bits for zero-bits in many primitive quantum information processing protocols, such as teleportation, state merging, dense coding, entanglement distillation, and so on [21]. They are also the minimal resources needed for these protocols, and so become readily useful in scenarios when the standard protocols are too resource-demanding. The relevant protocol in the context of classical communication is the zero-bit version of dense coding.
Figure 2: **Zero-bit capacity of the Unruh channel.** We plot \(Q_{0}(\mathcal{N})\) in green against \(\beta\omega\) and \(\log d\) respectively. It does not obey the Bekenstein bound. The opaque capacity curves of \(C(\mathcal{N}),Q(\mathcal{N})\) are there for comparison.
Standard dense coding achieves the following resource inequality,
\[1\;{\rm qubit}+2\;{\rm ebits}\geq 2\;{\rm cbits}\, \tag{4.5}\]
which means that one qubit and two Bell pairs (ebits) are sufficient to simulate the communication resource of two classical bits (cbits). However, it turns out that qubits are not necessary for this task, and we can replace them with two zero-bits. There is even a variant of the dense coding protocol that can operate over a noisy communication channel incapable of sending qubits, provided it can send zero-bits, which "animate" the Bell pairs by turning them into classical communication [21]. We obtain
\[1\;{\rm zero}{\rm-bit}+1\;{\rm ebit}\geq 1\;{\rm cbit}. \tag{4.6}\]
This can be viewed as a tightening of the dense coding resource inequality (4.5).25 We review the protocol in Appendix C.
Footnote 25: To make these identities rather than inequalities, one needs to substitute a _coherent bit_ on the RHS [53], which is stronger than a cbit.
Zero-bits are hence directly responsible for the capacity boost of entanglement-assisted classical communication. In the high temperature limit, the quantum capacity tends to zero, so we have _no qubits_ available for the standard textbook dense coding protocol (4.5). However, we can still use the zero-bit version (4.6) because the Unruh channel can send \(\log_{2}d\) zero-bits. With free entanglement, the entanglement-assisted classical capacity \(C_{E}\) is at least \(\log d\). Recall that generally \(C_{E}\geq Q_{0}\), but this is saturated for the Unruh channel (4.4). This means no more bits can be sent once all the zero-bits are used up. We thus identify the essential ingredient responsible for the capacity boost in a channel that is seemingly too noisy to be useful.
Since \(Q_{0}\) increases unboundedly with the number of species \(d\), we conclude that zero-bits are not constrained by the Bekenstein bound. Bob can then further process, transmit, or store zero-bits for use in various quantum information processing tasks.
Note that zero-bits can be "turned" into cbits and qubits when combined with other resources such as ebits, as in dense coding or teleportation, and then the Bekenstein bound is obeyed by these so-obtained cbits and qubits. Importantly, there are also tasks like QID that do not invoke other resources such that the zero-bits are directly consumed, and we have shown that the Bekenstein bound does not impose a limit in those scenarios.
One might think that there could be some alternative version of the Bekenstein bound that is not exactly \(\beta\delta E\) such that the zero-bit capacity can be constrained. This is not possible because zero-bits exhibit the species problem. For any finite bound determined by the energy and the spatial extent, it can always be violated by the zero-bit capacity at a sufficiently large species number \(d\). In contrast, the von Neumann entropy, classical and quantum capacities do not have the species problem. Therefore, there is simply no Bekenstein bound for zero-bits.
We re-emphasize that our claim is not in tension with Casini's bound in Rindler space, as we are not making a statement regarding the (regularized) von Neumann entropy but rather
the zero-bit channel capacity. Our result instead challenges the folklore interpretation of the Bekenstein bound as a universal bound on the information content.
## 5 Concluding remarks
We close by making a few remarks on future directions.
* _Bekenstein bound for general channels._ It is natural to ask if the Bekenstein bound constrains the classical and quantum capacities for general channels in QFT. Consider a general quantum channel (in the Heisenberg picture) that embeds the operator sub-algebra of some local region into the algebra of bounded operators on the QFT Hilbert space. We would like to know if the regularized Holevo information and the regularized coherent information are bounded by a Bekenstein bound \(RE\). Here, \(R\) refers to the size of the region and \(E\) refers to the energy cap that we enforce on the codewords. On the information-theoretic side, a major difficulty is that channel capacities are generally superadditive and we do not have a good understanding of this phenomenon nor effective tools to estimate the regularized quantities [34]. Nonetheless, for the case of the Unruh channel that outputs to a Rindler wedge, one can try to lift the restrictions we have made on the mode and one-particle codewords. This appears feasible because the Unruh channel can be treated as a Gaussian channel, so the formidable apparatus of Gaussian quantum information can be brought to bear to analyze the channel capacities [54; 55; 56; 57]. We shall leave the investigation to future work, however.
* _Operationality in QFT._ There is another fundamental issue with doing quantum information in QFT. Results in quantum information are usually proven in finite dimensions and it remains a nontrivial task to rigorously show that most results generalize to QFT. The easy option is to set the technicalities aside and assume that the essential operational meaning of the entropic quantities calculated do carry over to QFT. This is plausible if the operations (preparation, channels, measurement) can be drawn freely from the local algebra. Unfortunately, many operations that belong to the local algebra aren't physical because they tend to violate causality [58]. There is only a limited subset of causal and local operations that are physically implementable [59; 60; 61]. A full-fledged measurement theory is needed to ensure that the information-theoretic statements correspond to realizable operations. There has been some ongoing effort in this direction that builds the theory using detector models [62] or within the framework of algebraic QFT [63]. It would be useful to properly analyze quantum information protocols in QFT with the help of these new tools.
* _Zero-bit code._ The Unruh channel at high temperatures is particularly interesting because it has an extensive zero-bit capacity in contrast to the vanishing quantum capacity and negligible classical capacity. We are not aware of another such channel exhibiting
this behaviour.26 On the other hand, we know that the geometry preservation is poor at high temperatures so the Unruh channel itself is not a good zero-bit code. Therefore, one would need a good encoding to achieve the extensive zero-bit capacity for many parallel uses of this noisy Unruh channel. So far the noiseless classical channel remains the only channel for which we know an explicit capacity-achieving encoding [64]. It is of independent interest to find an explicit capacity-achieving code for the Unruh channel, which could also be physically illuminating. Footnote 26: Because the Unruh channel can be built out of the optimal cloners [46], the same feature holds for a \(1\to N\) optimal cloner at large \(N\).
* _Zero-bits in holography._ Historically, the Bekenstein bound initiated the fruitful development of holographic entropy bounds that also universally appeal to gravity. These entropy bounds eventually led to the development of holographic dualities that deeply connect information and geometry. In this vein, we believe that our new twist to the old story could have further implications in quantum information and quantum gravity. For example, the fact that zero-bits are unconstrained by the Bekenstein bound should also extend to holographic bounds. A related observation was made in [65], where the authors claim that the number of available zero-bits is not bounded by the area differences between two quantum Ryu-Takayanagi surfaces [66], so they can be used to substitute cbits in one-shot state merging that underpins the entanglement wedge reconstruction in holography. We expect more to be found in holography.
## Acknowledgments
We are grateful to Ahmed Almheiri, Horacio Casini, Kfir Dolev, Henry Lin, Eduardo Martin-Martinez, Arvin Shahbazi-Moghaddam, Zhenbin Yang, and Shunyu Yao for their discussions and valuable comments. This work was supported by ARO (award W911NF2120214), CIFAR and the Simons Foundation.
## Appendix A Generalizing Page's counterexample
We first review Page's counterexample. He considered the mixture between the vacuum \(|\Omega\rangle\) and some excited state obtained from a local unitary acting on a compact region of width \(R\), \(|\psi\rangle=U|\Omega\rangle\). The mixed state reads \(\bar{\rho}=(1-p)\Omega+p\psi\). We do not assume that \(\psi\) is orthogonal to \(\Omega\) so we let the state overlap be \(r=\text{Tr}\ \Omega\psi\). Suppose that the Hamiltonian of the theory \(H\) is renormalized such that \(H|\Omega\rangle=0\). Then the energy of \(\bar{\rho}\) reads \(E=\langle H\rangle_{\bar{\rho}}=p\langle\psi|H|\psi\rangle\). Hence, the Bekenstein bound can be arbitrarily small and it scales _linearly_ with \(p\).
Consider now the entropy of \(\bar{\rho}\), which has eigenvalues \(\{q,1-q\}\) and \(q=p(1-p)(1-r)\). The entropy of \(\bar{\rho}\) is given by the binary entropy function \(h(q):=-q\log q-(1-q)\log(1-q)\). Note that at small \(p\), \(q\) scales linearly with \(p\) and \(h(q)\) scales like \(-q\log(q)\). Therefore, the gradient of the entropy at small \(p\) goes like \(\mathcal{O}(-\log p)\), and for sufficiently small \(p\) it overwhelms the Bekenstein bound which has a finite gradient \(\langle\psi|H|\psi\rangle R\).
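A toy numerical rendering of this argument is given below (our illustration; the overlap, region size, and excitation energy are hypothetical values): the binary entropy of the mixture grows like \(-p\log p\) and eventually overtakes any bound linear in \(p\).

```python
# Page's counterexample in numbers: -p log p beats a linearly shrinking bound.
import math

def h(q):
    """Binary entropy in nats."""
    return -q * math.log(q) - (1 - q) * math.log(1 - q)

r = 0.0                      # state overlap Tr(Omega psi), assumed zero
RE1 = 10.0                   # hypothetical R * <psi|H|psi>, so the bound is p * RE1
for p in (1e-1, 1e-3, 1e-5, 1e-7):
    q = p * (1 - p) * (1 - r)
    print(f"p = {p:.0e}:  S = {h(q):.3e}   bound = {p * RE1:.3e}")
# The bound holds at larger p but is overtaken once p is small enough.
```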
In the operational context of distinguishing an ensemble of states created by local unitaries, entropy is an upper bound on the accessible information. The bound is not tight when the states are nonorthogonal. This gap leaves the possibility that the Bekenstein bound could still apply to the accessible information. We now show that the same counterexample also forbids this possibility. Namely, when \(p\) is small enough, the accessible information from the ensemble \(\{(1-p,\Omega),(p,\psi)\}\) surpasses the Bekenstein bound \(RE\).
The violation essentially boils down to the same divergent gradient at \(p=0\) of the accessible information. For an ensemble of two pure states parameterized by the mixing probability \(p\) and the state overlap \(r\), the optimal measurement is the Holevo-Helstrom measurement, and the accessible information \(I_{\rm acc}(p,r)\) has an explicit expression (see equation (23) in [67]). It turns out that the gradient \(\partial_{p}I_{\rm acc}(p,r)\) has the same logarithmic divergence near \(p=0\), so the violation follows.
## Appendix B Some estimates
We give details of some estimates used in the main text. Let us collect some entropy formulas which can be found in BHP [28].
\[H_{0}(d):=S(\Omega)=d\left(\frac{\beta\omega}{e^{\beta\omega}-1}-\log(1-e^{-\beta\omega})\right)\, \tag{B.1}\]
\[H(d):=S(\mathcal{N}(\pi))=\log d-(d+1)\log(1-e^{-\beta\omega})+(1+d)\frac{\beta\omega e^{-\beta\omega}}{1-e^{-\beta\omega}}\\ -\frac{(1-e^{-\beta\omega})^{d+1}}{d}\sum_{k=1}^{\infty}{d+k-1\choose k}k\log k\,e^{-\beta\omega(k-1)}\, \tag{B.2}\]
\[H^{c}(d):=S(\mathcal{N}^{c}(\pi))=\log d-(d+1)\log(1-e^{-\beta\omega})+(1+d)\frac{\beta\omega e^{-\beta\omega}}{1-e^{-\beta\omega}}\\ -\frac{(1-e^{-\beta\omega})^{d+1}}{d}\sum_{k=1}^{\infty}{d+k-1\choose k}k\log(d+k-1)\,e^{-\beta\omega(k-1)}. \tag{B.3}\]
Let us first justify the claim used to estimate the value of the classical capacity,
\[\gamma\geq H(1)-H_{0}(1)\geq 0. \tag{B.4}\]
We have
\[H(1)-H_{0}(1)=-\log(1-e^{-\beta\omega})+\frac{\beta\omega e^{-\beta\omega}}{1-e^{-\beta\omega}}-(1-e^{-\beta\omega})^{2}\sum_{k=1}^{\infty}k\log k\,e^{-\beta\omega(k-1)}\\ =h(e^{-\beta\omega})/(1-e^{-\beta\omega})+e^{\beta\omega}(1-e^{-\beta\omega})^{2}\,\partial_{s}\mathrm{Li}_{s}(e^{-\beta\omega})|_{s=-1}\, \tag{B.5}\]
where \(h(\,\cdot\,)\) is the binary entropy function and \(\mathrm{Li}_{s}\) is the polylogarithm function of order \(s\). We can use Mathematica to verify that this function of \(\beta\omega\) satisfies (B.4). We give a plot of it in Fig. 3(a).
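Alternatively, a short scipy-free numerical check of (B.4), truncating the series on the first line of (B.5) at an assumed cutoff, reads:

```python
# Check of (B.4): the gap H(1) - H_0(1) stays between 0 and Euler's constant,
# approaching gamma as beta*omega -> 0. The cutoff kmax is an assumption.
import math

def gap(bw, kmax=4000):
    """First line of (B.5)."""
    q = math.exp(-bw)
    series = sum(k * math.log(k) * q ** (k - 1) for k in range(2, kmax))
    return -math.log(1 - q) + bw * q / (1 - q) - (1 - q) ** 2 * series

gamma = 0.5772156649015329
for bw in (0.25, 0.5, 1.0, 2.0, 5.0):
    g = gap(bw)
    assert -1e-12 <= g <= gamma + 1e-9
    print(f"beta*omega = {bw}:  H(1) - H0(1) = {g:.4f}")
```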
We also used the following to estimate the value of the quantum capacity,
\[-1\leq H_{0}(d)-H^{c}(d)\leq 0. \tag{B.6}\]
Note that the \(\leq 0\) part essentially follows from the fact that the quantum capacity must be smaller than the classical capacity. Unlike (B.4), \(H_{0}(d)-H^{c}(d)\) depends on the dimension. Since we cannot analytically evaluate the sum in \(H^{c}(d)\) for all \(d\), we provide some numerical evidence in Fig. 3(b) that (B.6) holds. The exact lower bound \(-1\) is not important to establish our main results.
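The numerical evidence can be reproduced with the following sketch, which implements (B.1) and (B.3) directly (the cutoff `kmax` is an assumption):

```python
# Numerical evidence for (B.6); the binomial is handled in log space.
import math

def H0(d, bw):
    """Vacuum entropy (B.1)."""
    return d * (bw / math.expm1(bw) - math.log(1 - math.exp(-bw)))

def Hc(d, bw, kmax=3000):
    """Complementary-channel output entropy (B.3)."""
    q = math.exp(-bw)
    s = sum(math.exp(math.lgamma(d + k) - math.lgamma(k + 1) - math.lgamma(d)
                     + math.log(k) + math.log(math.log(d + k - 1))
                     - bw * (k - 1))
            for k in range(1, kmax))
    return (math.log(d) - (d + 1) * math.log(1 - q)
            + (1 + d) * bw * q / (1 - q) - (1 - q) ** (d + 1) / d * s)

bw = 1.0
for d in (2, 4, 16, 64):
    diff = H0(d, bw) - Hc(d, bw)
    assert -1.0 <= diff <= 0.0
    print(f"d = {d:3d}:  H0 - Hc = {diff:.4f}")
```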
## Appendix C Zero-bit dense coding
We first remind the readers of the standard dense coding protocol, which helps send \(2\log d\) cbits with a qudit or a channel that has \(\log d\) quantum capacity. Alice and Bob share a \(d^{2}\)-dimensional Bell state \(\Psi\). Alice acts with the following local unitaries
\[X:=\sum_{0\leq j<d}|j+1\mod\ d\rangle\!\langle j|\,,\quad Z:=\sum_{0\leq j<d}e^{i2\pi j/d}\,|j\rangle\!\langle j| \tag{C.1}\]
on her share of the Bell state, depending on her \(2\log d\)-bit message. She divides her message string into two equal parts, labeling each message with \(xy\), where \(0\leq x,y<d\).
\[|\Psi_{xy}\rangle:=Z^{x}X^{y}|\Psi\rangle. \tag{C.2}\]
She then sends her share (a qudit) to Bob, and Bob can decode the cbits by measuring in the orthonormal basis of \(d^{2}\) Bell states.
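The orthonormality of the codewords (C.2) is easy to verify numerically; the following sketch (our illustration, assuming `numpy`) constructs all \(d^{2}\) codewords and checks their Gram matrix:

```python
# The d^2 states Z^x X^y |Psi> form an orthonormal basis, so Bob can decode
# 2 log d cbits from a single Bell-basis measurement.
import numpy as np

d = 3
X = np.roll(np.eye(d), 1, axis=0)                      # |j+1 mod d><j|
Z = np.diag(np.exp(2j * np.pi * np.arange(d) / d))     # phase operator
psi = np.eye(d).flatten() / np.sqrt(d)                 # |Psi> = sum_j |jj>/sqrt(d)

codewords = [
    np.kron(np.linalg.matrix_power(Z, x) @ np.linalg.matrix_power(X, y),
            np.eye(d)) @ psi
    for x in range(d) for y in range(d)
]
gram = np.array([[np.vdot(a, b) for b in codewords] for a in codewords])
assert np.allclose(gram, np.eye(d * d))                # mutually orthonormal
print("All", d * d, "codewords are mutually orthogonal.")
```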
Figure 3: Plots in support of (B.4) and (B.6).
The zero-bit dense coding protocol of Hayden-Penington [21] can send cbits at a rate of \(\log_{2}d\) with \(\log_{2}d\) zero-bits. In the one-shot setting, the protocol can help send \(\log_{2}d+1\) cbits with some error \(\delta(\varepsilon)\) using a zero-dit with some error \(\varepsilon\).
Alice now has a zero-dit instead of a qudit at her disposal. She encodes her message of \(\log_{2}d\) bits entirely in \(x\), plus one more bit \(y\in\{0,1\}\).
\[|\Psi_{x}\rangle=Z^{x}X^{y}|\Psi\rangle=\frac{1}{\sqrt{d}}\sum_{0\leq j<d}e^{i2\pi xj/d}|j\rangle_{B^{\prime}}|j+y\rangle_{A}. \tag{C.3}\]
Alice then sends her share to Bob as a zero-dit, which only allows Bob to decode any two-dimensional subspace. If Bob measures his own share of the Bell pair in the computational basis, he can then figure out the \(y\) bit. More explicitly, let the Stinespring dilation of Alice's encoding and channel be \(U^{A\to BE}\). For any state \(|\chi_{j}\rangle_{A}\in\text{span}\{|j\rangle_{A},|j+1\rangle_{A}\}\), Bob applies a corresponding decoding isometry \(V_{j}^{B\to AE^{\prime}}\)
\[V_{j}^{B\to AE^{\prime}}U^{A\to BE}|\chi_{j}\rangle_{A}\approx|\chi_{j}\rangle_{A}\otimes|\eta\rangle_{EE^{\prime}} \tag{C.4}\]
where the approximation error depends on the \(\varepsilon\) of the zero-dit, and the environment state \(|\eta\rangle_{EE^{\prime}}\) can be set independent of \(j\). The purification of the channel output is
\[\frac{1}{\sqrt{d}}\sum_{j}e^{i2\pi xj/d}|j\rangle_{B^{\prime}}U^{A\to BE}|\chi_{j}\rangle_{A}. \tag{C.5}\]
Bob applies the following isometry on the above state
\[V=\sum_{j}|j\rangle\!\langle j|\otimes V_{j}^{B\to AE^{\prime}} \tag{C.6}\]
and we obtain
\[\frac{1}{\sqrt{d}}\sum_{j}e^{i2\pi xj/d}|j\rangle_{B^{\prime}}V_{j}^{B\to AE^{\prime}}U^{A\to BE}|\chi_{j}\rangle_{A}\approx\left(\frac{1}{\sqrt{d}}\sum_{j}e^{i2\pi xj/d}|j\rangle_{B^{\prime}}|\chi_{j}\rangle_{A}\right)\otimes|\eta\rangle_{EE^{\prime}}. \tag{C.7}\]
For \(|\chi_{j}\rangle_{A}\in\{|j\rangle_{A},|j+1\rangle_{A}\}\), we obtain Alice's codewords, and Bob can measure and decode the \((\log_{2}d+1)\)-bit classical message.
See Theorem 5 in [21] for the proof that this protocol achieves the entanglement-assisted capacity of \(\log_{2}d\) with vanishing error.
|
2309.07422 | Grid-Aware On-Route Fast-Charging Infrastructure Planning for Battery
Electric Bus with Equity Considerations: A Case Study in South King County | The transition from traditional bus fleets to zero-emission ones necessitates
the development of effective planning models for battery electric bus (BEB)
charging infrastructure. On-route fast charging stations, distinct from on-base
charging stations, present unique challenges related to safe operation and
power supply capacity, making it difficult to control grid operational costs.
This paper establishes a novel framework that integrates the bus route network
and power network, which leverages the inter-dependency between both networks
to optimize the planning outcomes of on-route BEB charging stations in South
King County. The problem is formulated as a mixed-integer second-order cone
programming model, aiming to minimize the overall planning cost, which includes
investments in charging equipment, power facility, and grid operation.
Furthermore, fairness measurements are incorporated into the planning process,
allowing for the consideration of both horizontal transit equity and vertical
transit equity based on different zone merging criteria within the county's
existing census tracts. The results of this planning model offer valuable
insights into achieving both economic efficiency and social justice in the
design of on-route charging facilities for BEBs in South King County. | Xinyi Zhao, Chaoyue Zhao, Grace Jia | 2023-09-14T04:34:58Z | http://arxiv.org/abs/2309.07422v1 | [
###### Abstract
The transition from traditional bus fleets to zero-emission ones necessitates the development of effective planning models for battery electric bus (BEB) charging infrastructure. On-route fast charging stations, distinct from on-base charging stations, present unique challenges related to safe operation and power supply capacity, making it difficult to control grid operational costs. This paper establishes a novel framework that integrates the bus route network and power network, which leverages the inter-dependency between both networks to optimize the planning outcomes of on-route BEB charging stations in South King County. The problem is formulated as a mixed-integer second-order cone programming model, aiming to minimize the overall planning cost, which includes investments in charging equipment, power facility, and grid operation. Furthermore, fairness measurements are incorporated into the planning process, allowing for the consideration of both horizontal transit equity and vertical transit equity based on different zone merging criteria within the county's existing census tracts. The results of this planning model offer valuable insights into achieving both economic efficiency and social justice in the design of on-route charging facilities for BEBs in South King County.
Grid-Aware On-Route Fast-Charging Infrastructure Planning for Battery Electric Bus with Equity Considerations: A Case Study in South King County
Xinyi Zhao\({}^{a}\), Chaoyue Zhao\({}^{a,*}\) and Grace Jia\({}^{b}\)
Footnote *: ORCID(s): 0000-0002-9655-4889 (X. Zhao)
## 1 Introduction
With a 27% contribution to greenhouse gas emissions in 2020, the transportation system is the biggest economic sector that consumes fossil fuels [1]. To reduce the exhaust gas emissions of public transportation, the concept of electromobility, involving the adoption of electric vehicles (EVs) for transportation purposes, is rapidly being embraced by public transportation authorities. When it comes to bus systems, electromobility offers substantial advantages in terms of decreased operating and maintenance costs, increased energy efficiency, improved reliability, and reduced air and noise pollution [2].
Over the course of recent decades, the global implementation of bus fleet electrification has emerged as a prominent and noteworthy trend. Notably, Shenzhen in China became the world's first city to fully electrify its public transit bus fleet in 2018, marking a historic achievement [3]. In Europe, the nations of the Netherlands and Luxembourg have made notable strides, with more than half of their registered city buses categorized as zero-emission vehicles [4]. Similarly, King County in Washington, USA, has positioned itself as an early adopter of electric buses and is ambitiously transitioning towards a completely zero-emissions fleet by 2035 [5]. With remarkable advancements in battery technology, battery electric buses (BEBs) are becoming increasingly viable and appealing options for sustainable urban mobility, thus propelling cities worldwide toward a cleaner and more environmentally friendly future.
As bus agencies embrace this transition, they are driven by the dual objectives of ensuring economic efficiency and maintaining the service quality of their BEB fleets. Consequently, the optimization of charging infrastructure planning in this area becomes crucial, aiming to minimize investment and operation costs associated with the required charging facilities [6], as well as any additional costs that may arise during the electrification process.
A significant body of literature suggests that bus agencies often opt to construct charging facilities at designated base stations. In this approach, electric buses can only be charged after completing one or multiple full trips [7, 8], requiring them to deviate from their scheduled routes [9] and travel deadheading distances for the purpose of charging [10, 11]. This off-route charging strategy is typically employed during overnight and layover periods when buses are not in service and have sufficient time for complete battery recharge [12]. However, relying solely on this strategy may prove insufficient, especially in the case of King County, where the current on-base charging facilities can only meet 70% of the bus assignments [13].
To bridge this energy gap, an alternative and promising direction to explore is the implementation of on-route charging stations. By strategically incorporating fast-charging facilities at on-street bus stops [14, 15], BEBs can conveniently recharge during regular service operations. However, deploying on-route charging stations presents critical challenges that require attention. From an operational standpoint, limited research in BEB planning has explored the impact of the additional power load imposed by these on-route charging stations on the power grid [16]. This includes assessing power loss costs that may occur during electricity
transmission. Furthermore, from a social perspective, the introduction of on-route charging stations must be approached with fairness in mind. Given that these stations can serve specific fixed routes [17], it becomes imperative to ensure an equitable distribution of BEB routes across the regional transportation network. This ensures that diverse communities can access the associated benefits, such as cleaner air and enhanced environmental sustainability offered by BEBs.
Our proposed on-route fast-charging planning method effectively addresses the dual research gaps previously identified. Firstly, we prioritize the impacts on the local power grid in the placement of the on-route fast-charging infrastructure. To achieve this, we have developed a coupled power and transportation network specific to South King County. This integrated approach facilitates optimized planning, minimizing charging infrastructure investment and power system operational costs. Secondly, we recognize the limited attention given to equity in fleet electrification planning within the existing literature. To fill this gap, we have incorporated fairness measures into our planning approach to promote transit equity. Specifically, during the partial implementation of BEB routes in a particular region, our planning method promotes both horizontal and vertical transit equity by carefully selecting routes to be designated as BEB routes from the overall bus network.
The integration of the power and transportation networks, along with considerations of cost optimization and transit equity, positions our approach as an effective and comprehensive solution for the planning of on-route fast charging for BEBs. Furthermore, to emphasize the uniqueness of our method, we thoroughly examine existing research in fleet electrification planning, specifically focusing on the domains of power grid interaction and transit equity.
### Power Grid Interaction
The successful implementation of fleet electrification necessitates a strong interconnection between transportation and power systems. To ensure efficient management of this interaction, an integrated approach that considers the coupled power and transportation network is crucial in charging infrastructure planning. While this approach has received limited attention in the context of electric bus on-route charging stations, some research has integrated the power grid and transportation network when planning EV charging stations. This integration can take two forms: coupling a transportation test case with a power system test case or coupling a real-world transportation network with a power system test case. The latter approach incorporates authentic data and conditions from a functioning transportation system, resulting in enhanced practicality.
In the first type of coupled system, the Sioux Falls network is widely used as a transportation test case. For example, in a study by He et al. [18], a coupled network was created using the topology of the Sioux Falls network and a subset of the IEEE-118 bus system. The goal of this study was to allocate a specified number of charging stations for plug-in EVs. The potential locations of these charging stations were identified as common nodes in both the transportation and power grid systems. In another study by He et al. [19], the topology of the Sioux Falls network was retained for the transportation system, but the authors used a simplified version of the IEEE 34-bus system for the power grid. The authors matched the destination nodes in the transportation system with the corresponding buses in the power grid, but the intention behind this was unspecified. He et al. [20] built a coupled system using the Sioux Falls network and the IEEE 33-bus system; nevertheless, there was no direct relationship between the road distances and the power line lengths in this study.
In addition to the Sioux Falls network, other researchers have created their own transportation networks to build coupled systems for EV planning. For instance, He et al. [21] created a coupled system through the utilization of a nine-node road network and a subset of the IEEE 118-bus system. In this case, each link in the transportation network was connected to a particular bus in the power system, and the energy consumption of EVs on that link resulted in a power load on the grid. Wang et al. [22] employed a 25-node traffic network and an 11 kV 33-node distribution system to construct their coupled system. The authors considered the geographical positioning of the nodes and established a direct relationship between the nodes in the transportation and power systems, where the traffic nodes 1-25 overlapped with the distribution system nodes 1-25. Furthermore, Zhang et al. [23] adopted a 25-node highway transportation network and designed a 14-node 110 kV high voltage distribution network to establish a relationship between the transportation link distance and the power line length within their coupled system.
Regarding the second type of coupled system, an exemplar is a work by Lin et al. [9], who employed a real-world transportation network from the city of Shenzhen and integrated it with the virtual power network established by Zhang et al. [23]. To retrieve distances within the transportation network, they utilized the API of Baidu Map. However, it is important to note that their studies did not account for the correlation between the actual road distance and the line length in the virtual power network.
Building upon the second type of coupled system discussed in the literature, we propose a comprehensive framework that integrates a real-world transportation network with a virtual power system. Our framework establishes a correlation between the actual bus route distance and the power line length in the coupled system, enhancing the practicality of the planning outcome. Unlike previous approaches, our framework is designed to be adaptable and suitable for various bus networks in different regions. By utilizing our generic coupled network framework, our objective is to address the existing research gap and provide a comprehensive solution for on-route BEB charging infrastructure planning.
### Transit Equity
Existing research has highlighted the presence of transit inequities among underserved communities, including
people of color and low-income individuals, due to inadequate spatial coverage of transportation infrastructure [24, 25]. Addressing and rectifying this long-standing spatial gap between low-income settlements and their access to transit services pose great challenges [26]. However, fleet electrification, being a significant transit initiative, presents an opportunity to address these inequities right from the planning phase. This involves strategically locating charging infrastructure and designing efficient routes to serve historically underserved areas [27]. By adopting an equitable perspective, BEB planning [28] can serve as a means to mitigate the discriminatory impact on socially vulnerable populations caused by transit-related spatial mismatch.
The concept of transit equity encompasses two dimensions: horizontal equity, promoting equal treatment for all individuals [29], and vertical equity, tailoring treatments to diverse needs or circumstances [30]. Despite the importance of transit equity, there is a noticeable dearth of research that applies its principles to transportation-network-related planning [31]. Fan and Machemehl [32] were the first to consider horizontal equity in solving the transportation network redesign problem by introducing a spatial equality constraint. Building upon this work, Camporeale et al. [31] combine both horizontal and vertical equity goals in a constraint of the transit network design problem, ensuring that the final configuration of the public transport service strikes the fairest compromise by considering both spatial distribution and social needs.
Furthermore, in the context of electric bus planning, there is even less research that incorporates transit equity. The work conducted by Zhou et al. [33] closely aligns with our research scope. They proposed a bi-objective model to support transit agencies in the optimal deployment of BEBs, taking into account capital investment and environmental equity. However, their primary focus lies in maximizing vertical equity in one of their objectives, which involves weighting disadvantaged populations based on air pollutant concentration. Notably, to the best of our knowledge, there have been no attempts to incorporate both horizontal and vertical equity into the planning problems of electric bus charging infrastructure.
Given the limited research on the topic, it becomes necessary to draw upon metrics used in other domains to measure the fairness of the transit planning result. Camporeale et al. [31] employed the Gini coefficient, a widely used fairness metric in economics, to develop their equality constraint. Similarly, we have identified Jain's index, which is commonly used to measure fairness in resource allocation within telecommunication networks [34], as a suitable metric to characterize the distribution of BEB routes across a bus network in a given region.
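To make the metric concrete, the sketch below computes Jain's index \(J(x)=(\sum_{i}x_{i})^{2}/(n\sum_{i}x_{i}^{2})\) for the share of electrified routes in each subarea; the subarea figures are hypothetical placeholders. \(J=1\) indicates a perfectly even distribution, while \(J=1/n\) indicates full concentration in a single subarea.

```python
# A minimal sketch of Jain's fairness index [34] applied to BEB route shares.
def jains_index(x):
    """J(x) = (sum x_i)^2 / (n * sum x_i^2); assumes at least one x_i > 0."""
    n = len(x)
    return sum(x) ** 2 / (n * sum(v * v for v in x))

# Hypothetical example: fraction of routes electrified in four subareas.
beb_share = [0.6, 0.5, 0.55, 0.1]
print(f"Jain's index: {jains_index(beb_share):.3f}")  # < 1: last subarea underserved
```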
### Objective and Contribution
This paper presents a novel mixed-integer second-order cone programming (MISOCP) model that aims to optimize the placement of BEB on-route charging infrastructure. The objective is to minimize the planning and operation costs associated with the fleet electrification process in South King County, considering both the transportation and power systems. Additionally, we emphasize the importance of equity in the planning stage by incorporating fairness measurements, ensuring both horizontal and vertical equity. This research makes two primary contributions:
* To address the potential challenge of BEB charging infrastructure on the power system effectively, we have implemented a coupled networks approach that integrates the local power grid and the bus networks into our planning model. Using South King County as a representative example, our proposed generic framework focuses on establishing a virtual power grid based on the under-planning bus network. By strategically deploying on-route charging stations at bus stops via solving the planning model, we establish coupling relationships between transportation nodes and power grid nodes, effectively integrating the two systems in the planning outcome.
* To ensure equity in fleet electrification, we incorporate Jain's index as a fairness metric in our planning model for BEB charging infrastructure. In South King County, we aggregate census tracts based on both population and bus-commuter features, creating distinct subareas. By imposing a fairness constraint that ensures the desired level of Jain's index in these subareas, we promote equity in the planning results. The planning outcomes in the population-based subareas exhibit horizontal equity, ensuring an equal distribution of resources among all individuals. Conversely, the planning outcomes in the bus-commuter-based subareas demonstrate vertical equity, aiming for a fair allocation within the bus-commuter group.
The remainder of this paper is structured as follows. Section 2 presents essential background information and prior knowledge on bus operation, coupled networks, and transit equity analysis in King County. This information is necessary for formulating the MISOCP model in Section 3. Section 4 introduces a generic framework for establishing the coupled power and transportation network based on the given bus network, along with the corresponding algorithm. Case studies of the planning model, both with and without fairness measurement, are conducted in Section 5. Finally, the conclusion is drawn in Section 6.
## 2 Problem Statement
The South Annex Base in King County is scheduled to open in 2025 and is expected to accommodate up to 250 BEBs [13]. To ensure energy support for these vehicles, a combination of slower and faster on-base charging is planned to be adopted. Additionally, on-route fast charging is being considered as an augmented charging strategy for more frequent routes.
Among the various charging strategies for BEBs, there is a significant degree of flexibility in the deployment of
on-route charging stations in different areas. This offers the opportunity to not only address the impacts of climate change but also to promote equity and social justice across the county. Thus, it is of utmost importance to determine the optimal location and capacity of on-route fast-charging stations for BEBs, such that the planning cost is minimized and fairness is maximized.
Furthermore, it is essential to consider the complex interplay between the transportation system and the power network in the planning process. The operation of fast-charging stations for BEBs will result in a significant load demand on the power grid, while at the same time providing stable energy services to BEBs on different routes. A comprehensive understanding of bus operation, the coupled network, and the transit equity status in King County is necessary to effectively implement the planning model.
### Bus Operation
King County Metro conducted range tests on the 40-ft and 60-ft models of BEBs, validating their capacity to cover distances of up to 140 miles, which accounts for 70% of the service needs [13]. These buses are specifically designed for operation during morning and evening rush hours. By implementing on-route charging infrastructure, the BEBs can utilize smaller battery packs without compromising their operational effectiveness.
In the planning of on-route charging facilities, it is crucial to consider the existing on-base chargers at the Interim Base and explore cost-effective strategies for combining both charging strategies. While the current on-base chargers may not accommodate the extension of all new BEB routes, they do enable BEBs to be charged at varying initial state-of-charge (SOC) levels upon departure from their origin stations. Moreover, the implementation of on-route charging expands the range of the tested BEBs beyond the 140-mile capacity, enabling them to handle the remaining 30% of vehicle assignments. This approach optimizes energy management for BEBs, ensuring they maintain sufficient charge to successfully complete their designated routes while minimizing the need for costly infrastructure upgrades.
### Network Representation
In our forthcoming implementation of a coupled network framework for on-route charging station planning, we employ two symbolic systems to enhance the description of components within the transportation and power networks. This approach also simplifies the mathematical expression in our model formulation.
Regarding the transportation network, we denote the set of nodes as \(N^{T}\) and the set of directed links as \(L\). Within this network, a node \(m\in N^{T}\) corresponds to a bus stop, and \(\hat{N}^{T}\) is a subset of \(N^{T}\) that represents the bus origin stations. For each bus route \(\alpha\in\Omega_{\alpha}\), the route always starts from its origin station \(o_{\alpha}\), where \(o_{\alpha}\in\hat{N}^{T}\). The directed links and nodes that comprise bus route \(\alpha\) are represented by \(L_{\alpha}\) and \(N^{T}_{\alpha}\), respectively.
Concerning the power grid, we denote the set of nodes as \(N^{G}\) and the set of branches as \(\mathcal{E}\). In this context, a node in the power grid is represented by \(i\in N^{G}\), while a power line is denoted as \((i,j)\in\mathcal{E}\).
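For illustration, the two symbol systems can be mirrored in code as follows; this is a hypothetical sketch of the data structures, not the authors' implementation, and all class and field names are placeholders.

```python
# Illustrative containers for the transportation and power network notation.
from dataclasses import dataclass, field

@dataclass
class TransportNetwork:
    nodes: set                 # N^T: bus stops m
    origin_nodes: set          # N-hat^T (subset of N^T): origin stations o_alpha
    links: set                 # L: directed links between stops
    routes: dict = field(default_factory=dict)  # alpha -> (L_alpha, N^T_alpha)

@dataclass
class PowerGrid:
    nodes: set                 # N^G: grid nodes i
    branches: set              # E: power lines (i, j)

# Building a charging station at stop m couples the two networks through a
# new line (i, m), selected by the binary variable Psi_{i,m} in Section 3.
```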
### Equity Analysis
Neighborhoods in King County that are burdened with elevated air pollution levels tend to be home to low-income households and marginalized racial and ethnic groups [13]. This disparity is demonstrated in Figure 1, where the southern base areas, encompassing Renton, Burien, Tukwila, SeaTac, and Kent, are prominently affected. These communities have long endured inadequate transportation services, resulting in heightened exposure to transportation-related noise and air pollution. Notably, the South Base exhibits a higher number of daily service miles compared to other bases, indicating a greater extent of service inadequacy. Moreover, approximately 31% of the census blocks along South Base routes are categorized as highly vulnerable to the adverse impacts of air pollution [28].
By prioritizing the implementation of BEB plans in South King County, we can maximize transit equity countywide. The introduction of zero-emission bus routes significantly improves air quality and public health, particularly benefiting minority communities. Moreover, BEBs offer a more comfortable commuting experience with smoother and quieter acceleration and deceleration [35]. This improved ride quality is valuable for daily bus commuters, especially low-income individuals who heavily depend on buses. By enhancing their overall experience, we have the potential to increase ridership and improve public transportation accessibility, thus advancing social equity objectives in fleet electrification planning.

Figure 1: Map of air pollution vulnerable areas and priority quintiles for zero-emission bus service in King County.
## 3 Model Formulation
In this section, we present the formulation of a mathematical model designed for BEB on-route charging station planning. We delve into the details of incorporating Jain's index as a fairness measure within the planning framework. The model optimization includes determining the optimal placement of charging stations, the number of charging piles at each station, the interconnection between bus stations and power grid nodes, and the current flow through power lines that connect the bus stations to the power grid.
### Objective Function
The total planning cost for the on-route charging infrastructure of BEBs is determined by considering the costs associated with both the transportation network for constructing the facilities and the power grid for integrating the new charging stations. This cost is represented by (1), which consists of four components that are summed together. The first three components pertain to the investment cost for the charging stations, charging piles, and the power lines connecting the charging stations to the power grid. The final component represents the operational cost, which accounts for the energy loss in the power grid integrated with on-route charging stations.
\[\begin{split}\min&\sum_{m\in N^{T}}(f_{s,m}\cdot X_{m}+f_{c,m}\cdot\beta_{m})+\sum_{m\in N^{T}}\sum_{i\in N^{G}}c_{i,m}\cdot\Psi_{i,m}\\&+\sum_{(i,j)\in\mathcal{E}}T\cdot c_{e}\cdot\ell_{ij}\cdot r_{ij},\end{split} \tag{1}\]
where \(f_{s,m}\) and \(f_{c,m}\) stand for the unit cost of charging stations and charging piles at bus station \(m\), respectively. The binary decision variable \(X_{m}\) represents the construction of a charging station at bus station \(m\), with \(X_{m}=1\) signifying its presence. The integer decision variable \(\beta_{m}\) represents the number of charging piles installed at bus station \(m\). The cost of constructing the power line that connects bus station \(m\) to the power grid node \(i\) is represented by \(c_{i,m}\), which is determined by the geographical distance between the two nodes. The binary decision variable \(\Psi_{i,m}\) indicates whether the power line has been established, where \(\Psi_{i,m}=1\) signifies that bus station \(m\) has been successfully integrated into power grid node \(i\). The power loss time in the planning period is represented by \(T\), and the electricity price by \(c_{e}\). \(\ell_{ij}\) denotes the square of the magnitude of the complex current from node \(i\) to \(j\) after building charging stations, while \(r_{ij}\) represents the resistance of power line \((i,j)\).
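As a hedged illustration, the objective (1) can be assembled in a solver such as Gurobi's Python API roughly as follows. The parameter dictionaries `f_s`, `f_c`, `c_line`, and `r`, and the scalars `T_hours` and `c_e`, are assumed inputs; the variable names are ours, not the paper's released code.

```python
import gurobipy as gp
from gurobipy import GRB

m = gp.Model("beb_planning")
X = m.addVars(N_T, vtype=GRB.BINARY, name="X")               # build station at stop m
beta = m.addVars(N_T, vtype=GRB.INTEGER, lb=0, name="beta")  # piles at stop m
Psi = m.addVars(N_G, N_T, vtype=GRB.BINARY, name="Psi")      # stop-to-grid connection
ell = m.addVars(E, lb=0.0, name="ell")                       # squared branch current

m.setObjective(
    gp.quicksum(f_s[s] * X[s] + f_c[s] * beta[s] for s in N_T)
    + gp.quicksum(c_line[i, s] * Psi[i, s] for i in N_G for s in N_T)
    + gp.quicksum(T_hours * c_e * ell[i, j] * r[i, j] for (i, j) in E),
    GRB.MINIMIZE,
)
```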
### Constraints
The introduced constraints in the planning model cover three essential processes: ensuring that BEB batteries have sufficient charge to complete their routes, managing the energy transfer from the power grid to the BEB batteries at the charging stations, and assessing the influence of integrated charging stations on the local power grid's power flow. These constraints also establish the coupling relationship between the bus stops in the transportation network and their corresponding nodes in the power grid.
In order for a bus to be charged at bus station \(m\), it is mandatory for a charging station to be constructed at that location:
\[y_{a,m}\leq X_{m},\ \ \forall\alpha\in\Omega_{a},\forall m\in N^{T}, \tag{2}\]
where \(y_{a,m}\) is a binary decision variable, and \(y_{a,m}=1\) denotes the bus on route \(\alpha\) charges at bus station \(m\). In addition, it is necessary to construct the charging piles at an established charging station, which can be formalized as follows using a Big-M method:
\[0\leq\beta_{m}\leq M\cdot X_{m},\ \ \forall m\in N^{T}, \tag{3}\]
where \(M\) can be considered as the total number of available charging piles to be invested during the planning period.
To avoid any queuing during the limited on-route charging time slots, a practical approach is to assign dedicated charging piles for each bus route at shared stations. This strategy ensures smooth charging operations and minimizes potential disruptions or delays caused by congested charging stations. Therefore, it is necessary to ensure that the number of installed charging piles is no less than the number of bus routes assigned to charge at the station:
\[\sum_{a\in\Omega_{a}}y_{a,m}\leq\beta_{m},\ \ \forall m\in N^{T}. \tag{4}\]
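Continuing the sketch, constraints (2)-(4) translate almost one-to-one into linear constraints; the pile cap `M_big` is an assumed planning parameter.

```python
y = m.addVars(routes, N_T, vtype=GRB.BINARY, name="y")  # route a charges at stop s
M_big = 50  # assumed upper bound on piles per station (the big M)

for s in N_T:
    m.addConstr(beta[s] <= M_big * X[s])                          # (3)
    m.addConstr(gp.quicksum(y[a, s] for a in routes) <= beta[s])  # (4)
    for a in routes:
        m.addConstr(y[a, s] <= X[s])                              # (2)
```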
In order to establish a functional coupling between the transportation and power network, it is imperative to connect bus stations that are selected for installing charging infrastructure to a power grid node:
\[\sum_{i\in N^{G}}\Psi_{i,m}=X_{m},\ \ \forall m\in N^{T}. \tag{5}\]
Given the requirement for all BEBs to complete their round trips successfully, we consider the initial SOC of their batteries, deviating from previous studies [14, 36] that assume fully-charged batteries at the start. By exploring various levels of initial SOC as BEBs depart from their origin stations, we can determine the corresponding optimal scale of on-route charging facilities. As a result, we can effectively manage the investment in on-route charging facilities and make efficient use of the existing on-base charging stations at the Interim Base. The initial energy of the BEBs at the time of departure from the origin station \(o_{a}\) can be quantified as \(\theta_{0}\cdot u_{a}^{bat}\), where \(\theta_{0}\) represents the initial SOC of the batteries, and \(u_{a}^{bat}\) denotes the specific battery capacity of bus route \(\alpha\).
During BEB operation, it is necessary to maintain the battery's SOC within a specific safe range:
\[e_{a,m}\geq\theta^{l}\cdot u_{a}^{bat},\ \ \forall a\in\Omega_{a},\forall m\in N^{T}, \tag{6}\]
\[e_{a,m}+s_{a,m}\leq\theta^{u}\cdot u_{a}^{bat},\ \ \forall a\in\Omega_{a},\forall m\in N^{T}, \tag{7}\]
where \(\theta^{l}\) and \(\theta^{u}\) are the lower and upper bounds on the battery SOC of BEBs. \(e_{a,m}\) and \(s_{a,m}\) represent the energy level and the energy supplied to the battery of the BEB on route \(a\) at station \(m\).
At each bus station, the energy conservation constraint for BEB batteries accounts for the energy consumption during travel between stations \(m\) and \(n\):
\[e_{a,n}=e_{a,m}+s_{a,m}-e_{a}^{0}\cdot d_{mn},\ \ \forall a\in\Omega_{a}, \forall(m,n)\in L_{a}, \tag{8}\]
where \(e_{a}^{0}\) denotes the average energy consumption of BEBs per unit distance for route \(a\), which depends on the specific BEB model used. The driving distance between stations \(m\) and \(n\) is represented by \(d_{mn}\). It is worth noting that we consider round-trip routes for each BEB, and the bus must satisfy the energy conservation constraint during the completion of its route in both directions.
All BEBs are required to adhere to the predefined operation schedule and cannot spend excessive time at a charging station. Therefore, the charging energy must not exceed the maximum available energy supply:
\[0\leq s_{a,m}\leq P_{a}^{e}\cdot\tau_{a,m}\cdot y_{a,m},\ \ \forall a\in\Omega_{a}, \forall m\in N^{T}, \tag{9}\]
where \(P_{a}^{e}\) denotes the nominal power of the charging pile for bus route \(a\), and \(\tau_{a,m}\) represents the maximum dwelling time for bus route \(a\) at station \(m\).
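The battery-side constraints (6)-(9) can be sketched in the same way; `u_bat`, `theta_l`, `theta_u`, `e0`, `dist`, `P_chg`, and `tau` are assumed parameter containers mirroring the symbols above.

```python
e = m.addVars(routes, N_T, lb=0.0, name="e")    # energy on arrival at a stop
sup = m.addVars(routes, N_T, lb=0.0, name="s")  # energy drawn while dwelling

for a, route in routes.items():
    for s in N_T:
        m.addConstr(e[a, s] >= theta_l * u_bat[a])                # (6) SOC floor
        m.addConstr(e[a, s] + sup[a, s] <= theta_u * u_bat[a])    # (7) SOC ceiling
        m.addConstr(sup[a, s] <= P_chg[a] * tau[a, s] * y[a, s])  # (9) dwell-time limit
    for (s, t) in route["links"]:                                 # (8) energy balance
        m.addConstr(e[a, t] == e[a, s] + sup[a, s] - e0[a] * dist[s, t])
```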
To determine the actual power loss in the power grid after incorporating on-route charging stations, we can utilize a branch flow model as described in Farivar and Low [37]:
\[s_{j} =\sum_{k:j\to k}S_{jk}-\sum_{i:i\to j}(S_{ij}-z_{ij}\ell_{ij}),\ \ \forall(i,j)\in\mathcal{E}, \tag{10}\] \[v_{j} =v_{i}-2(r_{ij}P_{ij}+x_{ij}Q_{ij})+(r_{ij}^{2}+x_{ij}^{2})\cdot \ell_{ij},\ \ \forall(i,j)\in\mathcal{E},\] (11) \[\ell_{ij} =\frac{P_{ij}^{2}+Q_{ij}^{2}}{v_{i}},\ \ \forall(i,j)\in\mathcal{E}, \tag{12}\]
where \(s_{j}\) represents the power injection at power grid node \(j\). \(S_{ij}\) denotes the sending-end power flow from node \(i\) to \(j\), given by \(S_{ij}=P_{ij}+\mathbf{i}Q_{ij}\). \(z_{ij}\) is the impedance of line \((i,j)\), represented as \(z_{ij}=r_{ij}+\mathbf{i}x_{ij}\). \(\ell_{ij}\) represents the square of the magnitude of the complex current from node \(i\) to \(j\), while \(v_{j}\) represents the square of the magnitude of the complex voltage at node \(j\). The resistance and reactance of line \((i,j)\) are represented by \(r_{ij}\) and \(x_{ij}\), respectively. Furthermore, the real power flow from node \(i\) to node \(j\) is denoted as \(P_{ij}\), and \(Q_{ij}\) signifies the reactive power flow between these nodes.
The power injection at a power grid node in (10) consists of two components: the charging power from integrated on-route charging stations, if applicable, and the original load demand:
\[s_{i}=-\sum_{m\in N^{T}}\sum_{a\in\Omega_{a}}P_{a}^{e}\cdot y_{a,m}\cdot\Psi_ {i,m}-s_{i}^{load},\ \ \forall i\in N^{G}, \tag{13}\]
where \(s_{i}^{load}\) is the original load demand of power node \(i\).
For the reliable and safe operation of the power grid after integrating on-route charging stations, it is crucial to maintain both the voltage and current within a specific range:
\[\underline{v_{i}}\leq v_{i}\leq\overline{v_{i}},\ \ \forall i\in N^{G}, \tag{14}\] \[0\leq\ell_{ij}\leq\overline{\ell_{ij}},\ \ \forall(i,j)\in \mathcal{E}, \tag{15}\]
where \(\underline{v_{i}}\) and \(\overline{v_{i}}\) denote the lower and upper bound of the square of the node voltage, respectively. \(\overline{\ell_{ij}}\) represents the maximum square of the current in line \((i,j)\).
Finally, we ensure that all binary and integer decision variables used in the planning model satisfy the following conditions:
\[X_{m} \in\{0,1\},\ \forall m\in N^{T}, \tag{16}\] \[\beta_{m} \in\mathbb{Z},\ \forall m\in N^{T},\] (17) \[y_{a,m} \in\{0,1\},\ \forall a\in\Omega_{a},\forall m\in N^{T},\] (18) \[\Psi_{i,m} \in\{0,1\},\ \forall i\in N^{G},\forall m\in N^{T}. \tag{19}\]
### Model Relaxations
In order to make the planning model compatible with commercial solvers like Gurobi and CPLEX, certain nonlinear constraints in Section 3.2 need to be relaxed. The first constraint to be handled is (12), due to its quadratic equality. Following the approach proposed by Farivar and Low [37], we relax it into the following second-order cone constraint:
\[\left\|\begin{array}{c}2P_{ij}\\ 2Q_{ij}\\ \ell_{ij}-v_{i}\end{array}\right\|_{2}\leq\ell_{ij}+v_{i},\ \ \forall(i,j)\in\mathcal{E}. \tag{20}\]
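In Gurobi's Python API, the relaxed constraint (20) does not need to be entered in the normed form: a quadratic constraint of the rotated-cone shape \(P_{ij}^{2}+Q_{ij}^{2}\leq\ell_{ij}v_{i}\) is accepted directly when \(\ell_{ij}\) and \(v_{i}\) are nonnegative variables, as in this sketch built on the model objects assumed earlier.

```python
P = m.addVars(E, lb=-GRB.INFINITY, name="P")  # real power flow on each branch
Q = m.addVars(E, lb=-GRB.INFINITY, name="Q")  # reactive power flow
v = m.addVars(N_G, lb=0.0, name="v")          # squared voltage magnitude

for (i, j) in E:
    # Rotated second-order cone: P^2 + Q^2 <= ell * v, equivalent to (20).
    m.addConstr(P[i, j] * P[i, j] + Q[i, j] * Q[i, j] <= ell[i, j] * v[i])
```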
Another non-linearity lies in (13), which involves the product of two binary decision variables \(y_{a,m}\) and \(\Psi_{i,m}\). We introduce an auxiliary variable \(Y_{a,m,i}\) to replace the product. Constraint (13) is then reformulated as follows:
\[s_{i}=-\sum_{m\in N^{T}}\sum_{a\in\Omega_{a}}P_{a}^{e}\cdot Y_{a,m,i}-s_{i}^{ load},\ \ \forall i\in N^{G}. \tag{21}\]
To ensure the consistency between the auxiliary variable \(Y_{a,m,i}\) and the product of \(y_{a,m}\) and \(\Psi_{i,m}\), we introduce additional constraints:
\[Y_{a,m,i} \leq y_{a,m},\ \ \forall a\in\Omega_{a},\forall m\in N^{T},\forall i \in N^{G}, \tag{22}\] \[Y_{a,m,i} \leq\Psi_{i,m},\ \ \forall a\in\Omega_{a},\forall m\in N^{T},\forall i \in N^{G},\] (23) \[Y_{a,m,i} \geq y_{a,m}+\Psi_{i,m}-1,\ \ \forall a\in\Omega_{a},\forall m\in N^{T}, \forall i\in N^{G},\] (24) \[Y_{a,m,i} \in\{0,1\},\ \forall a\in\Omega_{a},\forall m\in N^{T},\forall i \in N^{G}. \tag{25}\]
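These standard product-linearization inequalities are straightforward to emit in code; a sketch under the same assumed model objects:

```python
# Y[a, s, i] stands in for the product y[a, s] * Psi[i, s] in (21).
Y = m.addVars(routes, N_T, N_G, vtype=GRB.BINARY, name="Y")

for a in routes:
    for s in N_T:
        for i in N_G:
            m.addConstr(Y[a, s, i] <= y[a, s])                  # (22)
            m.addConstr(Y[a, s, i] <= Psi[i, s])                # (23)
            m.addConstr(Y[a, s, i] >= y[a, s] + Psi[i, s] - 1)  # (24)
```

Solvers such as Gurobi also expose general AND constraints for products of binaries, but the explicit inequalities above keep the formulation solver-agnostic.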
Consequently, the entire planning model is reformulated as a MISOCP problem:

\[\begin{split}\min&\sum_{m\in N^{T}}(f_{s,m}\cdot X_{m}+f_{c,m}\cdot\beta_{m})+\sum_{m\in N^{T}}\sum_{i\in N^{G}}c_{i,m}\cdot\Psi_{i,m}\\&+\sum_{(i,j)\in\mathcal{E}}T\cdot c_{e}\cdot\ell_{ij}\cdot r_{ij}\\ \text{s.t.}&\ (2)-(11),(14)-(25).\end{split} \tag{26}\]
### Fairness Measures
In our model, we utilize Jain's index [38] as a measure of fairness in the planning of on-route charging stations for BEBs. Jain's index possesses several desirable properties, including population size independence, scale and metric independence, boundedness, and continuity. If we divide the planning area in South King County into \(H\) areas and assign an allocation of \(w_{h}\) to the \(h\)th area, then the expression for Jain's index can be given as follows:
\[f(w)=\frac{(\sum_{h=1}^{H}w_{h})^{2}}{H\sum_{h=1}^{H}w_{h}^{2}}. \tag{27}\]
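Jain's index is a one-line computation; the toy allocations below are ours, chosen only to show the boundedness property.

```python
def jain_index(w):
    """Jain's fairness index (27) for an allocation vector w."""
    return sum(w) ** 2 / (len(w) * sum(x * x for x in w))

print(jain_index([0.3, 0.3, 0.3]))  # 1.0   -> perfectly fair allocation
print(jain_index([0.9, 0.1, 0.0]))  # ~0.41 -> one area dominates, near 1/H
```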
To determine the fairness index \(w_{h}\) in our planning model, we must consider the impact of a zero-emission fleet on the residents of King County. Figure 1 emphasizes the southern regions, marked by the red bus routes and dark-shaded areas, which are disproportionately affected by air pollution and inadequate transit service. The deployment of zero-emission buses in these areas would provide significant equity benefits. Hence, the fairness index \(w_{h}\) should reflect the reduction in air pollution and the improvement in traditional bus services resulting from our BEB planning efforts.
A viable approach to establishing such a fairness index would be to consider the proportion of BEB routes within a specific area relative to all bus routes. In Figure 2, we present a simplified transportation network comprising 14 nodes and 19 directed links. These links are divided into three distinct areas, with the assumption that links crossing multiple areas are evenly distributed. We define the fairness index \(w_{h}\) as the ratio of the total length of BEB routes to the total length of all bus routes within each area. The ratios for BEB routes in areas 1, 2, and 3 are denoted as \(w_{1}\), \(w_{2}\), and \(w_{3}\) respectively, and can be computed using the following equations:
\[w_{1} =\frac{d_{2}+d_{4}+d_{5}+d_{6}+0.5d_{7}+0.5d_{9}}{d_{1}+d_{2}+d_{ 3}+d_{4}+d_{5}+d_{6}+0.5d_{7}+0.5d_{8}+0.5d_{9}},\] \[w_{2} =\frac{0.5d_{9}+d_{10}}{0.5d_{8}+0.5d_{9}+d_{10}+d_{11}+d_{12}+0.5 d_{13}+0.5d_{16}},\] \[w_{3} =\frac{0.5d_{7}+d_{14}+d_{15}+d_{18}}{0.5d_{7}+0.5d_{13}+d_{14}+d_ {15}+0.5d_{16}+d_{17}+d_{18}+d_{19}}, \tag{28}\]
where \(d_{l}\) (\(l=1,\cdots,19\)) denotes the driving distance along each link \(l\) of the bus route, as shown in the figure. This approach ensures that the fairness index \(w_{h}\) within each area remains independent of other factors, including population density, the number of routes, or the size of the area.
To explicitly express the fairness index as a percentage of BEB routes in our planning model, we introduce a new binary decision variable \(I_{a}\). Here, \(I_{a}=1\) indicates that bus route \(a\) is selected as a BEB route. Once all directed links \((m,n)\in L\) are assigned to the \(H\) areas, we define \(L_{A}^{h}\) as the set of all links in the \(h\)th area. Consequently, we have the following relationship:
\[w_{h}=\frac{\sum_{a\in\Omega_{a}}\sum_{(m,n)\in L_{A}^{h}\cap L_{a}}d_{mn}\cdot I_{a}}{\sum_{(m,n)\in L_{A}^{h}}d_{mn}},\ \forall h=1,\ldots,H. \tag{29}\]
Considering the bounded nature of Jain's index as defined in (27), we can observe that \(f(w)\) satisfies the inequality \(\frac{1}{H}\leq f(w)\leq 1\). As the value of \(f(w)\) increases, the fairness level also increases, reaching maximum fairness when \(f(w)=1\) (100% fair). To ensure a desired fairness level, we introduce a constraint as follows:
\[f(w)=\frac{(\sum_{h=1}^{H}w_{h})^{2}}{H\sum_{h=1}^{H}w_{h}^{2}}\geq\eta, \tag{30}\]
where \(\eta\) denotes a predetermined fairness level of the planning result, constrained to be between \(\frac{1}{H}\) and 1.
Note that the quadratic term in (30) results in non-linearity, which can be reformulated in the following manner:
\[\sum_{h=1}^{H}w_{h}^{2}\leq\frac{1}{H\cdot\eta}(\sum_{h=1}^{H}w_{h})^{2}. \tag{31}\]
This inequality constraint can be further rewritten as a second-order cone constraint:
\[\left\|\begin{matrix}w_{1}\\ \vdots\\ w_{H}\end{matrix}\right\|_{2}\leq\sum_{h=1}^{H}\sqrt{\frac{1}{H\cdot\eta}}w_{h}. \tag{32}\]
The introduction of \(I_{a}\) necessitates the reformulation of certain constraints in Section 3.2. First, it enables us to quantify the initial energy \(e_{a,o_{a}}\) stored in all BEB batteries as below:

\[e_{a,o_{a}}=\theta_{0}\cdot u_{a}^{bat}\cdot I_{a},\ \forall a\in\Omega_{a}. \tag{33}\]
Figure 2: Illustration of the definition of fairness index \(w_{h}\).
In a similar fashion, we can redefine constraints (6)-(8) as follows:
\[e_{a,m} \geq\theta^{l}\cdot u_{a}^{bat}\cdot I_{a},\ \ \forall a\in\Omega_{a},\forall m\in N^{T}, \tag{34}\] \[e_{a,m}+s_{a,m} \leq\theta^{u}\cdot u_{a}^{bat}\cdot I_{a},\ \ \forall a\in\Omega_{a},\forall m\in N^{T},\] (35) \[e_{a,n}=e_{a,m}+s_{a,m}-e_{a}^{0}\cdot d_{mn}\cdot I_{a},\ \ \forall a\in\Omega_{a},\forall(m,n)\in L_{a}. \tag{36}\]
Regarding BEB routes, when \(I_{a}=1\), constraints (34)-(36) are equivalent to (6)-(8). On the other hand, for non-BEB routes where \(I_{a}=0\), we have \(e_{a,o_{a}}=e_{a,m}=s_{a,m}=0\).
Furthermore, additional constraints need to be incorporated to account for the new decision variable \(I_{a}\), which ensures that buses can only charge if their routes are designated for BEBs:
\[y_{a,m}\leq I_{a},\ \ \forall a\in\Omega_{a},\forall m\in N^{T}. \tag{37}\]
Conversely, if a bus on route \(a\) never charges en route, the route is not designated for BEBs:
\[I_{a}\leq\sum_{m\in N^{T}}y_{a,m},\ \ \forall a\in\Omega_{a}. \tag{38}\]
Considering the budget limitations associated with constructing on-route charging facilities, we impose an upper limit on the number of BEB routes:
\[\sum_{a\in\Omega_{a}}I_{a}\leq I_{\max}, \tag{39}\]
where \(I_{\max}\) represents the maximum number of BEB routes to be invested in during the planning period. We also introduce the following binary restriction:
\[I_{a}\in\{0,1\},\ \ \forall a\in\Omega_{a}. \tag{40}\]
As a result, the model formulation that takes fairness measurement into consideration remains a MISOCP problem:
\[\begin{split}\min&\sum_{m\in N^{T}}(f_{s,m}\cdot X_{m}+f_{c,m}\cdot\beta_{m})+\sum_{m\in N^{T}}\sum_{i\in N^{G}}c_{i,m}\cdot\Psi_{i,m}\\&+\sum_{(i,j)\in\mathcal{E}}T\cdot c_{e}\cdot\ell_{ij}\cdot r_{ij}\\ \text{s.t.}&\ (2)-(5),(9)-(11),(14)-(25),(29),(32)-(40).\end{split} \tag{41}\]
## 4 Coupled Network Framework
As outlined in Section 1.1, the electrification of bus fleets involves two types of coupled power and transportation systems. Our study adopts the second type, whereby a real-world transportation network in South King County is integrated with a virtual distribution power network. Initially, we acquired the transportation map of the potential electric bus routes in South King County, which served as the basis for designing a virtual power network. Subsequently, our study aimed to establish a practical connection between these two networks.
It is important to highlight that our coupled network framework differs from existing research studies [22, 23] in that we did not assume an equivalence between the transportation links and the power line lengths in the coupled system. Instead, we determine the transportation link based on the driving distance between two transportation nodes, while the length of the power line is determined by the straight-line distance between power grid nodes. As illustrated in Figure 3, our approach involves constructing on-route charging stations at existing bus stations that represent transportation nodes. And the newly-added power lines invested in (1), depicted as red dashed lines, facilitate the functional coupling between the transportation network and the power network. These power lines efficiently transmit electrical energy, ensuring a stable and reliable bus service across the entire transportation network. This approach enhances the practical significance of the links in both networks, enabling more accurate and efficient calculations in the planning problem.
### Transportation Network Design
We construct a transportation network using the electric bus routes in South King County, with the on-route bus stations serving as transportation nodes and the route segments serving as transportation links within the network. To identify the potential routes, we refer to Appendix C of the King County Transit report [13] and exclude the non-operational bus routes. The remaining routes, including 22, 101, 102, 111, 150, 153, 156, 168, 177, 181, 182, 183, 187, 190, and 193, are used to establish the transportation network. This network is derived from the general transit feed specification (GTFS) data [39].
When abstracting the transportation network from the intricate bus route map, it is essential to consider that using all bus stations along the identified bus routes as nodes in the transportation network may not be feasible due to computational complexity. Therefore, we have developed a set of rules for selecting the nodes from the available bus stations. These rules are designed to take into account practical considerations and are as follows:
1. When selecting nodes, priority is assigned to common bus stops that are connected to multiple bus routes, as they have a greater impact on the transportation network.
2. For each bus route, the origin station and terminal station are identified as nodes in the transportation network.
3. A distance threshold is set starting from the origin station. During the node selection process, we assess the distance between the current bus station and the previously selected node. If this distance exceeds the threshold, we incorporate the station preceding the current bus station as a new node within the transportation network.
4. The service loop of a bus is also taken into account. Bus stations located on opposite sides of the street are considered separate nodes if both are selected.

Figure 3: Functional interconnection of nodes in the power and transportation networks.
```
Data: S_all, S_a, d_{0,s}, and d_theta^T
Result: selected transportation nodes N^T
 1  N^T ← ∅
 2  S_count ← an array of zeros with length |S_all|
 3  for α ∈ Ω_a do
 4      N^T ← N^T ∪ {S_a[0]}                       /* selection rule 2) */
 5      N^T ← N^T ∪ {S_a[-1]}
 6      Δd ← 0
 7      for s ∈ S_a do
 8          S_count[s] ← S_count[s] + 1
 9          if d_{0,s} > Δd + d_theta^T then       /* selection rules 3) & 4) */
10              if d_{0,s} - d_{0,s^-} > d_theta^T then
11                  N^T ← N^T ∪ {s^-, s}
12                  Δd ← d_{0,s}
13              else
14                  N^T ← N^T ∪ {s^-}
15                  Δd ← d_{0,s^-}
16              end if
17          end if
18      end for
19  end for
20
21  for s ∈ S_all do                               /* selection rule 1) */
22      if S_count[s] > n_count then
23          N^T ← N^T ∪ {s}
24      end if
25  end for
26  return N^T
```
**Algorithm 1** Transportation Node Selection
Algorithm 1 outlines the methodology for selecting transportation nodes from all bus stations to identify potential locations for building on-route charging stations. The set of all bus stations from the 15 identified bus routes is denoted as \(S_{\text{all}}\), while \(S_{a}\) represents on-route bus stations along a specific route \(\alpha\). We represent the initial and final bus stations of route \(\alpha\) as \(S_{a}[0]\) and \(S_{a}[-1]\), respectively. Notably, \(S_{a}[0]\) corresponds to the origin station \(o_{a}\) defined in Section 2.2. The current and preceding bus station IDs are designated as \(s\) and \(s^{-}\) respectively. Additionally, the number of occurrences of each bus station for all identified bus routes is recorded in \(S_{\text{count}}\). \(d_{0,s}\) indicates the cumulative driving distance of bus station \(s\) from its origin station, and \(\Delta d\) is the cumulative driving distance of the previously selected transportation node from its origin station. The distance threshold set in selection rule 3) is represented by \(d_{\theta}^{T}\), while \(n_{\text{count}}\) denotes the minimum number of bus routes that will be served by the common bus stop as defined in selection rule 1).
The algorithm begins by initializing the selected transportation node-set and the occurrence of each station in lines 1-2. Then, lines 4-5 iterate over all routes, adding the origin and terminal bus stations to the \(N^{T}\) set as requested by selection rule 2). For each bus station \(s\) along the route, its occurrence count is incremented by 1 in line 8, and lines 9-10 compare the distance between the current bus station and the previously selected node to the distance threshold \(d_{\theta}^{T}\). If the distance exceeds the threshold, the algorithm selects the station before the current bus station as a new node in the transportation network, as requested by selection rule 3).
Once the occurrences of all bus stations have been counted, lines 21-25 check if the number of bus routes that serve the station \(s\) exceeds \(n_{\text{count}}\). If so, the station is added to the \(N^{T}\) set in accordance with selection rule 1). The algorithm also considers selection rule 4) by using the parameter \(S_{a}\), which collects the bus stations along a round trip journey of bus route \(\alpha\) in the order of their actual driving cycle.
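A compact Python rendering of Algorithm 1 is sketched below; the input containers (`routes` mapping a route id to its ordered stop list, and `d0` holding cumulative driving distances) are assumed interfaces, not the released implementation.

```python
from collections import Counter

def select_transport_nodes(routes, d0, d_th, n_count):
    """Sketch of Algorithm 1: pick candidate stops for on-route chargers."""
    nodes, seen = set(), Counter()
    for a, stops in routes.items():
        nodes.update({stops[0], stops[-1]})            # rule 2): origin and terminal
        delta, prev = 0.0, stops[0]
        for s in stops:
            seen[s] += 1                               # occurrence count for rule 1)
            if d0[a, s] > delta + d_th:                # rules 3) & 4)
                if d0[a, s] - d0[a, prev] > d_th:
                    nodes.update({prev, s}); delta = d0[a, s]
                else:
                    nodes.add(prev); delta = d0[a, prev]
            prev = s
    nodes |= {s for s, k in seen.items() if k > n_count}  # rule 1): common stops
    return nodes
```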
By applying the selection criteria and algorithm, we have effectively generated a transportation network, as depicted in Figure 4. This network comprises the 15 currently available BEB routes, which are indicated by different colors, and the selected bus stations are represented by purple points. In this study, we defined a common bus stop as a station that serves more than three bus routes (\(n_{\text{count}}=3\)). We set the driving distance threshold \(d_{\theta}^{T}\) to 40,000 ft and identified a total of 84 bus stations as network nodes, each with the potential to accommodate on-route charging station installations.

Figure 4: Transportation network of 15 BEB routes in South King County.
### Coupled Virtual Power Network Design
Using the transportation network depicted in Figure 4 as a foundation, we construct a virtual power network that aligns geographically with the selected bus stations. However, the distance between two adjacent bus stations is usually much shorter than the distance between two power grid nodes in reality. Therefore, we need to further refine the selection of power grid nodes from the 84 transportation nodes in Section 4.1. For clustered bus stations, we connect them to a single node in the virtual power network.
To accomplish this, we calculate the geographical distance between transportation nodes using their latitude and longitude coordinates and subsequently define a distance threshold. If the distance between any two nodes within a cluster of transportation nodes is below the threshold, only one node from the cluster will be selected as the corresponding power grid node.
As illustrated in Figure 5, the pairwise geographical distances between transportation nodes \(ab\), \(ac\), and \(bc\) are below the threshold, while the distances between node \(d\) and nodes \(a,b\), or \(c\) exceed the threshold. Accordingly, we choose bus stations \(a\) and \(d\) as power grid nodes \(A\) and \(D\) in the power network, respectively. These nodes occupy the same geographical location in both the transportation and power networks.
We develop Algorithm 2 to select the power nodes from the set of transportation nodes \(N^{T}\), where the distance between any two power nodes is at least the threshold \(d_{\theta}^{G}\). The algorithm employs an empty list \(N_{\text{visit}}\) to collect the transportation nodes that fall within the specified distance threshold from the selected power nodes. These transportation nodes, which are added to \(N_{\text{visit}}\), constitute the cluster nodes associated with the selected nodes identified as new power nodes in the transportation network. The function \(\mathcal{D}(\cdot,\cdot)\) is used to calculate the geographical distance between two nodes based on their latitude and longitude coordinates.
The algorithm starts by looping through each transportation node in \(N^{T}\). If the current node has already been identified in \(N_{\text{visit}}\), the algorithm skips it and moves on to the next node. Otherwise, it adds the current node to the power node set \(N^{G}\) in line 5 and loops through each subsequent node.
For each subsequent node, the algorithm checks if the distance between the current selected power node and the subsequent node is less than the threshold \(d_{\theta}^{G}\) in line 8. If it is, the algorithm marks the subsequent node as visited by adding it to \(N_{\text{visit}}\) in line 9. If it is not, the algorithm continues to the next subsequent node. In this way, the algorithm can obtain the set of power nodes \(N^{G}\).
To plot the topology of the power network, we need to establish the branches between the selected power nodes. Since the distribution power network typically operates in a radial topology, we aim to create a radial topology of the power network while minimizing the total length of power lines to reduce costs. To achieve this, we use the minimum spanning tree (MST) algorithm, such as Prim's algorithm [40] or Kruskal's algorithm [41], to determine the links between power nodes.
```
Data: selected transportation nodes N^T
Result: selected power nodes N^G
 1  N^G ← ∅
 2  N_visit ← ∅
 3  for m ← 1 to |N^T| do
 4      if N^T[m] ∉ N_visit then
 5          N^G ← N^G ∪ {N^T[m]}
 6          for n ← m+1 to |N^T| do
 7              if N^T[n] ∉ N_visit then
 8                  if D(N^T[m], N^T[n]) < d_theta^G then
 9                      N_visit ← N_visit ∪ {N^T[n]}
10                  end if
11              end if
12          end for
13      end if
14  end for
15  return N^G
```
**Algorithm 2** Power Node Selection
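A Python sketch of Algorithm 2, again with assumed inputs (`coords` holding latitude/longitude pairs and `geo_dist` a haversine-style distance helper):

```python
def select_power_nodes(trans_nodes, coords, d_th_g, geo_dist):
    """Sketch of Algorithm 2: one power node per cluster of nearby stops."""
    power_nodes, visited = [], set()
    for idx, m_node in enumerate(trans_nodes):
        if m_node in visited:
            continue
        power_nodes.append(m_node)                     # line 5: keep as a power node
        for n_node in trans_nodes[idx + 1:]:
            # line 8: distance test against the clustering threshold
            if n_node not in visited and \
                    geo_dist(coords[m_node], coords[n_node]) < d_th_g:
                visited.add(n_node)                    # line 9: absorb into cluster
    return power_nodes
```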
We begin by constructing a graph with the selected power nodes, where each node represents a power node, and the edges between nodes represent potential power lines. The geographical distance between all pairs of nodes is calculated and added as an edge weight to the graph. We then apply the MST algorithm to the graph to find the minimum spanning tree, which identifies the subset of edges that connect all the power nodes with the lowest possible total edge weight. Finally, we visualize the power network topology by plotting the graph and the minimum spanning tree. The NetworkX library in Python is used to implement the above steps, and the pseudocode is omitted for brevity.

Figure 5: Illustration of node selection rules for constructing the virtual power network.
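For completeness, the omitted MST step might look roughly like this with NetworkX; `power_nodes`, `coords`, and `geo_dist` are the assumed objects from the sketches above.

```python
import itertools
import networkx as nx

# Complete graph over the selected power nodes, weighted by distance.
G = nx.Graph()
for u, v in itertools.combinations(power_nodes, 2):
    G.add_edge(u, v, weight=geo_dist(coords[u], coords[v]))

mst = nx.minimum_spanning_tree(G, algorithm="kruskal")  # radial topology
branches = list(mst.edges())  # the candidate set of power-line branches
```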
As shown in Figure 6, there is a geographic overlap between the nodes of the power network and a subset of nodes in the transportation network, thereby forming the coupled network. Notably, by setting the geographical distance threshold \(d_{\theta}^{G}\) to 2 km in Algorithm 2, we have identified 24 power grid nodes from the initial pool of 84 transportation nodes. The node selection rules and algorithms utilized in the design of the transportation network and virtual power grid can be extended to other bus systems, highlighting the versatility and applicability of this framework. For ease of access and further exploration, we have made the complete implementation code of our coupled network framework available on our GitHub repository [42].
## 5 Case Studies
In this section, we will execute the planning model presented in Section 3 on the coupled networks established in Section 4. Initially, we will run the planning model without incorporating any fairness constraints to examine the variations in planning metrics based on various initial SOC levels for BEBs. Subsequently, we will rerun the planning model with fairness considerations to ensure a high degree of horizontal or vertical equity in the planning outcomes. For equity analysis, the census tracts traversed by all 15 bus routes will be partitioned into distinct subareas based on different socio-demographic characteristics.
### Parameter Settings
The topology of the 84-node transportation network is illustrated in Figure 7, and the diagram of the 110 kV high-voltage distribution network is shown in Figure 8. For the parameters of the distribution network, please refer to [43]. The resistance and reactance of each power line reflect the actual geographical distance between its connecting power nodes. The coupling relationship between the power and transportation network nodes is summarized in Table 1.

In the objective function (1), the fixed cost of constructing each charging station is represented by \(f_{s,m}\) and assumed to be $200,000 at all bus stations. The fixed cost of building each pile is denoted by \(f_{c,m}\) and set at $25,000. Overhead power line costs are estimated to be $390,000 per mile. The construction cost of power lines \(c_{i,m}\) is determined based on the geographical distance between the power node \(i\) and bus station \(m\), using the unit cost mentioned above. For the planning period, the power loss time \(T\) is assumed to be 15 hours per day. Furthermore, the electricity cost for BEBs is taken to be $0.20/kWh, which is based on the average rate paid by King County Metro for electricity [44].

Table 1: Geographically Overlapping Nodes between the Transportation Network and the Distribution Power Network

| Power Node | Trans Node | Latitude | Longitude | Power Node | Trans Node | Latitude | Longitude |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | – | 47.6187107 | -122.3306 | 13 | 39 | 47.2969099 | -122.24944 |
| 2 | 7 | 47.545311 | -122.38711 | 14 | 41 | 47.315258 | -122.17787 |
| 3 | 8 | 47.516553 | -122.3769 | 15 | – | 47.4643172 | -122.27198 |
| 4 | 10 | 47.593438 | -122.83096 | 16 | 46 | 47.468002 | -122.17017 |
| 5 | 17 | 47.3099785 | -122.36103 | 17 | 49 | 47.616432 | -122.14659 |
| 6 | 19 | 47.478775 | -122.20813 | 18 | 50 | 47.312761 | -122.30338 |
| 7 | 20 | 47.387194 | -122.30814 | 19 | 53 | 47.296532 | -122.32065 |
| 8 | 21 | 47.3594251 | -122.29468 | 20 | 55 | 47.3051619 | -122.01903 |
| 9 | 22 | 47.437902 | -122.3423 | 21 | 56 | 47.350878 | -122.14958 |
| 10 | 26 | 47.4877625 | -122.14824 | 22 | 57 | 47.365775 | -122.10149 |
| 11 | 31 | 47.3848267 | -122.2327 | 23 | 69 | 47.5571404 | -122.18928 |
| 12 | 37 | 47.4413147 | -122.24831 | 24 | 76 | 47.5722656 | -122.32739 |

Figure 8: Representation of network topology for the 24-node distribution power network.

Figure 6: Coupled power and transportation network in South King County.

Figure 7: Representation of network topology for the 84-node transportation network.
To guarantee the efficient performance of BEBs, we have established lower and upper limits for the SOC of the bus batteries at 10% and 90%, respectively. In our planning problem, we consider two types of coaches for BEBs: 40-ft and 60-ft. The coach type for each bus route is provided in Appendix C of the King County Transit report [13]. We specifically utilize the 40-ft BYD K9M and 60-ft BYD K11M BEB models, which have battery capacities of 313 kWh and 578 kWh, respectively. The estimated average energy consumptions for 40-ft and 60-ft buses are 1.99 kWh/mile and 3.74 kWh/mile, as indicated in the sources [45, 46]. According to the official published specifications from BYD [47, 48], the nominal charging powers for K9M and K11M electric buses are 150 kW and 200 kW, respectively. Moreover, we have set a maximum waiting time of 12 minutes at each station to ensure that BEBs receive timely energy support.

As a benchmark for the model scale, we employed Gurobi [49] to solve the MISOCP planning problem on a laptop equipped with a 2.4 GHz Quad-Core Intel Core i5 processor and 8 GB of memory. Using a relative optimality tolerance of \(1.00\times 10^{-4}\), the model can be solved within one minute.
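In gurobipy, the reported tolerance corresponds to setting the relative MIP gap before optimizing, roughly as sketched here on the model objects assumed earlier:

```python
m.Params.MIPGap = 1e-4   # relative optimality tolerance used above
m.optimize()

if m.Status == GRB.OPTIMAL:
    built = [s for s in N_T if X[s].X > 0.5]   # selected charging stations
    print(f"{len(built)} stations, objective = {m.ObjVal:,.0f}")
```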
### Fairness Zone Division
Jain's index, as defined in (27), serves as a metric for assessing transit equity across multiple areas. To evaluate fairness in the BEB planning area of South King County with Jain's index, we need to partition the region into several distinct areas. Notably, census tracts, which offer stable and relatively permanent geographic units for statistical data presentation [50], are well-suited for this purpose. Previous studies on transit equity have conducted analyses at the census tract level [51, 52], considering the social demographic characteristics associated with each tract. Therefore, we will evaluate the fairness of our planning outcome based on census tracts. However, considering the small size of individual census tracts, they may not contain a sufficient number of bus routes for meaningful equity analysis. To address this issue, we will merge census tracts into larger subareas based on two specific criteria related to different dimensions of transit equity.
#### 5.2.1 Population-Based Census Tract Merging
The map of the census tract polygons, obtained from King County geographic information system open data [53], is presented in Figure 9. Our focus lies primarily on census tracts that are intersected by bus routes, as the residents in these areas are more vulnerable to the impacts of air pollution and the bus services associated with these routes.
To prioritize the development of horizontally equitable bus services, we aim to merge these census tracts into larger areas with comparable total populations and an adequate number of bus routes. By doing so, the fairness constraint (30) will encourage a balanced distribution of BEB resources across the merged subareas, ensuring that the BEB route ratio (\(w_{h}\)) for each resident in any of these areas is similar to that of residents in other areas. This approach allows us to analyze horizontal equity in the BEB planning outcomes by merging census tracts into larger subareas based on their respective populations.
As the majority of the bus routes in this region run from north to south, we have horizontally arranged the four merged census tracts, as depicted in Figure 10. We obtained the population data for each tract from the consolidated demographics index for King County census tracts [54]. The resulting merged areas have been designated as Zone 1, Zone 2, Zone 3, and Zone 4, ordered from left to right. To achieve a balanced population distribution across the merged areas, Zone 1, Zone 2, Zone 3, and Zone 4 have respective populations of 173,501, 179,359, 176,994, and 165,661 people. The bus routes passing through each subarea are as follows: Zone 1 includes routes 22, 101, 102, 111, 150, 156, 177, 181, 187, 190, and 193. Zone 2 includes routes 156, 177, 181, 182, 183, 187, 190, and 193. Zone 3 includes routes 101, 102, 150, 153, 156, 168, 181, 183, and 193. Zone 4 includes routes 102, 111, 168, and 181.

Figure 9: Illustration of the census tract map in South King County.

Figure 10: Map of 4 subareas formed by aggregating census tracts based on the population feature.
#### 5.2.2 Bus-Commuter-Based Census Tract Merging
In contrast to the population-based method in Section 5.2.1, the evaluation of vertically equitable bus services focuses on a specific community that is particularly sensitive to bus service quality: bus commuters. Rather than considering the benefits of BEBs for all residents, we merge census tracts into subareas based on the total number of workers who rely on buses for their daily commute. By creating subareas with similar bus-commuter populations, we aim to distribute the benefits of BEBs more evenly among this specific group. To gather the necessary information, we refer to the US census American community survey data table for the "journey to work" subject area [55]. Identifying the bus-commuter community as a group heavily reliant on bus services, we incorporate their specific needs into our vertical analysis by enforcing the fairness constraint (30) to ensure that all bus commuters in these subareas have equitable access to BEB routes.
We have merged the census tracts into three subareas, ensuring an equitable distribution of the bus-commuter population among each subarea, as illustrated in Figure 11. These subareas are named Region 1, Region 2, and Region 3, arranged from left to right. The Seattle downtown area, located in the top left corner of the figure, contains a significant concentration of bus commuters. As a result, Region 1, although relatively smaller in size, has a similar number of bus commuters compared to the other two regions. Specifically, Region 1, Region 2, and Region 3 have 11,658, 11,985, and 11,526 bus commuters.
In Region 1, the following 10 bus routes pass through: 22, 101, 102, 111, 150, 177, 181, 187, 190, and 193. Region 2 comprises 12 routes: 101, 102, 111, 150, 156, 177, 181, 182, 183, 187, 190, and 193. Region 3 consists of 12 bus routes: 101, 102, 111, 150, 153, 156, 168, 177, 181, 183, 190, and 193. Each subarea has its unique set of bus routes, with Region 1 featuring route 22, Region 2 having route 182, and Region 3 having routes 153 and 168 that are not found in the other subareas.
### Planning Results without Fairness
Assuming a 10-year planning period, we begin by solving the planning model (26) without incorporating any fairness constraints. Table 2 provides information on the frequency of on-route charging required to sustain the round trips for all 15 bus routes. The table illustrates that as the initial SOC of the BEB batteries increases, the overall number of required charging sessions decreases. Notably, once the initial SOC exceeds 50%, all BEBs are capable of completing their round trips without the need for on-route charging. This finding highlights the importance of ensuring that BEBs are charged to adequate SOC levels prior to departure, which can help to optimize their operational efficiency and minimize the need for additional charging infrastructure.
Table 2: On-Route Charging Frequency per Round-Trip for Each Bus Route at Different Departure SOC \(\theta_{0}\)

| Route | Round-trip Length (mile) | Charging Power (kW) | \(\theta_{0}=0.1\) | 0.2 | 0.3 | 0.4 | \(\geq 0.5\) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 22 | 13.82 | 150 | 1 | 0 | 0 | 0 | 0 |
| 101 | 28.28 | 200 | 3 | 2 | 0 | 0 | 0 |
| 102 | 47.81 | 200 | 5 | 4 | 2 | 1 | 0 |
| 111 | 52.34 | 200 | 5 | 4 | 3 | 1 | 0 |
| 150 | 44.18 | 200 | 5 | 3 | 2 | 0 | 0 |
| 153 | 16.37 | 150 | 2 | 1 | 0 | 0 | 0 |
| 156 | 25.04 | 150 | 2 | 1 | 0 | 0 | 0 |
| 168 | 24.35 | 150 | 2 | 1 | 0 | 0 | 0 |
| 177 | 48.66 | 200 | 5 | 4 | 2 | 1 | 0 |
| 181 | 30.08 | 150 | 2 | 1 | 0 | 0 | 0 |
| 182 | 15.10 | 150 | 1 | 0 | 0 | 0 | 0 |
| 183 | 21.79 | 150 | 2 | 1 | 0 | 0 | 0 |
| 187 | 11.81 | 150 | 1 | 0 | 0 | 0 | 0 |
| 190 | 41.79 | 200 | 4 | 3 | 2 | 0 | 0 |
| 193 | 50.62 | 200 | 5 | 4 | 2 | 1 | 0 |

Figure 11: Map of 3 subareas formed by aggregating census tracts based on the bus-commuter feature.

Table 3 presents the planning results for initial SOC values ranging from 0.1 to 0.4, considering that on-route charging is no longer needed for the 15 BEBs when their initial SOC reaches or exceeds 50%. The table demonstrates that increasing the initial SOC leads to a decrease in the total planning cost. This reduction can be attributed to the decreased charging demand resulting from a higher initial SOC, which in turn reduces the investment required for charging stations and piles. Additionally, the cost of power line investment decreases consistently, as the length of power lines depends on the number of charging stations and the distance between the stations and the power nodes into which they are integrated. When \(\theta_{0}=0.4\), the power line investment becomes zero because the three charging stations are built directly on the power nodes, eliminating the need for extra power lines to connect the stations to the power grid. Moreover, the cost of power loss declines steadily with increasing \(\theta_{0}\), reflecting the fact that BEBs with adequate SOC require less electric power, resulting in lower current flow in the power lines and reduced power loss.

Table 3: Summary of Planning Results without Fairness Consideration When Initial SOC \(\theta_{0}\) Ranges from 0.1 to 0.4

| Planning Metric | \(\theta_{0}=0.1\) | \(\theta_{0}=0.2\) | \(\theta_{0}=0.3\) | \(\theta_{0}=0.4\) |
| --- | --- | --- | --- | --- |
| Number of stations | 27 | 14 | 7 | 3 |
| Number of piles | 45 | 29 | 13 | 4 |
| Total cost | $10,118,392 | $4,329,727 | $2,033,864 | $926,386 |
| Station investment | $5,400,000 | $2,800,000 | $1,400,000 | $600,000 |
| Pile investment | $1,125,000 | $725,000 | $325,000 | $100,000 |
| Power line investment | $3,156,620 | $463,221 | $62,508 | $0 |
| Power loss cost | $436,772 | $341,506 | $246,356 | $226,386 |
Figure 12 displays the siting and sizing outcomes of on-route fast-charging stations, represented by the transportation node ID and the number of charging piles installed at each station. Since each bus route has its dedicated charging piles, we can determine the number of bus routes being charged at each station by counting the corresponding charging piles. When \(\theta_{0}=0.1\), all origin stations of the 15 identified BEBs are included in these 27 charging stations. Notably, transportation node 83, situated in the Industrial District (SODO Busway & S Royal Brougham Way), hosts the highest number of BEB routes charging there, totaling 4 routes. This node serves as an on-route bus stop for 5 BEB routes, which is consistent with selection rule 1) that designates common stops. Additionally, among the nodes with three charging piles, nodes 4 and 9 are origin stations for two bus routes each, while nodes 21 and 77 each serve no fewer than 3 bus routes.

When \(\theta_{0}=0.2\), only one origin station, that of route 153 located at node 31, still requires the construction of a charging station. However, as \(\theta_{0}\) increases to 0.3 and 0.4, none of the origin stations require charging stations, since the BEBs have enough energy to run the first few stops while maintaining a safe SOC. With a larger initial SOC, the number of charging stations decreases markedly, which aligns with the findings in Table 3. From \(\theta_{0}=0.2\) to 0.4, the number of BEB routes requiring on-route charging decreases. At \(\theta_{0}=0.4\), only four routes require on-route charging, as confirmed by the data in Table 2. The nodes with the most charging piles built between \(\theta_{0}=0.2\) and 0.4 are common stops, including nodes 54, 77, 83, and 10. This highlights the importance of building on-route charging stations at stops that serve multiple routes and further validates the effectiveness of selection rule 1) in forming the coupled network.
### Planning Results with Fairness Consideration
In this section, we will maintain the assumption of a 10-year planning period. However, we will now incorporate the fairness measurement (32) into the planning model, as represented by (41). Notably, a maximum of 5 BEB routes (\(I_{\max}=5\)), which is one-third of the total bus routes, will be selected for investment. To include all 15 bus routes as candidate BEB routes, we will set the initial SOC to 0.1 based on the information provided in Table 2. This approach will enable us to evaluate both the horizontal equity of the BEB route ratio across the population-based merged subareas and the vertical equity within the bus-commuter-based merged subareas.
Table 4 presents the planning results of the horizontal equity analysis across four subareas merged based on the population feature. The fairness level \(\eta\) ranges from 0 to 0.99, with a value of 0 rendering (32) inactive. In such cases, the initial fairness index \(f(w)\) is computed while the minimization of the total planning cost is prioritized. As the value of \(\eta\) increases, a stricter rule is imposed on the equitable distribution of BEB routes among the four subareas. Figure 13 illustrates the allocation of these five BEB routes when \(\eta=0.99\), demonstrating a similar proportion of BEB routes to all bus routes in each subarea.

Without considering (32), the initial fairness index achieved through the most cost-effective planning scheme is 0.915800, surpassing the fairness level of 0.9. Therefore, the planning results are identical for fairness levels of 0 and 0.9 in Table 4. However, as the fairness level increases to 0.95, we observe a corresponding rise in the planning cost, primarily due to increased power line investment expenses. This adjustment is necessary to ensure a higher level of fairness, leading to a reconsideration of the five BEB route IDs. Specifically, from fairness levels of 0.9 to 0.95, route 183 replaces route 182 as one of the BEB routes to be invested in.
Table 4: Summary of Planning Results Considering Horizontal Equity with a Maximum Number of BEB Routes \(I_{\max}=5\)

| Planning Metric | \(\eta=0\) | \(\eta=0.9\) | \(\eta=0.95\) | \(\eta=0.99\) |
| --- | --- | --- | --- | --- |
| Number of stations | 7 | 7 | 7 | 10 |
| Number of piles | 8 | 8 | 8 | 11 |
| Total cost | $2,022,265 | $2,022,265 | $2,056,931 | $2,879,586 |
| Station investment | $1,400,000 | $1,400,000 | $1,400,000 | $2,000,000 |
| Pile investment | $200,000 | $200,000 | $200,000 | $275,000 |
| Power line investment | $324,253 | $324,253 | $330,758 | $340,700 |
| Power loss cost | $259,935 | $259,935 | $259,137 | $259,806 |
| Fairness index | 0.915800 | 0.915800 | 0.959993 | 0.992838 |
| BEB route IDs | 182, 187, 156, 153, 22 | 182, 187, 156, 153, 22 | 183, 187, 156, 153, 22 | 182, 190, 156, 153, 22 |
Figure 12: Optimal placement and charging pile allocation result of charging stations without fairness consideration.
At a fairness level of 0.99, the planning model requires three additional charging stations and three more charging piles to accommodate the replacement of routes 187 and 183 with routes 182 and 190 as BEB routes. This expansion of the charging infrastructure leads to increased costs in both power line investments and power loss. The planning model prioritizes fairness by selecting bus routes with higher on-route charging demand to be included as BEB routes, even if it results in higher planning costs. These findings underscore the inherent trade-off between equity and economic efficiency in the planning process. While striving for an equitable distribution of BEB routes, compromises need to be made in terms of increased economic expenses.
Similarly, using the three subareas obtained in Section 5.2.2, we solve (41) again and present the planning results in Table 5. Notably, the planning metrics for \(\eta=0\) in Table 5 and Table 4 are nearly identical, with only slight variations in the calculated fairness index due to the utilization of different subareas. Within the bus-commuter-based merged subareas, the initial fairness index is 0.687317, which falls below the threshold of 0.9. Consequently, the planning outcomes for fairness levels of 0 and 0.9 are no longer the same.
As the fairness level increases from 0 to 0.9, there is a corresponding increase in the planning cost, and an additional charging station is required when \(\eta=0.9\). This adjustment involves replacing route 22 with route 183. When \(\eta\) further increases to 0.95, the investment cost in charging infrastructure remains relatively stable, but there is a significant rise in power line investment. This change can be attributed to the altered locations of the charging stations. Interestingly, at \(\eta=0.99\), although three additional charging stations must be constructed, there is a reduction in the investment required for power lines. This is due to the decreased total distance between the charging stations and the power grid nodes. However, the increase in both power loss costs and investment in charging infrastructure outweighs the savings achieved, leading to the highest planning cost when \(\eta=0.99\).
The distribution of the five BEB routes and the locations of their charging stations within the three bus-commuter-based merged subareas are visualized in Figure 14 for a fairness level of \(\eta=0.99\). Comparing this figure with Figure 13, we can observe that routes 22 and 190 from Figure 13 have been replaced by routes 187 and 177 in Figure 14. This adjustment from horizontal equity to vertical equity results in longer BEB routes (as indicated in Table 2) primarily located in the western portion of the census tracts.
This observation suggests that residents in the western region have a higher reliance on bus transportation, which aligns with the actual transportation landscape. In contrast, the eastern part of King County shows a scarcity of bus routes, indicating that residents in this area must rely on alternative transportation methods, such as household cars, to fulfill their commuting needs.
Figure 14: Planning results reflecting the highest level of vertical equity across 3 subareas with \(\eta=0.99\).
Figure 13: Planning results reflecting the highest level of horizontal equity across 4 subareas with \(\eta=0.99\).
Table 5: Summary of Planning Results Considering Vertical Equity with a Maximum Number of BEB Routes \(I_{\max}=5\)

| Planning Metric | \(\eta=0\) | \(\eta=0.9\) | \(\eta=0.95\) | \(\eta=0.99\) |
| --- | --- | --- | --- | --- |
| Number of stations | 7 | 8 | 8 | 11 |
| Number of piles | 8 | 9 | 9 | 12 |
| Total cost | $2,022,265 | $2,426,550 | $2,825,122 | $2,960,454 |
| Station investment | $1,400,000 | $1,600,000 | $1,600,000 | $2,200,000 |
| Pile investment | $200,000 | $225,000 | $225,000 | $300,000 |
| Power line investment | $324,253 | $330,758 | $729,021 | $178,242 |
| Power loss cost | $259,935 | $270,792 | $271,101 | $282,212 |
| Fairness index | 0.687317 | 0.901361 | 0.986707 | 0.996923 |
| BEB route IDs | 182, 187, 156, 153, 22 | 182, 187, 183, 156, 153 | 182, 187, 183, 156, 153 | 182, 187, 177, 156, 153 |
In Figure 15, the locations of charging stations and the corresponding number of charging piles in each station are depicted for both the population-based and bus-commuter-based merged subareas. The visualization considers fairness levels spanning from 0 to 0.99. It is worth noting that when \(\eta=0\), subplots (1) and (5) exhibit identical patterns. This similarity arises because the fairness constraint (32) is not considered, resulting in planning results that remain consistent across different subareas and prioritize economic efficiency as the primary objective.
As the fairness level increases, variations become apparent between the results obtained from the population-based and bus-commuter-based merged subareas. These differences indicate that the two census-tract merging criteria capture distinct facets of transit equity and have some influence on the placement of charging infrastructure. However, it is important to note that the majority of charging station locations remain consistent across the population-based and bus-commuter-based merged subareas. For instance, when considering a fairness level of 0.99, subplots (9) and (12) illustrate six charging station locations that are identical: nodes 19, 21, 31, 50, 53, and 55. Furthermore, these shared locations feature an equal number of installed charging piles. This finding is significant for decision-makers as it suggests that prioritizing the construction of charging stations in these shared locations would effectively address both horizontal and vertical equity concerns.
The previous planning results with fairness consideration were obtained under the assumption that the budget could support up to five BEB routes. However, we have observed that the initial fairness index of the model is influenced by the choice of the maximum number of BEB routes to be invested. To visualize this relationship, we have created Figure 16, which illustrates the initial fairness index obtained without enforcing fairness constraint (32) plotted against the maximum number of BEB routes. This graphical representation offers valuable insights into how the choice of the maximum number of BEB routes influences the fairness outcomes of the planning model when fairness is not explicitly considered.
The plot exhibits a general trend where the initial fairness index tends to increase as the maximum number of BEB routes increases. However, this trend does not strictly follow a monotonic increase within both the population-based and bus-commuter-based merged subareas. Notably, there is a decrease in the initial fairness index when the maximum BEB routes are set to 9 in the population-based merged subareas and 12 in the bus-commuter-based merged subareas. This finding suggests that even if there is more budget available to build charging infrastructure for BEBs, ignoring fairness constraints and pursuing the most economical planning results can lead to less transit equity. Therefore, it is crucial to account for fairness limitations when implementing BEB planning models.
Figure 15: Optimal placement and charging pile allocation result of charging stations considering both horizontal equity (population-based subareas) and vertical equity (bus-commuter-based subareas).

Figure 16: Impact of varying the maximum number of BEB routes on the initial fairness level in planning results.

## 6 Conclusion

In this paper, a coupled power and transportation network framework is established for the planning of on-route charging infrastructure for BEBs. By integrating charging stations into both networks, we consider not only the investment cost of charging stations and charging piles but also the additional investment in power lines and the increased power loss costs in the power grid. These costs are minimized through a MISOCP formulation. Additionally, we introduce fairness measurements into the planning results using Jain's index, which aligns well with the MISOCP model. This allows decision-makers to customize the level of fairness implemented during different phases of fleet electrification. All experiments in this study were conducted in South King County, a region recognized for being at the forefront of full electrification efforts. This area has been significantly impacted by air pollution, making it a pertinent location for our research.
Without fairness measurements, we compare the planning results under different levels of battery SOC when BEBs depart from origin stations. This analysis assists decision-makers in predicting the need for additional on-route charging infrastructure based on the current on-base charging station condition. Our siting and sizing results indicate that, regardless of the initial SOC of BEB batteries, on-route charging stations are more likely to be located at stops serving multiple routes.
Furthermore, we incorporate a fairness measurement by imposing the fairness constraint in the planning model. By merging census tracts that intersect with bus routes into distinct subareas based on two tract features - the resident population and the population of bus commuters - we are able to measure both horizontal and vertical equity in the planning results. Comparing the planning outcomes under different fairness levels, we observe that a greater emphasis on fairness in the distribution of BEB routes among subareas results in higher planning costs. This information offers valuable insights to decision-makers on how to strike a balance between equity and economic efficiency in fleet electrification planning.
Our framework, which leverages the existing bus route map to create a virtual power network, has the potential to be applied to transportation systems in other cities. Additionally, our MISOCP model and fairness measurements provide practical guidance for allocating budgets and promoting social justice during the step-by-step electrification of bus fleets. In future research, we aim to extend the application of this planning model to larger transit systems and investigate acceleration algorithms to enhance its computational efficiency.
## Acknowledgments
This work is supported by the USDOT Tier 1 University Transportation Center TOMNET and National Science Foundation CMMI #2053373. We would like to extend our special thanks to Dr. Cynthia Chen, Professor at the University of Washington, for her invaluable suggestions throughout this research project.
|
2309.14338 | 3D Indoor Instance Segmentation in an Open-World | Existing 3D instance segmentation methods typically assume that all semantic
classes to be segmented would be available during training and only seen
categories are segmented at inference. We argue that such a closed-world
assumption is restrictive and explore for the first time 3D indoor instance
segmentation in an open-world setting, where the model is allowed to
distinguish a set of known classes as well as identify an unknown object as
unknown and then later incrementally learning the semantic category of the
unknown when the corresponding category labels are available. To this end, we
introduce an open-world 3D indoor instance segmentation method, where an
auto-labeling scheme is employed to produce pseudo-labels during training and
induce separation to separate known and unknown category labels. We further
improve the pseudo-labels quality at inference by adjusting the unknown class
probability based on the objectness score distribution. We also introduce
carefully curated open-world splits leveraging realistic scenarios based on
inherent object distribution, region-based indoor scene exploration and
randomness aspect of open-world classes. Extensive experiments reveal the
efficacy of the proposed contributions leading to promising open-world 3D
instance segmentation performance. | Mohamed El Amine Boudjoghra, Salwa K. Al Khatib, Jean Lahoud, Hisham Cholakkal, Rao Muhammad Anwer, Salman Khan, Fahad Khan | 2023-09-25T17:59:26Z | http://arxiv.org/abs/2309.14338v1 | # 3D Indoor Instance Segmentation in an Open-World
###### Abstract
Existing 3D instance segmentation methods typically assume that all semantic classes to be segmented would be available during training and only seen categories are segmented at inference. We argue that such a closed-world assumption is restrictive and explore for the first time 3D indoor instance segmentation in an open-world setting, where the model is allowed to distinguish a set of known classes as well as identify an unknown object as unknown and then later incrementally learning the semantic category of the unknown when the corresponding category labels are available. To this end, we introduce an open-world 3D indoor instance segmentation method, where an auto-labeling scheme is employed to produce pseudo-labels during training and induce separation to separate known and unknown category labels. We further improve the pseudo-labels quality at inference by adjusting the unknown class probability based on the objectness score distribution. We also introduce carefully curated open-world splits leveraging realistic scenarios based on inherent object distribution, region-based indoor scene exploration and randomness aspect of open-world classes. Extensive experiments reveal the efficacy of the proposed contributions leading to promising open-world 3D instance segmentation performance. Code and splits are available at: [https://github.com/aminebdj/3D-OWIS](https://github.com/aminebdj/3D-OWIS).
## 1 Introduction
3D semantic instance segmentation aims at identifying objects in a given 3D scene, represented by a point cloud or mesh, by providing object instance-level categorization and semantic labels. The ability to segment objects in the 3D domain has numerous vision applications, including robotics, augmented reality, and autonomous driving. Following the developments in the sensors that acquire depth information, a variety of datasets has been presented in the literature which provides instance-level annotations. In view of the availability of large-scale 3D datasets and the advances in deep learning methods, various 3D instance segmentation methods have been proposed in recent years.
The dependence of 3D instance segmentation methods on available datasets has a major drawback: a fixed set of object labels (vocabulary) is learned. However, object classes in the real world are plentiful, and many unseen/unknown classes can be present at inference. Current methods that learn on a fixed set not only discard the unknown classes but also supervise them to be labeled as background. This prevents intelligent recognition systems from identifying unknown or novel objects that are not part of the background. Given the importance of identifying unknown objects, recent works have explored open-world learning setting for 2D object detection [18; 11; 28; 33]. In the open-world setting, a model is expected to identify unknown objects, and once new classes are labeled, the new set is desired to be incrementally learned without retraining [18]. While previous methods have been mostly suggested for open-world 2D object detection, it is yet to be explored
in the 3D domain. The main challenge lies in understanding how objects appear in 3D in order to separate them from the background and other object categories.
3D instance segmentation in the open world, illustrated in Fig. 1, offers more flexibility, allowing the model to identify unknown objects and request annotations for these novel classes from an oracle for further training. However, this approach presents several challenges: (i) the lack of annotations for unknown classes, necessitating quality pseudo-labeling techniques; (ii) the similarities between predicted features of known and unknown classes, requiring separation techniques for improved prediction; and (iii) the need for a more reliable objectness scoring method to differentiate between good and bad predicted masks for 3D point clouds.
In this work, we investigate a novel problem setting, namely open-World indoor 3D Instance Segmentation, which aims at segmenting objects of unknown classes while incrementally adding new classes. We define real-world protocols and splits to test the ability of 3D instance segmentation methods to identify unknown objects. In the proposed setup, unknown object labels are also added incrementally to the set of known classes, akin to real-world incremental learning scenarios. We propose an unknown object identifier with a probability correction scheme that enables improved recognition of objects. To the best of our knowledge, we are the first to explore 3D instance segmentation in an open-world setting. The key contributions of our work are:
* We propose the first open-world 3D indoor instance segmentation method with a dedicated mechanism for accurate identification of 3D unknown objects. We employ an auto-labeling scheme to generate pseudo-labels during training and induce separation in the query embedding space to delineate known and unknown class labels. At inference, we further improve the quality of pseudo-labels by adjusting the probability of unknown classes based on the distribution of the objectness scores.
* We introduce carefully curated open-world splits, having known vs. unknown and then incremental learning over the span of 200 classes, for a rigorous evaluation of open-world 3D indoor segmentation. Our proposed splits leverage different realistic scenarios such as inherent distribution (frequency-based) of object classes, various class types encountered during the exploration of indoor areas (region-based), and the randomness aspect of object classes in the open-world. Extensive experiments reveal the merits of the proposed contributions towards bridging the performance gap between our method and oracle.
## 2 Related Work
**3D semantic instance segmentation:** The segmentation of instances in 3D scenes has been approached from various angles. Grouping-based or clustering-based techniques use a bottom-up pipeline by learning an embedding in the latent space to help cluster the object points. [4; 13; 14; 17; 20; 21; 34; 38]. Proposal-based methods work in a top-down fashion, first detecting 3D bounding boxes, then segmenting the object region within the box [10; 15; 22; 36; 37]. Recently, spurred by related 2D work [5; 6], the transformer design [31] has also been applied for the purpose of segmenting 3D instances [29; 30]. Other methods present weakly-supervised alternatives to methods that use dense annotations in order to lower the cost of annotating 3D data [7; 16; 35]. While all these methods aim to improve the quality of 3D instance segmentation, they are trained on a known set of semantic labels. On the other hand, our proposed method aims at segmenting objects with both known and unknown class labels.
Figure 1: **3D instance segmentation in an open-world. During each iterative learning phase, the model detects _unknown_ objects, and a human operator gradually assigns labels to some of them and incorporates them into the pre-existing knowledge base for further training.**
**Open-world object recognition:** Open-world object recognition was introduced in [2], where the Nearest Mean Classifier was extended to an open-world setting. In the direction of open-world object detection, many studies [41; 18; 11; 25] have been conducted in the past. In [18], pseudo-labels for the unknowns are generated to perform contrastive clustering during training for a better separation of unknown and known classes, and an energy-based unknown class identifier is proposed to detect the unknown classes based on the energy of the logits of the known classes. For incremental learning, they adopted exemplar replay to avoid catastrophic forgetting of old classes. Addressing the same task as [18], [11] used a transformer-based model and proposed another way of generating unknown pseudo-labels based on a new method of objectness estimation, and introduced a foreground objectness branch that separates the background from the foreground. For the task of outdoor 3D point cloud semantic segmentation, [3] proposed a model that predicts old, novel, and unknown classes from three separate classification heads. These heads are trained on the labels of the known classes and on pseudo-labels for old classes generated by the same model to alleviate catastrophic forgetting, while the unknown class is assigned the second-highest score for better unknown class segmentation. Other methods, proposed in [40; 12; 39], primarily focus on enhancing the generalizability of 3D models to novel classes by leveraging supervision from 2D Vision Language Models for object recognition and 3D semantic segmentation tasks. However, these approaches exhibit several limitations, including: (i) the 3D model's performance becomes dependent on the 2D Vision Language model; (ii) the 3D geometric properties of objects unseen in the training data are neglected during training; (iii) there exists no avenue for enhancing the model's performance on novel classes when new labels are introduced; and (iv) the training process necessitates pairs of images and corresponding 3D scenes.
## 3 Closed-world 3D Instance Segmentation
We adopt the state-of-the-art 3D instance segmentation model Mask3D [29] as our baseline. It is a hybrid model that combines Convolutional Neural Networks (CNNs) with transformers to learn class-agnostic masks and labels for instance separation. The backbone of Mask3D is CNN-based and is used to extract feature maps from multiple levels, while the decoder is transformer-based and is used to refine \(n_{Q}\in\mathbb{N}\) instance queries \(Q=\{q_{j}\in\mathbb{R}^{D}\ \mid\ j\in(1,...,n_{Q})\}\) using the extracted feature maps. The learning scheme consists of a cross-entropy loss for learning semantic class labels and a binary cross-entropy loss for learning instance masks during training.
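To make the prediction pipeline concrete, below is a minimal PyTorch sketch of how refined queries can yield class logits and per-voxel masks. The module and variable names are our own, and realizing the query-voxel similarity as a sigmoid of a dot product between projected queries and backbone features is one plausible instantiation consistent with the mask definition in Section 4.3, not necessarily Mask3D's exact head.

```python
import torch
import torch.nn as nn

class QueryHeads(nn.Module):
    """Maps refined instance queries to class logits and per-voxel mask heatmaps."""

    def __init__(self, dim: int, n_known: int):
        super().__init__()
        self.cls_head = nn.Linear(dim, n_known + 1)  # one extra logit for the unknown class
        self.mask_proj = nn.Linear(dim, dim)         # projects queries before the dot product

    def forward(self, queries: torch.Tensor, voxel_feats: torch.Tensor):
        # queries: (n_Q, D) refined by the transformer decoder
        # voxel_feats: (N, D) features from the high-resolution backbone level
        cls_logits = self.cls_head(queries)                                 # (n_Q, C + 1)
        heatmaps = torch.sigmoid(self.mask_proj(queries) @ voxel_feats.T)   # M_i, entries in [0, 1]
        masks = heatmaps > 0.5                                              # B_i = 1(M_i > 0.5)
        return cls_logits, heatmaps, masks

# toy usage on random features
heads = QueryHeads(dim=128, n_known=64)
logits, heat, masks = heads(torch.randn(100, 128), torch.randn(4096, 128))
```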
Figure 2: **Proposed open-world 3D instance segmentation pipeline.** From left to right: 3D instance segmentation model, where the point cloud goes through a 3D convolutional backbone. The extracted feature maps are used in the transformer decoder to refine some initial queries, which then pass through two MLPs to generate label and mask predictions. The Contrastive Clustering block takes the refined queries, the prediction masks, and labels to further process the queries by assigning a target or an _unknown_ pseudo label in the Query Processing module, and then storing them in a Query Store to finally update the class prototypes, which are finally used for contrastive clustering. During inference, the queries are used to correct the probability of the predicted labels based on their reachability to the _known_ class prototypes.
## 4 Open-World 3D Instance Segmentation
### Problem formulation
We start by formulating the problem setting of open-world 3D instance segmentation. At task \(\mathcal{T}^{t}\), there exists a set of _known_ object categories \(\mathcal{K}^{t}=\{1,2,..,C\}\) and a set of _unknown_ object categories \(\mathcal{U}^{t}=\{C+1,...\}\) that may be encountered at inference time. The training dataset \(\mathcal{D}^{t}=\{\mathbf{X}^{t},\mathbf{Y}^{t}\}\) includes samples from the classes in \(\mathcal{K}^{t}\). The input set \(\mathbf{X}^{t}=\{\mathbf{P}_{1},..,\mathbf{P}_{M}\}\) consists of \(M\) point clouds, where \(\mathbf{P}_{i}\in\mathbb{R}^{N\times 3}\) is a quantized point cloud of \(N\) voxels, each carrying the average RGB color of the points within it. The corresponding labels are \(\mathbf{Y}^{t}=\{\mathbf{Y}_{1},..,\mathbf{Y}_{M}\}\), where \(\mathbf{Y}_{i}=\{\mathbf{y}_{1},..,\mathbf{y}_{k}\}\) encodes \(k\) object instances. Each object instance \(\mathbf{y}_{i}=[\mathbf{B}_{i},l_{i}]\) consists of a binary mask \(\mathbf{B}_{i}\in\{0,1\}^{N}\) and a corresponding class label \(l_{i}\in\mathcal{K}^{t}\).
In our problem setting, \(\mathcal{M}_{C}\) is a 3D instance segmentation model trained on \(C\) object categories that, at test time, can recognize instances from these classes and, in addition, classify instances from new classes not seen during training as _unknown_. The detected _unknown_ instances can be used by a human user to identify a set of \(n\) new classes not previously trained on, which can be incrementally added to the learner, producing an updated model \(\mathcal{M}_{C+n}\) without explicit retraining on previously seen classes. At this point, in Task \(\mathcal{T}^{t+1}\), the _known_ object categories are \(\mathcal{K}^{t+1}=\mathcal{K}^{t}\cup\{C+1,..,C+n\}\). This process repeats throughout the lifespan of the instance segmentation model, which continuously improves itself by incorporating information from new classes until it reaches the maximum number of classes it can learn. In the rest of the paper, we assign the _unknown_ class the label \(\mathbf{0}\).
### Open-world scenarios
In order to simulate different realistic scenarios that might be encountered in an open-world, we propose three different ways of grouping classes under three tasks. These scenarios split scenes based on the inherent distribution (frequency-based) of object classes, the various classes encountered during the exploration of various indoor areas (region-based), and the randomness aspect of object classes in the open world.
**Split A (Instance frequency-based):** We introduce a split that leverages the inherent distribution of objects, with _known_ classes being more prevalent than _unknown_ categories. Task \(\mathcal{T}^{1}\) encompasses all the head classes as defined in the ScanNet200 benchmark [8; 27], while tasks \(\mathcal{T}^{2}\) and \(\mathcal{T}^{3}\) group the common and tail classes, respectively. This division allows us to effectively capture the varying frequency and significance of object categories within the dataset.

\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{**Split A**} & \multicolumn{3}{c}{**Split B**} & \multicolumn{3}{c}{**Split C**} \\ \cline{2-10} & Task 1 & Task 2 & Task 3 & Task 1 & Task 2 & Task 3 & Task 1 & Task 2 & Task 3 \\ \hline Classes count & 64 & 68 & 66 & 73 & 55 & 70 & 66 & 66 & 66 \\ Train instances & 24224 & 3791 & 1612 & 15327 & 8177 & 6123 & 13483 & 8239 & 7905 \\ Validation instances & 6539 & 1000 & 428 & 4177 & 2261 & 1529 & 3776 & 2102 & 2089 \\ Train scenes & 1201 & 924 & 627 & 1201 & 1002 & 895 & 1169 & 1089 & 1159 \\ Validation scenes & 312 & 242 & 165 & 312 & 264 & 236 & 307 & 273 & 300 \\ \hline \hline \end{tabular}
\end{table}
Table 1: **The statistics of each split across the three tasks.** The number of known classes per task is reported along with the count of instances (3D objects) in the training and validation sets; we also show the number of non-empty scenes used during training and validation.

Figure 3: Point-wise count for each class across the three tasks under the three open-world scenarios.
**Split B (Region-based):** In this split, our objective is to replicate the diverse class types encountered during indoor exploration. This partition draws inspiration from the sequence of classes that a robot might encounter when navigating indoors. To achieve this, we group classes that are likely to be encountered initially when accessing an indoor space and share similarities in scenes. Initially, we assign each class to a specific scene where it predominantly occurs. Subsequently, we divide the classes into three distinct groups, corresponding to the three tasks.
**Split C (Random sampling of classes):** This third split introduces a different challenge inspired by the randomness aspect of the open-world, where tasks can exhibit random levels of class imbalance. To create this split, we randomly shuffled the classes and sampled without replacement, selecting 66 classes three times for each task.
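As an illustration, generating Split C reduces to one shuffle followed by a disjoint partition. The sketch below is a minimal Python version; the function name, the class ids \(0,\dots,197\), and the fixed seed are illustrative assumptions of ours (the \(66+66+66\) counts match Table 1).

```python
import random

def make_split_c(class_ids, n_tasks=3, per_task=66, seed=0):
    """Shuffle once, then take disjoint groups of classes for each task."""
    rng = random.Random(seed)
    shuffled = list(class_ids)
    rng.shuffle(shuffled)                      # sampling without replacement
    return [shuffled[i * per_task:(i + 1) * per_task] for i in range(n_tasks)]

tasks = make_split_c(range(198))               # 3 x 66 = 198 classes, as in Table 1
assert all(len(t) == 66 for t in tasks)
```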
### Generating pseudo-labels for the unknown classes
Because of the wide range of classes in an open-world setting, an auto-labeler is used as an alternative to manual labeling. It makes use of the existing target labels of the available ground-truth classes (_known_ classes) to generate pseudo-labels for the _unknown_ class during training. In [18], the model is assumed to be class agnostic, so that _unknown_ objects are predicted as _known_ with high confidence. As a result, the authors proposed to use the predictions with top-k confidence scores that do not intersect with the ground truth as pseudo-labels for the _unknown_ class. In our study, we show that top-k pseudo-label selection can severely harm the performance of the model on the _known_ and _unknown_ classes. Hence, we propose a Confidence Thresholding (**CT**) based selection of pseudo-labels, and show that the performance on the _known_ and _unknown_ classes increases by a large margin in terms of mean Average Precision (mAP).
The _auto-labeler_ unit, depicted in Fig. 2, is used to generate _unknown_ pseudo-labels. It takes a set of predicted binary masks \(\textbf{B}=\{\textbf{B}_{i}\ \mid\ i\in(1,...,n_{Q})\}\), where \(n_{Q}\) is the number of queries, \(\textbf{B}_{i}=\mathds{1}(M_{i}>0.5)\) is the mask produced by a single query, and \(M_{i}=\{m_{i,j}\in[0,1]\ \mid\ j\in(1,...,N)\}\) is a heat map measuring the similarity between the query \(q_{i}\in\mathbb{R}^{D}\) and the features of the \(N\) voxels extracted from the high-resolution level of the backbone.
Moreover, each query \(q_{j}\) encodes semantic information and can generate a class prediction \(\mathbb{P}_{cls}(q_{j})=\{\mathbb{P}_{cls}(c;q_{j})\ \mid\ c\in(0,1,...,| \mathcal{K}^{t}|)\}\) using a classification head (refer to Fig. 2). Subsequently, the objectness confidence score is assigned to predictions following Eq 1.
\[s_{j}=s_{cls,j}\cdot\frac{M_{j}\cdot\mathds{1}(M_{j}>0.5)^{T}}{|\mathds{1}(M_ {j}>0.5)|_{1}} \tag{1}\]
where \(s_{cls,j}\in\mathbb{R}\) is the maximum output probability of the classification head \(\mathbb{P}_{cls}(q_{j})\), and \(\mathds{1}\) is the indicator function. After scoring the predictions, the auto-labeler returns the \(m\) pseudo-labels \(\tilde{\textbf{Y}}=\{\tilde{\textbf{y}}_{i}=[\tilde{\textbf{B}}_{i},\textbf{0}]\mid\ i\in(1,...,m)\}\) whose confidence is above a threshold and whose masks have low IoU with the target masks of the _known_ classes.
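The numpy sketch below renders Eq. (1) and the confidence-thresholded (CT) selection; the values of `conf_thr` and `iou_thr` are illustrative assumptions rather than the constants used in our experiments.

```python
import numpy as np

def objectness_scores(heatmaps, cls_probs):
    # heatmaps: (n_Q, N) with entries in [0, 1]; cls_probs: (n_Q, C + 1) class probabilities
    masks = heatmaps > 0.5                                           # 1(M_j > 0.5)
    s_cls = cls_probs.max(axis=1)                                    # s_{cls,j}: max class probability
    mask_conf = (heatmaps * masks).sum(axis=1) / np.maximum(masks.sum(axis=1), 1)
    return s_cls * mask_conf                                         # Eq. (1)

def iou(a, b):
    return (a & b).sum() / max((a | b).sum(), 1)

def select_unknown_pseudo_labels(heatmaps, cls_probs, gt_masks,
                                 conf_thr=0.8, iou_thr=0.1):
    scores = objectness_scores(heatmaps, cls_probs)
    masks = heatmaps > 0.5
    selected = []
    for j in np.argsort(-scores):                                    # most confident first
        if scores[j] < conf_thr:                                     # CT: keep only confident predictions
            break
        if all(iou(masks[j], g) < iou_thr for g in gt_masks):        # low overlap with known targets
            selected.append((masks[j], 0))                           # label 0 = unknown
    return selected
```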
### Query target assignment and contrastive clustering
Similar to [18], we utilize contrastive clustering to enhance the separation of classes within the query embedding space. To achieve this, we employ a set of query prototypes denoted as \(\mathcal{Q}_{p}=\{\textbf{q}_{i}\in\mathbb{R}^{D}\ \mid\ i\in(0,1,..,|\mathcal{K}^{t}|)\}\), where \(\textbf{q}_{0}\) denotes the prototype of the _unknown_ class. We apply a contrastive loss that attracts queries to the prototype of their own class while pushing them away from the prototypes of negative classes, as illustrated in Fig. 2. Since the queries are used to determine the class of the objects (see Fig. 2, inference block), the class prototypes are expected to hold general semantic knowledge of their corresponding classes.
_Hungarian matching_ is performed in the _Assign target to query_ module, depicted in Fig. 2, where the matched prediction-target indices are used to assign a label to each query that generated a matched prediction. The labeled queries are then stored in a _query store_ \(\mathcal{Q}_{store}\), a queue with a maximum capacity, which is used to update the query prototypes \(\mathcal{Q}_{p}\) via an exponential moving average.
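A minimal sketch of the query store and the prototype update is given below. The queue capacity and EMA momentum are illustrative assumptions; the periodic update (every 10 iterations) follows the implementation details in Section 5.2.

```python
import numpy as np
from collections import defaultdict, deque

class QueryStore:
    def __init__(self, capacity=256, momentum=0.99):
        self.queues = defaultdict(lambda: deque(maxlen=capacity))  # class label -> recent queries
        self.prototypes = {}                                       # class label -> prototype q_i
        self.momentum = momentum

    def push(self, label, query):
        self.queues[label].append(np.asarray(query))

    def update_prototypes(self):
        # per-class average of the store, folded into the prototypes via EMA;
        # in our setup this runs periodically (every 10 iterations, Sec. 5.2)
        for c, qs in self.queues.items():
            mean_q = np.mean(np.stack(list(qs)), axis=0)
            old = self.prototypes.get(c)
            self.prototypes[c] = mean_q if old is None else \
                self.momentum * old + (1 - self.momentum) * mean_q
```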
A hinge embedding loss is utilized according to Eq 2. This loss ensures that a query belonging to class \(c\), denoted \(q_{c}\), is pulled towards its corresponding class prototype \(\mathbf{q}_{c}\), while being pushed away from the prototypes representing other classes.
\[\mathcal{L}_{cont}(q_{c})=\sum_{i=0}^{|\mathcal{K}^{t}|}\ell(q_{c},\mathbf{q}_{i}) \tag{2}\]
\[\ell(q_{c},\mathbf{q}_{i})=\begin{cases}||q_{c}-\mathbf{q}_{i}||_{2}&i=c\\ \max(0,\Delta-||q_{c}-\mathbf{q}_{i}||_{2})&i\neq c\end{cases}\]
where \(\Delta\) is the margin of the contrastive clustering.
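Rendered in code, the loss of Eq. (2) for a single labeled query is only a few lines; the PyTorch sketch below uses our own function name, with the pull term for the matching prototype and margin-based push terms for all others.

```python
import torch

def hinge_contrastive_loss(query, label, prototypes, delta):
    # query: (D,); label: class index c; prototypes: (|K^t| + 1, D) with row 0 = unknown
    dists = torch.linalg.norm(prototypes - query, dim=1)  # ||q_c - q_i||_2 for every prototype
    pull = dists[label]                                   # attract to the matching prototype (i = c)
    push = torch.clamp(delta - dists, min=0.0)            # repel prototypes closer than the margin
    return pull + push.sum() - push[label]                # exclude the positive class from the push
```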
### Reachability-based probability correction (PC)
In [23], an architecture that can deal with long-tail distributions and _unknown_ class prediction for open-world object recognition was proposed, where, in the absence of any prior on the _unknown_ classes, they are assumed to be very different in color and texture from the _known_ classes. However, we show in Fig. 6 that many _unknown_ instances hold features similar to those of the _known_ ones.
In our method, we relax the strict assumption of high dissimilarity between _unknown_ and _known_ classes and correct the predicted output probability following two characteristics of a feature from an _unknown_ object: (1) it has to be far from the nearest _known_ class, since features of the _unknown_ class are expected to be pushed away from the prototypes of the _known_ classes after applying contrastive clustering, and (2) the feature should correspond to an object that is not of a _known_ class. We show that applying this approach during inference boosts the performance of the model on the _unknown_ class considerably by compensating for the weak pseudo-labels provided by the auto-labeler.
Our probability correction scheme is the following
\[\mathbb{P}(\mathbf{0};q_{j})=\mathbb{P}_{cls}(\mathbf{0};q_{j})\cup\mathbb{P }_{corr}(\mathbf{0};q_{j}) \tag{3}\]
where \(\mathbb{P}_{cls}\) is the probability from the classification head, and \(\mathbb{P}_{corr}\) is the correction probability. We base our intuition on the fact that _unknown_ classes have high objectness scores, which makes them not too far from the prototypes of the _known_ classes. To model this behavior we choose
\[\mathbb{P}_{corr}(\mathbf{0};q_{j})=\mathbb{P}_{corr}(\mathbf{0};o,q_{j})\cdot \mathbb{P}_{corr}(o;q_{j})\]
where \(\mathbb{P}_{corr}(o;q_{j})\) is the likelihood of the query to correspond to an object that is not _known_ (either background or true _unknown_). Since the query prototypes encode class-specific information we propose the following method to measure the objectness of a query given all prototypes from the _known_ classes, where it assigns a high objectness probability if it is close to only a few _known_ classes. This probability distribution defines the objectness of _unknown_ objects around a certain boundary from the prototypes as follows.
\[\mathbb{P}_{corr}(o;q_{j})=1-\sum_{k=1}^{|\mathcal{K}^{t}|}\mathbb{P}_{cls}(k; q_{j})\]
while \(\mathbb{P}_{corr}(\mathbf{0};o,q_{j})\) is the probability of the query being an _unknown_ object, which has a high value the further it is from the nearest prototype of the _known_ classes.
\[\mathbb{P}_{corr}(\mathbf{0};o,q_{j})=\sigma\left(\frac{\gamma(q_{j})-a}{b} \right);\hskip 14.226378pt\gamma(q_{j})=\min_{\mathbf{q}_{i}}||q_{j}-\mathbf{q} _{i}||_{2}\]
Figure 4: Illustration of the region in the query embedding space where the class probability is corrected.
Here \(\sigma\) is the sigmoid function, \(\gamma(q_{j})\) is the reachability of the query \(q_{j}\), \(\mathbf{q}_{i}\) is the prototype of the \(i^{th}\) class, and \(a,b\) are the shift and scale of the sigmoid function that assure \(\mathbb{P}_{corr}(\mathbf{0};o,q_{j},\gamma(q_{j})=0)=0.05\) and \(\mathbb{P}_{corr}(\mathbf{0};o,q_{j},\gamma(q_{j})=\frac{\Delta}{2})=0.95\), for a contrastive clustering margin \(\Delta\).
We finally normalize the probabilities from the classification head of the _known_ classes as follows
\[\mathbb{P}(c;q_{j})=\frac{\mathbb{P}_{cls}(c;q_{j})}{\sum_{l\in\mathcal{K}^{t }}\mathbb{P}_{cls}(l;q_{j})}(1-\mathbb{P}(\mathbf{0};q_{j}))\]
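To make the correction concrete, the sketch below computes the corrected probability vector for a single query in numpy. Two points are our own reading rather than stated explicitly: the closed forms \(a=\Delta/4\) and \(b=\Delta/(4\ln 19)\) follow from solving the two boundary conditions above, and we interpret the union in Eq. (3) as the inclusion-exclusion formula \(p+q-pq\).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def correct_probabilities(cls_probs, query, known_prototypes, delta):
    # cls_probs: (C + 1,) classification-head output with index 0 = unknown
    # Solving sigma((0 - a)/b) = 0.05 and sigma((Delta/2 - a)/b) = 0.95
    # gives a = Delta/4 and b = Delta / (4 ln 19).
    a = delta / 4.0
    b = delta / (4.0 * np.log(19.0))
    gamma = np.linalg.norm(known_prototypes - query, axis=1).min()  # reachability gamma(q_j)
    p_unk_given_obj = sigmoid((gamma - a) / b)                      # P_corr(0; o, q_j)
    p_obj = 1.0 - cls_probs[1:].sum()                               # P_corr(o; q_j)
    p_corr = p_unk_given_obj * p_obj
    # Eq. (3): union of head and correction probabilities, read as p + q - pq
    p_unknown = cls_probs[0] + p_corr - cls_probs[0] * p_corr
    p_known = cls_probs[1:] / max(cls_probs[1:].sum(), 1e-12) * (1.0 - p_unknown)
    return np.concatenate(([p_unknown], p_known))
```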
### Alleviating catastrophic forgetting for incremental learning
Following the success of exemplar replay in avoiding catastrophic forgetting of the old classes during incremental learning for object detection [18; 11; 41], we adopt it for the task of incremental learning in 3D instance segmentation where we use exemplars from the classes of the previous task to fine-tune the model trained on the novel classes. In our setting, we use the same dataset for the three tasks and mask the classes of the previous task when training on the novel classes from the current task. As a result, the novel classes of the current task might be encountered again when replaying the exemplars from the previous task, as the same scenes are being used in fine-tuning.
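A minimal sketch of assembling the replay set is shown below; selecting exemplars uniformly at random per class is an illustrative assumption of ours, with only the budget of roughly 40 exemplars per class taken from Section 5.2.

```python
import random

def build_replay_set(exemplars_by_class, per_class=40, seed=0):
    # exemplars_by_class: class label -> list of stored training scenes/instances
    rng = random.Random(seed)
    replay = []
    for scenes in exemplars_by_class.values():
        replay += rng.sample(scenes, min(per_class, len(scenes)))
    return replay  # mixed into fine-tuning batches for the new task
```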
## 5 Experiments
### Open-world evaluation protocol
We use our proposed class splits, which mimic challenges commonly faced in the open world, to ensure a rigorous performance evaluation of 3D instance segmentation models.
**Evaluation metrics.** We adopt three common evaluation metrics, _wilderness impact_ (WI) [9], _absolute open set error_ (A-OSE) [26], and the _recall of the unknown classes_ (U-Recall) [1; 24; 11] to evaluate the performance of our model on the _unknown_ classes and to provide a fair comparison with and without contributions. For the _known_ classes, we use mean Average Precision (mAP). WI measures the impact of the _unknown_ classes on the precision of the model at a specific confidence level. Ideally, WI is nil, i.e., there are no _unknown_ objects predicted as _known_. For our evaluation, we report WI at 0.5 confidence. It can be computed as follows: \(\text{WI}=\frac{P_{\mathcal{K}^{t}}}{P_{\mathcal{K}^{t}\cup\mathcal{U}}}-1\).
Figure 5: **Qualitative results for 3D instance segmentation results on some ScanNet200 validation scenes**. Points highlighted in blue belong to _unknown_ classes and those highlighted in green belong to _known_ classes. We show the performance of our model in retrieving the _unknown_ class objects compared to **3D-OWIS\(-\)PC\(-\)CT** for the three scenes.
We also report A-OSE, which represents the count of _unknown_ instances misclassified as one of the _known_ classes, and the U-Recall at 0.5 IoU, which reflects the ability of the model to recover _unknown_ objects.
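For concreteness, the sketch below shows how the three metrics can be computed once predictions have been matched to ground truth at IoU 0.5. The input representations (precomputed precision values and matched prediction-target pairs) are our own simplifications of the evaluation pipeline.

```python
def wilderness_impact(precision_known_only, precision_with_unknown):
    # WI = P_{K^t} / P_{K^t u U} - 1; ideally 0, i.e., no unknowns predicted as known
    return precision_known_only / precision_with_unknown - 1.0

def absolute_open_set_error(matches):
    # matches: (predicted_label, gt_is_unknown) pairs at IoU 0.5;
    # A-OSE counts unknown instances classified as a known class (label != 0)
    return sum(1 for label, gt_unknown in matches if gt_unknown and label != 0)

def unknown_recall(n_unknown_recovered, n_unknown_total):
    # fraction of unknown ground-truth instances recovered at IoU 0.5
    return n_unknown_recovered / n_unknown_total
```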
### Implementation details
We adapt Mask3D [29] for the task of open-world instance segmentation. We add an extra prediction output for the _unknown_ class. In training, we assign an _ignore_ label to the classes of the future and previous tasks, while we keep the labels of the previous task and assign an _unknown_ class label to the classes of the future task during evaluation. For contrastive clustering, we use the indices obtained after matching the predictions with the target using _Hungarian matching_ to assign a label to the queries and store them in the _Query Store_\(\mathcal{Q}_{store}\). The store is then averaged per class and used to periodically update the prototypes every 10 iterations for the hinge loss computation. Finally, we use 40 exemplars per class on average for incremental learning. The classes from the current task are kept during class exemplar replay since we are using the same dataset for the three tasks.
### Open-world results
Table 2 provides a comprehensive performance comparison between the Oracle, our implementation of [11] (denoted 3D-OW-DETR), **3D-OWIS**, and **3D-OWIS\(-\)PC\(-\)CT**, i.e., our model excluding the Probability Correction (**PC**) and Confidence Thresholding (**CT**) components. Across all scenarios and tasks,
\begin{table}
\begin{tabular}{l|c c c c|c c c c|c c c c|c c c} \hline \hline
**Task IDs** (\(\rightarrow\)) & \multicolumn{8}{c|}{**Task 1**} & \multicolumn{8}{c|}{**Task 2**} & \multicolumn{8}{c}{**Task 3**} \\ \hline & \multicolumn{2}{c|}{**WI**} & \multicolumn{2}{c|}{**A-OSE**} & \multicolumn{2}{c|}{**U-Recall**} & \multicolumn{2}{c|}{**mAP (\(\uparrow\))**} & \multicolumn{2}{c|}{**WI**} & \multicolumn{2}{c|}{**A-OSE**} & \multicolumn{2}{c|}{**U-Recall**} & \multicolumn{2}{c}{**mAP (\(\uparrow\))**} & \multicolumn{2}{c}{**mAP (\(\uparrow\))**} \\ & \multicolumn{2}{c|}{(\(\downarrow\))} & \multicolumn{2}{c|}{(\(\downarrow\))} & \multicolumn{2}{c|}{(\(\uparrow\))} & \multicolumn{2}{c|}{**Environment**} & \multicolumn{2}{c|}{**Environment**} & \multicolumn{2}{c|}{**Environment**} & \multicolumn{2}{c|}{**Environment**} & \multicolumn{2}{c|}{**Environment**} & \multicolumn{2}{c|}{**Environment**} & \multicolumn{2}{c|}{**Environment**} & \multicolumn{2}{c|}{**Environment**} & \multicolumn{2}{c|}{**Environment**} & \multicolumn{2}{c|}{**Environment**} & \multicolumn{2}{c|}{**Environment**} \\ & & & & & & & & & & & & & & & & & & & \\ \hline \multicolumn{11}{c}{**Split A**} \\ \hline Oracle & 0.129 & 227 & 55.94 & 38.75 & 38.60 & 0.03 & 112 & 45.40 & 38.25 & 20.91 & 29.40 & 29.58 & 17.78 & 26.10 \\ Mask3D [29] & - & - & - & 39.12 & - & - & - & 38.30 & 20.57 & 29.15 & 28.61 & 18.33 & 25.58 \\
3D-OW-DETR [11] & 0.547 & 221.2 & 35.56 & 39.56 & 0.50 & 28.22 & 25.32 & 26.24 & 18.18 & 13.62 & 15.65 & **21.26** & **0.538** & 17.67 \\
3D-OWIS \(-\)PC & **1**,589 & 707 & 30.72 & 37.50 & 37.00 & **0.800** & **4** & 04.75 & 11.00 & 17.30 & 14.10 & 21.40 & 08.00 & 17.50 \\
**Ours: 3D-OWIS** & **0.397** & **607** & **34.75** & **40.2** & **39.7** & 0.007 & 126 & **27.03** & **29.40** & **16.40** & **22.70** & 20.20 & **15.20** & **18.70** \\ \hline \hline \multicolumn{11}{c}{**Split B**} \\ \hline Oracle & 1.126 & 939 & 70.31 & 24.57 & 24.80 & 10.80 & 441 & 73.16 & 25.50 & 20.30 & 23.40 & 23.40 & 30.40 & 26.00 \\ Mask3D [29] & - & - & - & 23.48 & 23.48 & - & - & - & 21.81 & 18.91 & 20.37 & 24.20 & 29.22 & 26.06 \\
3D-OW-DETR [11] & 3.229 & 1935 & 17.18 & 20.00 & 19.73 & 20.63 & 1389 & **33.31** & 12.36 & 13.86 & 12.93 & 07.27 & 18.96 & 11.62 \\
3D-OWIS-PC \(-\)CT & **313.38** & 1895.2 & 21.67 & 18.94 & 18.70 & 3.169 & 1081 & 26.63 & 18.00 & 16.40 & 17.20 & 17.30 & 20.10 & 18.30 \\
**Ours: 3D-OWIS** & 3.684 & **1780** & **24.79** & **23.60** & **23.30** & **0.785** & **881** & 24.21 & **18.70** & **17.30** & **17.90** & **18.70** & **24.60** & **20.90** \\ \hline \multicolumn{11}{c}{**Split C**} \\ \hline Oracle & 1.039 & 651 & 71.61 & 23.30 & 23.6 & 0.249 & 591 & 62.83 & 20.50 & 18.40 & 19.60 & 25.30 & 28.20 & 26.30 \\ Mask3D [29] & - & - & - & 20.82 & 21.15 & - & - & - & 22.67 & 26.67 & 24.13 & 25.41 & 25.21 & 25.35 \\
3D-OWIS \(+\)PC & 2.901 & 1752 & **15.66** & 15.00 & 14.80 & 1.799 & 666 & 15.99 & 13.50 & 19.70 & 16.40 & 17.50 & **17.70** & 17.50 \\
**Ours: 3D-OWIS** & **0.419** & **1294** & 14.34 & **18.00** & **17.60** & **0.152** & **303** & 15.80 & **13.50** & **22.20** & **17.80** & **17.80** & **17.70** & **17.80** \\ \hline \hline \end{tabular}
\end{table}
Table 2: **State-of-the-art comparison for the 3D-OWIS model.** We show a comparison of performance under the three open-world scenarios, where **3D-OWIS\(-\)PC\(-\)CT** is our model **3D-OWIS** without Probability Correction (**PC**) and Confidence Thresholding (**CT**). We rely on the metrics used in the open-world literature: A-OSE, which quantifies the number of unknown objects misclassified as one of the known classes; WI, which measures the impact of the unknown class on the precision of the model on the known classes; and U-Recall, which evaluates the model's ability to recover unknown objects. **3D-OWIS** performs remarkably better than the other models on the known classes under all scenarios, achieves superior performance on the unknown objects in splits A and B, and performs slightly worse in split C. We also provide a closed-setting comparison between Mask3D and the Oracle (**Ours** with access to unknown labels).
\begin{table}
\begin{tabular}{l|c c c c c} \hline \hline \multicolumn{5}{c}{**Split A**} \\ \hline \hline
**Task ID** & \multicolumn{5}{c}{**Task 1**} \\ \hline & **WI** (\(\downarrow\)) & **A-OSE** (\(\downarrow\)) & **U-Recall** (\(\uparrow\)) & \multicolumn{2}{c}{**mAP (\(\uparrow\))**} \\ & & & & **Current** & **All** \\ \hline
3D-GGN [32] & 15.68 & 1452 & 21.33 & 20.51 & 20.12 \\
3D-OLN [19] & - & - & 02.45 & - & - \\
**Ours: 3D-OWIS** & **0.397** & **607** & **34.75** & **40.2** & **39.7** \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Open-world instance segmentation comparison.** We provide the results of our implementations of two 2D open-world instance segmentation methods adapted to our 3D setting. Our model performs comparatively better than the others across all metrics.
**3D-OWIS\(-\)PC\(-\)CT** consistently exhibits inferior performance in terms of mAP. It also demonstrates considerably lower U-Recall in splits A and B, and slightly higher U-Recall in split C. Of particular note, our **3D-OWIS** demonstrates remarkable proficiency in preserving knowledge of the previous classes after fine-tuning. This proficiency is attributed to better pseudo-label selection for the _unknown_ classes. **3D-OWIS** outperforms **3D-OWIS\(-\)PC\(-\)CT** in most cases while minimizing the impact on the _known_ classes, as evidenced by lower WI and A-OSE scores and higher mAP.
Table 3 presents a comparison between our model, **3D-OWIS**, and our implementation of two methods, GGN [32] and OLN [19]. For both models, we adapt Mask3D and train it with mask loss only for OLN. In the case of GGN, we train a Minkowski backbone to predict affinity maps and use Connected Components to generate class-agnostic proposals. These results underscore the effectiveness and potential of our approach in addressing the three proposed open-world challenges.
### Incremental learning results
Our model's performance in incremental learning is evaluated based on its ability to preserve knowledge of previously learned classes. With exemplar replay, the **3D-OWIS** model demonstrates a significant improvement in mAP on the previous classes. Table 2 presents the results, indicating that our model consistently outperforms the others in terms of mean Average Precision (mAP) on the previous classes in all cases.
### Discussion and analysis
**Ablation study.** We show in Table 4 that the **3D-OWIS\(-\)PC\(-\)CT** model performs poorly on the _known_ classes because of the high number of low-quality pseudo-labels generated by the auto-labeler, which is also reflected in the high values of _Wilderness Impact_ and _Absolute Open-Set Error_. The U-Recall drops
\begin{table}
\begin{tabular}{c|c|c|c c c c c c c c c c c c c} \hline \hline \multicolumn{1}{c|}{**Task IDs (\(\rightarrow\))**} & \multicolumn{4}{c|}{**Task 1**} & \multicolumn{4}{c|}{**Task 2**} & \multicolumn{4}{c}{**Task 3**} \\ \hline \multirow{3}{*}{w/ Finetuning} & \multirow{3}{*}{CT} & \multirow{3}{*}{PC} & \multicolumn{3}{c|}{**WI**} & \multicolumn{3}{c|}{**A-OSE**} & \multicolumn{3}{c|}{U-Recall} & \multicolumn{3}{c}{mAP (\(\uparrow\))} & \multicolumn{3}{c}{WI} & \multicolumn{3}{c|}{A-OSE} & \multicolumn{3}{c|}{U-Recall} & \multicolumn{3}{c}{mAP (\(\uparrow\))} \\ & & & (\(\downarrow\)) & (\(\uparrow\)) & \multicolumn{3}{c|}{Current} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} \\ & & & & known & & & & known & & known & & known & & known & & known & & & known & & & \\ \hline \multicolumn{18}{c}{**Split A**} \\ \hline \(\times\) & \(\times\) & \(\times\) & 1.589 & 707 & 30.72 & 37.50 & 37.00 & 0.870 & 321 & 19.46 & 00.00 & 16.74 & 08.40 & 00.00 & 09.30 & 02.80 \\ \(\times\) & \(\checkmark\) & \(\times\) & 0.237 & 443 & 30.00 & 40.30 & **39.70** & 30.306 & 129 & 14.96 & 00.00 & **21.00** & 10.50 & 00.00 & **17.45** & 05.20 \\ \hline \(\checkmark\) & \(\times\) & \(\times\) & 1.589 & 707 & 30.72 & 37.30 & 37.00 & **0.000** & **4** & 04.75 & 11.00 & 12.70 & 14.10 & **21.40** & **08.00** & 17.50 \\ \(\checkmark\) & \(\checkmark\) & \(\times\) & **0.237** & **443** & 300 & **40.30** & **39.70** & 0.004 & 102 & 23.62 & 92.22 & 15.80 & 22.30 & 19.70 & 15.70 & **18.50** \\ \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & 0.398 & 607 & **34.75** & 40.2 & **39.70** & 0.007 & 126 & **27.03** & **29.40** & 16.40 & **22.70** & & No unknown labels \\ \hline \multicolumn{18}{c}{**Split B**} \\ \hline \(\times\) & \(\times\) & \(\times\) & 3.133 & 1895 & 21.67 & 18.94 & 18.70 & 1.82 & 829 & 17.20 & 00.00 & 15.40 & 06.60 & 00.00 & 20.20 & 07.50 \\ \(\times\) & \(\checkmark\) & \(\times\) & 2.147 & 21.70 & 21.70 & 23.80 & 23.50 & 1.563 & **375** & 13.08 & 00.00 & **13.80** & 07.90 & 00.20 & **25.40** & 09.40 \\ \hline \(\checkmark\) & \(\times\) & \(\times\) & 3.129 & 1995 & 21.70 & 18.94 & 18.70 & 18.93 & 11.081 & 26.63 & 18.00 & 16.40 & 17.20 & 17.10 & 20.10 & 18.30 \\ \(\checkmark\) & \(\checkmark\) & \(\times\) & **2.147** & **1397** & 21.70 & 27.30 & **23.50** & 0.466 & 413 & 20.90 & 18.60 & 16.90 & 17.70 & **18.50** & 24.20 & **20.69** \\ \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & 3.684 & 1780 & **24.79** & 23.6 & 23.30 & 0.755 & 581 & **24.21** & **18.70** & 17.30 & **17.90** & No unknown labels \\ \hline \multicolumn{18}{c}{**Split C**} \\ \hline \(\times\) & \(\times\) & \(\times\) & 1.591 & 1752 & 15.66 & 15.00 & 14.80 & 1.361 & 0.00 & 15.00 & 0.00 & 15.00 & 09.40 & 00.00 & 14.60 & 04.70 \\ \(\times\) & \(\checkmark\) & \(\times\) & 0.227 & 828 & 11.44 & **18.70** & 18.40 & 1.361 & 365 & 10.16 & 0.00 & 19.50 & 09.40 & 00.00 & 19.10 & 6.20 \\ \hline \(\checkmark\) & \(\times\) & \(\times\) & 1.2901 & 1752 & **15.66** & **15.00** & 14.80 & 1.799 & 666 & **15.99** & 13.50 & 19.70 & 16.40 & 17.50 & 17.70 & 17.50 \\ \(\checkmark\) & \(\checkmark\) & \(\times\) & **0.227** & **828** & 11.44 & **18.70** & **18.40** & **0.088** & **208** & 12.63 & **14.50** & 22.10 & **18.00** & 17.80 & 17.70 & 17.80 \\ \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & 0.419 & 1294 & 14.34 & 18 & 17.60 & 0.152 & 303 & 15.80 & 13.90 & **22.20** & 
17.80 & No unknown labels \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Extensive ablation of the added components.** We perform the ablation by adding the Probability Correction (**PC**) and Confidence Thresholding (**CT**) components to **3D-OWIS\(-\)PC\(-\)CT**. We compare performance in terms of mAP, U-Recall, WI, and A-OSE. Even though **3D-OWIS** retrieves the _unknown_ classes well without **PC** and **CT**, as reflected by the high U-Recall, it still performs poorly on the _known_ classes, as indicated by the high WI and A-OSE. This negative impact on the _known_ classes accumulates over the tasks and results in a further reduction in mAP. When adding **CT**, the performance on the _known_ classes improves considerably and remains consistent throughout the incremental learning process. Probability Correction (**PC**) significantly improves the U-Recall in all cases. Even though it shows lower performance in terms of WI and A-OSE, the overall mAP slightly improves or remains higher by a large margin compared to **3D-OWIS\(-\)PC\(-\)CT**. This shows that adding Confidence Thresholding and Probability Correction gives the best compromise in performance on both _known_ and _unknown_ classes.
considerably when fine-tuning **3D-OWIS\(-\)PC\(-\)CT**, while the WI and A-OSE either decrease or increase along with the mAP on the _unknown_ class. On the other hand, our model limits training to only the best pseudo-labels, which maintains good performance on the _known_ classes in all cases, before and after fine-tuning, and also achieves results on the _unknown_ class comparable to **3D-OWIS\(-\)PC\(-\)CT** in most cases. Adding the probability correction module helps improve the U-Recall while keeping the mAP of the _known_ classes well above that of **3D-OWIS\(-\)PC\(-\)CT**. However, it results in an increase in WI and A-OSE because of the increase in false positives on the _known_ classes.
**tSNE analysis.** The tSNE plot shown in Fig. 6 illustrates the below-par performance of **3D-OWIS\(-\)PC\(-\)CT** in clustering the _unknown_ classes, where most queries still carry features representative of the _known_ classes. This behavior is a result of the weak supervision of the _unknown_ class, which shows the need for correcting the predictions, and explains the improvement in U-Recall when applying the probability correction, with negligible deterioration in the _known_-classes mAP in most cases.
**Qualitative analysis.** Fig. 5 shows that 3D-OWIS is able to correctly identify background and _unknown_ objects as _unknown_. Also note the second scene, where predictions are corrected from _known_ to _unknown_ without affecting the predictions of the _known_ classes.
## 6 Limitations
Confidence Thresholding (**CT**) enhances the performance of the model on _known_ classes; nonetheless, it diminishes the model's capacity to segment _unknown_ classes, mainly due to its reliance on a smaller number of pseudo-labels during training. Additionally, the effectiveness of Probability Correction (**PC**) is contingent upon the inherent characteristics of the clusters within the _known_ classes. In scenarios characterized by data imbalance, the performance of probability correction may deteriorate when applied to the undersampled classes.
## 7 Conclusion
In this paper, we address the challenge of 3D instance segmentation in open-world scenarios, which is a novel problem formulation. We propose an innovative approach that incorporates an _unknown_ object identifier to detect objects not present in the training set. To facilitate evaluation and experimentation, we present three dataset splits of ScanNet200 based on different criteria for selecting _unknown_ objects. Our experimental results demonstrate that our proposed _unknown_ object identifier significantly improves the detection of _unknown_ objects across various tasks and dataset splits. This work contributes to advancing the localization and segmentation of 3D objects in real-world environments and paves the way for more robust and adaptable vision systems.
**Acknowledgement** The computations were enabled by the Berzelius resource provided by the Knut and Alice Wallenberg Foundation at the National Supercomputer Centre.
Figure 6: **tSNE visualization** of the queries for _known_ & _unknown_ classes |
2309.15026 | Instance complexity of Boolean functions | In the area of query complexity of Boolean functions, the most widely studied
cost measure of an algorithm is the worst-case number of queries made by it on
an input. Motivated by the most natural cost measure studied in online
algorithms, the competitive ratio, we consider a different cost measure for
query algorithms for Boolean functions that captures the ratio of the cost of
the algorithm and the cost of an optimal algorithm that knows the input in
advance. The cost of an algorithm is its largest cost over all inputs.
Grossman, Komargodski and Naor [ITCS'20] introduced this measure for Boolean
functions, and dubbed it instance complexity. Grossman et al. showed, among
other results, that monotone Boolean functions with instance complexity 1 are
precisely those that depend on one or two variables.
We complement the above-mentioned result of Grossman et al. by completely
characterizing the instance complexity of symmetric Boolean functions. As a
corollary we conclude that the only symmetric Boolean functions with instance
complexity 1 are the Parity function and its complement. We also study the
instance complexity of some graph properties like Connectivity and k-clique
containment.
In all the Boolean functions we study above, and those studied by Grossman et
al., the instance complexity turns out to be the ratio of query complexity to
minimum certificate complexity. It is a natural question to ask if this is the
correct bound for all Boolean functions. We show a negative answer in a very
strong sense, by analyzing the instance complexity of the Greater-Than and
Odd-Max-Bit functions. We show that the above-mentioned ratio is linear in the
input size for both of these functions, while we exhibit algorithms for which
the instance complexity is a constant. | Alison Hsiang-Hsuan Liu, Nikhil S. Mande | 2023-09-26T15:56:14Z | http://arxiv.org/abs/2309.15026v1 | # Instance complexity of Boolean functions
###### Abstract
In the area of query complexity of Boolean functions, the most widely studied cost measure of an algorithm is the worst-case number of queries made by it on an input. Motivated by the most natural cost measure studied in online algorithms, the _competitive ratio_, we consider a different cost measure for query algorithms for Boolean functions that captures the ratio of the cost of the algorithm and the cost of an optimal algorithm that knows the input in advance. The cost of an algorithm is its largest cost over all inputs. Grossman, Komargodski and Naor [ITCS'20] introduced this measure for Boolean functions, and dubbed it _instance complexity_. Grossman et al. showed, among other results, that monotone Boolean functions with instance complexity \(1\) are precisely those that depend on one or two variables.
We complement the above-mentioned result of Grossman et al. by completely characterizing the instance complexity of _symmetric_ Boolean functions. As a corollary we conclude that the only symmetric Boolean functions with instance complexity \(1\) are the Parity function and its complement. We also study the instance complexity of some graph properties like Connectivity and \(k\)-clique containment.
In all the Boolean functions we study above, and those studied by Grossman et al., the instance complexity turns out to be the ratio of query complexity to _minimum certificate complexity_. It is a natural question to ask if this is the correct bound for _all_ Boolean functions. We show a negative answer in a very strong sense, by analyzing the instance complexity of the Greater-Than and Odd-Max-Bit functions. We show that the above-mentioned ratio is linear in the input size for both of these functions, while we exhibit algorithms for which the instance complexity is a constant.
## 1 Introduction
In the typical setting of online algorithms, an algorithm designer's task is to design an efficient algorithm that is geared towards receiving inputs in an online fashion. More specifically, the input is revealed to an (online) algorithm piece by piece. On each revelation of a piece of the input, the algorithm needs to make _irrevocable_ decisions. A natural cost measure of an online algorithm is the _competitive ratio_, which is defined as the biggest ratio of the algorithm's cost to the optimal offline algorithm's cost on the same input, where the optimal offline algorithm knows the whole input.
Worst-case analysis is a setting studied in various different models, for instance the query complexity model, which is relevant to our discussion. Unlike the worst-case setting, competitive analysis does not focus only on measuring the performance of algorithms on single "hard" inputs, but is more representative of the performance of algorithms on all inputs as a whole. Moreover, it reveals how uncertainty affects the quality of decisions. The measure of competitive ratio has gained interest of late in the context of _explorable uncertainty_. In this model, instead of completely unknown inputs, an algorithm receives an uncertain input with the promise that every numerical value sits in an interval, where the realization of the value can be learned by _exploration_. The concept of explorable uncertainty has attracted a lot of attention and has been studied for different problems, such as Pandora's box problem [DFR\({}^{+}\)23], sorting [HdL21], finding the median [FMP\({}^{+}\)00], identifying a minimum-weight set among a given collection of feasible sets [EHK16], finding shortest paths [FMO\({}^{+}\)07], computing minimum spanning trees [HEK\({}^{+}\)08], etc. In most of these problems, the cost of an algorithm on an input is naturally considered to be the competitive ratio, which is the ratio of the number of explorations made (i.e., the number of _queries_ made to the input) to the number of explorations made by the best offline algorithm that knows the input in advance.
We consider query algorithms for Boolean functions, which are functions mapping an \(n\)-bit input to a single bit. A query algorithm for a Boolean function is represented by a _decision tree_, which is a rooted binary tree with internal nodes labeled by variables, edges labeled by values in \(\{0,1\}\), and leaves labeled by values in \(\{0,1\}\). The decision tree evaluates an input in the natural way, beginning at the root and traversing a path until a leaf is reached, at which point it outputs the value at the leaf. In the usual query complexity setting, the cost of this decision tree is its depth. The query complexity of a Boolean function \(f\), also known as the decision tree complexity of \(f\), is the worst-case number of queries an optimal query algorithm makes, i.e., the minimum depth of a decision tree computing \(f\).
It is natural to consider the cost measure described in the second paragraph for query algorithms for Boolean functions rather than the more general class of functions studied in the online algorithms setting. In the setting of explorable uncertainty, this corresponds to the input being unknown in the beginning, and each bit has a "uncertainty range" of \(\{0,1\}\) rather than an "uncertainty interval" like in optimization problems as from the first two paragraphs. To this end, Grossman, Komargodski and Naor [10] only recently initiated the study of _instance complexity_ of Boolean functions, where instance complexity of an algorithm is the maximum over all inputs of the ratio of the number of queries made on the input to the number of queries made by an optimal algorithm that knows the whole input. We define the instance complexity of a Boolean function to be the minimum instance complexity of an algorithm that solves it. Intuitively, studying the instance complexity of Boolean functions overcomes some drawbacks that the usual query complexity model has. For example, the query complexity of a function could be very large owing to the hardness of one single input, but it may be the case that even algorithms with prior knowledge about this input may require lots of queries to certify the function's evaluation on it. Nevertheless, it does turn out to be single "hard" inputs that contribute to the large instance complexity of the AND and OR functions, just as in the usual query complexity setting, but we show that this is not always the case.
### Our contributions
We continue the study of instance complexity of Boolean functions initiated by Grossman et al. Among other results, they characterized monotone Boolean functions which are strictly instance optimizable, i.e., monotone Boolean functions which have instance complexity equal to \(1\). We complement this result by completely characterizing the instance complexity of _symmetric_ Boolean functions in terms of their univariate predicates. We refer the reader to Section 2 for formal definitions.
For a symmetric function \(f:\{0,1\}^{n}\to\{0,1\}\), let the integers \(0\leq\ell_{0}(f)\leq\ell_{1}(f)\leq n\) denote the end points of the largest interval of Hamming weights in which \(f\) is a constant.
**Theorem 1.1**.: _Let \(f:\{0,1\}^{n}\to\{0,1\}\) be a symmetric Boolean function. Then,_
\[\mathsf{InstC}(f)=\frac{n}{\ell_{0}(f)+n-\ell_{1}(f)}.\]
In particular it follows that the only symmetric Boolean functions that are strictly instance optimizable are the Parity function and its complement. In the process we show that the instance complexity of a symmetric Boolean function \(f\) is the ratio between its query complexity \(\mathsf{DT}(f)\) (which is the decision tree complexity of \(f\), equaling the number of input variables for symmetric \(f\)) and its _minimum certificate complexity_\(\mathsf{C_{min}}(f)\), which is the smallest number of variables one needs to fix in order to fix the function value to a constant. In other words, \(\mathsf{C_{min}}(f)\) equals the minimum co-dimension of an affine subcube on which \(f\) is a constant.
More generally, one can observe that \(\mathsf{DT}(f)/\mathsf{C_{min}}(f)\) is an upper bound on the instance complexity of any Boolean function \(f\). Along with showing that this bound is attained for all symmetric functions, we also show that this bound is attained for some _graph properties_ like Connectivity and \(k\)-clique containment. Let \(\mathsf{CONN}\) and \(\mathsf{CL}_{k}\) denote the Connectivity and \(k\)-Clique problems, respectively (See Definitions 4.1 and 4.2).
**Theorem 1.2**.: _Let \(n\) and \(k=O(n^{2/3})\) be positive integers. Then,_
\[\mathsf{InstC}(\mathsf{CONN})=\frac{\mathsf{DT}(\mathsf{CONN})}{\mathsf{C}_{\mathsf{ min}}(\mathsf{CONN})}=\frac{\binom{n}{2}}{n-1},\qquad\mathsf{InstC}(\mathsf{CL}_{k})= \frac{\mathsf{DT}(\mathsf{CL}_{k})}{\mathsf{C}_{\mathsf{min}}(\mathsf{CL}_{k}) }=\frac{\binom{n}{2}}{\binom{k}{2}}.\]
In view of this one may expect that the instance complexity of _all_ Boolean functions \(f\) equals \(\mathsf{DT}(f)/\mathsf{C}_{\mathsf{min}}(f)\). We show that this is false in a very strong way using two examples: the Greater-Than function on \(2n\) input variables, and the Odd-Max-Bit function on \(n\) variables, denoted by \(\mathsf{GT}_{n}\) and \(\mathsf{OMB}_{n}\), respectively (see Definitions 5.1 and 5.2). Both of these functions have \(\mathsf{DT}(f)/\mathsf{C}_{\mathsf{min}}(f)=\Theta(n)\) but instance complexity \(O(1)\).
**Theorem 1.3**.: _For all odd positive integers \(n\), we have_
\[\mathsf{DT}(\mathsf{GT}_{n}) =2n,\qquad\mathsf{C}_{\mathsf{min}}(\mathsf{GT}_{n})=2,\qquad \mathsf{InstC}(\mathsf{GT}_{n})\leq 2\] \[\mathsf{DT}(\mathsf{OMB}_{n}) =n,\qquad\mathsf{C}_{\mathsf{min}}(\mathsf{OMB}_{n})=1,\qquad \mathsf{InstC}(\mathsf{OMB}_{n})<2.\]
While none of our results are deep or technically involved, our main goal is to bring to light the natural and interesting complexity measure of _instance complexity_ of Boolean functions. Some interesting open questions that remain are to characterize the instance complexity of monotone or linear threshold functions in terms of some combinatorial parameter, just as we were able to do for symmetric functions.
## 2 Preliminaries
For a positive integer \(n\), we use the notation \([n]\) to denote the set \(\{1,2,\ldots,n\}\). For a string \(x\in\{0,1\}^{n}\) and a set \(S\subseteq[n]\), we denote by \(x_{S}\) the string in \(\{0,1\}^{S}\) that is the restriction of \(x\) to the coordinates indexed by \(S\). For a string \(x\in\{0,1\}^{n}\), let \(|x|\) denote the Hamming weight of \(x\), that is, the number of \(1\)s in \(x\). Let \(\mathsf{XOR}_{n}:\{0,1\}^{n}\to\{0,1\}\) denote the Parity function on \(n\) input bits, that outputs \(1\) iff the number of \(1\)'s in the input is odd. Let \(\mathsf{MAJ}_{n}:\{0,1\}^{n}\to\{0,1\}\) denote the Majority function that outputs \(1\) iff the number of \(1\)'s is at least the number of \(0\)'s in the input. Define the Indexing function as follows.
**Definition 2.1** (Indexing Function).: _For a positive integer \(m\), define the Indexing function, denoted \(\mathsf{IND}_{m}:\{0,1\}^{m+2^{m}}\to\{0,1\}\), by_
\[\mathsf{IND}_{m}(x,y)=y_{\mathsf{bin}(x)},\]
_where \(\mathsf{bin}(x)\) denotes the integer in \([2^{m}]\) represented by the binary expansion \(x\)._
In the above definition, we refer to \(\{x_{i}:i\in[m]\}\) as the _addressing variables_, and \(\{y_{j}:j\in[2^{m}]\}\) as the _target variables_.
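For concreteness, a minimal Python sketch of the Indexing function (the MSB-first reading of the address is our illustrative convention; the definition only requires that the \(m\) addressing bits select one of the \(2^{m}\) target bits):

```python
def ind(x_bits, y_bits):
    """Indexing function IND_m: the m addressing bits select one of
    the 2**m target bits, which becomes the output."""
    m = len(x_bits)
    assert len(y_bits) == 2 ** m
    # Read the address MSB-first (an illustrative convention).
    idx = int("".join(map(str, x_bits)), 2)
    return y_bits[idx]

# m = 2: address (1, 0) encodes 2, so the output is the target bit y[2].
print(ind([1, 0], [0, 1, 1, 0]))  # 1
```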
A deterministic decision tree is a rooted binary tree. Internal nodes are labeled by variables \(x_{i}\), and leaves are labeled by values in \(\{0,1\}\). Given an input \(x\in\{0,1\}^{n}\), the tree's evaluation on the input proceeds in the natural way: traverse the relevant edge depending on the value of the variable of the node until reaching a leaf, at which point the value at the leaf is output. A decision tree \(\mathcal{T}\) is said to compute a Boolean function \(f:\{0,1\}^{n}\to\{0,1\}\) if its output equals \(f(x)\) for all \(x\in\{0,1\}^{n}\). Overloading notation, we denote the number of queries made by \(\mathcal{T}\) on input \(x\) by \(\mathcal{T}(x)\). The cost of \(\mathcal{T}\) is the worst-case number of queries it makes, i.e., its depth.
The decision tree complexity (also called _deterministic query complexity_) of \(f\), denoted \(\mathsf{DT}(f)\), is defined as follows.
\[\mathsf{DT}(f):=\min_{\mathcal{T}:\mathcal{T}\text{ is a DT computing }f}\mathrm{depth}( \mathcal{T}).\]
_Certificate complexity_ captures non-deterministic query complexity. A certificate for an input \(x\in\{0,1\}^{n}\) to a function \(f:\{0,1\}^{n}\to\{0,1\}\) is a set \(S\subseteq[n]\) such that \(f(y)=f(x)\) for all \(y\in\{0,1\}^{n}\) with \(y_{S}=x_{S}\). The certificate complexity of \(f\) at input \(x\), denoted \(\mathsf{C}(f,x)\), is the minimum size of such a set \(S\). The certificate complexity of \(f\), denoted \(\mathsf{C}(f)\), is defined as follows.
\[\mathsf{C}(f)=\max_{x\in\{0,1\}^{n}}\mathsf{C}(f,x).\]
We define another complexity measure that has been studied in the past: _minimum certificate complexity_. This is the minimum co-dimension of an affine subcube on which the underlying function is a constant.
**Definition 2.2**.: _For a Boolean function \(f:\{0,1\}^{n}\to\{0,1\}\), define the minimum certificate complexity of \(f\), denoted \(\mathsf{C}_{\mathsf{min}}(f)\), to be_
\[\mathsf{C}_{\mathsf{min}}(f):=\min_{x\in\{0,1\}^{n}}\mathsf{C}(f,x).\]
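For small \(n\), both \(\mathsf{C}(f,x)\) and \(\mathsf{C}_{\mathsf{min}}(f)\) can be computed by brute force (an illustrative sketch, not part of the formal development): for each candidate set \(S\), check that \(f\) agrees with \(f(x)\) on every input consistent with \(x\) on \(S\).

```python
from itertools import combinations, product

def certificate_complexity(f, x, n):
    """C(f, x): the smallest |S| with f(y) = f(x) whenever y_S = x_S."""
    for size in range(n + 1):
        for S in map(set, combinations(range(n), size)):
            if all(f(tuple(x[i] if i in S else y[i] for i in range(n))) == f(x)
                   for y in product([0, 1], repeat=n)):
                return size

def c_min(f, n):
    """C_min(f): the minimum certificate complexity over all inputs."""
    return min(certificate_complexity(f, x, n)
               for x in product([0, 1], repeat=n))

AND3 = lambda x: int(all(x))
print(c_min(AND3, 3))  # 1: fixing any single 0 already fixes AND_3 to 0
```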
The interested reader may refer to the survey [1] for an introduction to query complexity and related measures of Boolean functions.
Grossman, Komargodski and Naor [1] introduced the measure of _instance complexity_ of a Boolean function. Although they do not frame it as below, we find the form below convenient as it cleanly captures a complexity measure.
**Definition 2.3**.: _For a Boolean function \(f:\{0,1\}^{n}\to\{0,1\}\), an input \(x\in\{0,1\}^{n}\) and a decision tree \(\mathcal{T}\) that computes \(f\), define the instance complexity of \(f\) at input \(x\) w.r.t. \(\mathcal{T}\), which we denote by \(\mathsf{InstC}(f,x,\mathcal{T})\), to be_
\[\mathsf{InstC}(f,x,\mathcal{T}):=\frac{\mathcal{T}(x)}{\mathsf{C}(f,x)}.\]
_Define the instance complexity of \(f\) w.r.t. \(\mathcal{T}\) to be_
\[\mathsf{InstC}(f,\mathcal{T}):=\max_{x\in\{0,1\}^{n}}\mathsf{InstC}(f,x, \mathcal{T}).\]
_Finally, define the instance complexity of \(f\), which we denote by \(\mathsf{InstC}(f)\), to be_
\[\mathsf{InstC}(f)=\min_{\mathcal{T}:\mathcal{T}\ \text{computes}\ f}\mathsf{ InstC}(f,\mathcal{T}).\]
In other words, the instance complexity of a function \(f\) is small if there exists a decision tree solving it such that for _all_ inputs, the cost of the decision tree is not much larger than the cost of an optimal decision tree on that input (i.e., the certificate complexity of \(f\) at that input). Functions of instance complexity \(1\) are precisely those that Grossman et al. refer to as _strictly \(D\)-instance optimizable_.
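These definitions are easy to evaluate by brute force on small examples. In the illustrative sketch below, a decision tree is a leaf value or a nested tuple \((i,t_{0},t_{1})\) querying variable \(i\), and the certificate complexity of \(\mathsf{AND}_{2}\) is supplied in closed form:

```python
from itertools import product

def run(tree, x):
    """Evaluate a decision tree on x; return (output, #queries).
    A tree is a leaf value 0/1 or a tuple (i, t0, t1)."""
    queries = 0
    while not isinstance(tree, int):
        i, t0, t1 = tree
        queries += 1
        tree = t1 if x[i] else t0
    return tree, queries

def instance_complexity(tree, f, n, cert):
    """InstC(f, tree): max over inputs x of T(x) / C(f, x)."""
    worst = 0.0
    for x in product([0, 1], repeat=n):
        out, q = run(tree, x)
        assert out == f(x), "the tree must compute f"
        worst = max(worst, q / cert(x))
    return worst

# AND_2 with the tree that queries x_0 first, then x_1 only if needed.
AND2 = lambda x: int(all(x))
cert = lambda x: 2 if all(x) else 1  # C(AND_2, x): only all-ones needs both bits
tree = (0, 0, (1, 0, 1))
print(instance_complexity(tree, AND2, 2, cert))  # 2.0, matching InstC(AND_n) = n
```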
### Prior work
Grossman, Komargodski and Naor [1] showed the following, among other results, regarding the instance complexity of specific Boolean functions.
**Lemma 2.4** ([1, Section 3]).: _For all positive integers \(n,m\),_
\[\mathsf{InstC}(\mathsf{XOR}_{n})=\mathsf{InstC}(\mathsf{IND}_{m}) =1,\] \[\mathsf{InstC}(\mathsf{MAJ}_{n}) \approx 2,\] \[\mathsf{InstC}(\mathsf{AND}_{n})=\mathsf{InstC}(\mathsf{OR}_{n}) =n.\]
A Boolean function \(f:\{0,1\}^{n}\to\{0,1\}\) is said to be _monotone_ if \(x\leq y\implies f(x)\leq f(y)\). Here \(x\leq y\) represents coordinate-wise inequality. In other words, \(f\) is monotone if flipping a \(1\) to a \(0\) in any input can never change the function value from \(0\) to \(1\). Examples of monotone functions are \(\mathsf{AND},\mathsf{OR},\mathsf{MAJ}\). Grossman, Komargodski and Naor [1, Lemma 3.4] characterized monotone functions \(f\) that satisfy \(\mathsf{InstC}(f)=1\). They showed that the monotone functions satisfying \(\mathsf{InstC}(f)=1\) are precisely those that depend on either \(0\) or \(1\) variable.
A natural upper bound on \(\mathsf{InstC}(f)\) is \(\mathsf{DT}(f)/\mathsf{C}_{\mathsf{min}}(f)\). This is simply because an optimal algorithm (w.r.t. query complexity) for \(f\) witnesses this: the largest possible numerator and smallest possible denominator in the first expression of Definition 2.3 are \(\mathsf{DT}(f)\) (witnessed by an optimal decision tree algorithm for \(f\)) and \(\mathsf{C}_{\mathsf{min}}(f)\), respectively. We show in Section 3 that this bound is tight for the class of symmetric Boolean functions \(f\). In Section 4 we analyze the instance complexity of some graph properties, and show that the bound is tight in these cases as well. In Section 5 we show that such an equality does not hold for general Boolean \(f\).
## 3 Instance complexity of symmetric Boolean functions
In this section we completely characterize instance complexity of symmetric Boolean functions. This is a generalization of [1, Examples 3.2, 3.3]. We first formally define symmetric functions below. For a positive integer \(n\), we use \(S_{n}\) to denote the group of permutations of \(n\) elements.
**Definition 3.1** (Symmetric functions).: _A function \(f:\{0,1\}^{n}\to\{0,1\}\) is symmetric if for all \(\sigma\in S_{n}\) and for all \(x\in\{0,1\}^{n}\) we have \(f(x)=f(\sigma(x))\)._
Equivalently, a function is symmetric iff its output only depends on the Hamming weight of the input. Hence, we may identify a symmetric function \(f\) with its associated _predicate_, denoted \(D_{f}\), and defined as
\[D_{f}(i)=b\qquad\text{if }|x|=i\implies f(x)=b.\]
For a symmetric function \(f:\{0,1\}^{n}\to\{0,1\}\), let the integers \(0\leq\ell_{0}(f)\leq\ell_{1}(f)\leq n\) denote the end points of the largest interval of Hamming weights in which \(f\) is a constant. See Figure 1 for a pictorial description.
We observe below that the minimum certificate complexity of a symmetric function \(f\) equals \(\ell_{0}(f)+n-\ell_{1}(f)\).
**Claim 3.2**.: _Let \(f:\{0,1\}^{n}\to\{0,1\}\) be a symmetric Boolean function. Then_
\[\mathsf{C}_{\mathsf{min}}(f)=\ell_{0}(f)+n-\ell_{1}(f).\]
Proof.: We prove the upper bound and lower bound separately.
* For the upper bound, consider an input \(x\) with Hamming weight \(k\in[\ell_{0}(f),\ell_{1}(f)]\). Consider a set \(S_{0}\) of indices of \(\ell_{0}(f)\) many \(1\)'s of \(x\), and a set \(S_{1}\) of indices of \(n-\ell_{1}(f)\) many \(0\)'s of \(x\). Such sets clearly exist since \(k\in[\ell_{0}(f),\ell_{1}(f)]\). Any input consistent with these input bits must have Hamming weight in \([\ell_{0}(f),\ell_{1}(f)]\), on which \(f\) is constant by assumption. Thus, \(S_{0}\cup S_{1}\) is a certificate for \(x\) of size \(\ell_{0}(f)+n-\ell_{1}(f)\).
* Towards the lower bound, assume towards a contradiction that there is an input \(x\) with a certificate \(C\) of size \(w<\ell_{0}(f)+n-\ell_{1}(f)\). Suppose \(C\) contains \(m_{0}\) \(0\)-indices and \(m_{1}\) \(1\)-indices, where \(m_{0}+m_{1}=w\). The fact that \(C\) is a certificate implies that \(f\) must output the same value on all inputs of Hamming weight in \([m_{1},n-m_{0}]\). The length of this interval is \[n-m_{0}-m_{1}=n-w>\ell_{1}(f)-\ell_{0}(f),\] contradicting the maximality of the interval \([\ell_{0}(f),\ell_{1}(f)]\).
Figure 1: Visual representation of the predicate \(D_{f}\) of a symmetric Boolean function \(f\). The interval \([\ell_{0}(f),\ell_{1}(f)]\) is the largest interval on which \(f\) is constant.
As an immediate corollary, we obtain the following.
**Corollary 3.3**.: _The only symmetric functions \(f:\{0,1\}^{n}\to\{0,1\}\) with \(\mathsf{C}_{\mathsf{min}}(f)=n\) are the parity function \(\mathsf{XOR}_{n}\) and its negation._
Our characterization of the instance complexity of symmetric Boolean functions is as follows.
**Theorem 3.4** (Restatement of Theorem 1.1).: _Let \(f:\{0,1\}^{n}\to\{0,1\}\) be a symmetric Boolean function. Then,_
\[\mathsf{InstC}(f)=\frac{n}{\mathsf{C}_{\mathsf{min}}(f)}=\frac{n}{\ell_{0}(f) +n-\ell_{1}(f)}.\]
Proof.: The last equality follows from Claim 3.2. We prove the upper bound and lower bound of the first equality separately.
* For the upper bound, consider the naive query algorithm that queries all the input bits. The instance complexity of \(f\) w.r.t. this algorithm is clearly \(\frac{n}{\mathsf{C}_{\mathsf{min}}(f)}\).
* For the lower bound proof, assume without loss of generality that \(\ell_{0}(f)\neq 0\). Thus, \(D_{f}(\ell_{0}(f)-1)\neq D_{f}(\ell_{0}(f))\). Towards a lower bound, consider a decision tree \(T\) that computes \(f\). Consider the path of \(T\) that answers \(0\) to the first \(n-\ell_{0}(f)\) variables, and \(1\) to all of the \(\ell_{0}(f)\) variables after that. Such a path must exist for the following reason: if the tree terminated earlier, there would be inputs consistent with the path so far of Hamming weights \(\ell_{0}(f)-1\) and \(\ell_{0}(f)\), on which \(f\) takes different values, so \(T\) could not compute \(f\), contradicting our assumption. The input that reaches this leaf has Hamming weight \(\ell_{0}(f)\), and \(T\) queries all \(n\) of its variables. The certificate complexity of such an input is \(\ell_{0}(f)+n-\ell_{1}(f)\): a certificate is a set of \(\ell_{0}(f)\) \(1\)'s and \(n-\ell_{1}(f)\) \(0\)'s. Hence \(\mathsf{InstC}(f,T)\geq n/(\ell_{0}(f)+n-\ell_{1}(f))\), which concludes the proof of the lower bound.
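As a sanity check on Theorem 3.4, the following illustrative sketch reads \(\ell_{0}(f),\ell_{1}(f)\) off the predicate of a symmetric function and evaluates \(n/(\ell_{0}(f)+n-\ell_{1}(f))\), recovering the values of Lemma 2.4:

```python
def instc_symmetric(predicate, n):
    """InstC of a symmetric function from its predicate D_f on {0, ..., n},
    via Theorem 3.4: n / (l0 + n - l1) for the largest constant interval."""
    D = [predicate(w) for w in range(n + 1)]
    l0, l1, i = 0, 0, 0
    while i <= n:
        j = i
        while j < n and D[j + 1] == D[i]:
            j += 1                      # extend the constant interval
        if j - i > l1 - l0:
            l0, l1 = i, j               # keep the largest interval found
        i = j + 1
    return n / (l0 + n - l1)

n = 5
print(instc_symmetric(lambda w: w % 2, n))            # XOR_5: 1.0
print(instc_symmetric(lambda w: int(w == n), n))      # AND_5: 5.0
print(instc_symmetric(lambda w: int(2 * w >= n), n))  # MAJ_5: 2n/(n+1) ~ 1.67
```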
As a corollary we obtain a complete characterization of symmetric Boolean functions that are strictly \(D\)-instance optimizable (that is, functions with instance complexity \(1\)). We view this as an analogous result to Grossman et al.'s characterization of monotone functions that are strictly \(D\)-instance optimizable.
**Corollary 3.5**.: _The only symmetric Boolean functions that are strictly \(D\)-instance optimizable are the parity function \(\mathsf{XOR}_{n}\) and its negation._
Proof.: It follows from Corollary 3.3 and Theorem 3.4.
## 4 Instance complexity of some graph properties
In this section we give tight bounds on the instance complexity of the graph properties of Connectivity and \(k\)-Clique. In the setting of graph properties, our input is a string in \(\{0,1\}^{\binom{n}{2}}\), one variable per edge. A variable being set to \(1\) means the corresponding edge is present, and a variable being set to \(0\) means the corresponding edge is absent. Thus, we identify an unweighted simple graph \(G\) on \(n\) vertices with its corresponding \(\binom{n}{2}\)-bit string. We now list the problems of interest to us.
**Definition 4.1** (Connectivity).: _For a positive integer \(n\), define the function \(\mathsf{CONN}:\{0,1\}^{\binom{n}{2}}\to\{0,1\}\) as \(\mathsf{CONN}(G)=1\) iff \(G\) is connected._
**Definition 4.2** (\(k\)-Clique).: _For positive integers \(0<k\leq n\), define the function \(\mathsf{CL}_{k}:\{0,1\}^{\binom{n}{2}}\to\{0,1\}\) as \(\mathsf{CL}_{k}(G)=1\) iff \(G\) contains a \(k\)-clique as a subgraph._
Our main theorem of this section is as follows.
**Theorem 4.3** (Restatement of Theorem 1.2).: _Let \(n\) and \(k=O(n^{2/3})\) be positive integers. Then,_
\[\mathsf{InstC}(\mathsf{CONN})=\frac{\mathsf{DT}(\mathsf{CONN})}{\mathsf{C}_{ \mathsf{min}}(\mathsf{CONN})}=\frac{\binom{n}{2}}{n-1},\qquad\mathsf{InstC}( \mathsf{CL}_{k})=\frac{\mathsf{DT}(\mathsf{CL}_{k})}{\mathsf{C}_{\mathsf{min}} (\mathsf{CL}_{k})}=\frac{\binom{n}{2}}{\binom{k}{2}}.\]
Proof.: We first note that both of the graph properties have maximal query complexity, then analyze their \(\mathsf{C}_{\mathsf{min}}\) values, and finally show the required bounds on their \(\mathsf{InstC}\) values.
* We first note that \(\mathsf{CONN}\) and \(\mathsf{CL}_{k}\) are known to be _evasive_ graph properties, that is, their query complexity is \(\binom{n}{2}\) (see [1] and [1]). Thus, \[\mathsf{DT}(\mathsf{CONN})=\mathsf{DT}(\mathsf{CL}_{k})=\binom{n}{2}.\]
* Next, the certificate complexity of \(1\)-inputs to \(\mathsf{CONN}\) equals \(n-1\): the total number of connected components in the graph must be \(1\) after querying all certificate variables, implying that the certificate size is at least \(n-1\). In the other direction, a spanning tree with \(n-1\) edges serves as a certificate for connectivity. A certificate for a \(0\)-input cannot have fewer than \(n-1\) edges, since any such certificate must completely contain a cut, and the smallest possible cut, defined by a single vertex, has \(n-1\) edges. For all values of \(k\), the certificate complexity of \(1\)-inputs to \(\mathsf{CL}_{k}\) equals \(\binom{k}{2}\) since any certificate must contain a \(k\)-clique (otherwise setting all variables outside the certificate gives a \(0\)-input), and a single \(k\)-clique serves as a certificate. On the other hand, in order to certify \(0\)-inputs, we may assume that a certificate only queries \(0\)-variables (otherwise simply drop the queried \(1\)-variables to obtain a smaller certificate). Moreover, the certificate must have the property that even if all variables outside it are set to \(1\), the graph does not contain a \(k\)-clique. Turán's theorem [14] (also see [1, 1]) states that any graph that does not contain a \(k\)-clique has at most \(\frac{n^{2}(k-2)}{2(k-1)}\) edges. This implies that a certificate for \(0\)-inputs must contain at least \(\binom{n}{2}-\frac{n^{2}(k-2)}{2(k-1)}\) variables (otherwise set all variables outside the certificate to \(1\), which yields a graph containing a \(k\)-clique by Turán's theorem). Moreover, there exists a graph achieving this bound [14]. Thus, \[\mathsf{C}_{\mathsf{min}}(\mathsf{CONN})=n-1,\qquad\mathsf{C}_{\mathsf{min}}(\mathsf{CL}_{k})=\min\left\{\binom{k}{2},\binom{n}{2}-\frac{n^{2}(k-2)}{2(k-1)}\right\}.\] For \(k=O(n^{2/3})\), one may verify that \(\binom{k}{2}<\binom{n}{2}-\frac{n^{2}(k-2)}{2(k-1)}\) (checked numerically in the sketch after this proof), and hence \(\mathsf{C}_{\mathsf{min}}(\mathsf{CL}_{k})=\binom{k}{2}\) in this regime.
* To see the claimed bounds on instance complexity, first recall that the bound \(\mathsf{InstC}(f)\leq\mathsf{DT}(f)/\mathsf{C}_{\mathsf{min}}(f)\) holds for all Boolean functions \(f\). Consider a query algorithm solving \(\mathsf{CONN}\). Since \(\mathsf{CONN}\) is evasive, there exists an input such that the function value remains undetermined even after \(\binom{n}{2}-1\) queries. If the unqueried edge is \(e\), this means that the graph with \(e\) absent (and the other edges consistent with the queries so far) is not connected, and the graph with \(e\) present is connected. The graph with \(e\) present has a \(1\)-certificate of size \(n-1\), and hence \[\mathsf{InstC}(\mathsf{CONN})\geq\frac{\binom{n}{2}}{n-1}.\] Consider a query algorithm solving \(\mathsf{CL}_{k}\). Again, since \(\mathsf{CL}_{k}\) is evasive, this implies the existence of an input whose function value is undetermined before the last query. Just as in the argument for \(\mathsf{CONN}\), let the unqueried edge be denoted by \(e\). The graph with \(e\) absent (and the other edges consistent with the queries so far) does not contain a \(k\)-clique, and the graph with \(e\) present contains a \(k\)-clique. The graph with \(e\) present has a \(1\)-certificate of size \(\binom{k}{2}\), and hence \[\mathsf{InstC}(\mathsf{CL}_{k})\geq\frac{\binom{n}{2}}{\binom{k}{2}}.\]
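The regime condition \(\binom{k}{2}<\binom{n}{2}-\frac{n^{2}(k-2)}{2(k-1)}\) used above is easy to check numerically; an illustrative sketch (the constant hidden in \(k=O(n^{2/3})\) matters near the boundary):

```python
from math import comb

def cmin_clique(n, k):
    """C_min(CL_k): the smaller of the 1-certificate size C(k,2) and the
    Turan-based 0-certificate size C(n,2) - n^2 (k-2) / (2 (k-1))."""
    return min(comb(k, 2), comb(n, 2) - n * n * (k - 2) / (2 * (k - 1)))

n = 1000  # n^(2/3) = 100
for k in (10, 30, 50):
    # In this regime the 1-certificate term C(k,2) is the minimum.
    print(k, comb(k, 2), cmin_clique(n, k))
```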
## 5 Instance complexity of some linear threshold functions
In Section 3 we characterized the instance complexity of all symmetric Boolean functions. In particular, Theorem 3.4 shows that \(\mathsf{InstC}(f)=\mathsf{DT}(f)/\mathsf{C}_{\mathsf{min}}(f)\) for all symmetric Boolean functions \(f\). We also showed this bound to hold true for specific graph properties in Section 4. This raises the natural question of whether \(\mathsf{InstC}(f)=\mathsf{DT}(f)/\mathsf{C}_{\mathsf{min}}(f)\) holds true for _all_ Boolean \(f\). We show in this section that this is not the case in a very strong sense, and exhibit two examples witnessing this.
The first example is the Greater-Than function that takes two \(n\)-bit strings as input and outputs \(1\) iff the first string is lexicographically larger than the second one.
**Definition 5.1** (Greater-Than).: _For a positive integer \(n\), the Greater-Than function on \(2n\) inputs, denoted \(\mathsf{GT}_{n}\), is defined by_
\[\mathsf{GT}_{n}(x_{1},\ldots,x_{n},y_{1},\ldots,y_{n})=1\iff\sum_{i=1}^{n}2^{i}(x_{i}-y_{i})>0.\]
The second example is the Odd-Max-Bit function that takes an \(n\)-bit string as input and outputs \(1\) iff the right-most variable with value \(1\) has an odd index.
**Definition 5.2** (Odd-Max-Bit).: _For a positive integer \(n\), the Odd-Max-Bit function on \(n\) inputs, denoted \(\mathsf{OMB}_{n}\), is defined by_
\[\mathsf{OMB}_{n}(x_{1},\ldots,x_{n})=1\iff\max\left\{i\in[n]:x_{i}=1\right\} \text{ is odd}.\]
_Define \(\mathsf{OMB}_{n}(0^{n})=0\)._
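Illustrative reference implementations of the two functions, convenient for brute-force checks of the claims below (index \(i\) carries weight \(2^{i}\) in \(\mathsf{GT}_{n}\), and \(x_{i}\) is stored at position \(i-1\)):

```python
def gt(x, y):
    """GT_n(x, y) = 1 iff sum_i 2^i (x_i - y_i) > 0."""
    return int(sum((1 << i) * (x[i - 1] - y[i - 1])
                   for i in range(1, len(x) + 1)) > 0)

def omb(x):
    """OMB_n(x) = 1 iff the largest index i with x_i = 1 is odd; 0 on 0^n."""
    ones = [i for i in range(1, len(x) + 1) if x[i - 1] == 1]
    return int(bool(ones) and max(ones) % 2 == 1)

print(gt([0, 0, 1], [0, 1, 0]))  # 1: x_3 (weight 8) outweighs y_2 (weight 4)
print(omb([0, 1, 1]))            # 1: the maximum 1-index is 3, which is odd
```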
**Theorem 5.3** (Restatement of Theorem 1.3).: _For all odd positive integers \(n\), we have_
\[\mathsf{DT}(\mathsf{GT}_{n}) =2n, \mathsf{C}_{\mathsf{min}}(\mathsf{GT}_{n})=2, \mathsf{InstC}(\mathsf{GT}_{n})\leq 2\] \[\mathsf{DT}(\mathsf{OMB}_{n}) =n, \mathsf{C}_{\mathsf{min}}(\mathsf{OMB}_{n})=1, \mathsf{InstC}(\mathsf{OMB}_{n})<2.\]
Before we prove the theorem, we require the following properties of Boolean functions. Every Boolean function \(f:\{0,1\}^{n}\to\{0,1\}\) has a unique multilinear polynomial expansion as \(f(x)=\sum_{S\subseteq[n]}\widetilde{f}(S)\prod_{i\in S}x_{i}\), where each \(\widetilde{f}(S)\) is a real number. This expansion is sometimes referred to as the _Möbius expansion_ of \(f\). The _degree_ of \(f\), denoted by \(\deg(f)\), is the maximum degree of a monomial in its Möbius expansion that has a non-zero coefficient. It is not hard to show that a depth-\(d\) decision tree for \(f\) induces a degree-\(d\) polynomial computing \(f\): sum up the indicator polynomials of each \(1\)-leaf, and each of these indicator polynomials can easily be seen to have degree at most the depth of their corresponding leaf. This yields the following folklore lemma.
**Lemma 5.4** (Folklore).: _Let \(f:\{0,1\}^{n}\to\{0,1\}\) be a Boolean function. Then \(\mathsf{DT}(f)\geq\deg(f)\)._
We now prove Theorem 5.3.
Proof of Theorem 5.3.: We first show that \(\mathsf{DT}(\mathsf{GT}_{n})=2n\) and \(\mathsf{DT}(\mathsf{OMB}_{n})=n\), after which we show the \(\mathsf{C}_{\mathsf{min}}\) bounds and finally we show the \(\mathsf{InstC}\) bounds.
* Towards a \(2n\) lower bound for \(\mathsf{DT}(\mathsf{GT}_{n})\), consider an adversary who uses the following strategy against a query algorithm for \(\mathsf{GT}_{n}\) for the first \(2n-1\) queries:
* If a variable \(x_{i}\) is queried for the first time out of the pair \(\{x_{i},y_{i}\}\), answer \(1\).
* If a variable \(y_{i}\) is queried for the first time out of the pair \(\{x_{i},y_{i}\}\), answer \(0\).
* If a variable \(x_{i}\) (\(y_{i}\)) is queried such that \(y_{i}\) (\(x_{i}\)) has already been queried, then answer such that \(x_{i}=y_{i}\).
Just before the last query of the algorithm, we know that the strings are equal except for one bit. Say \(x_{i}\) is the last unqueried bit. By the definition of the adversary, we know that \(y_{i}=0\). Thus, we have \(\mathsf{GT}_{n}(x,y)=0\) if \(x_{i}=0\) and \(\mathsf{GT}_{n}(x,y)=1\) if \(x_{i}=1\). That is, the function value depends on the last unqueried input bit. A similar argument works if the last unqueried bit is \(y_{i}\). Thus, \(\mathsf{DT}(\mathsf{GT}_{n})\geq 2n\) and hence \(\mathsf{DT}(\mathsf{GT}_{n})=2n\).
* The Odd-Max-Bit function outputs the parity of the largest index with variable value \(1\) if it exists, and outputs \(0\) otherwise. One may find such an index by scanning the input from right to left. This intuition is captured in the unique polynomial representation of \(\mathsf{OMB}_{n}\) as \(\mathsf{OMB}_{n}(x)=\) \[x_{n}\cdot 0+(1-x_{n})x_{n-1}\cdot 1+(1-x_{n})(1-x_{n-1})\mathsf{OMB}_{n-2}(x_{1},\ldots,x_{n-2})\qquad\text{if $n$ is even, or}\] (1) \[x_{n}\cdot 1+(1-x_{n})x_{n-1}\cdot 0+(1-x_{n})(1-x_{n-1})\mathsf{OMB}_{n-2}(x_{1},\ldots,x_{n-2})\qquad\text{if $n$ is odd,}\] (2) with \(\mathsf{OMB}_{1}(x_{1})=x_{1}\) and \(\mathsf{OMB}_{2}(x_{1},x_{2})=0\cdot x_{2}+(1-x_{2})x_{1}=x_{1}-x_{1}x_{2}\). Note that in either case, the coefficient of the maximum-degree monomial equals \((-1)^{n+1}\neq 0\) (this is verified numerically in the sketch following this proof), and hence the degree is \(n\). By Lemma 5.4, this implies \(\mathsf{DT}(\mathsf{OMB}_{n})\geq n\), and hence \(\mathsf{DT}(\mathsf{OMB}_{n})=n\).
* The inequalities \(\mathsf{C}_{\mathsf{min}}(\mathsf{GT}_{n})\leq 2\) and \(\mathsf{C}_{\mathsf{min}}(\mathsf{OMB}_{n})\leq 1\) are witnessed by the strings \(0^{n-1}10^{n}\) and \(0^{n-1}1\), respectively. It is easy to see that no inputs \(z\in\{0,1\}^{2n},w\in\{0,1\}^{n}\) satisfy \(\mathsf{C}(\mathsf{GT}_{n},z)=1\) and \(\mathsf{C}(\mathsf{OMB}_{n},w)=0\), yielding \(\mathsf{C}_{\mathsf{min}}(\mathsf{GT}_{n})=2\) and \(\mathsf{C}_{\mathsf{min}}(\mathsf{OMB}_{n})=1\).
* For the instance complexity of Greater-Than, consider the natural query algorithm \(\mathcal{A}\) that first queries the most significant bits of \(x\) and \(y\), outputs the answer if it can already be deduced at this point, and otherwise (i.e., when the two bits seen are equal) recurses on the Greater-Than instance without these two bits (see Figure 2 for a visual description of the query algorithm \(\mathcal{A}\) used).
We now argue that this algorithm \(\mathcal{A}\) witnesses \(\mathsf{InstC}(\mathsf{GT}_{n})\leq 2\). We analyze the instance complexity of each input with respect to \(\mathcal{A}\).
* Consider an input of the form \((x,y)\) with \(x=y\). By definition, this is a \(0\)-input for \(\mathsf{GT}_{n}\). In order to certify that \(x\not>y\), it suffices to query all of the \(0\)-variables of \(x\) and all of the \(1\)-variables of \(y\). It is also necessary that a certificate queries at least one element of each pair \((x_{i},y_{i})\), since otherwise the function value could be set to \(1\) by setting \(x_{i}=1\) and \(y_{i}=0\), a contradiction to the assumption that we started with a certificate. Thus, \(\mathsf{C}(\mathsf{GT}_{n},(x,y))=n\) for all \(x=y\). The number of input bits read by our query algorithm can easily be seen to be in \(\{2n-1,2n\}\). Thus, \(\mathsf{InstC}(\mathsf{GT}_{n},(x,y),\mathcal{A})\leq\frac{2n}{n}=2\) for all inputs with \(x=y\).
* For an input \((x,y)\) with \(x\neq y\), let \(n-j\) denote the largest index with \(x_{n-j}\neq y_{n-j}\). Here, \(j\in\{0,1,\ldots,n-1\}\). Just as in the argument in the previous bullet, it is easy to show that a certificate for \((x,y)\) must query at least one variable from each pair \((x_{i},y_{i})\) with \(i\geq n-j\).1 Thus, \(\mathsf{C}(\mathsf{GT}_{n},(x,y))\geq j+1\) for all such inputs. The number of queries made by our algorithm \(\mathcal{A}\) can be seen to be in \(\{2j+1,2j+2\}\): it queries all pairs \((x_{i},y_{i})\) with \(i\geq n-j\) (except that it skips \(y_{1}\) when \(n-j=1\) and \(x_{1}=0\)). This implies \(\mathsf{InstC}(\mathsf{GT}_{n},(x,y),\mathcal{A})\leq\frac{2j+2}{j+1}=2\) for all inputs \((x,y)\) with \(n-j\) the largest index where \(x_{n-j}\neq y_{n-j}\). Footnote 1: Moreover, there exists a certificate that makes one extra query: if \((x,y)\) is a \(1\)-input, query all of the \(x_{i}\) with \(i>n-j\) and \(x_{i}=1\), and query all of the \(y_{i}\) with \(i>n-j\) and \(y_{i}=0\). Finally, query the pair \((x_{n-j},y_{n-j})\).
Figure 2: A decision tree \(T_{n}\) for Greater-Than on \(2n\) input bits
* As in the argument for the instance complexity of \(\mathsf{GT}_{n}\), consider the natural query algorithm \(\mathcal{B}\) for Odd-Max-Bit that queries the variables from right to left and outputs the parity of the first index seen where the input takes value \(1\) (see Figure 3 for a visual description of \(\mathcal{B}\)).
We now argue that this algorithm \(\mathcal{B}\) witnesses \(\mathsf{InstC}(\mathsf{OMB}_{n})<2\). We analyze the instance complexity of each input with respect to \(\mathcal{B}\).
* Let \(x\) be a \(0\)-input to \(\mathsf{OMB}_{n}\). First, consider the input \(x=0^{n}\). Any certificate for this input must query all variables \(x_{i}\) with \(i\) odd, since if unqueried, we could set \(x_{i}=1\), forcing \(\mathsf{OMB}_{n}=1\). Moreover, the set of all variables \(x_{i}\) with \(i\) odd forms a certificate for \(0^{n}\). Thus, \(\mathsf{C}(\mathsf{OMB}_{n},0^{n})=(n+1)/2\). The algorithm \(\mathcal{B}\) queries all variables on this input, and hence \(\mathsf{InstC}(\mathsf{OMB}_{n},0^{n},\mathcal{B})=\frac{2n}{n+1}<2\). Next, consider a \(0\)-input \(x\neq 0^{n}\) with \(n-i\) the maximum index satisfying \(x_{n-i}=1\). Since \(x\) is a \(0\)-input, \(n-i\) is even and hence \(i\) is odd. A certificate for \(x\) must query all variables \(x_{j}\) with \(j>n-i\) and \(j\) odd, since otherwise setting \(x_{j}=1\) forces the output of \(\mathsf{OMB}_{n}\) to \(1\). It must also query at least one more variable. Moreover there exists such a certificate, where the extra variable queried is \(x_{n-i}\). Thus, \(\mathsf{C}(\mathsf{OMB}_{n},x)=\frac{i+1}{2}+1=\frac{i+3}{2}\). The algorithm \(\mathcal{B}\) queries \(i+1\) variables on this input, and hence \(\mathsf{InstC}(\mathsf{OMB}_{n},x,\mathcal{B})=\frac{2(i+1)}{i+3}<2\).
* Consider a \(1\)-input \(x\) with \(n-i\) the maximum index satisfying \(x_{n-i}=1\). Since \(x\) is a \(1\)-input, \(n-i\) is odd and hence \(i\) is even. A certificate for \(x\) must query all variables \(x_{j}\) with \(j>n-i\) and \(j\) even, since otherwise setting \(x_{j}=1\) forces the output of \(\mathsf{OMB}_{n}\) to \(0\). It must also query at least one more variable. Moreover there exists such a certificate, where the extra variable queried is \(x_{n-i}\). Thus, \(\mathsf{C}(\mathsf{OMB}_{n},x)=\frac{i}{2}+1=\frac{i+2}{2}\). The algorithm \(\mathcal{B}\) queries \(i+1\) variables on this input, and hence \(\mathsf{InstC}(\mathsf{OMB}_{n},x,\mathcal{B})=\frac{2(i+1)}{i+2}<2\).
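The coefficient claim forward-referenced in the degree argument above can be checked by brute force: by Möbius inversion, the coefficient of \(\prod_{i\in S}x_{i}\) equals \(\sum_{T\subseteq S}(-1)^{|S|-|T|}f(1_{T})\). An illustrative sketch for the full monomial \(S=[n]\):

```python
from itertools import product

def omb(x):
    """OMB_n(x), with x_i stored at position i - 1."""
    ones = [i for i in range(1, len(x) + 1) if x[i - 1] == 1]
    return int(bool(ones) and max(ones) % 2 == 1)

def top_coefficient(f, n):
    """Coefficient of x_1 ... x_n in the Mobius expansion:
    sum over all inputs x of (-1)^(n - |x|) f(x)."""
    return sum((-1) ** (n - sum(x)) * f(x) for x in product([0, 1], repeat=n))

for n in range(1, 7):
    print(n, top_coefficient(omb, n))  # prints (-1)**(n+1): 1, -1, 1, ...
```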
## Acknowledgment
This work has been supported by Research England funding to enhance research culture.
|
2309.05235 | P2LSG: Powers-of-2 Low-Discrepancy Sequence Generator for Stochastic
Computing | Stochastic Computing (SC) is an unconventional computing paradigm processing
data in the form of random bit-streams. The accuracy and energy efficiency of
SC systems highly depend on the stochastic number generator (SNG) unit that
converts the data from conventional binary to stochastic bit-streams. Recent
work has shown significant improvement in the efficiency of SC systems by
employing low-discrepancy (LD) sequences such as Sobol and Halton sequences in
the SNG unit. Still, the usage of many well-known random sequences for SC
remains unexplored. This work studies some new random sequences for potential
application in SC. Our design space exploration proposes a promising random
number generator for accurate and energy-efficient SC. We propose P2LSG, a
low-cost and energy-efficient Low-discrepancy Sequence Generator derived from
Powers-of-2 VDC (Van der Corput) sequences. We evaluate the performance of our
novel bit-stream generator for two SC image and video processing case studies:
image scaling and scene merging. For the scene merging task, we propose a novel
SC design for the first time. Our experimental results show higher accuracy and
lower hardware cost and energy consumption compared to the state-of-the-art. | Mehran Shoushtari Moghadam, Sercan Aygun, Mohsen Riahi Alam, M. Hassan Najafi | 2023-09-11T05:04:58Z | http://arxiv.org/abs/2309.05235v2 | # P2LSG: Powers-of-2 Low-Discrepancy Sequence Generator for Stochastic Computing
###### Abstract
Stochastic Computing (SC) is an unconventional computing paradigm processing data in the form of random bit-streams. The accuracy and energy efficiency of SC systems highly depend on the stochastic number generator (SNG) unit that converts the data from conventional binary to stochastic bit-streams. Recent work has shown significant improvement in the efficiency of SC systems by employing low-discrepancy (LD) sequences such as Sobol and Halton sequences in the SNG unit. Still, the usage of many well-known random sequences for SC remains unexplored. This work studies some new random sequences for potential application in SC. Our design space exploration proposes a promising random number generator for accurate and energy-efficient SC. We propose P2LSG, a low-cost and energy-efficient Low-discrepancy Sequence Generator derived from _Powers-of-2_ VDC (Van der Corput) sequences. We evaluate the performance of our novel bit-stream generator for two SC image and video processing case studies: image scaling and scene merging. For the scene merging task, we propose a novel SC design for the first time. Our experimental results show higher accuracy and lower hardware cost and energy consumption compared to the state-of-the-art.
Emerging computing, image processing, low-discrepancy sequences, pseudo-random sequences, quasi-random sequences, stochastic computing, video processing.
## I Introduction
Stochastic computing (SC) [1] is a re-emerging computing paradigm offering low-cost and noise-tolerant hardware designs. In contrast to traditional binary computing, which operates on positional binary radix numbers, SC designs process uniform bit-streams of '0's and '1's with no significant digits. While the paradigm was known for approximate computations for years, recent works showed deterministic and completely accurate computation using SC circuits [2, 3]. Encoding data from traditional binary to stochastic bit-streams is an important step in any SC system. The data are encoded by the probability of observing a '1' in the bit-stream. For example, a bit-stream with 25% '1' represents the data value of 0.25. The accuracy of the computations and the energy efficiency of the SC designs highly depend on this encoding step, particularly on the distribution of '1's and '0's in the bit-streams. A stochastic number generator (SNG), which encodes a data value in binary format to a stochastic bit-stream, consists of a random number generator (RNG) and a binary comparator. Fig. 1 shows the structure of an SNG commonly used in SC systems. At any cycle, the output of comparing the input data with the random number from the RNG unit produces one bit of the bit-stream.
The choice of the RNG unit directly affects the distribution of the bits in stochastic bit-streams. While traditionally _pseudo-random_ sequences generated by linear-feedback shift registers (LFSRs) were used, the state-of-the-art (SOTA) studies employ _quasi-random_ sequences, such as **Sobol (S)** [2, 4] and **Halton (HL)** [5, 6] sequences, for high-quality generation of stochastic bit-streams. These sequences remove an important source of error in SC, the random fluctuation error [7] in generating bit-streams, by producing _Low-Discrepancy_ (LD) bit-streams. LD bit-streams quickly converge to the target value, reducing the length of bit-streams and, consequently, the latency of stochastic computations. This latency reduction directly translates to savings in energy consumption (i.e., power \(\times\) latency), a critical metric in the hardware efficiency of SC systems.
A challenge with the SOTA SNGs using sequences such as **Sobol** and **Halton** is their relatively high hardware cost that affects the achievable energy savings when using these sequences. This study extends the SOTA random sequences for the high-quality encoding of data in SC. We analyze some high-quality random sequences for possible improvement in the performance and hardware efficiency of the SNG unit. For the first time, to the best of our knowledge, we explore **Weyl (W)**[8], **R2 (R)**[9], **Latin Hypercube (L)**[10], **Faure (F)**[11], **Hemmersly (HM)**[12], **Niederreiter (N)**[13, 14], **Van der Corput (VDC)**[15], and **Poisson Disk (P)**[16] sequences in the context of SC as promising alternatives to prior costly LD sequences. The primary contributions of this work are summarized as follows:
**1** We employ some high-quality random sequences for the first time in SC literature. We evaluate the performance and accuracy of stochastic computations when using these sequences.
**2** We propose **P2LSG** (Powers-of-2 Low-Discrepancy Sequence Generator), a lightweight LD Sequence Generator derived from _Powers-of-2_ VDC sequences, and evaluate its hardware cost compared to the SOTA.
**3** For the first time, we introduce a novel SC video processing design for scene merging.
**4** Experimental results on two SC image and video processing case studies show significant savings in area, energy, and latency with P2LSG compared to the SOTA non-parallel and parallel **Sobol**-based designs while maintaining comparative accuracy.
The rest of the paper is structured as follows. Section II provides the necessary background on random sequences and SC. Section III explores different random sequences in the context of SC. Section IV reveals the performance of **P2LSG** along with a new hardware design. Section V investigates the proposed generator for image scaling (interpolation) and scene merging. Finally, Section VI concludes the paper, summarizing the key findings and contributions.
Fig. 1: The architecture of a stochastic number generator (SNG).
## II Background
### _Random Sequences_
Random sequences are widely used in various research domains, particularly in emerging computing technologies [17]. Some sequences are binary-valued with logic-1s and logic-0s [18]. These sequences possess the orthogonality property; that is, different random sequences are (approximately) uncorrelated. Some sequences, on the other hand, are non-binary-valued (having fixed- or floating-point numbers). All non-binary-valued sequences listed in Section I (**S**, **HL**, **W**, **R**, **L**, **F**, **HM**, **N**, **VDC**, **P**) have LD properties. The _discrepancy_ term in LD refers to how much the sequence points deviate from uniformity [19]. The _recurrence_ property (i.e., the constructibility of further-indexed sequences from the previous-indexed ones) in LD sequences is beneficial for cross-correlation, which is advantageous for SC systems that require uncorrelated bit-streams [20].
The **Weyl** sequence belongs to the class of additive recurrence sequences, characterized by their generation through the iteration of multiples of an irrational number modulo \(1\). Specifically, by considering \(\alpha\in\mathbb{R}\) as an irrational number and \(x_{i}\in\{0,\alpha,2\alpha,...,k\alpha\}\), the sequence \(x_{i}-\lfloor x_{i}\rfloor\) (\(x_{i}\) modulo \(1\)) produces an equidistributed sequence within the interval \((0,1)\). Another example of an additive recurrence sequence is the **R** sequence, which is based on the _Plastic Constant_ (the unique real solution of the cubic equation \(x^{3}=x+1\)) [9, 21]. The **Latin Hypercube** sequences involve partitioning the sampling space into equally sized intervals and randomly selecting a point within each interval [22].
The **VDC** sequence serves as the foundation for many LD sequences. It is constructed by reversing the digits of the numbers in a specific base, representing each integer value as a fraction within the \([0,1)\) interval. A **VDC** sequence in base-B is notated with **VDC-B**. As an example, the decimal value \(11\) in base-3 is represented by \((102)_{3}\). The corresponding value for the **VDC-3** is \(2\times 3^{-1}+0\times 3^{-2}+1\times 3^{-3}=\frac{19}{27}\). Similarly, the **Faure**, **Hammersley**, and **Halton** sequences are derived from the **VDC** concept using prime or co-prime numbers. To generate the **Faure** sequence in \(q\)-dimensions, the smallest prime number \(p\) is selected such that \(p\geq q\). The first dimension of the **Faure** sequence corresponds to the **VDC-p** sequence, while the remaining dimensions involve permutations of the first dimension. The \(q\)-dimensional **Halton** sequence is generated by utilizing the **VDC** sequence with different prime bases starting from the first to the \(q\)-th prime number. A limitation of the **Halton** sequence is in utilizing prime number bases, which increases the complexity of the sequence generation [23].
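As a behavioral reference (not the hardware design), the radical-inverse computation behind a base-B **VDC** sequence takes only a few lines and reproduces the base-3 example above:

```python
def vdc(i, base):
    """i-th Van der Corput number in the given base: reverse the base-B
    digits of i and place them after the radix point (radical inverse)."""
    value, denom = 0.0, 1.0
    while i > 0:
        i, digit = divmod(i, base)
        denom *= base
        value += digit / denom
    return value

print(vdc(11, 3))  # 0.7037... = 19/27, since 11 = (102)_3 reverses to 0.201_3
```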
The **Hammersley** sequence shares some similarities with the **Halton** sequence. For the sake of a fair comparison with the **Halton** sequence, we adopt different bases for the **Hammersley** sequence in this work. The first **Sobol** sequence is the same as the **VDC-2** sequence. The other **Sobol** sequences are generated through permutations of some sets of direction vectors [24]. The **Niederreiter** sequence is another variant of the **VDC** sequence relying on the powers of some prime numbers. This sequence features irreducible and primitive polynomials that ensure LD and uniformity over the sample space [25]. Finally, the **Poisson Disk** sequence generates evenly distributed numbers with minimal distance between them.
### _Stochastic Computing (SC)_
SC has gained attention recently due to its intriguing advantages, such as robustness to noise, high parallelism, and low design cost. Complex arithmetic operations are realized with simple logic gates in SC. Significant savings in the implementation costs are achieved for different applications, from image processing [26] to sorting [27] and machine learning [28, 29], to name a few. Data conversion is an essential step in SC systems. Input numbers must be converted to random bit-streams, where each bit has equal significance. SC supports real data in the unit interval, i.e., [0, 1]. A common coding format is unipolar encoding (UPE). In UPE, the probability of observing a '1' in the bit-stream \(X\), i.e., \(P(X=1)\), equals the input value or \(x\). The common method for generating a bit-stream of size \(N\) is to compare the input number with \(N\) random numbers (\(R_{1}...R_{N}\)). This is usually done serially in \(N\) clock cycles. A logic-1 is produced at the output if the input value is greater than the random number; a logic-0 is produced otherwise. The distribution of logic-1s in the produced bit-stream depends on the sequence of random numbers. When dealing with signed values (\(x\) is in the range \(-1\leq x\leq 1\)), a bipolar encoding (BPE) is used, in which the probability that each bit in the bit-stream is '1' is \(P(X=1)\) = \(\frac{(x+1)}{2}\).
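A behavioral model of the SNG of Fig. 1 (a software sketch, assuming the RNG outputs are normalized to \([0,1)\); the example sequence happens to be the first eight VDC-2 numbers):

```python
def sng(value, rng_sequence):
    """Unipolar SNG: emit one bit per cycle, '1' whenever the input
    value exceeds the current random number."""
    return [int(value > r) for r in rng_sequence]

# An 8-cycle bit-stream encoding ~0.6 with an LD (VDC-2) sequence.
bits = sng(0.6, [0.0, 0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875])
print(bits, sum(bits) / len(bits))  # [1, 1, 1, 0, 1, 0, 1, 0] -> 0.625
```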
SC operations consist of simple bit-wise logic operations. Multiplication of bit-streams in UPE is achieved by a bit-wise AND [20], and in BPE by a bit-wise XNOR operation [31]. For accurate multiplication, the input bit-streams must be _uncorrelated_ with each other. Performing bit-wise AND on _correlated_ bit-streams with a maximum overlap in the position of '1's gives the _minimum_ of the input bit-streams [27]. Scaled addition is realized in SC by a multiplexer (MUX) unit for both encodings [32]. For scaled subtraction, a MUX with one inverter is utilized [26]. The main inputs of the MUX can be correlated, but they should be uncorrelated with the select input bit-stream.
## III Design Space Exploration
This section comprehensively examines the use of the random sequences discussed in Section II for SC. We first analyze these sequences for basic SC operations and then extend the evaluations to more complex case studies. The numbers provided by these sequences are used as the required random numbers (\(R_{1}...R_{N}\)) during bit-stream generation. Prior works used **Sobol**[4] and **Halton**[6] sequences for LD bit-stream generation. In this study, for the first time we propose **P2LSG**, a new LD sequence generator based on the **VDC**_Powers-of-2_ bases (e.g., **VDC-2**, **VDC-4**, **VDC-8**,..., **VDC-2\({}^{m\in\mathbb{Z}^{+}}\)**) for cost-efficient LD bit-stream generation. The proposed sequence generator is cost- (area and power) and energy-efficient for hardware implementation.
### _Benchmark-I: SC Multiplication_
We first evaluate the performance of the selected sequences for 2-input SC multiplication. Two input values (\(X1\) and \(X2\)) are converted to bit-stream representation using random sequences, and the generated bit-streams are bit-wise ANDed to produce the output bit-stream. The resulting bit-stream is converted back to standard representation (by counting the number of '1's and dividing by the length of the bit-stream) and compared with the expected multiplication result to find the absolute error. Here, the expected value is \(P_{X1}\times P_{X2}\). For accurate multiplication, the input bit-streams must be _uncorrelated_. In the literature, _Stochastic Cross-Correlation_ (\(SCC\)) is used to quantify the correlation between bit-streams [20]. In this metric, the correlation is calculated by using cumulative values denoted
by \(a\), \(b\), \(c\), and \(d\), which depend on the counts of 11, 10, 01, or 00 pairs in the overlapping bits between the two bit-streams:
\[SCC=\begin{cases}\frac{ad-bc}{N\times\min(a+b,a+c)-(a+b)\times(a+c)}&,\;\text{if }ad>bc\\ \frac{ad-bc}{(a+b)\times(a+c)-N\times\max(a-d,0)}&,\;\text{else}\end{cases} \tag{1}\]
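A direct software transcription of Eq. (1), with the counts \(a\), \(b\), \(c\), and \(d\) tallied from the overlapping bit pairs (an analysis model, not a hardware block):

```python
def scc(X, Y):
    """Stochastic cross-correlation (Eq. 1) of two equal-length bit-streams."""
    N = len(X)
    a = sum(x * y for x, y in zip(X, Y))        # number of 11 overlaps
    b = sum(x * (1 - y) for x, y in zip(X, Y))  # number of 10 overlaps
    c = sum((1 - x) * y for x, y in zip(X, Y))  # number of 01 overlaps
    d = N - a - b - c                           # number of 00 overlaps
    if a * d > b * c:
        denom = N * min(a + b, a + c) - (a + b) * (a + c)
    else:
        denom = (a + b) * (a + c) - N * max(a - d, 0)
    return (a * d - b * c) / denom if denom else 0.0

print(scc([1, 1, 0, 0], [1, 0, 1, 0]))  # 0.0: uncorrelated
print(scc([1, 1, 0, 0], [1, 1, 0, 0]))  # 1.0: fully correlated
```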
We exhaustively evaluated the multiplication accuracy for all Cartesian combinations of \(X1\) and \(X2\) where the inputs are 8-bit precision values in the \([0,1)\) interval (i.e., \(\frac{0}{256}\), \(\frac{1}{256}\),..., \(\frac{255}{256}\)). The bit-stream lengths vary from \(2^{6}\) to \(2^{16}\) with \(2\times\) increments. Table I and Fig. 2 (a) present the Mean Absolute Error (MAE) of the multiplication results. We multiply the measured mean values by 100 and report them as percentages. Two different sequences are selected for each case to satisfy the uncorrelation requirement (\(SCC=0\)). For the Sobol sequence, the first two Sobol sequences from the MATLAB built-in Sobol sequence generator are used. For the Faure sequence, two sequences are created using VDC-7. The first dimension is generated using base-7, while the second is obtained through a permutation of the first one. The Halton sequence involves two dimensions generated using base-11 (VDC-11) and base-13 (VDC-13) with the MATLAB built-in Halton generator. For the Hammersley sequence, we use the VDC-2 and VDC-3 sequences. The Latin Hypercube sequence was also generated using its MATLAB built-in function. For the Weyl sequence, \(\pi\) and the _Silver Ratio_ (i.e., \(\sqrt{2}-1\)) were chosen as the irrational numbers. The first dimension of our P2LSG sequence is VDC-2, while VDC-N is selected for the other dimensions depending on the bit-stream length (\(N\)). As we move from left to right in Table I, the length of the bit-streams increases, and as expected, the accuracy improves (MAE decreases). Notably, the VDC-related sequences exhibit favorable convergence rates. Specifically, after \(2^{10}\) operation cycles, the P2LSG sequence surpasses the Niederreiter sequence and approaches the Sobol sequence in terms of accuracy. For approximate results, the Sobol, Niederreiter, and P2LSG sequences emerge as the top performers. As can be seen in Fig. 2 (a), the convergence behavior of the P2LSG sequence outperforms other sequences as the length of the bit-stream increases.
Fig. 2: MAE (\(\%\)) of SC operations on two 8-bit precision inputs: (a) SC Multiplication and (b) SC Scaled Addition. VDC-related sequences demonstrate favorable convergence rates. For approximate results, the **S**, **N**, **HM**, and **P2LSG** sequences emerge as the top performers.
### _Benchmark-II: SC Addition_
Next, we evaluate the accuracy of the SC _Scaled-Addition_. We utilize a 2-to-1 MUX with two 8-bit precision input operands similar to the multiplication operation. For this SC operation, the two addends (the main inputs of the MUX) are correlated (\(SCC=1\)), while the MUX select input is uncorrelated to the addends [20]. To meet this requirement, we use a random sequence to generate the main input bit-streams and another sequence to generate the bit-stream corresponding to the MUX select input. For two-input addition, a bit-stream corresponding to 0.5 value is generated for the select input. Table II and Fig. 2 (b) present the accuracy results in terms of MAE for different bit-stream lengths. Due to using a select input with 2-bit precision (i.e., 0.5 value), accurate output (0.0% MAE) can be achieved with a bit-stream length of \(2^{9}\) by using sequences such as **Sobol**, **Niederreiter**, **Hammersley**, and **P2LSG**. As can be seen in Table II, for bit-stream sizes greater than \(2^{4}\), the **P2LSG** sequence achieves the minimum MAE among the other sequences. By increasing the bit-stream length (\(N\)), we can see that the MAE tends to zero for **Sobol** and **P2LSG** sequences.
## IV Proposed Sequence Generator
In this section, we propose a novel hardware design for **P2LSG** and evaluate its implementation cost compared to prior LD sequence generators. Alaghi and Hayes [6] implemented a **Halton** sequence generator consisting of mod counters, digit converters, and an adder. Fig. 3 (a) shows the **Halton** generator of [6]. Liu and Han [19] proposed a **Sobol** sequence generator by using some Direction Vectors (DVs). The DVs (\(V_{x}(x=0,1,...,N-1)\)) are generated using some primitive polynomials and stored in a Direction Vector Array (DVA). By employing different DVs, different **Sobol** sequences can be produced. In their design, a priority encoder finds the least significant zero (LSZ) in the output of a counter at any cycle. Depending on the position of the LSZ, a DV is selected from the DVA. A new **Sobol** number is recursively generated by XORing the respective DV and the previous **Sobol** number. Fig. 3 (b) shows the design of this **Sobol** generator.
Prior work suggested a look-up table-based approach for generating **VDC** sequences [33] without a custom hardware design for SC bit-stream generation. We propose a low-cost design for efficient and lightweight generation of the **P2LSG** sequences. Our design uses a \(log_{2}(N)\)-bit counter for generating different sequences of _Powers-of-2_ bases up to \(N\), where \(N\) is the length of the bit-stream.
For a fair comparison with previous random generators and to assess the performance for various image processing applications, we target bit-streams of length up to 256 (sufficient for representing 8-bit grayscale image data). Therefore, we require up to 256 different random numbers from the sequence generator to generate each bit-stream. The general algorithm to generate a base-B **VDC** sequence consists of five steps:
* Step 1: Generating an integer number.
* Step 2: Converting the integer number to its base-B representation.
* Step 3: Reversing the digits of the base-B representation.
* Step 4: Converting the reversed base-B representation back to a binary number.
* Step 5: Scaling the resulting number within the \([0,1)\) interval to the corresponding 8-bit binary number in the \([0,256)\) range to be connected to the binary comparator.
The complexity of the hardware design for this algorithm is closely tied to the chosen base. We classify the hardware designs into two categories depending on the selected base: _Class-I_: those _without Powers-of-2_ bases, and _Class-II_: those _with Powers-of-2_ bases.
### _Class-I: Non-Powers-of-2 Base Generators_
To implement this type of **VDC** sequence generator, we combine the first two steps, Steps 1 and 2, by utilizing a base-B counter to generate the integer numbers in the corresponding base. For instance, a _Binary Coded Decimal_ (BCD) counter can be employed for a base-10 representation. Step 3 is achieved by hard-wiring. Step 4 is implemented by employing adders and MUXs; this step is relatively complex and takes more hardware resources compared to the other steps. Step 5 can simply be achieved by shift operations.
The **Hammersley** and **Halton** sequences extend the **VDC** sequence to higher dimensions, representing each dimension in a different prime base-B. Consequently, the hardware implementation of these sequences falls under this particular type of sequence generator. The need for counters with prime radices and for base conversion makes the **Halton** sequence generator of [6] complex to implement in hardware. The hardware limitations of the design of [6] motivate us to explore the second class of generators for the _Powers-of-2_ bases that build **P2LSG**.
### _Class-II: Powers-of-2 Base Generators_
#### Iv-B1 Sequential Design
To implement _Powers-of-2_ base generators, a binary counter with sufficient bits is utilized to represent the desired range of integer numbers in Step 1. To convert the value of the binary counter to its base-B representation (Step 2), we consider groups of \(\log_{2}(B)\) bits, starting from the least significant bit. If the last group lacks enough bits, additional '0' bits are appended via zero padding to ensure it forms a complete group. The reversing operation in Step 3 is done by hard-wiring each group of bits, treating each group as a single digit in base-B. The process of converting a base-B number to its equivalent binary representation is the inverse of
Fig. 3: (a) The **Halton** sequence generator implemented in [6], (b) The **Sobol** sequence generator proposed in [19], (c) Our proposed **P2LSG** design for base-16 (**P2LSG**-16) as an example. Compared to the SOTA designs of **Halton** and **Sobol** in (a) and (b), **P2LSG** utilizes only T-FFs and simple hard-wiring.
Step 2. In this process, each group (base-B digit) is treated as the equivalent \(\log_{2}(B)\) bits of the binary representation, and any bits exceeding the width of the counter from Step 1 are discarded. Fig. 3 (c) demonstrates a simple 8-bit precision P2LSG for base-16 (P2LSG-16).
An Up-Counter counts up to 255, and the target sequence is obtained by significance inversion of each group; the least significant group becomes the most significant group, and vice versa. For this base-16 example, the output \(Q_{3}\) of the \(4^{th}\) T Flip-Flop (T-FF) from the right side becomes the most-significant bit. Fig. 4 (a) shows the overall idea behind the proposed P2LSG. After grouping each bit from the counter, the inversion (via hard-wiring) reverses the bit significance; the new binary output is ready for comparison in the SNG block. Fig. 4 (b) illustrates examples of different bases.
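In software, the significance inversion amounts to reversing fixed-width digit groups of the counter value. A behavioral sketch of the 8-bit design follows (the discarding of excess bits for non-dividing group widths, e.g., base-8, reflects our reading of the rule above):

```python
def p2lsg(count, base, width=8):
    """Reverse the log2(base)-bit groups of a `width`-bit counter value,
    mirroring the hard-wired significance inversion of P2LSG."""
    g = base.bit_length() - 1             # log2(base) bits per group
    n_groups = -(-width // g)             # ceiling division (zero padding)
    groups = [(count >> (g * k)) & (base - 1) for k in range(n_groups)]
    out = 0
    for digit in groups:                  # least significant group first...
        out = (out << g) | digit          # ...ends up most significant
    return out & ((1 << width) - 1)       # discard bits beyond the width

# First P2LSG-16 outputs in integer form, ready for the 8-bit comparator.
print([p2lsg(i, 16) for i in range(5)])  # [0, 16, 32, 48, 64]
```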
#### Iv-A2 Parallel Design
Our proposed _Class-II_ P2LSG can also operate in parallel. Fig. 5 illustrates how more than one number of a P2LSG sequence (in any base) can be generated in parallel at any cycle. Let us define \(PAR\) as the number of sequence elements to be generated in parallel. First, \(\log_{2}(PAR)\) bits are reserved at the least significant positions. The remaining bits require a reduced-precision counter (e.g., \(8\rightarrow 6\) in Fig. 5). At any clock cycle, the reserved bits are filled with all \(2^{\log_{2}(PAR)}=PAR\) possible logic values (parallel indexing). Fig. 5 (a) shows an example for \(PAR=4\). In this example, the reduced counter output is replicated four times, filling the reserved bits with 00, 01, 10, and 11, so the outputs at any cycle are four consecutive sequence numbers. Fig. 5 (b) illustrates another example of \(PAR=4\) for P2LSG-16.
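Parallel generation only fixes the reserved low bits to all \(PAR\) index values each cycle; a behavioral sketch reusing the `p2lsg` model from the previous sketch:

```python
def p2lsg_parallel(cycle, base, par, width=8):
    """Emit `par` consecutive P2LSG numbers per cycle: the reduced counter
    drives the high bits, and parallel indexing fills the low bits."""
    k = par.bit_length() - 1              # log2(PAR) reserved low bits
    return [p2lsg((cycle << k) | idx, base, width) for idx in range(par)]

# Two cycles of a PAR = 4 P2LSG-16 generator (cf. Fig. 5 (b)).
print(p2lsg_parallel(0, 16, 4))  # [0, 16, 32, 48]
print(p2lsg_parallel(1, 16, 4))  # [64, 80, 96, 112]
```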
Table III compares the hardware cost of generating Sobol and Halton sequences with the proposed sequence generator. We report the hardware area, power consumption, and critical path latency (CPL) for each case. We synthesized the designs using the Synopsys Design Compiler v2018.06 with the 45nm FreePDK gate library [34]. The reported numbers in Table III demonstrate that the proposed sequence generator surpasses the Sobol and Halton sequence generators in terms of hardware efficiency.
## V SC Image and Video Case Studies
In this section, we evaluate the performance and the hardware efficiency of the proposed P2LSG in two SC image and video processing case studies. Prior work has used SC for low-cost implementation of different computer vision tasks from depth perception to interpolation [35, 36, 37, 38, 39, 26]. We first evaluate the proposed sequence generator in an interpolation and image scaling application and then implement and study its effectiveness in a novel SC circuit for scene merging video processing, which we propose for the first time in the literature.
### _Interpolation and Image Scaling_
Interpolation refers to the process of estimating or calculating values between two known data points. Linear interpolation is a method used to estimate values between two known values based on a linear relationship [40]. It assumes a straight line between the available values and calculates intermediate values along that line. In image processing, linear interpolation is used to estimate pixel values between two neighbouring pixels. It is commonly employed when performing operations such as rotation, translation, or affine transformations on images. Bilinear interpolation is a specific case of linear interpolation applied in two dimensions. Instead of estimating values along a straight line, it estimates values within a two-dimensional grid of pixels [41]. Bilinear interpolation considers the four nearest pixels to the target location and calculates a weighted average based on their values. The weights are determined by the distances between the target location and the surrounding pixels; thereby, an image scaling task can be performed [42].
Assume we have an original image, \(I\), with pixel values represented by a 2-D array. We want to estimate the pixel value at a non-integer coordinate \((x,y)\) in the image. The four surrounding pixels to consider are \((x_{1},y_{1})\), \((x_{1},y_{2})\), \((x_{2},y_{1})\), and \((x_{2},y_{2})\), where \((x_{1},y_{1})\) represents the pixel at the bottom-left corner of the target location, and \((x_{2},y_{2})\) represents the pixel at the top-right corner. Let us denote the pixel values as \(I(x,y)\), \(I(x_{1},y_{1})\), \(I(x_{1},y_{2})\), \(I(x_{2},y_{1})\), and \(I(x_{2},y_{2})\). The bilinear interpolation formula to estimate the pixel value \(I(x,y)\) is as follows:
Fig. 4: Proposed P2LSG. (a) The general rule for hard-wiring bits in the reversing operation (aka significance inversion), and (b) an example for the 8-bit counter generating P2LSG-2, P2LSG-4, P2LSG-8, and P2LSG-16 sequences (up to P2LSG-256 is possible).
Fig. 5: Parallel P2LSG sequence generator. (a) The general rule for assigning the parallel indexing bits, and (b) a P2LSG-16 example with \(PAR=4\) concurrent sequence generation.
\(I(x,y)=(1-u)(1-v)\times I(x_{1},y_{1})+(1-u)v\times I(x_{1},y_{2})+u(1-v)\times I(x_{2},y_{1})+uv\times I(x_{2},y_{2})\), where \(u=x-x_{1}\) (fractional distance between \(x\) and \(x_{1}\)) and \(v=y-y_{1}\) (fractional distance between \(y\) and \(y_{1}\)). The values \((1-u)(1-v)\), \((1-u)v\), \(u(1-v)\), and \(uv\) are the weights assigned to each surrounding pixel. These weights represent the contribution of each pixel to the interpolated value. The interpolation formula can be compared to a multiplication-based SC MUX structure [32], where neighbouring pixels are fed into the main MUX inputs, and the location information is fed into the selection ports. In this scenario, a 4-to-1 MUX can be expressed in terms of probabilities as follows: \(P_{I(x,y)}=(1-P_{u})(1-P_{v})P_{I_{11}}+(1-P_{u})P_{v}P_{I_{12}}+P_{u}(1-P_{v})P_{I_{21}}+P_{u}P_{v}P_{I_{22}}\).
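To make the analogy concrete, the sketch below computes the interpolated pixel both directly from the weights and with a Monte-Carlo model of the 4-to-1 SC MUX, in which the select and data inputs are idealized as independent Bernoulli bitstreams; the function names are ours:

```python
import random

def bilinear(I, x, y):
    """Deterministic bilinear interpolation with the weights
    (1-u)(1-v), (1-u)v, u(1-v), uv; I is indexed as I[row][col]."""
    x1, y1 = int(x), int(y)
    u, v = x - x1, y - y1
    return ((1 - u) * (1 - v) * I[y1][x1] + (1 - u) * v * I[y1 + 1][x1]
            + u * (1 - v) * I[y1][x1 + 1] + u * v * I[y1 + 1][x1 + 1])

def sc_mux_bilinear(p11, p12, p21, p22, pu, pv, n=8192):
    """Monte-Carlo model of the 4-to-1 SC MUX: per cycle, Bernoulli
    select bits u, v pick one input, whose stochastic bit is counted."""
    ones = 0
    for _ in range(n):
        u = int(random.random() < pu)
        v = int(random.random() < pv)
        p = [[p11, p12], [p21, p22]][u][v]   # (u, v) selects I_11, I_12, I_21, I_22
        ones += random.random() < p          # one bit of the selected input stream
    return ones / n                          # converges to P_I(x,y) above
```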
Fig. 7 visually demonstrates the outputs of 2\(\times\) image scaling with an SC circuit composed of SNGs for data conversion and a 4-to-1 MUX unit. Table IV presents the performance results in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). We evaluated the SC circuit for the cases of using the P2LSG, Sobol (S), Niederreiter (N), and LFSR random sequences in the SNG units. We processed the _Mona Lisa_, _Minion_, and _Van Gogh_ images as the test images. The results in Fig. 7 and Table IV demonstrate the superior performance of the P2LSG sequences. We further evaluated the performance and energy consumption in 45nm CMOS technology when processing the _Mona Lisa_ image (\(107\times 104\) pixels) for the two cases of P2LSG and Sobol. Table V reports the results. As reported, the non-parallel and the 4\(\times\) parallel designs of the P2LSG-based implementation save area by 64% and 55%, energy by 67% and 85%, and CPL by 13% and 22% compared to the non-parallel and 4\(\times\) parallel Sobol-based implementations, respectively.
Scene merging can likewise be expressed as an SC MUX operation, with the merging ratio applied to the select input [32]. Based on this analogy, the provided video in Fig. 6, which features a moving tiger, is merged with an African jungle background. Using the P2LSG sequence for data conversion and a 2-to-1 MUX as the SC circuit, the processed video achieves a PSNR of \(48.23\,dB\) and an SSIM of \(0.999\). The video has a duration of \(6.06\) seconds and was generated at a frame rate of 30 frames per second. We also achieved similar PSNR and SSIM values (\(48.13\,dB\) and \(0.9999\)) when using the Sobol-based SNG for data conversion. As the hardware results presented in Table V demonstrate, the non-parallel and 4\(\times\) parallel P2LSG designs provide 73% and 69% lower area, 72% and 90% lower energy consumption, and 14% and 23% lower runtime compared to the non-parallel and 4\(\times\) parallel Sobol-based designs.
## VI Conclusions
This study explores new design possibilities for SC by analyzing some well-known random sequences in the literature. As a promising random sequence for the SNG unit of SC systems, we evaluated the performance of the **P**owers-of-**2** **L**ow-discrepancy **S**equence **G**enerator (P2LSG) sequences. We proposed a lightweight hardware design for P2LSG sequence generation. The proposed generator provides higher hardware efficiency compared to the SOTA LD sequence generators. We evaluated the performance of the P2LSG-based SNGs in an image scaling and a scene-merging video processing case study. Our performance evaluation and hardware cost comparison show comparable or better results than the SOTA. Our findings open possibilities for incorporating the P2LSG sequences in other emerging paradigms that require orthogonal vectors, such as hyperdimensional computing. The P2LSG sequences can be utilized to address the computational needs and improve the performance of such paradigms. We leave studying this aspect for our future work.
|
2309.15740 | Data-Driven Latent Space Representation for Robust Bipedal Locomotion
Learning | This paper presents a novel framework for learning robust bipedal walking by
combining a data-driven state representation with a Reinforcement Learning (RL)
based locomotion policy. The framework utilizes an autoencoder to learn a
low-dimensional latent space that captures the complex dynamics of bipedal
locomotion from existing locomotion data. This reduced dimensional state
representation is then used as states for training a robust RL-based gait
policy, eliminating the need for heuristic state selections or the use of
template models for gait planning. The results demonstrate that the learned
latent variables are disentangled and directly correspond to different gaits or
speeds, such as moving forward, backward, or walking in place. Compared to
traditional template model-based approaches, our framework exhibits superior
performance and robustness in simulation. The trained policy effectively tracks
a wide range of walking speeds and demonstrates good generalization
capabilities to unseen scenarios. | Guillermo A. Castillo, Bowen Weng, Wei Zhang, Ayonga Hereid | 2023-09-27T15:51:18Z | http://arxiv.org/abs/2309.15740v1 | # Data-Driven Latent Space Representation for Robust Bipedal Locomotion Learning
###### Abstract
This paper presents a novel framework for learning robust bipedal walking by combining a data-driven state representation with a Reinforcement Learning (RL) based locomotion policy. The framework utilizes an autoencoder to learn a low-dimensional latent space that captures the complex dynamics of bipedal locomotion from existing locomotion data. This reduced dimensional state representation is then used as states for training a robust RL-based gait policy, eliminating the need for heuristic state selections or the use of template models for gait planning. The results demonstrate that the learned latent variables are disentangled and directly correspond to different gaits or speeds, such as moving forward, backward, or walking in place. Compared to traditional template model-based approaches, our framework exhibits superior performance and robustness in simulation. The trained policy effectively tracks a wide range of walking speeds and demonstrates good generalization capabilities to unseen scenarios.
## I Introduction
Bipedal robots have long been a subject of fascination and research within the field of robotics due to their potential for versatile and agile locomotion in complex environments, closely mimicking the morphology and capabilities of humans. However, despite significant advancements in control techniques and hardware, developing a robust and efficient bipedal locomotion control system remains a challenge, largely due to the high dimensionality, underactuation, and highly nonlinear and hybrid dynamics of bipedal locomotion.
Conventional methods for bipedal walking often involve solving optimization problems using the robot's full-order [1] or reduced-order model [2, 3] to find feasible trajectories that enable stable walking gaits. The full-order model captures all the complexities and details of the robot's dynamics, but it can be computationally demanding and not suitable for real-time control [4]. On the other hand, reduced-order template models (such as linear inverted pendulum (LIP) and its variants [2, 5, 6]) simplify the dynamics of the system, making it easier to plan trajectories for the robot's center of mass and end-effector. However, these reduced-order models often require strict constraints to account for the mismatch between the reduced and full-order states of the robot.
The evolution of modern control theory has seen an infusion of machine learning and RL techniques, particularly with the growing abundance and accessibility of data. These data-driven approaches offer novel ways to address challenges in control design, often sidestepping traditional, more rigid, and computationally expensive methods. There is growing interest in reinforcement learning based approaches that exploit data from simulation to train controllers in a model-free fashion [7, 8, 9, 10]. However, similar to model-based techniques, they are highly dependent on the quality of the state representation provided to the learning algorithm. As an alternative, more complex frameworks have been proposed to combine learning algorithms with model-based controllers. In [11], a hybrid zero dynamics (HZD) based approach is used to learn a policy that satisfies Control Barrier Functions (CBFs) defined on the reduced-order dynamics. In our previous work [12, 13], a cascade structure is implemented to compensate the learned trajectories with feedback regulators to increase the robustness of the walking gait. A hierarchical structure that combines a template-based RL policy with a model-based low-level controller is proposed in [14].
An effective state representation that can accurately capture the complex dynamics of the whole system can significantly enhance the learning process, enabling more efficient learning and better transferability of control strategies. Unsupervised learning, dimensionality reduction, and representation learning methods can be employed to extract relevant features from high-dimensional sensory data, enabling the development of more efficient and interpretable state representations [15, 16]. Dai et al. [17] learn the step-to-step residual dynamics via an adaptive control approach, which is then used to design a foot-stepping controller for a bipedal robot in simulation. More complex learned residual dynamic models are also combined with Model Predictive Control (MPC) for agile systems [18]. These techniques require previous knowledge of the dynamics of the system's model. Several works also have exploited latent representations through reinforcement learning for locomotion. Peng et al. [19] combine techniques from adversarial imitation learning and unsupervised reinforcement learning to develop skill embeddings that produce locomotion behaviors. Starke et al. [20] extract a multi-dimensional phase space from the full-order state motion data, which effectively clusters animations and produces a manifold with better temporal and spatial alignment. However, these are end-to-end frameworks, which makes it difficult to establish a relationship between the latent space and the control actions of the policy. Moreover, these approaches have been mostly used to control animated characters in simulation, and there are no studies focused on implementation for actual bipedal robots.
In this paper, we propose a novel data-driven framework
for bipedal walking that combines a learned low dimensional state representation of bipedal locomotion with a robust gait planner using RL, as depicted in Fig. 1. Our framework uses an autoencoder to extract an effective reduced-dimensional state representation of the full-order system dynamics. We then integrate this reduced-order latent space with reinforcement learning and a task space feedback controller to train robust locomotion policies. This paper makes two key contributions. First, we demonstrate that the complex dynamics of bipedal robots can be effectively captured using a low-dimensional learned latent space. This allows for a more compact representation of the system's behavior. Second, we show that the learned latent variables can be leveraged to design a robust locomotion policy using RL. By bridging the gap between state representation learning and learning-based control policies, this work enables leveraging existing locomotion data to develop more effective and adaptable frameworks for versatile and robust bipedal locomotion.
## II Preliminaries and Problem Formulation
### _Hybrid System Model of Bipedal Locomotion_
The bipedal locomotion problem can be characterized as a hybrid system determined by a collection of phases of continuous dynamics with discrete events between the transitions of the continuous phases. Formally, the hybrid system model for biped locomotion can be defined as [21]:
\[\Sigma:\left\{\begin{array}{ll}\dot{x}=f(x)+g(x)u+\omega(x,u)&x\in\mathcal{ X}\setminus\mathcal{H}\\ x^{+}=\Delta(x^{-})&x^{-}\in\mathcal{H},\end{array}\right. \tag{1}\]
where \(f(\cdot)\) and \(g(\cdot)\) are vector fields, \(x=(q,\dot{q})\in\mathcal{X}\subseteq\mathbb{R}^{n}\) represents the robot states with \(q\) being the vector of generalized coordinates, \(u\in\mathcal{U}\subseteq\mathbb{R}^{m}\) is a vector of actuator inputs, and \(\omega\in\Omega\subseteq\mathbb{R}^{w}\) captures external disturbances and model uncertainties. The reset map \(\Delta:\mathcal{H}\rightarrow\mathcal{X}\) denotes the mapping between the post-impact states \(x^{+}\) immediately after impacts and the pre-impact states \(x^{-}\) right before impacts, and \(\mathcal{H}\) is the switching surface corresponding to swing foot impacts.
For a typical humanoid robot with high degrees of freedom (DoF), the dimension of the robot states \(x\) is too large to be effectively used for feedback motion planning. Low-dimensional models have become a powerful tool for motion planning of bipedal locomotion, given their potential to characterize the dynamics of bipedal walking into simple linear or nonlinear models. However, existing reduced-order template models, such as the Linear Inverted Pendulum (LIP), make assumptions that limit the full capabilities of walking robots. These assumptions include a constant center of mass (CoM) height and zero angular momentum about the CoM during the walking gait. While previous research has explored alternative template models for bipedal locomotion, this study aims to investigate whether available locomotion data can be utilized to identify an effective reduced-dimensional state representation of bipedal locomotion for motion planning purposes.
### _Data-driven Low Dimensional Latent Space_
Autoencoders are a great tool to harness high-dimensional gait data to extract a reduced-dimensional latent
Fig. 1: An overview of the overall structure and flow of the proposed learning-based framework. In the pre-training phase, the autoencoder learns a latent space that captures the dynamics of the full-order system. During the RL training, the policy maps the latent representation to a set of task space actions that are translated into task space trajectories. Finally, a whole-body task space feedback controller computes the motor torque to track the desired task-space trajectories.
representation of the system that captures the essence of the full-order robot dynamics. An autoencoder works by compressing the input data into a low-dimensional latent space through a feature-extracting function in a specific parameterized closed form, such as a neural network [16]. This function, called the _encoder_, is determined by
\[z=h(x,\theta_{e}), \tag{2}\]
where \(z\) is the latent variable or representation encoded from the input \(x\), and \(\theta_{e}\) is the vector of parameters for the encoder neural network. Another closed-form parameterized function called the _decoder_, maps from the latent space back to the full-order state. This function is defined by
\[\hat{x}=d(z,\theta_{d}), \tag{3}\]
where \(z\) is the encoded latent variable, \(\hat{x}\) is the reconstruction of the original input data \(x\), and \(\theta_{d}\) is the vector of parameters for the decoder neural network.
The strength of autoencoders lies in their ability to preserve most of the crucial information from the original data in the latent representation, even though it is of much lower dimensionality. This is ensured by training the autoencoder to minimize the reconstruction loss \(\mathcal{L}\), which measures the error between the original data and its reconstruction from the latent space. In summary, autoencoder training consists of finding values of the parameter vectors \(\theta_{e}\) and \(\theta_{d}\) that minimize the reconstruction error:
\[\mathcal{J}_{\mathrm{AE}}(\theta_{e},\theta_{d})=\sum_{x^{(i)}\in\mathbf{X}} \mathcal{L}\left(x^{(i)},d\left(h\left(x^{(i)},\theta_{e}\right),\theta_{d} \right)\right), \tag{4}\]
where \(x^{(i)}\) is a training sample containing the vectors of position and velocity of the generalized coordinates, and \(\mathbf{X}\) is the training set containing all the data samples. \(h\) and \(d\) are the encoding and decoding functions introduced in (2) and (3). Once the autoencoder has been trained, the resulting latent representation can then be utilized as part of the state for the high-level planner policy, offering a compact yet expressive state space for learning and control.
## III Method
The proposed data-driven framework is shown in Fig. 1. The gait policy will be learned via RL by utilizing an effective latent representation of the full-order robot's dynamics to a set of task space commands that generate desired trajectories for the robot. Then, a whole body task space controller (TSC) from [14] is employed to accurately track the desired trajectories to realize stable locomotion.
The proposed framework is tested with the robot Digit, which is a 3D fully actuated bipedal robot with \(30\) DoF and \(20\) actuated joints built by the company Agility Robotics. Each leg has six actuated joints corresponding to the motors located on the robot's hip, knee, and ankle and three passive joints corresponding to the robot's tarsus, shin spring and heel spring joints. In addition, it has four actuated joints per arm corresponding to the shoulder and elbow joints. Since the spring joints are very stiff, we considered them as fixed joints in this work. Therefore, the vector of generalized coordinates for Digit is defined by
\[q=(p,q_{\phi},q_{j}), \tag{5}\]
where \(p=(p_{x},p_{y},p_{z})\) is the position of the robot's base, \(q_{\phi}=(q_{x},q_{y},q_{z},q_{w})\) is the quaternion representation of the robot's orientation, and \(q_{j}=(q_{1},\dots,q_{n_{j}})\) is the vector of the robot's joints with \(n_{j}=24\). Therefore, \(q\in SE(3)\times\mathbb{R}^{24}\subset\mathbb{R}^{31}\), \(\dot{q}\in\mathbb{R}^{30}\), and \(x\in\mathbb{R}^{61}\).
### _Reduced Dimensional Latent Space Representation_
To collect the locomotion dataset to train the autoencoder, we use the hierarchical controller proposed in [14]. The dataset is collected by performing walking gaits at various velocities. Specifically, the velocities \(v_{x}\) and \(v_{y}\) are varied within the ranges of \([-0.5,1.0]\) m/s and \([-0.2,0.2]\) m/s, respectively, with a step size of \(0.1\) m/s, resulting in a total of \(80\) different walking gaits. Each walking gait has a duration of \(10\) seconds, with the data being collected at a frequency of \(50\) Hz. Therefore, the complete locomotion dataset \(\mathbf{X}\) consists of \(40,000\) samples of the robot's full-order states, i.e., \(\mathbf{X}=\{x^{(i)}|i\in[1,40000]\}\).
_Remark:_ Note that any locomotion controller could be used to collect the gait data. For instance, many commercial robots are equipped with proprietary controllers that could be used to collect this data even though they are a black box from the user's perspective. Moreover, we do not make any assumptions about the distribution of the gait data.
In this work, we use an encoder parameterized by a fully connected neural network with three hidden layers of \(128\), \(64\), and \(32\) units, respectively, and ReLU activation functions. The input of the encoder is the robot's full order states. However, since the inputs of the encoder need to be bounded, the absolute base position of the robot cannot be directly used. This is because the absolute position can grow unbounded as the robot moves in the sagittal or frontal plane. To address this, the base position and orientation of the robot are transformed from the world frame to the stance foot frame, which ensures that the inputs to the encoder remain bounded and allows for effective learning of the latent representation of the robot's dynamics.
The autoencoder is trained using Adam optimizer [22] with a learning rate of \(0.001\) and a batch size \(B\) of \(128\). The reconstruction loss is computed with the mean squared error (MSE) between the original values of the gait dataset \(x^{(i)}\in\mathbf{X}\) and their reconstructed values \(\hat{x}^{(i)}\in\mathbf{\hat{X}}\), given as
\[\mathcal{L}=\frac{1}{B}\sum_{i=1}^{B}\left(x^{(i)}-\hat{x}^{(i)}\right)^{2}. \tag{6}\]
The autoencoder is trained for 400 episodes in a 12-core CPU machine with an NVIDIA RTX 2080 GPU. The training takes about 10 minutes using the described locomotion dataset \(\mathbf{X}\) and PyTorch.
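As a reference for the setup described above, the following PyTorch sketch reproduces the stated encoder (hidden layers of 128, 64, and 32 units with ReLU) and training configuration (Adam, learning rate 0.001, batch size 128, MSE loss); the mirrored decoder architecture is our assumption, since the text does not specify it:

```python
import torch
import torch.nn as nn

class GaitAutoencoder(nn.Module):
    def __init__(self, x_dim=61, z_dim=2):   # x in R^61 for Digit; N = 2 latent states
        super().__init__()
        self.encoder = nn.Sequential(         # h(x, theta_e), Eq. (2)
            nn.Linear(x_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, z_dim))
        self.decoder = nn.Sequential(         # d(z, theta_d), Eq. (3); mirrored (assumed)
            nn.Linear(z_dim, 32), nn.ReLU(),
            nn.Linear(32, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, x_dim))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = GaitAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = nn.MSELoss()                            # batch-averaged loss of Eq. (6)

def train_step(x_batch):                      # x_batch: (128, 61) tensor of states
    x_hat, _ = model(x_batch)
    loss = mse(x_hat, x_batch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```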
The selection of the dimension of the latent variable \(z\) involves a trade-off. On one hand, a smaller dimension is desirable as it reduces the number of inputs for the RL policy. However, the dimension should also be large enough
for the autoencoder to accurately reconstruct the full-order state of the system. In our study, we investigated this trade-off and discovered that even for a complex system like a humanoid robot, a latent variable dimension of \(N=2\) is sufficient to capture the dynamics of the full-order systems effectively. This finding is supported by the results shown in Fig. 2, where increasing the dimension of the latent variable does not lead to significant improvements in reconstruction quality. Even in cases where performance is slightly degraded, the reconstructed variable still follows the same pattern as the original state. Interestingly, our findings align with those reported in [20], where the authors demonstrated that periodic movements in bipedal locomotion can be represented using fewer than five phase variables. This similarity may be attributed to the symmetric and periodic nature of bipedal walking.
### _RL-based Gait Policy using Learned Latent Variables_
Given the reduced dimensional latent space, we train an RL policy for robust locomotion based on our previous work in [14]. The states of the policy are defined as
\[s=(z,e_{\bar{v}},v^{d},a_{k-1}), \tag{7}\]
where \(z=(z_{1},\ldots,z_{N})\in\mathbb{R}^{N}\) is the encoded latent state, \(e_{\bar{v}}=(e_{\bar{v}_{x}},e_{\bar{v}_{y}})\) is the error between the average velocity, \(\bar{v}=(\bar{v}_{x},\bar{v}_{y})\), and the desired velocity, \(v^{d}=(v^{d}_{x},v^{d}_{y})\), of the robot, and \(a_{k-1}\) is the last action of the planner policy.
The action space in this work is designed to exploit the natural nonlinear dynamics of the biped robot and enhance the robustness of the policy under various challenging scenarios. Since Digit is a fully actuated system during the single support phase, we include the instantaneous base velocity as part of the action \(a\). We choose to control the instantaneous velocity rather than the base position because, in practical hardware implementations, controlling the position during the stance phase through the TSC is more challenging than controlling velocities. This is caused by the noisy and inaccurate estimation of the position and the limited torque available at the robot's ankles. Moreover, sudden motions of the foot due to terrain irregularities could produce large base-position tracking errors that would result in aggressive control maneuvers. Controlling the instantaneous velocity is therefore a less aggressive strategy for the TSC and provides some damping to the ankle motion, which helps stabilize the walking gait on irregular terrain. Thus, the action \(a\in\mathcal{A}\) of the policy for Digit is chosen to be:
\[a=(p^{x}_{\text{sw},T},p^{y}_{\text{sw},T},v^{d}_{x},v^{d}_{y}), \tag{8}\]
which corresponds to the landing position of the swing foot in \(x\) and \(y\) coordinates with respect to the robot's base and an offset to the instantaneous velocity of the robot's base in \(x\) and \(y\) coordinates, as illustrated in Fig. 3. The trajectory generation module transforms the policy action \(a\) into smooth task-space trajectories for the robot's base and end-effectors. Specifically, the trajectory for the relative swing foot positions, \(p^{x}_{\text{sw}}\) and \(p^{y}_{\text{sw}}\), are generated using a minimum jerk trajectory connecting initial foot positions with target foot positions from the policy action. In particular, the initial foot positions will be computed at every touchdown event and kept constant throughout the current step.
The neural network (NN) chosen to parameterize the gait policy is a feed-forward network with two hidden layers, each with \(128\) units. The hidden layers use the ReLU activation function, and the output layer is bounded by the \(\mathrm{Tanh}\) activation function and a scaling factor to constrain the maximum value of the policy commands within the feasible physical limits of the robot hardware. Finally, we implemented a model-based TSC following the structure described in our previous work [14] to track the task-space trajectories.
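A minimal PyTorch sketch of this policy parameterization is given below; the state dimension follows Eq. (7) with \(N=2\), while the output scaling values are placeholders standing in for the robot's actual physical limits, which are not listed in the text:

```python
import torch
import torch.nn as nn

class GaitPolicy(nn.Module):
    """Feed-forward gait policy: two 128-unit ReLU hidden layers and a
    Tanh-bounded output rescaled to the hardware limits."""
    def __init__(self, z_dim=2, act_dim=4):
        super().__init__()
        s_dim = z_dim + 2 + 2 + act_dim        # s = (z, e_vbar, v_d, a_{k-1}), Eq. (7)
        self.net = nn.Sequential(
            nn.Linear(s_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, act_dim), nn.Tanh())
        # bounds for (p_sw_x, p_sw_y, v_x offset, v_y offset): illustrative values only
        self.register_buffer("scale", torch.tensor([0.4, 0.3, 0.3, 0.2]))

    def forward(self, s):
        return self.scale * self.net(s)        # bounded action a of Eq. (8)
```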
Fig. 3: Visualization of the policy actions and the trajectories generated for the task space controller. Additionally, we keep the torso straight up and the base at a constant height.
Fig. 2: Reconstruction of the robotβs state with different dimensions \(N\) for the latent variable \(z\). The plots show the position (left column) and velocity (right column) for the robotβs base \(x\) coordinate (top), robotβs base \(y\) coordinate (middle), and left knee joint (bottom), respectively.
### _Learning Procedure of the RL Gait Policy_
To train the RL policy, we use the Proximal Policy Optimization algorithm [23] with input normalization, fixed covariance, and parallel experience collection. We follow the algorithm implementation described in [8].
For each episode during the RL training, the initial state of the robot is set randomly from a normal distribution about an initial pose corresponding to the robot standing in the double support phase. An episode will be terminated early if the torso pitch and roll angles exceed \(1\) rad or the base height falls below \(0.8\) m. The reward function adopted in this work is designed to (1) keep track of desired velocities, (2) minimize the angular momentum around the center of mass, denoted as \(L_{\text{CoM}}\), and (3) reduce the variation of the policy actions between each iteration. More specifically,
\[\mathbf{r}=\mathbf{w}^{T}[r_{v},r_{L_{\text{CoM}}},r_{a}]^{T}, \tag{9}\]
with
\[r_{v}=\exp{(-\left\|\bar{v}-v^{d}\right\|^{2})}, \tag{10}\] \[r_{L_{\text{CoM}}}=\exp{(-\left\|L_{\text{CoM}}\right\|^{2})}, \tag{11}\] \[r_{a}=\exp{(-\left\|a_{k}-a_{k-1}\right\|^{2})}, \tag{12}\]
and the weights are chosen as \(\mathbf{w}^{T}=[0.6,0.3,0.1]\).
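The reward terms translate directly into code; a short sketch assuming NumPy arrays for the velocity, angular momentum, and action vectors:

```python
import numpy as np

W = np.array([0.6, 0.3, 0.1])                  # weights w of Eq. (9)

def reward(v_avg, v_des, L_com, a_k, a_prev):
    r_v = np.exp(-np.linalg.norm(v_avg - v_des) ** 2)   # velocity tracking, Eq. (10)
    r_L = np.exp(-np.linalg.norm(L_com) ** 2)           # CoM angular momentum, Eq. (11)
    r_a = np.exp(-np.linalg.norm(a_k - a_prev) ** 2)    # action smoothness, Eq. (12)
    return W @ np.array([r_v, r_L, r_a])
```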
One iteration step of the policy corresponds to the interaction of the learning agent with the environment. The RL policy takes the reduced order state \(s\) and computes an action \(a\) that is converted in desired task-space trajectories \(y^{d}\) at the time \(t_{k}\). The reference trajectories are then sent to the task-space controller, which sends torque commands to the robot. This workflow is depicted in Fig. 1. The feedback control loop runs at a frequency of \(1\) kHz, while the high-level planner policy runs at \(50\) Hz. The maximum length of each episode is \(600\) iteration steps, which corresponds to \(12\) seconds of simulated time.
## IV Simulation results
In this section, we show the performance of the learned planner policy under different testing scenarios with the bipedal robot Digit. Moreover, we analyze the latent representation generated during the testing of the RL policy.
### _Latent State Representation_
To visualize the learned latent manifolds, we applied principal component analysis (PCA) to reduce the dimensionality of the latent space to 2D. Fig. 4 shows the 2D representation of the latent space for dimensions \(N=2\), \(N=4\), and \(N=8\). The data used for visualization correspond to the robot Digit walking with the learned RL policy within the range of \(v_{x}^{d}\in[-0.75,1.4]\) m/s. In all three cases, the latent space exhibits a well-distributed and disentangled representation of the data, with each walking speed corresponding to a specific area in the 2D plane. For comparison, Fig. 4 also includes the 2D PCA visualization of the full-order state of the robot. It is evident that data points for different speeds overlap with no clear structure. This highlights that the latent space, even with a very low dimensionality \(N\), effectively captures the distribution of the data in the full-order system. This demonstrates the potential of exploiting the latent representation for control purposes.
### _Tracking Performance of Different Velocity Profiles_
We evaluated the performance of the learned gait policies with latent space dimensions \(N=2\), \(N=4\), and \(N=8\) in tracking a velocity profile in different directions using the Digit robot. As shown in Fig. 5, the policy successfully tracks walking speeds in the range \(v_{x}^{d}\in[-0.75,1.4]\) m/s, even with aggressive changes in the velocity profile. Importantly, the data collected to train the autoencoder was within the range of \(v_{x}^{d}\in[-0.5,1.0]\) m/s. These results show the effectiveness of the learned latent states in controlling the walking velocity even with a low dimension \(N\). Furthermore, the results demonstrate that the latent representation captures the dynamic nature of the walking gait and generalizes to scenarios outside the training distribution. Additional tests on this aspect are presented in Sec. IV-C.
Fig. 4: Two-dimensional principal component analysis (PCA) of the learned latent manifolds at different walking speeds.
Fig. 5: Comparison of velocity tracking performance of the learned policy with different dimensions of the latent space \(N\) with a model-based ALIP planner described in [5].
For comparison, Fig. 5 also shows the speed-tracking performance of a model-based ALIP planner based on the design in [5]. Our policy outperforms the ALIP-based planner in terms of velocity tracking and offers a wider range of admissible walking speeds. The ALIP planner fails to maintain a stable walking gait for speeds higher than 1.2 m/s. In addition, Fig. 6 illustrates the correspondence between the walking speeds and control actions of the RL policies with different values of \(N\) and the ALIP planner. Specifically, we focus on the swing foot landing position with respect to the robot's base, \(p^{x}_{\text{sw},T}\), when the robot is walking at \(0.7\) and \(1.0\) m/s. Interestingly, the actions of the RL policy exhibit similar patterns across different latent space dimensions \(N\). This finding aligns with the results presented in Sec. IV-A, where we showed that even a small \(N\) is sufficient to fully characterize the dynamics of the full-order system. When provided with a larger latent space dimension, the policy learns to disregard less important states in the latent space, resulting in similar policy actions for different \(N\) values.
Furthermore, the actions of the latent space-based policies exhibit similar patterns to those of the ALIP planner. This is intriguing because the latent space used by the RL policy does not have a direct physical interpretation. Nevertheless, the RL policy learns to behave in a manner similar to the template model-based planner. These behaviors are not enforced during training, suggesting that the latent space naturally captures the dynamics of both the template model and the full-order system.
### _Policy Generalization to Out-of-Distribution Scenarios_
In addition to evaluating the policy's performance on data it was trained on, we also assess its ability to handle out-of-distribution scenarios. To do this, we conduct tests where the policy is instructed to maintain a constant speed while the height of the base is varied. It is important to note that the training of the autoencoder and the RL policy did not include any data with varying base heights, as the locomotion data used for training latent spaces was collected with a fixed base height of 1 m. However, as depicted in Fig. 7, the policy demonstrates successful tracking of the desired walking speed regardless of the different base heights commanded. An interesting future work direction may be the exploration of the robust generalization of the latent space to generate transitions between different locomotion tasks such as walking, jogging, sitting, and jumping.
Furthermore, we conducted tests to evaluate the policy's robustness against external disturbances in the forward and backward directions. The disturbances ranged from \(-100\) N to \(60\) N, with durations ranging from \(0.1\) s to \(1.5\) s. As illustrated in Fig. 8, the policy demonstrated effective reactions to the different disturbances, successfully maintaining stability and tracking the desired walking speeds without falling.
## V Conclusion
In this work, we present a novel data-driven learning framework to realize robust bipedal locomotion. The design of the high-level RL gait policy takes data-driven reduced dimensional latent variables as input states and generates a set of task space commands, including the robot's step length with respect to the base and instantaneous velocity offset of the robot's base. The latent representation of the full-order state is obtained using an autoencoder trained with supervised learning from locomotion data collected with existing locomotion controllers. Our work shows that the learned latent representation manifold has a disentangled structure that is directly correlated with the speed of the walking robot. The insightful choice of the RL state and action spaces results in a compact policy that learns effective strategies for robust and dynamic locomotion in simulation. Future work will focus on implementing and validating the proposed framework on Digit, further demonstrating its effectiveness and adaptability for real-world bipedal locomotion.
Fig. 8: Robustness test to external disturbances.
Fig. 6: Comparison of the desired swing foot landing positions (the policy action) from different latent space dimensions \(N\) and the ALIP planner.
Fig. 7: Generalization of the latent space and trained policy to out-of-distribution data. |
2301.00209 | Blazar boosted Dark Matter -- direct detection constraints on
$\sigma_{e\chi}$ : Role of energy dependent cross sections | Elastic collisions with relativistic electrons from the blazar's jet can
accelerate dark matter (DM) particles in the DM spike surrounding the
supermassive black hole at its center. This can allow one to set stringent
limits on the DM-electron scattering cross section ($\bar{\sigma}_{e\chi}$) for
DM masses less than 100 MeV. We consider DM particles boosted by energetic
electrons in the jets of the blazars TXS 0506+056 and BL Lacertae. Both vector
and scalar mediators for the scattering of electron and electrophilic fermionic
DM are studied. We highlight that the ensuing energy dependency of the S-matrix
for the corresponding Lorentz structure of the vertex significantly modifies
the constraints. We find that the revised exclusion limits are orders of
magnitude stronger than the equivalent results for the simple constant cross
section assumption. Our limits are also assessed for the less cuspy spike. | Supritha Bhowmick, Diptimoy Ghosh, Divya Sachdeva | 2022-12-31T14:37:54Z | http://arxiv.org/abs/2301.00209v4 | # Blazar boosted Dark Matter - direct detection constraints on \(\sigma_{e\chi}\) : Role of energy dependent cross sections
###### Abstract
Elastic collisions with relativistic electrons from the blazar's jet can accelerate dark matter (DM) particles in the DM spike surrounding the supermassive black hole at its center. This can allow one to set stringent limits on the DM-electron scattering cross section (\(\bar{\sigma}_{e\chi}\)) for DM masses less than 100 MeV. We consider DM particles boosted by energetic electrons in the jets of the blazars TXS 0506+056 and BL Lacertae. Both vector and scalar mediators for the scattering of electron and electrophilic fermionic DM are studied. We highlight that the ensuing energy dependency of the S-matrix for the corresponding Lorentz structure of the vertex significantly modifies the constraints. We find that the revised exclusion limits are orders of magnitude stronger than the equivalent results for the simple constant cross section assumption. Our limits are also assessed for the less cuspy spike.
## I Introduction
The Cold Dark Matter (CDM) paradigm provides a compelling explanation for a broad range of observations, including rotation curves in spiral galaxies, gravitational microlensing, cluster collisions (the Bullet Cluster), and temperature anisotropy in the spectrum of cosmic microwave background radiation. To that end, a variety of particle physics models predict a feeble interaction between SM and DM, which can be investigated using direct detection (DD) experiments. The DD experiments identify the nuclear or electronic recoils produced by the scattering between DM and the detector's (target) nuclei or electrons. The average velocity of DM particles in the solar vicinity, however, restricts the amount of energy that may be deposited in a detector. For example, detectors like Xenon1T can detect DM mass \(m_{\chi}\sim\mathcal{O}(1\ \mathrm{MeV})\), corresponding to electronic recoil of \(\sim\mathcal{O}(1\ \mathrm{keV})\). The neutrino detectors like Super-K are sensitive to a recoil energy threshold of \(\sim\mathcal{O}(1\ \mathrm{MeV})\), leading to the smallest accessible DM mass of \(\mathcal{O}(1\ \mathrm{GeV})\)1. Thus, these detectors appear to have a limited range for detecting lighter DM particles. Since these searches have so far found no sign of DM, it is critical to develop methods for probing the sub-GeV/MeV mass range.
Footnote 1: Fermionic DM absorption models [1; 2; 3] allow Xenon1T and Super-K to probe masses down to \(\sim\mathcal{O}(10\ \mathrm{keV})\) and \(\sim\mathcal{O}(1\ \mathrm{MeV})\), respectively.
The reach of these experiments has been extended to DM masses well below \(1\ \mathrm{GeV}\) in recent years, thanks to the novel idea of boosting the halo DM through its interaction with the SM particles via cosmic rays [4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21], primordial black holes [22; 23], diffuse Supernova Neutrino Background (DSNB) [11; 12; 13; 14; 15], and blazars [24; 25]. Despite the fact that the boosted DM flux is substantial and DM particles are (semi)relativistic, the sensitivity is achieved at larger cross sections because the up-scattered subcomponent flux is significantly lower than the galactic DM population.
In this paper, we consider the blazar boosted DM, which was proposed in Ref. [24; 25] in the context of fermionic DM. The presence of a supermassive black hole (BH) at the blazar center provides a dense DM population, which compensates for the blazar's large distance from Earth by producing a DM flux that is stronger than that from galactic CRs. The existing literature, however, assumes that DM interaction cross sections are independent of DM energy. Although this simple assumption makes calculations easier, it is not physically realistic. It would also be an incorrect approximation to make, notably in scenarios where a significant DM flux becomes relativistic after being scattered by energetic particles. Some of the previously mentioned works [8; 17; 18; 19; 26] on cosmic ray boosted DM have already found that the limits for an energy-dependent cross section differ by orders of magnitude from those obtained under the assumption of a constant cross section.
The notion of an energy-dependent scattering cross-section is thus primarily investigated in the present work by taking into account electrophilic fermionic DM that has been boosted by energetic electrons from blazars. To constrain the scattering cross-section, we use electron recoil measurements in Super-Kamiokande. This work is organized as follows. We discuss the spectrum of energetic particles in the blazar jets in Section II, and describe DM density profiles in Section III. In Section IV, we estimate the Blazar boosted DM (BBDM) flux and compute the event rate. In Section V, we present simplified DM models. We present the main results of our paper in Section VI, i.e., the energy dependent exclusion bound from BBDM-electron scattering, and in Section VII, we summarize and conclude.
## II Blazar Jet Spectrum
Blazars are characterized by a non-thermal spectral energy distribution (SED). This spectrum has a low energy peak in the infra-red or X-ray region, which is attributed to synchrotron emission of electrons in the jet. Another peak at \(\gamma\)-ray frequencies could be due to highly relativistic protons [27; 28; 29; 30; 31], as motivated by the recent IceCube detection [32; 33; 34] of a high energy neutrino from the TXS 0506+056 blazar. Since the DM considered in this work is electrophilic, at tree level it can only interact with electrons. Therefore, we are only concerned with the blazar jets' electron spectrum.
We follow the procedure laid out in Ref. [24; 25] to compute the spectrum of the energetic electrons in the blazar jets, assuming the "blob geometry" model [35]. In this model, the energetic particles in the blazar jets move isotropically in a "blob" frame, as the blob traverses outwards along the jet axis. The Lorentz boost factor of the blob is given by \(\Gamma_{B}=(1-\beta_{B}^{2})^{-1/2}\), where \(\beta_{B}\) is the blob's propagation speed. The inclination of the jet axis with respect to the line of sight (LOS) is taken to be \(\theta_{\rm LOS}\).
In the blob frame, the energetic electrons follow a power law distribution with a high and a low energy cutoff (\(\gamma^{\prime}_{\max,e}\) and \(\gamma^{\prime}_{\min,e}\) respectively). This spectrum can then be frame transformed to the observer's rest frame (for details of the derivation, see [24]), given by :
\[\frac{d\Gamma_{e}}{dT_{e}d\Omega}=\frac{c_{e}}{4\pi}\Gamma_{B}^{- \alpha_{e}}\left(1+\frac{T_{e}}{m_{e}}\right)^{-\alpha_{e}}\] \[\times\frac{\beta_{e}(1-\beta_{e}\beta_{B}\mu)^{-\alpha_{e}}}{ \sqrt{\left(1-\beta_{e}\beta_{B}\mu\right)^{2}-(1-\beta_{e}^{2})\left(1-\beta_ {B}^{2}\right)}} \tag{1}\]
where \(m_{e}\) and \(T_{e}\) are the mass and kinetic energy of the electron, respectively. The speed of the electrons is given by \(\beta_{e}=\left(1-m_{e}^{2}/(T_{e}+m_{e})^{2}\right)^{1/2}\). The Doppler factor for the blob frame is \(\mathcal{D}=\left(\Gamma_{B}(1-\beta_{B}\cos\theta_{\rm LOS})\right)^{-1}\). \(\alpha_{e}\) is the power index of the electron spectrum in the blob frame. \(\mu\) is the cosine of the angle between the direction of the electron's motion and the jet axis. It is related to the scattering angle in the blob frame (\(\bar{\mu}_{s}\)) by [24; 4]:
\[\mu(\bar{\mu}_{s},\phi_{s})=\bar{\mu}_{s}\cos\theta_{\rm LOS}+\sin\phi_{s}\sin \theta_{\rm LOS}\sqrt{1-\bar{\mu}_{s}^{2}} \tag{2}\]
where \(\phi_{s}\) is the azimuth with respect to the LOS. \(\bar{\mu}_{s}\) is related to the kinetic energy of the blazar jet electron and the kinetic energy (\(T_{\chi}\)) transferred to the DM, as follows :
\[\bar{\mu}_{s}(T_{e},T_{\chi})=\left[1+\frac{T_{\chi}^{\max}-T_{\chi}}{T_{\chi}} \frac{(m_{e}+m_{\chi})^{2}+2m_{\chi}T_{e}}{(T_{e}+m_{e}+m_{\chi})^{2}}\right]^ {-1/2} \tag{3}\]
Now, \(c_{e}\) is a normalisation constant which is determined from the blazar jet electron luminosity (\(L_{e}\)), where the latter depends on \(c_{e}\) as [24; 36] :
\[L_{e}=c_{e}m_{e}^{2}\Gamma_{B}^{2}\int_{\gamma^{\prime}_{\min,e}}^{\gamma^{ \prime}_{\max,e}}\left(\gamma^{\prime}_{e}\right)^{1-\alpha_{e}}d\gamma^{ \prime}_{e}\;, \tag{4}\]
and thus \(c_{e}\) is simply given by :
\[c_{e}=\frac{L_{e}}{m_{e}^{2}\Gamma_{B}^{2}}\times\begin{cases} \left(2-\alpha_{e}\right)/\left[\left(\gamma^{\prime}_{\max,e}\right)^{2- \alpha_{e}}-\left(\gamma^{\prime}_{\min,e}\right)^{2-\alpha_{e}}\right]&\text{ if }\alpha_{e}\neq 2\;;\\ \\ 1/\log\left(\gamma^{\prime}_{\max,e}/\gamma^{\prime}_{\min,e}\right)&\text{ if } \alpha_{e}=2.\end{cases} \tag{5}\]
The parameters \(\gamma^{\prime}_{\min,e}\), \(\gamma^{\prime}_{\max,e}\), \(\alpha_{e}\), \(L_{e}\) and \(\mathcal{D}\) are fitted to the SED of a blazar. The Doppler factor is assumed to be either \(2\Gamma_{B}\) or \(\Gamma_{B}\). These two cases correspond to TXS 0506+056 (\(\theta_{\rm LOS}=0\)) and BL Lacertae (\(\theta_{\rm LOS}\sim 3.82^{\circ}\)). All the parameters required to find the blazar jet spectrum of TXS 0506+056 and BL Lacertae, along with the blazar redshift and luminosity distance (\(d_{L}\)), are mentioned in Table 1. The electron spectrum is plotted in Fig. 1.
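Numerically, Eqns. (1) and (5) amount to a few lines of code. The sketch below (ours, with energies in GeV and \(L_{e}\) assumed pre-converted to units consistent with \(m_{e}^{2}\)) evaluates the observer-frame electron spectrum for a given direction cosine \(\mu\):

```python
import numpy as np

M_E = 0.511e-3  # electron mass in GeV

def c_e(L_e, Gamma_B, g_min, g_max, alpha_e):
    """Normalisation constant of Eq. (5)."""
    if alpha_e != 2:
        norm = (2 - alpha_e) / (g_max ** (2 - alpha_e) - g_min ** (2 - alpha_e))
    else:
        norm = 1.0 / np.log(g_max / g_min)
    return L_e / (M_E ** 2 * Gamma_B ** 2) * norm

def electron_spectrum(T_e, mu, ce, Gamma_B, alpha_e):
    """Observer-frame electron flux d(Gamma_e)/(dT_e dOmega) of Eq. (1)."""
    beta_B = np.sqrt(1.0 - 1.0 / Gamma_B ** 2)   # from Gamma_B = (1 - beta_B^2)^(-1/2)
    gamma_e = 1.0 + T_e / M_E
    beta_e = np.sqrt(1.0 - 1.0 / gamma_e ** 2)
    x = 1.0 - beta_e * beta_B * mu
    denom = np.sqrt(x ** 2 - (1.0 - beta_e ** 2) * (1.0 - beta_B ** 2))
    return (ce / (4 * np.pi)) * Gamma_B ** (-alpha_e) * gamma_e ** (-alpha_e) \
        * beta_e * x ** (-alpha_e) / denom
```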
## III DM density profile
N-body simulations and observations are not sensitive at subparsec scales, thus the DM distribution near the galactic center is not well known. The central supermassive black hole (SMBH) can have a considerable impact on the DM density if the SMBH grows adiabatically, i.e., on a timescale much longer than its dynamical timescale. The DM density in a region corresponding to the sphere of gravitational influence of the black hole (BH) is expected to be significantly enhanced [38]. This results in a morphological feature known as a DM spike, which corresponds to a DM profile with a power law scaling \(\rho(r)\propto r^{-\gamma_{\rm sp}}\)[38]. Here \(\gamma_{\rm sp}=\frac{9-2\gamma}{4-\gamma}\) commonly ranges from 2.25 to 2.5, depending on the slope of the initial DM halo distribution, \(\gamma\). In this work, we assume the initial central DM profile is Navarro-Frenk-White, \(\gamma=1\). Also, for DM annihilating with cross-section \(\langle\sigma v\rangle_{\rm ann.}\), the innermost region of the DM spike is
depleted because DM particles annihilate efficiently on account of the high DM density, leading to the "annihilation plateau" density given by
\[\rho_{\rm sat}=\frac{m_{\chi}}{\langle\sigma v\rangle_{\rm ann.}t_{\rm BH}}, \tag{6}\]
where \(t_{\rm BH}\sim\,10^{9}\,\)yrs is the age of the BH. The DM density profile in such a spike is given by
\[\rho(r)=\left\{\begin{array}{ll}0&r<4R_{S}\\ \rho_{\rm sat}&4R_{S}\leq r<R_{\rm sat}\\ {\cal N}_{1}r^{-\gamma_{\rm sp}}&R_{\rm sat}\leq r<R_{\rm sp}\\ {\cal N}_{2}r^{-\gamma}&r\geq R_{\rm sp}\end{array}\right., \tag{7}\]
where \(R_{S}=2GM/c^{2}\) is the Schwarzschild radius of the BH, \(R_{\rm sp}=10^{5}\,R_{s}\) is the radius of the spike [36], and \(\rho(r)\) goes to zero in the region \(r<4R_{s}\) due to DM particles being captured by the SMBH. The saturation density \(\rho_{\rm sat}\) and the saturation radius \(R_{\rm sat}\) are related by the equality \(\rho(R_{\rm sat})=\rho_{\rm sat}\). The normalization \({\cal N}_{1}\) of \(\rho\) is determined by observing that the mass of the spike is of the same order as \(M_{\rm BH}\) within the spike radius [39], and \({\cal N}_{2}\) is determined by imposing continuity of the profile at \(R_{\rm sp}\).
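The piecewise profile of Eqn. (7) can be sketched as follows; the normalizations \({\cal N}_{1}\), \({\cal N}_{2}\) and the plateau density \(\rho_{\rm sat}\) are treated as external inputs here, since they are fixed by the spike-mass, continuity, and annihilation conditions described above:

```python
def rho_dm(r, N1, N2, rho_sat, R_S, gamma_sp=7/3, gamma=1.0):
    """DM spike density of Eq. (7); R_sat follows from rho(R_sat) = rho_sat."""
    R_sp = 1e5 * R_S
    R_sat = (N1 / rho_sat) ** (1.0 / gamma_sp)
    if r < 4 * R_S:
        return 0.0                      # particles captured by the SMBH
    if r < R_sat:
        return rho_sat                  # annihilation plateau, Eq. (6)
    if r < R_sp:
        return N1 * r ** (-gamma_sp)    # spike
    return N2 * r ** (-gamma)           # outer halo
```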
For a pre-existent DM halo with \(\gamma=1\), the final DM profile near the BH corresponds to \(\gamma_{\rm sp}=7/3\). A more realistic model was obtained in Ref. [40], where the time evolution of the dark matter distribution was investigated on sub-parsec scales. This implied a softening of the DM density spike, due to the scattering of DM by stars and the capture of DM particles by the SMBH, dampening it to \(\gamma_{\rm sp}=3/2\). Thus, in this work, we consider both the DM profile parameters \(\gamma_{\rm sp}=7/3\) and \(\gamma_{\rm sp}=3/2\) along with two extreme values of \(\langle\sigma v\rangle_{\rm ann.}\). We define these as
\[{\rm Profile\ 1}:\ \rho(R_{\rm sat}\leq r<R_{\rm sp})={\cal N}_{1}r^{-7/3}\] \[{\rm Profile\ 2}:\ \rho(R_{\rm sat}\leq r<R_{\rm sp})={\cal N}_{1}r^{-3/2}.\]
For each of these profiles, the two benchmark points (BMPs) are defined as:
* BMP 1: No DM annihilation, i.e., \(\langle\sigma v\rangle_{\rm ann.}|_{\rm tot}=0\). Here, we assume that the DM annihilation is forbidden by some symmetry.
* BMP 2: \(\langle\sigma v\rangle_{\rm ann.}|_{\rm tot}=3\times 10^{-26}\) cm\({}^{3}\)s\({}^{-1}\), thermal relic cross-section.
Another quantity relevant to the computation of the BBDM flux is the line of sight (LOS) integral of DM density around the blazar. This provides a measure of the number of DM particles being boosted by the blazar. At a certain distance \(r\) from the blazar, it is defined as
\[\Sigma_{\rm LOS}(r)=\int_{r_{\rm min}}^{r}\rho(r^{\prime})dr^{\prime} \tag{8}\]
where \(r_{\rm min}\) is the distance from the SMBH from where the blazar jet starts. To get a measure of all boosted DM particles, we want the LOS integral at large distances (\(r>>10^{5}R_{S}\)), and we define \(\Sigma_{\rm LOS}^{\rm tot}=\Sigma_{\rm LOS}(r>>10^{5}R_{S})\).
In this work, we will study the BBDM flux from TXS 0506+056 and BL Lacertae, and for these blazars, \(r_{\rm min}\) lies within \(100R_{S}\) [37; 28; 36]. We take \(r_{\rm min}=4R_{S}\), noting that \(\Sigma_{\rm LOS}^{\rm tot}\) is independent of the choice of \(r_{\rm min}\) for models which allow for DM pair annihilation. For no DM annihilation, \(\Sigma_{\rm LOS}^{\rm tot}\) decreases by an order of magnitude when \(r_{\rm min}\) is changed to \(100R_{S}\).
The DM density and L.O.S. integral profiles are plotted in Fig. 2 for the two benchmark points. For both DM density profiles, BMP1 yields a larger spike and a larger LOS integral (\(\Sigma_{\rm LOS}^{\rm tot}\)). This results in a larger BBDM flux and consequently a stronger exclusion bound on the DM-electron interaction cross section. Hence, one can expect models with no DM annihilation to yield better bounds. Moreover, even though the LOS integral for the idealistic spike (Profile 1) is very large and thus results in substantial BBDM flux, the more realistic profile (Profile 2) with a softer spike would lead to a much smaller LOS integral, and hence a much weaker exclusion bound.
Figure 1: The electron spectrum in the observerβs frame is plotted above, for the blazars TXS 0506+056 (solid lines) and BL Lacertae (dashed lines). The spectrum is shown for two different polar angles : \(\theta=0^{\circ}\) (in red), \(\theta=10^{\circ}\) (purple). For larger kinetic energies (\(T_{e}\gtrsim 10\) GeV), the electron flux from TXS 0506+056 blazar exceeds the flux from BL Lacertae.
\begin{table}
\begin{tabular}{c c c} \hline Parameter & TXS 0506+056 & BL Lacertae \\ \hline \hline Redshift & 0.337 & 0.069 \\ \(d_{L}\) & 1835.4 Mpc & 322.7 Mpc \\ \(M_{\rm BH}\) & \(3.09\times 10^{8}\)\(M_{\odot}\) & \(8.65\times 10^{7}\)\(M_{\odot}\) \\ \(\Gamma_{B}\) & 20 & 15 \\ \(\theta_{\rm LOS}\) & \(0^{\circ}\) & \(3.82^{\circ}\) \\ \(\alpha_{e}\) & 2 & 3.5 \\ \(\left(\gamma_{\rm min,e}^{\prime},\gamma_{\rm max,e}^{\prime}\right)\) & \((500,1.3\times 10^{4})\) & \((700,1.5\times 10^{4})\) \\ \(L_{e}\) (erg/s) & \(1.32\times 10^{44}\) & \(8.7\times 10^{42}\) \\ \end{tabular}
\end{table}
Table 1: Model parameters for TXS 0506+056 [28] and BL Lacertae blazars [37].
## IV Blazar boosted dark matter flux and event rate
DM particles are boosted via elastic collisions with the relativistic electrons in the blazar jet. The DM differential flux resulting from collisions with the electrons is obtained as follows:
\[\frac{d\phi_{\chi}}{dT_{\chi}}=\frac{\Sigma_{\rm LOS}^{\rm tot}}{2\pi m_{\chi}d_{L}^{2}}\int_{0}^{2\pi}d\phi_{s}\int_{T_{e}^{\rm min}(T_{\chi},\phi_{s})}^{T_{e}^{\rm max}(T_{\chi},\phi_{s})}dT_{e}\,\frac{d\sigma_{\chi e}}{dT_{\chi}}\frac{d\Gamma_{e}}{dT_{e}d\Omega}\,, \tag{9}\]
where \(\sigma_{\chi e}\) is the DM-electron interaction cross section. The integration over \(\phi_{s}\) becomes trivial in case of TXS 0506+056, where the system is symmetric about LOS, and we can simply set \(\mu=\bar{\mu}_{s}\) (from Eqn. (2)).
The maximal kinetic energy of the blazar jet electrons along LOS is given by \(T_{e,\rm jet}^{\rm max}=m_{e}\left(\gamma_{\rm max,e}^{\prime}\ \Gamma_{B}^{-1}(1-\beta_{B}\cos\theta_{\rm LOS})^{-1}-1\right)\). This is set as the upper bound of the integral on \(T_{e}\) in Eqn. (9). The lower bound is set by the minimum kinetic energy required for scattering, given by
\[T_{e}^{\rm min}=\left(\frac{T_{\chi}}{2}-m_{e}\right)\left[1\pm \sqrt{1+\frac{2T_{\chi}(m_{e}+m_{\chi})^{2}}{m_{\chi}(T_{\chi}-2m_{e})^{2}}} \right] \tag{10}\]
with \(+\) and \(-\) applicable for \(T_{\chi}>2m_{e}\) and \(T_{\chi}<2m_{e}\) respectively. However, the kinetic energy of the slowest electrons in the blazar jets could be larger than \(T_{e}^{\rm min}\). In such a case, the kinetic energy of the least energetic electron in the jet, given by \(T_{e,\rm jet}^{\rm min}=m_{e}\left(\gamma_{\rm min,e}^{\prime}\ \Gamma_{B}^{-1}(1-\beta_{B}\cos\theta_{\rm LOS})^{-1}-1\right)\), sets the lower bound of the integral in Eqn. (9).
The differential cross section ( \(d\sigma_{\chi e}/dT_{\chi}\) ) of the DM-blazar jet electron interaction is given by,
\[\frac{d\sigma_{\chi e}}{dT_{\chi}}=\frac{|\mathcal{M}|^{2}}{16\pi s_{e}}\frac{ 1}{T_{\chi}^{\rm max}} \tag{11}\]
where \(\mathcal{M}\) is the interaction matrix element, a function of \(T_{\chi}\) and \(T_{e}\). \(s_{e}\) is the centre of momentum energy for the electron-DM collision given by :
\[s_{e}=\left(m_{\chi}+m_{e}\right)^{2}+2m_{\chi}T_{e}\, \tag{12}\]
and \(T_{\chi}^{\rm max}\), the maximum kinetic energy that can be imparted to a DM particle by a blazar jet electron of kinetic energy \(T_{e}\), is given by:
\[T_{\chi}^{\rm max}=\frac{T_{e}^{2}+2m_{e}T_{e}}{T_{e}+\left(m_{e}+m_{\chi} \right)^{2}/\left(2m_{\chi}\right)} \tag{13}\]
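The two-body kinematics of Eqns. (10) and (13) are simple enough to transcribe directly; a sketch in consistent energy units:

```python
import math

def T_chi_max(T_e, m_e, m_chi):
    """Eq. (13): maximal kinetic energy transferred to the DM particle."""
    return (T_e ** 2 + 2 * m_e * T_e) / (T_e + (m_e + m_chi) ** 2 / (2 * m_chi))

def T_e_min(T_chi, m_e, m_chi):
    """Eq. (10): minimal electron kinetic energy that can impart T_chi;
    the + (-) branch applies for T_chi above (below) 2 m_e."""
    sign = 1.0 if T_chi > 2 * m_e else -1.0
    root = math.sqrt(1 + 2 * T_chi * (m_e + m_chi) ** 2
                     / (m_chi * (T_chi - 2 * m_e) ** 2))
    return (T_chi / 2 - m_e) * (1 + sign * root)
```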
The effect of including energy dependence in the DM-electron interaction can be seen from the BBDM flux plots, given in Figs. 3 and 4. Profile 1 of the DM density clearly gives a larger BBDM flux as compared to Profile 2, as expected from the larger DM spike for Profile 1 shown in Fig. 2 in Section III. For DM to register an event at Super-K, kinetic energies greater than \(\sim 0.1\) GeV are relevant. In this energy range, the heavy mediator scenario gives a much larger BBDM flux as compared to the constant cross section case, while, on the other hand, the light mediator regime yields a much smaller BBDM flux. From this, we expect the exclusion limit on the DM-electron interaction arising from the light mediator regime to be extremely weak. Since the vector mediator case gives a slightly larger BBDM flux as compared to the scalar mediator scenario, we expect moderately better bounds from vector mediators. Also, since for any given DM profile or BMP the BBDM flux is larger for smaller DM masses, we can expect the bounds to grow stronger for lighter DM particles. Finally, the BBDM flux plots terminate at a certain value of \(T_{\chi}\) because the blazar jet electrons boosting the DM particles have an upper cutoff on their energies (for TXS 0506+056 jets, \(T_{e,\rm jet}^{\rm max}\sim 260\) GeV and for BL Lacertae jets, \(T_{e,\rm jet}^{\rm max}\sim 225\) GeV).
Figure 2: The profiles of \(\rho_{\rm DM}\) (Fig. 2a) and \(\Sigma_{\rm DM}\) (Fig. 2b) are plotted above for TXS 0506+056 blazar parameters. The DM mass chosen for these figures is \(m_{\chi}=1\) MeV. Profile 1 (red) and Profile 2 (blue) are plotted for BMP 1 (solid curve) and BMP 2 (dashed curve). BL Lacertae, on the other hand, is less massive than TXS 0506+056, and yields a larger spike at a smaller distance from the BH.
The TXS 0506+056 blazar is farther away from us than BL Lacertae and is more massive, leading to a larger DM density spike and hence a larger \(\Sigma_{\rm LOS}\). The contribution to the DM flux coming from the factors outside the integrals in Eqn. (9) is hence much larger for BL Lacertae than for TXS 0506+056 (i.e., \(\left(\Sigma_{\rm LOS}/d_{L}^{2}\right)_{\rm BL\;Lac}=10^{3}\left(\Sigma_{\rm LOS}/d_{L}^{2}\right)_{\rm TXS}\)). In spite of this, we note in Figs. 3 and 4 that the flux of DM particles boosted by TXS 0506+056 is larger than the BBDM flux of BL Lacertae for more energetic DM particles (\(T_{\chi}\gtrsim 10\) GeV). This is because the kinetic energy range of the electrons responsible for boosting the DM particles to energies greater than \(10\) GeV is roughly \(T_{e}\gtrsim 10\) GeV, and in this energy range the electron spectrum of the TXS blazar is larger than that of BL Lac (Fig. 1). As a result, we expect stronger bounds to arise from the TXS blazar.
The obtained boosted DM flux will yield the following rate of electron recoil events in Super-K
\[\frac{dR}{dE_{R}}=\aleph\int_{T_{\chi}^{\rm min}}^{\infty}dT_{\chi}\frac{d\phi _{\chi}}{dT_{\chi}}\frac{d\sigma_{\chi e}}{dE_{R}} \tag{14}\]
where \(\aleph=7.5\times 10^{33}\) is the effective number of target electrons in Super-K, and \(d\sigma_{\chi e}/dE_{R}\) is the differential DM-target electron interaction cross section, given by
\[\frac{d\sigma_{\chi e}}{dE_{R}}=\frac{|\mathcal{M}|^{2}}{16\pi s_{\chi}}\frac{ 1}{E_{R}^{\rm max}} \tag{15}\]
where \(s_{\chi}\) is the squared centre-of-momentum energy for the DM-target electron collision, which can be obtained from Eqn. (12) under the substitutions \(m_{\chi}\leftrightarrow m_{e}\) and \(T_{e}\to T_{\chi}\). \(E_{R}^{\rm max}\) is the maximum possible recoil energy in the detector that can be imparted by a DM particle with kinetic energy \(T_{\chi}\), and can be obtained from Eqn. (13) with the appropriate substitutions mentioned before.
To get the total number of expected recoil events (\(N_{e\chi}\)) in a certain energy bin, Eqn. (14) needs to be integrated over \(E_{R}\), as follows
\[N_{e\chi}=\aleph\;T_{\rm exp}\int_{E_{R,\rm min}}^{E_{R,\rm max}}dE_{R}\int_{T_{ \chi}^{\rm min}}^{\infty}dT_{\chi}\frac{d\phi_{\chi}}{dT_{\chi}}\frac{d\sigma _{\chi e}}{dE_{R}} \tag{16}\]
where \(T_{\rm exp}=2628.1\) days is the exposure time, and \([E_{R,\rm min},E_{R,\rm max}]\) is the recoil energy range of each bin.
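As a hedged illustration of how Eqn. (16) can be evaluated in practice, the sketch below performs the nested quadrature with SciPy. The functions `flux` and `dsigma_dER` are placeholders for \(d\phi_{\chi}/dT_{\chi}\) and \(d\sigma_{\chi e}/dE_{R}\) (Eqns. (9) and (15)), `T_chi_min(ER)` is the least DM kinetic energy able to impart recoil \(E_{R}\), and `T_chi_cut` truncates the formally infinite \(T_{\chi}\) integral at the kinematic cutoff of the BBDM flux; units must be chosen consistently with the flux normalisation.

```python
# Hedged sketch of Eqn. (16); `flux`, `dsigma_dER` and `T_chi_min` are
# user-supplied placeholders, not quantities taken from this paper.
from scipy.integrate import quad

ALEPH = 7.5e33            # effective number of target electrons in Super-K
T_EXP = 2628.1 * 86400.0  # exposure time in seconds (2628.1 days)

def N_events(flux, dsigma_dER, T_chi_min, ER_min, ER_max, T_chi_cut):
    """Expected recoil events in the bin [ER_min, ER_max], Eqn. (16)."""
    def inner(ER):
        val, _ = quad(lambda T: flux(T) * dsigma_dER(T, ER),
                      T_chi_min(ER), T_chi_cut)
        return val
    outer, _ = quad(inner, ER_min, ER_max)
    return ALEPH * T_EXP * outer
```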
## V Simplified DM model
We assume that a fermionic DM particle \(\chi\), of mass \(m_{\chi}\), interacts only with electrons. This scenario is possible in several leptophilic particle DM models [41; 42; 43; 44; 45; 46; 47; 48; 49; 50]. Additionally, the electron-DM interaction is mediated by a scalar or a vector particle, with interaction Lagrangian
\[\mathcal{L} = g_{\chi\phi}\phi\bar{\chi}\chi+g_{e\phi}\phi\bar{e}e\quad\text{ or} \tag{17}\] \[= g_{\chi A^{\prime}}A^{\prime}_{\mu}\bar{\chi}\gamma^{\mu}\chi+g_ {eA^{\prime}}A^{\prime}_{\mu}\bar{e}\gamma^{\mu}e \tag{18}\]
For simplicity, we will drop \(A^{\prime}\) and \(\phi\) from the subscripts of the coupling constants, so that \(g_{\chi}\) (\(g_{e}\)) is the coupling constant of the dark mediator to the DM particle (electron). Next, we provide the differential cross section for the different operators and inspect the effect of the Lorentz structure. For that, we define the following quantities:
\[\mathbb{M}^{2} = \frac{16g_{e}^{2}g_{\chi}^{2}m_{e}^{2}m_{\chi}^{2}}{\left(q_{\rm ref}^{2}-m_{i}^{2}\right)^{2}} \tag{19}\] \[\bar{\sigma}_{e\chi} = \frac{\mu_{e\chi}^{2}}{16\pi m_{e}^{2}m_{\chi}^{2}}\mathbb{M}^{2} \tag{20}\]
Figure 3: _Flux of DM particles, boosted by energetic electrons in the jets of the TXS 0506+056 blazar, is plotted above for heavy (3a) and light (3b) mediators. The parameters chosen for the above plots are \(\bar{\sigma}_{e\chi}=10^{-30}\) cm\({}^{2}\) and BMP1. The vector and the scalar mediator cases have been plotted in solid and dashed lines respectively. For comparison, the DM flux for the constant cross section scenario has also been plotted in dotted lines. Two DM masses have been considered, \(m_{\chi}=1\) keV (plotted in black) and \(m_{\chi}=1\) MeV (plotted in red), for DM density Profile 1 (i.e. \(\gamma_{sp}=7/3\)). To avoid overcrowding, only the vector mediator case is considered for Profile 2 (i.e. \(\gamma_{sp}=3/2\)), and the BBDM flux is plotted (in blue) corresponding to DM mass \(m_{\chi}=1\) MeV. Clearly, Profile 2 yields a smaller BBDM flux as compared to Profile 1._
where \(q_{\rm ref}=\alpha m_{e}\) is the reference momentum transfer. Here \(m_{i}\) is the mass of the dark mediator (\(i=A^{\prime},\phi\) for the vector and scalar mediators respectively) and \(\mu_{e\chi}\) is the reduced mass of the DM-electron system. We also define a form factor,
\[F_{\rm DM}^{2}(q^{2})=|\mathcal{M}|^{2}/\mathbb{M}^{2} \tag{21}\]
This factor contains the energy dependence of the differential cross section \(d\sigma_{\chi e}/dT_{\chi}\), arising both from the blazar jet electrons boosting the DM particles and from the Lorentz structure of the interaction. The explicit form of \(F_{\rm DM}\) depends on the DM and mediator model considered.
A similar form factor, \(F_{\rm rec}\), contains energy dependence in the differential cross section \(d\sigma_{\chi e}/dE_{R}\) arising due to interaction of relativistic DM particles with the electrons in Super-K, and can be obtained from the form factor \(F_{\rm DM}\) of Eqn. (21) by making the substitutions : \(m_{e}\leftrightarrow m_{\chi}\), \(T_{\chi}\to E_{R}\) and \(T_{e}\to T_{\chi}\).
Hence the differential cross sections \(d\sigma_{\chi e}/dT_{\chi}\) and \(d\sigma_{\chi e}/dE_{R}\), relevant for DM-blazar jet electron scattering and for DM scattering at the detector end respectively, are given by:
\[\frac{d\sigma_{\chi e}}{dT_{\chi}}=\bar{\sigma}_{e\chi}\frac{m_{e}^{2}m_{\chi}^{2}}{\mu_{e\chi}^{2}}\frac{F_{\rm DM}^{2}(q^{2})}{s_{e}T_{\chi}^{\rm max}} \tag{22}\]
and,
\[\frac{d\sigma_{\chi e}}{dE_{R}}=\bar{\sigma}_{e\chi}\frac{m_{e}^{2}m_{\chi}^{ 2}}{\mu_{e\chi}^{2}}\frac{F_{\rm rec}^{2}(q^{2})}{s_{\chi}E_{R}^{\rm max}} \tag{23}\]
Under the energy independent approximation for the cross section, the differential cross sections would simply be:
\[\frac{d\sigma_{\chi e}}{dT_{\chi}}=\frac{\bar{\sigma}_{e\chi}}{T_{\chi}^{\rm max }}\,\ \frac{d\sigma_{\chi e}}{dE_{R}}=\frac{\bar{\sigma}_{e\chi}}{E_{R}^{\rm max}} \tag{24}\]
### Scalar Mediator
Considering a scalar mediator (denoted as \(\phi\)), one can calculate \(F_{\rm DM}^{2}\) for the interaction between electrons in blazar jets and non-relativistic DM, using Eqn. (21) to obtain
\[F_{\rm DM}^{2}(q)=\frac{\left(q_{\rm ref}^{2}-m_{\phi}^{2}\right)^{2}}{\left( q^{2}-m_{\phi}^{2}\right)^{2}}\frac{\left(2m_{\chi}+T_{\chi}\right)\left(2m_{e }^{2}+m_{\chi}T_{\chi}\right)}{4m_{\chi}m_{e}^{2}} \tag{25}\]
The differential cross section (\(d\sigma_{\chi e}/dT_{\chi}\)) w.r.t. the DM kinetic energy (\(T_{\chi}\)) is:
\[\frac{d\sigma_{\chi e}}{dT_{\chi}}=\bar{\sigma}_{e\chi}\frac{(q_{\rm ref}^{2}-m_{\phi}^{2})^{2}}{(q^{2}-m_{\phi}^{2})^{2}}\,\frac{m_{\chi}}{4\mu_{e\chi}^{2}}\,\frac{(2m_{\chi}+T_{\chi})\left(2m_{e}^{2}+m_{\chi}T_{\chi}\right)}{s_{e}T_{\chi}^{\rm max}} \tag{26}\]
The form factor \(F_{\rm rec}\) and the differential cross section w.r.t. the recoil energy at the detector (\(d\sigma_{\chi e}/dE_{R}\)) are obtained from Eqn. (25) and Eqn. (26) by performing the substitutions prescribed above, viz. \(m_{e}\leftrightarrow m_{\chi}\), \(T_{\chi}\to E_{R}\), \(T_{e}\to T_{\chi}\), \(s_{e}\to s_{\chi}\).
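A small sketch of the scalar form factor of Eqn. (25) may be useful; the sign convention \(q^{2}=-2m_{\chi}T_{\chi}\) (the Mandelstam \(t\) for a DM particle initially at rest) is our assumption, made explicit in the comments, and only squared combinations enter.

```python
# Sketch of the scalar-mediator form factor, Eqn. (25). Units: GeV.
ALPHA = 1.0 / 137.036  # fine-structure constant

def F_DM2_scalar(T_chi, m_chi, m_phi, m_e=0.511e-3):
    q_ref2 = (ALPHA * m_e) ** 2  # reference momentum transfer squared
    q2 = -2.0 * m_chi * T_chi    # momentum transfer squared (assumed convention)
    propagator = ((q_ref2 - m_phi ** 2) / (q2 - m_phi ** 2)) ** 2
    kinematic = ((2.0 * m_chi + T_chi) * (2.0 * m_e ** 2 + m_chi * T_chi)
                 / (4.0 * m_chi * m_e ** 2))
    return propagator * kinematic
```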
Figure 4: Flux of DM particles, boosted by energetic electrons in the jets of the BL Lacertae blazar, is plotted above for heavy (4a) and light (4b) mediators. The parameters chosen for the above plots are \(\bar{\sigma}_{e\chi}=10^{-30}\) cm\({}^{2}\) and BMP1. The vector and the scalar mediator cases have been plotted in solid and dashed lines respectively. For comparison, the DM flux for the constant cross section scenario has also been plotted in dotted lines. Two DM masses have been considered, \(m_{\chi}=1\) keV (plotted in black) and \(m_{\chi}=1\) MeV (plotted in red), for DM density Profile 1 (i.e. \(\gamma_{sp}=7/3\)). To avoid overcrowding, only the vector mediator case is considered for Profile 2 (i.e. \(\gamma_{sp}=3/2\)), and the BBDM flux is plotted (in blue) corresponding to DM mass \(m_{\chi}=1\) MeV. Clearly, Profile 2 yields a smaller BBDM flux as compared to Profile 1.
### Vector Mediator
Using a similar treatment for the vector mediator (denoted by \(A^{\prime}\)), we find that
\[F_{\rm DM}^{2}(q^{2}) = \frac{\left(q_{\rm ref}^{2}-m_{A^{\prime}}^{2}\right)^{2}}{\left(q^ {2}-m_{A^{\prime}}^{2}\right)^{2}}\frac{1}{2m_{\chi}m_{e}^{2}}\left(2m_{\chi} \left(m_{e}+T_{e}\right)^{2}-\right. \tag{27}\] \[\left.T_{\chi}\left\{\left(m_{e}+m_{\chi}\right)^{2}+2m_{\chi}T_{ e}\right\}+m_{\chi}T_{\chi}^{2}\right)\]
and,
\[\frac{d\sigma_{\chi e}}{dT_{\chi}} = \bar{\sigma}_{e\chi}\frac{\left(q_{\rm ref}^{2}-m_{A^{\prime}}^{2}\right)^{2}}{\left(q^{2}-m_{A^{\prime}}^{2}\right)^{2}}\frac{m_{\chi}}{2\mu_{e\chi}^{2}s_{e}T_{\chi}^{\rm max}}\left\{2m_{\chi}(m_{e}+T_{e})^{2}\right. \tag{28}\] \[\left.-T_{\chi}\{(m_{e}+m_{\chi})^{2}+2m_{\chi}T_{e}\}+m_{\chi}T_{\chi}^{2}\right\}\]
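A companion sketch for the vector case, Eqn. (27), with the same hedged \(q^{2}\) convention as in the scalar snippet; note that, unlike the scalar case, the vector form factor also depends on the electron kinetic energy \(T_{e}\).

```python
# Sketch of the vector-mediator form factor, Eqn. (27). Units: GeV.
ALPHA = 1.0 / 137.036

def F_DM2_vector(T_chi, T_e, m_chi, m_A, m_e=0.511e-3):
    q_ref2 = (ALPHA * m_e) ** 2
    q2 = -2.0 * m_chi * T_chi  # assumed sign convention, as in the scalar case
    propagator = ((q_ref2 - m_A ** 2) / (q2 - m_A ** 2)) ** 2
    bracket = (2.0 * m_chi * (m_e + T_e) ** 2
               - T_chi * ((m_e + m_chi) ** 2 + 2.0 * m_chi * T_e)
               + m_chi * T_chi ** 2)
    return propagator * bracket / (2.0 * m_chi * m_e ** 2)
```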
## VI Results
Taking into account the signal efficiency of each recoil bin (\(\epsilon_{\rm sig}\)), the exclusion limit on \(\bar{\sigma}_{e\chi}\) is obtained by requiring
\[N_{e\chi}\epsilon_{\rm sig}<N_{\rm B}, \tag{29}\]
where \(N_{e\chi}\), obtained from Eqn. (16), is the number of expected recoil events arising from collisions of target electrons with DM particles boosted by the blazars, and \(N_{\rm B}\) (\(B=\rm TXS,BL\) for TXS 0506+056 and BL Lacertae) is the 95% CL upper limit on the number of events from the corresponding blazar.
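Since \(\bar{\sigma}_{e\chi}\) enters both the boosted flux and the detection cross section, \(N_{e\chi}\) increases monotonically with \(\bar{\sigma}_{e\chi}\) (attenuation aside), so the limit of Eqn. (29) can be found by simple root bracketing. The sketch below (ours) assumes a user-supplied `N_of_sigma` evaluating Eqn. (16) at a given \(\bar{\sigma}_{e\chi}\); the bracket endpoints are illustrative.

```python
# Hedged sketch: invert Eqn. (29) for the excluded sigma_bar.
import numpy as np
from scipy.optimize import brentq

def exclusion_limit(N_of_sigma, eps_sig, N_B, lo=1e-40, hi=1e-20):
    """Smallest sigma_bar (cm^2) with N_of_sigma(s) * eps_sig = N_B,
    assuming monotonicity and a sign change over [lo, hi]."""
    f = lambda log_s: N_of_sigma(10.0 ** log_s) * eps_sig - N_B
    return 10.0 ** brentq(f, np.log10(lo), np.log10(hi))
```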
Three energy bins were considered in the analysis released by the Super-K collaboration [51]. The total number of events, the Monte Carlo simulation of the background, the signal efficiency and the spatial distribution of events were provided for each bin. One can use these data to select signals from a certain "searching cone" in the direction of the blazar. This removes the majority of the background from the data, increasing sensitivity. The selected signal is then used in the standard Poisson method [52] to yield the 95% CL upper limit on the expected number of events (\(N_{\rm B}\)) for each of the three bins. This analysis was performed by the authors of Ref. [25], and we use their results (i.e. \(N_{\rm B}\)), summarised in Table 2. (For details of the analysis, see [51, 6, 7, 25].) This gives us all the numbers relevant to finding exclusion limits on \(\bar{\sigma}_{e\chi}\) using Eqn. (29).
Super-K is located deep underground to reduce background. As a result, the DM flux entering the detector is significantly attenuated, primarily through interactions with electrons in the Earth above the detector, and this gives rise to the attenuation bound in the exclusion plot. We provide an approximation of the attenuation bound, namely the cross section for which a DM particle with \(T_{\chi}\sim 10\,\rm GeV\) can just impart the threshold recoil energy in the detector. For this, we solve the following energy-loss equation, where \(T_{r}\) is the energy lost by the DM to a target in a single collision,
\[\frac{dT_{\chi}}{dx}=-\sum_{T}n_{T}\int_{0}^{T_{r}^{\rm max}}\frac{d\sigma}{dT _{r}}T_{r}dT_{r} \tag{30}\]
and estimate \(\bar{\sigma}_{e\chi}\) so that the kinetic energy of the DM particle at depth \(z\), denoted by \(T_{\chi}^{z}\), equals the detector threshold \(E_{\rm th}\) (we consider \(E_{\rm th}=100\) MeV, corresponding to Bin 1) for an initial kinetic energy \(T_{\chi,{\rm in}}\sim 10\) GeV. The area bounded by the attenuation bound and the exclusion bound is ruled out by our analysis. In this work, we limit ourselves to elastic scattering and ignore backscattering of light DM particles into the atmosphere. Note that the attenuation limits exist only for heavy mediators; they may also change once a more elaborate study is performed, which we leave for future work.
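The following minimal sketch (ours) integrates Eqn. (30) for a single target species (electrons, with a user-supplied number density `n_e`) in the constant cross section approximation, for which \(\int_{0}^{T_{r}^{\rm max}}(d\sigma/dT_{r})\,T_{r}\,dT_{r}=\bar{\sigma}_{e\chi}T_{r}^{\rm max}/2\); the full analysis would sum over all target species with the energy-dependent cross sections.

```python
# Hedged sketch of the attenuation estimate, Eqn. (30). Units: GeV, cm.
from scipy.integrate import solve_ivp

def T_r_max(T_chi, m_chi, m_e=0.511e-3):
    # Eqn. (13) with m_e <-> m_chi, T_e -> T_chi: max recoil on an electron
    return (T_chi ** 2 + 2.0 * m_chi * T_chi) / \
        (T_chi + (m_e + m_chi) ** 2 / (2.0 * m_e))

def T_chi_at_depth(T_in, depth_cm, sigma_bar_cm2, n_e, m_chi):
    """Integrate dT_chi/dx = -n_e * sigma_bar * T_r_max(T_chi)/2 down to
    depth_cm and return the surviving DM kinetic energy."""
    rhs = lambda x, T: [-n_e * sigma_bar_cm2 * T_r_max(T[0], m_chi) / 2.0]
    sol = solve_ivp(rhs, (0.0, depth_cm), [T_in], rtol=1e-8)
    return sol.y[0, -1]
```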
The exclusion bound arising from Super-K data is shown in Figs. 5 and 6 in the heavy mediator regime for the scalar and vector operators. Taking energy dependence into account, the exclusion bound differs significantly from the bound obtained under the constant cross section assumption. BMP1 sets a stronger bound than BMP2, and DM density Profile 1 yields a better bound than Profile 2. This is in agreement with what we expected from the density profiles (Fig. 2) in Section III and the BBDM flux plots (Figs. 3a and 4a) in Section IV.
which results in very little difference in the exclusion limits. However, the limit coming from the three different bins can differ by \(\sim 2\) or \(3\) orders of magnitude for certain \(m_{\chi}\), so a combined analysis of the three bins might change the bounds. We, however, leave out such an analysis from this work.
Since "heavy" and "light" mediator regimes are the convenient extremes of the actual DM model, the true exclusion bound would lie somewhere in between the bound set by these two regimes. Thus we compare the exclusion bound corresponding to various masses of the vector mediator for Profile 1 and BMP1 in Fig. 7. A
Figure 5: The exclusion bound is plotted for the blazars TXS 0506+056 (5a) and BL Lacertae (5b), corresponding to BMP1. The cases considered are a heavy vector mediator (solid lines), a heavy scalar mediator (dashed lines) and the constant cross section case (dotted lines). The DM density profiles considered are Profile 1 (red) and Profile 2 (blue). The direct detection bounds from Xenon10, Xenon100, SENSEI [53, 54, 55] and DarkSide-50 [56] are also plotted. The bound arising due to DM attenuation is also given for the heavy mediator scenario (grey). The area bounded by the attenuation bound and the exclusion bound is ruled out by our analysis. The exclusion limit from superconducting nanowires is shown in cyan. The constraint due to solar reflection of DM [57, 17] is shown in amber. The exclusion bound from Cosmic Ray electron (CRe) boosted DM [58] is plotted in black (see text for more details).
Figure 6: The exclusion bound is plotted for the blazars TXS 0506+056 (6a) and BL Lacertae (6b), corresponding to BMP2. The cases considered are a heavy vector mediator (solid lines), a heavy scalar mediator (dashed lines) and the constant cross section case (dotted lines). The DM density profiles considered are Profile 1 (red) and Profile 2 (blue). The direct detection bounds from Xenon10, Xenon100, SENSEI [53, 54, 55] and DarkSide-50 [56] are also plotted. The bound arising due to DM attenuation is also given for the heavy mediator scenario (grey). The area bounded by the attenuation bound and the exclusion bound is ruled out by our analysis. The exclusion limit from superconducting nanowires is shown in cyan. The constraint due to solar reflection of DM [57, 17] is shown in amber. The exclusion bound from Cosmic Ray electron (CRe) boosted DM [58] is plotted in black (see text for more details).
similar comparison and scaling exists for Profile 2 and BMP2. Clearly, a mediator of mass 10 GeV corresponds to the heavy regime, and a mediator of mass \(10^{-4}\) MeV reproduces the exclusion limit set by the light mediator regime.
It should be noted that for \(m_{\chi}\geq m_{e}\) the scattering cross section, and consequently the annihilation rate to \(e^{-}e^{+}\), increases as we move to heavier DM particles in Fig. 6 (for \(m_{\chi}\leq m_{e}\), \(\sigma_{\rm ann}^{e^{-}e^{+}}=0\)). As a result, for large enough cross sections the DM annihilation rate can exceed \(3\times 10^{-26}\,\rm cm^{3}\,s^{-1}\), contradicting the rate assumed for BMP2. We refer to these cross sections as the annihilation ceiling; they are shown by the gray curves in Fig. 6. For the vector interaction, the annihilation ceiling established for a 100 GeV mediator is lower than the bound set by our work, thereby rendering the exclusion limits irrelevant. Hence, our exclusion limits for the vector mediator are shown only for \(m_{\chi}<m_{e}\). However, a significant parameter space remains constrained by our exclusion bounds for the scalar interaction. There is no annihilation ceiling for BMP1 because annihilation is presumed to be prohibited in BMP1 (for instance, in the case of asymmetric DM [59]).
Apart from blazar jets, Cosmic Ray electrons (CRe) provide yet another environment in which boosted DM particles are produced. The exclusion bound from CRe boosted DM, using Super-K data, is plotted along with our bounds in Figs. 5 and 6. Furthermore, Refs. [60; 61] propose a DM detection device with an extremely low recoil trigger made using superconducting nanowires. The best bounds from such a prototype device are also shown. Currently our results are much stronger, but proposed devices with materials like NbN and Al might give better exclusions in the near future. Constraints from other direct detection experiments, such as Xenon10, Xenon100, SENSEI and DarkSide-50, are also shown. Exclusion limits from solar reflection of DM [17; 57] are important in the heavy mediator case and are provided as well.
The cosmological constraints from Big Bang Nucleosynthesis (BBN) stringently rule out thermal DM with \(m_{\chi}\lesssim 10\) MeV [62; 63]. However, these bounds can be relaxed in DM models with couplings to both neutrinos and electrons [64]. CMB observations similarly severely constrain DM annihilating to an \(e^{-}e^{+}\) pair [65]. An elaborate dark sector associated with these models can relax these BBN and CMB constraints, so that DM mostly annihilates to other dark sector particles [66].
## VII Summary & Outlook
Blazars, in addition to being a key source of high energy electrons, are projected to have a DM density spike in their core due to DM accretion onto their SMBH. Despite large uncertainties, from astrophysics and from the unknown annihilation properties of dark matter, in the density of the resulting DM spike, strong bounds on the elastic scattering cross section for DM-electron scattering have been obtained in Ref. [25]. Here, we demonstrate how these limits change when the energy dependence of the S-matrix arising from the Lorentz structure of the associated vertex is taken into consideration. We remain agnostic about the relic abundance mechanism, since DM models might include an extended dark sector that has a significant impact on the present-day DM abundance. To that end, we derived limits using Super-K data. We found that the constraints on such energy-dependent scattering cross sections, which mostly depend on the mediator mass, are at least several orders of magnitude tighter than the current limits from blazars in the literature under the constant cross section assumption. Though a constant cross section is a meaningful way to explain a concept, in reality it corresponds to a small parameter space of a DM model. Our bounds are, however, weakened if the mediator mass is sufficiently small. This is because the BBDM flux is then orders of magnitude below the constant cross section flux in the relevant energy bin (see Figs. 3b and 4b). We also studied the less cuspy profile of the DM spike and found that the constraints on \(\bar{\sigma}_{e\chi}\) from BBDM are still significant compared to cosmic ray boosted DM.
Another subtlety is that, in addition to relativistic electrons, blazars also contain energetic protons, which may contribute to the BBDM flux. However, the contribution is insignificant, since the coupling to the proton is loop-suppressed if DM has only a tree-level interaction with electrons. Another natural assumption, inspired by the standard model's \(SU(2)_{L}\) gauge symmetry, is that neutrinos should have the same cross section with DM as charged leptons [67], allowing us to compare our \(\sigma_{e\chi}\) limits with the current \(\sigma_{\nu\chi}\) limits for cosmic ray boosted DM [16]. We intend to investigate this possibility in future work.
Figure 7: The exclusion bound is plotted for the blazar TXS 0506+056 for various mediator masses, corresponding to the vector mediator scenario. The mediator masses chosen are \(10\) GeV, \(1\) MeV, \(10^{-2}\) MeV and \(10^{-4}\) MeV, all plotted in different linestyles. The bounds for the heavy (in black) and light (in grey) mediator regimes are also plotted. The profiles chosen are DM density Profile 1 and BMP1.
## Acknowledgements
D.G. acknowledges support through the Ramanujan Fellowship and MATRICS Grant of the Department of Science and Technology, Government of India. D.S. has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 101002846, ERC CoG "CosmoChart". The authors also thank Arka Banerjee and Susmita Adhikari for valuable discussions. We thank Robert Mcghee, Iason Baldes and Kalliopi Petraki for useful comments.
|
2309.10638 | Partial triangulations of surfaces with girth constraints | Barnette and Edelson have shown that there are finitely many minimal
triangulations of a connected compact 2-manifold M. A similar finiteness result
is obtained for cellular partial triangulations that satisfy the Maxwell count
3v-e=6 and girth inequality constraints for certain embedded cycles. Also a
characterisation of cellular M-embedded (3,6)-tight graphs is given in terms of
the satisfaction of higher genus girth inequalities. | Stephen C. Power | 2023-09-19T14:20:54Z | http://arxiv.org/abs/2309.10638v3 | # Partial triangulations of surfaces with girth constraints
###### Abstract.
Barnette and Edelson have shown that there are finitely many minimal triangulations of a connected compact 2-manifold \(\mathcal{M}\). A similar finiteness result is obtained for cellular partial triangulations that satisfy the Maxwell count \(3v-e=6\) and girth inequality constraints for certain embedded cycles. Also a characterisation of cellular \(\mathcal{M}\)-embedded (3,6)-tight graphs is given in terms of the satisfaction of higher genus girth inequalities.
MSC2020 _Mathematics Subject Classification._ 05C10, 52C25. November 20, 2023.
## 1. Introduction
The following theorem, obtained in 1989, is due to Barnette and Edelson [2], [3].
**Theorem 1.1**.: _There are finitely many minimal triangulations of a compact 2-manifold._
The first paper [2] deals with orientable surfaces while the second paper [3] obtains the general case, resolves an oversight in [2] and gives a more direct proof avoiding Mayer-Vietoris sequences for homology groups. A proof of the theorem is given in Section 2 below and in Section 4 we obtain an analogous result for partial triangulations that satisfy the girth inequalities of Definition 3.4. These inequalities are length constraints on embedded cycles that bound an open disc in \(\mathcal{M}\) that is not fully triangulated. For the low genus settings of the projective plane and the torus considered in [13] and [7] the girth inequalities of Definition 3.4 are sufficient for the determination of (3,6)-sparsity. In Section 5, which is independent of Section 4, we define higher genus girth inequalities and show, in Theorem 5.9, that a cellular graph \(G\) in \(\mathcal{M}\) is (3,6)-tight if and only if all higher genus girth inequalities are satisfied, together with the Maxwell count \(3v-e=6\) for \(G\). We expect that the methods of Section 4 can be extended to higher genus critical cycles.
A compact 2-manifold \(\mathcal{M}\), as a topological space, is homeomorphic to a compact surface without boundary. We assume throughout that \(\mathcal{M}\) is connected. A face of an embedded (finite) graph \(G\) in \(\mathcal{M}\) is a connected component of the complement of \(G\) and we consider a _triangulation_ of \(\mathcal{M}\) to be an embedded simple graph in \(\mathcal{M}\) where every face is homeomorphic to an open disc and bounded by a 3-cycle of embedded edges. Such an embedded graph \(G\) is a _minimal triangulation_ if no edge can be topologically contracted, in the natural way, to give a simple embedded graph with 2 fewer faces. Two triangulations of \(\mathcal{M}\) are regarded as the same if their embeddings agree up to a homeomorphism of \(\mathcal{M}\). The embedded graph definition of a triangulation does not require the stricter condition that each face is uniquely associated with its boundary 3-cycle. Thus the unique minimal
triangulation of the sphere in our sense is \(K_{3}\) rather than \(K_{4}\). We also note that Barnette [1] has shown that there are \(2\) minimal triangulations of the real projective plane.
Recall that a triangulation of the sphere is \((3,6)\)-tight, in the sense of Definition 3.6 below. One can view this property as a uniform sparsity condition for which \(3v-e\geq 6\) holds for all subgraphs with at least \(3\) vertices, together with \(3v-e=6\) for \(G\) itself. In contrast, satisfaction of the girth inequalities is a weaker nonuniform sparsity requirement. Classes of \((3,6)\)-tight graphs that admit embeddings in a surface, and their construction by repeated vertex-splitting moves, are topics of current interest in the rigidity theory of generic bar-joint frameworks in \(3\) dimensions, and this provides motivation for our considerations here. The rigidity connection stems from the fact that vertex-splitting moves preserve the generic rigidity of such a framework (Whiteley [19]). See, for example, Cruickshank, Kitson and Power [6], [7], Cruickshank, Kastis, Kitson and Schulze [5], Jordan [11], Jordan and Tanigawa [12], Kastis and Power [13] and Kitson and Power [14].
Girth inequalities were introduced in Cruickshank, Kitson and Power [6] in the setting of block and hole graphs considered earlier by Finbow-Singh and Whiteley [8]. These graphs arise from surgery on a triangulated sphere whereby the interiors of some essentially disjoint (ie. nonoverlapping) triangulated discs are deleted and some of the resulting holes have so-called rigid block graphs attached at their boundaries. In the case of a graph \(G\) with a single block and single hole and equal boundary lengths \(r\geq 4\), a characterisation of generic rigidity in \(\mathbb{R}^{3}\) was given in [8] in terms of the existence of \(r\) disjoint paths from the boundary of the hole to the boundary of the block. In [6] an alternative equivalent characterisation was given in terms of the elementary girth inequalities of Definition 3.4. Also it was shown that this is equivalent to the (3,6)-tightness of \(G\) if the inserted block graph is minimally rigid, and that this characterisation holds more generally for single block graphs with several holes.
For a broad background on embedded graphs see Gross and Tucker [10] and Mohar and Thomassen [16]. Theorem 1.1 has been generalised by Malnic and Nedela [15] to so-called \(k\)-minimal triangulations \(T\) for \(k\geq 3\), meaning that the edge-width of \(T\) is \(k\) and each edge lies on an essential \(k\)-cycle. A triangulation has _edge-width \(k\)_ if the minimal length of an essential cycle is \(k\). See also Theorem 5.4.1 of [16] and the associated discussion. Other proofs of Theorem 1.1, by Nakamoto and Ota [17] and Gao et al. [9], exploit rather deep additive formulae for the genus of a graph and obtain precise genus bounds (linear and quartic, respectively) for the size of a minimal triangulation.
We also note that Theorem 1.1 has been generalised by Boulch et al. [4] to surfaces with boundary. The sketch proof of Lemma 4 of [4] needed for this may be completed with the loop extension theorem of Barnette and Edelson given in Theorem 2.5 below.
## 2. The Barnette-Edelson theorem.
The overall scheme of the proof in [3] is an induction argument in which the surface \(\mathcal{M}\) is cut by a nonplanar \(3\)-cycle of the embedded graph \(G\). If this \(3\)-cycle has an annulus neighbourhood then the resulting \(1\) or \(2\) open manifolds, \(\mathcal{M}^{\prime}\), or \(\mathcal{M}^{\prime}\) and \(\mathcal{M}^{\prime\prime}\), determine compact surfaces with boundary, each component of the boundary having an embedded \(3\)-cycle. If the cutting \(3\)-cycle has a Mobius strip neighbourhood in \(\mathcal{M}\) then the resulting compact surfaces with boundary have an embedded \(6\)-cycle in each boundary component. In both cases, by capping the boundary components with a topological disc one obtains
\(1\) or \(2\) compact surfaces. These lower genus surfaces, say \(\mathcal{M}_{1}\), or \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\), have triangulations, \(T_{1}\), or \(T_{1}\) and \(T_{2}\), that are derived from \(T\) and from small triangulations of the capping discs. A capping disc for the cutting cycle has the form of an attached triangle or an attached triangulated \(6\)-cycle. Although, for a minimal triangulation \(T\), the derived triangulations need not be minimal there is a genus bound on the number of disjoint nonhomotopic \(3\)-cycles with a common base point (see Theorem 2.2 below), and a similar bound on disjoint homotopic \(3\)-cycles with common base point (Lemma 2.1). These constraints ensure that the derived triangulations are close to minimal (in the sense of Lemma 2.4).
The following lemma is Lemma 6 of [2]. See also Lemma 6 of [4]. A _shrinkable_, or _contractible_, edge of the triangulation \(T\) is one that does not lie on a 3-circuit (3-cycle of edges) other than the two facial 3-cycles that contain the edge. In this case the natural contraction \(G/e\), obtained by contracting \(e\) to a single vertex and contracting each incident face to an edge, is a simple embedded graph. A 3-circuit of \(T\) is _planar_ if it is the boundary of an embedded open disc in \(\mathcal{M}\) and is _nonplanar_ otherwise. We may assume henceforth that \(\mathcal{M}\) is connected.
**Lemma 2.1**.: _If \(e\) is an edge in a triangulation \(T\) of a compact 2-manifold \(\mathcal{M}\) and if \(e\) lies on four distinct nonplanar 3-circuits, all homotopic to each other, then \(T\) has a shrinkable edge._
Loosely speaking, the hypotheses imply that there is a triangulated planar patch of \(T\) with an edge that is sufficiently interior to this patch that it cannot lie on a nonfacial \(3\)-cycle. Consequently, for a minimal triangulation there is a bound on the number of homotopic \(3\)-circuits through a vertex. In Lemma 3.7 we give a general version of this principle which is also applicable in the context of certain critical walks in a contraction-minimal graph embedding.
It is well-known that a nonfacial planar \(3\)-cycle in a triangulation of \(\mathcal{M}\) contains a contractible edge (see Lemma 1 of Barnette [1] for example) and so in a minimal triangulation all nonfacial \(3\)-cycles are nonplanar. The lemma allows us to deduce that if, in a minimal triangulation, there are many \(3\)-cycles through an edge, then there are many \(3\)-cycles through that edge which are pairwise nonhomotopic.
A key topological fact used in the proof of Theorem 1.1 is the following genus bound theorem which gives a limitation on the number of disjoint nonhomotopic cycles. This is Corollary 1 of [3] which derives from Theorem 1 of [3] (our Theorem 2.5 below) correcting the oversight from [2]. The genus \(g\) is defined below.
**Theorem 2.2**.: _If \(\mathcal{M}\) is not the sphere or projective plane then any family \(\mathcal{E}\) of homotopically nontrivial simple pairwise nonhomotopic curves, meeting only at a common point in \(\mathcal{M}\), has at most \(6g-3\) members when \(\mathcal{M}\) is orientable, and \(3g\) members when \(\mathcal{M}\) is nonorientable, where \(g\) is the genus of \(\mathcal{M}\)._
A corollary is the following genus bound lemma for vertex degrees with which we may bypass some lemmas from [2].
**Lemma 2.3**.: _Let \(\mathcal{M}\) be a compact 2-manifold with genus \(g\). Then there is a constant \(c\) such that the degree of a vertex in a minimal triangulation \(T\) of \(\mathcal{M}\) is no greater than \(cg\)._
Proof.: Suppose that \(v\) has degree \(m\). Consider the graph \(N(v)\) corresponding to the union of the triangles of \(T\) incident to \(v\). By minimality there are at least \(m/2-1\) nonplanar 3-cycles through \(v\) with various edges not in \(N(v)\) but with vertices adjacent to \(v\). We call these _peripheral edges_. By Lemma 2.1 there must be at least \([(m/2-1)/3]\) of these 3-cycles which are pairwise nonhomotopic. By shrinking \(N(v)\) to \(v\), by a path of homeomorphisms of \(\mathcal{M}\), we see that the peripheral embedded edges of these 3-cycles determine a set of loops at \(v\) which are otherwise disjoint. These loops are also pairwise nonhomotopic and so, by Theorem 2.2, \(m\leq cg\) for some constant \(c\).
We next give a proof of Theorem 1.1 assuming Theorem 2.2 and follow this with a proof of Theorem 2.2.
Let \(T^{\prime}\) be a triangulation obtained from the triangulation \(T\) by shrinking a single edge. Then the reverse operation, \(T^{\prime}\to T\), is called a _vertex-splitting move_, or, more precisely, a _planar vertex-splitting move_. The following simple lemma plays a role in the induction argument of the proof.
**Lemma 2.4**.: _Let \(T\) be a triangulation of a compact manifold \(\mathcal{M}\) for which there are exactly \(n\) edges which do not lie on a nonfacial 3-cycle. Then \(T\) is obtained from a minimal triangulation by a sequence of at most \(n\) vertex-splitting moves._
Proof.: The edge shrinking move and the planar vertex-splitting move are inverse operations on the set of triangulations of \(\mathcal{M}\). It is enough then to observe that if \(T\to T^{\prime}\) is an edge shrinking move between triangulations and some edge \(f\) in \(T\) lies on a nonfacial 3-cycle of \(T\), then the corresponding edge in \(T^{\prime}\) lies on a nonfacial 3-cycle.
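For readers who wish to experiment, the following small sketch (ours, using networkx) tests the shrinkability condition at the graph level: in a triangulation every edge lies on exactly two facial 3-cycles, so an edge \(uv\) is shrinkable precisely when \(u\) and \(v\) have exactly two common neighbours, and the contraction then remains simple.

```python
# Illustrative check of shrinkability in a triangulation graph.
import networkx as nx

def is_shrinkable(G, u, v):
    """True iff edge uv lies on no 3-cycle beyond its two facial ones,
    i.e. u and v have exactly two common neighbours."""
    return len(set(G[u]) & set(G[v])) == 2

def shrink(G, u, v):
    """Contract a shrinkable edge, keeping the graph simple."""
    return nx.contracted_nodes(G, u, v, self_loops=False)

# The octahedron triangulates the sphere and every edge lies on exactly
# two 3-cycles, so every edge passes the test.
G = nx.octahedral_graph()
print(all(is_shrinkable(G, u, v) for u, v in G.edges()))  # True
```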
Proof of Theorem 1.1.: Let \(\mathcal{M}\) be orientable with genus \(g\) and suppose that the number of minimal triangulations of a compact 2-manifold with lower genus is finite. Cutting \(\mathcal{M}\) with a 3-cycle that is not homotopic in \(\mathcal{M}\) to a point we obtain 1 or 2 manifolds with boundary. Capping boundaries, as indicated above, yields a compact 2-manifold \(\mathcal{M}_{1}\), or the pair \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\), each of which has genus smaller than \(g\).
Suppose first that there is one derived manifold \(\mathcal{M}_{1}\). It carries an inherited triangulation \(T_{1}\) which need not be minimal. A shrinkable edge, \(f\) say, of \(T_{1}\) has the property that it lies on a 3-cycle in \(\mathcal{M}\) which passes through a vertex of the cutting 3-cycle. By Lemma 2.3 the degrees of these vertices, \(x,y\) and \(z\), in the triangulation \(T\), are bounded by \(cg\) for some constant \(c\). It follows that the number of such 3-cycles, and therefore the number of shrinkable edges \(f\), is at most \(dg\) for some positive constant \(d\). Assume, for the induction hypothesis, that the number of minimal triangulations of \(\mathcal{M}_{1}\) is finite. By Lemma 2.4 the number of triangulations \(T_{1}\) of \(\mathcal{M}_{1}\) obtained by the construction above is finite and so the number of minimal triangulations \(T\) is finite. This argument establishes the induction step in the orientable case with cut cycle giving a connected manifold \(\mathcal{M}_{1}\). The same argument applies when the cut leads to a pair \(\mathcal{M}_{1},\mathcal{M}_{2}\) and so, since there is one minimal triangulation for the sphere the theorem follows for orientable 2-manifolds.
In the nonorientable case the capping of the boundaries of \(\mathcal{M}_{1}\) (or \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\)) is by the attachment of triangulated discs to 3-cycles or 6-cycles and the argument is the same.
An embedded simple graph \(G\) in a compact 2-manifold \(\mathcal{M}\) is _cellular_ if its faces, that is, the components of the complement of the union of the embedded edges, are each
homeomorphic to an open disc. The general Euler formulae for \(G\) are
\[v-e+f=2-2g\quad\text{and}\quad v-e+f=2-g,\]
for, respectively, the case of orientable and nonorientable \(\mathcal{M}\), where \(v,e\) and \(f\) denote the number of vertices, edges and faces of \(G\). The genus \(g=g(\mathcal{M})\) is defined to be the maximum number of disjoint loops whose union has a connected complement in \(\mathcal{M}\). Moreover, it can be defined in terms of the standard models for \(\mathcal{M}\) as the number of handles attached to a sphere when \(\mathcal{M}\) is orientable, and as the number of cross caps attached to a sphere when \(\mathcal{M}\) is not orientable.
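These formulae are easy to check numerically; the sketch below (ours) verifies them for two standard cellular embeddings, \(K_{4}\) triangulating the sphere (\(g=0\)) and \(K_{7}\) triangulating the torus (\(g=1\)).

```python
# Quick check of the Euler formulae for cellular embedded graphs.
def euler_check(v, e, f, g, orientable=True):
    return v - e + f == (2 - 2 * g if orientable else 2 - g)

print(euler_check(4, 6, 4, 0))    # K4 on the sphere: True
print(euler_check(7, 21, 14, 1))  # K7 on the torus: True
```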
Proof of Theorem 2.2.: Let \(|\mathcal{E}|=n\). Suppose first that the embedded graph determined by \(\mathcal{E}\), with a single vertex and \(n\) embedded edges, is cellular and the boundary of each face is the union of at least \(3\) embedded edges. By Euler's formula, for orientable \(\mathcal{M}\), we have \(1-e+f=2-2g\). Also, \(2e=\sum_{k}kf_{k}\) where \(f_{k}\) is the number of faces with \(k\) edges. Thus \(2e\geq 3f\),
\[e=1+f+2g-2\leq 2e/3+2g-1\]
and so \(e\leq 6g-3\). The genus bound in the nonorientable case is obtained in the same way.
It remains to apply Theorem 2.5 below which is essentially Theorem 1 of [3]. We say that a loop edge \(e\) in a surface is _essential_ if an associated simple closed curve for it, say \(\alpha(t),0\leq t\leq 1\), is not null homotopic. Also, a pair of loops \(e_{1},e_{2}\) with curves \(\alpha,\beta\) with \(\alpha(0)=\beta(0)=v\) is said to be a homotopic pair of loops if \(\{\alpha,\beta\}\) or \(\{\alpha,-\beta\}\) is a homotopic pair.
**Theorem 2.5**.: _Suppose that \(\mathcal{M}\) is not the sphere or the projective plane. Then any family \(\mathcal{E}=\{e_{1},\ldots,e_{r}\}\) of pairwise nonhomotopic essential loops, meeting only at a common point \(v\), extends to a similar family \(\tilde{\mathcal{E}}\) whose embedded graph \(G(\tilde{\mathcal{E}})\) is cellular with each face having at least 3 edges._
Proof sketch.: Let \(U\) be a face of the embedded multigraph \(G(\mathcal{E})\) of \(\mathcal{E}\). The main idea of the proof is to introduce a loop in \(U\cup\{v\}\) in order to extend \(\mathcal{E}\) to a similar such family for which \(U\) is replaced by \(1\) or \(2\) faces that have lower genus. The genus \(g(U)\) may be defined directly as the maximum number of disjoint loops in \(U\) that do not disconnect \(U\). This is necessarily finite for any open connected subset \(U\) of \(\mathcal{M}\).
If \(g(U)\) is positive then there exists a loop in \(U\) that passes through either a handle or a cross cap of \(U\). This loop may be modified to give a loop, \(e_{*}\) say, in \(\{v\}\cup U\), with the relative topology, that passes through \(v\) and the handle or cross cap. It follows that \(e_{*}\) is essential in \(\mathcal{M}\). Also it is not homotopic to any loop of the subfamily \(\mathcal{E}_{0}\) of loops of \(\mathcal{E}\) lying in the boundary of \(U\). This follows formally on considering the fundamental group of the closure of \(U\) in \(\mathcal{M}\). A further case by case analysis shows that \(e_{*}\) is not homotopic to any loop in \(\mathcal{E}\). For the extended set \(\mathcal{E}^{\prime}=\mathcal{E}\cup\{e_{*}\}\) with embedded graph \(G(\mathcal{E}^{\prime})\) the face \(U\) of \(G(\mathcal{E})\) has been replaced by 1 or 2 faces with strictly lower genus than \(g(U)\).
Repeating this argument sufficiently often leads to an extended set \(\tilde{\mathcal{E}}\) for which the associated embedded graph has all faces of genus \(0\). Further examination, together with the assumption that \(\mathcal{M}\) is not the projective plane, shows that each face has at least 3 edges.
## 3. Sparse graphs and girth inequalities
Our main interest concerns various families of sparse graphs that satisfy the Maxwell count \(3v-e=6\). However we also include associated families with \(3v-e=\alpha\) for some \(\alpha\geq 6\).
**Lemma 3.1**.: _Let \(G\) be a cellular embedded graph in a compact 2-manifold \(\mathcal{M}\) that satisfies the global count \(3v-e=\alpha\). Then_
\[\sum(k-3)f_{k}=\alpha+3\mu g-6\]
_where \(\mu=2\) if \(\mathcal{M}\) is orientable and \(\mu=1\) otherwise._
Proof.: We have \(2e=\sum_{k}kf_{k}\) and \(3(2-\mu g)=3v-3e+3f\) and so
\[6-3\mu g=e+\alpha-3e+3\sum f_{k}=\alpha-\sum kf_{k}+3\sum f_{k},\]
and the identity follows.
In particular, for such cellular graphs there is a bound on both the size and number of faces with more than 3 edges and this bound depends only on \(\alpha\) and the genus of \(\mathcal{M}\).
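The identity of Lemma 3.1 can likewise be checked on small examples; the sketch below (ours) uses the two partial triangulations discussed later in this section, a torus with a single 9-sided face and the double torus of Figure 1 with faces of sizes 8 and 10, both with \(\alpha=6\).

```python
# Check of the identity sum_k (k-3) f_k = alpha + 3*mu*g - 6 of Lemma 3.1.
def lhs(face_sizes):
    return sum(k - 3 for k in face_sizes if k > 3)

def rhs(alpha, g, orientable=True):
    mu = 2 if orientable else 1
    return alpha + 3 * mu * g - 6

print(lhs([9]) == rhs(6, 1))       # torus, one 9-gon: 6 == 6
print(lhs([8, 10]) == rhs(6, 2))   # double torus, Figure 1: 12 == 12
```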
Let \(\mathcal{F}(\mathcal{M})\) be a family of simple embedded graphs for the compact 2-manifold \(\mathcal{M}\). Then \(G\) in \(\mathcal{F}(\mathcal{M})\) is said to be _contraction-minimal_ if for every edge \(e\) belonging to two facial 3-cycles the contracted embedded graph \(G/e\) does not belong to \(\mathcal{F}(\mathcal{M})\).
The Barnette-Edelson theorem shows that there are finitely many contraction-minimal embedded graphs in the family \(\mathcal{T}(\mathcal{M})\) of triangulations of \(\mathcal{M}\). For a simple corollary, consider the family \(\mathcal{T}(\mathcal{M},n_{4},\ldots,n_{r})\) of _partial triangulations_ that are cellular embedded simple graphs \(G\) in \(\mathcal{M}\) that have \(n_{k}\) faces with closed boundary walks of length \(k\), for \(4\leq k\leq r\). We extend earlier terminology by defining a 3-cycle of \(G\) to be _planar_ if it is the boundary cycle of an open disc in \(\mathcal{M}\) which contains no nontriangular faces of \(G\).
**Theorem 3.2**.: _There are finitely many contraction-minimal embedded graphs in the family \(\mathcal{T}(\mathcal{M},n_{4},\ldots,n_{r})\)._
Proof.: Let \(G\) be contraction-minimal in \(\mathcal{T}(\mathcal{M},n_{4},\ldots,n_{r})\). Then every edge on 2 facial 3-cycles of \(G\) lies on a nonfacial 3-cycle of \(G\). Add a vertex and \(k\) edges to each face of \(G\) with a closed boundary walk of length \(k\geq 4\) to obtain an embedded simple graph \(G^{+}\) in \(\mathcal{T}(\mathcal{M})\). Let \(\mathcal{S}\) be the set of these added edges together with the set of edges that belong to these boundary walks of \(G\). Then \(|\mathcal{S}|\) is no greater than \(N=\sum_{k=4}^{r}2kn_{k}\). All other edges of \(G^{+}\) lie on nonfacial 3-cycles. It follows from Lemma 2.4 that \(G^{+}\) is obtained from a minimal triangulation of \(\mathcal{M}\) by at most \(N\) vertex-splitting moves and so, by the Barnette-Edelson Theorem, the embedded graphs \(G^{+}\), and hence the embedded graphs \(G\), are finite in number.
### Girth inequalities
Let \(c\) be a closed walk in an \(\mathcal{M}\)-embedded simple graph \(G\). Then \(c\) is a _planar type closed walk_ if, as an embedded graph, \(c\) has a cellular face \(U\) for which \(c\) is the boundary walk. Such a closed walk is said to be a _planar walk for \(G\)_ if, in addition, \(U\) is triangulated by triangular faces of \(G\). Equivalently, the closure of \(U\) is the closure of the union of the triangular faces of \(G\) in \(U\).
**Definition 3.3**.: Let \(G\) be a simple embedded cellular graph for the compact \(2\)-manifold \(\mathcal{M}\) with nontriangular faces \(W_{1},\ldots,W_{r}\). Then a planar type closed walk \(c\) in \(G\) with face \(U\)_satisfies the girth inequality_ if
\[|c|-3\geq\sum_{k\in I(U)}(|c_{k}|-3),\]
where \(I(U)\) is the index set for the boundary walks \(c_{k}\) of the nontriangular faces \(W_{k}\) contained in \(U\). When equality holds \(c\) is said to be a _critical planar type walk for \(G\)_.
**Definition 3.4**.: A cellular embedded graph \(G\) in a compact \(2\)-manifold \(\mathcal{M}\) satisfies the _planar girth inequalities_ if every planar type closed walk \(c\) in \(G\) with cellular face \(U\) satisfies the girth inequality.
Write \(\mathcal{G}_{\rm pl}(\mathcal{M},\alpha)\), for \(\alpha\geq 6\), for the family of simple cellular embedded graphs \(G\) in \(\mathcal{M}\) that satisfy the planar girth inequalities and the freedom count \(f(G)=3v-e=\alpha\).
For a planar type walk \(c\) with open disc face \(U\) let \({\rm Int}_{G}(c)\), the _interior graph_ of \(c\), be the subgraph of \(G\) formed by the union of the edges of \(c\) and the edges of \(G\) that meet \(U\). Also let \({\rm Ext}_{G}(c)\), the _exterior graph_ of \(c\), be the subgraph of \(G\) whose edges and vertices do not meet \(U\).
**Lemma 3.5**.: _Let \(c\) be a planar type closed walk for the cellular embedded graph \(G\) with \(f(G)=\alpha\). Then the following assertions are equivalent._
_(i) \(c\) satisfies the girth inequality._
_(ii) \(f({\rm Ext}_{G}(c))\geq\alpha\)._
Proof.: By Lemma 3.1 we have
\[\sum_{k\geq 4}(k-3)f_{k}=3\mu g+f(G)-6\]
and so
\[\sum_{k\in I(U)}(|c_{k}|-3)+\sum_{k\notin I(U)}(|c_{k}|-3)=3\mu g+f(G)-6.\]
For the cellular embedded graph \({\rm Ext}_{G}(c)\) we have, similarly,
\[|c|-3+\sum_{k\notin I(U)}(|c_{k}|-3)=3\mu g+f({\rm Ext}_{G}(c))-6\]
and so the equivalence follows.
**Definition 3.6**.: A graph is _\((3,6)\)-sparse_ if it satisfies the local count \(f(G^{\prime})\geq 6\) for each subgraph \(G^{\prime}=(V^{\prime},E^{\prime})\) with at least \(3\) vertices and is _\((3,6)\)-tight_ if in addition \(f(G)=6\). The family \(\mathcal{F}(\mathcal{M},6)\) is the family of cellular embedded graphs in \(\mathcal{M}\) whose underlying graphs are \((3,6)\)-tight.
It follows that if \(G\) is (3,6)-tight then it belongs to \(\mathcal{G}_{\rm pl}(\mathcal{M},6)\). The equality \(\mathcal{F}(\mathcal{M},6)=\mathcal{G}_{\rm pl}(\mathcal{M},6)\) does not hold in general as we now see for certain partial triangulations \(G\) associated with Figure 1.
By the formula above if the torus is partially triangulated with a single nontriangular face with a closed walk \(c\) of length \(r\) then \(f(G)=r-3\), and so \(r=9\) if \(G\) is in \(\mathcal{F}(\mathcal{M},6)\). The figure indicates a partial triangulation \(G\) of the double torus \(\mathcal{M}\), with \(g(\mathcal{M})=2\)
where there are two faces with boundary cycle lengths 8 and 10, and hence \(f(G)=6\). Also, in the partial triangulation \(\mathcal{M}\) is the join of two partially triangulated tori over a common 3-cycle. Since the nontriangular faces are separated by an essential 3-cycle it follows that a critical walk cannot include vertices that are separated by this 3-cycle. It is therefore elementary to construct the partial triangulation so that \(G\) satisfies the planar girth inequalities. Indeed by vertex splitting it can be ensured that there are no planar type closed walks around the 8-sided (resp. 10-sided) face of length less than 8 (resp. 10). On the other hand the subgraph \(K_{1}\) obtained by removing edges and vertices to the right of the separating 3-cycle has freedom \(f(K_{1})=5\) and so \(G\) is not (3,6)-tight.
### Critical patches and contractible edges
We now obtain a generalisation of Lemma 2.1.
Let \(c_{1},c_{2}\) be planar type critical walks in \(G\), with \(G\) in \(\mathcal{G}_{\mathrm{pl}}(\mathcal{M},\alpha)\), that have common vertices \(v,w\) and subwalks \(\pi_{1},\pi_{2}\) from \(v\) to \(w\) that are otherwise disjoint. Moreover suppose that the concatenation of \(\pi_{1}\) and the reversal of \(\pi_{2}\) gives a boundary walk of an open disc that contains only triangular faces of \(G\). Denote the triangulated subgraph determined by these faces as \(P(\pi_{1},\pi_{2})\) and refer to it as a _critical patch_ of \(G\). It follows that \(|\pi_{1}|=|\pi_{2}|\), with length \(d\) say, and that every path \(\pi\) of edges in the patch from \(v\) to \(w\) has length at least \(d\). If \(\pi\) has no vertices on \(c_{1},c_{2}\) except \(v,w\) then \(\pi\) is said to be an internal path of \(P=P(\pi_{1},\pi_{2})\). With this notation we have the following lemma.
**Lemma 3.7**.: _Let \(P\) be a critical patch for \(G\) determined by paths \(\pi_{1},\pi_{2}\) between \(v,w\) with length \(d\geq 3\). Also, let \(\mathcal{S}\) be the set of all paths in \(P\) from \(v\) to \(w\) with length \(d\) and let \(\mathcal{S}^{\prime}\subset\mathcal{S}\) consist of essentially disjoint paths in the sense that the only vertices common to the 2 paths are \(v\) and \(w\). If \(G\) is contraction-minimal then there is a bound for \(|\mathcal{S}^{\prime}|\), and hence \(|\mathcal{S}|\), that depends only on \(d\)._
Proof.: The paths of \(\mathcal{S}^{\prime}\) may be naturally ordered as adjacent paths, say \(\rho_{1},\ldots,\rho_{s}\). Consider an edge \(e=xy\) of a triangular face of the form \(vxy\) where \(vx\) is an edge of \(\rho_{i}\). (Figure 2 indicates such an edge when \(s=5,d=5\) and \(i=3\).) By the hypotheses \(e\) lies on a nonfacial 3-cycle of \(G\) or on a critical walk \(c\) of \(G\). The first possibility is evidently not possible if \(s\geq 3\) and \(i\neq 1,s\). Consider then the subwalk \(\tau\) of \(c\) in \(P\) with vertices \(a,b\) on the boundary of \(P\). Then \(\{a,b\}\) is not the set \(\{v,w\}\) since this would imply the existence of a path of length less than \(d\) between \(v\) and \(w\). On the other hand by Lemma 3.1 there is an independent bound, say \(M\), for the length of \(c\). It follows that if \(s\geq 2M\) and \(i=M\) then there is no path of length less than \(M\) between \(a\) and \(b\) if \(\{a,b\}\neq\{v,w\}\). It follows that \(s\) is less than \(2M\), completing the proof.
Figure 1. Faces of a cellular graph in the double torus with boundary cycles of lengths 8 and 10.
In the next lemma, since \(U_{2}\backslash U_{1}^{-}\) contains no nontriangular faces, it is straightforward to apply Lemma 3.7 to obtain an independent bound on its size.
**Lemma 3.8**.: _Let \(G\) be a contraction-minimal embedded graph in \(\mathcal{G}_{\mathrm{pl}}(\mathcal{M},\alpha)\) and let \(c_{1},c_{2}\) be planar type critical walks of \(G\) with faces \(U_{1},U_{2}\) containing the same nontriangular faces of \(G\) and suppose that \(U_{1}\subset U_{2}\). Then there is a bound depending only on \(g(\mathcal{M})\) and \(\alpha\) for the number of triangular faces of \(G\) in \(U_{2}\backslash U_{1}^{-}\)._
Proof.: Let \(e\) be an edge which meets a connected component \(W\) of \(U_{2}\backslash U_{1}^{-}\). We first show that it is not possible that \(e\) has distance greater than \(6g+\alpha\) from the boundary of \(W\), where \(g\) is the genus of \(\mathcal{M}\). This distance is defined to be the length of a shortest path from a vertex of \(e\) to a vertex of the boundary of \(U_{2}\backslash U_{1}^{-}\). Since \(U_{2}\backslash U_{1}^{-}\) is triangulated by triangular faces of \(G\) the edge \(e\) cannot lie on a critical walk in this case since such a walk has length no greater than \(6g+\alpha\). Similarly \(e\) cannot lie on an essential 3-cycle. Also, since \(G\) is contraction-minimal \(e\) does not lie on a planar nonfacial 3-cycle. The contraction \(G/e\) would therefore be simple and lie in \(\mathcal{G}_{\mathrm{pl}}(\mathcal{M},\alpha)\), contradicting contraction-minimality, and so this case does not occur.
The component \(W\) is a critical patch \(P\) since its boundary walk is a concatenation of a subwalk of \(c_{1}\) and a subwalk of \(c_{2}\). By Lemma 3.7 there is an upper bound for the number of triangular faces of \(P\). Since distinct components \(W\) correspond to distinct subwalks of \(c_{1},c_{2}\) the lemma follows.
## 4. Girth conditions and contraction-minimal graphs
In this section we obtain the following finiteness theorem. As before \(\alpha\geq 6\).
**Theorem 4.1**.: _There are finitely many contraction-minimal embedded graphs in \(\mathcal{G}_{\mathrm{pl}}(\mathcal{M},\alpha)\)._
In the proof it is shown that there is a bound for the number of edges of a contraction-minimal graph \(G\) that depends only on \(\alpha\) and the genus of \(\mathcal{M}\) and is independent of the size of \(G\). Such a bound is referred to as an _independent bound_.
The first step of the proof, Lemma 4.4, shows that there is an independent bound for the number of edges meeting the face of a critical planar type walk. This is used in the proof of Lemma 4.5, which obtains an independent bound for the number of critical walks. For brevity in the narrative of this section we refer to a critical planar type walk simply as
Figure 2. Internally disjoint paths of length \(5\) in the critical patch \(P(\pi_{1},\pi_{2})\).
a critical walk. Finally, Proposition 4.9 shows that there is an independent bound for the number of essential \(3\)-cycles in \(G\). This last step makes use of Theorem 2.2 in the proof of the preliminary lemma, Lemma 4.7.
The following terminology will be useful. Let \(c_{1},c_{2}\) be a pair of critical walks in \(G\), for the open discs \(U_{1}\) and \(U_{2}\) respectively. Then \(c_{1},c_{2}\) are _equivalent_ if \(U_{1}\) and \(U_{2}\) contain the same set of nontriangular faces of \(G\). An equivalent pair \(c_{1},c_{2}\) is said to be _nonseparating_ if only one connected component of the intersection \(U_{1}\cap U_{2}\) contains the nontriangular faces common to \(U_{1}\) and \(U_{2}\). Finally, the critical walk \(c_{1}\), as well as the open set \(U_{1}\), is said to be _inclusion-maximal_ if there is no nonseparating equivalent pair \(c_{1},c\) where the open disc face \(U\) for \(c\) strictly contains \(U_{1}\).
**Lemma 4.2**.: _Let \(G\) belong to \(\mathcal{G}_{\mathrm{pl}}(\mathcal{M},\alpha)\) and let \(c_{1},c_{2}\) be nonseparating equivalent critical planar type walks in \(G\) with faces \(U_{1},U_{2}\) respectively, where \(U_{2}\backslash U_{1}\) is nonempty. Then there is a critical planar type walk \(c_{3}\) equivalent to \(c_{1}\), with \(c_{1},c_{3}\) nonseparating, and whose face strictly contains \(U_{1}\)._
Proof.: Suppose that \(c_{1},c_{2}\) have subwalks \(\pi_{1}\) and \(\pi_{2}\) that are disjoint apart from common endpoints. Also suppose that \(\pi\) is the closed walk formed by the concatenation of \(\pi_{1}\) and \(-\pi_{2}\) (or \(\pi_{2}\), according to orientation). Thus \(\pi\) forms the boundary walk of an open disc and we suppose further that it is triangulated by triangular faces of \(G\). Thus \(|\pi_{1}|=|\pi_{2}|\) since the closed walks are critical. The open disc is either a component of \(U_{2}\backslash U_{1}^{-}\) or \(U_{1}\backslash U_{2}^{-}\). In the first case we may complete the proof by taking \(c_{3}\) to be \(c_{1}\) with \(\pi_{1}\) replaced by \(-\pi_{2}\) or \(\pi_{2}\). Thus we may assume that the first case does not occur.
In the second case we can replace \(c_{2}\) by a new critical walk \(c_{2}^{\prime}\) by replacing \(\pi_{2}\) by \(\pi_{1}\) or \(-\pi_{1}\). Repeating such replacements we may assume that \(c_{1},c_{2}\) have no triangulated patches formed by such essentially disjoint subwalks \(\pi_{1},\pi_{2}\). Figure 3 gives an illustration of a pair \(c_{1},c_{2}\) with this property. Noting that \(c_{1},c_{2}\) are assumed to be equivalent, it now follows that the intersection \(U_{1}\cap U_{2}\) has a single connected component and this component has a boundary walk formed by the concatenation of subwalks of \(c_{1}\) and \(c_{2}\) between \(2\) vertices,
Figure 3. Nonseparating equivalent critical planar type walks \(c_{1},c_{2}\) for which there are no components of \(U_{1}\backslash U_{2}^{-}\) that are triangulated.
\(v,w\). Let \(\gamma_{1},\gamma_{2}\) denote these subwalks. Since the complementary walk to \(\gamma_{2}\), say \(\gamma_{2}^{\prime}\), forms a triangulated patch with \(\gamma_{1}\), we have \(|\gamma_{1}|=|\gamma_{2}^{\prime}|\) and we may replace \(\gamma_{1}\) by \(\gamma_{2}^{\prime}\) to obtain the desired critical walk \(c_{3}\).
The following corollary is immediate.
**Corollary 4.3**.: _Let \(G\) belong to \(\mathcal{G}_{\rm pl}(\mathcal{M},\alpha)\). A critical planar type walk in \(G\) is contained in a nonseparating equivalent critical planar type walk that is inclusion-maximal._
**Lemma 4.4**.: _Let \(G\) be a contraction-minimal embedded graph in \(\mathcal{G}_{\rm pl}(\mathcal{M},\alpha)\). Then there is an independent bound for the number of faces of \(G\) in the face of a critical planar type walk._
Proof.: Consider a critical walk \(c\) of \(G\) with open disc \(U\). Every edge \(e\) of \(G\) which meets \(U\) has one of the following 3 properties.
(i) \(e\) lies on an essential 3-cycle of \(G\).
(ii) \(e\) lies on a critical walk \(c^{\prime}\) with face \(U^{\prime}\) that is not contained in \(U\). In this case \(c^{\prime}\) has a subwalk \(\pi^{\prime}\) between distinct vertices \(v,w\) on the boundary of \(U\).
(iii) \(e\) fails properties (i) and (ii) and lies on a critical walk \(c^{\prime}\) with face \(U^{\prime}\) contained in \(U\).
In each case we show that there is an independent bound for the number of edges \(e\) by showing that there is an independent bound for the 3-cycles given in (i), for the subwalks given in (ii), and for the critical cycles \(c^{\prime}\) whose faces are contained in \(U\).
Considering the second property first, note that the subwalk \(\pi^{\prime}\) for an edge \(e\) with property (ii) has no self-intersections in \(U\), in the sense that each vertex of \(\pi^{\prime}\) in \(U\) belongs to exactly 2 edges of \(\pi^{\prime}\). Let \(W_{1},\ldots,W_{r}\) be the nontriangular faces of \(G\) that are contained in \(U\). It follows that the set of such subwalks \(\pi^{\prime}\), between a specific pair of vertices \(v,w\) of \(c\), is partitioned into finitely many subset classes according to the distribution of the particular sets \(W_{i}\) in the two components of the complement of \(\pi^{\prime}\) in \(U\). Moreover, if two such subwalks \(\pi^{\prime},\pi^{\prime\prime}\) between \(v\) and \(w\) belong to the same partition class, with \(c^{\prime}\) and \(c^{\prime\prime}\) equivalent critical walks, then \(|\pi^{\prime}|=|\pi^{\prime\prime}|\). This follows since the regions between \(\pi^{\prime}\) and \(\pi^{\prime\prime}\) are triangulated. It follows from Lemma 3.7 that there is an independent bound for the number of subwalks in the same partition class. Since there is an independent bound for the number of pairs \(v,w\), it follows that there is an independent bound for the number of subwalks \(\pi^{\prime}\) that arise in (ii).
If an edge \(e\) has property (i) then it must lie on a 3-cycle with 2 or 3 edges that meet \(U\) and which determine a subpath of 2 or 3 edges between vertices \(v,w\) of \(c\). The argument of the previous paragraph applies and so there is an independent bound for such edges.
In the final case we suppose that \(c^{\prime},d^{\prime}\) are equivalent critical cycles with faces \(U(c^{\prime}),U(d^{\prime})\) that are subsets of \(U\). In fact we shall initially view \(c^{\prime}\) as fixed and \(d^{\prime}\) as variable. The argument is similar in spirit to that for case (ii) above, this time with Lemma 3.7 ensuring that there is a bound for equivalent critical cycles with the same partitioning of the set of \(W_{i}\). This partitioning is referred to below as an _enclosure pattern_.
The intersection \(U(c^{\prime})\cap U(d^{\prime})\) has connected components \(X_{i}\) and the complement of the closure of the union \(U(c^{\prime})\cup U(d^{\prime})\) has components \(Y_{j}\). Also, the number of these components is bounded by the length of \(c^{\prime}\). The nontriangular faces of \(G\) are distributed in these components in a particular manner, as illustrated in Figure 4. This distribution may be recorded in terms of the sequence of common vertices of \(c^{\prime},d^{\prime}\), up to cyclic order
in \(c^{\prime}\), together with the sequence of associated subsets of nontriangular faces that they enclose. We refer to this as an enclosure pattern of \(d^{\prime}\) with respect to \(c^{\prime}\). In Figure 4 the enclosure pattern is the pair
\[(v_{1},v_{2},\ldots,v_{6},v_{1}),\quad(\{W_{1}\},\{W_{2},W_{3}\},\{W_{4}\}, \emptyset,\emptyset,\emptyset).\]
Note now that if for another equivalent pair \(c^{\prime},b^{\prime}\) the enclosure pattern of \(b^{\prime}\) is equal to that of \(d^{\prime}\) (up to cyclic permutation) then the symmetric difference \(U(c^{\prime})\triangle U(b^{\prime})\) is nonempty and has connected components whose closures are triangulated. It follows from Lemma 3.8 that there is an independent bound for the set of such critical cycles \(b^{\prime}\) with the given enclosure pattern.
The set of enclosure patterns is finite and so we have shown that the number of distinct critical cycles \(b^{\prime}\) equivalent to \(c^{\prime}\) has an independent bound. Reversing the roles of \(c^{\prime}\) and \(d^{\prime}\) it follows that the set of critical cycles whose equivalence class is not a singleton has an independent bound.
It suffices then to see that there is an independent bound for the number of critical cycles \(c^{\prime}\), with face \(U(c^{\prime})\subset U\), for which there is no other equivalent critical cycle \(b^{\prime}\) with face \(U(b^{\prime})\subset U\). This follows since there are finitely many nontriangular faces of \(G\).
Figure 4. The equivalent critical walks \(c^{\prime},d^{\prime}\) determine subwalks of \(d^{\prime}\) between consecutive vertices in the cycle \((v_{1},v_{2},\ldots,v_{6},v_{1})\) that enclose nontriangular faces of \(G\) (the shaded circles) with corresponding sequence \((\{W_{1}\},\{W_{2},W_{3}\},\{W_{4}\},\emptyset,\emptyset,\emptyset)\).

In the next proof we use an exhaustion argument to find an independent bound for a finite set \(\mathcal{C}(G)\) of critical walks in a contraction-minimal graph \(G\) that have a particular property. A partition of \(\mathcal{C}(G)\) is given into subsets \(\mathcal{C}_{1}(G),\ldots,\mathcal{C}_{r}(G)\) where \(r\) has an independent bound and \(\mathcal{C}_{1}(G)\) has maximal size amongst these subsets. This maximal size set is also denoted as \(\mathcal{C}_{1,1}(G)\). A similar partition, also with an independent bound, can be made for \(\mathcal{C}_{1,1}(G)\), giving a maximal size subset \(\mathcal{C}_{1,2}(G)\). Repeating this process gives a finite sequence
\[\mathcal{C}_{1,1}(G),\mathcal{C}_{1,2}(G),\dots,\mathcal{C}_{1,N(G)}(G)\]
that terminates in a singleton set. Moreover it is known that \(N(G)\) has an independent bound. (In the proof this is the case because of the overlapping property indicated there.) It follows that the cardinality of the original set \(\mathcal{C}(G)\) has an independent bound.
**Lemma 4.5**.: _Let \(G\) be a contraction-minimal graph in \(\mathcal{G}_{\rm pl}(\mathcal{M},\alpha)\). Then there is an independent bound for the number of critical planar type walks in \(G\)._
Proof.: Consider first the subset of such embedded graphs that have a single nontriangular face. If \(G\) is of this type then all critical walks are equivalent and by Corollary 4.3 there is a maximal critical walk that contains all other critical walks. By Lemma 4.4 there is an independent bound for the cardinality of this subset.
Suppose next that \(G\) has more than one nontriangular face and consider first a finite sequence \(\mathcal{C}(G)\) of distinct critical walks \(c_{1},c_{2},\dots\) in \(G\) with the following two properties.
(i) The closed walks \(c_{k}\) are in the same equivalence class for a specific pair of nontriangular faces, say \(W_{1},W_{2}\).
(ii) Each \(c_{k}\) is inclusion maximal.
It follows from (ii) that the closed walks \(c_{k}\) are pairwise separating. Considering \(c_{1}\) as a base walk, each \(c_{k}\), for \(k\geq 2\), has \(2\) subwalks that, together with a subwalk of \(c_{1}\), enclose \(W_{1}\) and \(W_{2}\). (This is illustrated in Figure 5.) Since there is an independent bound for the interior edges of the critical walk \(c_{1}\), by Lemma 4.4, there is also an independent bound on the possibilities for the enclosing subwalks of the \(c_{k}\), for \(k\geq 2\). These possibilities partition the set \(\mathcal{C}(G)\) into subsets, say \(\mathcal{C}_{1},\dots,\mathcal{C}_{r}\). Suppose moreover that \(\mathcal{C}_{1}\), which we also denote as \(\mathcal{C}_{1,1}\), has maximal cardinality, and relabel its critical walks as \(c_{2},c_{3},\dots\). Thus, the enclosing subwalks for the pair \(c_{1},c_{k}\), for \(k\geq 2\), are the same. This is illustrated, for \(k=2,3\), in Figure 5.
Let \(U_{k}\) be the face of \(c_{k}\) and \(F(U_{k})\) the set of faces of \(G\) that are subsets of \(U_{k}\). By the inclusion-maximality of the \(c_{k}\) it follows from Lemma 4.2 that \(F(U_{i})\backslash F(U_{j})\neq\emptyset\), for all \(i\neq j\). Also, since the faces \(U_{i}\) are of planar type it follows that outside \(U_{1}\) the sets \(U_{i}\) and \(U_{j}\) overlap in the sense that the intersection of \(U_{i}\backslash U_{1}\) and \(U_{j}\backslash U_{1}\) is nontrivial. Thus we have the following _overlapping property_, illustrated in Figure 5.
\[(F(U_{i})\backslash F(U_{1}))\cap(F(U_{2})\backslash F(U_{1}))\neq\emptyset, \quad\text{for all}\quad i\geq 3.\]
By Lemma 4.4 the number of subsets of \(F(U_{2})\backslash F(U_{1})\) has an independent bound. Partitioning by these subsets we obtain a maximal set \(\mathcal{C}_{1,2}\) with relabelled members such that the sets \(F(U_{k})\backslash F(U_{2})\) agree for all \(k\geq 3\).
We can repeat this partitioning process to obtain the finite sequence \(\mathcal{C}_{1,1}(G)\), \(\mathcal{C}_{1,2}(G)\), \(\dots\), \(\mathcal{C}_{1,N}(G)\). Because the number of common triangular faces within the faces of the walks in \(\mathcal{C}_{1,k}(G)\) decreases with \(k\) it follows from Lemma 4.4 that \(N\) has an independent bound. It follows from this that \(\mathcal{C}(G)\), the set of critical walks in \(G\) in the equivalence class for \(W_{1},W_{2}\), has an independent bound. The case for the other equivalence classes, of which there is an independent bound, is entirely similar and the lemma follows.
For \(G\) in \(\mathcal{G}_{\mathrm{pl}}(\mathcal{M},\alpha)\) let \(\mathcal{M}(G)\) be the connected compact topological space given by deleting the nontriangular faces of \(G\) from \(\mathcal{M}\). In particular, \(\mathcal{M}(G)\) need not be a surface with boundary in the classical sense that the boundary consists of finitely many disjoint simple loops.
**Lemma 4.6**.: _Let \(\mathcal{L}\) be a set of disjoint loops in \(\mathcal{M}(G)\), based at \(v\), with homotopy classes \([\gamma]_{\mathcal{M}(G)}\). Then the natural map \(\theta:[\gamma]_{\mathcal{M}(G)}\to[\gamma]_{\mathcal{M}}\) has a restriction to \(\mathcal{L}\) that is finite-to-one._
Proof.: Let \(r\) be the number of open discs in \(\mathcal{M}\) whose deletion gives \(\mathcal{M}(G)\). Suppose \(r=1\) with open disc \(U\) and that \(\gamma_{1},\gamma_{2}\) are loops of \(\mathcal{L}\) that are homotopic in \(\mathcal{M}\). Then they form the boundary of a pinched annulus in \(\mathcal{M}\). If the loops are not homotopic in \(\mathcal{M}(G)\) then \(U\) is contained in this annulus. If \(\gamma_{3}\) is homotopic to \(\gamma_{1}\) and \(\gamma_{2}\) in \(\mathcal{M}\) it follows that it is homotopic in \(\mathcal{M}(G)\) to either \(\gamma_{1}\) or \(\gamma_{2}\). The general case follows similarly by induction.
**Lemma 4.7**.: _Let \(\mathcal{E}\) be a family of disjoint loops based at \(v\) that are pairwise nonhomotopic in \(\mathcal{M}(G)\). Then there is an independent bound for \(\mathcal{E}\)._
Proof.: This is immediate from Theorem 2.2 and Lemma 4.6.
Suppose that \(\mathcal{E}_{3}\) is a family of essential \(3\)-cycles of \(G\) which are disjoint apart from a common base vertex \(v\) that does not belong to the boundary of \(\mathcal{M}(G)\). Then \(v\) has incident triangular faces that form a triangulated disc in \(\mathcal{M}(G)\). Contracting this closed disc to the vertex \(v\) gives a topological space, \(\mathcal{M}(G)_{v}\) say, and a family \(\tilde{\mathcal{E}}_{3}\) of loops in \(\mathcal{M}(G)_{v}\) which are disjoint except for a common base point, \(v^{\prime}\) say. The space \(\mathcal{M}(G)_{v}\) is evidently homeomorphic to \(\mathcal{M}(G)\) if the distance of \(v\) to the boundary of \(\mathcal{M}(G)\) is at least \(2\). In general \(\mathcal{M}(G)_{v}\) need not be homeomorphic to \(\mathcal{M}(G)\) but it is also obtained from \(\mathcal{M}\) by the deletion of a finite family of disjoint open discs and Lemma 4.7 holds, with the same proof, with \(\mathcal{M}(G)_{v}\) in place of \(\mathcal{M}(G)\).

Figure 5. The subwalks of \(c_{2},c_{3}\) in the closure of \(U_{1}\) coincide. The interiors of \(U_{2}\backslash U_{1}\) and \(U_{3}\backslash U_{1}\) overlap.
**Lemma 4.8**.: _Let \(G\), \(\mathcal{E}_{3}\) and \(\tilde{\mathcal{E}_{3}}\) be as above and suppose that \(G\) is contraction-minimal. Then there is an independent bound for the size of \(\mathcal{E}_{3}\)._
Proof.: By the critical patch lemma, Lemma 3.7, it will be enough to show that there is an independent bound for any subset, \(\mathcal{S}_{3}\) say, of \(\mathcal{E}_{3}\) consisting of pairwise nonhomotopic \(3\)-cycles. In this case the corresponding set of loops \(\tilde{\mathcal{S}_{3}}\) are pairwise nonhomotopic and so Lemma 4.7 completes the proof.
The proof of the next proposition follows the proof scheme in Section 2 of the Barnette-Edelson Theorem.
**Proposition 4.9**.: _There is an independent bound for the number of essential 3-cycles in a contraction-minimal graph in \(\mathcal{G}_{\mathrm{pl}}(\mathcal{M},\alpha)\)._
Proof.: Assume that \(\mathcal{M}\) has genus \(g\) and is not the sphere or the projective plane and that \(G\) is a contraction minimal graph in \(\mathcal{G}_{\mathrm{pl}}(\mathcal{M},\alpha)\). By Lemma 4.5 there is an independent bound for the number of vertices that lie on a critical walk. In particular we may assume that there exists an essential 3-cycle \(c\) and that no vertex of \(c\) lies on a boundary of \(\mathcal{M}(G)\). For the induction step of the proof assume that for all compact 2-manifolds \(\mathcal{M}^{\prime}\) of genus less than \(g\) there is an independent bound for the number of vertices of a contraction-minimal embedded graph in \(\mathcal{G}_{\mathrm{pl}}(\mathcal{M}^{\prime},\alpha)\).
As in the proof of Theorem 1.1, cut and cap \(\mathcal{M}\) by \(c\) to create the surface \(\mathcal{M}_{1}\) (or pair \(\mathcal{M}_{1},\mathcal{M}_{2}\)) and the embedded graph \(G_{1}\) (or pair \(G_{1},G_{2}\)). Suppose that there is a single connected surface \(\mathcal{M}_{1}\) with embedded graph \(G_{1}\) and that \(G\) is orientable. Thus the freedom number \(f(G)\) of \(G\) is \(\alpha\) and so \(f(G_{1})=\alpha+6\). The embedded graph \(G_{1}\) in \(\mathcal{M}_{1}\) is cellular and it satisfies the planar girth inequalities since any critical walk \(d\) for \(G_{1}\) with open disc face \(U\) containing nontriangular faces of \(G_{1}\) is also a critical walk for \(G\) for the same nontriangular faces as they appear in \(G\). Thus \(G_{1}\) is in \(\mathcal{G}_{\mathrm{pl}}(\mathcal{M}_{1},\alpha+6)\).
If \(e\) is an edge of \(G_{1}\) that is not an edge of the two capping 3-cycles and if \(e\) is contractible, with \(G_{1}/e\) in \(\mathcal{G}_{\mathrm{pl}}(\mathcal{M}_{1},\alpha+6)\), then, as an edge of the contraction minimal graph \(G\) it must lie on an essential 3-cycle or critical walk that passes through a vertex of \(c\). The number of such critical walks has an independent bound. To complete the proof of the proposition in this case it suffices to show that the set \(\mathcal{E}\) of essential 3-cycles of \(G\) passing through a vertex of \(c\) has an independent bound. This follows from Lemma 4.8. The nonorientable case follows in the same way.
We may now complete the proof of the main theorem, Theorem 4.1, that there are finitely many contraction-minimal embedded graphs \(G\), with \(f(G)=\alpha\geq 6\), that satisfy the planar type girth inequalities. Indeed, there is an independent bound for the number of edges that are boundary edges for a nontriangular face, and every other edge of \(G\) lies on an essential 3-cycle or a critical planar type walk and Lemma 4.5 and Proposition 4.9 provide independent bounds for these.
## 5. Higher genus girth inequalities and (3,6)-tight graphs
In this section we show that a cellular graph \(G\) in a (connected) compact 2-manifold \(\mathcal{M}\) is (3,6)-tight if and only if it satisfies the Maxwell count \(3v-e=6\) and certain higher genus girth inequalities. We conjecture that for any 2-manifold \(\mathcal{M}\) there are finitely many contraction-irreducible cellular \((3,6)\)-tight embedded graphs. It seems likely that this can be obtained, for example, from general genus versions of the arguments in the previous section.
Let \(G\) be an embedded simple graph in \(\mathcal{M}\). Define a _superface_ of \(G\) to be the face of a subgraph \(K\) of \(G\). Write \(G_{U}\) for the subgraph of \(G\) containing the vertices and edges of \(G\) that lie in the closure \(U^{-}\). With mild notational abuse write \(G_{U}^{c}\) for the complementary graph \(G_{U^{c}}\), where \(U^{c}=\mathcal{M}\backslash U^{-}\). This is the subgraph determined by the edges of \(G\) that are disjoint from \(U\). In particular, \(G=G_{U}\cup G_{U}^{c}\) and \(G_{U}\cap G_{U}^{c}\) is the graph \(\partial U\) whose edges are the boundary edges of \(U\). If \(U\) is the only face of \(K\) then \(K\) is \(\partial U\) and \(G_{U}=G,G_{U}^{c}=\partial U\).
**Definition 5.1**.: An \(\mathcal{M}\)-embedded simple graph \(G\) with \(f(G)=6\) satisfies the _superface sparsity inequalities_ if \(f(G_{U})\geq 6\) for all superfaces \(U\) for which \(\partial U\) has at least 3 vertices.
If \(G\) has only one face then every subgraph also has one face and the property of the previous definition is equivalent to the property of \((3,6)\)-tightness. On the other hand if \(G\) is cellular then the set of subgraphs \(G_{U}\) provide a proper subcollection of the set of all subgraphs of \(G\).
The following theorem is due to Shakir [18] and it gives an insight into the \((3,6)\)-tightness of \(G\). In fact it is the "no blocks case" of a more general equivalence for the higher genus analogues of the block and hole graphs described in the introduction.
**Theorem 5.2**.: _Let \(G\) be an embedded graph in the connected compact 2-manifold \(\mathcal{M}\). Then the following are equivalent._
_(i) \(G\) is (3,6)-tight._
_(ii) \(G\) satisfies the superface sparsity inequalities and \(f(G)=6\)._
Proof.: That (i) implies (ii) is immediate. Suppose that (ii) holds and, by way of contradiction, let \(K\) be a maximal subgraph of \(G\) with the properties that \(f(K)\leq 5\) and \(K\) has at least 3 vertices. Let \(U\) be a face of \(K\). Since \(f(G)=f(G_{U})+f(G_{U}^{c})-f(\partial U)\) and \(f(G)=6\) the condition \(f(G_{U})\geq 6\) is equivalent to \(f(G_{U}^{c})-f(\partial U)\leq 0\). Thus, since \(K\cap G_{U}=\partial U\) we have
\[f(K\cup G_{U})=f(K)+f(G_{U}^{c})-f(K\cap G_{U})\leq f(K).\]
Thus \(K=K\cup G_{U}\) by the maximality of \(K\). Since this is true for all faces \(U\) of \(K\) it follows that \(K=G\), a contradiction.
While \(\mathcal{G}_{\rm pl}(\mathcal{M},6)\) contains the family \(\mathcal{F}(\mathcal{M},6)\) of \((3,6)\)-tight embedded graphs in \(\mathcal{M}\) the families are distinct for higher genus manifolds, as we noted in Section 3. In fact, by considering girth constraints for the boundary walks of faces with limited genus we can specify intermediate families between \(\mathcal{G}_{\rm pl}(\mathcal{M},6)\) and \(\mathcal{F}(\mathcal{M},6)\). We now examine the connection between the superface sparsity inequalities and higher genus girth inequalities.
Let \(\mathcal{M},G,K,U\) be as above with the simplification that \(K\) is the boundary graph \(\partial U\) of \(U\) and let \(W\) be the complement of the closure of \(U\). Then \(\mathcal{M}\) is the union of \(U\), \(K\) and \(W\).
The genus \(g_{r}(\mathcal{M})\) in the addition formula of the next lemma is defined as \(g_{r}(\mathcal{M})=g(\mathcal{M})\), if \(\mathcal{M}\) is oriented, and \(g_{r}(\mathcal{M})=\frac{1}{2}g(\mathcal{M})\) otherwise, and is known as the _reduced genus_ of \(\mathcal{M}\). The reduced genus \(g_{r}(U)\) is similarly defined. We also remark that \(g_{r}(\mathcal{M})\) is related to the Euler genus \(g_{\mathrm{eul}}(\mathcal{M})\) by the formula \(g_{\mathrm{eul}}(\mathcal{M})=2g_{r}(\mathcal{M})\).
When \(\partial U\) is a union of pairwise disjoint cycles the closure \(U^{-}\) of \(U\) in \(\mathcal{M}\) is a connected compact surface with boundary, say \(\mathcal{N}_{U}\), with \(s>0\) disjoint boundary cycles. The genus \(g(\mathcal{N}_{U})\) may be defined as \(g(U)\). Also this genus may be defined in terms of any triangulation of \(\mathcal{N}_{U}\) by means of the Euler equation
\[v-e+f=2-\mu g(\mathcal{N}_{U})-s\]
where \(\mu=2\) (resp. \(\mu=1\)) if \(\mathcal{N}_{U}\) is oriented (resp. not oriented). Thus, in the oriented case \(g(\mathcal{N}_{U})=g(U)=1-\frac{1}{2}(\chi+s)\) where \(\chi=v-e+f\).
**Lemma 5.3**.: _If the set of boundary walks of \(U\) and \(W\) coincide and consist of \(s\) cycles then_
\[g_{r}(\mathcal{M})=g_{r}(U)+g_{r}(W)+(s-1).\]
Proof.: The hypotheses imply that \(\partial U=\partial W\) and that \(\mathcal{M}\) is the join of \(\mathcal{N}_{U}\) and \(\mathcal{N}_{W}\) over their common boundary cycles. The addition formula follows from direct calculation of the genus values from a triangulation of \(\mathcal{M}\) that includes the edges of \(\partial U\).
With the notation of Lemma 2.1 we have \(\mu g(\mathcal{M})=2g_{r}(\mathcal{M})\), for both cases of \(\mu\), and so Lemma 3.1 gives the following general formula for a cellular embedded graph \(G\) in a connected compact 2-manifold \(\mathcal{M}\):
\[\sum_{k}(k-3)f_{k}=6g_{r}(\mathcal{M})+f(G)-6.\]
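For instance, for a cellular embedded graph with \(f(G)=6\) this formula reduces to \(\sum_{k}(k-3)f_{k}=6g_{r}(\mathcal{M})\), so that

\[\sum_{k}(k-3)f_{k}=6\ \text{for the torus}\quad\text{and}\quad\sum_{k}(k-3)f_{k}=3\ \text{for the projective plane},\]

since \(g_{r}(\mathcal{M})=1\) and \(g_{r}(\mathcal{M})=\frac{1}{2}\) respectively. These are the two evaluations used in the proof of Theorem 5.10 below.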
With \(G,U,W\) as above let \(W_{i}\) be a connected component of \(W\) with boundary consisting of disjoint cycles \(d_{1},\dots,d_{t}\). Then we may consider the compact connected 2-manifold, \(\mathcal{M}_{W_{i}}\) say, obtained by attaching discs to these cycles. Viewing \(G_{W_{i}}\) as an embedded graph in \(\mathcal{M}_{W_{i}}\) the previous formula gives
\[\sum_{k=1}^{t}(|d_{k}|-3)+\sum_{k\in I(W_{i})}(|c_{k}|-3)=6g_{r}(\mathcal{M}_ {W_{i}})+f(G_{W_{i}})-6\]
where \(I(W_{i})\) is the set of indices for the nontriangular faces that lie in \(W_{i}\). As in Section 3, \(c_{k}\) is a boundary walk of a face of the cellular graph \(G\). We refer to this equation as the _face walk equation_ for \(G_{W_{i}}\). In particular, if \(W\) is connected and \(\partial W\) is a union of disjoint cycles then, recalling that \(g_{r}(\mathcal{M}_{W})=g_{r}(W)\), the face walk equation for \(W\) is
\[\sum_{k=1}^{t}(|d_{k}|-3)+\sum_{k\in I(W)}(|c_{k}|-3)=6g_{r}(W)+f(G_{W})-6.\]
**Lemma 5.4**.: _Let \(G\) be a cellular \(\mathcal{M}\)-embedded graph with \(f(G)=6\) and let \(U\) be a superface of \(G\) for which \(\partial U\) is the union of \(s\) disjoint cycles and for which the open set \(W=\mathcal{M}\backslash U^{-}\) is connected and \(\partial W=\partial U\). Then the following are equivalent._
1. \(\quad f(G_{W})\geq 6\)_._
2. \[\sum_{k=1}^{s}(|d_{k}|-3)\geq\sum_{k\in I(U)}(|c_{k}|-3)-6(g_{r}(U)+s-1).\]
_where \(d_{1},\ldots,d_{s}\) are the common boundary cycles of \(U\) and \(W\)._
Proof.: We have \(f(G)=6\) and so Lemma 3.1 gives
\[\sum_{k\in I(U)}(|c_{k}|-3)+\sum_{k\in I(W)}(|c_{k}|-3)=6g_{r}(\mathcal{M}).\]
Subtracting this from the face walk equation for \(G_{W}\) gives
\[\sum_{k=1}^{s}(|d_{k}|-3)-\sum_{k\in I(U)}(|c_{k}|-3)=6(g_{r}(W)-g_{r}( \mathcal{M}))+f(G_{W})-6.\]
By the addition formula we have
\[g_{r}(W)-g_{r}(\mathcal{M})=-(g_{r}(U)+(s-1))\]
and so (ii) holds if and only if (i) holds.
Consider now the case for which \(W\) is connected but the boundary walks of \(U\) and \(W\) are not necessarily disjoint cycles. Perform vertex-splitting operations on the vertices of each boundary walk \(d_{k}\) of \(\partial U\), for \(1\leq k\leq s\), to create the embedded graph \(G^{\prime}\), containing \(G\), with a superface \(U^{\prime}\) that is interior to \(U\) and whose boundary consists of cycles \(d^{\prime}_{k}\) of length \(|d_{k}|\), for \(1\leq k\leq s\). The boundary of \(U^{\prime}\) consists of \(s\) disjoint cycles and \(g_{r}(U^{\prime})=g_{r}(U)\). Let \(W^{\prime}\) be the complement of the closure of \(U^{\prime}\) and define \(g_{r}^{+}(W)\) to be \(g_{r}(W^{\prime})\). Since \(\partial U^{\prime}=\partial W^{\prime}\), the addition formula of Lemma 5.3 gives
\[g_{r}(\mathcal{M})=g_{r}(U)+g_{r}^{+}(W)+s-1.\]
Noting that \(f(G_{W^{\prime}})=f(G_{W})\), Lemma 5.4 now gives the following more general lemma.
**Lemma 5.5**.: _Let \(G\) be a cellular \(\mathcal{M}\)-embedded graph with \(f(G)=6\) and let \(U\) be a superface of \(G\) with boundary walks \(d_{1},\ldots,d_{s}\) and for which the open set \(W=\mathcal{M}\backslash U^{-}\) is connected. Then the following are equivalent._
1. \(\quad f(G_{W})\geq 6\)_._
2. \[\sum_{k=1}^{s}(|d_{k}|-3)\geq\sum_{k\in I(U)}(|c_{k}|-3)-6(g_{r}(U)+s-1).\]
The following terminology for this lemma will be convenient.
**Definition 5.6**.: (i) A superface \(U\) of \(G\) is a _balanced superface_ if \(W=\mathcal{M}\backslash U^{-}\) is connected.
(ii) The _girth inequality_ (or _higher genus girth inequality_) for a balanced superface \(U\) is the inequality of Lemma 5.5(ii).
Note that if \(s=1\) and the genus of \(U\) is zero then the girth inequality coincides with the girth inequality for a planar type closed walk as given in Section 3. Note also that, as in the example below, the open sets \(U\) and \(W\) for the balanced superface \(U\) need not have the same boundary.
**Example 5.7**.: Consider a partially triangulated torus associated with an embedded graph \(G\) as in Figure 6. Note that irrespective of the triangulation there is a violation of the planar type girth inequalities since \(|d_{1}|-3\) is less than \(|c_{1}|-3\). In fact a partial triangulation may be constructed so that the boundary walk for the superface \(U\) is the only planar type closed walk of length \(8\) whose face contains the face \(X\) of \(G\). This illustrates the general point that it can be necessary to consider closed walks other than cycles to determine whether girth inequalities hold.
We can use this example to illustrate the previous lemma. Vertex-splitting moves on all the vertices of the boundary walk of \(U\) create the embedded graph \(G^{\prime}\supset G\) with superface \(U^{\prime}\subset U\). The superfaces \(U^{\prime}\) and \(W^{\prime}=\mathcal{M}\backslash(U^{\prime})^{-}\) have coinciding boundaries consisting of a single cycle, \(d^{\prime}_{1}\) say, where \(|d^{\prime}_{1}|=|d_{1}|\). The (higher genus) girth inequality for \(U\) is
\[|d_{1}|-3\geq|c_{1}|-3-6(g_{r}(U)+1-1).\]
Since \(g_{r}(U)=g_{r}(U^{\prime})=0\) the inequality is false, as \(|d_{1}|-3=5\) while \(|c_{1}|-3=6\), and Lemma 5.5 implies that \(f(G_{W})<6\).
**Lemma 5.8**.: _Let \(G\) be a cellular \(\mathcal{M}\)-embedded graph with \(f(G)=6\) and suppose that \(f(G_{W})\geq 6\) for all balanced superfaces of \(G\). Then \(f(G_{U})\geq 6\) for each superface \(U\) of \(G\)._
Proof.: By considering \(U^{\prime}\) obtained by vertex splitting on the boundary \(\partial U\), as in the discussion preceding Lemma 5.5, we may assume that the boundary of \(U\) consists of disjoint cycles. Let \(W_{1},\ldots,W_{\kappa}\) be the connected components of the complement of \(U^{-}\) and let
\[U^{\prime}_{i}=U\cup\bigcup_{j\neq i}W_{j}.\]
Then \(U^{\prime}_{i}\) is a balanced superface and \(f(G_{U^{\prime}_{i}})\geq 6\) by assumption.
Figure 6. A partially triangulated torus with embedded graph \(G\) with \(f(G)=6\) and a single nontriangular face \(X\) that is \(9\)-sided. The superface \(U\) of \(G\) has a boundary walk \(d_{1}\) of planar type with length \(|d_{1}|=8\).
Let \(a=f(G_{U})\) and \(a_{i}=f(G_{W_{i}})-f(\partial W_{i})\), for \(1\leq i\leq\kappa\). Then
\[a+\sum_{i=1}^{\kappa}a_{i}=f(G)=6\quad\text{and}\quad a+\sum_{j\neq i}a_{j}=f(G_{U^{\prime}_{i}})\geq 6.\]
Adding the inequalities and substituting \(6-a\) for the sum \(\sum_{j}a_{j}\) gives \(a\geq 6\) as desired.
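Explicitly, summing the \(\kappa\) inequalities over \(i\) gives

\[\kappa a+(\kappa-1)\sum_{j=1}^{\kappa}a_{j}\geq 6\kappa,\]

and substituting \(\sum_{j}a_{j}=6-a\) yields \(\kappa a+(\kappa-1)(6-a)=a+6\kappa-6\geq 6\kappa\), that is, \(a\geq 6\).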
Let \(\gamma\) be the genus of \(\mathcal{M}\) and define the families
\[\mathcal{G}_{0}(\mathcal{M},6)\supseteq\mathcal{G}_{1}(\mathcal{M},6)\supseteq \cdots\supseteq\mathcal{G}_{\gamma}(\mathcal{M},6)\]
where \(\mathcal{G}_{k}(\mathcal{M},6)\) consists of the \(\mathcal{M}\)-embedded cellular graphs with \(f(G)=6\) that satisfy the girth inequalities for balanced superfaces \(U\) with \(g(U)\leq k.\) The following theorem is now immediate from Theorem 5.2 and Lemmas 5.5, 5.8.
**Theorem 5.9**.: _Let \(\mathcal{M}\) be a connected compact 2-manifold with genus \(\gamma\). Then \(\mathcal{G}_{\gamma}(\mathcal{M},6)\) is equal to \(\mathcal{F}(\mathcal{M},6)\), the family of cellular embedded \((3,6)\)-tight graphs in \(\mathcal{M}\)._
**Theorem 5.10**.: _For each connected compact 2-manifold \(\mathcal{M}\) the family \(\mathcal{G}_{0}(\mathcal{M},6)\) coincides with \(\mathcal{G}_{\rm pl}(\mathcal{M},6)\). Also, if \(\mathcal{M}\) is the torus or the projective plane then_

\[\mathcal{G}_{0}(\mathcal{M},6)=\mathcal{G}_{\rm pl}(\mathcal{M},6)=\mathcal{G}_{1}(\mathcal{M},6)=\mathcal{F}(\mathcal{M},6).\]
Proof.: Let \(G\) belong to \(\mathcal{G}_{\rm pl}(\mathcal{M},6)\) and let \(U\) be a balanced superface of \(G\) with boundary walks \(d_{1},\ldots,d_{s}\) and \(g_{r}(U)=0\). Once again, considering the superface \(U^{\prime}\) of \(G^{\prime}\) obtained by vertex splitting of boundary vertices, the boundary walks of \(U^{\prime}\) are disjoint cycles \(d^{\prime}_{1},\ldots,d^{\prime}_{s}\) and \(g_{r}(U^{\prime})=0\). It follows that \(s=1\) and \(U^{\prime}\) is a disc and so \(d^{\prime}_{1}\), and therefore also \(d_{1}\), is a planar type closed walk. Since \(G^{\prime}\) is in \(\mathcal{G}_{\rm pl}(\mathcal{M},6)\) the girth inequality holds for \(d^{\prime}_{1}\) and hence for \(d_{1}\). Since \(g_{r}(U)+s-1=0\) the (higher genus) girth inequality holds for \(U\). Thus \(G\) is in \(\mathcal{G}_{0}(\mathcal{M},6)\).
Let \(G\) belong to \(\mathcal{G}_{0}(\mathcal{M},6)\) and let \(d_{1}\) be a planar type walk in \(G\) with superface \(U\). If \(d_{1}\) is a cycle then \(U\) is a balanced superface and so the general girth inequality holds for \(U\). Since \(g_{r}(U)+s-1=0\) it follows that the girth inequality holds for \(d_{1}\) as a planar type closed walk. By the vertex-splitting argument the same conclusion holds when \(d_{1}\) is a general planar type walk and so \(G\) belongs to \(\mathcal{G}_{\rm pl}(\mathcal{M},6)\).
Let \(\mathcal{M}\) be the torus or the projective plane. In view of Theorem 5.9 it remains to show that \(\mathcal{G}_{0}(\mathcal{M},6)\) is contained in \(\mathcal{G}_{1}(\mathcal{M},6)\). Let \(\mathcal{M}\) be the torus with \(G\) in \(\mathcal{G}_{0}(\mathcal{M},6)\) with balanced superface \(U\). Suppose first that \(g_{r}(U)=1\). The desired girth inequality for \(U\) takes the form
\[\sum_{k}(|d_{k}|-3)\geq\sum_{k\in I(U)}(|c_{k}|-3)-6s.\]
where \(d_{1},\ldots,d_{s}\) are the boundary walks of \(U\). However, this is evident since \(s\geq 1\) and for the torus, as noted in Section 3, we have \(\sum_{k}(|c_{k}|-3)=6\), since \(f(G)=6\). On the other hand, if \(g_{r}(U)=0\) then the planar type girth inequality holds for the boundary walk of \(U\) since \(G\) is in \(\mathcal{G}_{0}(\mathcal{M},6)\).
If \(\mathcal{M}\) is the projective plane then a superface \(U\) of \(G\) is either oriented, with \(g_{r}(U)=g(U)=0\) or \(1\), or is not oriented in which case \(g(U)=1\) and \(g_{r}(U)=1/2\). In the latter case the desired inequality takes the form
\[\sum_{k}(|d_{k}|-3)\geq\sum_{k\in I(U)}(|c_{k}|-3)-3s\]
and since \(\sum_{k}(|c_{k}|-3)=3\) this inequality holds. Similarly the (less strict) inequality holds in the orientable case with \(g_{r}(U)=1\), and it follows, as before, that \(G\) is in \(\mathcal{G}_{1}(\mathcal{M},6)\).
We remark that the equality between \(\mathcal{G}_{0}(\mathcal{M},6)\) and \(\mathcal{G}_{1}(\mathcal{M},6)\), and hence \(\mathcal{F}(\mathcal{M},6)\), does not hold for the Klein bottle. An illustration of this is suggested by Figure 7, which is reminiscent of the double torus example of Figure 1. An embedded graph \(G\) with \(f(G)=6\) and the properties indicated does not satisfy the girth inequality for the superface \(U\) since \(g_{r}(U)=1/2\) and the boundary cycle \(d_{1}\) of \(U\) has length \(3\).
### Identifying contraction-minimal graphs
From the previous theorem and the main result of Section 4 it follows that there are finitely many contraction-minimal \((3,6)\)-tight cellular graphs for the torus. In [7] it was shown that there are \(2\) such graphs that have a single nontriangular face of size \(9\), i.e., with a boundary walk of length \(9\), and these are shown in Figure 8.
There are \(11\) possibilities for the set of ordered lists of nontriangular face sizes, namely \(\{4,4,4,4,4,4\}\), \(\{5,4,4,4,4\}\), \(\{5,5,4,4\}\), \(\{5,5,5\}\), \(\{6,4,4,4\}\), \(\{6,5,4\}\), \(\{6,6\}\), \(\{7,4,4\}\), \(\{7,5\}\), \(\{8,4\}\) and \(\{9\}\). Simple exploration shows that there are more than \(35\) minimal graphs in the first case, and this suggests that there are probably more than \(100\) minimal graphs. Figure 9 shows \(4\) contrasting minimal graphs with \(6\) quadrilateral faces.
For the projective plane it was shown by direct methods, in Kastis and Power [13], that there are \(8\) contraction-minimal (3,6)-tight cellular graphs and these are shown below, where diametrically opposite vertices are identified.
Figure 8. The minimal (3,6)-tight cellular graphs for the torus with a face of size \(9\).
Figure 7. Faces of size \(7\) and \(5\) for a graph in the Klein bottle \(\mathcal{M}\), with \(\mathcal{M}\) realised as a sphere with \(2\) cross caps and where \(G\) has an essential \(3\)-cycle between the cross caps. |
2309.11526 | Likelihood-based Sensor Calibration using Affine Transformation | An important task in the field of sensor technology is the efficient
implementation of adaptation procedures of measurements from one sensor to
another sensor of identical design. One idea is to use the estimation of an
affine transformation between different systems, which can be improved by the
knowledge of experts. This paper presents an improved solution from Glacier
Research that was published back in 1973. The results demonstrate the
adaptability of this solution for various applications, including software
calibration of sensors, implementation of expert-based adaptation, and paving
the way for future advancements such as distributed learning methods. One idea
here is to use the knowledge of experts for estimating an affine transformation
between different systems. We evaluate our research with simulations and also
with real measured data of a multi-sensor board with 8 identical sensors. Both
data set and evaluation script are provided for download. The results show an
improvement for both the simulation and the experiments with real data. | Rüdiger Machhamer, Lejla Begic Fazlic, Eray Guven, David Junk, Gunes Karabulut Kurt, Stefan Naumann, Stephan Didas, Klaus-Uwe Gollmer, Ralph Bergmann, Ingo J. Timm, Guido Dartmann | 2023-09-20T06:55:39Z | http://arxiv.org/abs/2309.11526v4 | # Likelihood-based Sensor Calibration using Affine Transformation
###### Abstract
An important task in the field of sensor technology is the efficient implementation of adaptation procedures of measurements from one sensor to another sensor of identical design. One idea is to use the estimation of an affine transformation between different systems, which can be improved by the knowledge of experts. This paper presents an improved solution from Glacier Research that was published back in 1973. The results demonstrate the adaptability of this solution for various applications, including software calibration of sensors, implementation of expert-based adaptation, and paving the way for future advancements such as distributed learning methods. One idea here is to use the knowledge of experts for estimating an affine transformation between different systems. We evaluate our research with simulations and also with real measured data of a multi-sensor board with 8 identical sensors. Both data set and evaluation script are provided for download. The results show an improvement for both the simulation and the experiments with real data.
Sensor Adaptation, expert supported learning, distributed learning, transformation of sensors.
## I Introduction
In Internet of Things applications, such as machine learning for sensor systems, there are often systems with similar behavior, e.g., two sensors measuring the same data or the same machine in different process settings. In this context, the transformation between Euclidean data spaces of two systems can be described by an affine transformation. Figure 1 illustrates the concept of expert-assisted learning. We assume that experts know estimates of several example data points in both systems and can associate some data points of a similar machine with these measurements/estimates. Using these noisy measurements, we want to learn the transformation from system \(1\) to system \(2\). Many examples of this exist in practice, e.g., drifts and transformations of identically constructed sensors. These sensors are subject to non-identical fabrication defects. That is, in feature space, the data of two sensors are shifted or slightly rotated.
A common practice to align the data for, e.g., machine learning purposes is feature-wise normalization. Figure 2 (a) shows raw measurement values from an example application of an artificial nose analyzing apple juice during a heating process using 8 sensors. The black and blue dots show the relative resistance values \([\frac{\Omega}{\Omega}]\) from sensor 1 and sensor 2 at a sensor plate temperature of \(200\)\({}^{\circ}\)C and \(400\)\({}^{\circ}\)C. The red dots represent the estimation for sensor \(2\), applying the affine transformation (AT) on the data of sensor \(1\). The data collection process is described in detail in section IV-C. Figure 2 (b) shows the feature-wise normalized values. It can be seen that normalization already improves the similarity between the data spaces and that the data spaces can be further aligned if the transformation is applied between the two data spaces. Data measured with one sensor can be mapped to data in the data space of the other sensor. Similar practices can be used to adjust built-in bias in sensors or to perform software-based maintenance for drifts on aged or poisoned sensors. By combining these two methods, we lay the foundation for improving transformation estimation through expert knowledge. This can be used in distributed learning systems by checking the computed transformation between similar systems by an expert, and iteratively triggering a recomputation or further adaptation of the transformation in case of incorrect predictions. The given example of 8 sensors measuring the same fluid will provide similar data as if 8 sensors were measuring 8 similar fluids at different locations, which would correspond to a distributed learning use case.

Fig. 1: Concept of expert-based learning. Some model instances from sensor 1 are matched by experts with their representatives in sensor 2; the remaining instances from sensor 1 are calculated using the estimated AT, or vice versa.

In this paper, we present an approach to estimate the transformation between two data spaces by an AT. Referring to the example with different sensors, an expert can perform the measurement with all sensors under laboratory conditions. Then these measurements are used to estimate the transformation between the two sensors (systems). The basis of this approach is the problem of estimating an AT.
In this paper, we present an approach to estimate the transformation between two data spaces by an AT. Referring to the example with different sensors, an expert can perform the measurement with all sensors under laboratory conditions. Then these measurements are used to estimate the transformation between the two sensors (systems). The basis of this approach is the problem of estimating an AT.
Inspired by a statistical problem in glaciology, Gleser _et al._[1] obtained maximum likelihood estimators (MLE) of the unknown features and an AT. As a continuation of this research, the authors in [1] showed that a MLE for the unknown parameter exists, and gained MLE for the model parameters in closed form.
This paper is organized as follows: in section II, we derive the system model and formulate the estimation problem of the AT. In section III, we present the algorithms, which we in part have selected from the literature [1]. Section III-A summarizes the solution of Gleser _et al._. We show in section III-C that the stationary points of the parameters of the transformation lead to similar results as the least squares solution, which can be combined with Gleser _et al._ to provide better estimation of the AT. Section IV summarizes the simulation results based on Monte Carlo simulations and shows the results in a real multi-sensor setup.
Our main contribution is outlined as follows:
1. We show that the results of an old but fundamental work [1] in glaciology can be used for the estimation of an AT in expert-supported learning. In [1], the authors derive a solution for the estimation of a linear transformation. Their solution allows to estimate the data in system 1, variance and the transformation. By using the well-known augmented matrix [2], this result can be easily improved by denoising using the eigenvectors and extended to an AT.
2. Furthermore, the authors already showed that a simple solution directly follows from the multivariate regression analysis. We are interested in the estimation of the AT and we additionally derive the gradient of the MLE to get a stationary point of the parameters of the transformation, which leads to a similar simplificated _least square solution_ as in [1].
3. We prove that both solutions are connected by an eigenvalue decomposition. By simulation, we show that a combination of both solutions provides better estimates for computing the parameters of the AT compared to the original approach from Gleser _et al._[1].
4. We also show an experiment with a multi-sensor Bosch BME688 development kit board (Fig. 3) to evaluate the estimation error of the different methods and provide the example dataset and evaluation code.
5. Finally, we briefly present an idea for a novel concept for distributed learning using transformation with the support of expert knowledge.
### _Related Work_
A more general approach for the MLE procedure for linear models with errors in variables is proposed by the authors in [7, 9]. In [10], the relationship between maximum likelihood and generalized least squares approaches is examined and some of the results on estimation in _errors in variables_ models are extended. Linear transformations trained in the maximum likelihood sense on adaptation data for hidden Markov model based speech recognition are researched in [11, 12]. The authors in [13] discuss and compare optimal linear estimation and a modified MLE, where they show better results for the modified MLE in terms of bias and mean square error in comparison with the optimal linear model. The maximum likelihood method is used in regression analysis of linear transformation models and an efficient and consistent estimator is developed for different cohort data cases [14, 15]. An MLE optical flow model for monocular optical flow fields is developed and presented by the authors in [16].
Fig. 2: Comparison of (a) raw and (b) feature-wise normalized data.

Domain adaptation using maximum likelihood linear transformation for speaker verification is presented by the authors in [17, 18]. The authors in [20] investigate transformed linear regression with non-normal error distributions, where they develop an MLE approach and provide almost optimal conditions on the error distribution. The authors in [21], by using a Student's t-distribution noise model, introduce a distributed maximum-likelihood based state estimation approach in the domain of power systems. The authors in [19] propose a distributed learning framework robust to affine distribution shifts (FLRA) for non-i.i.d. personalization to implement efficient distributed learning and to generate an averaged global model using ATs. In [23], AT parameters are used to analyze the domain heterogeneity for human activity recognition. The research in [28] introduces a cost-effective calibration method for low-cost air quality monitoring systems, which ensures accurate measurements for widespread deployment. This approach reduces calibration costs and enhances the accessibility of precise readings, making it suitable for various IoT air quality monitoring devices. The authors in [29] propose TDACNN, a target-domain-free domain adaptation convolutional neural network designed to mitigate the effects of ill-posed gas sensor drift in E-nose applications. The introduction of an additive angular margin softmax loss during training and the use of the MMD-based ensemble method contribute to improved feature generalization and interclass distance enlargement without the need for participation of target domain data. The authors in [30] built five twin sensing units with eight MOX gas sensors each, exploring sensor tolerance and the impact of sensor mismatch, and they demonstrated the effectiveness of the Direct Standardization (DS) transformation for calibration transfer, addressing drift and variability issues. Tao _et al._[31] propose a domain correction based on the kernel transformation (DCKT) for drift compensation. They show that transformation can lead to better classification results in drift-prone sensor setups by modifying source and target domain. The authors in [32, 33] address flaws in the Maximum Likelihood Calibration Array (MLCA), proposing a modified cost function and emphasizing improved convergence rates, while our proposed approach focuses on expert-assisted learning between systems, combining direct estimation with a solution from Gleser _et al._
## II System Model
In this section, we define the mathematical basis in order to develop the corresponding algorithms in Section III. Fig. 4 shows an example of the transformation for the 2-dimensional case. This figure presents a simulation of the data vectors \(\mathbf{v}\in\mathbb{R}^{2}\) in an Euclidean space. The black data points are the measured points \(\mathbf{X}\) from system \(1\) and the red data points \(\mathbf{\Theta}\) are the estimation of these data points. The green data points \(\mathbf{Y}\) represent the (noisy) transformation of the black data points \(\mathbf{X}\) into system \(2\) and the blue data points \(\hat{\mathbf{Y}}\) are the origin estimation of this transformed data. As mentioned in [1], we consider two different systems in which the same features are measured. A single measurement data vector of system \(1\) is denoted by \(\mathbf{x}_{i}\) and the single data measurement vector of system \(2\) by \(\mathbf{y}_{i}\) respectively, where \(i\in\{1,...,n\}\). Let \(\mathbf{x}_{i}\in\mathbb{R}^{q}\) and \(\mathbf{y}_{i}\in\mathbb{R}^{q}\), then the data vectors from system \(1\) are mapped to the data vectors in system \(2\) by the transformation \(\mathbf{y}_{i}=\mathrm{T}(\mathbf{x}_{i})\).
We can summarize in the following definition:
**Definition 1**.: _Affine Transformation - Let \(\mathbf{x}\in\mathbb{R}^{q}\), \(\mathbf{A}\in\mathbb{R}^{q\times q}\) and \(\mathbf{b}\in\mathbb{R}^{q}\), the AT is given by \(\mathrm{T}(\mathbf{x})=\mathbf{A}\mathbf{x}+\mathbf{b}\)._
We assume that the measurement in system \(1\) is noisy. The measurement function in system \(1\) is given by (1):
\[\mathbf{x}_{i}=\boldsymbol{\theta}_{i}+\mathbf{m}_{i}. \tag{1}\]
We also assume that the measurement in system \(2\) is noisy. The measurement function in system \(2\) is therefore given by (2):
\[\mathbf{y}_{i}=\mathrm{T}(\boldsymbol{\theta}_{i})+\mathbf{n}_{i}. \tag{2}\]
The vectors \(\mathbf{m}_{i}\), \(\mathbf{n}_{i}\) are zero mean white Gaussian noise with \(\mathbf{m}_{i}\sim\mathcal{N}(\mathbf{0},\sigma^{2}\mathbf{I})\), \(\mathbf{n}_{i}\sim\mathcal{N}(\mathbf{0},\sigma^{2}\mathbf{I})\). With Def. 1, we can rewrite the measurement in system \(2\) as (3):
\[\mathbf{y}_{i}=\mathbf{A}\boldsymbol{\theta}_{i}+\mathbf{b}+\mathbf{n}_{i} \tag{3}\]
We assume \(\mathbf{m}_{i}\) and \(\mathbf{n}_{i}\) are statistically independent. The covariance matrix of \(\mathbf{n}_{i}\) and \(\mathbf{m}_{i}\) is \(\mathrm{Cov}[\mathbf{m}_{i}]=\mathrm{Cov}[\mathbf{n}_{i}]=\sigma^{2}\mathbf{I}\). We can summarize all measurements to the measurement matrices \(\mathbf{X}_{M}=[\mathbf{x}_{1},...,\mathbf{x}_{n}]\) and \(\mathbf{Y}_{M}=[\mathbf{y}_{1},...,\mathbf{y}_{n}]\), and the estimated origin matrix \(\mathbf{\Theta}_{E}=[\mathbf{\theta}_{1},...,\mathbf{\theta}_{n}]\).

Fig. 3: Multisensor board of the Bosch BME688 development kit, which provides 8 identically constructed sensors [26].
## III Algorithms
The solution of Gleser _et al._[1] was only derived for a linear transformation without a translation vector \(\mathbf{b}\). To introduce the estimation with the translation vector \(\mathbf{b}\), we extend our input using the well-known definition of the augmented matrix in (4)
\[\mathbf{B}=\left[\begin{array}{c|c}\mathbf{A}&\mathbf{b}\\ \mathbf{0}^{T}&1\end{array}\right] \tag{4}\]
and the new augmented vectors \(\hat{\mathbf{y}}_{i}=[\mathbf{y}_{i}^{T},\beta_{i}]^{T}\), \(\hat{\mathbf{x}}_{i}=[\mathbf{x}_{i}^{T},\alpha_{i}]^{T}\), \(\hat{\mathbf{\theta}}_{i}=[\mathbf{\theta}_{i}^{T},\gamma_{i}]^{T}\), \(\gamma_{i}=1\), \(\forall i=1...n\), \(\hat{\mathbf{m}}_{i}=[\mathbf{m}_{i}^{T},\mu_{i}]^{T}\), and \(\hat{\mathbf{n}}_{i}=[\mathbf{n}_{i}^{T},\nu_{i}]^{T}\) the estimation problem of (3) and (1) can be represented by equations (5) and (6):
\[\hat{\mathbf{x}}_{i} =\hat{\mathbf{\theta}}_{i}+\hat{\mathbf{m}}_{i}, \tag{5}\] \[\hat{\mathbf{y}}_{i} =\mathbf{B}\hat{\mathbf{\theta}}_{i}+\hat{\mathbf{n}}_{i}. \tag{6}\]
In case of \(\alpha_{i}=1\), \(\beta_{i}=1\), \(\gamma_{i}=1\), \(\mu_{i}=0\), and \(\nu_{i}=0\), we have transformed the affine transformation into a linear form. Following the same steps as in [1], we can define the same joint likelihood function:
**Definition 2**.: _The joint likelihood function [1] - with \(p=q+1,\ n\geq 2p\) and \(\mathbf{X}=[\hat{\mathbf{x}}_{1},...,\hat{\mathbf{x}}_{n}]\), \(\mathbf{\Theta}=[\hat{\mathbf{\theta}}_{1},...,\hat{\mathbf{\theta}}_{n}]\), and \(\mathbf{Y}=[\hat{\mathbf{y}}_{1},...,\hat{\mathbf{y}}_{n}]\) of dimension \(p\times n\) the joint likelihood of \(\mathbf{X}\), \(\mathbf{Y}\) is given by (7):_
\[p(\mathbf{X},\mathbf{Y}|\mathbf{\Theta},\mathbf{B},\mathbf{\sigma}^{2})=(2\pi \sigma^{2})^{-np}e^{-\frac{1}{2\sigma^{2}}f(\mathbf{X},\mathbf{Y},\mathbf{ \Theta})} \tag{7}\]
_leading to (8),_
\[f(\mathbf{X},\mathbf{Y},\mathbf{\Theta})=\mathrm{tr}[(\mathbf{X}-\mathbf{ \Theta})(\mathbf{X}-\mathbf{\Theta})^{T}]+\mathrm{tr}[(\mathbf{Y}-\mathbf{B} \mathbf{\Theta})(\mathbf{Y}-\mathbf{B}\mathbf{\Theta})^{T}]. \tag{8}\]
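To make the augmentation step concrete, the following is a minimal numpy sketch (an illustration, not the published evaluation script) of lifting the raw \(q\times n\) measurement matrices to the \(p\times n\) form used in (5)-(8), with \(\alpha_{i}=\beta_{i}=1\):

```python
import numpy as np

def augment(X_raw, Y_raw):
    """Append a row of ones so that the affine model (3) becomes the
    linear model (6), y_hat = B x_hat, with B as in (4)."""
    n = X_raw.shape[1]
    ones = np.ones((1, n))
    return np.vstack([X_raw, ones]), np.vstack([Y_raw, ones])
```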
### _Solution proposed by Gleser et al._
With Definition 2, the MLE of \(\mathbf{\Theta}\) and \(\mathbf{B}\) of [1] can be computed using the following lemma:
**Lemma 1**.: _Let \(\boldsymbol{\lambda}\) be the diagonal matrix of the \(p\) largest eigenvalues and \(\mathbf{U}\) be the matrix of all corresponding eigenvectors of \(\mathbf{X}^{T}\mathbf{X}+\mathbf{Y}^{T}\mathbf{Y}\), then the estimates of \(\mathbf{\Theta}\) and \(\mathbf{B}\) are given by (9) and (10):_
\[\mathbf{\Theta}^{\ddagger} =[\mathbf{U}\mathbf{U}^{T}\mathbf{X}^{T}]^{T} \tag{9}\] \[\mathbf{B}^{\ddagger} =\mathbf{Y}\mathbf{\Theta}^{T}(\mathbf{\Theta}\mathbf{\Theta}^{ T})^{-1} \tag{10}\]
The proof is given in [1] in Eqs. (2.4), (2.6) and (2.9).
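In numpy terms, Lemma 1 can be sketched as follows for augmented \(p\times n\) matrices \(\mathbf{X}\), \(\mathbf{Y}\); this is a direct reading of Eqs. (9) and (10), not the authors' reference implementation:

```python
import numpy as np

def gleser_mle(X, Y):
    """MLE of Theta and B according to Lemma 1.
    X, Y: augmented p x n measurement matrices."""
    p = X.shape[0]
    # eigenvectors of the p largest eigenvalues of X^T X + Y^T Y
    eigvals, V = np.linalg.eigh(X.T @ X + Y.T @ Y)    # ascending eigenvalues
    U = V[:, -p:]
    Theta = (U @ U.T @ X.T).T                         # eq. (9)
    B = Y @ Theta.T @ np.linalg.inv(Theta @ Theta.T)  # eq. (10)
    return Theta, B
```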
### _Multivariate Regression Approach_
Important for an implementation on low-power IoT devices is a balancing of the computational complexity of the algorithms. Therefore, we also investigate an alternative solution in this section. The solution of Gleser _et al._ in Lemma 1 needs an eigenvector decomposition. We are interested in an estimation of the AT from \(\mathbf{X}\) to \(\mathbf{Y}\). With \(\mathbf{r}_{i}=\hat{\mathbf{n}}_{i}-\mathbf{B}\hat{\mathbf{m}}_{i}\), we can rewrite (6) as (11):
\[\hat{\mathbf{y}}_{i}=\mathbf{B}\hat{\mathbf{x}}_{i}+\hat{\mathbf{n}}_{i}-\mathbf{B}\hat{\mathbf{m}}_{i}=\mathbf{B}\hat{\mathbf{x}}_{i}+\mathbf{r}_{i}. \tag{11}\]
This leads to the idea of a simple least squares estimate [1]. In general, the sensor measurement is noisy. However, in cases of high quality sensors, the noise power is very low. Hence, a simple regression approach has advantages, e.g., lower computational complexity and thus energy consumption.
As observed in [1], if \(\mathbf{\Theta}\mathbf{\Theta}^{T}\) is non-singular, the minimum over \(\mathbf{B}\) of \(\mathrm{tr}[(\mathbf{Y}-\mathbf{B}\mathbf{\Theta})(\mathbf{Y}-\mathbf{B}\mathbf{\Theta})^{T}]\) can be obtained based on the results of multivariate regression: \(\mathbf{B}=\mathbf{Y}\mathbf{\Theta}^{T}[\mathbf{\Theta}\mathbf{\Theta}^{T}]^{-1}\). This solution corresponds to the search for \(\mathbf{B}\) with \(\mathbf{B}\mathbf{\Theta}=\mathbf{Y}\). The resulting matrix \(\mathbf{B}\) is obtained by multiplying \(\mathbf{Y}\) by the pseudoinverse of \(\mathbf{\Theta}\) (i.e., the Moore-Penrose inverse), i.e. \(\mathbf{B}=\mathbf{Y}\mathbf{\Theta}^{T}[\mathbf{\Theta}\mathbf{\Theta}^{T}]^{-1}\). This solution is the least squares solution. Interestingly, the stationary points of (8) are \(\mathbf{\Theta}=\mathbf{X}\) and \(\mathbf{Y}=\mathbf{B}\mathbf{\Theta}\).
**Proposition 1**.: _If \(\mathbf{\Theta}\mathbf{\Theta}^{T}\) is a non-singular matrix, the stationary points of the function \(f(\mathbf{X},\mathbf{Y},\mathbf{\Theta})\) defined by (8) over \(\mathbf{B}\) and \(\mathbf{\Theta}\) is given by the solutions:_
\[\mathbf{\Theta}^{*} =\mathbf{X}\] \[\mathbf{B}^{*} =\mathbf{Y}\mathbf{\Theta}^{T}[\mathbf{\Theta}\mathbf{\Theta}^{T}] ^{-1}.\]
Proof.: See Appendix.
According to the Appendix, with \(\mathbf{\Theta}=\mathbf{X}\), we finally get a simple equation (12) for the MLE of \(\mathbf{B}\):
\[\mathbf{B}=\mathbf{Y}\mathbf{X}^{T}[\mathbf{X}\mathbf{X}^{T}]^{-1} \tag{12}\]
Finally, from \(\mathbf{B}\) with (4), we obtain the estimate \(\tilde{\mathbf{A}}\) of the matrix \(\mathbf{A}\) and the estimate \(\tilde{\mathbf{b}}\) of the vector \(\mathbf{b}\). In the end, we are interested in a transformation of the measured data \(\mathbf{x}_{i}\) of system \(1\) to system \(2\), \(\tilde{\mathbf{y}}_{i}\), as described in (13):
\[\tilde{\mathbf{y}}_{i}=\tilde{\mathbf{A}}\mathbf{x}_{i}+\tilde{\mathbf{b}}. \tag{13}\]
Fig. 5: Simulation result, mean errors \(\bar{e}_{x}\) and \(\bar{e}_{y}\) for the standard implementation of Gleser _et al._ compared to the augmented Alg. 1 as a function of the standard deviation \(\sigma\).
Algorithm 1 is the augmented implementation of Gleser _et al._. In line \(2\) we calculate the \(p\) largest eigenvalues \(\mathbf{\Lambda}\) according to Lemma 1, and use the associated eigenvectors to estimate \(\mathbf{\Theta}\), the origins of \(\mathbf{X}\), in line \(3\). The variables \(\gamma_{i}\) should satisfy \(\gamma_{i}=1\), \(\forall i=1,...,n\). Therefore, after step \(2\) of Algorithm 1 set \(\gamma_{i}=1\), \(\forall i=1,...,n\). The resulting \(\mathbf{\Theta}\) augmentation line contains many values close to \(1\), so we set those to the value \(1\) in line \(4\) to suppress the small eigenvalues impact on \(\mathbf{\Theta}\) with low noise power [25]. Lines \(5\) to \(7\) calculate and extract the estimated AT parameters and line \(8\) removes the augmentation to format the origin estimation.
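A minimal numpy sketch of this procedure is given below; it assumes, as in Lemma 1, that \(\mathbf{B}\) is computed from the denoised estimate \(\mathbf{\Theta}\):

```python
import numpy as np

def alg1_augmented_gleser(X_raw, Y_raw):
    """Augmented Gleser estimator with eigenvector denoising (Alg. 1).
    X_raw, Y_raw: q x n raw measurements of system 1 and system 2."""
    q, n = X_raw.shape
    p = q + 1
    X = np.vstack([X_raw, np.ones(n)])   # augmentation, alpha_i = 1
    Y = np.vstack([Y_raw, np.ones(n)])   # augmentation, beta_i = 1
    _, V = np.linalg.eigh(X.T @ X + Y.T @ Y)
    U = V[:, -p:]                        # p largest eigenvalues (line 2)
    Theta = (U @ U.T @ X.T).T            # origin estimate (line 3)
    Theta[-1, :] = 1.0                   # reset augmentation row to 1 (line 4)
    B = Y @ Theta.T @ np.linalg.inv(Theta @ Theta.T)  # lines 5-7
    return B[:q, :q], B[:q, q], Theta[:q, :]          # line 8: strip augmentation
```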
Algorithm 2 corresponds to the derivation of the simple least squares according to Proposition 1. Since our focus does not lie in the estimation of the origins \(\mathbf{\Theta}\), we can directly calculate and extract the estimation parameters in lines \(1\) to \(4\). Although we cannot achieve improved complexity, it should be possible to achieve lower energy consumption through an improved number of operations, since Alg. 2 skips step 1 and step 2 of Alg. 1. Furthermore, the accuracy is very close to the results of Alg. 3 (cf. Table I and Table II), which, however, requires a similar number of operations as Alg. 1.
```
Data: Measurement data of the two systems \(\mathbf{X}\), \(\mathbf{Y}\)
Result: Estimation of \(\tilde{\mathbf{A}}\), \(\tilde{\mathbf{b}}\) and \(\tilde{\mathbf{\Theta}}=\mathbf{\Theta}_{E}\)
1: \(\mathbf{B}=\mathbf{Y}\mathbf{X}^{T}(\mathbf{X}\mathbf{X}^{T})^{-1}\)
2: \(\tilde{\mathbf{A}}=\mathbf{B}(1:p-1,1:p-1)\)
3: \(\tilde{\mathbf{b}}=\mathbf{B}(1:p-1,p)\)
4: \(\mathbf{\Theta}_{E}=\mathbf{X}(1:p-1,:)\)
```
**Algorithm 2** Implementation of the simple least squares according to Proposition 1.
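In numpy terms, Alg. 2 reduces to a few matrix operations; the following sketch is an illustration, not the published evaluation script:

```python
import numpy as np

def alg2_least_squares(X_raw, Y_raw):
    """Simple least squares AT estimate according to eq. (12) (Alg. 2)."""
    q, n = X_raw.shape
    X = np.vstack([X_raw, np.ones(n)])    # augmented measurements
    Y = np.vstack([Y_raw, np.ones(n)])
    B = Y @ X.T @ np.linalg.inv(X @ X.T)  # eq. (12), line 1
    return B[:q, :q], B[:q, q], X_raw     # A_tilde, b_tilde, Theta_E
```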
The simple least squares solution of Alg. 2 and the solution of Gleser _et al._ in Alg. 1 can be combined into the improved Alg. 3 by using the following observations.
### _Hybrid Approach_
**Proposition 2**.: _Let \(\mathbf{D}\) be the diagonal matrix of all eigenvalues and \(\mathbf{V}\) be the matrix of all corresponding eigenvectors of \(\mathbf{X}^{T}\mathbf{X}+\mathbf{Y}^{T}\mathbf{Y}\), and let \(\mathbf{B}^{\dagger}\) and \(\mathbf{\Theta}^{\dagger}\) denote the solutions of \(\mathbf{B}\) and \(\mathbf{\Theta}\) in this case. Then the estimates are given by Proposition 1 and it holds that \(\mathbf{B}^{\dagger}=\mathbf{B}^{\star}\) and \(\mathbf{\Theta}^{\dagger}=\mathbf{\Theta}^{\star}\)._
Proof.: The matrix \(\mathbf{X}^{T}\mathbf{X}+\mathbf{Y}^{T}\mathbf{Y}\) is symmetric, hence there exists an orthonormal basis of eigenvectors [8]. According to the symmetric Schur decomposition (see Theorem 8.1.1 in [8]) there exists a real orthogonal matrix \(\mathbf{V}\) with \(\mathbf{V}\mathbf{V}^{T}=\mathbf{I}\) that satisfies (14)
\[\mathbf{V}^{T}(\mathbf{X}^{T}\mathbf{X}+\mathbf{Y}^{T}\mathbf{Y})\mathbf{V}= \mathbf{D}. \tag{14}\]
With \(\mathbf{V}\mathbf{V}^{T}=\mathbf{I}\), we can conclude \(\mathbf{\Theta}^{T}=\mathbf{V}\mathbf{V}^{T}\mathbf{X}^{T}=\mathbf{X}^{T}\).
According to Proposition 2, the results of Alg. 1 and Alg. 2 are identical if all eigenvalues are used. As Alg. 1 offers a better estimation of \(\mathbf{\Theta}\), we can improve Alg. 2 by using the estimator for \(\mathbf{\Theta}\) from Alg. 1. The resulting (hybrid) algorithm is given in Alg. 3.
```
Data: Measurement data of the two systems \(\mathbf{X}\), \(\mathbf{Y}\)
Result: Estimation of \(\tilde{\mathbf{A}}\), \(\tilde{\mathbf{b}}\) and \(\tilde{\mathbf{\Theta}}=\mathbf{\Theta}_{E}\)
1: \([\mathbf{V},\mathbf{D}]=\mathrm{eigs}(\mathbf{X}^{T}\mathbf{X}+\mathbf{Y}^{T}\mathbf{Y})\)
2: \(\mathbf{\Theta}^{T}=\mathbf{V}\mathbf{V}^{T}\mathbf{X}^{T}\)
3: \(\mathbf{B}=\mathbf{Y}\mathbf{X}^{T}(\mathbf{X}\mathbf{X}^{T})^{-1}\)
4: \(\tilde{\mathbf{A}}=\mathbf{B}(1:p-1,1:p-1)\)
5: \(\tilde{\mathbf{b}}=\mathbf{B}(1:p-1,p)\)
6: \(\mathbf{\Theta}_{E}=\mathbf{\Theta}(1:p-1,:)\)
```
**Algorithm 3** Hybrid solution: Implementation of a combination of the simple MLE and Gleser _et al._
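A corresponding numpy sketch of Alg. 3 is given below. Here the eigendecomposition is restricted to the \(p\) largest eigenvalues, which is our reading of the \(\mathrm{eigs}\) call in line 1; with all eigenvectors the result would collapse to Alg. 2 by Proposition 2:

```python
import numpy as np

def alg3_hybrid(X_raw, Y_raw):
    """Hybrid estimator (Alg. 3): least squares B, eq. (12), combined
    with the denoised origin estimate Theta of Gleser et al."""
    q, n = X_raw.shape
    p = q + 1
    X = np.vstack([X_raw, np.ones(n)])
    Y = np.vstack([Y_raw, np.ones(n)])
    _, V = np.linalg.eigh(X.T @ X + Y.T @ Y)  # line 1
    U = V[:, -p:]                             # p largest eigenvalues
    Theta = (U @ U.T @ X.T).T                 # line 2
    B = Y @ X.T @ np.linalg.inv(X @ X.T)      # line 3
    return B[:q, :q], B[:q, q], Theta[:q, :]  # lines 4-6
```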
## IV Results
We evaluate the presented algorithms and investigate the difference between the implementation of Gleser _et al._ and our denoised augmented improvement using Monte Carlo simulations. To test the applicability in practice, we provide the algorithms with data from an application for electronic noses.
### _Monte Carlo Simulations_
We developed Monte Carlo simulations to evaluate the algorithms. In \(l=1000\) simulation runs, \(n=1000\) data vectors \(\boldsymbol{\theta}_{i}\) were generated with a data generation tool from [24]. We add a noise vector whose elements are zero-mean normally distributed random numbers with standard deviation \(\sigma\), for \(\sigma\) from \(1\) to \(15\). The data vectors have a dimension of \(q=2\). We used a fixed transformation given by:
\[\mathbf{A}=\left[\begin{array}{cc}0.3430&0.3430\\ 0.1715&0.8575\end{array}\right]\]
and \(\mathbf{b}^{T}=[52,-58]\). For comparison, the error \(e_{y}\) is defined in (15):
\[e_{y}=\frac{1}{n}\sum_{i=1}^{n}\lVert(\tilde{\mathbf{A}}\tilde{\mathbf{\theta }}_{i}+\tilde{\mathbf{b}})-(\mathbf{A}\boldsymbol{\theta}_{i}+\mathbf{b}) \rVert_{2}, \tag{15}\]
where \(\tilde{\mathbf{\Theta}}=[\tilde{\boldsymbol{\theta}}_{1},...,\tilde{\boldsymbol{\theta}}_{n}]\). Since the solution of Gleser _et al._ also estimates \(\mathbf{\Theta}\), we additionally calculate the error defined in (16):
\[e_{x}=\frac{1}{n}\sum_{i=1}^{n}\lVert\tilde{\mathbf{\theta}}_{i}-\boldsymbol{ \theta}_{i}\rVert_{2}. \tag{16}\]
Finally, we calculate the arithmetic mean values of the errors \(e_{x}\) and \(e_{y}\) over the \(l=1000\) simulation runs.
The arithmetic mean values are denoted by \(\bar{e}_{x}\) and \(\bar{e}_{y}\). Fig. 6 shows the mean estimation errors \(\bar{e}_{x}\) and \(\bar{e}_{y}\) of the algorithms. Since Algorithm 2 only estimates the parameters of the transformation between the noisy measured data from system 1 and system 2, it performs worse than Gleser _et al._, which takes the origins into account. In this case, the error \(\bar{e}_{x}\) is simply the noise of system 1. The hybrid version in Alg. 3 benefits from Gleser _et al._ and delivers results of similar quality for the given transformation matrix.
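As a hedged illustration of one simulation run and the errors (15) and (16), the evaluation might look as follows; the `estimator` argument refers to the hypothetical sketch above, and the data generation tool [24] is not reproduced here. Averaging the returned values over the \(l\) runs yields \(\bar{e}_{x}\) and \(\bar{e}_{y}\).

```python
import numpy as np

def mc_run(A, b, Theta_true, sigma, estimator, rng):
    """One Monte Carlo run: add Gaussian noise to both systems, estimate
    the transformation, and return the errors (16) and (15)."""
    q, n = Theta_true.shape
    Y_true = A @ Theta_true + b[:, None]
    # Augment the noisy observations with a row of ones
    X = np.vstack([Theta_true + sigma * rng.normal(size=(q, n)), np.ones(n)])
    Y = np.vstack([Y_true + sigma * rng.normal(size=(q, n)), np.ones(n)])
    A_e, b_e, Theta_e = estimator(X, Y)
    e_y = np.mean(np.linalg.norm(A_e @ Theta_e + b_e[:, None] - Y_true, axis=0))  # (15)
    e_x = np.mean(np.linalg.norm(Theta_e - Theta_true, axis=0))                   # (16)
    return e_x, e_y
```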
Analyzing the error \(\bar{e}_{y}\) in Table I shows the improvement due to the modification: Alg. 1 improves on the results of Gleser _et al._
### _Eigenvector Denoising by Augmentation_
Fig. 5 shows the results of the Monte Carlo simulation for the eigenvector denoising extension of Alg. 1 in comparison with the Gleser _et al._ standard implementation. One can see the improvement in the estimation of \(\bar{e}_{y}\) while error \(\bar{e}_{x}\) remains similar. Table I gives an overview of the mean errors \(\bar{e}_{x}\) and \(\bar{e}_{y}\) for the implementation of Gleser _et al._, Alg. 1, Alg. 2, and Alg. 3 for the selected values of the standard deviation \(\sigma\).
### _Case Study: 8 Sensor Board Scenario_
For the collection of real test data, we measured the effects of humidity, temperature, and disturbances in the air. In addition, we heat the samples during data recording and clean the sensors at regular intervals.
**Data Preparation:** The Bosch BME688 development kit board, shown in Fig. 3, is equipped with eight identical BME688 metal oxide sensors that are used for data recording. For the experiment, we prepared two samples [27], water and apple juice, each with a volume of 100 ml. For clarity, we present only the apple juice investigations; the results with water are similar and are available in the GitLab repository. Before each measurement of the sample, the sensor is cleaned in air. The measurement period per glass is 20 minutes. The temperature of the sample glass is increased by 1 \({}^{\circ}\)C every 2 minutes during the measurement, so the total temperature rises from 25 \({}^{\circ}\)C at the beginning of the measurement to 35 \({}^{\circ}\)C at the end of the measurement period. The Bosch BME688 is operated with a continuously repeating temperature cycle of 10 heating temperature steps, also called parallel mode. In this example, the temperature cycle is set to 5 steps at 200 \({}^{\circ}\)C and 5 steps at a sensor temperature of 400 \({}^{\circ}\)C, each at an interval of 1260 milliseconds. For later evaluation of the measurement data, the arithmetic mean is calculated from the 5 raw values at 200 \({}^{\circ}\)C and 400 \({}^{\circ}\)C. As already mentioned in section I, Fig. 2 shows the results of the estimated transformation for a normalized example data set in the use case of an electronic nose measuring apple juice. The black dots show the position of the data points in system 1, blue dots represent the measured data in system 2, and the red dots are the estimation of system 2 based on the calculated transformation of Proposition 1. It can be seen that Proposition 1 allows a good estimation of the transformation in this case.
**Data Evaluation:** We calculated the error \(e_{y}\) for the \(64\) combinations of sensors denoting \(j\) as the source sensor and \(k\) as the target sensor, \(j,k\in\{1,...,8\}\) by (17)
\[e_{y}=\|(\tilde{\mathbf{A}}_{j,k}\mathbf{x}_{j}+\tilde{\mathbf{b}}_{j,k})- \mathbf{y}_{k}\|_{2}, \tag{17}\]
where \(\tilde{\mathbf{A}}_{j,k}\) and \(\tilde{\mathbf{b}}_{j,k}\) represent the transformation parameters from source sensor \(j\) to target sensor \(k\), including the cases \(j=k\), using \(K=8\) sensors.
The results of the transformation error \(\bar{e}_{y}\) for the normalized data set of the multi-sensor board are shown in Fig. 7. The mean error for the \(n=90\) data samples is calculated by (18)
\[\bar{e}_{y}(j)=\frac{1}{K}\sum_{k=1}^{K}\frac{1}{n}\sum_{i=1}^{n}\lVert(\tilde{ \mathbf{A}}_{j,k}\mathbf{x}_{j}(i)+\tilde{\mathbf{b}}_{j,k})-\mathbf{y}_{k}(i) \rVert_{2}. \tag{18}\]
Since we can use the measured values \(\mathbf{y}_{k}\), we do not need to compute them as in (15) via \((\mathbf{A}\boldsymbol{\theta}_{i}+\mathbf{b})\). Table II shows the normalized mean error \(\bar{e}_{y}\) for each source sensor, with the minimum value \(0.00072817\) mapped to 0 and the maximum value \(0.0028\) mapped to 1. Since we focus on calculating the transformation from \(\mathbf{X}\) to \(\mathbf{Y}\) and do not try to estimate the origins of the data, the performance of Gleser _et al._ is slightly less accurate than the least squares, and the hybrid version delivers results similar to Proposition 1. Our proposed Alg. 2, with fewer computational steps and therefore lower energy consumption, gives the best results except for source sensor 6. However, this could also be due to inaccuracies in the measurement procedure and needs further investigation. A simple normalization of the data in our experiment leads to results at least \(50~{}\%\) worse than those of our proposed method.
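A sketch of the evaluation loop over the \(64\) sensor pairs, under the same assumptions as above (one augmented data matrix per sensor, recorded simultaneously; names hypothetical):

```python
import numpy as np

def pairwise_errors(sensors):
    """Mean error (18) for each source sensor j, averaged over all targets k.

    `sensors` is a list of K augmented data matrices of shape (p, n),
    last row all ones, one matrix per sensor."""
    K = len(sensors)
    p = sensors[0].shape[0]
    e = np.zeros((K, K))
    for j in range(K):
        for k in range(K):
            X, Y = sensors[j], sensors[k]
            B = Y @ X.T @ np.linalg.inv(X @ X.T)     # Alg. 2 / Prop. 1
            A, b = B[:p - 1, :p - 1], B[:p - 1, p - 1:p]
            e[j, k] = np.mean(np.linalg.norm(A @ X[:p - 1] + b - Y[:p - 1], axis=0))
    return e.mean(axis=1)   # \bar{e}_y(j), averaged over targets k
```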
## V Discussion and Outlook
In this article, we have shown that procedures for estimating the 2-dimensional AT can be used for expert-assisted learning between two systems. The simple direct estimation of the transformation parameters based on the stationary points can be combined with the solution for the estimation of the data from Gleser _et al._ This new hybrid solution provides the best estimates for both simulations and real data. For the presented real-data evaluation, we achieve the best results with the simple Prop. 1, but we look forward to investigating whether the approach of Gleser _et al._ can be used to improve the measurement values of our low-cost sensors in an extended setup. For the real-data example, our simple least squares approach already achieves a unique solution, due to the non-singularity of \(\boldsymbol{\Theta\Theta}^{T}\), and the best accuracy with low computation, which is very promising for further developments in distributed learning. The results still need to be generalized to the m-dimensional case.
Finally, in further work we will explore whether only a part of the measured data is sufficient to give a good estimate of the transformation. The evaluation with multiple transformations can also be done in future work. Based on this, iterative methods can be developed to estimate the transformation between different systems. Fig. 8 illustrates the concept of our future work to implement distributed learning based on the creation of similar models using the expert-supported transformation estimation (see the figure in the abstract) between the two data spaces. To our knowledge, there is no similar approach, although it seems to be a simple way to match distributed systems and their events or anomalies.

Fig. 8: Concept of distributed learning using transformation. Each model learns from each (transformed) input data stream. Experts therefore review and train the estimation of the transformation.

Fig. 7: Experiment result, mean error \(\bar{e}_{y}\) for source sensor \(j\) to target sensor \(k\), \(j\), \(k\in\{1,...,8\}\), compared to the distance between normalized source and target data. The peak at sensor 6 is likely due to deviations in the measurement procedure.

We plan to implement these concepts as the basis for a more efficient method of knowledge sharing between distributed nodes. We also plan to investigate in more detail why denoising improves the results. In our example, the fixed transformation we have chosen is favorable for Gleser _et al._ However, when considering various alternatives for \(\mathbf{A}\), Gleser _et al._ produces notably inferior results. We encourage readers to experiment with this option and assess its performance in different \(\mathbf{A}\) scenarios using the code provided on GitLab at [https://gitlab.rlp.net/vski-engineering-group/paper/likelihood-based-sensor-calibration-for-expert-supported-distributed-learning-algorithms-in-iot-systems-estimating-affine-transformations/matlabcodeanddata](https://gitlab.rlp.net/vski-engineering-group/paper/likelihood-based-sensor-calibration-for-expert-supported-distributed-learning-algorithms-in-iot-systems-estimating-affine-transformations/matlabcodeanddata).
## VI Conclusion
We proposed an improved solution of Gleser _et al._ by augmentation of the matrix, denoising using the eigenvectors, and the extension to an AT. We have shown that both solutions are connected by an eigenvalue decomposition. We further showed that a combination of both solutions provides better estimates for computing the parameters of the AT in both simulation and a real experiment. Finally, we presented a novel concept for distributed learning using transformations with the support of expert knowledge.
All three algorithms offer a good approach to estimating the AT between similar systems. Further investigations should include dedicated methods to fairly compare Gleser _et al._ with our hybrid proposition. The development of a simple but powerful expert-based distributed learning algorithm using ATs appears promising but requires further research into multidimensionality and an iterative approach. Investigating more powerful transformations for stronger IoT nodes is worthwhile as well.
## Proof Prop. 1
Proof.: With the help of the gradient in (A1), based on [6] (eq. (119)),

\[\nabla_{\mathbf{X}}\mathrm{tr}[(\mathbf{AXB}+\mathbf{C})(\mathbf{AXB}+\mathbf{C})^{T}]=2\mathbf{A}^{T}(\mathbf{AXB}+\mathbf{C})\mathbf{B}^{T},\] (A1)
we can get the following gradients over \(\mathbf{\Theta}\)
\[\nabla_{\mathbf{\Theta}}\mathrm{tr}[(\mathbf{X}-\mathbf{\Theta})(\mathbf{X}- \mathbf{\Theta})^{T}]=-2(\mathbf{X}-\mathbf{\Theta})\]
and
\[\nabla_{\mathbf{\Theta}}\mathrm{tr}[(\mathbf{Y}-\mathbf{B}\mathbf{\Theta})( \mathbf{Y}-\mathbf{B}\mathbf{\Theta})^{T}]=-2\mathbf{B}^{T}(\mathbf{Y}- \mathbf{B}\mathbf{\Theta}).\]
With the gradients, we get the equation for the stationary points
\[\mathbf{0}=-2(\mathbf{X}-\mathbf{\Theta})-2\mathbf{B}^{T}(\mathbf{Y}-\mathbf{ B}\mathbf{\Theta}),\]
which results in (A2)
\[\mathbf{\Theta}=\mathbf{X}+\mathbf{B}^{T}\mathbf{Y}-\mathbf{B}^{T}\mathbf{B} \mathbf{\Theta}.\] (A2)
Similarly, we can derive the gradient over \(\mathbf{B}\)
\[\nabla_{\mathbf{B}}\mathrm{tr}[(\mathbf{Y}-\mathbf{B}\mathbf{\Theta})( \mathbf{Y}-\mathbf{B}\mathbf{\Theta})^{T}]=-2(\mathbf{Y}-\mathbf{B}\mathbf{ \Theta})\mathbf{\Theta}^{T}\]
which results in (A3)
\[\mathbf{B}\mathbf{\Theta}\mathbf{\Theta}^{T}=\mathbf{Y}\mathbf{\Theta}^{T}.\] (A3)
Multiplying both sides of (A3) with \(\mathbf{\Theta}\) from the right, we get
\[\mathbf{B}\mathbf{\Theta}\mathbf{\Theta}^{T}\mathbf{\Theta}=\mathbf{Y}\mathbf{ \Theta}^{T}\mathbf{\Theta}.\]
Multiplying both sides with \([\mathbf{\Theta}^{T}\mathbf{\Theta}]^{-1}\) from the right, we finally get (A4)
\[\mathbf{Y}=\mathbf{B}\mathbf{\Theta}.\] (A4)
Inserting (A4) in (A2), we get (A5)
\[\mathbf{\Theta}=\mathbf{X}+\mathbf{B}^{T}\mathbf{B}\mathbf{\Theta}-\mathbf{B}^ {T}\mathbf{B}\mathbf{\Theta}=\mathbf{X}.\] (A5)
Multiplying both sides of (A3) with \([\mathbf{\Theta}\mathbf{\Theta}^{T}]^{-1}\) from the right, we get the result of Proposition 1.
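As a quick numerical sanity check of Proposition 1 (not part of the original proof; noiseless augmented data is assumed, with the \(\mathbf{A}\) and \(\mathbf{b}\) from the simulations above), the stationary-point solution recovers the transformation exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
n, q = 1000, 2
# Augmented, noiseless data: two random rows plus a row of ones
Theta = np.vstack([rng.uniform(-1, 1, size=(q, n)), np.ones(n)])
A_aug = np.eye(q + 1)
A_aug[:q, :q] = [[0.3430, 0.3430], [0.1715, 0.8575]]
A_aug[:q, q] = [52, -58]
Y = A_aug @ Theta

B = Y @ Theta.T @ np.linalg.inv(Theta @ Theta.T)   # stationary-point solution
print(np.allclose(B, A_aug))                       # True on noiseless data
```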
|
2309.09382 | Reframing the Event Horizon: The Harlow-Hayden Computational Approach to
the Firewall Paradox | This study critically reevaluates the Harlow-Hayden (HH) solution to the
black hole information paradox and its articulation in the firewall paradox.
The exploration recognizes the HH solution as a revolutionary approach in black
hole physics, steering away from traditional constraints to depict the event
horizon as a computational rather than a physical barrier. The paper first maps
the initial physical dilemma that instigated the HH journey, introducing Alice,
an observer facing intricate computational challenges as she approaches the
black hole. I then depict the evolution of the narrative, describing how Alice
was facilitated with a quantum computer to surmount the computational
challenges and further detailing the augmented complexities arising from the
integration of the physical dynamics of the black hole. Yet, HH's research
applies the AdS/CFT correspondence to explore the dynamic unitary
transformation in solving the firewall paradox through decoding Hawking
radiation. However, it identifies a contradiction; the eternal perspective of
black holes from the AdS/CFT theory challenges the firewall paradox's
foundation. Finally, I narrate a paradigm shift as HH reframes Alice's task
within the realms of error-correcting codes, illustrating a remarkable
transition from a physical problem in black hole physics to a computational
predicament in computer science. The study revisits pivotal moments in
understanding black hole physics ten years later through this reexamination. | Galina Weinstein | 2023-09-17T21:25:06Z | http://arxiv.org/abs/2309.09382v1 | # Reframing the Event Horizon: The Harlow-Hayden Computational Approach to the Firewall Paradox
###### Abstract
This study critically reevaluates the Harlow-Hayden (HH) solution to the black hole information paradox and its articulation in the firewall paradox. The exploration recognizes the HH solution as a revolutionary approach in black hole physics, steering away from traditional constraints to depict the event horizon as a computational rather than a physical barrier. The paper first maps the initial physical dilemma that instigated the HH journey, introducing Alice, an observer facing intricate computational challenges as she approaches the black hole. I then depict the evolution of the narrative, describing how Alice was facilitated with a quantum computer to surmount the computational challenges and further detailing the augmented complexities arising from the integration of the physical dynamics of the black hole. Yet, HH's research applies the AdS/CFT correspondence to explore the dynamic unitary transformation in solving the firewall paradox through decoding Hawking radiation. However, it identifies a contradiction; the eternal perspective of black holes from the AdS/CFT theory challenges the firewall paradox's foundation. Finally, I narrate a paradigm shift as HH reframes Alice's task within the realms of error-correcting codes, illustrating a remarkable transition from a physical problem in black hole physics to a computational predicament in computer science. The study revisits pivotal moments in understanding black hole physics ten years later through this reexamination.
## 1 Introduction
In this study, I re-examine the Harlow-Hayden (HH) solution [Har-Hay] to the black hole information paradox and its manifestation in the firewall paradox [AMPS]. The initiation of the HH solution heralded a pivotal moment in studying black hole physics. It deviated from the traditional analysis confined by spatial and temporal borders, proposing a fresh perspective wherein the event horizon is perceived more as a barrier dictated by computational hurdles rather than mere physical limits.
I begin in section 2 with a concise historical introduction, where I elucidate the concepts of Hawking radiation, the monogamy of entanglement, and the firewall paradox, setting a firm foundation for the ensuing discussion. Following this, I delve into a detailed re-evaluation of the HH solution to the firewall paradox.
The initial approach of HH was grounded in addressing a physical dilemma, a prevalent strategy to untangle the intricacies of the Hawking information paradox during that period, explored in detail in section 3. Alice, the observer descending into the black hole, faces a computationally daunting challenge. To navigate this, HH empowered Alice with a quantum computer; a development unfolded in section 4. The narrative evolves by integrating physical insights concerning black hole dynamics in section 5, yet Alice encounters additional complexities.
Adapting to the unfolding scenario, HH redefined Alice's mission in the context of error-correcting codes, elucidated in section 6. Epistemically, this transformative journey took us from grappling with a physical conundrum rooted in black hole physics to ultimately contending with a computational issue in the domain of computer science.
## 2 Hawking radiation and the firewall paradox
According to classical general relativity, nothing can escape from the event horizon of a black hole. However, in 1974, Hawking penned a letter to _Nature_, provocatively titled "Black Hole Explosions?". In this letter, Hawking intended to demonstrate that significant quantum effects might be associated with black holes [10]. In a reflection three years later, Hawking detailed the initial realization that catalyzed his novel hypothesis, stating, "To my great surprise, I found," back in 1974, "that the black hole seemed to emit particles at a steady rate. Like everyone else at that time, I accepted the dictum that a black hole could not emit anything. I therefore invested substantial effort into dismissing this unsettling result, but it persistently refused to disappear, forcing me to eventually accept it" [10].
Crucially, this hypothetical radiation arises due to how observers at infinity categorize scalar field modes. This classification is discontinuous at the black hole's horizon and disregards all information about the modes within the horizon. Contrarily, an observer plunging into the black hole wouldn't perceive any particle creation, as they wouldn't employ such a discontinuous division but rather analyze the field through modes continuous at the event horizon [10].
Hawking radiation can be conceptualized as follows: Near the event horizon of a black hole, a particle-antiparticle pair spontaneously forms due to quantum fluctuations. One particle, which we will denote as \(A\), falls into the black hole possessing negative energy, while the other particle, \(B\), escapes with positive energy. Despite being separated, particles \(A\) and \(B\) remain entangled, ensuring energy conservation in the system. Over time, this process leads to continuous emission of particles (\(B\)) away from the black hole, a phenomenon termed Hawking radiation. Meanwhile, the absorption of negative energy particles (\(A\)) gradually decreases the black hole's mass and energy, eventually leading to its complete evaporation.
We face a perplexing question when we consider what happens to the information that enters a black hole. According to the no-hair theorem, a black hole is characterized solely by three parameters: mass, angular momentum, and electric charge. It ostensibly retains no other details about the matter it engulfs, suggesting that a vast amount of information becomes irrevocably lost during the process of gravitational collapse. This proposition seems to violate the second law of thermodynamics, since it would imply that the black hole has zero entropy even though the matter it swallowed carried high entropy.
However, this theory confronts a significant challenge from quantum mechanics, specifically the principles articulated in quantum field theory and the principle of unitarity. The latter insists on the conservation of information, which contests the assertion that information entering a black hole is irretrievably lost. Moreover, if information is preserved, the black hole would have non-zero entropy, giving it a finite temperature and leading it to emit thermal Hawking radiation. This process would theoretically result in the black hole gradually losing mass until it evaporates entirely, leaving only Hawking radiation composed of certain particles behind. This conservation of information gives rise to what is known as the black hole information paradox, a fundamental conflict between quantum mechanics and the theory of general relativity. This paradox underlines a critical discrepancy in our understanding of modern physics: if the information is indeed lost in a black hole, it undermines the foundations of quantum mechanics; conversely, if the information is conserved, it challenges the principles of classical general relativity.
In 1993, Leonard Susskind, Larus Thorlacius, and John Uglum introduced a solution to the black hole information paradox known as "black hole complementarity." This principle hinges on the experiences of two observers: Alice, who falls into the black hole, and Bob, who remains outside of it. According to this proposal, the conflicting descriptions of the black hole interior, as observed by Alice and Bob, are not contradictory but rather complementary, hence the term "black hole complementarity" [STU]; [LPSTU].
From Bob's perspective, observing from a safe distance outside the event horizon, the horizon acts as a physical membrane, becoming a hot layer just above the black hole's horizon, termed the "stretched horizon." In this perspective, Alice appears to become increasingly red-shifted as she approaches the event horizon, effectively getting "frozen" at the horizon. Alice would never seem to cross the event horizon; instead, she would get incinerated due to the extremely high temperatures. The black hole then re-emits her mass energy in the form of Hawking radiation, which carries the information about Alice, allowing Bob to theoretically reconstruct her from the information contained in the radiation, thereby preserving the principle of unitarity in quantum mechanics.
Conversely, from Alice's vantage point, as she falls towards the black hole, due to the equivalence principle, she doesn't notice anything unusual at the moment she crosses the event horizon, experiencing a "no drama" scenario. Contrary to Bob's observations, she wouldn't see herself getting stuck at the horizon or getting incinerated but would smoothly pass the event horizon and eventually meet her fate at the singularity at the black hole's core, seemingly violating the principle of unitarity due to the apparent loss of information as she crosses the event horizon.
Black hole complementarity proposes a dual reality where Alice's and Bob's descriptions are correct in their respective frames of reference. However, they cannot communicate and compare notes after Alice crosses the horizon. This approach offers a potential resolution to the black hole information paradox, albeit at the cost of introducing a vexing puzzle tied to the nature of entanglement and the very principles of quantum mechanics.
In this paradox, we envisage three particles: \(A\), \(B\), and \(C\), where \(A\) and \(B\) are a pair of entangled particles and particle \(C\) shares information with \(B\). Particle \(A\) is swallowed by the black hole, \(B\) is emitted as Hawking radiation, and particle \(C\) is another piece of radiation emitted before \(B\), creating a mixed state of \(A\), \(B\), and \(C\). Imagine a scenario involving Alice, an observer who first measures early radiation from particle \(C\), then does the same for the later radiation from particle \(B\) before crossing the event horizon to encounter particle \(A\), which shares entanglement with particle \(B\), and through it, with particle \(C\). This scenario raises critical questions grounded in the concept of monogamy of entanglement (the principle that a system maximally entangled with one partner cannot simultaneously be entangled with another), hinting at a violation of the no-cloning theorem of quantum mechanics. Having measured both \(B\) and \(C\), Alice carries this information into the black hole where she measures \(A\), suggesting the duplication of information, a direct contradiction to the no-cloning theorem prohibiting the exact replication of arbitrary unknown quantum states.
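The monogamy constraint can be made concrete with a toy calculation (my illustration, not part of the sources): if \(B\) is maximally entangled with \(C\), then \(B\) shares no correlation at all with \(A\).

```python
import numpy as np

def entropy_bits(rho):
    """Von Neumann entropy of a density matrix, in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

# Qubits ordered (A, B, C): B and C form a Bell pair, A is a bystander.
bell = (np.kron([1.0, 0], [1.0, 0]) + np.kron([0, 1.0], [0, 1.0])) / np.sqrt(2)
psi = np.kron([1.0, 0], bell)                        # |0>_A ⊗ |Φ+>_{BC}

rho = np.outer(psi, psi).reshape(2, 2, 2, 2, 2, 2)   # indices (A,B,C,A',B',C')
rho_A = np.einsum('abcdbc->ad', rho)                 # trace out B and C
rho_B = np.einsum('abcadc->bd', rho)                 # trace out A and C
rho_AB = np.einsum('abcdec->abde', rho).reshape(4, 4)  # trace out C

S_A, S_B, S_AB = map(entropy_bits, (rho_A, rho_B, rho_AB))
print(S_B)                # 1.0: B is maximally entangled (with C)
print(S_A + S_B - S_AB)   # 0.0: zero mutual information between A and B
```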
Seeking to resolve this deep-seated paradox, Ahmed Almheiri, Donald Marolf, Joseph Polchinski, and James Sully (AMPS) reformulated the black hole information paradox, presenting a new paradox, the "firewall paradox." AMPS brought the information paradox to a more concrete footing by arguing that if the information about the in-falling matter is to be preserved and able to be retrieved from a black hole, as suggested by unitary evolution in quantum mechanics, then there must be a highly energetic Planck-scale firewall at the event horizon, which would effectively destroy anything falling in, thus preserving the information. This conclusion, however, starkly contradicts the general relativistic prediction of a smooth event horizon, as well as the equivalence principle ("absence of drama for the infalling observer"), which states that falling through a horizon should be uneventful for an in-falling observer [AMPS].
In 2013, Daniel Harlow and Patrick Hayden (HH) proposed an innovative approach to address the firewall paradox's contentious issues. In this approach, they suggested that Alice could theoretically verify the transmission of quantum information emanating from the interior of a black hole, but to do this successfully, she would require a quantum computer equipped with an error-correcting code potent enough to handle computations of extraordinary complexity [Har-Hay].
In the subsequent discussion, I will demonstrate that, from an epistemic standpoint, the HH paper fundamentally altered the framework and tenor of the existing discourse and forged a novel pathway for engagement with the firewall paradox. Leveraging insights from computational theory unveiled new pathways for understanding and possibly resolving the firewall paradox.
## 3 HH begin with a physical problem
### Monogamy of entanglement and the Firewall paradox
HH consider the quantum description of a Schwarzschild black hole in 3 + 1 dimensions from the points of view of an external observer, Charlie, and an infalling observer, Alice. The entropy of the black hole is proportional to \(M^{2}\) in Planck units, where \(M\) is the mass of the black hole. The time it takes for the black hole to evaporate is proportional to \(M^{3}\). Alice has a task at hand to extract information from around \(n\sim M^{2}\) bits of Hawking radiation within a time frame \(T\sim n^{3/2}\) before the black hole evaporates completely. Alice needs to apply a unitary transformation to the Hawking radiation to extract the desired information. This process effectively unscrambles the desired information, making it accessible in a specific subfactor of the Hilbert space. Charlie is positioned far from the black hole, at infinity. This position gives him ample time and memory resources to measure the Hawking radiation emitted by the black hole with great precision.
_Let us start by considering Charlie's perspective_. Charlie's description is based on the following three postulates [Har-Hay]:
1) Charlie postulates that the black hole's formation and subsequent evaporation can be described as a unitary process.
2) Furthermore, we can conceptualize the system as undergoing either continuous or discrete time evolution, wherein at any individual moment, it is in a pure quantum state \(|\psi\rangle\), which lives in a specified Hilbert space labeled \(H_{\rm outside}\). Charlie breaks down \(H_{\rm outside}\) into three subspaces: \(H_{\rm outside}=H_{H}\otimes H_{B}\otimes H_{R}\), where \(H_{H}\) represents the degrees of freedom inside or close to the black hole. These degrees of freedom are heuristically associated with the "stretched horizon" at a Schwarzschild coordinate radius given by \(r=2GM+\epsilon\), where \(G\) is the gravitational constant, \(M\) is the black hole's mass, and \(\epsilon\) is some ultraviolet cutoff. \(H_{B}\) represents the field theory modes in the near-horizon region of the black hole. This region has a Schwarzschild coordinate radius in the range \(2GM+\epsilon<r<3GM\), indicating that it is close to the black hole but outside the stretched horizon described by \(H_{H}\). The geometry in this region is close to that of Rindler space. It will include modes with Schwarzschild energy less than the black hole temperature \(T=\frac{1}{4\pi GM}\). Modes with higher energy are not confined to this near-horizon region and are considered part of \(H_{R}\). \(H_{R}\) represents the Hawking radiation field, the modes of the radiation field outside the black hole, with Schwarzschild coordinate radius \(r>3GM\). \(H_{R}\) includes higher energy modes that are not confined to the near-horizon region described by \(H_{B}\).
We can relabel the different regions associated with a black hole and the surrounding environment as \(B\), \(R\), and \(H\). \(B\) is the zone just outside the event horizon of the black hole. \(B\) is where one can theoretically observe phenomena related to the near-horizon dynamics of the black hole, including the effects of Hawking radiation and other quantum gravitational effects. Subscript \(B\) ties it to phenomena and quantum information related to \(B\). \(H\) refers to the black hole's horizon, and \(R\) represents the radiation field outside the black hole, in a region farther from the black hole, where one can observe the radiation emitted by the black hole, including the Hawking radiation that has escaped the gravitational pull of the black hole. Subscript \(R\) ties it to the phenomena and quantum information associated with this region.
The dimensionalities of \(H\) and \(B\) are denoted by \(|H|\) and \(|B|\), which are related to the area of the black hole's horizon measured in Planck units. The logarithms of these dimensionalities (i.e., \(\log|H|\) and \(\log|B|\)) are proportional to the black hole's horizon area when we study the pure quantum state \(|\psi\rangle\). Over time, the sizes of \(|H|\) and \(|B|\) decrease, illustrating the changes in the black hole dynamics as it ages. \(H_{R}\) starts restricted to a subset involved with the black hole dynamics, but the size of \(H_{R}\), denoted \(|R|\), increases over time, indicating that more states become relevant to the black hole's dynamics as time goes on.
3) The third postulate involves the changes over time in the degrees of freedom or the "size" of \(H_{H}\) and \(H_{B}\). These changes in size are proportional to the area of the black hole's horizon, and as the black hole evolves, these values change. In other words, the behavior and properties of the black hole differ depending on whether \(|R|\) is larger or smaller than the product \(|H||B|\), giving rise to the distinction between "young" and "old" black holes. As a black hole ages, \(|R|\) expands, altering the entanglement dynamics between \(B\) and \(H\) within the broader \(H_{\rm outside}\). \(B\) and \(H\) are highly entangled when a black hole is young. As the black hole ages, \(B\) and \(H\) become a small part of a much larger system described in Hilbert space. After a certain time, known as Page time, HH show that the combined system of \(B\) and \(H\) reaches a state that can be described using a simple mathematical expression involving the identity operator, which basically means the system has become uniform [Har-Hay].
As time passes, the system reaches a state of maximal entanglement, where \(BH\) (the combination of \(B\) and \(H\)) is maximally entangled with \(R\). HH decompose \(H_{R}\) into a direct sum of the subspace \(H_{R_{H}}\otimes H_{R_{B}}\) and another space termed \(H_{\rm other}\) (which does not contribute to the described entangled state): \(H_{R}=(H_{R_{H}}\otimes H_{R_{B}})\oplus H_{\rm other}\). The dimensions \(|R_{H}|\) and \(|R_{B}|\) of \(H_{R_{H}}\) and \(H_{R_{B}}\) are set equal to \(|H|\) and \(|B|\), respectively, for the entanglement to hold [Har-Hay]. \(R_{B}\) represents a portion of the surrounding space farther away from the black hole, where the Hawking radiation emitted from the vicinity of the black hole (including \(B\)) can be observed. \(B\) and \(R_{B}\) are maximally entangled, describing a high degree of scrambling.
Let us keep in mind that we are still examining Charlie's perspective. Recall that the entire system (which encompasses both the black hole and the associated Hawking radiation, \(H_{R}\)) is represented by a pure quantum state, denoted \(\left|\psi\right\rangle\), within \(H_{\text{outside}}\). \(\left|\psi\right\rangle\) is in a superposition of product states from the \(H\), \(B\), \(R_{H}\), and \(R_{B}\) Hilbert spaces:
\[\left|\psi\right\rangle=\left(\frac{1}{\sqrt{\left|H\right|}}\sum_{h}\left|h \right\rangle_{H}\left|h\right\rangle_{R_{H}}\right)\otimes\left(\frac{1}{ \sqrt{\left|B\right|}}\sum_{b}\left|b\right\rangle_{B}\left|b\right\rangle_{R_ {B}}\right), \tag{1}\]
where the terms \(\left|h\right\rangle_{H}\) and \(\left|b\right\rangle_{B}\) represent basis states in the spaces associated with the black hole and \(B\), respectively, \(\left|h\right\rangle_{R_{H}}\) and \(\left|b\right\rangle_{R_{B}}\) represent basis states in the radiation Hilbert space corresponding to the states in the black hole and \(B\).
Equation (1) represents the state of an old black hole, written using Schmidt decomposition. It is factorized into two independent subsystems: one associated with \(H\) and one associated with \(B\). The fundamental argument of HH is grounded on equation (1).
The state in the equation is built such that each basis state in \(H\) is entangled with a corresponding state in \(R_{H}\) and similarly for \(B\) and \(R_{B}\). In other words, this structure reveals a deep and complex entanglement pattern: the states in \(H\) are entangled with specific states in \(R_{H}\), and those in \(B\) are entangled with specific states in \(R_{B}\). By setting up an entanglement between \(H\) and \(R_{H}\), and \(B\) and \(R_{B}\), we ensure the full state \(\left|\psi\right\rangle\) is pure despite the subsystems \(H\) and \(B\) individually being in mixed states.
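A toy numerical check of this structure (my sketch, with hypothetical dimensions \(|H|=|B|=2\)) confirms that the full state (1) is a product of two maximally entangled pairs whose \(BH\) marginal is maximally mixed, as stated above:

```python
import numpy as np

def max_entangled(d):
    # |phi_d> = (1/sqrt(d)) * sum_i |i>|i>
    return np.eye(d).reshape(d * d) / np.sqrt(d)

dH = dB = 2
# |psi> = |phi>_{H,R_H} ⊗ |phi>_{B,R_B}, index order (H, R_H, B, R_B)
psi = np.kron(max_entangled(dH), max_entangled(dB))

rho = np.outer(psi, psi).reshape([dH, dH, dB, dB] * 2)
rho_HB = np.einsum('hrbsirjs->hbij', rho).reshape(dH * dB, dH * dB)  # trace out R

print(np.allclose(rho_HB, np.eye(dH * dB) / (dH * dB)))  # True: maximally mixed
```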
_Now consider the perspective of Alice_, falling into a black hole, and compare her observations with those of Charlie, who is observing from outside the black hole. Here are the key points and steps of her argument: before reaching the singularity of the black hole, Alice also perceives her surroundings through a description based on a Hilbert space, \(H_{\text{inside}}\), divided into several subspaces: \(H_{\text{inside}}=H_{A}\otimes H_{B}\otimes H_{R}\otimes H_{H^{\prime}}\), where, \(H_{A}\) represents the Hilbert space associated with \(A\), which encompasses the field theory modes inside the black hole's event horizon from Alice's perspective. \(A\) is just inside the event horizon of the black hole; \(H_{B}\) and \(H_{R}\) are outside the black hole and accessible to both Alice and Charlie; \(H_{H^{\prime}}\) is related to Alice's horizon, distinct from the black hole's horizon. From Alice's perspective, \(H_{H}\) is absent. The region associated with \(H_{H}\) is crossed by Alice in an extremely short time, meaning it has no operational significance to her; she wouldn't be able to conduct any meaningful measurements in such a short time frame.
To avoid contradictions in Alice and Charlie's observations, an "overlap rule" ensures that both agree on the experiments' results they can communicate. Charlie and Alice, despite potentially having different perspectives, must come to the same conclusion regarding the density matrix of \(H_{B}\otimes H_{R}\).
Alice would normally expect to see a smooth vacuum at the black hole horizon based on general relativity, with specific modes in \(H_{B}\) and \(H_{A}\) being closely entangled, reflecting a smooth vacuum at the horizon. However, this idea faces a significant problem due to the assumptions about what Charlie perceives at this stage of the black hole's life (specifically, an old black hole).
\(A\), \(B\), and \(R\) would each have its associated Hilbert space representing all possible quantum states. \(B\) is almost maximally entangled with \(R_{B}\), which is perceivable by Charlie. Even though Alice cannot directly perceive \(R_{B}\) (and Charlie cannot directly perceive \(A\)), the high degree of entanglement between \(B\) and \(R_{B}\) means that observing a part of the entangled system (i.e., \(B\)) gives information about the other part of the system (\(R_{B}\)). By the overlap rule, what Charlie perceives about the entanglement between \(B\) and \(R_{B}\) must also be true for Alice when she observes \(B\).
However, because of the monogamy of entanglement, \(B\) cannot be maximally entangled with \(A\) and \(R_{B}\) simultaneously. This contradicts the expectation of a smooth vacuum on the horizon for Alice. Recall from section 2 that to resolve this contradiction, the AMPS argument posits that instead of a smooth vacuum, there is a firewall -- a region of high-energy particles -- at the horizon of an old black hole, which effectively annihilates anything or anyone, including Alice, that falls into it [AMPS]; [Har-Hay].
### Strong and standard complementarity
To navigate this problem, one can choose between two potential directions [Har-Hay]:
1) _Strong complementarity_: HH have defined Alice's and Charlie's theoretical frameworks. Alice's theory doesn't directly relate to Charlie's in this approach. They only need to agree on the experimental results visible to both.
The event horizon serves as a barrier, making direct access to information in \(R_{B}\) inherently impossible from Alice's localized perspective near the black hole. The states in \(R_{B}\) are maximally entangled with those in \(B\), i.e., knowing the state in \(B\) allows one to predict the state in \(R_{B}\) perfectly, and vice versa. Over time, the information about the black hole gets scrambled and distributed over a very wide area, including \(R_{B}\). In Charlie's theory, \(B\) and \(R_{B}\) are significantly entangled due to the dynamics of black hole evaporation. This entanglement ties up \(B\), preventing it from being highly entangled with \(A\). But Alice cannot access quantum states in the Hilbert space \(H_{R_{B}}\) associated with \(R_{B}\). So, in Alice's theory, there are two potential solutions to address this problem:
1. Disentangle \(H_{R_{B}}\) from \(H_{B}\): This would involve creating a theory where the states in \(R_{B}\) are no longer entangled with those in \(B\). This freeing up of \(B\) would entangle it with \(A\).
2. Remove \(H_{R_{B}}\) from her theory entirely. Alice's theory would ignore the states in \(R_{B}\), treating them as irrelevant. This would again free up \(B\) to become entangled with \(A\). This would grant Alice a smooth journey across the horizon.
Thus, according to strong complementarity in Alice's frame, certain manipulations or neglect of \(H_{R_{B}}\) are postulated to allow a self-consistent description of a smooth horizon, harmonizing the dual descriptions by Charlie and Alice. Strong complementarity basically advocates for a kind of duality in the descriptions, where both are valid in their respective domains. Still, neither description can be globally valid, thereby complementing each other.
2) _Standard complementarity_: While described using different theoretical frameworks, Alice's and Charlie's observations can be compatible because Alice's theory can be seen as a subset embedded in Charlie's more encompassing theory. A theoretical framework is developed where the interior operators Alice uses to describe phenomena in \(H_{A}\) are formulated using the exterior operators that Charlie would use to describe \(H_{R_{B}}\). This attempt is made to avert the problem of firewalls and maintain the notion of a smooth space at the horizon, referring to it as \(A=R_{B}\) [Bousso].
This equality symbolizes an effort to harmonize the two perspectives (Alice's and Charlie's) by identifying the region described by Alice (\(A\)) with \(R_{B}\) in Charlie's description. By developing a theory where Alice's interior description (on \(H_{A}\)) is formulated using exterior operators that Charlie would use to describe \(H_{R_{B}}\), standard complementarity aims to avoid the problem of firewalls and uphold the idea of a smooth transition at the event horizon [STU]; [LPSTU].
However, identifying Alice's \(A\) with Charlie's \(R_{B}\) introduces a notable problem. It doesn't restrict Alice from making direct measurements in \(R_{B}\), which, according to the broader theory (Charlie's perspective), she shouldn't be able to access directly. This results in a paradox where Alice, while theoretically unable to access \(R_{B}\), finds herself able to do so according to this approach, thereby contravening the theory that posits complementarity.
### The unitary transformation and its inverse
In what follows, I show that HH introduce a new perspective that involves computational complexity to help address the issues raised by the AMPS paradox. They bring a different angle to this discussion, focusing on the computational complexity involved in reconstructing the interior of a black hole (Alice's point of view) from the information outside of it (Charlie's point of view). They argue that the task of reconstructing what is inside (and thus detecting a firewall) would be computationally intractable. I demonstrate that HH bring a fresh perspective that adds a nuanced layer to standard complementarity, though they navigate around it rather than extend it.
Initially, HH phrase the discussion in terms of Charlie's Hilbert space. Since both Charlie and Alice are required to agree on the density matrix that describes the system in terms of \(H_{B}\) and \(H_{R}\), it implies that a consistent representation of \(|\psi\rangle\) can be achieved from the perspective of Charlie's Hilbert space. Hence, despite potentially different vantage points, Alice and Charlie will converge on a shared description, abiding by the constraints of the overlap rule.
First, HH specify that in the Schmidt decomposition, one describes the state of the old black hole in terms of a basis that effectively separates the early and late-time radiations, see equation (1). However, HH will use a particular basis (denoted as "computational basis") to describe the radiation field \(H_{R}\), which will be convenient for Alice to work with. They write an equation that denotes the specific computational basis state \(\left(|bhr\rangle_{R}\right)\) in \(H_{R}\) using \(n\), \(k\), and \(m\) that describe various aspects of that state. Working exclusively with this basis is intended to simplify calculations for Alice as she tries to work through the decoding problem with a basis where the mathematical expressions become more tractable. Alice uses \(n\), \(k\), and \(m\) to represent different quantities related to her problem of decoding information from the black hole's Hawking radiation: \(n\) represents the total number of qubits involved in the problem. This is related to the logarithm to base 2 of the dimension of \(R\): \(n\equiv\log_{2}|R|\); \(k\) represents the number of qubits associated with \(H_{B}\); and \(m\) represents the number of qubits associated with \(H_{H}\). We can think of \(k+m\) as the number of qubits remaining in the black hole. \(H_{H}\), along with \(H_{B}\), forms the entirety of the black hole's state that Alice is interested in for her problem.
Second, HH define \(U_{R}\) as a particular (scrambling) unitary transformation on \(H_{R}\) and write \(\left|\psi\right\rangle\) [equation (1)] in the computational basis:
\[\left|\psi\right\rangle=\frac{1}{\sqrt{|B||H|}}\sum_{b,h}\left|b\right\rangle_ {B}\left|h\right\rangle_{H}U_{R}\left|bh0\right\rangle_{R}. \tag{2}\]
\(U_{R}\)'s exact form or structure depends on the details of the black hole's initial state and on quantum gravity, a yet-to-be-realized theory. The discussion here pertains to Schwarzschild black holes instead of AdS eternal black holes, which do not evaporate and maintain a dynamic equilibrium with Hawking radiation. This dynamic equilibrium significantly changes the conditions and assumptions on which the HH decoding task and the \(U_{R}\) are defined. HH note that "Big AdS black holes do not evaporate at all, so the AMPS argument does not directly apply to them, but arguments have been put forward suggesting that they nonetheless have firewalls" [Har-Hay].
Alice's challenge is applying the inverse of \(U_{R}\), denoted \(U_{R}^{\dagger}\), to the Hawking radiation. This allows her to confirm the entanglement between \(H_{B}\) and \(H_{R_{B}}\). Applying \(U_{R}^{\dagger}\) reverses the transformation effected by \(U_{R}\), decoding the information encoded in the Hawking radiation. The goal is to retrieve the information encoded in the entangled states between \(H_{B}\) and \(H_{R_{B}}\) from the Hawking radiation. Through decoding, Alice can confirm the entanglement between \(H_{B}\) and \(H_{R_{B}}\), gaining insight into the states and dynamics occurring in \(R_{B}\) through her operations on the radiation emitted by the black hole without directly accessing \(R_{B}\). By utilizing the decoded information from the Hawking radiation, Alice can avoid the need for direct measurement in \(R_{B}\), thus avoiding the paradox arising from her ability to access information in \(R_{B}\) directly according to the standard complementarity approach. In other words, Alice can bypass the complications brought about by the firewall argument while upholding the principles inherent in the black hole complementarity.
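The logic of this decoding step can be illustrated with a toy scrambler (my sketch, not HH's construction; a Haar-random matrix stands in for \(U_{R}\), whose true form would require a theory of quantum gravity):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6                               # toy "radiation" register of 6 qubits
dim = 2 ** n

# Haar-random unitary standing in for the scrambler U_R
Z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
Q, R = np.linalg.qr(Z)
U_R = Q * (np.diagonal(R) / np.abs(np.diagonal(R)))   # phase-fix the QR factor

state = np.zeros(dim, dtype=complex)
state[3] = 1.0                      # some computational-basis state |bh0>

scrambled = U_R @ state             # the radiation Alice actually receives
decoded = U_R.conj().T @ scrambled  # Alice's ideal U_R^dagger

print(np.allclose(decoded, state))  # True -- given a full description of U_R
```

At 6 qubits this is trivial; the point of the passage above is that compiling \(U_{R}^{\dagger}\) as a physical circuit for a realistic \(n\) costs an exponential number of operations.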
However, the time Alice would need to decode the information is exponential and proportional to \(2^{k+m+n}\). This time complexity is extremely large, indicating a computational task of astronomical proportions. Since the black hole is old, \(n\) (the number of radiation qubits) is significantly larger than the sum of \(k\) and \(m\): \(n>k+m\). Using this condition in \(2^{k+m+n}\), we can establish a lower bound on the time complexity of at least \(2^{2(k+m)}\). But this expression also involves an exponential function and, therefore, would be extremely large for non-trivial values of \(k\) and \(m\).
Thus, reversing \(U_{R}\) exactly would be practically impossible for Alice. Hence, a framework is required to quantify how close Alice can reach the perfect task. HH use a concept of "trace norm," which allows them to define a notion of "closeness," how close Alice needs to get to test the entanglement accurately [Har-Hay]. In other words, they aim to find a unitary operation close enough to the ideal operation \(U_{R}^{\dagger}\) such that the error in the subsequent measurements is within acceptable bounds. Defining a threshold for the closeness using trace norm would define how accurately Alice needs to perform the unitary transformation to test the entanglement reliably. It establishes a criterion for the allowable error in Alice's operation, such that she can still validate the entanglement through her measurements. By working within this framework, Alice aims to construct a unitary transformation that, while not the same as the ideal \(U_{R}^{\dagger}\), is close enough according to this defined metric to allow her to verify the entanglement adequately. This approach acknowledges the practical difficulties and uncertainties in constructing the exact unitary transformation and provides a pathway to achieve Alice's goal within tolerable error margins.
By allowing for a non-ideal transformation, where Alice tries to get close enough through trace norms, there might be a reduction in the complexity of the task at hand. However, even when settling for a non-ideal transformation, Alice still faces an enormously complex computational task. Though the requirement is somewhat relaxed compared to achieving the ideal transformation, the task remains a high-order polynomial time problem, implying a computational duration that vastly exceeds the black hole's lifespan.
## 4 HH provide Alice with a quantum computer
### Alice is distilling the radiation with a quantum computer
Alice is on a mission to decode the entanglement between \(R\) and \(B\), aiming to transform it into a format that is easier to analyze. To accomplish this, Alice uses a quantum computer whose initial state is defined in a new Hilbert space \(H_{C}\). This computer interacts with the radiation Hilbert space \(H_{R}\), undergoing a unitary evolution described by \(U_{\mathrm{comp}}\). In other words, \(U_{\mathrm{comp}}\) operates on a larger Hilbert space that includes \(H_{R}\) and \(H_{C}\): \(H_{R}\otimes H_{C}\). Alice employs \(U_{\mathrm{comp}}\) (similar to \(U_{R}^{\dagger}\)) to reverse the effects implemented by \(U_{R}\) to distill \(R_{B}\). This intends to decode the information entangled with \(B\) during the evolution governed by \(U_{R}\).
However, Alice's major challenge is finding the appropriate initial state for her computer:
\[U_{\mathrm{comp}}:U_{R}|bh0\rangle_{R}\otimes|\psi\rangle_{C}\rightarrow| \mathrm{something}\rangle\otimes|b\rangle_{\mathrm{mem}}. \tag{3}\]
Alice is looking for a particular initial state for her computer, denoted \(\left|\psi\right\rangle_{C}\), such that under \(U_{\mathrm{comp}}\), this state interacts with the states from the radiation field \(U_{R}\left|bh0\right\rangle_{R}\) to produce a final state where the qubits representing \(\left|b\right\rangle\) (which are associated with the \(B\) basis) are stored in the first \(k\) qubits of the memory of her computer, separated from the other parts of the system described by \(\left|\mathrm{something}\right\rangle\). In other words, Alice is trying to isolate information about \(B\) into her computer's memory [Har-Hay].
However, finding the initial state \(\left|\psi\right\rangle_{C}\) for the computer to start the computation is extremely challenging. Even if Alice finds one, the probability that it successfully facilitates the computation is minuscule, being exponentially small in terms of the dimensions of the Hilbert space. To estimate the likelihood of finding a successful initial state, HH use the trace norm to construct a net of states such that every pure state in the Hilbert space lies close to some element of it. Even after considering various potential unitary evolutions, the probability of finding a successful initial state remains extraordinarily small. The time required to find a suitable \(U_{\mathrm{comp}}\) by random chance is identified as the quantum recurrence time, the time scale over which a quantum system revisits a particular state it was in at some earlier time due to its natural dynamics. The recurrence time is immensely long, reaching up to \(10^{10^{40}}\) years, and it is especially long for the complex system Alice is dealing with, involving a black hole and Hawking radiation. This time scale is doubly exponential in terms of the entropy of the whole system, i.e., it increases extremely fast as the system's complexity increases. Thus, the recurrence time is potentially longer than the lifespan of a Schwarzschild or an astrophysical black hole.
Despite the immense challenge posed by the quantum recurrence time, Alice has another strategy up her sleeve. Instead of the brute force approach, waiting for the right \(U_{\mathrm{comp}}\) to occur by chance, she aims to actively find it by exploiting patterns in how \(U_{\mathrm{comp}}\) evolves. If there are any predictable structures in its evolution, she can use this information to guide her search, significantly reducing the time it would take to find the right \(U_{\mathrm{comp}}\). By leveraging the structures in \(U_{\mathrm{comp}}\)'s evolution, Alice can potentially reduce the computational time from a double exponential dependency on the entropy of the radiation to a single exponential dependency. This is a very long time since single exponential growth is very fast. But it is much more manageable compared to a double exponential growth. So, this strategy offers a glimmer of hope; despite the initially grim prognosis suggested by the quantum recurrence time, if Alice can understand and leverage the underlying physics and mathematical properties of her system well enough, she might be able to achieve her goal in a "reasonable" amount of time, though still astronomically long from a human perspective [Har-Hay].
### The Solovay-Kitaev theorem
HH then ask [Har-Hay]: how many quantum gates would be necessary to implement a unitary transformation as complicated as \(U_{R}\)? They show that the number of gates required scales as a single exponential in the number of qubits \(n\): \(2^{2n}\) times a logarithm of a precision parameter \(\epsilon\), i.e., \(2^{2n}\log\Bigl(\frac{1}{\epsilon}\Bigr)\).
This is a significant result because it means the computational time is a lot less than previously assumed based on a crude double exponential scaling (for instance, \(2^{2^{n}}\)). This improvement is partly credited to the Solovay-Kitaev theorem, which suggests that an efficient sequence of gates can be found to perform any unitary operation to a high degree of precision. Despite this improvement, reducing the computational time further seems unlikely, indicating that \(U_{R}\) and \(U_{R}^{\dagger}\) are still highly non-trivial tasks even under optimal conditions. Adjusting the model, such as using different gates or more complex quantum entities like qutrits (which have three basis states) instead of qubits, doesn't fundamentally change the \(2^{2n}\) scaling of the problem.
We can speed up computations significantly compared to initial expectations. However, we are still looking at a process that requires a time that scales exponentially with the number of qubits, which indicates extremely long computation times for complex operations involving many qubits. This makes Alice's task of implementing \(U_{R}\) within a reasonable time frame highly unlikely.
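A back-of-the-envelope evaluation of this \(2^{2n}\log(1/\epsilon)\) scaling (my arithmetic, with an illustrative precision \(\epsilon=10^{-3}\)) shows how quickly the count explodes:

```python
import math

def gate_count(n, eps=1e-3):
    """The 2^{2n} * log(1/eps) gate-count scaling quoted above."""
    return (2 ** (2 * n)) * math.log(1 / eps)

for n in (10, 40, 80):
    print(n, f"{gate_count(n):.2e}")
# n = 80 already demands ~10^{49} gates; for a black hole, n ~ M^2 is vastly larger.
```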
## 5 HH combine black hole physics and computing
### Taking into consideration the dynamics of the black hole
HH then suggest that the dynamics of a black hole constrain \(U_{R}\) in a way that could help Alice implement it faster. A transformation \(U_{\rm dyn}\) is defined as a unitary transformation that operates on different microstates of a black hole, and "it seems quite reasonable to assume that \(U_{\rm dyn}\) can be generated by a polynomial number of gates," i.e., a computational operation that doesn't require an astronomical number of quantum gates to perform [Har-Hay]. I am quoting this phrase because I find it problematic from a conceptual and philosophical standpoint. I elaborate on this perspective below.
HH define the state \(\left|\psi\right\rangle\) as arising from the action of the polynomial size circuit \(U_{\rm dyn}\):
\[U_{\rm dyn}\!\left|0\right\rangle_{BHR}=\frac{1}{\sqrt{\left|B\right|\!\left|H \right|}}\sum_{bh}\left|b\right\rangle_{B}\left|h\right\rangle_{H}U_{R}\left| bh0\right\rangle_{R}. \tag{4}\]
They provide a more comprehensive explanation of this matter. Alice wants to determine if a small circuit for \(U_{\rm dyn}\) implies a small circuit for \(U_{R}\). If such a circuit exists, she could more easily decode \(R_{B}\) from the Hawking radiation. The matrix \(U_{R}\) is derived from \(U_{\rm dyn}\) and, unlike \(U_{\rm dyn}\), is sensitive to the system's initial state. For the sake of simplicity, an initial state is chosen with all bits set to zero. HH write an expression for \(U_{\rm dyn}\) acting on this initial state, intending to learn more about \(U_{R}\):
\[U_{\rm dyn}\left|00000\right\rangle_{\rm init}\approx\frac{1}{\sqrt{|B||H|}}\sum_{ b,h}\left|b\right\rangle_{B}\left|h\right\rangle_{H}U_{R}\left|bh0\right\rangle_{R}. \tag{5}\]
Some physical assumptions about \(U_{\rm dyn}\) are necessary to further this exploration. While the precise dynamics of quantum gravity remain undefined, HH refer to theories like AdS/CFT and matrix theory, which give ground to assume that \(U_{\rm dyn}\) can be generated through a polynomial number of gates in a small circuit [Har-Hay]. So, the core search revolves around whether a small circuit for \(U_{\rm dyn}\) ensures a small circuit for \(U_{R}\). If affirmed, this would imply that Alice could feasibly decode \(R_{B}\) from the Hawking radiation. However, a challenge arises from the eternal nature of black holes as posited by AdS/CFT theory, which ostensibly disputes the premise of the firewall paradox. This presents a theoretical contradiction, introducing complexity and potential disagreement in integrating these theories with the problem at hand.
Setting this complication aside, HH proceed to articulate their argument: \(U_{\rm dyn}\) can be expressed as a product of \(U_{R}\) (more precisely, of the operator \(\tilde{U}_{R}\) defined below, which acts like \(U_{R}\) on the relevant state) and another operation called \(U_{\rm mix}\). \(U_{\rm mix}\) is a unitary operation that creates entanglement between different subsystems, mixing or scrambling the information within them to create a highly entangled state.
Here, \(U_{\rm mix}\) is conceptualized as a straightforward circuit that entangles the initial four subfactors in the chosen initial state. HH emphasize the simplicity of implementing \(U_{\rm mix}\), noting that a universal circuit would suffice:
\[U_{\rm mix}\left|00000\right\rangle_{\rm init}=\frac{1}{\sqrt{|B||H|}}\sum_{b,h}\left|b\right\rangle_{B}\left|h\right\rangle_{H}\left|bh0\right\rangle_{R}. \tag{6}\]
HH then define a new operator \(\tilde{U}_{R}\) as: \(\tilde{U}_{R}=U_{\rm dyn}U_{\rm mix}^{\dagger}\), which has the property:
\[\tilde{U}_{R}\frac{1}{\sqrt{|B||H|}}\sum_{b,h}\left|b\right\rangle_{B}\left|h \right\rangle_{H}\left|bh0\right\rangle_{R}=\frac{1}{\sqrt{|B||H|}}\sum_{b,h} \left|b\right\rangle_{B}\left|h\right\rangle_{H}U_{R}\left|bh0\right\rangle_{ R}. \tag{7}\]
HH argue that a small circuit can implement \(\tilde{U}_{R}\): applied to a specific superposed state, a complex combination of states from \(B\) and \(H\), it reproduces the action of \(U_{R}\) [Har-Hay].
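The defining algebra here is simple enough to sanity-check numerically. The toy script below (my own illustration; it says nothing about circuit sizes, which is where the real difficulty lies) builds random unitaries standing in for \(U_{\rm mix}\) and \(U_{\rm dyn}\), forms \(\tilde{U}_{R}=U_{\rm dyn}U_{\rm mix}^{\dagger}\), and confirms that \(\tilde{U}_{R}\) maps the state prepared by \(U_{\rm mix}\) to the state prepared by \(U_{\rm dyn}\), which is the content of eq. (7):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(d: int) -> np.ndarray:
    """Haar-distributed unitary via QR decomposition of a complex Gaussian."""
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))  # fix column phases

d = 2 ** 5                    # five qubits standing in for B, H, and R together
U_mix = random_unitary(d)     # the "easy" entangling circuit
U_dyn = random_unitary(d)     # the black-hole dynamics (poly-size, per HH)
U_tilde = U_dyn @ U_mix.conj().T

psi0 = np.zeros(d, dtype=complex)
psi0[0] = 1.0                 # the all-zeros initial state |00000>
assert np.allclose(U_tilde @ (U_mix @ psi0), U_dyn @ psi0)
print("U_tilde reproduces the action of U_dyn on the chosen initial state.")
```

The identity holds by construction; HH's point is that \(\tilde{U}_{R}\) inherits a small circuit from \(U_{\rm dyn}\) and \(U_{\rm mix}\), while still acting on qubits Alice cannot touch.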
### Alice again confronts complications
The above process is highly non-trivial because, as seen in the previous section, the new operator \(\tilde{U}_{R}\) is intricately dependent on the precise initial states of the qubits in \(B\) and \(H\), which are part of an entangled system associated with a black hole. Ideally, applying \(\tilde{U}_{R}\) would unscramble the quantum information, extracting it from the entangled state and converting it into a format that can be accessed and analyzed more straightforwardly. Although it seems that \(\tilde{U}_{R}\) could be utilized to decode the information, a pivotal issue arises: the operation of \(\tilde{U}_{R}\) involves all qubits, including those in \(B\) and \(H\), to which Alice doesn't have access. Hence, implementing \(\tilde{U}_{R}\) as is would be infeasible. Alice therefore considers a strategy where she replaces the unavailable qubits from \(B\) and \(H\) with ancillary qubits in a random state, hoping this would allow her to use \(\tilde{U}_{R}\) to undo \(U_{R}\). The difficulty remains, as \(\tilde{U}_{R}\) is fundamentally tied to the initial states of \(B\) and \(H\).
Ultimately, it is highly unlikely that Alice can find a simple, small circuit to implement \(U_{R}\), due to the constraints imposed by \(U_{\rm dyn}\) and the complexity arising from \(U_{R}^{\dagger}\). Because Alice doesn't have access to all the qubits she needs, she can't directly apply \(U_{R}^{\dagger}\) to reverse the dynamics and extract the information encoded in \(R_{B}\). The transformations she would like to apply, including \(\tilde{U}_{R}\), are sensitive to the states of all the zones, including \(B\) and \(H\), which she can't access. Therefore, Alice can't take a straightforward approach to reversing the dynamics in a manageable amount of time, i.e., in a time that is polynomial in the entropy. So, Alice is stuck with a brute-force strategy, using a tremendous number of operations (\(2^{n+k+m}\) gates, a huge and potentially impractical number) to construct \(U_{R}^{\dagger}\). HH, therefore, express a kind of pessimism or skepticism that there is an easy solution to Alice's problem [Har-Hay].
## 6 HH transform a physical problem into a quantum coding problem
### Alice is utilizing an error-correcting code
HH then hint toward the possibility of exploring the problem further through the lens of error-correcting codes and complexity theory, albeit acknowledging the sheer challenge posed by the exponential increase in complexity [Har-Hay]. They recast Alice's task as a quantum coding problem.
Alice's main difficulty is retrieving information while dealing with errors introduced by the black hole environment, which cause erasures in \(B\) and \(H\) (the systems that define the state of the black hole). Recall that Alice has no access to \(B\) and \(H\); this lack of access is represented mathematically as erasures in these systems. These erasures pose a significant challenge, as they remove information that is vital for successfully decoding the black hole's initial state. Alice uses an error-correcting code to recover information that has been erased and affected by the interaction of the black hole with its environment.
HH write equation (2) in an equivalent way:
\[\left|\psi\right\rangle=\frac{1}{\sqrt{\left|B\right|}}\sum_{b}\left|b\right\rangle_{B}\left|\overline{b}\right\rangle,\qquad\left|\overline{b}\right\rangle\equiv\frac{1}{\sqrt{\left|H\right|}}\sum_{h}\left|h\right\rangle_{H}U_{R}\left|bh0\right\rangle_{R}. \tag{8}\]
The state \(\left|\psi\right\rangle\) is expressed as a superposition of product states \(\left|b\right\rangle_{B}\left|\overline{b}\right\rangle\), where each \(\left|\overline{b}\right\rangle\) is defined in terms of \(\left|b\right\rangle_{B}\) and \(U_{R}\). HH are trying to simplify the problem by focusing on a smaller Hilbert space: the states \(\left|\overline{b}\right\rangle\) form a basis for a \(2^{k}\)-dimensional subspace of \(H_{H}\otimes H_{R}\). But they also underscore the intertwined nature of the state \(\left|\psi\right\rangle\) in the black hole and the Hawking radiation system, with its behavior being determined by interactions between multiple different Hilbert spaces.
Alice creates a quantum code using \(U_{\rm enc}\), which is built from \(U_{R}\) and \(U_{\rm mix,H}\): \(U_{\rm enc}\equiv U_{R}U_{\rm mix,H}\). \(U_{\rm mix,H}\) is a simple entangling transformation analogous to \(U_{\rm mix}\), applied to a subset of the qubits representing a part of the radiation emitted: it affects the \(m\) qubits of \(H\) and qubits \(n+1\) through \(n+m\) of \(R\). The encoding involves entangling pairs of qubits from \(H\) and \(R\) to produce new states. Up to this point in their discussion, HH have focused on the errors that arise from the fact that Alice does not have direct access to all the necessary information from \(B\) and \(H\) to decode the information she seeks straightforwardly. However, they point out that this isn't the only source of potential errors in Alice's task. They bring attention to another substantial challenge: the black hole emits Hawking radiation that includes gravitons, which are hard to detect, let alone coherently manipulate. Information about the black hole's state is thus carried away with that radiation, and some of it is lost.
When Alice applies \(U_{\rm enc}\), she identifies a subspace within the larger Hilbert space. By narrowing the problem down to a specific subspace, Alice reduces the computational complexity and resource requirements of the task at hand. Operating in a smaller subspace allows a more streamlined approach to information retrieval, bypassing the need to deal with the prohibitively large set of all possible quantum states in the full Hilbert space. This subspace effectively encodes the information about the black hole state and aids in protecting and retrieving the information even after some amount of information loss and erasure. By identifying, through the \(U_{\rm enc}\) transformation, a subspace where the black hole information is encoded, Alice can work with a reduced, more manageable set of states, facilitating the decoding process.
When Alice wants to retrieve the information, she is looking to decode the information stored in this subspace. This decoding process reverses \(U_{\rm enc}\) to retrieve the original information, or at least the pertinent part of it, even after parts of the system have been erased. However, reversing \(U_{\rm enc}\) to decode the information isn't straightforward. Alice uses a universal gate set (CNOT gates and Hadamard transformations) to build \(U_{\rm enc}\), and she tries to identify a suitable decoding process that can retrieve the original information from the encoded state. To do this, Alice introduces an additional system denoted \(B^{\prime}\), with the same number of qubits as \(B\), and forms a new, more extensive set of encodings. The crucial part of the decoding process involves unscrambling the entanglements between \(B^{\prime}\) and \(R\) by reversing the transformation \(U_{\rm enc}\), which is constructed from \(U_{\rm dyn}\) and the transformation that involves the new system \(B^{\prime}\), \(U_{\rm mix,B^{\prime}}\).
More specifically, HH want to establish \(U_{\rm dyn}\) as the encoding transformation \(U_{\rm enc}\) and \(U_{R}\) as a correction operation. However, there is an issue with the dimensionality of the code space, which they address by introducing an additional system \(B^{\prime}\) with the same number of qubits as \(B\). They then apply a transformation \(U_{\text{mix},B^{\prime}}\) to entangle \(B^{\prime}\) with \(B\), thereby creating a \(k\)-qubit code subspace within a larger \(2k+m+n\) qubit Hilbert space using \(U_{\text{enc}}\). \(U_{\text{enc}}\) is defined by applying \(U_{\text{mix},B^{\prime}}\) followed by \(U_{\text{dyn}}\). \(U_{\text{enc}}\) operates on initial states to generate a new state that incorporates \(B\), \(B^{\prime}\), \(H\), and \(R\), such that:
\[U_{\text{enc}}\left|b^{\prime}\right\rangle_{B^{\prime}}\left|0\right\rangle_{ BHR}=\frac{1}{\sqrt{|B||H|}}\sum_{bh}U_{\text{mix},B^{\prime}}\left|b^{ \prime}b\right\rangle_{B^{\prime}B}\left|h\right\rangle_{H}U_{R}\left|bh0 \right\rangle_{R}. \tag{9}\]
HH acknowledge the presence of errors introduced by the environment, conceptualized as erasures affecting both \(B\) and \(H\). To restore the initial state by unscrambling the entanglement between \(B^{\prime}\) and \(R\), one needs to implement \(U_{R}^{\dagger}\). By applying \(U_{R}^{\dagger}\) on the newly defined state, and subsequently acting with \(U_{\text{mix},B^{\prime}}\), the information \(b^{\prime}\) is recovered. This process involves taking the second element of each pair of qubits from the first \(k\) qubits of \(R\) rather than from \(B\), which is no longer accessible. This produces the state:
\[\left|b^{\prime}\right\rangle\frac{1}{\sqrt{|B||H|}}\sum_{bh}\left|b\right\rangle_{B}\left|h\right\rangle_{H}\left|bh0\right\rangle_{R}. \tag{10}\]
Following the recovery of \(b^{\prime}\), the initial state can be restored in polynomial time using ancillary qubits to reset \(BHR\) to the state \(\left|0\right\rangle_{BHR}\) and then utilizing \(U_{\text{enc}}\) to revert to the desirable state formed by the encoding transformation involving \(B^{\prime}\) and \(BHR\) [Har-Hay].
By introducing a new system \(B^{\prime}\), Alice adds a new set of variables. \(B^{\prime}\) interacts with the radiation (\(R\)); a new set of entangled states between \(B^{\prime}\) and \(R\) are formed through this interaction. These entangled states carry information about the black hole. \(U_{\rm enc}\) is constructed from the dynamics of the black hole and the transformations involving \(B^{\prime}\); decoding amounts to running this encoding in reverse, unscrambling the entangled states formed by the interactions between \(B^{\prime}\) and \(R\) to recover the original data. In this way Alice tries to reverse the effects of the black hole dynamics on the information by decoding the previously encoded information in the scrambled, entangled states. By utilizing a more comprehensive set of encodings involving the additional system \(B^{\prime}\) and implementing a complex transformation to unscramble the entanglements, Alice aims to recover the original data and thereby solve the problem of decoding the information about the state of the black hole, circumventing the issues encountered with not having access to all the required qubits in the previous setup. This strategy is grounded on the hope that by properly configuring the new system \(B^{\prime}\) and skillfully using \(U_{\rm enc}\), Alice can create a pathway through the entanglement structure that allows her to access the information she is after, even without direct access to all parts of the system.
### A beacon of hope amidst lingering challenges
For the channel just constructed, there is a bound on the total number of qubits that can be lost to various errors or complications while still retaining the ability to correct those errors and successfully decode the information. The total Hilbert space comprises \(2k+m+n\) qubits, corresponding to the different components of the black hole model. Of these, \(k+m\) qubits have already been unavoidably lost because Alice doesn't have access to \(B\) and \(H\). \(\frac{n-k-m}{2}\) is the remaining number of qubits Alice can afford to lose to other errors while still maintaining the possibility of successful error correction. \(\alpha\) represents the fraction of the radiation that consists of gravitons. HH then make a critical point: even if all the gravitons (which constitute less than half of the radiation) are lost, Alice could theoretically still extract the necessary entanglement information accurately, because she still has room for error correction -- she can lose up to \(\frac{n-k-m}{2}\) qubits and remain within the bounds for successful error correction. Even if Alice cannot control and measure the gravitons, she can wait until enough radiation has been emitted that the information about the interior of the black hole becomes accessible in the radiation. She can still successfully decode the necessary information to solve the task.
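The counting in this paragraph can be packaged into a one-line budget check. The numbers below are hypothetical, chosen only to make the inequality concrete (they are not taken from HH):

```python
def remaining_erasure_budget(n: int, k: int, m: int) -> float:
    """Extra qubit losses Alice can absorb while error correction stays possible."""
    return (n - k - m) / 2

n, k, m = 100, 10, 20      # hypothetical sizes for R, B, and H
alpha = 0.3                # hypothetical graviton fraction, below one half
graviton_losses = alpha * n
budget = remaining_erasure_budget(n, k, m)
print(f"already lost: {k + m} qubits (B and H); extra budget: {budget}; "
      f"graviton losses: {graviton_losses}; correctable: {graviton_losses <= budget}")
```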
However, HH note two important complications [Har-Hay]:
1) Computational complexity comes into play here as a significant roadblock. Extracting the information from the radiation is an immensely complex computational task. This complexity grows with the size of the black hole and the amount of information to be retrieved.
2) The task must be done before the black hole fully evaporates, setting a strict time limit on the procedure. Thus, while theoretically possible, carrying out the AMPS experiment to solve the black hole information problem would in practice be extraordinarily challenging and likely infeasible due to the computational resources required. Alice might not have enough time to complete her task before the black hole evaporates, and hence would not be able to rule out the presence of a firewall definitively.
## 7 A physics problem becomes a quantum computing problem
### HH's Error Correctability problem
HH introduce a computational problem termed Error Correctability [Har-Hay]. This problem arises in the context of \(B\), \(H\), and \(R\). The state of this system is transformed by \(U_{\mathrm{dyn}}\) acting on an initial state, resulting in a new state \(\left|\psi\right\rangle_{BHR}\), which exhibits maximal entanglement between \(B\) and \(HR\).
\[\left|\psi\right\rangle_{BHR}=U_{\mathrm{dyn}}\left|000\right\rangle_{BHR}. \tag{11}\]
The crux of the Error Correctability problem is to ascertain whether it is possible to decode the maximal entanglement with \(B\) by only using information from \(R\). This problem has a QSZK proof, which is characterized by a verifier checking the results after a prover implements a quantum error correction operation on \(R\), despite the potential for this process to take an exponentially long time.
HH replicate the kind of maximal entanglement generated during the dynamical evolution governed by \(U_{\text{dyn}}\) using a different strategy involving a noise operator \(U_{\text{noise}}\). In this strategy,
\[\left|\psi\right\rangle_{BHR}=U_{\text{noise}}\left|000\right\rangle_{BHR}, \tag{12}\]
is utilized to create a state where the \(BH\) and \(R\) systems become maximally entangled. By applying \(U_{\text{noise}}\) repeatedly (i.e., working with \(k\) copies of \(\left|\psi\right\rangle\), denoted \(\left|\psi\right\rangle^{\otimes k}\)) one converges towards a state with near-maximal entropy concentrated in a typical subspace of the tensor product of \(k\) copies of the \(B\) and \(H\) Hilbert spaces. However, HH mention that this approach, while similar to the true maximal entanglement generated by \(U_{\text{dyn}}\), might be slightly weaker. This suggests that while this strategy can approach the maximal entanglement that \(U_{\text{dyn}}\) would generate, it might not fully replicate it, potentially missing some correlations or not fully reaching the maximal entropy that would signify truly maximal entanglement. Hence, although this strategy involving \(U_{\text{noise}}\) can yield a state with a high degree of entanglement between the \(BH\) and \(R\) subsystems, it might not exactly reproduce the entanglement generated by \(U_{\text{dyn}}\).
### Evaluating Alice's task through the lens of QSZK and BQP complexity classes
HH refer to the following complexity classes [Har-Hay]: BQP, which contains decision problems that can be solved by a quantum computer in polynomial time, with an allowable error probability; SZK, a class of problems whose solutions can be verified using classical computers and whose "yes" instances can be proven with zero-knowledge proofs, in which the verifier learns nothing other than the fact that the statement is true; and QSZK (Quantum Statistical Zero Knowledge), the quantum analog of SZK, which contains BQP. QSZK is the set of computational problems with yes/no answers where a (potentially dishonest) prover can always convince a verifier of the yes instances but will fail with high probability to convince them of the no instances using quantum computations.
HH identify the Error Correctability problem as QSZK-complete [Har-Hay]. A QSZK-complete problem is among the hardest problems in the QSZK class: if there is a polynomial-time solution for one QSZK-complete problem, then there are polynomial-time solutions for all problems in QSZK. In other words, if Error Correctability is QSZK-complete, then it has two properties:
1) It is in \(\mathsf{QSZK}\) and can be solved using a quantum computer with the help of an all-powerful, though potentially dishonest, prover.
2) Any problem in \(\mathsf{QSZK}\) can be reduced to the \(\mathsf{Error\ Correctability}\) problem through a polynomial-time algorithm.
Solving the \(\mathsf{Error\ Correctability}\) problem would imply a solution for every problem in the \(\mathsf{QSZK}\) class.
HH argue that if \(\mathsf{QSZK}\) were equal to \(\mathsf{BQP}\), then all the problems that can be solved with a quantum verifier and a prover (\(\mathsf{QSZK}\)) could also be solved with just a quantum computer operating in polynomial time (\(\mathsf{BQP}\)). In other words, the prover would add no computational power; a quantum computer alone would suffice to solve these problems. If this decoding can be done on a quantum computer in polynomial time, it would mean that all \(\mathsf{QSZK}\) problems can be solved in \(\mathsf{BQP}\), implying \(\mathsf{QSZK}=\mathsf{BQP}\) [Har-Hay]. HH are asking whether the decoding process required to solve the \(\mathsf{Error\ Correctability}\) problem can be done in time polynomial in the size of the circuit \(U_{\mathrm{dyn}}\) used to represent correctable noise. They reiterate that finding such a polynomial-time solution to the decoding problem would mean that the \(\mathsf{Error\ Correctability}\) problem (and hence all problems in \(\mathsf{QSZK}\)) can be solved in \(\mathsf{BQP}\) time. This would establish that \(\mathsf{QSZK}=\mathsf{BQP}\), showing that a quantum computer, without the help of a prover, can solve all problems in \(\mathsf{QSZK}\).
HH conclude the discussion by providing the following evaluation [Har-Hay]:
[...] by recasting Alice's problem as quantum error correction, we have set it into a framework where there are general arguments that such problems likely take exponential time to solve. Moreover the practical difficulties of doing the experiment, in particular the problems associated with measuring gravitons, further increase the difficulty of this computational task. We did not quite manage to prove that her task is \(\mathsf{NP}\)-hard at fixed \(k\), but it is almost certainly at least \(\mathsf{QSZK}\)-hard and there are strong reasons to believe that such problems can't be solved in polynomial time on a quantum computer. From the computer science point of view, it would be extremely surprising if implementing \(U_{R}^{\dagger}\) did not require exponential time.
### Aaronson's improvements to the complexity assumptions underlying HH's task
Scott Aaronson proposed improvements to the complexity assumptions underlying HH's decoding task [Aar]. Instead of working directly within the framework of complexity theory classes, he grounded the problem's difficulty in established cryptographic concepts, leveraging one-way functions (OWFs) and introducing hardcore predicates to argue the hardness of the decoding task [Aar].
Aaronson endeavors to demonstrate that if there are injective OWFs, functions that are easy to compute in the forward direction but hard to invert even
by quantum computers, then the HH decoding task is indeed hard. This approach aims to ground the decoding task in more widely accepted theoretical foundations, thereby strengthening the argument for its inherent difficulty by rooting it in established cryptographic principles based on OWFs. Information is encoded using these OWFs. To decode information from a portion of these states (i.e., from the Hawking radiation part), one needs to invert the OWF on the encoded information, which is assumed to be computationally hard by the defining property of OWFs. Therefore, if someone could solve the HH decoding task, they would have a method to invert OWFs. This has far-reaching implications, including breaking widely accepted cryptographic systems that rely on the hardness of inverting OWFs.
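A loose classical analogy (mine, not Aaronson's actual construction) may help convey the shape of the argument: if what the radiation exposes is only \(f(x)\) for a one-way function \(f\), then recovering \(x\) amounts to inverting \(f\), which is believed to require brute force:

```python
# SHA-256 stands in for a one-way function f; the secret is kept tiny so the
# brute-force inversion below terminates quickly. For an s-symbol secret over
# an alphabet of size A, the search space grows as A**s.
import hashlib
from itertools import product

def f(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

secret = b"ab"                                   # what the black hole "knows"
exposed = f(secret)                              # what the radiation reveals

alphabet = [bytes([c]) for c in range(97, 123)]  # lowercase a-z for the demo
for candidate in product(alphabet, repeat=len(secret)):
    x = b"".join(candidate)
    if f(x) == exposed:
        print("recovered secret:", x)
        break
```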
Aaronson further argues that solving the HH decoding problem would require at least as much computational resource as the hardest problems in the collision problem class. He connects the ability to invert OWFs with the ability to find collisions: if one could efficiently invert an OWF, one could find collisions, undermining the security assumptions of cryptographic systems based on these functions. Aaronson's statement emphasizes the fundamental difficulty of decoding the Hawking radiation, as it would imply being able to solve other hard problems in computer science and cryptography.
Aaronson is tying the hardness of a physics problem - decoding Hawking radiation and the firewall paradox - to established hard problems in computer science, grounding the task's believed difficulty in well-established complexity theory [Aar]. Through his improvements to the HH decoding task, Aaronson brought a deeper computer science perspective into the discussion surrounding the firewall paradox, specifically leveraging concepts from the computational complexity theory and cryptography. By framing the HH decoding task as a problem in computer science and showing that it's at least as hard as inverting one-way functions, Aaronson translated a physics problem into the language and framework of computer science. This approach not only allows for a more robust argument regarding the complexity of the problem but also facilitates interdisciplinary dialogue and understanding by linking the physical scenario to well-established concepts in the computer science domain.
## Acknowledgement
This work is supported by ERC advanced grant number 834735.
|
2306.00245 | From Pixels to UI Actions: Learning to Follow Instructions via Graphical
User Interfaces | Much of the previous work towards digital agents for graphical user
interfaces (GUIs) has relied on text-based representations (derived from HTML
or other structured data sources), which are not always readily available.
These input representations have been often coupled with custom, task-specific
action spaces. This paper focuses on creating agents that interact with the
digital world using the same conceptual interface that humans commonly use --
via pixel-based screenshots and a generic action space corresponding to
keyboard and mouse actions. Building upon recent progress in pixel-based
pretraining, we show, for the first time, that it is possible for such agents
to outperform human crowdworkers on the MiniWob++ benchmark of GUI-based
instruction following tasks. | Peter Shaw, Mandar Joshi, James Cohan, Jonathan Berant, Panupong Pasupat, Hexiang Hu, Urvashi Khandelwal, Kenton Lee, Kristina Toutanova | 2023-05-31T23:39:18Z | http://arxiv.org/abs/2306.00245v2 | # From Pixels to UI Actions: Learning to Follow Instructions via Graphical User Interfaces
###### Abstract
Much of the previous work towards digital agents for graphical user interfaces (GUIs) has relied on text-based representations (derived from HTML or other structured data sources), which are not always readily available. These input representations have been often coupled with custom, task-specific action spaces. This paper focuses on creating agents that interact with the digital world using the same conceptual interface that humans commonly use -- via pixel-based screenshots and a generic action space corresponding to keyboard and mouse actions. Building upon recent progress in pixel-based pretraining, we show, for the first time, that it is possible for such agents to outperform human crowdworkers on the MiniWob++ benchmark of GUI-based instruction following tasks.
## 1 Introduction
Systems that can follow instructions to complete tasks through graphical user interfaces (GUIs) can help automate tedious tasks, improve accessibility, and expand the usefulness of digital assistants by allowing them to interact with tools and services. Despite the visual nature of GUIs, prior work has primarily focused on utilizing structured representations of the user interfaces (such as HTML sources, Document Object Model (DOM) trees, and Android view hierarchies) as well as custom, task-specific representations of high-level actions based on these structured representations. Recent efforts have achieved positive outcomes thanks to advances in powerful language models (Gur et al., 2022; Kim et al., 2023; Yao et al., 2022).
While structured and task-specific representations may be useful, they are not always available - some examples are web applications that use extensive scripting, sandboxed environments where access to DOM is limited, and mobile applications which often do not expose the underlying structure to external modules. Even when structured application source data is available, it may be hard to interpret due to obfuscation and misalignment with what actually appears on the GUIs. Finally, aligning human demonstrations with task-dependent actions is often challenging.
In contrast, people interact with GUIs by perceiving the visual input and using generic mouse and keyboard actions, without needing to inspect the application's source code for cues on its functionality. They can quickly learn to interact with new applications that offer familiar visual interfaces, regardless of differences in implementation technologies. In this paper we ask: _Can we build an agent that can
complete tasks for users while relying solely on pixel-level visual representations of the GUI state, and generic low-level actions?_
Learning based on pixel-only inputs proved effective for game playing environments such as Atari (Mnih et al., 2015). However, for GUI-based instruction following tasks, learning from pixel-only inputs coupled with general low-level actions leads to several challenges. Interpreting GUIs visually requires understanding the interface layout, recognizing and interpreting visually-situated natural language, identifying visual elements, and predicting their functions and methods of interaction. A generic action space also poses the challenge of a more complex mapping between high-level textual instructions and corresponding sequences of low-level actions. As an example of the increased difficulty in this setting, on the MiniWob++ benchmark (Shi et al., 2017; Liu et al., 2018) of web GUI interaction, CC-Net (Humphreys et al., 2022) demonstrates human-level accuracy when accessing both screenshots and DOM structure, but its performance drops by 75% when the DOM information is removed from the agent's observations.
Here we present Pix2Act, a model that relies solely on pixel-based screenshots as input and selects actions corresponding to basic mouse and keyboard functionalities. We build on Pix2Struct (Lee et al., 2022), a Transformer-based (Vaswani et al., 2017) image-to-text model pre-trained to map screenshots to structured representations derived from HTML on web-scale data. Pix2Act tunes this model using a combination of human demonstrations and environment interactions, applying tree search to iteratively generate new expert trajectories for training. We develop a general browser-based environment framework, and adapt two benchmark datasets, MiniWob++ and WebShop (Yao et al., 2022), to our setting with a unified, general purpose observation and action format.

Figure 1: **Our agent learns to follow instructions via Graphical User Interfaces (GUIs).** Unlike most prior work studying instruction following for GUI-based tasks, our agent does not rely on text-based observations corresponding to DOM trees or HTML source code, or task-specific actions. Instead, our agent receives pixel-based observations and generates outputs corresponding to mouse and keyboard actions. The possible actions are encoded as text and shown on the top of the figure. We show examples of observations from various episodes for two benchmarks, MiniWob++ (top row) and WebShop (bottom row), that we adapt to study within the context of our general Chrome-based environment framework. See details in §2.
On MiniWob++, Pix2Act outperforms human crowdworkers and improves task score nearly 4x compared to the best prior results in our proposed setting (CC-Net without DOM). Ablations show that a key ingredient for Pix2Act's performance is the pixel-based pre-training of Pix2Struct.
Our contributions are as follows:
1. We show, for the first time, that an agent using pixel-only inputs and a generic action space can outperform human crowdworkers on the MiniWob++ benchmark, significantly improving over prior work on this setting, and reaching performance comparable to that of state-of-the-art agents that access DOM information and use a comparable number of human demonstrations.
2. We adapt the WebShop benchmark to our setting, using pixel-based observations and general low-level actions. We establish the first baseline on this setting, although there is still a performance gap relative to larger language models using HTML-based inputs and task-specific actions.
3. We show that Pix2Struct's pre-training via screenshot parsing is effective for GUI-based instruction following with pixel-based inputs. In the behavioral cloning setting, pre-training improves task scores from 17.1 to 66.5 on MiniWob++ and from 1.1 to 46.7 on WebShop.
4. We demonstrate the successful application of tree search as a relatively simple method for policy improvement for MiniWob++.
## 2 Environment
Following the reinforcement learning literature, we model GUI interaction as a Markov Decision Process (MDP): at each time step, our agent receives an observation and selects an action. We develop a common environment framework with shared observation and action formats for browser-based tasks. Similarly to prior work on web-based agents (Liu et al., 2018), we use Selenium to programmatically interact with the Google Chrome browser.
**Observations.** To form an observation, we first take a screenshot of the current browser window using Selenium and then augment it with additional information. First, if not already present, we render the natural language instruction on the top of the screenshot. Second, as Selenium screenshots do not include cursors (which are typically rendered by the operating system), we draw a cursor on the screenshot to indicate the mouse pointer position. Finally, we render an indicator of whether the mouse button is currently pressed down, which is useful for dragging actions.
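As a minimal sketch (not the authors' released code), assembling such an observation with Selenium and PIL might look as follows; the banner height, cursor glyph, and indicator placement are my own choices:

```python
import io
from PIL import Image, ImageDraw

def build_observation(driver, instruction, cursor_xy, button_down):
    """Screenshot the page, then overlay instruction, cursor, and button state."""
    png = driver.get_screenshot_as_png()
    img = Image.open(io.BytesIO(png)).convert("RGB")
    draw = ImageDraw.Draw(img)
    # 1) Render the instruction in a banner at the top, if not already present.
    draw.rectangle([0, 0, img.width, 20], fill="yellow")
    draw.text((4, 4), instruction, fill="black")
    # 2) Draw a small triangular cursor at the mouse pointer position.
    x, y = cursor_xy
    draw.polygon([(x, y), (x + 8, y + 12), (x + 4, y + 14)], fill="black")
    # 3) Draw a corner indicator for whether the mouse button is held down.
    draw.rectangle([img.width - 12, 0, img.width, 12],
                   fill="red" if button_down else "gray")
    return img
```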
**Actions.** Our action space consists of raw mouse and keyboard actions, as shown in Figure 1, where X and Y refer to discrete coordinate bins, K is one or more keys, M is an optional modifier key such as "shift", and Z refers to a vertical scroll amount, also represented as a discrete bin. The begin_drag and end_drag actions can be used to execute "click and drag" actions. We use a configurable number of coordinate buckets per vertical and horizontal axis. Importantly, the DOM information is not provided by the environment and is therefore not used in any way to define observations or actions.
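A sketch of how such text-encoded actions could be parsed and dispatched to the browser is shown below; the exact serialization format and bucket count are configurable, and the Selenium calls here are illustrative rather than the paper's implementation:

```python
from selenium.webdriver import ActionChains

N_BUCKETS = 32  # e.g., 32 coordinate bins per axis for MiniWob++

def bucket_to_pixels(bx, by, width, height):
    """Map discrete coordinate bins to the center of the corresponding cell."""
    return (int((bx + 0.5) * width / N_BUCKETS),
            int((by + 0.5) * height / N_BUCKETS))

def execute(driver, action, width, height):
    parts = action.split()
    chain = ActionChains(driver)
    if parts[0] == "click":
        x, y = bucket_to_pixels(int(parts[1]), int(parts[2]), width, height)
        body = driver.find_element("tag name", "body")
        # Note: the offset origin of move_to_element_with_offset changed from
        # top-left (Selenium 3) to element center (Selenium 4); adjust as needed.
        chain.move_to_element_with_offset(body, x, y).click().perform()
    elif parts[0] == "key":
        chain.send_keys(" ".join(parts[1:])).perform()
    # begin_drag, end_drag, and scroll would be handled analogously.
```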
**Episodes and Rewards.** Episodes continue until a terminal state or a configurable number of maximum steps is reached. For the environments we consider, the agent only receives a reward at a terminal state. This can be a binary reward based on whether the task was completed successfully or a partial reward based on how well the task was completed.
## 3 Proposed Agent
Our agent, Pix2Act, is based on the Pix2Struct model (Lee et al., 2022), which uses an image Transformer encoder and a text Transformer decoder. The architecture is based on Vision Transformer (Dosovitskiy et al., 2021) and T5 (Raffel et al., 2020). Pix2Struct is pre-trained on a _screenshot parsing_ task: predicting simplified HTMLs from screenshots with visually-masked regions. Such pre-training was proven effective for tasks related to understanding user interfaces in a non-interactive setting, such as screen summarization and widget captioning (Wang et al., 2021;
Li et al., 2020). We use the Pix2Struct base variant with 282M parameters (12 encoder and 12 decoder layers; hidden size 768) for all our experiments. The model is called once per time step.
**Input.** The only input to the model is the pixel-based observation from the environment. We can also condition on multiple previous observations by concatenating multiple frames. In preliminary experiments, we did not observe significant gains from conditioning on past observations for MiniWob++, and thus we only use the screenshot of the current step in our experiments. We reuse Pix2Struct's image processing by scaling input images up or down so as to extract the maximal number of fixed-size patches that still fit within the sequence length limit. We use resolutions of 160\(\times\)210 and 800\(\times\)600 for MiniWob++ and WebShop, respectively.
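One way to realize this variable-resolution preprocessing (my paraphrase of the idea, not the released code) is to search for the largest aspect-ratio-preserving scale whose patch grid still fits the budget:

```python
import math

def rescale_for_patches(w, h, patch=16, max_patches=2048):
    """Largest (w, h) rescale whose ceil-grid of patch x patch cells fits the budget."""
    scale = math.sqrt(max_patches * patch * patch / (w * h))   # ignores ceiling
    while math.ceil(scale * w / patch) * math.ceil(scale * h / patch) > max_patches:
        scale *= 0.99                      # back off until the grid fits
    return max(patch, int(scale * w)), max(patch, int(scale * h))

print(rescale_for_patches(800, 600))       # e.g., a WebShop-sized screenshot
```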
**Output.** We encode actions as text tokens, which are predicted autoregressively by the Transformer decoder. We use beam search to output the \(k\)-best actions.
**Greedy Policy.** For interacting with the environment, we adopt a standard greedy policy, selecting the highest scoring action at each step, with one modification. To help prevent the agent from getting stuck in cycles, we track which actions have been taken for a given observation, and select the highest probability action in the beam that has not previously been taken given the current observation, which provides a modest increase in performance.
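A compact sketch of this modified greedy policy follows; `model.beam_search` is an assumed interface returning score-sorted (action, score) pairs, not an actual API from the paper:

```python
from collections import defaultdict

class GreedyPolicy:
    def __init__(self, model):
        self.model = model            # assumed: beam_search(obs) -> [(action, score)]
        self.taken = defaultdict(set) # observation key -> actions already tried

    def act(self, observation_png: bytes) -> str:
        key = hash(observation_png)
        candidates = self.model.beam_search(observation_png)  # sorted by score
        for action, _score in candidates:
            if action not in self.taken[key]:
                self.taken[key].add(action)
                return action
        return candidates[0][0]       # every candidate tried: fall back to best
```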
### Training
We explore two methods for training models to follow instructions via GUIs. First, similarly to prior work, we use Behavioral Cloning (BC), where we train our model using standard supervised learning to predict the given action for each observation in a set of human demonstrations. Second, given access to environments with reward signals, prior work has also explored Reinforcement Learning (RL) to further improve agent performance. As an alternative to common reinforcement learning algorithms such as REINFORCE (Williams, 2004) and PPO (Schulman et al., 2017), we apply tree search as a simple method for policy improvement.
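In the BC case, training reduces to supervised learning over (observation, action) pairs; a minimal sketch of flattening demonstration episodes into such examples:

```python
def bc_examples(episodes):
    """episodes: iterable of trajectories, each a list of (screenshot_png, action_text)."""
    for episode in episodes:
        for observation, action in episode:
            # Input is pixels only; the target is the action encoded as text.
            yield {"image": observation, "target_text": action}
```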
**Tree Search.** For a given set of model parameters, tree search leverages the deterministic nature of the environment to look ahead at the consequences of possible actions, yielding a better policy than greedy action selection.
We adopt Monte Carlo Tree Search (MCTS) (Coulom, 2006), which outperformed more naive search algorithms in initial experiments, and has been successfully integrated with neural network policies in prior work (Silver et al., 2017; Anthony et al., 2017). Similarly to this prior work, we train a model to estimate a _value function_, which predicts the value (i.e., estimated future rewards) of a given state. We use a surrogate reward which penalizes the number of steps taken to encourage concise trajectories without unnecessary actions. We implement this value function approximator using the same Pix2Struct architecture used for our agent.2 However, instead of predicting actions, this model predicts state-values mapped to discrete buckets. To estimate the value of leaf states during MCTS, we use a combination of this value function approximator and rollouts using our greedy policy, similarly to Silver et al. (2017). See Appendix B for additional technical details.

Figure 2: **An example episode of our agent on the MiniWob++ use-colorwheel-2 task. At each step, the agent receives a new observation and outputs the next action to take. The screenshots include a rendered instruction that the agent needs to follow to successfully complete the episode. For MiniWob++, we use 32 vertical and horizontal coordinate bins to specify locations. We show the click location visually for this figure.**
Footnote 2: While it may be more efficient to share an encoder between these two Pix2Struct-based models that condition on the same inputs, we trained separate models for simplicity.
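A condensed, self-contained sketch of the search loop appears below. It is a simplified PUCT-style MCTS in the spirit described above, not the paper's implementation: `env.step` (deterministic), `model.priors`, and `model.value` are assumed interfaces, and the blending of value estimates with greedy rollouts is omitted:

```python
import math

class Node:
    def __init__(self, prior):
        self.prior, self.visits, self.value_sum = prior, 0, 0.0
        self.children = {}                       # action -> Node

    def value(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select(node, c_puct=1.0):
    """Pick the child maximizing a PUCT-style upper confidence bound."""
    sqrt_total = math.sqrt(node.visits + 1)
    return max(node.children.items(),
               key=lambda kv: kv[1].value()
               + c_puct * kv[1].prior * sqrt_total / (1 + kv[1].visits))

def mcts(env, model, root_state, n_sims=50):
    root = Node(prior=1.0)
    root.children = {a: Node(p) for a, p in model.priors(root_state)}
    for _ in range(n_sims):
        node, state, path, done, reward = root, root_state, [], False, 0.0
        while node.children and not done:        # descend to a leaf
            action, node = select(node)
            state, reward, done = env.step(state, action)
            path.append(node)
        if not done:                             # expand and evaluate the leaf
            node.children = {a: Node(p) for a, p in model.priors(state)}
            reward = model.value(state)          # learned state-value estimate
        for visited in path:                     # back up the outcome
            visited.visits += 1
            visited.value_sum += reward
        root.visits += 1
    # Act with the most-visited action at the root.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]
```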
We can then use successful episodes found with this stronger tree search policy to improve our model. When this stronger model then yields a more effective tree search policy, we can continue to iteratively improve our model using this method. Notably, this approach requires no modifications to the fine-tuning procedure of Pix2Act, as, for simplicity, we tune on episodes from the tree search policy using standard supervised learning.
## 4 Benchmarks and Demonstrations
We adapt two benchmarks, MiniWob++ and WebShop, to our environment framework (§2), which consists of pixel-based observations and generic low-level actions. We also map previously collected human demonstrations for these benchmarks to our observation and action spaces.
### MiniWob++
MiniWob++ (Liu et al., 2018) is a set of over a hundred web-browser-based tasks. See Figures 1 and 2 for task examples. Each task consists of an algorithm for generating variations of the task and an instruction template, controlled by a random seed, with up to billions of possible configurations per task. The task instruction is given as (mostly) natural language text in the top yellow part, which in our framework can only be accessed visually. An automatic reward is given at the end of the task.
**Human Demonstrations.** We use the human demonstrations collected by Humphreys et al. (2022). However, their demonstrations were collected using an X11-based environment, which is different from our Selenium-based environment. This results in different renderings of the same underlying environment state, introducing a shift between the screenshots seen during training and those observed at test time. Additionally, we need to map from their real-time X11-based action sequences to our action space. We were able to perform this mapping with a reasonable degree of success for 59 tasks. Notably, not all behaviors in the human demonstrations are supported in our Selenium-based environment. For example, Selenium does not implement the ability to highlight text and drag it into a text field, and such an action is widely used in the human demonstrations for tasks where text is copied and pasted. Additionally, while our environment framework intends to cover the basic functionality of most web interfaces, aspects of some MiniWob++ tasks, such as capturing real-time observations for animated elements, are not supported. See Appendix A for additional details.3
Footnote 3: Other prior work has used the demonstrations from Liu et al. (2018), which cover a different subset of MiniWob++ tasks. However, these demonstrations do not include screenshots or sufficient information to replay the episodes in a browser environment to collect new screenshots, and therefore cannot be applied to our setting.
Starting with approximately 1.3 million demonstrations across the 59 supported tasks, we filtered demonstrations with a reward of \(<0.8\), or approximately 6% of demonstrations. We were able to successfully convert 81% of the remaining demonstrations to our action space. We reserve 10% of the data for a development set. Demonstrations contain approximately 3 steps per task on average, although this varies considerably across tasks.
**Evaluation.** We report the mean score across seeds and tasks. The score is the MiniWob++ raw reward (without time decay) mapped from the original range \([-1,1]\) to the range \([0,100]\). The score is equivalent to the success rate (_i.e_. the proportion of episodes in which the agent receives a positive reward) for tasks with binary rewards. For episodes that do not complete due to reaching a maximum number of allowed steps, we assume a score of \(0\). For each task, we compute the mean over 100 random seeds, and then compute the mean over 59 MiniWob++ tasks.
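Under the natural reading of this mapping (a sketch, not the evaluation script), the score computation is:

```python
def miniwob_score(raw_reward: float) -> float:
    """Linear map from raw reward in [-1, 1] to a score in [0, 100]."""
    return (raw_reward + 1.0) / 2.0 * 100.0

def mean_score(per_task_seed_rewards):
    """per_task_seed_rewards: dict mapping task name -> list of 100 raw rewards."""
    per_task = [sum(map(miniwob_score, rewards)) / len(rewards)
                for rewards in per_task_seed_rewards.values()]
    return sum(per_task) / len(per_task)
```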
### WebShop
WebShop (Yao et al., 2022) is a web-based shopping environment with over 1.1 million products from Amazon. The task is to find and purchase a product based on a human-authored text instruction. Finding a suitable product requires entering search queries, clicking on results, and determining the relevance of various products to the instruction. An automatic reward is computed based on similarity between the purchased product and the gold target product.
**Human Demonstrations.** We use the 1,566 human demonstrations (with a train/development/test split of 1012/54/500) collected in Yao et al. (2022). As with the MiniWob++ demonstrations, we need to map the observation and action sequences used in their setup to our framework. Yao et al. (2022) used high-level actions (_e.g._ "search" or "click[item]"), each of which could map to multiple lower-level actions in our environment. Specifically, for all actions involving a mouse click, we determine the coordinates of the center of the corresponding HTML element. For WebShop, the entire screen content is not always visible due to page heights exceeding the viewport dimensions. If the clicked element lies outside the visible area, we add scroll actions until the element is visible. Finally, we map search actions to two actions in our environment: clicking on the center of the search box and entering the search query followed by the _enter_ key. We render the HTML inputs in the human demonstrations using our browser to obtain screenshots. Additionally, we found that rendering the last 5 actions (separated by \(<s>\)) on top of the screenshot is helpful.
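A sketch of this high-level-to-low-level conversion (not the released conversion code; the search-box coordinates are placeholders) might look like:

```python
def convert(high_level_action, element_center, viewport_h, scroll_y):
    """Map one WebShop high-level action to a list of generic low-level actions."""
    low_level = []
    if high_level_action.startswith("click["):
        x, y = element_center                   # center of the target HTML element
        while not (scroll_y <= y < scroll_y + viewport_h):
            step = viewport_h if y >= scroll_y + viewport_h else -viewport_h
            low_level.append(f"scroll {step}")  # bring the element into view
            scroll_y += step
        low_level.append(f"click {x} {y - scroll_y}")
    elif high_level_action.startswith("search["):
        query = high_level_action[len("search["):-1]
        low_level += ["click SEARCH_X SEARCH_Y",  # placeholder search-box coords
                      f"key {query}", "key <enter>"]
    return low_level
```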
**Evaluation.** Consistent with previous work, we report Task Score, which is the average reward across 500 test instructions.
## 5 Experiments and Analysis
### Training Details
**MiniWob++.** We finetuned a single model jointly on episodes from all tasks for a total of 26K steps using a batch size of 512, input/output sequence lengths of 512/16, and a learning rate of 0.01 with the Adafactor optimizer (Shazeer and Stern, 2018). We also evaluated using the tree search procedure described in §3.1 to improve our agent. We performed 2 iterations of policy improvement with tree search, collecting a total of 826K episodes across all tasks, and tuning for a further 26K steps.
**WebShop.** We used only the provided human demonstrations to train our model.4 Due to its larger resolution and text-heavy data, we used a higher input sequence length of 4096. We also found it useful to perform intermediate finetuning on MiniWob++, followed by 10K steps of further finetuning on WebShop using a batch size of 256 (see §5.3 for details).
Figure 3: **Main results evaluating Pix2Act (ours) on MiniWob++ (left) and WebShop (right). In this paper we focus on approaches that do not have access to DOM or HTML information, and receive pixel-based observations (blue). On this setting, Pix2Act significantly improves over prior work on MiniWob++ and establishes the first baseline on WebShop. Our method performs competitively with humans (green) and with methods that have access to DOM or HTML information (red) on MiniWob++, although there is a gap with the best performing methods that access HTML on WebShop (see §5.3 for detailed analysis).**
### Main Results
We report the results of our models on MiniWob++ and WebShop in Figure 3. For MiniWob++, we also provide task-level comparisons between Pix2Act and human crowdworkers in Figure 4. There is limited prior work studying these tasks without access to DOM and HTML information. For MiniWob++, the only comparable baselines are from the CC-Net model of Humphreys et al. (2022), which mentions an ablation experiment where performance dropped by 75% from their primary results when the models conditioned on only screenshots without DOM information. As they did not provide per-task numbers for this ablation, we estimate the performance of CC-Net without DOM information by assuming that the drop in performance on the subset of tasks we study was also 75%. Regardless, it is clear that Pix2Act significantly outperforms CC-Net on this setting. The difference in performance can be largely attributed to the screenshot parsing pre-training of Lee et al. (2022). For WebShop, there is no prior work exploring such a setting, so we establish the first baseline.
### Ablations and Analysis
**Pre-training ablations.** To study the impact of the pre-training on our model's ability to effectively learn to follow instructions via GUIs, we evaluate model performance without the pre-training procedure. For these experiments, we only compared performance of models trained using behavioral cloning. The results are shown in Figure 3, and demonstrate that pre-training is critical for our model's performance.
**Comparison with models that use DOM or HTML as input.** We can also compare our results without access to DOM or HTML to previous methods that utilized these resources, including those which also leverage DOM information to construct specialized action spaces. The performance of the best model from prior work leveraging DOM or HTML information is shown in Figure 3.
For MiniWob++, the best model on this setting is CC-Net (Humphreys et al., 2022) trained with BC and RL and with access to both DOM and pixel-based observations.5 Pix2Act achieves comparable performance to their best model, while relying on only a subset of the information used by CC-Net, and using a comparable number of human demonstrations for training. Pix2Act also outperforms CC-Net when each model is trained only with behavioral cloning, as CC-Net performance on this setting drops to 38.7 (results not shown in the Figure). Notably, CC-Net scores also drop by approximately 10% when the model is not given access to a dictionary of input strings provided by the environment. As shown in Figure 3, the key to our model's ability to achieve comparable performance without relying on DOM-based inputs is pixel-based pre-training. Another difference is that CC-Net uses a real-time setting, which enables some forms of interaction not supported by our environment, and therefore can support a larger set of MiniWob++ tasks. On the other hand, for BC, CC-Net does not need to handle the shift in rendering format and potentially noisy action space conversion.
Footnote 5: We compute mean scores for CC-Net by averaging their reported per-task results over the 59 tasks we study.
Figure 4: Comparing scores on MiniWob++ tasks of Pix2Act (blue) with human crowdworkers (green), ranked from left to right by the relative difference in performance.

For WebShop, the best model on this setting is WebGUM (Furuta et al., 2023a), which leverages the HTML source, a custom action space for the shopping domain, and a Flan-T5-XL (Chung et al., 2022) backbone. WebGUM outperforms Pix2Act when compared on this setting. Some of this gap can be attributed to their simplified high-level action space, direct access to the relevant text on the page, and ability to transfer from Flan-T5's pretraining scale and instruction finetuning. Comparable improvements to the scale and pretraining of pixel-based models could reduce this gap.
We discuss other approaches that leverage DOM or HTML information further in §6. We also offer a complete comparison across all MiniWob++ tasks in Appendix C.
**Evaluating transfer across tasks.** Training a pretrained, pixel-based model to interact with a GUI can intuitively lead to better generalization to new tasks that use common GUI design principles. To study this, we evaluate the ability of Pix2Act (without RL) to generalize to tasks unseen during training. Specifically, we hold out 9 out of 59 tasks and train on the remaining 50.6 We then evaluate performance on the held-out tasks, comparing initializing with Pix2Struct to random initialization. Table 1 illustrates that Pix2Act can reach a mean score of 28.3 on held-out tasks compared to 65.5 when training on those tasks. Conversely, the mean score is 7.6 when Pix2Struct initialization is not used. This shows that combining pretraining with a general GUI interface can lead to non-trivial generalization to held-out tasks.
Footnote 6: We manually pick the 9 tasks to verify they include only actions or elements that would be reasonable to generalize to from the training tasks. The tasks are click-checkboxes-large, click-color, click-tab-2, click-tab-2-hard, count-shape, drag-shapes, use-color-wheel-2, use-slider-2.
For WebShop, we find that finetuning directly on WebShop (without the intermediate finetuning on MiniWob++ mentioned in §5.1) results in a drop of 4.0 in Task Score, demonstrating transfer learning benefits across these datasets.
**Tree search analysis.** Table 2 shows the improvement in MiniWob++ scores by training on episodes generated by tree search. After an initial round of training on episodes generated by tree search, the effectiveness of tree search also improves due to improvements in the underlying model used to guide the search. The best greedy policy achieves performance close to the best tree search policy, but does not require access to reward signals or additional exploration at inference time. Our results indicate that we could further improve performance with more iterations of policy improvement via tree search.
## 6 Related Work
We focus on agents that interact with GUIs, such as operating system dialogs or web pages, to accomplish a given task. Many early approaches relied on the structured information from the GUIs (Zettlemoyer and St. Amant, 1999; Allen et al., 2007; Branavan et al., 2010). This information could range from a flat list of GUI components and their properties, to the full hierarchical structure of the components (_e.g._ the DOM tree). The output space also depends on this structured information, often using GUI components as action targets (_e.g._ clicking button #7). As discussed in SS1, such structured information might not always be available, or might not align with what visually appears to the users.
When Shi et al. (2017) introduced the _World of Bits_ tasks, which was the precursor to MiniWob++ (Liu et al., 2018), they proposed a model based on a convolutional neural network that takes both visual and structured inputs and then performs generic low-level computer actions (_e.g._ clicking at a coordinate or pressing a key), similarly to Pix2Act. However, the model performed poorly compared to humans. Follow-up work studied specialized architectures for incorporating structured DOM information and restricted the action space to clicking and typing predetermined texts on DOM elements (Liu et al., 2018; Gur et al., 2018; Jia et al., 2019). Humphreys et al. (2022) reconsidered incorporating both visual and structured information as well as a low-level action space that aligns better with the human demonstrations. We discussed their approach, CC-Net, in §5.3. Humphreys et al. (2022) also explored the benefits of large-scale human demonstrations, and we build on their work to utilize a large number of human demonstrations to train Pix2Act. This paper shows that Pix2Act, a model with pixel-only inputs, can outperform humans on MiniWob++ and match the state-of-the-art approaches that rely on DOM information.

\begin{table}
\begin{tabular}{l c c} \hline \hline Pre-training & Included & Heldout \\ \hline Yes & 65.5 & 28.3 \\ No & 11.0 & 7.6 \\ \hline \hline \end{tabular}
\end{table}
Table 1: We selected 9 MiniWob++ tasks and evaluated mean scores when they are _heldout_ from the training set. Pretraining leads to non-trivial generalization (28.3) to held out tasks that were unobserved at training time compared to a randomly initialized model (7.6). We also include scores when the tasks are _included_ during training for reference.

\begin{table}
\begin{tabular}{l c c c} \hline \hline & \multicolumn{3}{c}{Iteration} \\ \cline{2-4} Policy & 0 & 1 & 2 \\ \hline Greedy & 66.5 & 93.1 & 96.2 \\ Tree Search & 91.7 & 98.4 & -- \\ \hline \hline \end{tabular}
\end{table}
Table 2: We compare average MiniWob++ scores using the greedy policy with one that uses tree search and lookahead, given the same underlying model. The model is initially trained on human demonstrations and iteratively improved by training on episodes generated by the tree search policy.
Automating web-based tasks using large language models (LLMs) has also been broadly explored. For instance, WebGPT uses a text-based web browsing environment to search and navigate the web (Nakano et al., 2021). More relatedly, recent work has investigated prompting LLMs to produce agents that can generalize to tasks based on a small number of in-context examples. Yao et al. (2023) proposed ReAct, a few-shot prompted LLM, which uses observations derived from HTML and a custom action space to make predictions based on explicit reasoning steps. Similarly, Kim et al. (2023) proposed RCI, a prompted LLM that iteratively critiques and refines its outputs, also using HTML inputs and custom action spaces. These approaches achieve competitive performance on WebShop and MiniWob++, respectively, and are extremely sample-efficient, relying on just a handful of demonstrations per task. Gur et al. (2022) treated raw HTML as a string and fed it to LLMs pretrained on natural language. After fine-tuning them on demonstrations, the models improved MiniWob++ task success rate and sample efficiency compared to models that take DOM-based inputs and specialized architectures. Finally, WebGUM (Furuta et al., 2023b), discussed in §5.3, extends HTML-based models to integrate a vision encoder pretrained on ImageNet-21K.
Other work has focused on interactive tasks using mobile apps (Li et al., 2020; Burns et al., 2022), but has generally not studied observation and action formats similar to ours. We focused on web-based GUIs so that we could use a consistent environment framework for simplicity. Besides GUIs, several works on video game agents also considered visual-only input and low-level actions. For example, most works on Atari games used the screenshot as visual input and predicted the controller buttons to press (Mnih et al., 2015). More recently, Baker et al. (2022), which focuses on learning from unlabeled videos, proposes an agent for Minecraft that uses pixel-based inputs paired with keyboard and mouse actions, similarly to Pix2Act.
## 7 Limitations and Discussion
**Pixel-based vs. text-based representations.** Text-based representations may be practically useful when available, especially since they enable transferring knowledge from LLMs, which have demonstrated impressive few-shot learning for MiniWob++ (Kim et al., 2023) and WebShop (Yao et al., 2023). When structured source is not available, OCR systems and models trained to predict the location and function of UI elements may also help connect models with the power of LLMs. On the other hand, similar advances in scaling and pre-training of vision or multimodal models could potentially enable similar capabilities in a pixel-based setting in the future, as we have shown the effectiveness of pixel-based pre-training (albeit at a smaller scale) for GUI-based tasks. Nevertheless, beyond addressing the case where HTML or DOM information is unavailable, we hope our study contributes towards a better understanding of the potential of pixel-based representations for instruction following via GUIs.
**Tree Search.** Our approach to policy improvement with tree search for MiniWob++ relied on the ability to procedurally generate new MiniWob++ environment and instruction variations and to receive reward signals for task completion. Both aspects are unlikely to be available for some real-world environments, and such an approach might need to rely on generative models of potential instructions and approximate reward models for task completion (Du et al., 2023). Additionally, while we show that tree search can be sufficient to reach high performance on MiniWob++, we did not perform a detailed comparison relative to other search and RL algorithms in this study, which would be useful to better understand the most efficient approaches for learning from GUI-based environments.
**Broader Impact.** We trained and evaluated our models using offline environments. Special consideration would be needed to deploy models safely and responsibly in environments that can access 3rd party services, e.g. to avoid spamming these applications.
Our research could lead to improved accessibility, productivity, and overall user experience. However, agents that use the same conceptual interface humans use may be more capable of breaking security defenses (e.g. solving CAPTCHAs). It is therefore important for security research to take such potential uses into account.
## Acknowledgments
We would like to thank Peter Humphreys, Toby Pohlen, and Gregory Thornton for their assistance with the MiniWob++ demonstrations. We also thank Ming-Wei Chang, Austin Huang, Luheng He, Tianze Shi, David Gaddy, Jacob Eisenstein, and Yi Luan for useful discussions and comments.
|
2310.20673 | Balancing Act: Constraining Disparate Impact in Sparse Models | Model pruning is a popular approach to enable the deployment of large deep
learning models on edge devices with restricted computational or storage
capacities. Although sparse models achieve performance comparable to that of
their dense counterparts at the level of the entire dataset, they exhibit high
accuracy drops for some data sub-groups. Existing methods to mitigate this
disparate impact induced by pruning (i) rely on surrogate metrics that address
the problem indirectly and have limited interpretability; or (ii) scale poorly
with the number of protected sub-groups in terms of computational cost. We
propose a constrained optimization approach that directly addresses the
disparate impact of pruning: our formulation bounds the accuracy change between
the dense and sparse models, for each sub-group. This choice of constraints
provides an interpretable success criterion to determine if a pruned model
achieves acceptable disparity levels. Experimental results demonstrate that our
technique scales reliably to problems involving large models and hundreds of
protected sub-groups. | Meraj Hashemizadeh, Juan Ramirez, Rohan Sukumaran, Golnoosh Farnadi, Simon Lacoste-Julien, Jose Gallego-Posada | 2023-10-31T17:37:35Z | http://arxiv.org/abs/2310.20673v2 | # Balancing Act: Constraining Disparate Impact in Sparse Models
###### Abstract
Model pruning is a popular approach to enable the deployment of large deep learning models on edge devices with restricted computational or storage capacities. Although sparse models achieve performance comparable to that of their dense counterparts at the level of the entire dataset, they exhibit high accuracy drops for some data sub-groups. Existing methods to mitigate this disparate impact induced by pruning (i) rely on surrogate metrics that address the problem indirectly and have limited interpretability; or (ii) scale poorly with the number of protected sub-groups in terms of computational cost. We propose a constrained optimization approach that _directly addresses the disparate impact of pruning_: our formulation bounds the accuracy change between the dense and sparse models, for each sub-group. This choice of constraints provides an interpretable success criterion to determine if a pruned model achieves acceptable disparity levels. Experimental results demonstrate that our technique scales reliably to problems involving large models and hundreds of protected sub-groups.
## 1 Introduction
Current deep learning practice displays a trend towards larger architectures (Bommasani et al., 2021), as exemplified by popular models such as GPT-4 (OpenAI, 2023), Llama 2 (Touvron et al., 2023) and DALL-E 2 (Ramesh et al., 2022). Model compression techniques such as pruning (Gale et al., 2019), knowledge distillation (Hinton et al., 2015), or quantization (Gholami et al., 2021) are crucial towards enabling the deployment of large models across a wide range of platforms, including resource-constrained edge devices like smartphones.
Despite achieving comparable performance at an aggregate level over the entire dataset, pruned models often exhibit significant accuracy reduction for some data sub-groups (Hooker et al., 2019, 2020; Paganini, 2020). In particular, under-represented groups can suffer high performance degradation while the overall performance remains unaffected, thus exacerbating systemic biases in machine learning models. Tran et al. (2022) refer to this phenomenon as the _disparate impact of pruning_.
Existing mitigation methods face challenges in terms of interpretability and scalability to a large number of sub-groups. Tran et al. (2022) introduce constraints aiming to equalize the loss of the sparse model across sub-groups. However, their approach does not account for the unequal group-level performance of the dense model. Moreover, while the loss can be a useful surrogate for training, this method addresses the disparate impact issue indirectly as it focuses on controlling the loss, rather than group-level changes in accuracy. Alternatively, Lin et al. (2022) compute per-group importance scores for every model parameter to determine the weights to be pruned. This approach becomes prohibitively expensive when the model or the number of sub-groups is large.
In this work, we characterize the disparate impact of pruning in terms of the group-level accuracy gaps between the dense and sparse models. Additionally, we propose a problem _formulation that directly addresses the disparate impact of pruning_ by imposing constraints on the per-group excess accuracy
gaps (CEAG). A key advantage of our proposed formulation is that it _enjoys interpretable semantics_: feasible solutions of our optimization problem correspond to models with low pruning-induced disparity. Finally, our approach introduces a _negligible computational overhead_ (SSE.1) compared to (disparity-agnostic) naive fine-tuning of the sparse model, making it applicable to problems with large numbers of groups, such as intersectional fairness tasks.
Fig. 1 illustrates the reliability of our approach at mitigating the disparate impact of pruning. We measure disparity in terms of excess accuracy gaps (EAGs, SS3.1). Naive fine-tuning yields models that disproportionately affect group _Others_, and while the equalized loss formulation mitigates the issue, _our formulation consistently reduces the pruning-induced disparity_. See SS5 for further discussion.
The main contributions of our work1 are as follows:
Footnote 1: Our code is available here: [https://github.com/merajhashemi/Balancing_Act](https://github.com/merajhashemi/Balancing_Act)
* We formulate a constrained optimization problem (CEAG, SS3) that directly controls disparate impact by bounding group-level accuracy gaps between the dense and sparse models.
* We propose an algorithm for solving constrained optimization problems with _non-differentiable_, _stochastic constraints_ (SS4). We use proxy constraints (Cotter et al., 2019); and introduce replay buffers (SS4.2) for handling noise in the estimation of the constraints.
* Our replay buffers improve the training dynamics of the equalized loss formulation proposed by Tran et al. (2022). The improved dynamics lead to better models in terms of disparity.
* Our experiments demonstrate that we can reliably mitigate the disparate impact of pruning across multiple architectures, datasets, and sparsity levels (SS5). These results carry over to tasks with intersectional groups, and up to hundreds of constraints.
We highlight that _all mitigation methods considered in this paper (including ours) may fail to generalize to unseen data_. Nevertheless, we believe our proposal constitutes a step in the right direction since our approach is the only one that reliably mitigates the disparate impact of pruning on the training set. We hope the empirical observations presented here will motivate further research on improving the generalization properties of methods for mitigating the disparate impact of pruning.
## 2 Related works
**Disparate Impact of Pruning.** Hooker et al. (2019, 2020) and Paganini (2020) document the disparate impact of pruning, where some classes experience a more significant performance degradation
Figure 1: **Left:** A dense model is sparsified with IMP, and then subjected to either (i) naive fine-tuning (NFT, using ERM), (ii) equalized loss constraints (Tran et al., 2022, EL), or (iii) our approach (CEAG). **Right:** Positive (resp. negative) excess accuracy gaps (EAGs, SS3.1) indicate groups whose performance degraded more (resp. less) than the model's overall accuracy change. Models with low disparate impact have EAGs that concentrate around zero. **CEAG consistently yields models with lower disparity (\(\Psi_{\text{PW}}\), SS3.1) than NFT and EL.** For example, NFT yields a 10% hyper-degradation (EAG, \(\psi_{g}\)) on group _Others_. Results correspond to race prediction on UTKFace, with race as group attribute at 90% sparsity. Metrics are measured on the training set and averaged over 5 seeds.
compared to others. Existing methods to mitigate disparity involve fairness-aware pruning (Lin et al., 2022) or formulating constraints on a surrogate metric such as the loss (Tran et al., 2022).
Lin et al. (2022) propose a pruning technique that removes weights based on a heuristic metric that relates parameters with their importance for predicting each group. This approach scales poorly as it requires computing per-weight, per-group scores.
Tran et al. (2022) apply constraints to match the sparse model's loss on each sub-group to the aggregate loss. These constraints are agnostic to the performance of the _dense_ model on each group. Since the disparate impact of pruning is pertinent in relation to a reference model, the equalized loss formulation addresses the problem indirectly. Moreover, loss-based constraints lack the interpretability of the per-group accuracy changes between the sparse and dense models.
**Fairness and Constraints.** Independent of model pruning, fairness in machine learning models is a well studied problem (Dwork et al., 2012; Dieterich et al., 2016; Verma and Rubin, 2018; Mehrabi et al., 2021; Zemel et al., 2013; Zhao and Gordon, 2022). Enforcing fairness with constraints has mainly focused on imposing requirements such as demographic parity, equalized odds, equal opportunity (Hardt et al., 2016), accuracy parity (Agarwal et al., 2018; Berk et al., 2021), or combinations of these properties (Zafar et al., 2017; Lowy et al., 2021; Bakker et al., 2020; Shui et al., 2022). The _disparate impact of pruning_ is a sparsity-dependent fairness notion that aims to match the performance of a sparse model to that of a reference dense model.
**Constrained Optimization.** Constrained formulations have gained popularity in different sub-fields of machine learning such as safe reinforcement learning (Stooke et al., 2020), active learning (Elenter et al., 2022) and sparsity (Gallego-Posada et al., 2022). These constrained formulations lead to stochastic min-max optimization problems, which can be challenging to optimize due to their non-convexity (Lin et al., 2020). We make use of proxy constraints (Cotter et al., 2019) to solve problems with interpretable but non-differentiable constraints.
**Variance Reduction.** The stochasticity in gradient estimates introduces additional optimization challenges (Beznosikov et al., 2023). Variance reduction techniques (Gower et al., 2020) have been employed to improve convergence on stochastic optimization (Defazio et al., 2014), and in min-max games (Chavdarova et al., 2019). In this work, we leverage the idea of replay buffers (Mnih et al., 2013) to reduce the noise in the estimation of stochastic constraints.
## 3 Addressing the Disparate Impact of Pruning via Accuracy Gaps
In this section, we propose using _accuracy gaps_ (AGs) to quantify the disparate impact induced by model pruning. AGs are group-level measurements that quantify changes in accuracy between the dense and sparse models. As we will see, large discrepancies in AGs across groups correspond to scenarios where pruning-induced disparity is high. In SS3.2, we propose a problem formulation that yields models with low disparity by explicitly constraining deviations in the group accuracy gaps.
### Accuracy Gaps
We consider a supervised learning problem on a dataset \(\mathfrak{D}=\{(\mathbf{x}_{i},y_{i},g_{i})\}_{i=1}^{N}\) of \(N\) i.i.d tuples, each comprising features \(\mathbf{x}\in\mathcal{X}\), target class \(y\in[K]\) and group membership \(g\in\mathcal{G}\). The dataset can be partitioned into _sub-groups_\(\mathfrak{D}_{g}\triangleq\{(\mathbf{x}_{i},y_{i},g_{i})\in\mathfrak{D}\,|\,g_{i }=g\}\) for every \(g\in\mathcal{G}\).
Let \(h_{\mathbf{\theta}}:\mathcal{X}\rightarrow\mathbb{R}^{K}\) be a predictor with parameters \(\mathbf{\theta}\in\Theta\). The accuracy of \(h_{\mathbf{\theta}}\) on a sample set \(\mathcal{D}\) is \(A(\mathbf{\theta}|\mathcal{D})\triangleq\frac{1}{|\mathcal{D}|}\sum_{(\mathbf{x},y,g) \in\mathcal{D}}\mathbbm{1}\{\operatorname{argmax}[h_{\mathbf{\theta}}(\mathbf{x})]=y\}\). In particular, \(A(\mathbf{\theta}|\mathfrak{D})\) denotes the model accuracy on the entire dataset, while \(A(\mathbf{\theta}|\mathfrak{D}_{g})\) is the model accuracy on a specific sub-group \(g\).
Given access to a dense pre-trained model, we are interested in the effect of pruning on the accuracy across sub-groups \(\mathfrak{D}_{g}\). In realistic pruning applications the dense model may exhibit different accuracies across sub-groups, thus we do not aim to equalize the accuracy of the sparse model across groups. Therefore, **we argue that the accuracies after pruning should change (approximately) equally across sub-groups.**
Let \(\mathbf{\theta}_{d}\) and \(\mathbf{\theta}_{s}\) denote the parameters of the dense and sparse models, respectively. We define the _global accuracy gap_\(\Delta\left(\mathbf{\theta}_{s},\mathbf{\theta}_{d}\right)\) and _group accuracy gaps_\(\Delta_{g}\left(\mathbf{\theta}_{s},\mathbf{\theta}_{d}\right)\) as:
\[\Delta\left(\mathbf{\theta}_{s},\mathbf{\theta}_{d}\right) \triangleq A(\mathbf{\theta}_{d}|\mathfrak{D})-A(\mathbf{\theta}_{s}| \mathfrak{D}), \tag{1}\] \[\Delta_{g}\left(\mathbf{\theta}_{s},\mathbf{\theta}_{d}\right) \triangleq A(\mathbf{\theta}_{d}|\mathfrak{D}_{g})-A(\mathbf{\theta}_{s}| \mathfrak{D}_{g})\qquad\forall g\in\mathcal{G}. \tag{2}\]
A _positive gap_ (resp. _negative_) corresponds to a _degradation_ (resp. _improvement_) in the performance of the sparse model with respect to that of the dense model. This correspondence holds both at the global \(\Delta\left(\mathbf{\theta}_{s},\mathbf{\theta}_{d}\right)\) and group levels \(\Delta_{g}\left(\mathbf{\theta}_{s},\mathbf{\theta}_{d}\right)\).
**Disparate Impact of Pruning.** Following our discussion above, we say a sparse model \(h_{\mathbf{\theta}_{s}}\) experiences low disparate impact (with respect to a dense model \(h_{\mathbf{\theta}_{d}}\)) if the changes in performance are similar across sub-groups, i.e. \(\Delta_{g}\left(\mathbf{\theta}_{s},\mathbf{\theta}_{d}\right)\approx\Delta_{g^{ \prime}}\left(\mathbf{\theta}_{s},\mathbf{\theta}_{d}\right),\forall g,g^{\prime}\in \mathcal{G}\).
Due to the loss of model capacity caused by pruning, \(\Delta\left(\mathbf{\theta}_{s},\mathbf{\theta}_{d}\right)\neq 0\) in general. Thus, we consider \(\Delta\left(\mathbf{\theta}_{s},\mathbf{\theta}_{d}\right)\) as the reference point for defining the group _excess accuracy gaps_ (EAGs):
\[\psi_{g}\left(\mathbf{\theta}_{s},\mathbf{\theta}_{d}\right)\triangleq\Delta_{g} \left(\mathbf{\theta}_{s},\mathbf{\theta}_{d}\right)-\Delta\left(\mathbf{\theta}_{s},\mathbf{ \theta}_{d}\right),\qquad\forall g\in\mathcal{G}. \tag{3}\]
If \(\psi_{g}\left(\mathbf{\theta}_{s},\mathbf{\theta}_{d}\right)>0\), then \(g\) is more negatively impacted by pruning than the overall dataset. Conversely, \(\psi_{g^{\prime}}\left(\mathbf{\theta}_{s},\mathbf{\theta}_{d}\right)<0\) indicates that group \(g^{\prime}\) was less affected relative to the overall model degradation.
Note that if \(\psi_{g}=0,\forall g\in\mathcal{G}\), then it follows that \(\Delta_{g}\left(\mathbf{\theta}_{s},\mathbf{\theta}_{d}\right)=\Delta_{g^{\prime}} \left(\mathbf{\theta}_{s},\mathbf{\theta}_{d}\right),\forall g,g^{\prime}\in \mathcal{G}\), and there is no disparate impact. Thus, we quantify the disparate impact of pruning via:
\[\Psi_{\text{PW}}\left(\mathbf{\theta}_{s},\mathbf{\theta}_{d}\right)\triangleq\max_{g,g^{\prime}\in\mathcal{G}}\,\psi_{g}\left(\mathbf{\theta}_{s},\mathbf{\theta}_{d}\right)-\psi_{g^{\prime}}\left(\mathbf{\theta}_{s},\mathbf{\theta}_{d}\right)=\max_{g\in\mathcal{G}}\,\Delta_{g}\left(\mathbf{\theta}_{s},\mathbf{\theta}_{d}\right)-\min_{g^{\prime}\in\mathcal{G}}\,\Delta_{g^{\prime}}\left(\mathbf{\theta}_{s},\mathbf{\theta}_{d}\right). \tag{4}\]
Note that \(\Psi_{\text{PW}}\geq 0\) always. Moreover, \(\Psi_{\text{PW}}=0\) if and only if we are in an ideal setting where the accuracy gaps are _equal_ across all groups. However, aiming to constrain \(\Psi_{\text{PW}}\) directly can be difficult in practice, as detailed in Appendix B.3.
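To make these definitions concrete, the sketch below computes the accuracy gaps, EAGs, and \(\Psi_{\text{PW}}\) from per-group accuracy measurements. The function and variable names are ours, chosen purely for illustration; they are not part of the paper's released code.

```python
import numpy as np

def excess_accuracy_gaps(acc_dense, acc_sparse, acc_dense_g, acc_sparse_g):
    """Accuracy gaps (Eqs. 1-2), EAGs (Eq. 3), and pairwise disparity (Eq. 4).

    acc_dense, acc_sparse: overall accuracies of the dense and sparse models.
    acc_dense_g, acc_sparse_g: arrays of per-group accuracies, one entry per group.
    """
    delta = acc_dense - acc_sparse                                 # global gap, Eq. (1)
    delta_g = np.asarray(acc_dense_g) - np.asarray(acc_sparse_g)   # group gaps, Eq. (2)
    psi_g = delta_g - delta                                        # EAGs, Eq. (3)
    psi_pw = delta_g.max() - delta_g.min()                         # disparity, Eq. (4)
    return psi_g, psi_pw

# A sparse model that loses 3 points overall, but 10 points on the last group:
psi_g, psi_pw = excess_accuracy_gaps(
    acc_dense=0.95, acc_sparse=0.92,
    acc_dense_g=[0.96, 0.95, 0.93], acc_sparse_g=[0.94, 0.93, 0.83],
)
print(psi_g)   # approx. [-0.01, -0.01, 0.07] -> last group is hyper-degraded
print(psi_pw)  # approx. 0.08
```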
### Constrained Excess Accuracy Gaps Formulation
We propose to impose upper bounds \(\psi_{g}\left(\mathbf{\theta}_{s},\mathbf{\theta}_{d}\right)\leq\epsilon\), with a tolerance level \(\epsilon\geq 0\). Since \(\epsilon\geq 0\), the constraints are effectively only enforced for groups with \(\psi_{g}\left(\mathbf{\theta}_{s},\mathbf{\theta}_{d}\right)>0\), corresponding to groups experiencing hyper-degradation in performance (with respect to the average degradation)2. Imposing a lower bound on group EAGs \(\psi_{g}\) would allow for better control over the resulting disparate impact \(\Psi_{\text{PW}}\). However, solving the problem with both of these bounds is challenging due to the small size of the feasible region relative to the estimation noise in the constraints. Appendix B.3 provides further discussion and motivation regarding the choice to constrain only positive \(\psi_{g}\) values.
Footnote 2: Note that the set of hyper-degraded groups \(\{g\in\mathcal{G}\,|\,\psi_{g}\left(\mathbf{\theta}_{s},\mathbf{\theta}_{d}\right)>0\}\) depends directly on the parameters of the sparse model \(\mathbf{\theta}_{s}\) and thus changes at every training step.
This choice motivates an _operational definition of disparate impact_ which focuses on the group with the highest EAG, given by \(\max_{g}\psi_{g}\). Bounding this worst-case quantity gives rise to an optimization problem with per-group constraints given by:
\[\text{(CEAG)}\quad\operatorname*{argmin}_{\mathbf{\theta}_{s}\in\Theta}\,L(\mathbf{ \theta}_{s}|\mathfrak{D}),\quad\text{s.t.}\quad\psi_{g}\left(\mathbf{\theta}_{s},\bm {\theta}_{d}\right)=\Delta_{g}\left(\mathbf{\theta}_{s},\mathbf{\theta}_{d}\right)- \Delta\left(\mathbf{\theta}_{s},\mathbf{\theta}_{d}\right)\leq\epsilon,\quad\forall g \in\mathcal{G} \tag{5}\]
where \(L(\mathbf{\theta}|\mathcal{D})\) is the loss of \(h_{\mathbf{\theta}}\) on dataset \(\mathcal{D}\), and the tolerance \(\epsilon\geq 0\) is the maximum allowed EAG. When \(\Delta\left(\mathbf{\theta}_{s},\mathbf{\theta}_{d}\right)>0\), the constraints require that the performance degradation for each group be at most the overall model degradation plus the tolerance. Conversely, if \(\Delta\left(\mathbf{\theta}_{s},\mathbf{\theta}_{d}\right)<0\), the constraints prescribe that all group accuracies must _increase_ by at least the overall improvement, up to an \(\epsilon\) slack.
### Discussion
By formulating constraints on EAGs, CEAG directly addresses the disparate impact of pruning and has benefits in terms of interpretability, flexibility, and accountability. For alternative formulations to address the disparate impact of pruning, see Appendix B.
**Tackling disparate impact.** Existing methods aim to mitigate disparate impact by enforcing properties on the sparse model while being agnostic to the performance properties of the dense model. Since
EAGs relate the per-group performance of the dense and sparse models, we argue that our approach _actually_ addresses pruning-induced disparity, rather than other fairness notions such as the loss equalization proposed by Tran et al. (2022).
**Interpretability.** The choice of tolerance level \(\epsilon\) directly translates to bounds on AGs. For example, setting \(\epsilon=1\%\) implies the worst affected class may not lose beyond \(1\%\) accuracy compared to the overall model change. In contrast, it is challenging to set interpretable tolerance levels for constraints based on losses.
**Flexibility.** CEAG allows for some slack in the disparity of the pruned model, as prescribed by the tolerance \(\epsilon\). This flexibility allows incorporating application-specific requirements into the learning procedure. For example, strict fairness regulations can be enforced with small tolerance values. In practice, this flexibility may be necessary as attaining \(\Delta_{g}\left(\mathbf{\theta}_{s},\mathbf{\theta}_{d}\right)=\Delta\left(\mathbf{ \theta}_{s},\mathbf{\theta}_{d}\right)\,\forall g\in\mathcal{G}\) could be impossible given the reduced capacity of the sparse model.
**Accountability.** Being a constrained approach, establishing feasibility with respect to CEAG constitutes a clear success criterion to determine if a pruned model achieves acceptable disparity levels: a model is only admissible if it satisfies the constraints at a prescribed tolerance level.
## 4 Solving the Constrained Excess Accuracy Gaps Problem
A popular approach to solve the constrained optimization problem (CEAG) in Eq. (5) is to formulate its Lagrangian and optimize the resulting min-max problem:
\[\min_{\mathbf{\theta}_{s}\in\Theta}\,\max_{\mathbf{\lambda}\geq 0}\,\mathfrak{L}(\mathbf{\theta}_{s},\mathbf{\lambda})\triangleq L(\mathbf{\theta}_{s}|\mathfrak{D})+\sum_{g\in\mathcal{G}}\lambda_{g}\left(\psi_{g}\left(\mathbf{\theta}_{s},\mathbf{\theta}_{d}\right)-\epsilon\right), \tag{6}\]
where \(\lambda_{g}\geq 0\) is the Lagrange multiplier associated with the constraint for group \(g\) and \(\mathbf{\lambda}=[\lambda_{g}]_{g\in\mathcal{G}}\). We refer to \(\mathbf{\theta}_{s}\) as the _primal_ parameters, and to \(\mathbf{\lambda}\) as the _dual_ parameters.
Optimizing deep neural networks can be challenging, and generally requires carefully crafted procedures and extensive hyper-parameter tuning (Choi et al., 2019). We are interested in re-using standard techniques for optimizing \(\mathbf{\theta}_{s}\). Therefore, we consider a generic optimization protocol on \(\mathbf{\theta}_{s}\) and gradient ascent on \(\mathbf{\lambda}\), instead of specialized optimization approaches for min-max games such as extragradient (Gidel et al., 2019; Korpelevich, 1976).
### Optimization with Non-Differentiable Constraints
A natural next step is to optimize Eq. (6) with gradient-based updates. Unfortunately, this is not possible as the \(\psi_{g}\) terms are not continuous (since they are accuracy gaps), and are non-differentiable with respect to \(\mathbf{\theta}_{s}\). Therefore, we must resort to a surrogate \(\tilde{\psi}_{g}\) for computing gradients with respect to \(\mathbf{\theta}_{s}\). In contrast, Eq. (6) is differentiable with respect to \(\mathbf{\lambda}\), with gradients corresponding to constraint violations. Thus, the dual variables can be updated using the non-differentiable constraints \(\psi_{g}\). This update scheme is inspired by the proxy-constraint technique introduced by Cotter et al. (2019).
\[\mathbf{\theta}_{s}^{*},\mathbf{\lambda}^{*}\in\begin{cases}\underset{\mathbf{\theta}_{s}\in\Theta}{\text{argmin}}\,\,\mathfrak{L}_{\theta}(\mathbf{\theta}_{s},\mathbf{\lambda})\triangleq L(\mathbf{\theta}_{s}|\mathfrak{D})+\sum_{g\in\mathcal{G}}\lambda_{g}\tilde{\psi}_{g}\left(\mathbf{\theta}_{s},\mathbf{\theta}_{d}\right)\\ \underset{\mathbf{\lambda}\geq 0}{\text{argmax}}\,\,\mathfrak{L}_{\lambda}(\mathbf{\theta}_{s},\mathbf{\lambda})\triangleq\sum_{g\in\mathcal{G}}\lambda_{g}\big{(}\psi_{g}\left(\mathbf{\theta}_{s},\mathbf{\theta}_{d}\right)-\epsilon\big{)},\end{cases} \tag{7}\]
Specifically, we choose surrogates \(\tilde{\psi}_{g}\) given by the _excess (negative) loss gaps_: \(\tilde{\psi}_{g}\left(\mathbf{\theta}_{s},\mathbf{\theta}_{d}\right)\triangleq-\big{(}L(\mathbf{\theta}_{d}|\mathfrak{D}_{g})-L(\mathbf{\theta}_{s}|\mathfrak{D}_{g})\big{)}+\big{(}L(\mathbf{\theta}_{d}|\mathfrak{D})-L(\mathbf{\theta}_{s}|\mathfrak{D})\big{)}\). Note that \(\tilde{\psi}_{g}\) has the same structure as \(\psi_{g}\), but replaces accuracy measurements with _negative_ loss terms. This is a reasonable choice of surrogate function since _drops_ in accuracy for the sparse model correspond to _increases_ in loss.
Eq. (7) represents a two-player, non-zero-sum game. Rather than replacing the non-differentiable constraints with their surrogates everywhere, this approach only performs the replacement _when necessary_, i.e., for computing gradients for the primal parameters. Preserving the actual constraints in the dual objective \(\mathfrak{L}_{\lambda}(\mathbf{\theta}_{s},\mathbf{\lambda})\) is useful as it results in a problem closer to Eq. (6).
Equation (7) can be optimized via gradient descent on \(\mathbf{\theta}_{s}\) (based on \(\mathfrak{L}_{\theta}\)) and gradient ascent on \(\mathbf{\lambda}\) (based on \(\mathfrak{L}_{\lambda}\)). Alternating gradient descent-ascent (Alt-GDA) updates yield:
\[\lambda_{g}^{(t+1)} =\left[\lambda_{g}^{(t)}+\eta_{\lambda}\left(\psi_{g}\left(\mathbf{ \theta}_{s}^{(t)},\mathbf{\theta}_{d}\right)-\epsilon\right)\right]_{+} \tag{8}\] \[\mathbf{\theta}_{s}^{(t+1)} =\mathbf{\theta}_{s}^{(t)}-\eta_{\theta}\left[\nabla_{\theta}L\left( \mathbf{\theta}_{s}^{(t)}|\mathfrak{D}\right)+\sum_{g\in\mathcal{G}}\lambda_{g}^{ (t+1)}\,\nabla_{\theta}\tilde{\psi}_{g}\left(\mathbf{\theta}_{s}^{(t)},\mathbf{\theta }_{d}\right)\right], \tag{9}\]
where \(\eta_{\theta}\) and \(\eta_{\lambda}\) are step-sizes and \([\,\cdot\,]_{+}=\max(\cdot,0)\). We initialize the Lagrange multipliers to \(\mathbf{\lambda}^{(0)}=\mathbf{0}\). Appendix A contains more details on non-convex constrained optimization.
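Before stating the full procedure, the following PyTorch-style sketch shows how one Alt-GDA step of Eqs. (8)-(9) can be organized. It is a simplified illustration under our own naming (the authors' implementation relies on the Cooper library), and it assumes the non-differentiable constraint values `psi_g` and the differentiable surrogates `surrogate_psi_g` have already been computed for the current batch.

```python
import torch

def alt_gda_step(optimizer, lambdas, psi_g, surrogate_psi_g, loss, eps, eta_lambda):
    """One alternating GDA step for the two-player game in Eq. (7)."""
    # Dual ascent on the *true* (non-differentiable) constraints, Eq. (8),
    # followed by projection onto the non-negative orthant.
    with torch.no_grad():
        lambdas += eta_lambda * (psi_g - eps)
        lambdas.clamp_(min=0.0)

    # Primal descent on the surrogate Lagrangian, Eq. (9). The multipliers are
    # detached: gradients flow only through the loss and the surrogates.
    lagrangian = loss + (lambdas.detach() * surrogate_psi_g).sum()
    optimizer.zero_grad()
    lagrangian.backward()
    optimizer.step()
```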
```
0:\(\theta\): Initial model parameters, \(\eta_{\theta}\): Primal step-size, \(\eta_{\lambda}\): Dual step-size, \(k\): Memory size for replay buffer, \(\epsilon\): Tolerance hyper-parameter, \(B\): Batch size, \(T\): Total number of iterations, \(A_{\mathsf{dense}}^{g}\): Accuracy of the dense model on each group \(g\), \(A_{\mathsf{dense}}\): Aggregate accuracy of the dense model.
1:\(\lambda_{g}\gets 0,\quad\forall g\in\mathcal{G}\)\(\triangleright\)Initialize dual parameters
2:\(\texttt{buf}_{g}\leftarrow\texttt{queue}(k),\quad\forall g\in\mathcal{G}\)\(\triangleright\)Initialize replay buffer
3:for\(\texttt{iter}=1,\ldots,T\)do
4:\(\mathbf{x},\mathbf{y},\mathbf{g}\leftarrow\text{Sample }\{(x_{i},y_{i},g_{i})\}_{i=1}^{B}\sim \mathfrak{D}\)\(\triangleright\)Sample batch from training set
5:\(\texttt{idx}_{g}\leftarrow(\mathbf{g}==g),\quad\forall g\in\mathcal{G}\)\(\triangleright\)Calculate sub-group indices for batch
6:\(\tilde{\mathbf{y}}\leftarrow h_{\theta}(\mathbf{x})\)\(\triangleright\)Compute forward-pass
7:\(\texttt{buf}_{g}\leftarrow\text{UpdateBuffer}(\texttt{buf}_{g},\tilde{ \mathbf{y}},\mathbf{y},\texttt{idx}_{g}),\quad\forall g\in\mathcal{G}\)\(\triangleright\)Update replay buffer
8:\(\psi_{g}\leftarrow\text{QueryBuffer}(\{\texttt{buf}_{g}\}_{g\in\mathcal{G}},\{A_{\mathsf{dense}}^{g}\}_{g\in\mathcal{G}},A_{\mathsf{dense}})\)\(\triangleright\)Query replay buffers
9:\(\tilde{\psi}_{g}\leftarrow\text{ComputeSurrogate}(\tilde{\mathbf{y}},\mathbf{y},\texttt{idx}_{g}),\quad\forall g\in\mathcal{G}\)\(\triangleright\)Compute surrogates
10:\(\lambda_{g}\leftarrow\max\{0,\,\lambda_{g}+\eta_{\lambda}(\psi_{g}-\epsilon)\},\quad\forall g\in \mathcal{G}\)\(\triangleright\)Update dual params
11:\(\texttt{grad}_{\theta}\leftarrow\nabla_{\theta}\left[L\left(\theta|(\mathbf{x}, \mathbf{y})\right)+\sum_{g\in\mathcal{G}}\lambda_{g}\tilde{\psi}_{g}\right]\)\(\triangleright\)Compute primal gradient
12:\(\theta\leftarrow\text{PrimalOptimUpdate}(\eta_{\theta},\texttt{grad}_{\theta})\)\(\triangleright\)Update model params
13:endfor
14:return\(\theta\)
```
**Algorithm 1** Constrained Excess Accuracy Gap (CEAG)
### Stochastic Constraints and Replay Buffers
In practice, the problem in Eq. (5) is solved by using mini-batch samples from the dataset to estimate the objective function, the constraints, and their gradients. This procedure can yield constraint estimates with high variance across mini-batches, especially for under-represented groups; or for all groups when the number of constraints is large. In extreme cases, a mini-batch may contain very few samples from a given sub-group, leading to multiplier updates based on very noisy estimates.
We overcome these issues by estimating constraints based on information across multiple mini-batches. For calculating AGs, (i) we compute the performance of the dense model _on the whole dataset_ (once at the beginning of training), and (ii) we estimate the accuracy of the sparse model from per-sample accuracy measurements on the \(k\) most recent datapoints of each group. We refer to the data structure that stores historic accuracies as a _replay buffer_ (RB), given the analogy to the technique used in reinforcement learning (Mnih et al., 2013). The choice of buffer size \(k\) introduces a trade-off between reducing the variance of the constraints, and biasing estimates towards old measurements.
These adjustments reduce the variance of the constraints, thus yielding stable updates for the multipliers. This allows us to solve Eq. (5) in settings with large numbers of constraints relative to the choice of batch size. We do not apply variance reduction on the model updates. For details on our implementation of replay buffers, see Appendix C. For experimental evidence on their benefits, see SS5.3 and Appendix C.1.
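One possible implementation of such a buffer is a fixed-length queue of per-sample correctness flags for each group; the class and method names below are ours, chosen for illustration.

```python
from collections import deque

class GroupAccuracyBuffer:
    """Stores the k most recent per-sample correctness flags for each group."""

    def __init__(self, num_groups, k):
        self.buffers = [deque(maxlen=k) for _ in range(num_groups)]

    def update(self, preds, targets, groups):
        # One boolean per sample: was the sparse model's prediction correct?
        for p, y, g in zip(preds, targets, groups):
            self.buffers[g].append(p == y)

    def accuracy(self, g):
        # Low-variance estimate of the sparse model's accuracy on group g,
        # aggregated over up to k recent samples from multiple mini-batches.
        buf = self.buffers[g]
        return sum(buf) / len(buf) if buf else 0.0
```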
### Algorithmic Details
Algorithm 1 presents our approach for solving CEAG. Note that Algorithm 1 is applicable to a broader class of constrained optimization problems with stochastic constraints, including the equalized loss formulation of Tran et al. (2022) (see Appendix B.1 for details).
**Computational Overhead.** The constrained approach in Algorithm 1 represents a negligible computational overhead compared to fine-tuning the sparse model with empirical risk minimization. An iteration of Alt-GDA (Eq. (8)) requires _one forward pass and one backward pass_ through the model since the same iterate \(\mathbf{\theta}_{s}\) is used for both the primal and dual updates. This matches the cost of gradient descent for ERM, except for the minimal overhead associated with the evaluation of constraints after the forward pass. Note that, given our choice of surrogate, the gradient of the Lagrangian is a weighted average of the per-sample loss gradients, thus autograd frameworks can compute it as efficiently as \(\nabla_{\theta}L\left(\mathbf{\theta}_{s}|\mathfrak{D}\right)\). For empirical evidence supporting this claim, see Appendix E.1.
**Memory Cost.** The memory overhead of our approach is negligible in the context of training deep networks: storing the dual variables requires one float per constraint, and the buffers (for CEAG) store one boolean per data point in the restricted buffer memory indicating whether the point was predicted correctly.
## 5 Experiments
In this section, we present an empirical comparison between naive fine-tuning, equalized loss (Tran et al., 2022), and our proposed CEAG approach. The main goal of our experiments is to train sparse models with low pruning-induced disparity. While low disparity may introduce a trade-off with aggregate performance, we aim to achieve comparable overall accuracy to no-mitigation methods. We explore the reliability and accountability of our approach, along with the effect of replay buffers on the constrained optimization problem. Our experiments demonstrate that our method successfully scales to problems with hundreds of groups.
### Experimental Setup
**Tasks and architectures.** We carry out experiments on the FairFace (Karkkainen and Joo, 2021) and UTKFace (Zhang et al., 2017) datasets, following the works of Lin et al. (2022) and Tran et al. (2022). Additionally, we perform experiments on CIFAR-100 (Krizhevsky, 2009), a task with a large number of sub-groups. The choice of target and group attributes for each dataset is specified in Appendix D.1. The architectures for each task, and the source of our pre-trained models are presented in Appendices D.3 and D.4.
**Baseline methods.** We compare with three baseline mitigation methods (i) NFT: the last iterate when fine-tuning the sparse model via ERM, (ii) NFT+ES: the best iterate of NFT in terms of test accuracy (early stopping), and (iii) EL+RB: our implementation (See Appendix B.1) of the equalized loss formulation proposed by Tran et al. (2022). The optimization hyper-parameters employed for each mitigation method (including CEAG) are described in Appendix D.6.
**Model pruning.** Previous work has shown that iterative magnitude pruning (IMP) (Zhu and Gupta, 2017) achieves SOTA aggregate performance on unstructured sparsity tasks (Blalock et al., 2020). Because of this (and its simplicity), we employ unstructured IMP on all our tasks. IMP gradually prunes the model by removing parameters with the smallest magnitude once every epoch. The remaining weights are fine-tuned in between pruning episodes. We carry out IMP during the first 15 epochs. Appendix D.5 provides details on this pruning technique.
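For reference, a minimal sketch of the global unstructured magnitude-pruning step at the core of IMP is shown below. This is our simplification: the schedule of Zhu and Gupta (2017) increases sparsity gradually, and in practice the mask is usually restricted to weight (not bias) tensors and re-applied during fine-tuning.

```python
import torch

def magnitude_prune(model, sparsity):
    """Zero out the `sparsity` fraction of parameters with the smallest magnitudes."""
    all_weights = torch.cat([p.detach().abs().flatten() for p in model.parameters()])
    k = max(1, int(sparsity * all_weights.numel()))
    threshold = all_weights.kthvalue(k).values  # k-th smallest magnitude
    masks = []
    for p in model.parameters():
        mask = (p.detach().abs() > threshold).to(p.dtype)
        p.data.mul_(mask)   # prune in place
        masks.append(mask)  # re-apply after each optimizer step to keep weights pruned
    return masks
```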
**Choice of sparsity levels.** For very high levels of unstructured sparsity (over 95%), Gale et al. (2019) observe that pruning has a devastating impact on the overall performance of ResNet-50 models (He et al., 2016). In contrast, performance remains essentially unaffected for models with up to 85% sparsity. These observations may not carry over to other architectures such as MobileNets (Sandler et al., 2018), or other ResNets. Nonetheless, our experiments stick to the [85%, 95%] range, except for FairFace experiments, where 99% sparsity has been considered by FairGrape (Lin et al., 2022).
**Software.** Our implementations use PyTorch 1.13.0 (Paszke et al., 2019) and the Cooper library for constrained optimization (Gallego-Posada and Ramirez, 2022).
**Experimental uncertainty.** All metrics reported in our tables and plots follow the pattern avg \(\pm\) std. Unless mentioned otherwise, all our experimental metrics are aggregated across 5 seeds.
For comprehensive experimental results across multiple tasks and sparsity levels, see Appendix F.
### FairFace and UTKFace
**ResNet-34 Models on FairFace.** Table 1 includes results for FairFace classification at 99% sparsity. We compare the behavior of NFT, NFT+ES, EL+RB, and CEAG. We quote the results reported for the FairGRAPE technique3, aggregated over 3 seeds.
Footnote 3: We do not re-run FairGRAPE owing to its high computational cost, see discussion in Appendix E.2
We observe that CEAG attains a feasible model in training (\(\max_{g}\psi_{g}\leq\epsilon\)), as well as the smallest \(\max_{g}\psi_{g}\) both in the training and test sets. This does not come at the cost of aggregate performance, as all methods achieve a comparable test accuracy of around 65%. We observe that FairGRAPE's \(\max_{g}\psi_{g}\) and \(\Psi_{\text{PW}}\) are significantly higher than that of all other methods.
\begin{table}
\begin{tabular}{c c|c c c c|c c c} \hline \hline \multirow{2}{*}{**Sparsity**} & \multirow{2}{*}{**Method**} & \multicolumn{4}{c}{**Train**} & \multicolumn{3}{c}{**Test**} \\ & & Accuracy & \(\Psi_{\text{PW}}\) & \(\max_{g}\psi_{g}\) & Tol (\(\epsilon\)) & Accuracy & \(\Psi_{\text{PW}}\) & \(\max_{g}\psi_{g}\) \\ \hline \multirow{4}{*}{99} & NFT & \(76.1\pm 0.2\) & \(3.9\pm 0.9\) & \(2.3\pm 0.3\) & – & \(65.2\pm 0.4\) & \(4.2\pm 0.5\) & \(2.1\pm 0.5\) \\ & NFT + ES & \(74.0\pm 2.5\) & \(7.2\pm 3.3\) & \(4.0\pm 1.4\) & – & \(65.4\pm 0.4\) & \(6.3\pm 2.6\) & \(2.9\pm 1.3\) \\ & EL + RB & \(76.1\pm 0.1\) & \(8.8\pm 1.3\) & \(2.6\pm 0.2\) & – & \(65.1\pm 0.4\) & \(6.0\pm 1.5\) & \(2.4\pm 0.4\) \\ & FairGRAPE & – & – & – & – & \(65.1\pm 0.4\) & \(15.9\pm 10.7\) & \(10.7\) \\ & CEAG & \(76.2\pm 0.1\) & \(3.5\pm 0.6\) & \(1.8\pm 0.4\) & \(\leq 2\)\% & \(65.2\pm 0.4\) & \(4.3\pm 0.8\) & \(2.0\pm 0.3\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Race prediction task on FairFace with race as group attribute. **CEAG achieves a \(\max_{g}\psi_{g}\) within the prescribed threshold**. Tol (\(\epsilon\)) is the tolerance hyper-parameter of CEAG. We do not specify \(\epsilon\) for other formulations as they do not admit a tolerance.
\begin{table}
\begin{tabular}{c c|c c c c|c c c} \hline \hline \multirow{2}{*}{**Sparsity**} & \multirow{2}{*}{**Method**} & \multicolumn{4}{c|}{**Train**} & \multicolumn{3}{c}{**Test**} \\ & & Accuracy & \(\Psi_{\text{PW}}\) & \(\max_{g}\psi_{g}\) & Tol (\(\epsilon\)) & Accuracy & \(\Psi_{\text{PW}}\) & \(\max_{g}\psi_{g}\) \\ \hline \multirow{4}{*}{90} & NFT & \(98.1\pm 0.1\) & \(11.5\pm 0.7\) & \(10.0\pm 0.7\) & – & \(79.6\pm 0.5\) & \(8.9\pm 2.3\) & \(3.1\pm 0.5\) \\ & NFT + ES & \(90.5\pm 4.7\) & \(49.8\pm 23.0\) & \(44.8\pm 20.8\) & – & \(81.0\pm 0.2\) & \(12.0\pm 5.3\) & \(6.9\pm 4.8\) \\ & EL + RB & \(98.3\pm 0.2\) & \(3.2\pm 0.6\) & \(2.4\pm 0.6\) & – & \(79.4\pm 0.5\) & \(11.4\pm 0.9\) & \(3.0\pm 1.1\) \\ & CEAG & \(96.2\pm 0.1\) & \(2.4\pm 0.6\) & \(1.0\pm 0.3\) & \(\leq 3\)\% & \(80.2\pm 0.1\) & \(6.0\pm 2.5\) & \(2.3\pm 1.0\) \\ \hline \multirow{4}{*}{92.5} & NFT & \(95.1\pm 0.2\) & \(34.2\pm 1.6\) & \(30.7\pm 1.5\) & – & \(79.2\pm 0.2\) & \(8.8\pm 3.2\) & \(3.6\pm 1.3\) \\ & NFT + ES & \(91.2\pm 2.7\) & \(53.3\pm 9.6\) & \(48.0\pm 8.3\) & – & \(80.4\pm 0.4\) & \(7.5\pm 3.4\) & \(5.4\pm 3.1\) \\ & EL + RB & \(95.4\pm 0.3\) & \(11.1\pm 1.5\) & \(8.6\pm 1.4\) & – & \(78.7\pm 0.3\) & \(16.8\pm 3.9\) & \(3.3\pm 0.6\) \\ \cline{1-1} & CEAG & \(93.4\pm 0.3\) & \(3.8\pm 0.4\) & \(2.3\pm 0.4\) & \(\leq 3\)\% & \(79.5\pm 0.1\) & \(10.8\pm 2.2\) & \(3.3\pm 1.0\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Race prediction task on the UTKFace dataset with the intersection of race and gender as group attribute, across sparsities. For instance, if a sample has race _Black_ and gender as _Female_, its group label is _Black-Female_. **CEAG consistently achieves a \(\max_{g}\psi_{g}\) within the threshold, across sparsities**.
Figure 2: Trade-off between disparity and accuracy for UTKFace race prediction with race as group attribute. NFT and EL+RB yield models with high disparity. **In contrast, CEAG consistently produces models that mitigate the disparate impact of pruning.** CEAGβs gains do not entail a degradation in overall test accuracy. Vertical dashed lines indicate the tolerance (\(\epsilon\)) of our method, with colors corresponding to different sparsity levels.
**MobileNet-V2 Models on UTKFace.** Fig. 2 illustrates results for UTKFace with race as group attribute. \(\mathsf{CEAG}\,\) consistently attains feasible models in training, and the smallest values of \(\max_{g}\psi_{g}\) in the test set. \(\mathsf{CEAG}\,\) attains comparable performance to \(\mathsf{NFT}\,\) and \(\mathsf{EL+RB}\,\) in the test set.
Table 2 presents results for UTKFace with intersectional groups (race \(\cap\) gender). \(\mathsf{NFT}\,\) and \(\mathsf{NFT+ES}\,\) have very high disparity metrics. In contrast, \(\mathsf{CEAG}\,\) attains a feasible \(\max_{g}\psi_{g}\) and the smallest \(\Psi_{\text{PW}}\) in the training set, for all sparsities. Our approach has worse aggregate performance than \(\mathsf{NFT}\,\) and \(\mathsf{EL+RB}\,\) in the train set; however, the test accuracy of these three methods is comparable.
For \(\mathsf{NFT}\,\), both Fig. 2 and Table 2 show significantly higher disparity metrics in training than in testing. This indicates that the sparse model achieves good performance in training by overfitting to the majority groups while losing substantial performance on the under-represented groups.
### Scaling to Large Numbers of Groups
**CifarResNet-56 models on CIFAR-100.** Table 3 contains results for CIFAR-100 classification at 92.5% sparsity. By having the groups correspond to class labels, constrained formulations for this experiment have 100 constraints. We include two additional experiments to illustrate the importance of replay buffers: equalized loss without replay buffers (\(\mathsf{EL}\,\)), and \(\mathsf{CEAG}\,\) (no RB).
Disparity metrics for \(\mathsf{EL}\,\) and \(\mathsf{CEAG}\,\) are better when employing replay buffers, both on the train and test sets. This difference is more notable for \(\mathsf{EL}\,\). We also observe that \(\mathsf{RBs}\) improve the training dynamics of the dual variables (Appendix C.1). \(\mathsf{CEAG}\,\) obtains the best disparity on the train set. Nonetheless, all approaches have a significant generalization gap in terms of disparity measurements. We observe that the best accuracy and the smallest \(\max_{g}\psi_{g}\) on the test set are obtained by \(\mathsf{EL+RB}\).
## 6 Discussion
It is important to develop techniques that reliably mitigate the disparate impact of pruning since deploying pruned models can have downstream consequences. We observe that \(\mathsf{NFT}\,\) is unsuccessful at doing this, and \(\mathsf{NFT+ES}\,\) amplifies the disparity induced by pruning. In contrast, \(\mathsf{CEAG}\,\) reduces disparity while achieving comparable aggregate performance to \(\mathsf{NFT}\,\). However, _we observe that all mitigation approaches may fail to mitigate disparate impact on unseen data_.
**Mitigating the disparate impact of pruning.** Unlike other mitigation methods, our approach consistently mitigates the disparate impact of pruning on the training set. We observe this across a wide range of tasks and architectures. In contrast, other mitigation approaches generally yield worse maximum degradation \(\max_{g}\psi_{g}\). In particular, \(\mathsf{NFT+ES}\,\) yields models with very high disparity.
**Accuracy trade-off.** \(\mathsf{CEAG}\,\) may introduce a trade-off in terms of accuracy in order to satisfy the disparity requirements. On the train set, we observe a small degradation in performance in comparison to \(\mathsf{NFT}\,\), typically of at most 2%; on the test set, \(\mathsf{CEAG}\,\)'s accuracy is comparable to that of \(\mathsf{NFT}\,\).
**Reliability.** Our approach reliably yields models within the requested disparity levels. Moreover, \(\mathsf{CEAG}\,\) results in the smallest variance of the \(\max_{g}\psi_{g}\) and \(\Psi_{\text{PW}}\) metrics across seeds.
**Generalization.** Although \(\mathsf{CEAG}\,\) reliably satisfies the constraints on the train set, this may not transfer to the test set. We highlight that (i) these generalization issues are present for other mitigation methods, and (ii) our approach generally achieves better test disparity than the baselines. Improving the generalization of disparity mitigation methods is an important direction for future research.
\begin{table}
\begin{tabular}{l l|c c c c|c c c} \hline \hline \multirow{2}{*}{**Sparsity**} & \multirow{2}{*}{**Method**} & \multicolumn{4}{c|}{**Train**} & \multicolumn{3}{c}{**Test**} \\ & & Accuracy & \(\Psi_{\text{PW}}\) & \(\max_{g}\psi_{g}\) & Tol (\(\epsilon\)) & Accuracy & \(\Psi_{\text{PW}}\) & \(\max_{g}\psi_{g}\) \\ \hline \multirow{4}{*}{92.5} & NFT & \(99.8\pm 0.0\) & \(3.7\pm 0.9\) & \(3.0\pm 0.9\) & – & \(64.9\pm 0.4\) & \(26.2\pm 5.2\) & \(14.3\pm 3.4\) \\ & NFT + ES & \(99.3\pm 0.2\) & \(6.8\pm 1.9\) & \(5.8\pm 1.8\) & – & \(65.2\pm 0.4\) & \(27.4\pm 2.3\) & \(14.6\pm 2.0\) \\ & EL & \(98.5\pm 0.1\) & \(11.3\pm 0.9\) & \(9.8\pm 1.0\) & – & \(65.3\pm 0.5\) & \(25.8\pm 2.0\) & \(14.1\pm 1.3\) \\ & EL + RB & \(99.5\pm 0.0\) & \(6.7\pm 1.4\) & \(5.7\pm 1.5\) & – & \(65.3\pm 0.4\) & \(24.2\pm 2.9\) & \(13.3\pm 2.4\) \\ & CEAG (no RB) & \(99.6\pm 0.0\) & \(2.6\pm 0.3\) & \(1.7\pm 0.2\) & \(\leq 2\)\% & \(65.0\pm 0.4\) & \(27.2\pm 2.6\) & \(14.9\pm 2.5\) \\ & CEAG & \(99.6\pm 0.0\) & \(2.4\pm 0.2\) & \(1.6\pm 0.1\) & \(\leq 2\)\% & \(64.8\pm 0.3\) & \(25.0\pm 1.9\) & \(13.8\pm 1.2\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: CIFAR-100 classification with the group attributes being the class labels, at 92.5% sparsity. \(\mathsf{EL}\,\) is the equalized loss formulation without replay buffers; \(\mathsf{CEAG}\,\) (no RB) is similarly defined.
## 7 Conclusion
In this paper, we explore mitigating the disparate impact of pruning. We formalize disparate impact in terms of accuracy gaps between the dense and sparse models, and propose a constrained optimization approach for mitigating it. Our formulation offers interpretable constraints and allows for algorithmic accountability. Although other methods can indirectly reduce disparity, our approach reliably addresses the disparate impact of pruning across a wide range of tasks, while attaining comparable aggregate performance. In particular, our method successfully scales to tasks with hundreds of subgroups. Despite the fact that current mitigation methods exhibit generalization issues, our approach represents a solid step towards mitigating the disparate impact of pruning.
### Ethics Statement
* **Facial recognition.** Our paper makes use of datasets that contain face images. We focus on these datasets as they illustrate the disparate impact of pruning, and for comparisons with previous work. We would like to highlight that although our method focuses on reducing the disparate impact across groups, we do not endorse the use of our algorithm in facial recognition systems.
* **Data annotation.** We use the UTKFace (Zhang et al., 2017) and FairFace (Karkkainen and Joo, 2021) datasets in this work. These datasets include annotations for sensitive demographic attributes such as race, gender, and age. However, it is essential to recognize that these annotations represent normative ways of perceiving gender, race, and age, and we do not endorse or promote these normative categorizations.
* **Ethical sourcing of data.** We do not endorse using datasets where the data may not have been ethically sourced or where the workers/subjects involved in the data collection process were not fairly compensated.
* **Fairness notions.** We explore a specific notion of fairness in this paper. Our framework can be extended to other fairness notions by incorporating additional constraints. However, certain notions of fairness are incompatible with each other, and a "fair" model in one definition could be "unfair" with respect to another (Friedler et al., 2021). Therefore, our method should not be considered a solution to all notions of fairness.
* **Disparate impact of pruning.** In this paper, we consider a problem that mitigates the disparate impact of pruning. Although we solve the formulated problem in training, the generalization issues present in machine learning mean that we do not necessarily mitigate disparate impact in unseen data. Furthermore, we highlight how existing methods do not address this problem completely or reliably either.
* **Deploying pruned models.** We hope our paper brings about an important discussion on the implications of deploying pruned deep learning models in edge devices. As shown in this work, despite the application of mitigation techniques, pruning can exacerbate systemic biases. In particular, given the generalization issues across mitigation methods, it could cause unintended consequences when used in commercial applications.
### Reproducibility Statement
We provide our code4, including scripts to replicate the experiments in this paper. The pseudo-code of our algorithm is described in Algorithm 1. Experimental details, as well as the hyper-parameters used in our experiments, are included in Appendix D. Our implementation uses open-source libraries PyTorch (Paszke et al., 2019) and Cooper (Gallego-Posada and Ramirez, 2022).
Footnote 4: Our code is available here: [https://github.com/merajhashemi/Balancing_Act](https://github.com/merajhashemi/Balancing_Act)
#### Acknowledgments
This research was partially supported by an IVADO PhD Excellence Scholarship, the Canada CIFAR AI Chair program (Mila), a Google Excellence Award, and the Natural Sciences and Engineering Research Council of Canada (NSERC). Simon Lacoste-Julien is a CIFAR Associate Fellow in the Learning in Machines & Brains program.
This research was enabled in part by compute resources, software, and technical help provided by Mila ([https://mila.quebec](https://mila.quebec)).
We would also like to thank Nazanin Sepahvand for support in the initial stages of the project. Additionally, we would like to thank Marwa El-Halabi, Stefan Horoi and Albert Orozco Camacho for their feedback on the paper.
|
2310.20115 | Rectification of Random Walkers Induced by Energy Flow at Boundaries | We explore rectification phenomena in a system where two-dimensional random
walkers interact with a funnel-shaped ratchet under two distinct classes of
reflection rules. The two classes include the angle of reflection exceeding the
angle of incidence ($\theta_{reflect} > \theta_{incident}$), or vice versa
($\theta_{reflect} < \theta_{incident}$). These generalized boundary reflection
rules are indicative of non-equilibrium conditions due to the introduction of
energy flows at the boundary. Our findings reveal that the nature of such
particle-wall interactions dictates the system's behavior: the funnel either
acts as a pump, directing flow, or as a collector, demonstrating a ratchet
reversal. Importantly, we provide a geometric proof elucidating the underlying
mechanism of rectification, thereby offering insights into why certain
interactions lead to directed motion, while others do not. | Vidyesh Rao Anisetti, Sharath Ananthamurthy, J. M. Schwarz | 2023-10-31T01:16:42Z | http://arxiv.org/abs/2310.20115v1 | # Rectification of Random Walkers Induced by Energy Flow at Boundaries
###### Abstract
We explore rectification phenomena in a system where two-dimensional random walkers interact with a funnel-shaped ratchet under two distinct classes of reflection rules. The two classes include the angle of reflection exceeding the angle of incidence (\(\theta_{reflect}>\theta_{incident}\)), or vice versa (\(\theta_{reflect}<\theta_{incident}\)). These generalized boundary reflection rules are indicative of non-equilibrium conditions due to the introduction of energy flows at the boundary. Our findings reveal that the nature of such particle-wall interactions dictates the system's behavior: the funnel either acts as a pump, directing flow, or as a collector, demonstrating a ratchet reversal. Importantly, we provide a geometric proof elucidating the underlying mechanism of rectification, thereby offering insights into why certain interactions lead to directed motion, while others do not.
## I Introduction
Systems in thermal equilibrium do not show rectification, as demonstrated by the Feynman-Smoluchowski ratchet [1; 2]. However, non-equilibrium systems with an underlying spatial asymmetry do exhibit sustained motion rectification, or motor-like behavior [1; 3]. An experimental realization of what is now known as a Brownian motor consists of four vanes that can freely rotate and are surrounded by a vibrated granular gas [4]. This system is out of equilibrium, as the collisions are inelastic and the granular particles are driven by external vibrations (and not thermal fluctuations). Of course, this example naturally leads one to the field of active matter, in which each system constituent consumes energy to produce self-generated motion [5; 6]. Indeed, creating molecular motors/engines using active Brownian particles, be they living or nonliving, has been explored by many [7; 8; 9]. One such experimental example consists of active bacterial baths being used to operate asymmetric gears [10; 11].
One of the simplest forms of an active engine is motion rectification of active matter in the presence of funnel-shaped ratchets [12; 13]. Interestingly, it has been shown that active, or self-propelled, particles show rectification in the presence of funnel-shaped ratchets because of the breaking of detailed balance, which occurs when particles slide along the boundary after encountering it [13]. However, for other types of particle-boundary interactions, for example, pure reflection, rectification is lost, despite the particles being active [8; 14]. Therefore, we go back to a simpler system of non-active Brownian particles and ask the question: What types of particle-boundary interaction rules lead to motion rectification?
By investigating this question, we show that there exists a class of particle-boundary interactions that gives rise to rectification, and that sliding along the boundary is simply a special case of this class. Additionally, we provide a physical understanding of this effect: using particle kinematics and the geometry of the boundary, we give a geometric proof of why rectification occurs for this class of interactions and not for pure reflection. Our approach simplifies the system to its core processes by emphasizing only the essential properties responsible for rectification. This primary simplification entails simulating particle kinematics independently of the forces guiding their trajectories, which is a departure from conventional methods [13]. Our model system consists of a two-dimensional rectangular chamber with a single funnel-shaped ratchet in between (Fig. 2). Within this chamber, we introduce non-interacting random walkers. While these walkers obey the reflection rule upon contacting the rectangular boundary, they exhibit modified reflection behavior when interacting with the funnel.
We define two classes of this modified reflection:
\[(i)\ \theta_{r}=\theta_{i}+\alpha\left(\frac{\pi}{2}-\theta_{i} \right), \tag{1}\] \[(ii)\ \theta_{r}=\theta_{i}-\alpha\theta_{i}\ ;\alpha\in[0,1] \tag{2}\]
Here, \(\theta_{r}\) and \(\theta_{i}\) denote the angles of reflection and incidence, respectively. Rule (i) results in \(\theta_{r}>\theta_{i}\) while Rule (ii) leads to \(\theta_{r}<\theta_{i}\). Each value of \(\alpha\) corresponds to a specific reflection rule. The parameter \(\alpha\) modulates the extent of deviation from the standard law of reflection. When \(\alpha=0\), the modified rule reverts to \(\theta_{r}=\theta_{i}\). Notably, for \(\alpha=1\) in Rule (i), the condition simulates particle sliding post-collision with the funnel, a scenario explored in prior research with active particles [8; 13; 14].
The rest of the manuscript is organized as follows. We detail the simulation methodology, then present our simulation results. A simple, geometric proof helps to interpret our simulation results. We conclude with a discussion of the implications of our findings.
## II Simulations
The system consists of a rectangular box of dimensions 1000 \(\times\) 200, with a single funnel in the middle (Fig. 2). This is different from the geometry studied in Refs. [12; 13], which used multiple funnels. In this system, we
study 5000 non-interacting random walkers following the iterative equations:
\[x_{t+dt} = x_{t}+\lambda\sin(2\pi\zeta(t)) \tag{3}\] \[y_{t+dt} = y_{t}+\lambda\cos(2\pi\zeta(t)), \tag{4}\]
where \((x_{t},y_{t})\) define the position of each particle at time \(t\) and \(\zeta\) is a function that outputs a random number \(\zeta(t)\in[0,1]\) with a uniform distribution. Moreover, \(\lambda\) is the persistence length of these random walkers. In the simulations, \(dt=1\). If a particle encounters the rectangular boundary during a time step, it reflects off it according to the standard law of reflection. However, upon interacting with the funnel, it follows one of the modified reflection rules detailed earlier. If a particle, transitioning from \((x_{t},y_{t})\) to \((x_{t+1},y_{t+1})\), intersects the funnel before completing its step, the point \((x_{t+1},y_{t+1})\) is reflected about the funnel. This point is then rotated about the collision point based on the reflection rule and \(\alpha\) value, as shown in Fig. 1. It is crucial to note that each particle traverses a distance equal to \(\lambda\) in each time step, even during a collision.
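A minimal sketch of the walker update in Eqs. (3)-(4) and the modified reflection angle in Eqs. (1)-(2) is given below; the funnel collision-detection geometry is omitted, and all names are ours, chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def walk_step(pos, lam):
    """One step of length lam in a uniformly random direction, Eqs. (3)-(4)."""
    phi = 2.0 * np.pi * rng.random()
    return pos + lam * np.array([np.sin(phi), np.cos(phi)])

def reflected_angle(theta_i, alpha, rule):
    """Modified angle of reflection at the funnel, Eqs. (1)-(2)."""
    if rule == 1:
        return theta_i + alpha * (np.pi / 2 - theta_i)  # deviates towards the wall
    return theta_i - alpha * theta_i                    # deviates towards the normal

pos = np.array([500.0, 100.0])     # a point inside the 1000 x 200 box
for _ in range(1000):
    pos = walk_step(pos, lam=5.0)  # wall/funnel collision handling omitted here
```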
During the simulations, we observed that certain particles tend to collide asymptotically with the corners in region B, where the funnel merges with the rectangle. These particles generally follow Rule (i) with a high \(\alpha\) value. When such a particle approaches a corner within a distance of less than 0.1 units, it receives a directional "kick". This kick ensures that the particle covers a distance \(\lambda\) in one time step, while also ensuring that the particle's direction lies within the apical angle of the corner.
## III Results
The system is initialized with a uniform number density of random walkers. Gradually, the number density increases in one of the chambers, depending on the reflection rule (see Fig. 3). We observe that an increase in \(\alpha\) leads to a higher left-right number density asymmetry in the system at steady state. Additionally, within each chamber, a higher \(\alpha\) value results in a more heterogeneous steady-state number density distribution. For \(\alpha=0\), signifying perfect reflection, the particles exhibit a uniform distribution across the system.
For Rule (i), we observe a higher particle accumulation in chamber B, with the number density ratio escalating with increasing \(\alpha\). This suggests that the funnel acts as a pump, directing particles into chamber B (see Fig. 3). Conversely, for Rule (ii), more particles gather in chamber A. This indicates a net movement of particles against the funnel's "easy" direction, signifying a ratchet reversal. We further analyzed how the ratio of particle number density across the funnel evolves over time (Fig. 4). For the same \(\alpha\) value, Rule (i) induces a greater asymmetry at steady state than Rule (ii).
To understand this phenomenon fundamentally, we first need to understand why there is no rectification in the case of pure reflection. In the next section, we provide a geometric proof of this.
Figure 1: The left image shows an example of Rule (i) and the right image an example of Rule (ii). The blue outgoing arrow shows the path the particle would have followed under the perfect reflection rule. The green outgoing arrow shows the path the particle takes under the modified reflection rule. Notice that in Rule (i) the particle is deviated towards the wall, while in Rule (ii) the particle is deviated towards the normal.
Figure 3: Snapshot of the simulation at t=1000 for Rules (i) (left column) and (ii) (right column) for different values of \(\alpha\).
Figure 2: The shaded areas mark regions A and B, respectively. The coordinates of the orange dots are shown.
## IV A geometric proof
### Why \(\alpha=0\) shows no rectification
For pure reflection, where \(\alpha=0\), we evaluate the one-step transfer probabilities, \(P_{A\xrightarrow{}B}\) and \(P_{B\xrightarrow{}A}\), of particles transitioning between chambers A and B. Through a simple geometric argument, we show why purely reflecting random walkers do not show rectification in the presence of asymmetric boundaries. In the following subsection, we explain why we see rectification along the easy direction of the funnel in Rule (i) and a ratchet reversal in Rule (ii).
The probability of a particle at \(\vec{r}\) transitioning to the adjacent chamber in a single step is:
\[p(\vec{r})=\begin{cases}\dfrac{\theta_{\text{eff}}(\vec{r})}{2\pi}&\text{if } \lambda\geq R\\ 0&\text{if }\lambda<R,\end{cases} \tag{5}\]
where \(R\) represents the distance between \(\vec{r}\) and the nearest point on the opening, and \(\theta_{\text{eff}}\) designates the angular range facilitating chamber transition. This effective angle, \(\theta_{\text{eff}}(\vec{r})\), comprises contributions from both direct and reflective transfers, as visualized in Fig. 6(a). The one-step transfer probabilities are then:
\[P_{A\xrightarrow{}B}=\int_{A}\dfrac{\theta_{\text{eff}}(\vec{r})}{2\pi}\,d^{2}r\quad;\quad P_{B\xrightarrow{}A}=\int_{B}\dfrac{\theta_{\text{eff}}(\vec{r})}{2\pi}\,d^{2}r \tag{6}\]
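As a numerical illustration of Eq. (6), the direct (reflection-free) contribution to the one-step transfer probability can be estimated by Monte Carlo sampling. The sketch below assumes a hypothetical straight slit of width 20 at \(x=500\) in a flat dividing wall; since it ignores the funnel's sloped walls and the reflective contribution to \(\theta_{\text{eff}}\), it illustrates the estimator rather than reproducing Fig. 6.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
LAM = 5.0                                # persistence length (placeholder)
X_GAP, Y_LO, Y_HI = 500.0, 90.0, 110.0   # hypothetical slit opening

def direct_transfer_prob(x_lo, x_hi, n=200_000):
    """Monte Carlo estimate of the direct part of the one-step transfer
    probability for walkers uniform in [x_lo, x_hi] x [0, 200]."""
    p = np.column_stack((rng.uniform(x_lo, x_hi, n), rng.uniform(0.0, 200.0, n)))
    phi = 2 * np.pi * rng.random(n)
    d = np.column_stack((np.sin(phi), np.cos(phi)))
    with np.errstate(divide="ignore", invalid="ignore"):
        t = (X_GAP - p[:, 0]) / (LAM * d[:, 0])   # step fraction at the wall
    hits = (t > 0) & (t <= 1)                     # wall reached within one step
    y_cross = p[:, 1] + t * LAM * d[:, 1]
    hits &= (y_cross > Y_LO) & (y_cross < Y_HI)   # ... through the open slit
    return hits.mean()

print(direct_transfer_prob(0.0, X_GAP), direct_transfer_prob(X_GAP, 1000.0))
```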
Upon examining Fig. 6(a), the transfer probabilities \(P_{A\xrightarrow{}B}\) and \(P_{B\xrightarrow{}A}\) do not appear to be inherently equal. In region A, the value of \(p(\vec{r})\) is elevated because reflections expand the angular range, augmenting particle transitions to the adjacent chamber. In contrast, region B has a larger area with non-zero \(p(\vec{r})\). Intriguingly, the enhancement of \(P_{A\xrightarrow{}B}\) (attributable to reflection) is precisely offset by the increase in \(P_{B\xrightarrow{}A}\) owing to the larger effective area \(B_{\text{eff}}\) (Fig. 6(c-d)).
This observation underscores a pivotal point: irrespective of the funnel's geometry, regions A and B will maintain equal particle number densities. This holds solely because of perfect reflection.
### Why \(\alpha\neq 0\) is necessary for rectification
To understand what happens in the case of asymmetric reflection, we return to Fig. 6(c). For Rule (i), \(\theta_{r}>\theta_{i}\), so particles with smaller incident angles can now enter region B. This results in a larger \(\theta_{ref}(\vec{r})\), as particles hitting the funnel to the left of point R can now enter region B. Therefore \(\theta_{ref}(\vec{r})>\theta(\vec{r^{\prime}})\), which leads to \(P_{A\xrightarrow{}B}>P_{B\xrightarrow{}A}\). Conversely, for Rule (ii), we get \(\theta_{ref}(\vec{r})<\theta(\vec{r^{\prime}})\), which leads to \(P_{A\xrightarrow{}B}<P_{B\xrightarrow{}A}\).
To validate these deductions, we conducted numerical calculations for \(P_{A\xrightarrow{}B}\) and \(P_{B\xrightarrow{}A}\), juxtaposing them with the steady-state number density ratios derived from our simulations, as depicted in Fig. 5. In steady state, the particle fluxes through the funnel from either side are balanced, which translates to the relation \(n_{A}P_{A\xrightarrow{}B}=n_{B}P_{B\xrightarrow{}A}\). While our observations align well for \(\alpha>0\), discrepancies arise for \(\alpha<0\) (i.e., Rule (ii), plotted as negative \(\alpha\)). This divergence might stem from the prolonged time systems with \(\alpha<0\) require to reach a steady state, as shown in Fig. 4. Hence, using one-step probabilities might not effectively capture the number density ratios in these cases.
Delving deeper, when \(\alpha>0\), particles experience a force directing them towards the boundary, leading to accumulation of particles in the corners of the box. Conversely, for \(\alpha<0\), particles are repelled from the boundary, moving away from the corners, an effect visible in Fig. 3. The scenario for \(\alpha=0\) is distinct, as the density distribution of purely reflecting particles remains uniform, irrespective of boundary geometry.
## V Discussion
Purely reflecting random walkers do not show rectification even for a funnel-shaped boundary. To observe rectification, we introduce a non-equilibrium effect in the system by modifying the reflection rules. When the boundary reflection rule deviates from pure reflection, time-reversal symmetry is broken and particle-boundary interactions become non-reciprocal, which results in the ratchet acting as a Maxwell demon.

Figure 5: The dotted lines show the steady-state number density ratio found from simulation, and the solid line is \(\dfrac{P_{A\xrightarrow{}B}}{P_{B\xrightarrow{}A}}\).
Our result helps connect the different rectification phenomena that have been observed in self-propelled particles, ballistic chains, flexible vesicles, granular gases, and even thermal systems with nontrivial interactions [12; 15; 16; 7; 17]. Self-propelled particles that slide along the boundary after collision follow Rule (i) with \(\alpha=1\)[12], and therefore show rectification along the easy direction of the funnel. Ballistic chains interacting elastically with the boundary show an effective reflection law that resembles Rule (ii), which explains why a ratchet reversal was observed [15]. Deviations from pure reflection can be experimentally achieved by introducing a temperature difference between the gas molecules and the collision surface [18], resulting in an exchange of energy during collision and making the particle-wall interaction "active". Such experimental realizations have the potential to pave the way for innovative engines that harness these non-equilibrium phenomena, potentially leading to the creation of highly efficient engines reminiscent of molecular motors.
|
2309.03877 | Introducing "Forecast Utterance" for Conversational Data Science | Envision an intelligent agent capable of assisting users in conducting
forecasting tasks through intuitive, natural conversations, without requiring
in-depth knowledge of the underlying machine learning (ML) processes. A
significant challenge for the agent in this endeavor is to accurately
comprehend the user's prediction goals and, consequently, formulate precise ML
tasks. In this paper, we take a pioneering step towards this ambitious goal by
introducing a new concept called Forecast Utterance and then focus on the
automatic and accurate interpretation of users' prediction goals from these
utterances. Specifically, we frame the task as a slot-filling problem, where
each slot corresponds to a specific aspect of the goal prediction task. We then
employ two zero-shot methods for solving the slot-filling task, namely: 1)
Entity Extraction (EE), and 2) Question-Answering (QA) techniques. Our
experiments, conducted with three meticulously crafted data sets, validate the
viability of our ambitious goal and demonstrate the effectiveness of both EE
and QA techniques in interpreting Forecast Utterances. | Md Mahadi Hassan, Alex Knipper, Shubhra Kanti Karmaker | 2023-09-07T17:41:41Z | http://arxiv.org/abs/2309.03877v1 | # Introducing "Forecast Utterance" for Conversational Data Science
###### Abstract
Envision an intelligent agent capable of assisting users in conducting forecasting tasks through intuitive, natural conversations, without requiring in-depth knowledge of the underlying machine learning (ML) processes. A significant challenge for the agent in this endeavor is to accurately comprehend the user's prediction goals and, consequently, formulate precise ML tasks. In this paper, we take a pioneering step towards this ambitious goal by introducing a new concept called _Forecast Utterance_ and then focus on the automatic and accurate interpretation of users' prediction goals from these utterances. Specifically, we frame the task as a slot-filling problem, where each slot corresponds to a specific aspect of the goal prediction task. We then employ two zero-shot methods for solving the slot-filling task, namely: 1) Entity Extraction (EE), and 2) Question-Answering (QA) techniques. Our experiments, conducted with three meticulously crafted data sets, validate the viability of our ambitious goal and demonstrate the effectiveness of both EE and QA techniques in interpreting _Forecast Utterances_.
## 1 Introduction
Imagine a conversational AI agent designed to assist end-users, who are not experts in machine learning, with simple forecasting tasks such as estimating a publicly traded stock's future price or predicting the average temperature of a geographic location in the upcoming week. One critical challenge for such an agent is to accurately understand the user's forecasting goals and formulate a precise machine learning task accordingly. As these conversations are expected to happen in real-time, and each user may have unique data sets and data science needs, it is unrealistic to assume that any training data is available to pre-train these conversational agents, making the supervised learning paradigm impractical. As such, the central question we investigate in this paper is how to automatically and accurately understand users' forecasting goals from their utterances in an unsupervised fashion.
**Motivation:** Time series forecasting is an essential tool for informed decision-making and strategic planning across various organizations. It offers valuable insights into future business developments, including revenue, sales, and resource demand. However, small businesses and machine learning enthusiasts from diverse backgrounds often face challenges in harnessing the benefits of time series forecasting due to several factors: 1) Limited ML-related technical expertise among entrepreneurs and business owners; 2) A lack of highly competent data science teams in small businesses; and 3) The high cost of hiring external consultants and data privacy concerns when involving third parties. In this context, a conversational approach that simplifies the application of time series forecasting would be highly appealing. Furthermore, time series forecasting is primarily self-supervised in nature as it relies mostly on historical data without requiring additional human labels, making it an ideal candidate for formulating ML tasks in a zero-shot fashion. With the growing popularity of AI chatbots, the proposed "Conversational Data Science" approach can significantly broaden ML's reach and impact across industries.
**The Conceptual Leap:** Existing _ML_ solutions demand a clear understanding of ML techniques on the user's end and require significant manual effort for executing the end-to-end Data Science pipeline, making Data Science inaccessible to the general public [16]. However, Conversational AI research holds the potential to democratize
data science for a broader audience. This paper connects machine learning pipeline automation with Conversational AI, creating a novel solution for forecasting task formulation by integrating simple yet powerful methods from both disciplines.
**Challenges:** In real-time conversations, end users will provide their dataset "on the fly" and try to formulate a forecasting task on the provided dataset. This makes it impossible to pre-train an ML task formulation model, as the attribute set is different each time. To address this challenge, we frame it as an unsupervised slot-filling problem, where each aspect of the prediction goal represents a slot. We then adopt two zero-shot approaches to fill the slots: 1) Entity Extraction (EE) and 2) Question-Answering (QA) techniques.
**Contributions:** This paper takes an initial step towards formulating prediction tasks through natural conversation by focusing on a specific type of user utterance called a _Forecast Utterance_. We design and develop techniques for automated understanding of users' forecasting goals from these utterances in an unsupervised fashion. The main contributions of this paper include the following:
* Introducing the _Forecast Utterance_, an expression of users' prediction needs and goals via natural language, and creating three benchmark datasets, through extensive manual effort, to promote future research in this area.
* Framing _Forecast Utterance Understanding_ as an unsupervised slot-filling problem, with each prediction need representing a slot. We propose two _zero-shot_ approaches using Entity Extraction (EE) and Question-Answering (QA) techniques to solve the slot-filling task.
* Conducting case studies with three real-world datasets to demonstrate the feasibility and efficacy of the proposed ideas and techniques.
## 2 Related Works
Over the past decade, the machine learning community has made significant advancements in automating machine learning pipelines. Despite these developments, automating Prediction Task Formulation remains a challenge due to its human-centric nature (Karmaker et al., 2021). In parallel, NLP researchers have made significant progress in domains such as 1) Dialog Systems (Weizenbaum, 1966), 2) Slot Filling (Lafferty et al., 2001a), and 3) Zero-Shot Learning (Palatucci et al., 2009), to create more sophisticated conversational agents. For example, Dialog Systems research has evolved through Conversation Topic Prediction, Dialogue State Tracking, and open-domain dialogue system innovations, with Large Language Models (LLMs) like ChatGPT 1 being among the latest developments. On the other hand, the Slot Filling problem has been addressed as a sequence labeling task by leveraging CRFs (Lafferty et al., 2001b), RNNs (Williams and Zipser, 1989), and self-attention transformers (Vaswani et al., 2017), while Zero-Shot Learning (Akata et al., 2013) has primarily focused on recognizing unseen labels and has become a very popular paradigm in ML research recently. In this section, we will delve deeper into these areas and discuss their relevance to our research.
Footnote 1: [https://openai.com/blog/chatgpt](https://openai.com/blog/chatgpt)
**Dialog Systems:** In Dialog Systems research, significant progress has been achieved through advancements in Conversation Topic Prediction (Khatri et al., 2018) and Dialogue State Tracking (DST) (Henderson et al., 2014a,b). DST improvements involve a range of approaches, including schema guidance for better structure (Chen et al., 2020; Zhu et al., 2020; Kapelonis et al., 2022), recursive inference for deeper understanding (Liao et al., 2021), generalization and value normalization for more adaptability (Williams, 2013; Wang et al., 2020), zero-shot transfer learning for data efficiency (Campagna et al., 2020; Rabinovich et al., 2022), and attention modulation for improved focus during inference (Veron et al., 2022). Open-domain dialogue systems have also seen significant advancements. GODEL's (Peng et al., 2022) grounded pre-training adapts to diverse downstream tasks, FusedChat (Young et al., 2022) combines task-oriented and open-domain dialogue for natural conversations, & ChatGPT further enhances conversational agent performance across various applications.
**Slot Filling:** Slot Filling has been studied across applications like recommender systems and chatbots, using approaches such as
RNNs Kurata et al. (2016); Mesnil et al. (2015), integrating CRFs with RNNs Huang et al. (2015); Reimers and Gurevych (2017); Jbene et al. (2022), and self-attention mechanisms for sequence labeling Shen et al. (2018); Tan et al. (2018); Zhao et al. (2022). Joint learning of intent detection and slot-filling has been explored Liu and Lane (2016); Goo et al. (2018); Zhang et al. (2019); Chen and Luo (2023), incorporating few-shot learning and model-agnostic meta-learning Bhathiya and Thayasivam (2020); Krone et al. (2020). Transfer learning has led to zero-shot slot filling approaches Mehri and Eskenazi (2021); Wang et al. (2021); Larson and Leach (2022), enhancing knowledge transfer between pre-trained models and target domains, improving performance on unseen slots and achieving state-of-the-art results.
**Zero-Shot Learning:** The machine learning community tackles unseen class recognition through zero-shot deep learning methods Wang et al. (2019); Rahman et al. (2018); Pourpanah et al. (2023), which can be applied to AutoML solutions using dataset metadata Singh et al. (2021); Drori et al. (2019), enabling real-time model selection for accurate pipelines. Recent research showcases zero-shot time series forecasting using meta-learning frameworks Oreshkin et al. (2021); Abdallah et al. (2022); Van Ness et al. (2023), data augmentation & adversarial domain adaptation Hu et al. (2020); Jin et al. (2022), and ordinal regression recurrent neural networks Orozco and Roberts (2020). However, these approaches don't address Forecast Utterance Understanding for unseen datasets in real-time, which is this paper's primary focus.
**Difference from Previous Work:** Our work distinguishes itself from previous works by introducing a novel concept called "Forecast Utterance" and demonstrating the feasibility of so-called "Conversational Data Science". In contrast to existing Dialog Systems and Slot-Filling research, our primary focus is on understanding an end-user's forecasting needs by framing it as a slot-filling task where each slot represents a unique aspect of the goal prediction task. Additionally, we propose a novel synthetic data generation technique for pre-training the EE and QA models to perform zero-shot inference in real-time.
## 3 Problem Definition
We treat the task as a slot-filling problem, with each aspect of the user's prediction need as a slot. To achieve this, we require an expression language capable of translating abstract goals into a slot-value style format. We introduce the "Prediction Task Expression Language" (PeTEL), consisting of slot/value pairs to define the forecasting task's objectives and constraints.
### Prediction Task Expression Language
Assuming a user provides a relational database schema with tables and relations, we can simplify this by treating all tables, whether they define an entity or a relation, as a single joined "giant" table containing all entities and attributes. Figure 1 demonstrates an example schema.
For the purposes of this work, we employ four slots in the PeTEL expressions: "Target Attribute", "Aggregation Constraint", "Filter Attribute", and "Filtering Constraint" which can be easily extended to an arbitrary number of slots in the future. The values for "Target Attribute" and "Filter Attribute" can be any attribute from the schema, while "Filtering Constraint" captures constraints such as _equal to_ or _greater than_. Meanwhile, "Aggregation Constraint" represents constraints like _count_, _sum_, _average_, etc (see appendix A.2 for details). Consider the following prediction goal and the corresponding _PeTEL_ expression, for example:
_For each airline, predict the maximum delay that any of its flights will suffer next week._
This prediction goal can be expressed by the following _PeTEL_ expression:
Target Attribute: ARRIVAL_DELAY
Filtering Constraint: NONE
Aggregation Constraint: max_agg
Figure 1: Sample Database Schema with all entities and attributes joined together.
### Filling PeTEL Slots
In this section, we discuss the filling process for the _PeTEL_ slot "Target Attribute". We omit details for other slots due to space constraints, as they follow a similar process. Each PeTEL slot is modeled as a random variable, e.g., the _Target Attribute_ slot represents a probability distribution over all attributes in a given schema. These probabilities indicate the likelihood of an attribute being the desired _Target Attribute_ given a _Forecast Utterance_.
We assume a uniform prior over all attributes, meaning each is initially considered equally likely as the target. Upon receiving a _Forecast Utterance_, the agent extracts information and updates its belief about the _Target Attribute_ slot by computing the posterior distribution for the corresponding random variable.
Formally, let the list of attributes in the given schema be \(A=\{a_{1},a_{2},...,a_{n}\}\), and \(q\) represent the _Forecast Utterance_. Our goal is to rank attributes in \(A\) based on \(q\). We assume that a user utterance contains clues about the target attribute, so attributes with higher semantic similarity to \(q\) are more likely to be the _Target_ attribute. Thus, attributes with higher similarity are ranked higher, as they are more likely the desired target attribute.
User utterances are often uncertain and implicit, making it crucial to extract salient phrases for accurate inference. This can be achieved using _Entity Extraction_ (EE) techniques (Nasar et al., 2021) or _Question-Answering_ (QA) techniques (Soares and Parreiras, 2020), where targeted questions identify slots and answers extract salient phrases. Since a one-to-one mapping from salient phrases to candidate attributes is unlikely, one needs to consider the following two probabilities jointly to make an accurate inference about the _Target Attribute_ slot.
1. Given a salient-phrase \(x\) extracted from forecast utterance \(q\), what is the likelihood that \(x\) is indeed relevant to the desired _Target Attribute_? To capture this, we introduce a binary random variable \(R_{x}\), which is defined as the relevance of a salient-phrase with respect to the target attribute. \[R_{x}=\begin{cases}1,&\text{if $x$ is a relevant salient-phrase}\\ 0,&\text{otherwise}\end{cases}\] (1) Here, \(x\) can be any salient-phrase extracted from user utterance, such that \(x\in Z(q)\), where \(Z(q)\) theoretically defines all possible n-grams with \(n=\{1,2,3,...\}\). Mathematically, we need to estimate \(P(R_{x}=1|x)\).
2. Given a relevant salient-phrase \(x\in Z(q)\) and an attribute \(a_{i}\) from the data-base schema, what is the probability that \(a_{i}\) is indeed the target attribute? Mathematically, we need estimation of \(P(a_{i}|x,R_{x}=1)\).
Finally, all attributes in the data-base schema are ranked according to the following joint probability.
\[P(a_{i}|q)=\max_{x\in Z(q)}\left\{P(R_{x}=1|x)\times P(a_{i}|x,R_{x}=1)\right\} \tag{2}\]
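The ranking induced by Equation 2 is straightforward to transcribe into code. The sketch below is a hypothetical helper, not the paper's implementation; it assumes the two probability estimates are already available as a relevance table and a similarity function (with the similarities already normalized over the attributes).

```python
def rank_attributes(attributes, salient_phrases, relevance, sem_sim):
    """Rank schema attributes by P(a_i|q) = max_x P(R_x=1|x) * P(a_i|x, R_x=1).

    attributes:      list of attribute names from the schema
    salient_phrases: phrases extracted from the forecast utterance
    relevance:       dict phrase -> P(R_x=1|x), e.g. normalized EE/QA confidences
    sem_sim:         function (attribute, phrase) -> P(a_i|x, R_x=1)
    """
    scores = {
        a: max(relevance[x] * sem_sim(a, x) for x in salient_phrases)
        for a in attributes
    }
    return sorted(scores, key=scores.get, reverse=True)
```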
### The Zero-Shot Approach
Calculating probability distributions for _PeTEL_ representations is challenging due to the non-deterministic nature of user utterances and the varying number of values for each slot in real-time, unseen datasets. Pre-training a model with variable slot options becomes difficult.
To address this, we propose a Zero-Shot approach for conversational forecasting task formulation. Upon receiving a new dataset, unsupervised heuristic-generated artificial training examples containing probable user utterances are used (explained in Section 4.2). These examples help the model learn the dataset's attributes, granularity, and types. Once trained, the model accurately interprets forecast utterances, as demonstrated by our experiments. With the dataset schema provided, users don't need manual data labeling or annotation, enabling Zero-Shot learning.
## 4 Estimation of PeTEL Expressions
In this section, we first outline our assumptions for estimating PeTEL expressions. Next, we elaborate on the joint probability estimation process from Equation 2. Finally, we discuss the process of ranking candidate attributes using our probability estimates to fill the _PeTEL_ slots.
### Assumptions
The assumptions for our case studies are as follows:
* Our _PeTEL_ expression consists of four slots: _Target Attribute_, _Aggregation Constraint_, _Filter Attribute_ and _Filtering Constraint_.
* A _Forecast Utterance_ may not contain all required information about each slot, i.e., users may provide partial/incomplete information.
* Each slot can have one candidate value.
### Estimation of \(\mathbf{P(R_{x}=1|x)}\)
We present the estimation process for the "Target Attribute" slot, while noting that other slots follow a similar approach. The probability \(P(R_{x}=1|x)\) signifies the likelihood that a salient-phrase (\(x\)) extracted from the forecast utterance (\(q\)) is relevant to the desired target attribute. Here, \(x\in Z(q)\), where \(Z(q)\) represents all possible n-grams in \(q\), with \(n=1,2,3,\ldots\). Since \(Z(q)\) is computationally intractable by definition, we tackle this complexity by utilizing Entity Extraction (EE) and Question Answering (QA) techniques. These methods allow us to extract a limited number of salient-phrases, along with confidence scores, from the forecast utterance and estimate probabilities accordingly.
While EE and QA techniques offer potential in computationally estimating \(P(R_{x}=1|x)\), directly incorporating a pre-trained EE/QA model is unsuitable for our real-time user-provided database schema scenario. Given the lack of a pre-existing training dataset tailored to each unique schema/domain, fine-tuning pre-trained models is unattainable. With a diverse user base expecting assistance in devising forecasting tasks for their distinct problem domains and database schemas, pre-training for every possibility is infeasible.
To address this limitation, we present a robust approach, comprising two methods for synthetic data generation. The first method, a heuristic technique, involves constructing realistic template utterances with empty slots, subsequently populated with relevant attributes and their synonyms derived from the provided schema. This method generates context-specific, custom-crafted examples capturing the core of forecasting tasks. For instance, consider the following template:
_Predict the average \(\_\_\) for each airline tomorrow._
We can fill in the blank of this example template utterance using different attributes and their synonyms from a given user schema.
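A minimal sketch of this heuristic generation step is shown below. The templates and the attribute-synonym table are illustrative placeholders; in the actual setting, both would be derived from the user-provided schema.

```python
import random

# Illustrative placeholders; real templates/synonyms come from the schema.
TEMPLATES = [
    "Predict the average {} for each airline tomorrow.",
    "Forecast the total {} per airport next week.",
]
SYNONYMS = {
    "ARRIVAL_DELAY": ["arrival delay", "delay on arrival"],
    "DISTANCE": ["distance", "flight distance"],
}

def generate_samples(n, seed=0):
    """Fill template slots with attribute synonyms, keeping the gold
    answer span for later EE/QA fine-tuning."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        template = rng.choice(TEMPLATES)
        attribute = rng.choice(list(SYNONYMS))
        phrase = rng.choice(SYNONYMS[attribute])
        samples.append({
            "utterance": template.format(phrase),
            "target_attribute": attribute,
            "answer_span": phrase,
        })
    return samples
```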
The second method utilizes a T5 model fine-tuned on the CommonGen Lin et al. (2020) task, which generates artificial user utterances via a keyword-to-sequence task, aiming to create more natural-like utterances containing specified slots that partially conform to our templates. We developed three versions of the T5 model: the first remains unaltered, the second is fine-tuned with 1,000 templated samples to integrate template essence subtly, and the third is fine-tuned with 10,000 examples to enforce template structure more emphatically. The resulting T5-based synthetic dataset combines a balanced mixture of utterances from each version, ensuring diversity, various degrees of template conformity, and natural language expression.
By leveraging both synthetic datasets and their corresponding slots, we generate training examples in the CoNLL-2003 format Tjong Kim Sang and De Meulder (2003) (for EE) and SQuAD format Rajpurkar et al. (2016) (for QA). This comprehensive foundation allows us to effectively fine-tune pre-trained EE/QA models, thereby enabling accurate extraction of salient phrases and confidence scores from previously unseen utterances. Such confidence scores can then be directly used to provide a reasonable estimate of \(P(R_{x}=1|x)\) (see our experimental results).
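For instance, one synthetic sample can be turned into a SQuAD-style QA training example as sketched below (the question wording is a hypothetical choice, not necessarily the one used for fine-tuning); the CoNLL-2003 variant instead tags each utterance token with a B/I/O label for the answer span.

```python
def to_squad_example(sample, qid):
    """Convert one synthetic sample into a SQuAD-style record so that a
    pre-trained extractive QA model can be fine-tuned on it."""
    context = sample["utterance"]
    answer = sample["answer_span"]
    return {
        "id": str(qid),
        "question": "Which attribute should be predicted?",  # hypothetical wording
        "context": context,
        "answers": {"text": [answer], "answer_start": [context.find(answer)]},
    }
```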
### Estimation of \(\mathbf{P(a_{i}|x,R_{x}=1)}\)
Given a relevant (\(R_{x}=1\)) salient-phrase \(x\) and a candidate attribute \(a_{i}\) from the database schema, \(P(a_{i}|x,R_{x}=1)\) represents the probability that \(a_{i}\) is the desired target. We assume that attributes with high semantic similarity to relevant salient-phrases are more likely to be the target attribute, as users often mention it directly or through synonyms/salient-phrases. Consequently, we model \(P(a_{i}|x,R_{x}=1)\) as proportional to the semantic similarity between \(x\) and \(a_{i}\). Mathematically, \(P(a_{i}|x,R_{x}=1)\propto Sem\_similarity(a_{i},x)\).
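Concretely, this can be computed by embedding the attribute name and the salient phrase and normalizing the cosine similarities into a distribution over attributes, as in the following sketch (clipping negative similarities to zero is our assumption here, not a detail taken from the paper).

```python
import numpy as np

def attribute_posterior(attr_vecs, phrase_vec):
    """Estimate P(a_i | x, R_x=1) from embedding similarities.
    attr_vecs: dict attribute -> embedding vector (e.g. averaged word vectors).
    phrase_vec: embedding of the relevant salient phrase x."""
    def cos(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
    sims = {a: max(cos(v, phrase_vec), 0.0) for a, v in attr_vecs.items()}
    total = sum(sims.values()) or 1.0   # guard against all-zero similarities
    return {a: s / total for a, s in sims.items()}
```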
### Ranking Algorithm for Slot-Filling
Algorithm 1 details the slot-filling process to generate a _PeTEL_ expression from the user's database schema and forecasting goals. As stated in Section 3.2, the "Target Attribute"
values are initialized using a uniform distribution across all schema attributes. Algorithm 1 receives the database schema attributes, initial _PeTEL_ expression, and user utterance as input. It then generates an updated _PeTEL_ expression, incorporating a posterior distribution for the target attribute based on the joint probability calculated from equation 2. The attributes are then ranked based on this probability, with higher values indicating a greater likelihood of being the desired target.
```
Algorithm TaskFormulation()
Input:  Attributes {a_i}, PeTEL, utterance q
Output: PeTEL with revised probability distribution
Initialization: pretrained EE/QA model, PeTEL with uniform distribution

D <- TrainingSetGeneration()
model <- fine-tune the EE/QA model using D
X, X_conf <- salient phrases and confidence scores extracted from q by applying model
P(R_x = 1 | x in X) <- normalize X_conf into a probability distribution
for x in X do
    for a_i in schema do
        P(a_i | x, R_x = 1) <- sem_sim(a_i, x)
    end for
    normalize P(a_i | x, R_x = 1) over all a_i
end for
PeTEL <- re-compute the PeTEL probability distributions using Equation 2
repeat
    show the top item from PeTEL to the user
    if the user agrees then
        return PeTEL
    else
        remove the top item from PeTEL
        re-compute the PeTEL distributions
    end if
until the user agrees or the list is exhausted
```
**Algorithm 1**Forecasting Goal Extraction from User Utterances via Slot-Filling.
## 5 Case-Studies and Experimental Setup
### Data-sets and Evaluation Metric
We experimented with three public Kaggle datasets: Flight Delay (FD)2, Online Food Delivery Preferences (OD)3, and Student Performance (SP)4 (details in Appendix A.3). For evaluation, we created validation sets for the three datasets using human volunteers with data science expertise, who generated utterances expressing forecasting goals. Each instance consists of a user utterance and associated ground-truth slot-value labels. To minimize bias, three volunteers independently created and labeled the datasets. Additionally, a cross-validation process was implemented, where one volunteer reviewed another's dataset. The validation datasets contain 344, 170, and 209 utterances for the **FD**, **OD**, and **SP** datasets, respectively.
Footnote 4: [https://www.kaggle.com/datasets/larsen0966/student-performance-data-set](https://www.kaggle.com/datasets/larsen0966/student-performance-data-set)
We evaluate slot accuracy using _F1_ score and our ranker using _mean reciprocal rank (MRR)_.
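For reference, MRR over a validation set can be computed as in this short sketch, where each ranking is the ordered attribute list produced for one utterance.

```python
def mean_reciprocal_rank(rankings, gold_values):
    """rankings[i]: ranked candidate list for utterance i;
    gold_values[i]: its ground-truth slot value."""
    reciprocal_ranks = [
        1.0 / (ranking.index(gold) + 1)
        for ranking, gold in zip(rankings, gold_values)
    ]
    return sum(reciprocal_ranks) / len(reciprocal_ranks)

# e.g. mean_reciprocal_rank([["DELAY", "DISTANCE"]], ["DISTANCE"]) -> 0.5
```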
### Extraction Techniques, Embeddings
We employed four transformer-based architectures (BERT (Devlin et al., 2018), RoBERTa (Liu et al., 2019), XLNet (Yang et al., 2019), and ALBERT (Lan et al., 2020)) for the EE/QA models in our case studies, and four embeddings (Word2Vec (Mikolov et al., 2013), GloVe (Pennington et al., 2014), FastText (Bojanowski et al., 2017), and USE (Cer et al., 2018)) to capture semantic similarity. The fine-tuning process for the EE/QA models includes two variations: 1) **custom** models, fine-tuned exclusively on artificially generated data from a single dataset, and 2) a **universal** model, fine-tuned on artificially generated data for all three datasets. As discussed in Section 4.2, we employed two methods to generate artificial data: heuristic-based and T5-based. In the case study, models fine-tuned using heuristic-generated artificial data are labeled with the keyword **Heuristic**, while those fine-tuned with T5-generated data are labeled with the keyword **T5**.
### Hyper-parameter Tuning
Due to the nature of our implementation, our experimental results heavily depend on how well the extraction model (EE/QA) extracts salient phrases from the user utterances. This, in turn, depends on the effectiveness of the fine-tuning process described in Section 4.2. Therefore, we performed an exhaustive hyperparameter search using a subset of both of our artificially generated datasets. We then took the highest-scoring set of hyperparameters and fine-tuned the extractive models one more time using the whole training dataset. See Appendix A.6 (Tables 12-15) for details.
## 6 Results
The performance of the fine-tuned extraction models (EE/QA) on the complete training data is displayed in Table 1. By analyzing the results in Table 1, we have identified the preferred model configurations for different settings, as presented in Table 2. We observed that Universal models yield reasonable \(F_{1}\) scores, making them suitable for users with strict memory constraints. Furthermore, T5 models do not outperform Heuristic models across all three test datasets, indicating that if user utterances follow a template like _Forecast Utterance_, training with templated data can be effective.
We present the case study results in Figures 2-5 using the recommended configurations from Table 2. The Mean Reciprocal Rank (MRR) for each slot, derived from the four embedding techniques detailed in Section 5.2, is averaged across all models in Table 2. These visualizations demonstrate MRR performance for each slot based on our case study assumptions. Notably, models perform better on datasets with fewer attributes, which is intuitive. Furthermore, the final MRR score is influenced by the model's extraction task performance. In Figure 2, models excel due to the ease of extracting aggregation constraints (count, sum, etc.), while in Figure 5, performance declines as filtering constraints (high, less, etc.) are more abstractive.
We conducted statistical significance tests on all model pairs (detailed in Appendix, Table 18) using a flat vector of MRR scores from all datasets and slots. The results revealed statistically significant differences between the MRR scores of the Vanilla method and every other extractive model (EE/QA).
### Failure Analysis
In our extractive model, we observe a slower convergence rate when handling utterances with abstract or implicit slot expressions. This is particularly evident in the performance of our proposed T5 and Heuristic models, which struggle to extract filtering constraints (Figure 5) as effectively as they do for other slots (Figures 2-4). Table 3 showcases instances where the Heuristic Universal XLNet model encounters extended convergence times. A closer analysis of these examples reveals that, when users mention slot values implicitly, the model tends to extract incorrect salient phrases, underscoring the difficulties in accurately discerning subtle or abstract expressions within the text.
### Open-Domain Dialog Systems
In this section, we present a qualitative study comparing two recent Open Domain Dialog Systems (ODDS): GODEL and ChatGPT. While a direct comparison between our proposed method and these ODDS is not feasible, we aim to discuss the adaptability of ODDS in the context of this highly ambitious task. Our study encompasses four distinct system setups to evaluate the performance of these models across diverse scenarios. Due to space constraints, we present only one such scenario (see Appendix A.10 for details). For both GODEL and ChatGPT, we provide the following instructions as the system's role: "_You are a Machine Learning expert who will help a small business owner by formulating an interesting problem through conversation. You should perform a slot-filling task based on the conversation and use those slot information to formulate a time series forecasting problem for the user. The slots are aggregation operation, target attribute, filter attribute, and filter operation. You have to infer the slot values from the conversations_". User interactions with GODEL are illustrated in Table 4, while interactions with ChatGPT can be found in Table 5. The study reveals that GODEL fails to converge, while ChatGPT performs relatively well. Both ChatGPT and our proposed model struggle to extract the _Filter operation_ constraint. Given the potential of ChatGPT, we plan to explore it as a future direction for improving the extraction of such constraints and enhancing overall model performance. In summary, ChatGPT shows great promise in materializing the vision of "Conversational Data Science".
## 7 Conclusion
In this paper, we have introduced the concept of _Forecast Utterance_, where, a user expresses their forecasting goals through natural language. We have primarily focused on how to automatically
Figure 4: MRR for Filter Attribute prediction.
Figure 5: MRR for Filter operation prediction.
| User Utterance | Extracted | Count |
| --- | --- | --- |
| predict the total order where the preferred medium of order is online within tomorrow | "within" | 9 |
| Predict the maximum final grade of students where student has extra educational support at school | | |

Table 3: User utterance (first column), extracted salient phrase (second column), and the number of iterations needed to reach the correct slot value (third column).
understand users' prediction goals from such utterances accurately in an _unsupervised_ fashion. As a first step towards the ambitious goal of "Conversational Data Science", we have framed this task as an **unsupervised** slot-filling problem and, accordingly, proposed **zero-shot** solutions based on Entity Extraction(EE)/Question Answering(QA) techniques. Due to the task's
| Speaker | Utterance |
| --- | --- |
| User | Predict the maximum first period grade of students where student has partaken in extra-curricular activities |
| GODEL | Is this something that can help you plan a schedule for your college students? |
| User | Nope, just formulate a ML problem for the utterance. |
| GODEL | Do you have any idea on the methods of determining the maximum amount of first period grade of students where student has partaken in extra-curricular activities? |
| User | Nope, that is your job |
| GODEL | You should get me a job based on this information and that is why I want to get a job. |
| User | You are not helping with the problem creation. |
| GODEL | I agree that you are not helping with the problem creation problem. |

Table 4: User interaction with GODEL. The model hallucinates and shows no path to convergence.
| Speaker | Utterance |
| --- | --- |
| User | Predict the maximum first period grade of students where student has partaken in extra-curricular activities |
| ChatGPT | Can you please tell me the time period for which you want to make this prediction? |
| User | I do not have time period available with me. Please use the given information to predict. |
| ChatGPT | To do this, we can consider each school term as a time period and use the following slot values: Aggregation operation: Maximum; Target attribute: First-period grade; Filter attribute: Participation in extracurricular activities; Filter operation: Equals to "Yes" |
| User | please check the filter operation again. |
| ChatGPT | Apologies for the mistake. Can you please provide the correct filter operation so that I can update the slot values accordingly? |
| User | I do not have the filter operation value. Please check the original input to find the value. |
| ChatGPT | Okay. Just to confirm, did you mean to say that you didn't mention any filter operation value in your original message? |
| User | No. The filter operation value is mentioned in the original message. Please read the message and find the value. |
| ChatGPT | The correct slot values are as follows: Aggregation operation: Maximum; Target attribute: First-period grade; Filter attribute: Participation in extracurricular activities; Filter operation: Equals to "Yes" |
| User | The value of filter operation value is incorrect. |
| ChatGPT | I apologize for making a mistake earlier. Can you please tell me the correct filter operation value so that I can update the slot values accordingly? |
| User | I dont know |
| ChatGPT | Based on your initial request, we can create a time series forecasting problem as follows: Aggregation operation: Maximum; Target attribute: First-period grade; Filter attribute: Participation in extracurricular activities |
| User | Filter operation value is missing in above response. |

Table 5: User interaction with ChatGPT. The model struggles to extract the correct filter operation.
user-centric and real-time nature, the lack of custom training data for fine-tuning EE/QA models in the ranking algorithm was a significant challenge, which we addressed by our proposed heuristic and T5 based synthetic data generation technique. Experimental results show that the proposed synthetic data generation technique significantly improved the accuracy of the overall ranker, and, therefore, demonstrates the feasibility of a general "Conversational Data Science" paradigm. This task is ambitious as well as multidisciplinary in nature and requires attention from multiple research communities, including NLP, AutoML and HCI.
## 8 Limitations
Our work is limited by the following assumptions made in the paper:
* We assumed that our _PeTEL_ expression consists of four slots: _Target Attribute_, _Aggregation Constraint_, _Filter Attribute_ and _Filtering Constraint_, which may not hold in some real-world scenarios.
* In this work, we assumed that a _Forecast Utterance_ may not contain all required information about each slot, i.e., users may provide partial/incomplete information. But, we have not studied how to prompt users to gather further information or ask clarification questions, which is an interesting future work.
* We have assumed that each slot can have at most one value, which may not hold in the real world. We plan to extend our approach to the multiple-values scenario in the future.
Although our case study is limited in many ways, this limited scope should not undermine the potential impact of our proposed idea, i.e., _Forecasting Task Formulation Via Natural Conversation_. Imagine Alexa, Siri, Watson, and other home assistants serving as your personal data scientists. As the first work of its kind, we made a choice regarding the inevitable trade-off between taking a small step towards an ambitious goal vs. taking a significant step towards solving a well-defined problem. Thus, we had to leave more complex scenarios as future work. As a novel perspective, we believe our work will inspire researchers to pursue this ambitious problem and make a substantial impact through wider adoption of such "end-user-in-the-loop" conversational AutoML solutions.
|
2303.17859 | MapFormer: Boosting Change Detection by Using Pre-change Information | Change detection in remote sensing imagery is essential for a variety of
applications such as urban planning, disaster management, and climate research.
However, existing methods for identifying semantically changed areas overlook
the availability of semantic information in the form of existing maps
describing features of the earth's surface. In this paper, we leverage this
information for change detection in bi-temporal images. We show that the simple
integration of the additional information via concatenation of latent
representations suffices to significantly outperform state-of-the-art change
detection methods. Motivated by this observation, we propose the new task of
*Conditional Change Detection*, where pre-change semantic information is used
as input next to bi-temporal images. To fully exploit the extra information, we
propose *MapFormer*, a novel architecture based on a multi-modal feature fusion
module that allows for feature processing conditioned on the available semantic
information. We further employ a supervised, cross-modal contrastive loss to
guide the learning of visual representations. Our approach outperforms existing
change detection methods by an absolute 11.7\% and 18.4\% in terms of binary
change IoU on DynamicEarthNet and HRSCD, respectively. Furthermore, we
demonstrate the robustness of our approach to the quality of the pre-change
semantic information and the absence of pre-change imagery. The code is available
at https://github.com/mxbh/mapformer. | Maximilian Bernhard, Niklas StrauΓ, Matthias Schubert | 2023-03-31T07:39:12Z | http://arxiv.org/abs/2303.17859v4 | # MapFormer: Boosting Change Detection by Using Pre-change Information
###### Abstract
Change detection in remote sensing imagery is essential for a variety of applications such as urban planning, disaster management, and climate research. However, existing methods for identifying semantically changed areas overlook the availability of semantic information in the form of existing maps describing features of the earth's surface. In this paper, we leverage this information for change detection in bi-temporal images. We show that the simple integration of the additional information via concatenation of latent representations suffices to significantly outperform state-of-the-art change detection methods. Motivated by this observation, we propose the new task of _Conditional Change Detection_, where pre-change semantic information is used as input next to bi-temporal images. To fully exploit the extra information, we propose _MapFormer_, a novel architecture based on a multi-modal feature fusion module that allows for feature processing conditioned on the available semantic information. We further employ a supervised, cross-modal contrastive loss to guide the learning of visual representations. Our approach outperforms existing change detection methods by an absolute 11.7% and 18.4% in terms of binary change IoU on DynamicEarthNet and HRSCD, respectively. Furthermore, we demonstrate the robustness of our approach to the quality of the pre-change semantic information and the absence of pre-change imagery. The code will be made publicly available.
## 1 Introduction
Earth observation data has become increasingly available in recent years, providing valuable insights into various fields such as climate research, disaster management, and urban planning. In the course of this, enormous efforts have been made towards creating rich semantic maps of the earth. For instance, OpenStreetMap provides vast amounts of data containing manual annotations around the globe. In addition, deep learning techniques are nowadays used to produce semantic map data at a large scale, e.g., [21] generated building footprints for the entire African continent. As a result, we live in a world that has been extensively mapped. However, we live in a constantly changing world where the earth's surface is subject to various influences. These influences can be natural, e.g., earthquakes, extreme weather, floods, wildfires, and changes in vegetation, but also directly caused by humans, e.g., building construction, deforestation, and agriculture. In fact, the impact of civilization on our planet has never been higher than in recent years, with drastic consequences for ecosystems and wildlife [8, 22]. Therefore, monitoring changes in the earth's surface with up-to-date earth observation data is becoming increasingly important.
Figure 1: Advantage of our approach over existing change detection methods (represented by ChangeFormer [1] here) on a sample of DynamicEarthNet [24]. Employing semantic pre-change information allows the extraction of better features and improves change detection quality.

In technical terms, this challenge is addressed by the task of change detection, where bi-temporal images are compared in order to segment pixels that have semantically changed. Today, deep-learning-based methods for change detection achieve state-of-the-art performance [13, 14]. However, the research community has so far mostly ignored the fact that existing semantic information like maps may be employed for change detection. To the best of our knowledge, semantic map information has been considered as an additional input next to bi-temporal images for detecting change only for the specific task of updating road networks [2]. Although this approach is only possible if semantic information for the area of interest is available, we argue that maps are available for most parts of the world today. Alternatively, pre-trained neural networks can be used to generate a map containing the features of interest. Furthermore, when updating a map, a pre-change version of the map has to be available in the first place.
In this paper, we tap into this direction in a general way and establish the novel task of _Conditional Change Detection_ (see Figure 1). We demonstrate that the usage of semantic information in the form of a pixelwise map has a strong impact on change detection by showing that a simple baseline (concatenating bi-temporal image features and map features) is already able to outperform state-of-the-art change detection methods, which do not use semantic pre-change information. Furthermore, we propose _MapFormer_, a novel architecture to fully exploit the additional information. At its core, we integrate a newly designed multi-modal feature fusion module based on the idea that the pre-change semantics should dynamically influence the processing of hidden features. Additionally, we apply a supervised contrastive loss on the learned image features, guiding the image encoder during training and further improving the performance. We also show that our approach can be applied without bi-temporal imagery, detecting change by processing only a pre-change map and current imagery. We call this task _Cross-modal Change Detection_. We conduct experiments on the publicly available and challenging datasets DynamicEarthNet [24] and HRSCD [7].
To sum up, our main contributions are as follows:
* We investigate the novel tasks of _Conditional Change Detection_ and _Cross-modal Change Detection_ and introduce simple baselines that outperform state-of-the-art change detection methods.
* We propose _MapFormer_, a novel architecture consisting of a multi-modal feature fusion module and a supervised contrastive loss, allowing us to effectively merge image features and semantic information.
* We conduct detailed experiments and ablations demonstrating our approach's consistent performance gains, robustness, and practicability.
## 2 Related Work
Change detection in remote sensing images has attracted much attention over the last years, resulting in numerous publications in this field [13, 14, 17, 1, 10, 11, 30, 16, 18]. Most state-of-the-art methods employ deep learning and introduce various sophisticated ways of merging bi-temporal image features. For instance, [6] proposes a series of fully convolutional architectures for change detection that rely on different fusion mechanisms for the bi-temporal image features. Furthermore, [11] uses NestedUNet [31] in combination with channel attention. With the introduction of transformer architectures into computer vision [9, 3, 26], recent architectures for change detection also utilize attention blocks. BiT [4] was the first of these methods, combining a CNN backbone with cross-attention layers in the decoder head. Subsequently, works like CDViT [20], ChangeFormer [1], FHD [17], and Changer [10] use the attention mechanism in various forms, pushing the state-of-the-art in bi-temporal change detection even further. Our general architecture is inspired by ChangeFormer and FHD, which adopt the MiT backbone and the MLP head from SegFormer [26] and add specialized mechanisms to fuse the bi-temporal image features.
LEVIR-CD [5], DSIFN-CD [28], WHU [12], and S2Looking [19] are common benchmark datasets for bi-temporal change detection, where networks are provided with co-registered, bi-temporal images as input and have to predict a pixelwise, binary change mask. Recently, a new dataset for change detection has been published with DynamicEarthNet [24], which surpasses previous datasets in terms of volume and diversity. In addition, models have to predict semantic segmentation masks containing the land cover classes next to the binary change masks for this dataset. However, none of these methods or datasets consider inputs other than bi- or multi-temporal images. Thus, previous approaches ignore the fact that pre-change semantic information for the area of interest is often available. Only for the specific task of road network extraction, [2] challenged the community to come up with solutions for updating road networks when pre-change road networks are given. Consequently, their approach considers graph data as input and output. In contrast, we consider the pixelwise change detection task when pre-change semantic information in the form of a pixelwise map is available and show that this paradigm opens up exciting new possibilities.
## 3 Conditional Change Detection
We consider two points in time \(t_{1}\) (pre-change) and \(t_{2}\) (post-change). Correspondingly, we have two image versions \(I^{(1)}\) and \(I^{(2)}\) for the same geographic location at \(t_{1}\) and \(t_{2}\), respectively. The ground-truth binary mask \(b\) indicates which pixels in the images have changed semantically between \(t_{1}\) and \(t_{2}\). Furthermore, we consider semantic maps \(m^{(1)}\) and \(m^{(2)}\) containing the semantic class \(c\in\mathcal{C}\) for each pixel in \(I^{(1)}\) and \(I^{(2)}\), respectively. Thus, \(b\) can be obtained from \(m^{(1)}\) and \(m^{(2)}\) via a pixelwise inequality operation. As we propose new tasks in this paper (_Conditional Change Detection_ and _Cross-modal Change Detection_), we provide a differentiated description of the existing and novel tasks here. An overview can be seen in Table 1.
**Existing Tasks** In general, change detection tasks differ in two aspects: input and output. We can distinguish w.r.t. the output between binary and semantic change detection (BCD and SCD) for every combination of inputs. In BCD, the goal is to predict the binary change mask \(\hat{b}\), indicating which pixels have changed semantically. SCD was recently proposed in [24] and constitutes an extension of BCD. Here, not only the binary change mask \(\hat{b}\) is predicted, but also the semantic segmentation \(\hat{m}^{(2)}\) of the post-change image \(I^{(2)}\). This allows us to tell which regions have changed and how they have changed. The corresponding evaluation metric \(SCS\) proposed in [24] combines the binary change IoU \(BC\) and the semantic change metric \(SC\) as their arithmetic mean. The semantic change metric \(SC\) is the standard mIoU from semantic segmentation, except that it is computed only on the pixels that have changed, i.e.
\[SC=\frac{1}{|\mathcal{C}|}\sum_{c\in\mathcal{C}}\frac{|\{b=1\}\cap\{m^{(2)}=c\}\cap\{\hat{m}^{(2)}=c\}|}{|\{b=1\}\cap(\{m^{(2)}=c\}\cup\{\hat{m}^{(2)}=c\})|},\] \[BC=\text{IoU}(b,\hat{b})\quad\text{and}\quad SCS=\frac{BC+SC}{2} \tag{1}\]
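For illustration, the metric in Eq. (1) can be computed directly from pixelwise masks. The following is a minimal NumPy sketch (the helper name and edge-case handling are ours, not from [24]); classes that never occur among the changed pixels are skipped here, whereas the strict formula averages over all of \(\mathcal{C}\):

```python
import numpy as np

def scs_metrics(b, b_hat, m2, m2_hat, classes):
    """Sketch of BC, SC, and SCS (Eq. (1)) from pixelwise masks."""
    b, b_hat = b.astype(bool), b_hat.astype(bool)

    # Binary change IoU (BC)
    union = np.logical_or(b, b_hat).sum()
    bc = np.logical_and(b, b_hat).sum() / union if union > 0 else 0.0

    # Semantic change metric (SC): mIoU restricted to changed pixels
    ious = []
    for c in classes:
        gt_c, pr_c = (m2 == c), (m2_hat == c)
        inter = (b & gt_c & pr_c).sum()
        union_c = (b & (gt_c | pr_c)).sum()
        if union_c > 0:
            ious.append(inter / union_c)
    sc = float(np.mean(ious)) if ious else 0.0

    return bc, sc, (bc + sc) / 2.0  # SCS: arithmetic mean of BC and SC
```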
Regarding the input, the existing settings consider bi-temporal images \(I^{(1)}\) and \(I^{(2)}\). Thus, we will refer to the existing tasks as bi-temporal BCD and bi-temporal SCD.
**Conditional Change Detection** While the aforementioned change detection tasks assume bi-temporal images as input, semantic information in the form of a segmentation map \(m^{(1)}\) at \(t_{1}\) may be employed additionally. That is, given pre- and post-change images together with pre-change semantic information \(m^{(1)}\), the goal is to predict the binary change mask \(\hat{b}\) for Conditional BCD (as well as the semantic segmentation map \(\hat{m}^{(2)}\) for Conditional SCD).
**Cross-modal Change Detection** When pre-change semantic maps are available, change detection can also be conducted with solely uni-temporal instead of bi-temporal imagery. In this case, comparing the semantic information \(m^{(1)}\) with the current imagery \(I^{(2)}\) can suffice to detect semantic changes. In other words, this setting corresponds to Conditional Change Detection with uni-temporal images. We call this task _Cross-modal Change Detection_ as the change has to be detected across different modalities (the semantic pre-change map and the post-change image).
## 4 MapFormer
### Overall Architecture
To handle these new tasks, we propose a framework, which is inspired by [17, 1] (see Figure 2). We feed the images \(I^{(1)}\) and \(I^{(2)}\) through the same backbone to obtain multi-scale feature maps \(f^{(1)}\) and \(f^{(2)}\). For each scale, we have a separate fusion module (see Section 4.2) that handles the features from \(f^{(1)}\) and \(f^{(2)}\) at the corresponding scale. In addition to the image features, in our setting, we also feed semantic information in the form of an encoded map \(g^{(1)}\) into the fusion modules. These map features are obtained from \(m^{(1)}\) via a shallow encoder (see Section 4.4). The outputs of the fusion modules are then multi-scale features \(f\), containing information of both points in time \(t_{1}\) and \(t_{2}\). Thus, we can use them in a prediction head to output the binary change map \(\hat{b}\). In terms of architecture, this prediction head is a standard semantic segmentation head [26]. Furthermore, we apply a contrastive loss on the image features \(f^{(1)}\) and \(f^{(2)}\), which helps the model to learn semantically more meaningful representations (see Section 4.3).
For many applications, it may be interesting to determine into which semantic class a pixel has changed. In this case, we also need to derive a semantic segmentation map \(\hat{m}^{(2)}\) for \(I^{(2)}\). We can accomplish this either by feeding the uni-temporal features \(f^{(2)}\) into a separate head for semantic segmentation or by predicting both \(\hat{b}\) and \(\hat{m}^{(2)}\) based on the merged features \(f\). We analyze both variants in Section 5.5. Training a semantic segmentation head on uni-temporal image features also allows applying the trained model without pre-change information as we can use the predicted semantic segmentation \(\hat{m}^{(1)}\) instead of \(m^{(1)}\). Results for this setting are provided in Section 5.3. Furthermore, for Cross-modal Change Detection, the design of our fusion module (described next) enables us to omit the image features \(f^{(1)}\) from \(I^{(1)}\). We evaluate this setting in Section 5.4.
### Multi-modal Feature Fusion
The main challenge in bi-temporal change detection is to design an effective method for fusing the bi-temporal image features. In the case of Conditional (and Cross-modal) Change Detection, we additionally have to bring together
\begin{table}
\begin{tabular}{l|c c c|c c} \hline **Task** & \multicolumn{3}{c|}{**Input**} & \multicolumn{2}{c}{**Output**} \\ & \(I^{(1)}\) & \(I^{(2)}\) & \(m^{(1)}\) & \(\hat{b}\) & \(\hat{m}^{(2)}\) \\ \hline Bi-temp. Bin. Change Det. (BCD) & ✓ & ✓ & & ✓ & \\ Bi-temp. Sem. Change Det. (SCD) & ✓ & ✓ & & ✓ & ✓ \\ Conditional BCD [new] & ✓ & ✓ & ✓ & ✓ & \\ Conditional SCD [new] & ✓ & ✓ & ✓ & ✓ & ✓ \\ Cross-modal BCD [new] & & ✓ & ✓ & ✓ & \\ Cross-modal SCD [new] & & ✓ & ✓ & ✓ & ✓ \\ \hline \end{tabular}
\end{table}
Table 1: Inputs and outputs for different change detection tasks.
different modalities, i.e., the image features and the available semantic information. To this end, we propose a novel multi-modal feature fusion module. Following [17, 26], we assume that the spatial context is already encoded within the backbone features, so we only consider pointwise operations, treating each spatial location separately.
A standard way of merging multi-modal features is a simple concatenation along the channel dimension. However, we argue that the semantic information in our setting can be used more effectively if we use it to modulate the features dynamically. Thus, the given information should influence the processing of image features and thereby improve perception. Such data-dependent feature processing can be achieved by dynamically predicting the weights of one or more layers from the features themselves [27, 23]. While such techniques are powerful, they come at a significant computational cost. In fact, applying a method like DynamicMLP [27] to a segmentation task like ours by predicting dynamic weights for each pixel separately is intractable without drastically reducing the number of channels (or the spatial resolution). Therefore, we resort to a restricted attention-style approach. More precisely, to keep the computational cost tractable, we restrict the attention to a set of \(K\) values for each output dimension, thereby greatly reducing the complexity compared to unrestricted attention.
A visualization of our proposed multi-modal feature fusion module can be seen in Figure 3. First, we concatenate the bi-temporal features \(f^{(1)}_{(i,j)}\) and \(f^{(2)}_{(i,j)}\) and the map feature \(g^{(1)}_{(i,j)}\) for each spatial location \((i,j)\). This concatenated feature is fed through \(K\) separate MLPs (implemented with grouped, pointwise convolutions) to obtain \(K\) joint representations \((h_{k(i,j)})_{k=1\dots K}\) with \(D_{f}\) channels. Here, \(D_{f}\) refers to the dimensionality of \(f^{(t)}_{(i,j)}\), \(t=1,2\), and \(K\) is a hyperparameter for tuning the semantic variety. These \(K\) joint representations \((h_{k(i,j)})_{k=1\dots K}\) can be interpreted as views, encoding different aspects of the same spatial location. To associate the map information \(g^{(1)}_{(i,j)}\) with these views, we employ a channelwise attention mechanism. In particular, we compute attention weights by feeding the map features \(g^{(1)}_{(i,j)}\) into an additional MLP. The result of shape \(K\times D_{f}\) is softmaxed over the \(K\)-dimension providing a matrix of attention scores with entries \(a_{(i,j,k,d)}\) for location \((i,j)\). Thus, each channel of the module output \(f_{(i,j)}\) separately attends to the \(K\) joint representations \((h_{k(i,j)})_{k=1\dots K}\). In mathematical terms, we compute
\[f_{(i,j)}=\left(\sum_{k=1\dots K}a_{(i,j,k,d)}\,h_{k(i,j,d)}\right)_{d=1\dots D_{f}} \tag{2}\]
The merged features \(f_{(i,j)}\) are then collected for the different scales of the backbone feature maps and passed to the prediction head.
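To make the mechanism concrete, the following PyTorch sketch implements the fusion module under our own assumptions about tensor shapes and module names (the per-view MLPs are realized as grouped pointwise convolutions, as described above; for Cross-modal Change Detection, \(f^{(1)}\) would simply be dropped from the concatenation):

```python
import torch
import torch.nn as nn

class MultiModalFusion(nn.Module):
    """Illustrative sketch of the restricted-attention fusion (Eq. (2))."""

    def __init__(self, d_f, d_g, K=10):
        super().__init__()
        self.K, self.d_f = K, d_f
        d_in = 2 * d_f + d_g
        # K separate two-layer MLPs over concat(f1, f2, g1), realized as
        # grouped pointwise convolutions (one group per view k).
        self.views = nn.Sequential(
            nn.Conv2d(K * d_in, K * d_f, 1, groups=K),
            nn.ReLU(inplace=True),
            nn.Conv2d(K * d_f, K * d_f, 1, groups=K),
        )
        # Attention scores of shape K x D_f, predicted from the map features.
        self.attn = nn.Conv2d(d_g, K * d_f, 1)

    def forward(self, f1, f2, g1):
        B, _, H, W = f1.shape
        x = torch.cat([f1, f2, g1], dim=1)           # (B, 2*d_f + d_g, H, W)
        x = x.repeat(1, self.K, 1, 1)                # one input copy per view
        h = self.views(x).view(B, self.K, self.d_f, H, W)
        a = self.attn(g1).view(B, self.K, self.d_f, H, W)
        a = torch.softmax(a, dim=1)                  # softmax over the K views
        return (a * h).sum(dim=1)                    # Eq. (2), channelwise
```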
### Contrastive Loss
For unchanged areas, we want \(f^{(1)}\), \(f^{(2)}\), and \(g^{(1)}\) to encode the same semantic concepts.
Figure 3: **Our multi-modal feature fusion module** for fusing bi-temporal image features and semantic input features.
Figure 2: **The proposed MapFormer framework** consists of two main components: a novel fusion module, which combines inputs from different points in time and modalities, and a contrastive loss module, which is only used during training.
For changed areas, however, the encoded information in \(f^{(1)}\) and \(g^{(1)}\) should be similar, while \(f^{(2)}\) should be dissimilar from the two. Based on this observation, we design a supervised contrastive loss that forces the network to learn better image features. More precisely, we apply a learnable projection \(\pi\) to map \(f^{(1)}\) and \(f^{(2)}\) into the feature space of \(g^{(1)}\). Then, we compute our contrastive loss as follows:
\[\mathcal{L}_{(i,j)}^{(contr)}=-\text{sim}\left(g_{(i,j)}^{(1)},\pi\big(f_{(i,j)}^{(1)}\big)\right)+\begin{cases}-\text{sim}\left(g_{(i,j)}^{(1)},\pi\big(f_{(i,j)}^{(2)}\big)\right),&b_{(i,j)}=0\\ \max\left(\text{sim}\left(g_{(i,j)}^{(1)},\pi\big(f_{(i,j)}^{(2)}\big)\right),0\right),&b_{(i,j)}=1.\end{cases} \tag{3}\]
Here, \(\text{sim}(\cdot,\cdot)\) denotes the cosine similarity, and the loss is ultimately aggregated over all pixel locations \((i,j)\). Interestingly, we found it beneficial not to use a projection for \(g^{(1)}\) and also not to backpropagate gradients from the contrastive loss through \(g^{(1)}\). Details can be found in Section 5.3.
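A pixelwise implementation of Eq. (3) is straightforward; the sketch below reflects our own reading, with `detach` standing in for the stop-gradient on \(g^{(1)}\) and tensors assumed to have shape \((B,D,H,W)\):

```python
import torch
import torch.nn.functional as F

def contrastive_loss(g1, f1_proj, f2_proj, b):
    """Sketch of Eq. (3); b is the binary change mask of shape (B, H, W)."""
    g1 = g1.detach()                                  # stop-gradient on g1
    sim1 = F.cosine_similarity(g1, f1_proj, dim=1)    # sim(g1, pi(f1))
    sim2 = F.cosine_similarity(g1, f2_proj, dim=1)    # sim(g1, pi(f2))
    unchanged = -sim2                                 # b = 0: pull together
    changed = torch.clamp(sim2, min=0.0)              # b = 1: push apart
    loss = -sim1 + torch.where(b.bool(), changed, unchanged)
    return loss.mean()                                # aggregate over pixels
```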
### Architecture Details
Like [17], we use the MixVisionTransformer from [26] as image encoder backbone. Due to the spatial attention in the encoder, the features \(f^{(t)}_{(i,j)}\) are already contextualized, and a pointwise prediction head after a pointwise fusion module suffices to produce high-quality predictions. We leave SegFormer's prediction head unchanged and solely plug in our proposed feature fusion module between the backbone and the head. Thus, our module can be easily combined with other architectures. Inside our feature fusion module, the merged features \((h_{k})_{k=1...K}\) are produced by \(K\) two-layer MLPs, while the attention weights \(a\) are predicted from \(g^{(1)}\) via a linear layer. To encode the semantic information \(m^{(1)}\) into \(g^{(1)}\), we use a three-layer CNN with a pointwise convolution followed by two convolutions with a kernel size of five and a dilation of two to increase the receptive field. When fusing the encoded map \(g^{(1)}\), we use bilinear interpolation to comply with the different spatial resolutions of the different scales of the image feature maps. The projection head \(\pi\) for the contrastive loss is a pointwise MLP head in the style of SegFormer's prediction head. The overall training loss is the sum of our contrastive loss and the cross-entropy loss for the binary change map (as well as the semantic segmentation in SCD).
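For illustration, the shallow map encoder could be realized as follows (a sketch assuming \(m^{(1)}\) is one-hot encoded with one channel per class; activations and normalization are omitted since they are not specified above):

```python
import torch.nn as nn

def map_encoder(num_classes, d_g):
    """Sketch of the three-layer map encoder; padding preserves spatial size."""
    return nn.Sequential(
        nn.Conv2d(num_classes, d_g, kernel_size=1),                # pointwise
        nn.Conv2d(d_g, d_g, kernel_size=5, dilation=2, padding=4),
        nn.Conv2d(d_g, d_g, kernel_size=5, dilation=2, padding=4),
    )
```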
## 5 Experiments
### Datasets
Since Conditional Change Detection requires semantic segmentation maps, which are not included in commonly used benchmark datasets such as LEVIR-CD [5], DSIFN-CD [28], and S2Looking [19], we conduct our experiments on DynamicEarthNet [24] and HRSCD [7].
DynamicEarthNet is a recent dataset consisting of daily images of 75 areas of interest (AOIs) over a span of two years. Ground-truth annotations containing the seven land use classes "impervious surface", "agriculture", "forest & other vegetation", "wetlands", "soil", "water", and "snow & ice" are only available for the first day of each month and for 55 of the AOIs. As our setting assumes semantic information as input, we only use the images with available annotations and selected 35, 10, and 10 AOIs for training, validation, and testing, respectively. Following [24], we omit the class "snow & ice" from the evaluation because it only occurs in two AOIs. Targets for binary change detection were created by applying a pixelwise inequality to pairs of ground-truth semantic segmentation maps.
The High-Resolution Semantic Change Detection (HRSCD) dataset was proposed in [7]. It contains 291 pairs of images, where the first was acquired in 2005/2006 and the second in 2012. Semantic ground-truth segmentation maps and binary change segmentation maps are available for all samples. The semantic classes are "artificial surface", "agricultural area", "forest", "wetland", and "water". We selected 191, 50, and 50 pairs for training, validation, and testing, respectively. Further details on both datasets can be found in the supplementary material.
### Implementation Details
**Baselines** As the first baseline, we employ SegFormer [26] for change detection by taking the pixelwise inequality of the predicted semantic segmentations \(\hat{m}^{(1)}\) and \(\hat{m}^{(2)}\) as binary change prediction. The concatenation baseline for bi-temporal BCD also resembles the SegFormer architecture, but concatenates the image features \(f^{(1)}\) and \(f^{(2)}\) before passing them to a head that directly predicts the binary change map. Both of these baselines can be easily reused for Conditional BCD. For SegFormer, we can simply apply the pixelwise inequality to \(m^{(1)}\) and \(\hat{m}^{(2)}\) instead of \(\hat{m}^{(1)}\) and \(\hat{m}^{(2)}\). For the concatenation baseline, we can concatenate map features \(g^{(1)}\) extracted by a map encoder as described in Section 4.4 to the image features \(f^{(1)}\) and \(f^{(2)}\). As state-of-the-art methods for bi-temporal BCD, we select FHD [17], ChangerEx [10], and ChangeFormer [1] as they are recent and have been shown to perform best on the common benchmarks. Furthermore, to have a strong competitor for our feature fusion module, we use a pixel-wise Dynamic MLP-C [27] for fusing semantic input with image features. Due to the high memory requirement of this mechanism, we reduced the number of channels by a factor of three to fit the model into our GPU memory.
**Training** We used the same training parameters as [17], except for the number of training steps, which we increased for the datasets used here. All models were trained for 32k iterations with a batch size of 16 (i.e. 32 images per batch), a learning rate of \(6\times 10^{-5}\), and the AdamW optimizer [15]. DynamicEarthNet and HRSCD images were split into tiles of size 512 and 500, respectively. We use a default value of \(K=10\) for most experiments and provide results for \(K=5\) and \(K=15\) to examine the sensitivity to \(K\). For MapFormer, FHD [17], ChangeFormer [1], and the baselines, we use MiT-b2 [26] (24m params) pre-trained on ImageNet as backbone, whereas a ResNeSt50 [29] (25m params) backbone was employed for ChangerEx [10]. To counter the effect of the extreme class imbalance when training FHD, ChangeFormer, and ChangerEx on HRSCD, we had to introduce a class weight of \(4\) on the change class for binary change detection. Note that this was not necessary for MapFormer. All experiments were run on a single 40 GB Nvidia A100 or a comparable device.
### Binary Change Detection
**Overall Performance** In BCD, there is only a single objective (binary change IoU); thus, our approach can be compared most directly to other methods. In Table 2, we present BCD results for baselines, state-of-the-art methods, and our method. It is evident that Conditional BCD greatly outperforms all existing bi-temporal methods, even with the simple concatenation baseline (11.8% vs. 20.1% and 29.6% vs. 45.1% IoU, resp.). MapFormer further pushes the performance on both datasets to 23.5% and 48.0% IoU, respectively. Notably, our method also outperforms DynamicMLP [27], a state-of-the-art method for multi-modal feature fusion. This holds for all tested values of our only hyperparameter \(K\) and even without our contrastive loss (see Ablations). Remarkably, when MapFormer is applied without the true map \(m^{(1)}\), but only predicted segmentation maps \(\hat{m}^{(1)}\) (w/o \(m^{(1)}\)), the performance is still highly competitive, even achieving by far the best performance among the bi-temporal methods on DynamicEarthNet with an IoU of 16.7%. Furthermore, it is noteworthy that SegFormer for Conditional BCD performs poorly, especially on HRSCD. We attribute this to the fact that every misclassified pixel leads to a false positive change prediction in this setup, causing very low precision. In bi-temporal BCD, identical classification errors in \(\hat{m}^{(1)}\) and \(\hat{m}^{(2)}\) may still lead to true negative change predictions.
**Ablations** In the lower part of Table 2, we present several ablation experiments to analyze MapFormer's contributing factors and robustness. First, we study the effect of our design choices in the contrastive loss module. When the contrastive loss is omitted entirely (w/o contrastive), performance drops on both datasets by 2.6% and 0.9% IoU, respectively. Furthermore, we compare MapFormer with a modification where we omitted the stop-gradient operation on the map features \(g^{(1)}\) (w/o stop-gradient) and a version where we included a linear projection layer to map \(g^{(1)}\) to the joint embedding space with \(\pi\big(f^{(t)}\big)\) (w/ map proj.). Neither variant reaches the performance of our proposed method. Thus, we conclude that gradients from the contrastive loss flowing through the map features \(g^{(1)}\) have a detrimental effect on their representation.
Finally, we investigate the effect of the quality of the semantic information \(m^{(1)}\). To this end, we merge some classes in the pre-change maps to simulate that only high-level semantic information is available (high-level info). For DynamicEarthNet, we merge the classes "agriculture", "forest & other vegetation" and "soil" as well as "wetlands", "water" and "snow & ice" into two high-level classes, respectively. For HRSCD, we solely distinguish between "artificial" and "rest". Apparently, this high-level semantic information still suffices to generate better change predictions than the bi-temporal change detection methods. However, we observe a significant performance drop on DynamicEarthNet (5.9% IoU), while the performance only degrades marginally on HRSCD (0.2% IoU). Furthermore, we simulate low-resolution semantic information by downsampling with a factor of 32 (low-res. info). Here, we can see that the performance of our method degrades with the low-resolution input (by 5.6% and 4.8% IoU, resp.). Nevertheless, it is still clearly superior to the bi-temporal change detection methods. In conclusion, our results show that the semantic input quality correlates to the Conditional Change Detection performance. However, low-quality input still
\begin{table}
\begin{tabular}{c|c c|c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c|}{**DynENet**} & \multicolumn{2}{c}{**HRSCD**} \\ & **F1** & **IoU** & **F1** & **IoU** \\ \hline \multicolumn{5}{c}{_Bi-temporal BCD:_} \\ \hline SegFormer [26] & 20.3 & 11.3 & 17.8 & 9.7 \\ Concatenation & 20.7 & 11.6 & 41.2 & 25.9 \\ FHD [17] & 17.2 & 9.4 & 45.2 & 29.2 \\ ChangerEx [10] & 21.1 & 11.8 & 37.0 & 22.7 \\ ChangeFormer [1] & 20.7 & 11.5 & 45.7 & 29.6 \\
**MapFormer\({}_{K=10}\)** (w/o \(m^{(1)}\)) & 28.6 & 16.7 & 43.9 & 28.2 \\ \hline \multicolumn{5}{c}{_Conditional BCD:_} \\ \hline SegFormer [26] & 21.2 & 11.9 & 5.6 & 2.8 \\ Concatenation & 33.5 & 20.1 & 62.1 & 45.1 \\ DynamicMLP [27] & 34.5 & 20.8 & 62.7 & 45.7 \\
**MapFormer\({}_{K=5}\)** & 34.8 & 21.1 & **64.9** & **48.0** \\
**MapFormer\({}_{K=10}\)** & **38.0** & **23.5** & 64.5 & 47.7 \\
**MapFormer\({}_{K=15}\)** & 36.4 & 22.2 & 64.5 & 47.6 \\ \hline \multicolumn{5}{c}{_Ablations (MapFormer\({}_{K=10}\)):_} \\ \hline w/o contrastive & 34.5 & 20.9 & 63.8 & 46.8 \\ w/o stop-gradient & 36.7 & 22.5 & 62.5 & 45.4 \\ w/ map proj. & 35.8 & 21.8 & 60.7 & 45.9 \\ high-level info & 29.8 & 17.6 & 64.4 & 47.5 \\ low-res. info & 30.4 & 17.9 & 60.3 & 43.2 \\ \hline \hline \end{tabular}
\end{table}
Table 2: **(Conditional) Binary Change Detection** as well as ablations. Our method performs best and is still highly competitive without ground-truth semantic input.
suffices to achieve significantly better results than methods not relying on this kind of information. Further ablations can be found in the supplementary material.
### Cross-modal Binary Change Detection
We also evaluate MapFormer on the task of Cross-modal BCD, i.e. change detection when only post-change imagery and pre-change map information are given (see Table 3). Here, the concatenation baseline omits the pre-change image features \(f^{(1)}\). The SegFormer baseline is identical to the one for Conditional BCD in Table 2 as it does not utilize \(\hat{m}^{(1)}\). Again, we can see that MapFormer performs best, although the margin to the concatenation baseline is rather small (0.1% and 0.6% IoU, resp.). However, comparing the results with those in Table 2, we observe that bi-temporal methods are greatly outperformed in this setting as well, and the performance is still relatively close to Conditional BCD, which relies on bi-temporal images. This highlights the value of semantic input for change detection and shows that bi-temporal images are not critical for change detection when semantic pre-change maps are available.
### Semantic Change Detection
For semantic change detection, one can integrate additional semantic segmentation heads into our framework. We provide the results for Conditional SCD and Cross-modal SCD in Table 4. Here, we follow the notation of [24] and denote the binary change IoU with BC (instead of IoU in Tables 2,3). In the first row of Table 4, we reuse the bi-temporal SegFormer baseline of Table 2. The second and third rows correspond to MapFormer, with the semantic segmentation outputs being predicted from the uni-temporal image features (sem. seg. on \(f^{(2)}\)) and the merged features (sem. seg. on \(f\)), respectively. Apparently, the version with a segmentation head applied on \(f^{(2)}\) performs better w.r.t. BC, SC, and SCS, whereas the joint version is superior w.r.t. mIoU. Our reasoning is that the imbalance between changed and unchanged areas in the data causes the model using \(f\) to tend to repeat the given semantic information, which is not accessible through \(f^{(2)}\). This hurts the performance on changed pixels (SC), but leads to higher overall mIoU. In the last row of Table 4, we show the results for MapFormer solely based on uni-temporal images (Cross-modal), using \(f^{(2)}\) for semantic segmentation. Similar to our observations for BCD, this model is competitive to its counterpart for Conditional SCD (25.5% vs. 26.0% and 33.5% vs. 34.7% SCS, resp.). This implies that \(m^{(1)}\) has a stronger impact on the performance than \(I^{(1)}\).
### Qualitative Results
The first row of Figure 4 (a) depicts inputs and ground-truth targets for a sample of DynamicEarthNet. The second row shows the binary change predictions of several methods for this sample. We can see that SegFormer suffers from many false positive predictions, while the other state-of-the-art methods struggle to detect the changed areas. In contrast, MapFormer produces a convincing change mask where only small details are missing.
In Figure 4 (b), we visualize the attention weights of MapFormer for this sample. The visualizations correspond to the argmax over the \(K\)-dimension of the attention weights \(a_{(\cdot,\cdot,k,d)}\) for six different channels \(d\). In other words, the colors in each visualization show which of the features \((h_{k})_{k=1\dots K}\) receive most attention for the selected channels. Comparing them, we can see that the maps focus on different aspects of the semantic information \(m^{(1)}\). For example, the first attention map mostly resembles the original map \(m^{(1)}\), whereas the second and third visualizations resemble coarser versions of \(m^{(1)}\). On the other hand, the fourth map focuses more on small and thin segments such as roads. The remaining two attention maps seem to aim at separating the features of specific semantic classes.
To provide further insight, we compare t-SNE [25] visualizations of the hidden features of MapFormer and FHD in Figure 4 (c). Each point in the visualization represents the hidden feature of one pixel in the sample depicted in subfigure (a). For MapFormer, the feature space is much less entangled, and pixels of different semantic classes as well as changed pixels can be separated relatively well. In contrast, the FHD representations for different semantic and binary change classes are highly mixed, leading to prediction errors.
\begin{table}
\begin{tabular}{c|c c c c|c c c c} \hline \multirow{2}{*}{**Method**} & \multicolumn{4}{c|}{**DynamicEarthNet**} & \multicolumn{4}{c}{**HRSCD**} \\ & **BC** & **SC** & **SCS** & **mIoU** & **BC** & **SC** & **SCS** & **mIoU** \\ \hline SegFormer [26] & 11.3 & **31.5** & 21.4 & 39.7 & 9.7 & 20.3 & 15.0 & 52.7 \\ MapFormer (sem. seg. on \(f^{(2)}\)) & 23.0 & 29.0 & **26.0** & 39.9 & 48.0 & 21.5 & **34.7** & 52.9 \\ MapFormer (sem. seg. on \(f\)) & **23.1** & 13.1 & 18.1 & 61.5 & **48.2** & 12.5 & 30.4 & **72.3** \\ MapFormer (Cross-modal) & 20.4 & 30.7 & 25.5 & **40.3** & 45.6 & **21.7** & 33.5 & 49.9 \\ \hline \end{tabular}
\end{table}
Table 4: **(Conditional) Semantic Change Detection.** Predicting \(\hat{m}^{(2)}\) based on \(f^{(2)}\) yields the best results w.r.t. the main objective SCS.
\begin{table}
\begin{tabular}{c|c c|c c} \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c|}{**DynENet**} & \multicolumn{2}{c}{**HRSCD**} \\ & **F1** & **IoU** & **F1** & **IoU** \\ \hline SegFormer [26] & 21.2 & 11.9 & 5.6 & 2.8 \\ Concatenation & 31.8 & 18.9 & 61.5 & 44.4 \\ MapFormer & **32.0** & **19.0** & **62.1** & **45.0** \\ \hline \end{tabular}
\end{table}
Table 3: **Cross-modal Binary Change Detection.** MapFormer and the concatenation baseline outperform SegFormer.
## 6 Conclusion
In this paper, we have introduced Conditional Change Detection, a new paradigm for change detection. We have shown that using available pre-change semantic information as input to change detection methods leads to significant improvements over existing methods for change detection. This holds even for a simple baseline that solely relies on feature concatenation. To further improve the performance, we have proposed MapFormer, a novel architecture outperforming all compared methods in all considered settings. A limitation of our approach is the requirement of semantic pre-change information. In the future, we plan to mitigate this dependency by leveraging pre-trained models to generate pre-change maps via transfer learning. Further, we see room for improvement w.r.t. the semantic segmentation pipeline of our approach if more sophisticated techniques are integrated for this purpose. We hope this
Figure 4: Qualitative results and insights for a sample of DynamicEarthNet.
work will lead to further research following the avenue of Conditional Change Detection. In particular, we believe that more diverse datasets, more sophisticated methods, and other settings, such as uni-temporal supervision as in [30], will greatly benefit this line of research.
|
2309.08936 | Localization with Noisy Android Raw GNSS Measurements | Android raw Global Navigation Satellite System (GNSS) measurements are
expected to bring smartphones power to take on demanding localization tasks
that are traditionally performed by specialized GNSS receivers. The hardware
constraints, however, make Android raw GNSS measurements much noisier than
geodetic-quality ones. This study elucidates the principles of localization
using Android raw GNSS measurements and leverages Moving Horizon Estimation
(MHE), Extended Kalman Filter (EKF), and Rauch-Tung-Striebel (RTS) smoother for
noise suppression. Experimental results show that the RTS smoother achieves the
best positioning performance, with horizontal positioning errors significantly
reduced by 76.4% and 46.5% in static and dynamic scenarios compared with the
baseline weighted least squares (WLS) method. Our codes are available at
https://github.com/ailocar/androidGnss. | Xu Weng, Keck Voon Ling | 2023-09-16T09:43:40Z | http://arxiv.org/abs/2309.08936v2 | # Localization with Noisy Android Raw GNSS Measurements
###### Abstract
Android raw Global Navigation Satellite System (GNSS) measurements are expected to bring smartphones power to take on demanding localization tasks that are traditionally performed by specialized GNSS receivers. The hardware constraints, however, make Android raw GNSS measurements much noisier than geodetic-quality ones. This study elucidates the principles of localization using Android raw GNSS measurements and leverages Moving Horizon Estimation (MHE), Extended Kalman Filter (EKF), and Rauch-Tung-Striebel (RTS) smoother for noise suppression. Experimental results show that the RTS smoother achieves the best positioning performance, with horizontal positioning errors significantly reduced by 76.4% and 46.5% in static and dynamic scenarios compared with the baseline weighted least squares (WLS) method. Our codes are available at [https://github.com/ailocar/androidGnss](https://github.com/ailocar/androidGnss).
Android smartphones, global navigation satellite system, raw measurements, localization, Google smartphone decimeter challenge datasets
## I Introduction
Localization is an essential technology that underpins the interconnected world. Global Navigation Satellite Systems (GNSS) provide location information that drives industries ranging from defense and agriculture to geomatics and transportation. However, high-performance positioning services are still far away from our daily lives because specialized GNSS equipment is bulky and expensive. Unlike dedicated GNSS devices, ubiquitous portable smartphones integrate affordable GNSS chips and antennas, providing huge potential for everyday location services. Especially since the release of Android raw GNSS measurements, smartphones have been expected to enable various exciting localization-based applications, such as vehicle navigation, intelligent management of city assets, outdoor augmented reality, and mobile health monitoring. Nevertheless, it is difficult to keep such promises due to the large noise present in these measurements collected by current mass-market Android devices.
Researchers around the world have conducted systematic assessments of Android raw GNSS measurements. It is reported that the average carrier-to-noise power density (\(C/N_{0}\)) of smartphones' Global Positioning System (GPS) L1 observations is about ten dB-Hz lower than the representative value for geodetic antennas and receivers [1]. Consequently, the pseudorange noise of phones is about an order of magnitude larger than geodetic-quality measurements [2]. Therefore, Android raw GNSS measurements must be denoised to meet the essential requirement of potential mobile localization applications.
The Extended Kalman Filter (EKF) is widely applied to denoise pseudorange measurements for GNSS receivers [3, 4]. For example, an EKF-based GNSS positioning approach using Android data for maritime applications has been demonstrated in [5], but it fails to give a complete description of how to construct the process model and measurement equations with pseudoranges and pseudorange rates. A Least Squares (LS) localization engine followed by a Kalman filter is introduced in [6], but its connections to Android raw GNSS measurements are not explained in detail. Another work has proposed an EKF with inequality constraints, including vertical velocity, direction, and distance constraints, to process Android GNSS data [7]. Its experimental results show obvious
Fig. 1: Localization results using the Weighted Least Squares (WLS) method, Moving Horizon Estimation (MHE), Extended Kalman Filter (EKF), and Rauch-Tung-Striebel (RTS) smoother with Android raw GNSS measurements collected by Xiaomi Redmi K60
improvement in localization performance credited to these constraints. However, like the aforementioned studies, there are no clear specifications of its implementation over Android raw GNSS measurements.
Moving Horizon Estimation (MHE) also shows potential for state estimation from noisy measurements [8]. Sophisticated tracking loops have been designed to enhance the performance of GNSS receivers in challenging scenarios involving high dynamics, signal fading, strong ionosphere scintillation, and so on [9, 10]. In the measurement domain, MHE has been applied to vehicular localization using GNSS measurements aided by map boundaries [11]. Besides, in the area of high-accuracy positioning, a state-space-varied MHE algorithm has been proposed to handle the inconstant states present in precise point positioning (PPP) algorithms for low-cost receivers and Android smartphones in harsh environments [12]. This prior research has focused on the design of advanced algorithms but ignores implementation details, and MHE-based pseudorange positioning with Android raw GNSS measurements has yet to be fully covered.
In addition to these filtering methods, the Rauch-Tung-Striebel (RTS) smoother, a post-processing algorithm, also delivers exceptional noise mitigation [13]. The RTS smoother has been applied to integrated navigation systems [14, 15]. Nevertheless, when it comes to RTS smoother-based localization using Android raw GNSS measurements, only code snippets are available on the internet [16]; a systematic technical study is still lacking in the GNSS community.
Previous research has rarely provided a detailed and systematic introduction to pseudorange-based positioning using noisy Android raw GNSS measurements, especially engineering details about adapting the well-established filtering or smoothing theories to Android data, including measurement acquisition, noise modeling, and exception handling. Unlike studies above that concentrate on the design of filtering or smoothing algorithms, our work thoroughly explains the adaptation of various filtering or smoothing algorithms to noisy Android raw GNSS measurements from an engineering perspective. Our main contributions are listed below:
* We detail how to calculate position, velocity, and time (PVT) using Android raw GNSS measurements.
* We employ MHE, EKF, and RTS smoother to filter or smooth noisy Android raw GNSS measurements to improve localization performance. We design finite state machines for these algorithms to address Android data discontinuity. To the best of our knowledge, our work is the first to systematically go into detail about how to filter or smooth Android raw GNSS measurements.
* We evaluate these algorithms using static data we collected and dynamic data from Google.
## II Determining Position, Velocity, and Time using Android Raw GNSS Measurements
This section details the pseudorange-based position, velocity, and time (PVT) determination using the Weighted Least Squares (WLS) algorithm with Android raw GNSS measurements.
### _Calculating Pseudorange Measurements with Android Raw GNSS Measurements_
Android phones record the moments related to the propagation of GNSS signals. The time when GNSS signals are transmitted, \(t_{Tx}\), is logged as _ReceivedSvTimeNanos_ in nanoseconds in the respective GNSS time reference system, i.e.,
\[t_{Tx}=ReceivedSvTimeNanos-\Delta t_{constellation}\]
where \(\Delta t_{constellation}\) represents the time difference between the current constellation time and the GPS reference time. The time when Android smartphones receive GNSS signals is calculated as follows:
\[t_{Rx} =TimeNanos+TimeOffsetNanos\] \[-(FullBiasNanos(1)+BiasNanos(1))\] \[-weekNumberNanos\]
where _TimeNanos_ is the internal hardware clock of Android phones in nanoseconds. _TimeOffsetNanos_ is the offset between \(TimeNanos\) and the actual measurement time. _FullBiasNanos_ estimates the difference between smartphone clocks and GPS time in full nanoseconds, while _BiasNanos_ is the sub-nanosecond section. Here, we use the initial values of _FullBiasNanos_ and _BiasNanos_ to include the hardware clock drift into \(t_{Rx}\). _weekNumberNanos_ represents the total full weeks in nanoseconds since midnight on January 5-6, 1980, and is calculated as follows:
\[weekNumberNanos=\lfloor\frac{-FullBiasNanos}{NumberNanoSecondsWeek}\rfloor\] \[\times NumberNanoSecondsWeek\]
where _NumberNanoSecondsWeek_ represents the total nanoseconds in a week. Because _FullBiasNanos_ is a negative value, there is a minus sign before it.
After obtaining the moments when GNSS signals are transmitted and received, the pseudorange measurements \(\rho\) can be calculated using the speed of light \(c\) as follows:
\[\rho=(t_{Rx}-t_{Tx})\,c.\]
Note that \(t_{Tx}\) and \(t_{Rx}\) are in the format of time of week (TOW), so the week rollover should be considered when we calculate the difference between them.
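Putting the above together, pseudorange formation can be sketched as follows (Python; the argument names mirror the Android API fields quoted above, while the function layout and the simple rollover guard are our own):

```python
C = 299_792_458.0                       # speed of light [m/s]
NS_PER_WEEK = 604_800 * 1_000_000_000   # NumberNanoSecondsWeek

def pseudorange_m(time_nanos, time_offset_nanos, received_sv_time_nanos,
                  full_bias_nanos_0, bias_nanos_0, dt_constellation_ns):
    """Sketch: pseudorange [m] from Android raw GNSS measurement fields."""
    # FullBiasNanos is negative, so -full_bias_nanos_0 is positive
    week_number_nanos = (-full_bias_nanos_0 // NS_PER_WEEK) * NS_PER_WEEK
    t_rx = (time_nanos + time_offset_nanos
            - (full_bias_nanos_0 + bias_nanos_0) - week_number_nanos)
    t_tx = received_sv_time_nanos - dt_constellation_ns
    dt = t_rx - t_tx
    if dt < -NS_PER_WEEK / 2:           # guard against week rollover in TOW
        dt += NS_PER_WEEK
    return dt * 1e-9 * C
```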
Android phones directly provide the pseudorange rate measurement _PseudorangeRateMetersPerSecond_ and its 1-\(\sigma\) uncertainty _PseudorangeRateUncertaintyMetersPerSecond_.
### _Pseudorange and Pseudorange Rate Models_
After removing the satellite clock offset and atmosphere delays, which can be modeled and computed using broadcast ephemeris, we are able to write the corrected pseudorange \(\rho_{c_{k}}^{(n)}\) from the \(n^{th}\) satellite to a phone at the \(k^{th}\) epoch as
\[\rho_{c_{k}}^{(n)}=r_{k}^{(n)}+\delta t_{u_{k}}+\varepsilon_{k}^{(n)} \tag{1}\]
where \(r_{k}^{(n)}\) denotes the geometric range from the \(n^{th}\) satellite to the phone. \(\delta t_{u_{k}}\) represents the clock offset of the phone relative to the GNSS reference time. We wrap up the multipath delay, hardware delay, pseudorange noise, modeling residuals of atmosphere delays, and other potential errors into a single term \(\varepsilon_{k}^{(n)}\), called the pseudorange error. We can compute the corrected pseudorange rate measurement \(\dot{\rho}_{c_{k}}^{(n)}\) by removing the satellite clock drift, i.e., the derivative of the satellite clock bias, as shown below:
\[\dot{\rho}_{c_{k}}^{(n)}=\dot{r}_{k}^{(n)}+\delta f_{u_{k}}+\dot{\varepsilon}_{ k}^{(n)} \tag{2}\]
where \(\dot{r}_{k}^{(n)}\) represents the geometric range rate from the \(n^{th}\) satellite to the user. \(\delta f_{u_{k}}\) represents the clock drift of the user device. \(\dot{\varepsilon}_{k}^{(n)}\) denotes the rate of change of the pseudorange error.
### _WLS-based PVT Solution_
At the \(k^{th}\) epoch, an Android phone's position \((x_{k},y_{k},z_{k})\), velocity \((v_{x_{k}},v_{y_{k}},v_{z_{k}})\), clock offset \(\delta t_{u_{k}}\), and clock drift \(\delta f_{u_{k}}\) are unknowns to be estimated. We can estimate all of them simultaneously using the WLS algorithm. Let \(\mathbf{X}_{k}=\left[x_{k},v_{x_{k}},y_{k},v_{y_{k}},z_{k},v_{z_{k}},\delta t _{u_{k}},\delta f_{u_{k}}\right]^{T}\). The location and velocity of the \(n^{th}\) satellite are denoted by \(\mathbf{x}_{k}^{(n)}=[x_{k}^{(n)},y_{k}^{(n)},z_{k}^{(n)}]^{T}\) and \(\mathbf{v}_{k}^{(n)}=[v_{x_{k}}^{(n)},v_{y_{k}}^{(n)},v_{z_{k}}^{(n)}]^{T}\) respectively, which can be derived from ephemeris data. If the Android phone receives signals transmitted by \(M\) satellites, \(2M\) measurements like (1) and (2) can be collected. And if we know an approximation of the phone's state, i.e., \(\tilde{\mathbf{X}}_{k}=[\tilde{x}_{k},\tilde{v}_{x_{k}},\tilde{y}_{k},\tilde{ v}_{y_{k}},\tilde{z}_{k},\tilde{v}_{z_{k}},\delta\tilde{t}_{u_{k}},\delta \tilde{f}_{u_{k}}]^{T}\), the PVT solution can be found by solving the following linear equation system [4]:
\[\mathbf{G}_{k}\left(\mathbf{X}_{k}-\tilde{\mathbf{X}}_{k}\right)=\mathbf{b}_{k} \tag{3}\]
where
\[\mathbf{G}_{k}=\begin{bmatrix}\tilde{a}_{x_{k}}^{(1)}&0&\tilde{a}_{y_{k}}^{(1 )}&0&\tilde{a}_{z_{k}}^{(1)}&0&1&0\\ 0&\tilde{a}_{x_{k}}^{(1)}&0&\tilde{a}_{y_{k}}^{(1)}&0&\tilde{a}_{z_{k}}^{(1)} &0&1\\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots\\ \tilde{a}_{x_{k}}^{(M)}&0&\tilde{a}_{y_{k}}^{(M)}&0&\tilde{a}_{z_{k}}^{(M)}&0 &1&0\\ 0&\tilde{a}_{x_{k}}^{(M)}&0&\tilde{a}_{y_{k}}^{(M)}&0&\tilde{a}_{z_{k}}^{(M)} &0&1\end{bmatrix}\]
\[\tilde{a}_{x_{k}}^{(n)}=\frac{\tilde{x}_{k}-x_{k}^{(n)}}{\tilde{r}_{k}^{(n)} },\tilde{a}_{y_{k}}^{(n)}=\frac{\tilde{y}_{k}-y_{k}^{(n)}}{\tilde{r}_{k}^{(n)} },\tilde{a}_{z_{k}}^{(n)}=\frac{\tilde{z}_{k}-z_{k}^{(n)}}{\tilde{r}_{k}^{(n)}}\]
\[\tilde{r}_{k}^{(n)}=\sqrt{\left(\tilde{x}_{k}-x_{k}^{(n)}\right)^{2}+\left( \tilde{y}_{k}-y_{k}^{(n)}\right)^{2}+\left(\tilde{z}_{k}-z_{k}^{(n)}\right)^{2}}\]
\[\mathbf{b}_{k}=[\Delta\rho_{c_{k}}^{(1)},\Delta\dot{\rho}_{c_{k}}^{(1)}, \Delta\rho_{c_{k}}^{(2)},\Delta\dot{\rho}_{c_{k}}^{(2)},\cdots,\Delta\rho_{c_ {k}}^{(M)},\Delta\dot{\rho}_{c_{k}}^{(M)}]^{T}\]
\[\Delta\rho_{c_{k}}^{(n)}=\rho_{c_{k}}^{(n)}-\tilde{r}_{k}^{(n)}-\delta\tilde{t} _{u_{k}}\]
\[\Delta\dot{\rho}_{c_{k}}^{(n)}=\dot{\rho}_{c_{k}}^{(n)}-\left(\tilde{\mathbf{v} }_{k}-\mathbf{v}_{k}^{(n)}\right)\cdot\tilde{\mathbf{g}}_{k}^{(n)}-\delta \tilde{f}_{u_{k}}\]
\[\mathbf{\tilde{v}}_{k}=[\tilde{v}_{x_{k}},\tilde{v}_{y_{k}},\tilde{v}_{z_{k}}]^{T}\]
\[\mathbf{\tilde{g}}_{k}^{(n)}=[\tilde{a}_{x_{k}}^{(n)},\tilde{a}_{y_{k}}^{(n)}, \tilde{a}_{z_{k}}^{(n)}]^{T}.\]
To balance the impact of noise on the estimation precision, 1-\(\sigma\) uncertainties of pseudorange and pseudorange rate measurements can be used to weight (3) as follows:
\[\mathbf{W}_{k}\mathbf{G}_{k}\left(\mathbf{X}_{k}-\tilde{\mathbf{X}}_{k}\right) =\mathbf{W}_{k}\mathbf{b}_{k} \tag{4}\]
where \(\mathbf{W}_{k}\) is a diagonal weight matrix with the reciprocals of 1-\(\sigma\) uncertainties of pseudorange and pseudorange rate measurements of different satellites as its main diagonal. The 1-\(\sigma\) uncertainty of \(t_{Tx}\) given by _ReceivedSvTimeUncertaintyNanos_ can represent the 1-\(\sigma\) pseudorange measurement uncertainty. The 1-\(\sigma\) pseudorange rate uncertainty is given by _PseudorangeRateUncertaintyMetersPerSecond_. Then, the WLS solution to (4) can be computed as [4]
\[\mathbf{X}_{k} =\tilde{\mathbf{X}}_{k}+\Delta\mathbf{X}_{k} \tag{5}\]
where \(\Delta\mathbf{X}_{k}=\left(\mathbf{W}_{k}\mathbf{G}_{k}\right)^{+}\mathbf{W}_{k }\mathbf{b}_{k}\) is the displacement from the approximate user state to the actual one. The approximate user state \(\tilde{\mathbf{X}}_{k}\) will be updated with the result of (5), and the computation in (5) will be iterated until the accuracy requirement is satisfied. Note that the approximation of state \(\tilde{\mathbf{X}}_{k}\) can be initialized as all zeros or set as the phone's state at the last epoch.
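The iteration in (5) can be sketched as follows (our own data layout; satellite states are assumed to be in ECEF coordinates with the corrections of Section II-B already applied, and `sigmas` interleaves the pseudorange and rate uncertainties):

```python
import numpy as np

def wls_pvt(X0, sats, rho_c, rho_dot_c, sigmas, n_iter=10, tol=1e-4):
    """Iterative WLS solution of Eqs. (4)-(5) (sketch).

    X0   : initial state [x, vx, y, vy, z, vz, dt_u, df_u]
    sats : list of (position, velocity) arrays per satellite
    """
    X = X0.copy()
    W = np.diag(1.0 / np.asarray(sigmas))
    for _ in range(n_iter):
        G, b = [], []
        p, v = X[[0, 2, 4]], X[[1, 3, 5]]
        for (sp, sv), pr, prr in zip(sats, rho_c, rho_dot_c):
            d = p - sp
            r = np.linalg.norm(d)
            a = d / r                                # unit line-of-sight
            G.append([a[0], 0, a[1], 0, a[2], 0, 1, 0])
            G.append([0, a[0], 0, a[1], 0, a[2], 0, 1])
            b.append(pr - r - X[6])                  # pseudorange residual
            b.append(prr - (v - sv) @ a - X[7])      # rate residual
        G, b = np.asarray(G), np.asarray(b)
        dX = np.linalg.pinv(W @ G) @ (W @ b)         # Eq. (5)
        X = X + dX
        if np.linalg.norm(dX[:6]) < tol:
            break
    return X
```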
## III Modeling Android Phones
### _Process Models of Android Phones_
A suitable process model connects the states of Android phones across time steps. Considering the low dynamics of Android phones, their states can be described by a two-state model. For example, the dynamic model of the phone's location and velocity on the \(x\) axis is shown in Fig. 2, where \(a_{x}\) is the acceleration on the \(x\) axis. Accordingly, the discrete dynamic model can be written as
\[\mathbf{X}_{x_{k}}=\mathbf{A}_{t_{k},t_{k-1}}^{(x)}\mathbf{X}_{x_{k-1}}+\mathbf{W} _{x_{k-1}} \tag{6}\]
where
\[\mathbf{X}_{x_{k}}=[x_{k},v_{x_{k}}]^{T} \tag{7}\] \[\mathbf{A}_{t_{k},t_{k-1}}^{(x)}=\begin{bmatrix}1&T_{s_{k}}\\ 0&1\end{bmatrix}\]
\[\mathbf{Q}_{x_{k-1}}=\mathrm{E}\left(\mathbf{W}_{x_{k-1}}\mathbf{W}_{x_{k-1}}^{T}\right)=\begin{bmatrix}\frac{1}{3}S_{v_{x}}T_{s_{k}}^{3}&\frac{1}{2}S_{v_{x}}T_{s_{k}}^{2}\\ \frac{1}{2}S_{v_{x}}T_{s_{k}}^{2}&S_{v_{x}}T_{s_{k}}\end{bmatrix}\]
\[S_{v_{x}}=\mathrm{E}(a_{x}^{2}).\]
\(T_{s_{k}}\) is the sampling period of the system at the \(k^{th}\) epoch. \(S_{v_{x}}\) can be estimated as follows:
\[S_{v_{x}}=\mathrm{E}(a_{x}^{2})\approx a_{x}^{2}\approx\left(\frac{\hat{v}_{x _{k-1}}-\hat{v}_{x_{k-2}}}{T_{s_{k-1}}}\right)^{2} \tag{8}\]
where \(\hat{v}_{x_{k-1}}\) and \(\hat{v}_{x_{k-2}}\) are the estimated velocities at the two preceding epochs. The same dynamic models can be established for the motions on the \(y\) and \(z\) axes.
The dynamic model for the hardware clock of Android phones can be represented by Fig. 3, where \(e_{t}\) and \(e_{f}\) are
the clock offset noise and the clock drift noise. Accordingly, the discrete dynamic model for the clock states of Android phones can be written as
\[\mathbf{X}_{t_{k}}=\mathbf{A}_{t_{k},t_{k-1}}^{(t)}\mathbf{X}_{t_{k-1}}+\mathbf{W }_{t_{k-1}} \tag{9}\]
where
\[\mathbf{X}_{t}=\left[\delta t_{u},\delta f_{u}\right]^{T}\] \[\mathbf{A}_{t_{k},t_{k-1}}^{(t)}=\begin{bmatrix}1&T_{s_{k}}\\ 0&1\end{bmatrix} \tag{10}\] \[\mathbf{Q}_{t_{k-1}}=\mathrm{E}\left(\mathbf{W}_{t_{k-1}}\mathbf{ W}_{t_{k-1}}^{T}\right)=\begin{bmatrix}S_{t}T_{s_{k}}+\frac{1}{3}S_{f}T_{s_{k}}^{ 3}&\frac{1}{2}S_{f}T_{s_{k}}^{2}\\ \frac{1}{2}S_{f}T_{s_{k}}^{2}&S_{f}T_{s_{k}}\end{bmatrix}\] \[S_{t}=\mathrm{E}(e_{t}^{2}),\ S_{f}=\mathrm{E}(e_{f}^{2}).\]
According to the clock model illustrated in Fig. 3, \(S_{t}\) and \(S_{f}\) can be simply estimated as follows:
\[S_{t}=\mathrm{E}(e_{t}^{2})\approx e_{t}^{2}\approx\left(\frac{\delta\hat{t} _{u_{k-1}}-\delta\hat{t}_{u_{k-2}}}{T_{s_{k-1}}}-\delta\hat{f}_{u_{k-1}}\right) ^{2} \tag{11}\]
\[S_{f}=\mathrm{E}(e_{f}^{2})\approx e_{f}^{2}\approx\left(\frac{\delta\hat{f} _{u_{k-1}}-\delta\hat{f}_{u_{k-2}}}{T_{s_{k-1}}}\right)^{2} \tag{12}\]
where \(\delta\hat{t}_{u_{k-1}}\), \(\delta\hat{t}_{u_{k-2}}\), \(\delta\hat{f}_{u_{k-1}}\), and \(\delta\hat{f}_{u_{k-2}}\) are the previously estimated clock biases and clock drifts.
According to (6) and (9), the joint process model of an Android phone can be written as
\[\mathbf{X}_{k}=\mathbf{A}_{k,k-1}\mathbf{X}_{k-1}+\mathbf{W}_{k-1} \tag{13}\]
where
\[\mathbf{A}_{k,k-1}=\begin{bmatrix}\mathbf{A}_{t_{k},t_{k-1}}^{(x)}&0&0&0\\ 0&\mathbf{A}_{t_{k},t_{k-1}}^{(y)}&0&0\\ 0&0&\mathbf{A}_{t_{k},t_{k-1}}^{(z)}&0\\ 0&0&0&\mathbf{A}_{t_{k},t_{k-1}}^{(t)}\end{bmatrix} \tag{14}\]
\[\mathbf{Q}_{k-1}=\mathrm{E}\left(\mathbf{W}_{k-1}\mathbf{W}_{k-1} ^{T}\right)\] \[=\begin{bmatrix}\mathbf{Q}_{x_{k-1}}&0&0&0\\ 0&\mathbf{Q}_{y_{k-1}}&0&0\\ 0&0&\mathbf{Q}_{z_{k-1}}&0\\ 0&0&0&\mathbf{Q}_{t_{k-1}}\end{bmatrix}.\]
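As a sketch, the joint matrices in (13) and (14) can be assembled like this (NumPy; state ordering as in Section II-C):

```python
import numpy as np

def process_matrices(Ts, Svx, Svy, Svz, St, Sf):
    """Build A_{k,k-1} and Q_{k-1} of Eqs. (13)-(14) (sketch)."""
    A2 = np.array([[1.0, Ts], [0.0, 1.0]])
    A = np.kron(np.eye(4), A2)            # block-diagonal over x, y, z, clock

    def q_pos(S):                         # position/velocity block
        return S * np.array([[Ts**3 / 3, Ts**2 / 2],
                             [Ts**2 / 2, Ts]])

    q_clk = np.array([[St * Ts + Sf * Ts**3 / 3, Sf * Ts**2 / 2],
                      [Sf * Ts**2 / 2, Sf * Ts]])
    Q = np.zeros((8, 8))
    for i, q in enumerate([q_pos(Svx), q_pos(Svy), q_pos(Svz), q_clk]):
        Q[2 * i:2 * i + 2, 2 * i:2 * i + 2] = q
    return A, Q
```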
### _Measurement Models of Android Phones_
For Android phones in low dynamics, according to (1), (2), and (3), we can get the joint pseudorange and pseudorange rate measurement equations as follows:
\[\mathbf{b}_{k}=\mathbf{C}_{k}(\mathbf{X}_{k}-\tilde{\mathbf{X}}_{k})+ \mathbf{E}_{k} \tag{15}\]
where
\[\mathbf{C}_{k}=\mathbf{G}_{k}\]
\[\mathbf{E}_{k}=[\varepsilon_{k}^{(1)},\dot{\varepsilon}_{k}^{(1)},\varepsilon_{k}^{(2)},\dot{\varepsilon}_{k}^{(2)},\cdots,\varepsilon_{k}^{(M)},\dot{\varepsilon}_{k}^{(M)}]^{T}.\]
## IV Filtering and Smoothing Noisy Android Raw GNSS Measurements
### _PVT Solution Based on Moving Horizon Estimation_
MHE estimates the current system state from a moving window of data. Let \(N+1\) denote the size of the moving window. Then, we need to determine the state at the \(k^{th}\) epoch \(\mathbf{X}_{k}\) with the measurements from the \(k-N^{th}\) epoch to the \(k^{th}\) epoch \(\left[\mathbf{b}_{k-N},\mathbf{b}_{k-N+1},\cdots,\mathbf{b}_{k}\right]^{T}\). The state-transition matrix \(\mathbf{A}_{k,k-1}\) defined by (7), (10) and (14) is non-singular. Thus, according to (13), the state at the \(k-1^{th}\) epoch can be derived from the state at the \(k^{th}\) epoch if the process noise \(\mathbf{W}_{k-1}\) is ignored, i.e.,
\[\mathbf{X}_{k-1}=\mathbf{A}_{k,k-1}^{-1}\mathbf{X}_{k}. \tag{16}\]
If the measurement equation at the \(k^{th}\) epoch is linearized at its approximation \(\tilde{\mathbf{X}}_{k}\), the measurement equation at the \(k-1^{th}\) epoch can be linearized at \(\tilde{\mathbf{X}}_{k-1}\) that can be derived as follows:
\[\tilde{\mathbf{X}}_{k-1}=\mathbf{A}_{k,k-1}^{-1}\tilde{\mathbf{X}}_{k}. \tag{17}\]
By recursively substituting (16) and (17) into (15) at all \(N+1\) epochs and ignoring the measurement noise, we can get the following measurement equation system:
\[\mathbf{Y}_{k,N}=\mathbf{M}_{N}(\mathbf{X}_{k}-\tilde{\mathbf{X}}_{k}) \tag{18}\]
where
\[\mathbf{Y}_{k,N}=\left[\mathbf{b}_{k}^{T},\mathbf{b}_{k-1}^{T}, \cdots,\mathbf{b}_{k-N}^{T}\right]^{T}\]
\[\mathbf{M}_{N}=[(\mathbf{C}_{k})^{T},(\mathbf{C}_{k-1}\mathbf{A}_{k,k-1}^{-1}) ^{T},\cdots,\]
\[(\mathbf{C}_{k-N}\mathbf{A}_{k-N+1,k-N}^{-1}\cdots\mathbf{A}_{k-1,k-2}^{-1} \mathbf{A}_{k,k-1}^{-1})^{T}]^{T}.\]
We can solve (18) using the LS method and get
\[\hat{\mathbf{X}}_{k}=\mathbf{M}_{N}^{+}\mathbf{Y}_{k,N}+\tilde{\mathbf{X}}_{k}. \tag{19}\]
The approximate state \(\tilde{\mathbf{X}}_{k}\) will be updated with the estimate \(\hat{\mathbf{X}}_{k}\) for the iterative computation from (16) to (19) until \(\hat{\mathbf{X}}_{k}\) converges. As in the WLS algorithm, we can also weight the measurement equations (18).
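A compact sketch of one MHE update is given below (our own interface: `A_list` and `C_list` hold the window's transition and measurement matrices, and `b_fun` re-evaluates the stacked residual vector \(\mathbf{Y}_{k,N}\) at the current linearization point):

```python
import numpy as np

def mhe_step(X_tilde, A_list, C_list, b_fun, n_iter=5):
    """One MHE update over a window of N+1 epochs (sketch, Eqs. (16)-(19)).

    A_list : [A_{k,k-1}, A_{k-1,k-2}, ..., A_{k-N+1,k-N}]
    C_list : [C_k, C_{k-1}, ..., C_{k-N}]
    """
    X = X_tilde.copy()
    for _ in range(n_iter):
        M_rows, back = [], np.eye(X.size)
        for i, C in enumerate(C_list):
            M_rows.append(C @ back)           # C_{k-i} A^{-1} ... A^{-1}
            if i < len(A_list):               # propagate one epoch backward
                back = np.linalg.inv(A_list[i]) @ back
        M = np.vstack(M_rows)                 # M_N of Eq. (18)
        Y = b_fun(X)                          # Y_{k,N} at current X_tilde
        X = X + np.linalg.pinv(M) @ Y         # Eq. (19)
    return X
```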
Fig. 3: Dynamic model for hardware clock of Android phones
Fig. 2: Dynamic model for Android phones
### _PVT Solution Based on Extended Kalman Filter_
#### IV-B1 General Algorithm
EKF recursively estimates the current state based on all the previous data, which is illustrated by the following equations:
\[\hat{\mathbf{X}}_{k}^{-}=\mathbf{A}_{k,k-1}\hat{\mathbf{X}}_{k-1}\] \[\mathbf{P}_{k}^{-}=\mathbf{A}_{k,k-1}\mathbf{P}_{k-1}\mathbf{A}_{ k,k-1}^{T}+\mathbf{Q}_{k-1}\] \[\mathbf{K}_{k}=\mathbf{P}_{k}^{-}\mathbf{C}_{k}^{T}\left(\mathbf{ C}_{k}\mathbf{P}_{k}^{-}\mathbf{C}_{k}^{T}+\mathbf{R}_{k}\right)^{-1}\] \[\hat{\mathbf{X}}_{k}=\hat{\mathbf{X}}_{k}^{-}+\mathbf{K}_{k} \mathbf{b}_{k}\] \[\mathbf{P}_{k}=\left(\mathbf{I}-\mathbf{K}_{k}\mathbf{C}_{k} \right)\mathbf{P}_{k}^{-}\]
where \(\hat{\mathbf{X}}_{k-1}\) and \(\hat{\mathbf{X}}_{k}\) represent the posterior state estimates. \(\mathbf{P}_{k-1}\) and \(\mathbf{P}_{k}\) are the covariance matrices of the posterior state estimates. \(\mathbf{P}_{k}^{-}\) denotes the covariance matrix of the prior estimate \(\hat{\mathbf{X}}_{k}^{-}\). \(\mathbf{K}_{k}\) represents the Kalman gain. Note that the approximate state \(\tilde{\mathbf{X}}_{k}\) is replaced by the prior state estimate \(\hat{\mathbf{X}}_{k}^{-}\) to calculate \(\mathbf{b}_{k}\). \(\mathbf{R}_{k}\) is the covariance matrix of the measurement noise.
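One predict/update cycle then reads as follows (a sketch with our own interface; the priors are returned as well because the RTS smoother in Section IV-C needs them):

```python
import numpy as np

def ekf_step(X_post, P_post, A, Q, C_fun, b_fun, R):
    """One EKF predict/update cycle (sketch; C_fun/b_fun evaluate the
    linearized measurement matrix and residual vector at the prior state)."""
    # Predict
    X_prior = A @ X_post
    P_prior = A @ P_post @ A.T + Q
    # Update
    C = C_fun(X_prior)
    b = b_fun(X_prior)                        # residuals w.r.t. the prior
    K = P_prior @ C.T @ np.linalg.inv(C @ P_prior @ C.T + R)
    X_new = X_prior + K @ b
    P_new = (np.eye(len(X_new)) - K @ C) @ P_prior
    return X_new, P_new, X_prior, P_prior
```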
#### IV-B2 Determination of Covariance Matrices \(\mathbf{Q}_{k-1}\) and \(\mathbf{R}_{k}\)
The computation of \(\mathbf{Q}_{k-1}\) has been introduced in detail in Section III. Note that (8), (11), and (12) show that the state estimates at the previous two steps are needed to determine the covariance matrix at the current epoch. These preceding states can be initially estimated with the WLS solutions and then gradually replaced with the EKF-based solutions.
Assume that the pseudorange noise and pseudorange rate noise are unbiased and uncorrelated with each other and that the measurement noise is uncorrelated across satellites. Then, \(\mathbf{R}_{k}\) is calculated as follows:
\[\mathbf{R}_{k}=\mathrm{E}\left(\mathbf{E}_{k}\mathbf{E}_{k}^{T}\right)=\begin{bmatrix}{\sigma_{\rho_{k}}^{(1)}}^{2}&0&0&\cdots&0&0\\ 0&{\sigma_{\dot{\rho}_{k}}^{(1)}}^{2}&0&\cdots&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&0&\cdots&{\sigma_{\rho_{k}}^{(M)}}^{2}&0\\ 0&0&0&\cdots&0&{\sigma_{\dot{\rho}_{k}}^{(M)}}^{2}\end{bmatrix}\]
where
\[{\sigma_{\rho_{k}}^{(n)}}^{2}=\mathrm{E}\left({\varepsilon_{k}^{(n)}}^{2}\right)=ReceivedSvTimeUncertaintyNanos^{2}\] \[{\sigma_{\dot{\rho}_{k}}^{(n)}}^{2}=\mathrm{E}\left({\dot{\varepsilon}_{k}^{(n)}}^{2}\right)=PseudorangeRateUncertaintyMetersPerSecond^{2}.\]
### _PVT Solution Based on Rauch-Tung-Striebel Smoother_
RTS smoother is a backward EKF, which starts from the last-epoch state estimated by EKF and smooths the state estimation backward. Thus, before using it, we should have obtained \(\hat{\mathbf{X}}_{k}^{-}\), \(\mathbf{P}_{k}^{-}\), \(\hat{\mathbf{X}}_{k}\), and \(\mathbf{P}_{k}\) using the forward EKF. Let
Fig. 4: Finite state machine for WLS
Fig. 5: Finite state machine for MHE
Fig. 6: Finite state machine for EKF
Fig. 7: Finite state machine for RTS smoother
\(\hat{\mathbf{X}}_{k}^{S}\) and \(\mathbf{P}_{k}^{S}\) represent the smoothed state estimate and the corresponding covariance matrix. The recursive formulation of the RTS smoother is shown below:
\[\hat{\mathbf{X}}_{k}^{S}=\hat{\mathbf{X}}_{k}+\mathbf{S}_{k}\left( \hat{\mathbf{X}}_{k+1}^{S}-\hat{\mathbf{X}}_{k+1}^{-}\right)\] \[\mathbf{P}_{k}^{S}=\mathbf{P}_{k}+\mathbf{S}_{k}\left(\mathbf{P}_ {k+1}^{S}-\mathbf{P}_{k+1}^{-}\right)\mathbf{S}_{k}^{T}\]
where
\[\mathbf{S}_{k}=\mathbf{P}_{k}\mathbf{A}_{k+1,k}^{T}\left(\mathbf{P}_{k+1}^{-}\right)^{-1}.\]
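Given the stored EKF priors and posteriors, the backward pass can be sketched as:

```python
import numpy as np

def rts_smooth(X_post, P_post, X_prior, P_prior, A_list):
    """Backward RTS pass over EKF results (sketch).

    X_post/P_post  : posterior estimates for epochs 0..K
    X_prior/P_prior: prior estimates for epochs 0..K
    A_list[k]      : transition matrix from epoch k to k+1
    """
    K = len(X_post) - 1
    Xs, Ps = [None] * (K + 1), [None] * (K + 1)
    Xs[K], Ps[K] = X_post[K], P_post[K]      # start from the last EKF state
    for k in range(K - 1, -1, -1):
        S = P_post[k] @ A_list[k].T @ np.linalg.inv(P_prior[k + 1])
        Xs[k] = X_post[k] + S @ (Xs[k + 1] - X_prior[k + 1])
        Ps[k] = P_post[k] + S @ (Ps[k + 1] - P_prior[k + 1]) @ S.T
    return Xs, Ps
```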
## V Practical Implementation for Discontinuous Data
We detect three kinds of discontinuity in Android raw GNSS measurements. The first is localization failure due to fewer than four visible satellites, which we call satellite discontinuity. The second is clock discontinuity of Android phones, i.e., the timestamps of two adjacent measurements are discontinuous. In this study, two adjacent measurements are considered discontinuous if their time difference exceeds 10 seconds. The last is pseudorange discontinuity, meaning the pseudorange change between two consecutive epochs is larger than an expected value (such as 50 km), which may be caused by signal blocking, multipath, duty cycles, etc. We design finite state machines (FSMs) to handle such discontinuities, as shown in Fig. 4-Fig. 7.
### _Handling Data Discontinuity for the WLS Algorithm_
Only the satellite discontinuity influences the WLS algorithm, which is illustrated by its FSM containing two states, i.e., "Stop" and "Run," shown in Fig. 4. "counter1" stores the number of measurements which are collected from enough satellites. Once the satellite discontinuity happens, "counter1" will be reset to zero, and the WLS algorithm will stop.
### _Handling Data Discontinuity for MHE_
The satellite discontinuity rarely happens in MHE because MHE combines a window of data, which generally guarantees enough visible satellites for PVT computation. Thus, we only consider the clock discontinuity and the pseudorange discontinuity for it. Combining two measurements across such a discontinuity will lead to large estimation errors. As shown in Fig. 5, we use "counter2" to store how many measurements are continuous in terms of both time and pseudoranges. The "Warm up" state means that MHE will run with fewer data than its preset window size. If either kind of discontinuity is detected, "counter2" will be reset to one, and MHE will be replaced with WLS.
### _Handling Data Discontinuity for EKF_
All three kinds of discontinuity will affect EKF. The FSM of EKF is shown in Fig. 6, which involves four states, i.e., "Stop," "Warm up," "Run," and "Hold." "flag1" indicates whether the clock discontinuity or pseudorange discontinuity takes place at the moment. A non-zero "counter1" will be set to 1 once "flag1" is true, i.e., the current measurement is discontinuous in clock or pseudoranges but contains enough visible satellites for WLS-based state estimation. "counter0" counts how many measurements with satellite discontinuity have been accumulated. If satellite discontinuity happens, EKF will run in the "Hold" state and infer the phone's state without any adjustment until counter0 is larger than "Th".
Fig. 8: Evaluation of static scenario
Fig. 9: Evaluation of dynamic scenario
"Th" represents the number threshold of consecutive satellite-discontinuity data and is set to 10.
### _Handling Data Discontinuity for RTS Smoother_
RTS smoother estimates the current state based on the current and subsequent states given by EKF. Therefore, we only need to check whether the current EKF-based state estimation is empty and whether the current and the next EKF-based state estimations are continuous. As shown in Fig. 7, "counter3" counts how many consecutive non-empty states have been given by EKF backward to the current epoch. If the current EKF-based state estimation is empty, "counter3" will be set back to 0. "flag2" indicates whether the clock discontinuity or pseudorange discontinuity takes place at the next epoch.
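As an illustration, the transition logic of the EKF state machine could be coded as below. This is our own reading of Fig. 6 (in particular, the two-epoch warm-up follows from (8), (11), and (12), which need the two preceding state estimates); the actual FSM may differ in details:

```python
from enum import Enum, auto

class EkfMode(Enum):
    STOP = auto()
    WARM_UP = auto()
    RUN = auto()
    HOLD = auto()

def ekf_fsm_step(mode, enough_sats, clock_or_pr_jump, c0, c1, th=10):
    """One transition of the EKF FSM (sketch; c0/c1 mirror counter0/counter1)."""
    if not enough_sats:
        c0 += 1                        # count satellite-discontinuity epochs
        # Coast on the process model while the outage is short
        mode = EkfMode.HOLD if c0 <= th else EkfMode.STOP
    else:
        c0 = 0
        if clock_or_pr_jump:
            c1 = 1                     # restart from a WLS fix
            mode = EkfMode.WARM_UP
        else:
            c1 += 1
            # Q_k needs the state estimates of the two preceding epochs
            mode = EkfMode.RUN if c1 >= 3 else EkfMode.WARM_UP
    return mode, c0, c1
```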
## VI Experiments
We evaluate the performance of the aforementioned positioning algorithms in static and dynamic scenarios. The static data were collected with a HUAWEI Mate 10 Pro on the roof of the School of Art, Design and Media at Nanyang Technological University, with ground truth provided by a u-blox receiver. For dynamic scenes, we use Google public datasets collected in Mountain View with a Pixel 4, with ground truth provided by the NovAtel SPAN system [17]. The positioning traces and errors are illustrated in Figs. 8 and 9. We score each method using the mean of the \(50^{th}\) and \(95^{th}\) percentiles of horizontal errors computed by Vincenty's formulae, which is the evaluation metric of the Google Smartphone Decimeter Challenge (GSDC); the scores are summarized in Table I.
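Once per-epoch horizontal errors are available, the score itself is simple to compute (sketch; Vincenty's formulae for the horizontal error are omitted):

```python
import numpy as np

def gsdc_score(horizontal_errors_m):
    """GSDC-style score: mean of the 50th and 95th error percentiles."""
    e = np.asarray(horizontal_errors_m)
    return 0.5 * (np.percentile(e, 50) + np.percentile(e, 95))
```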
As shown in Fig. 8a, Fig. 8b, Fig. 9a, and Fig. 9b, MHE, EKF, and RTS smoother significantly mitigate noise and obtain much smoother positioning results compared with the baseline WLS algorithm. Fig. 8c, Fig. 8d, Fig. 9c, and Fig. 9d display the empirical cumulative distribution function (ECDF) of horizontal and vertical positioning errors, demonstrating that MHE, EKF, and RTS smoother improve the positioning performance substantially. As indicated in Table I, RTS smoother achieves the best horizontal score and reduces the horizontal localization error by \(76.4\%\) and \(46.5\%\) in static and dynamic scenes, respectively, compared to the WLS algorithm.
## VII Conclusion
In this work, we detail how to compute locations from Android raw GNSS measurements and implement MHE, EKF, and the RTS smoother to handle the large noise present in these measurements. In addition, we devise dedicated finite-state machines for these localization algorithms to address the discontinuity in Android data. The experimental results show that the filtering and smoothing methods can significantly relieve the adverse impact of noise on the localization results of Android smartphones.
However, the positioning results in the dynamic scene indicate that these methods cannot eliminate the positioning bias (around 10 meters); removing this bias is the problem we aim to solve next to further improve the pseudorange localization performance of Android smartphones.
|
2310.20667 | Coherent manipulation of nuclear spins in the strong driving regime | Spin-based quantum information processing makes extensive use of spin-state
manipulation. This ranges from dynamical decoupling of nuclear spins in quantum
sensing experiments to applying logical gates on qubits in a quantum processor.
Here we present an antenna for strong driving in quantum sensing experiments
and theoretically address challenges of the strong driving regime. First, we
designed and implemented a micron-scale planar spiral RF antenna capable of
delivering intense fields to a sample. The planar antenna is tailored for
quantum sensing experiments using the diamond's nitrogen-vacancy (NV) center
and should be applicable to other solid-state defects. The antenna has a broad
bandwidth of 22 MHz, is compatible with scanning probes, and is suitable for
cryogenic and ultrahigh vacuum conditions. We measure the magnetic field
induced by the antenna and estimate a field-to-current ratio of $113\pm 16$
G/A, representing a 6x increase in efficiency compared to the state-of-the-art.
We demonstrate the antenna by driving Rabi oscillations in $^1$H spins of an
organic sample on the diamond surface and measure $^1$H Rabi frequencies of
over 500 kHz, i.e., $\mathrm{\pi}$-pulses shorter than 1 $\mu s$ - faster than
previously reported in NV-based nuclear magnetic resonance (NMR). Finally, we
discuss the implications of driving spins with a field tilted from the
transverse plane in a regime where the driving amplitude is comparable to the
spin-state splitting, such that the rotating wave approximation does not
describe the dynamics well. We present a recipe to optimize pulse fidelity in
this regime based on a phase and offset-shifted sine drive, that may be
optimized without numerical optimization procedures or precise modeling of the
experiment. We consider this approach in a range of driving amplitudes and show
that it is particularly efficient in the case of a tilted driving field. | Dan Yudilevich, Alon Salhov, Ido Schaefer, Konstantin Herb, Alex Retzker, Amit Finkler | 2023-10-31T17:31:27Z | http://arxiv.org/abs/2310.20667v1 | # Coherent manipulation of nuclear spins in the strong driving regime
###### Abstract
Spin-based quantum information processing makes extensive use of spin-state manipulation. This ranges from dynamical decoupling of nuclear spins in quantum sensing experiments to applying logical gates on qubits in a quantum processor. Fast manipulation of spin states is highly desirable for accelerating experiments, enhancing sensitivity, and applying elaborate pulse sequences. Strong driving using intense radio-frequency (RF) fields can, therefore, facilitate fast manipulation and enable broadband excitation of spin species.
In this work, we present an antenna for strong driving in quantum sensing experiments and theoretically address challenges of the strong driving regime. First, we designed and implemented a micron-scale planar spiral RF antenna capable of delivering intense fields to a sample. The planar antenna is tailored for quantum sensing experiments using the diamond's nitrogen-vacancy (NV) center and should be applicable to other solid-state defects. The antenna has a broad bandwidth of \(22\,\mathrm{MHz}\), is compatible with scanning probes, and is suitable for cryogenic and ultrahigh vacuum conditions. We measure the magnetic field induced by the antenna and estimate a field-to-current ratio of \(113\pm 16\,\mathrm{G/A}\), representing a six-fold increase in efficiency compared to the state-of-the-art, crucial for cryogenic experiments. We demonstrate the antenna by driving Rabi oscillations in \({}^{1}\mathrm{H}\) spins of an organic sample on the diamond surface and measure \({}^{1}\mathrm{H}\) Rabi frequencies of over \(500\,\mathrm{kHz}\), i.e., \(\pi\)-pulses shorter than \(1\,\upmu\mathrm{s}\) - faster than previously reported in NV-based nuclear magnetic resonance (NMR).
Finally, we discuss the implications of driving spins with a field tilted from the transverse plane in a regime where the driving amplitude is comparable to the spin-state splitting, such that the rotating wave approximation does not describe the dynamics well. We present a simple recipe to optimize pulse fidelity in this regime based on a phase and offset-shifted sine drive, which may be optimized _in situ_ without numerical optimization procedures or precise modeling of the experiment. We consider this approach in a range of driving amplitudes and show that it is particularly efficient in the case of a tilted driving field.
The results presented here constitute a foundation for implementing fast nuclear spin control in various systems.
## I Introduction
Quantum sensing with solid-state spin sensors, such as the nitrogen-vacancy (NV) center in diamond, frequently involves manipulating nuclear spin states. Nuclear spins may be part of the sample of interest, as in the case of nanoscale nuclear magnetic resonance (NMR) spectroscopy, which relies on sequences of radio-frequency (RF) pulses applied to the sample to recover information on its chemical structure [1; 2; 3; 4; 5]. Solid-state nuclear spins around the sensor are also utilized as ancilla qubits that store the quantum state of a sensor to retrieve it repeatedly [3; 6] or to prolong the sensing time [7].
Most experiments have relied so far on antennas that induce weak RF driving fields, with a standard \(\pi\)-pulse lasting a few tens of microseconds [1; 6; 8]. These lengthy pulses imply longer measurement times and, thus, reduced sensitivity [9]. They may also impede the application of elaborate pulse sequences, as the sensing time in NV-based NMR is limited by the spin relaxation time (\(T_{1}\)) of the NV center [10] or of a nuclear memory [3].
Fast manipulation of nuclear spins by strong RF driving fields can better utilize the limited sensing time of NV center sensors, generate broadband excitation of the nuclear spin resonance, and enable novel sensing protocols [11]. Previous works demonstrated strong driving for the NV center electron spin at a rate of \(\sim 1\,\mathrm{GHz}\)[12], and \({}^{13}\mathrm{C}\) spins in diamond at \(\sim 70\,\mathrm{kHz}\)[13]. The highest reported driving rates for protons in NV-based NMR are \(50-80\,\mathrm{kHz}\)[1; 3].
The system described below enables spin manipulation in a regime where the driving strength \(\Omega_{d}\) is close to the energy splitting (i.e., \(\Omega_{d}\lesssim\omega_{0}\)). Alongside the experimental challenges of producing strong driving fields, working in this regime poses a theoretical control challenge. Most experimental setups deliver linearly polarized fields often tilted away from the transverse plane. For weak driving strengths (\(\Omega_{d}\ll\omega_{0}\)), the dynamics are accurately approximated by sinusoidal state transitions, known as Rabi oscillations, under the rotating-wave approximation (RWA). In the regime where \(\Omega_{d}\lesssim\omega_{0}\), however, the deviations from an ideal drive (specifically, a linearly polarized drive with a longitudinal component) will markedly alter the dynamics [12]. Without proper adaptations, this "breakdown" of the RWA results in the deterioration of pulse fidelity. It is thus crucial to design the signals so as to optimize an operation's fidelity in the strong driving regime.
The issue of strong driving has attracted interest, especially for quantum information processing, where strong driving can accelerate operations and increase the speed of quantum processors [14; 15]. Among others, optimal control strategies have been employed for optimizing quantum control in the strong driving regime. In particular, the concept of time-optimal control fields [16] has been introduced to identify the shortest possible signals to control a qubit state [17; 18]. Bang-bang control sequences have been shown to be the quickest form of such control; bang-bang driving at rates exceeding \(\omega_{0}\) has been demonstrated on solid-state qubits [19], and other waveforms derived from optimal control theory have been demonstrated for solid-state qubit control [20]. Optimal control approaches require a precise description of the driving field and the qubit, e.g., the relative orientation and magnitude, according to which the control signal is calculated. However, errors in the estimated parameters may degrade the ultimate performance relative to simulations [20]. Also, _in situ_ optimization is difficult as it requires sampling a complex parameter space.
In this work, we design and implement a micrometer-scale planar spiral RF antenna compatible with NV magnetometry and capable of delivering intense RF pulses to a diamond sample. We characterize the antenna's characteristics and performance. Demonstrating the antenna's function by driving proton (\({}^{1}\)H) spins, we observe spin state Rabi oscillations at frequencies surpassing 500 kHz.
We then discuss the unique characteristics of spin-state control by a strong and tilted driving field. We propose a novel approach to optimize the fidelity of control pulses in this regime, which is particularly suitable for driving fields that are noisy or not fully characterized.
## II Experimental Methods
We performed experiments on a home-built room temperature confocal microscope. NV center electron spins were excited by a 520 nm diode laser, and their fluorescence was measured by a single-photon counting module. Low-frequency RF signals (\(\sim\)1 MHz) were irradiated to the sample via our novel, custom-designed spiral antenna (see further in Sec. III), and the NV center electronic spins were controlled by microwave pulses delivered by a wire drawn above the sample.
The diamond sample was a thin, single-crystal [100] diamond membrane (approximately 30 \(\upmu\)m thick) patterned with nanopillar diamond waveguides. Shallow NV centers were created in the diamond by nitrogen ion implantation. For proton sensing, a small drop of microscope immersion oil (Cargille Type LDF) was applied to the diamond's surface with a sterile syringe (see further details in SI-2).
## III Planar Spiral Radio-Frequency Antenna
We designed the RF antenna as a planar spiral. Fig. 1(a) depicts the setup schematically. Compatibility with a typical NV magnetometry apparatus was the fundamental design principle. The diamond is placed directly on the antenna to enhance the magnetic field induced at the sample's position. A small aperture at the center of the spiral allows optical access to NV centers. An additional wire, drawn above the sample, is dedicated to signals in the gigahertz range for manipulating the NV center's electron spin.
The antenna was fabricated on a polyimide flexible printed circuit, suitable for ultra-high vacuum and cryogenic environments. The planar geometry can accommodate a scanning probe, such as an atomic force microscope, which may be used, for example, to carry a sample [21] or create a magnetic field gradient [22].
In our antenna, the inner loop diameter is 600 \(\upmu\)m, at the center of which is a 200 \(\upmu\)m-diameter optical aperture; the sample's area of interest is placed on the aperture (Fig. 2(a)). The inner diameter was minimized, up to the fabrication capabilities, to achieve the strongest field. The spiral consists of two identical layers, separated by a 20 \(\upmu\)m polyimide layer and connected by a via at the center. The number of turns and the trace width of the spiral can be set to optimize its operation; for a larger field-per-current ratio, the number of turns should be increased. However, the bandwidth decreases with the number of turns, and the field-per-power is maximal for a specific number of turns (see further in SI-1). The results presented in this study were measured with a 15-turn spiral and a \(100\,\upmu\)m trace width.

Figure 1: Schematic of the experimental system (not to scale). (a) The RF spiral antenna sits underneath a thin diamond sample with NV centers in nanopillar waveguides. The NV centers are addressed optically through an aperture in the antenna. Microwave signals to the NV center are applied with a thin copper wire drawn above the diamond sample. Immersion oil with \({}^{1}\)H is placed atop the diamond. (b) The static magnetic field \(B_{0}\) is aligned with the NV centerβs axis. The spiral antennaβs field is approximately perpendicular to the diamondβs surface, inducing an RF field component along the NVβs axis (\(B_{z}\)) and a component perpendicular to \(B_{0}\) (\(B_{x}\)).
We terminate the antenna with a \(\sim 50\)\(\Omega\) load that dissipates over 90% of the generated power. By monitoring the voltage on the load, we also determine the current through the antenna. The 3 dB bandwidth of the antenna is approximately 22 MHz, as observed in the transmission spectrum of the system (\(S_{21}\) parameter, Fig. 2(b)). The antenna's bandwidth allows working with bias fields of up to 500 mT (for detecting proton spins). Such bias fields are required when wishing to utilize an ancilla nuclear spin in the diamond as a quantum memory [6; 23]. Additionally, the large bandwidth enables the transmission of pulses shorter than a microsecond without significant distortion.
Fig. 2(c) shows a finite element simulation of the antenna's field distribution. The figure depicts the field along a cross-section of the antenna's center and where the sample sits. The simulation confirms that the expected magnetic field is approximately uniform in magnitude and orientation over the projected sample position. The magnetic field for a 1 A current at the experiment's sample position was estimated to be \(136\pm 1\) G. The field vector was nearly perpendicular to the spiral plane, with a slight tilt of \(1\pm 0.7^{\circ}\).
We characterized the magnetic field vector emitted by the antenna using _in situ_ static magnetic field measurements with the NV center. We swept a direct current through the antenna and measured the Zeeman shift of the NV center's levels around \(B_{0}=0\) (without an additional applied field). From the optically detected magnetic resonance (ODMR) spectra, we extracted the dependence of the field magnitude on the current and the tilt of the applied magnetic field to the NV axis. The result is plotted in Fig. 3. We fit the data to a spin Hamiltonian incorporating strain and a magnetic field tilted away from the NV center axis. Thus, the transitions are not linearly dependent on the magnetic field (see SI-2 for further details on the analysis). From the transitions, we obtain the DC field-to-current ratio of \(B\) / \(I_{\text{spiral}}=113\pm 16\)\(\frac{\text{G}}{\text{A}}\). The field's angle is measured to be tilted from the plane transverse to the NV axis by \(\theta_{d}=36.5\pm 5.8^{\circ}\)(corresponding to a tilt of \(\sim 1.2^{\circ}\) from the normal to the spiral plane). The measured field's magnitude agrees with the finite element simulation. The NV center lies at an angle of \(\sim 54.7^{\circ}\) to the diamond surface, parallel to the spiral plane. Thus, the measured orientation is consistent with our expectation that the planar spiral antenna induces a field normal to its plane.
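For orientation, such fits are commonly based on the standard NV ground-state spin Hamiltonian; we state the textbook model here, noting that the exact parametrization used in SI-2 may differ:

\[\mathcal{H}_{\mathrm{NV}}=DS_{z}^{2}+E\left(S_{x}^{2}-S_{y}^{2}\right)+\gamma_{e}\vec{B}\cdot\vec{S},\]

where \(D\) is the zero-field splitting, \(E\) the strain parameter, and \(\gamma_{e}\) the electron gyromagnetic ratio. A field tilted from the NV axis mixes the spin levels, which is why the transition frequencies above are not linear in the applied current.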
In a sensing experiment, as discussed in the following section, there will usually be an applied quantizing magnetic field (\(\vec{B}_{0}\)) along the NV center's axis (\(\hat{z}\) in Fig. 1(b)). Under the RWA, the transverse component (\(B_{x}\) in Fig. 1(b)) drives the spins and is proportional to the Rabi frequency. From the DC characterization, we estimate it to be \(B_{x}\) / \(I_{\text{spiral}}=B\cos\left(\theta_{d}\right)/I_{\text{spiral}}=92\pm 14\)\(\frac{\text{G}}{\text{A}}\) (the magnitude might be attenuated according to the transmission at the specific frequency, as described in Fig. 2(b)).
Figure 2: Spiral broadband RF antenna. (a) Photo of the spiral antenna. The sample sits on the antenna, and the working region is directly above the aperture. (b) Transmission characteristics of the antenna (\(S_{21}\) parameter). (c) Finite-element simulation of the antennaβs field, focusing on the region of interest. The image shows a cross-section along the dashed line in (a). The color map and contours depict the magnetic field magnitude, and the arrows show the projection of the fieldβs orientation on the XZ plane. The golden polygons depict the cross-section of the spiralβs trace. The markings inside the polygons denote the direction of the simulated current. The sampleβs position in the current experiment is marked by the semitransparent rectangle.
Figure 3: Direct current magnetic field characterization. The NV center level shifts were measured in a series of ODMR spectra with varying currents through the spiral. The points were fit to a model incorporating the magnetic field tilt and a strain field. The pink areas mark the confidence intervals of the fit.
## IV Fast \({}^{1}\)H Rabi Oscillations
We demonstrate the antenna's function by driving Rabi oscillations in a proton spin ensemble of an organic sample on the diamond's surface. As a preliminary experiment, we detect proton nuclear magnetic resonance with an XY8-N dynamical decoupling sequence [24], employing phase randomization to exclude spurious harmonics of \({}^{13}\)C spins [25]. Fig. 4(b) features an XY8-10 trace with a dip at the expected position of the proton Larmor frequency (\(B_{0}\approx 652\) G, \(\omega_{{}^{1}\mathrm{H}}\approx 2.78\) MHz), indicating that the NV senses the proton's oscillating magnetization.
We then employ a correlation spectroscopy sequence with RF pulses [3; 26] to observe proton Rabi oscillations, as depicted in Fig. 4(c). The sequence is based on two XY8-4 dynamical decoupling blocks locked to the proton Larmor frequency, which sense the phase of the proton's oscillation [27; 28]. The correlation delay, i.e., the spacing between the two sensing blocks, was fixed at \(20\,\upmu\)s. RF pulses of varying duration, tuned to the proton Larmor frequency and applied during the correlation delay, drive the nuclear magnetization, inducing a \(\left|\uparrow\right\rangle\leftrightarrow\left|\downarrow\right\rangle\) transition.
The resulting Rabi oscillations are plotted in Fig. 4(d) for several driving powers corresponding to different spiral currents. The oscillations were fitted to a decaying sine function, from which we extracted the driving frequency (\(\Omega_{d}\)). Fig. 4(e) summarizes the observed driving frequencies as a function of spiral currents. The driving frequency is proportional to the driving current, as expected. We achieve a maximal driving frequency of \(530\pm 12\) kHz, ultimately limited by the amplifier's saturation power.
We estimate a driving frequency-to-current ratio of \(\Omega_{d}\) / \(I_{\mathrm{spiral}}=463\pm 3\)\(\frac{\mathrm{kHz}}{\mathrm{A}}\). From this ratio, we estimate the field-to-current ratio of the transverse field at \(2.78\,\mathrm{MHz}\) to be \(B_{1}\) / \(I_{\mathrm{spiral}}=108.8\pm 0.7\)\(\frac{\mathrm{G}}{\mathrm{A}}\); this is in good agreement with the value expected from _in situ_ DC measurement (\(92\pm 14\)\(\frac{\mathrm{G}}{\mathrm{A}}\)) presented previously and the finite-element simulations (\(111.0\pm 0.8\)\(\frac{\mathrm{G}}{\mathrm{A}}\)).
## V Manipulating spins in the strong driving regime
As our spiral antenna can indeed reach the strong driving regime (\(\Omega_{d}\sim\omega_{0}/5\) in the aforementioned experiment), we describe a straightforward approach to generate control signals in the \(\Omega_{d}\lesssim\omega_{0}\) regime for high-fidelity operations. We show that a simple sine signal with an offset may provide sufficient fidelity in this regime by optimizing just one or two parameters.
Our approach is particularly suitable for tilted drive signals, that is, signals with a component along the quantization axis (hereinafter referred to as \(\hat{z}\)). Tilted drives are found in various solid-state spin qubit systems, such as the NV center in diamond as described in the previous section, the SiV defect in diamond [29], defects in SiC [30] and in h-BN [31], as well as superconducting flux qubits [32]. However, to our knowledge, optimizing strong tilted drives has not been discussed in the literature.
In what follows, we motivate our approach analytically using a clear physical picture, illustrate its validity numerically, and compare it to optimal control-derived signals. We argue that offset-sine signals bear benefits over optimal control-derived signals while providing similar and sufficiently high fidelity rates. Our focus is on the optimization of the \(\pi\)-pulse, which is to be reached at \(t_{\pi}\sim\frac{\pi}{\Omega_{d}}\) (a precise definition follows below).
### Resonant offset-sine driving pulses
We consider a two-level system driven by a tilted driving field. The system is described by the following Hamiltonian:
\[\mathcal{H}=\frac{\omega_{0}}{2}\sigma_{z}+\Omega_{d}f\left(t\right)\left( \sigma_{x}+\tan\left(\theta_{d}\right)\sigma_{z}\right) \tag{1}\]
where \(\omega_{0}\) is the energy splitting of the two-level system, \(\Omega_{d}\) is the maximum driving field amplitude, \(\left|f(t)\right|\leq 1\) is the waveform, and \(\theta_{d}\in\left[0,\frac{\pi}{2}\right)\) is the driving field's tilt angle from \(\hat{x}\). Under this definition, the drive vector is not normalized; rather, the field's magnitude depends on the angle \(\theta_{d}\).
In the weak-driving regime, conventional driving pulses are based on resonant sine waveforms, i.e., \(f(t)=\sin\left(\omega_{0}t+\varphi_{d}\right)\). The standard analysis proceeds with the rotating-wave-approximation (RWA) [33], which neglects the \(\hat{z}\) component of the drive (i.e., assuming \(\theta_{d}=0\), see Fig. S2(a) for schematic), as well as the counter-rotating term of the transversal component. The resulting rotating frame Hamiltonian is \(\mathcal{H}_{I}=\frac{\Omega_{d}}{2}(\sin(\varphi_{d})\sigma_{x}-\cos(\varphi_ {d})\sigma_{y})\). In this regime, the only effect of the phase, \(\varphi_{d}\), is to determine the axis of Rabi nutation, and for a \(\pi\) pulse, the effect vanishes.
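To make the RWA step explicit, here is a sketch of the derivation under the sign conventions assumed in this rewrite, taking \(\theta_{d}=0\): transforming to the frame rotating with \(U(t)=e^{i\omega_{0}t\sigma_{z}/2}\) gives

\[\mathcal{H}_{I}=U\mathcal{H}U^{\dagger}+i\dot{U}U^{\dagger}=\Omega_{d}\sin(\omega_{0}t+\varphi_{d})\left[\cos(\omega_{0}t)\,\sigma_{x}-\sin(\omega_{0}t)\,\sigma_{y}\right]\approx\frac{\Omega_{d}}{2}\big{(}\sin(\varphi_{d})\,\sigma_{x}-\cos(\varphi_{d})\,\sigma_{y}\big{)},\]

where the last step drops the terms oscillating at \(2\omega_{0}\), recovering \(\mathcal{H}_{I}\) above.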
In the strong-driving regime, we expect the dynamics to depend on \(\varphi_{d}\) beyond the trivial dependence of the weak-driving regime. This is based on the observation that finite-duration sine waveforms contain a DC component. Namely, the zero-frequency Fourier component, \(\frac{2\Omega_{d}}{\pi}\int_{0}^{\pi/\Omega_{d}}\sin\left(\omega_{0}t+\varphi _{d}\right)\mathrm{dt}\), which depends on \(\varphi_{d}\), is significant for \(\Omega_{d}/\omega_{0}\sim 1\). Thus, in this regime, we anticipate that varying the phase modulates the interplay between the different terms of the Hamiltonian, offering flexibility for pulse fidelity optimization.
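Evaluating this zero-frequency component explicitly (a direct integration; our worked step) makes the phase dependence transparent:

\[\frac{2\Omega_{d}}{\pi}\int_{0}^{\pi/\Omega_{d}}\sin\left(\omega_{0}t+\varphi_{d}\right)\mathrm{dt}=\frac{2\Omega_{d}}{\pi\omega_{0}}\left[\cos\varphi_{d}-\cos\left(\frac{\pi\omega_{0}}{\Omega_{d}}+\varphi_{d}\right)\right],\]

which scales as \(\Omega_{d}/\omega_{0}\) and oscillates with \(\varphi_{d}\), so it is negligible for weak driving but of order unity when \(\Omega_{d}\sim\omega_{0}\).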
We suggest utilizing the phase \(\varphi_{d}\) to mitigate the effects of the counter-rotating term in the regime of \(\Omega_{d}\lesssim\omega_{0}\) and optimize pulse fidelity rates in this regime. This may be supplemented by a DC offset to the drive, serving as another DC component that may be controlled to optimize the pulse. We note that the phase of the driving field was shown to be important in the strong driving regime in NMR already more than five decades ago [34]. More recently, the phase's effect was shown in single solid-state
qubit experiments [35; 36] and in NMR [37]. However, the phase has not been discussed in the context of tilted drives or in combination with a DC offset.
The first-order correction to the rotating frame Hamiltonian \(\mathcal{H}_{I}\) is the Bloch-Siegert shift [38] that acts as an effective DC field along \(\hat{z}\) in the rotating frame [39]. Thus, the DC component of the longitudinal driving field (present for a tilted field, \(\theta_{d}>0\)) may assist in canceling out the effects of the counter-rotating term. Let us now consider the special case of driving at an amplitude of \(\Omega_{d}=\frac{\omega_{0}}{2\tan(\theta_{d})}\). In this case, a constant (DC) waveform equal to \(-1\) yields an ideal driving Hamiltonian \(\mathcal{H}=-\Omega_{d}\sigma_{x}\). These observations motivate us to consider waveforms \(f(t)\) based on the "offset-sine" waveform:
\[f(t)\equiv\epsilon\left(t\right)\left(a+\left(1-\left|a\right|\right)\sin \left(\omega_{0}t+\varphi_{d}\right)\right) \tag{2}\]
where the optimization parameters are \(\left|a\right|\leq 1\) (the DC offset component) and \(\varphi_{d}\) (the phase). For \(a=0\), we obtain a standard sine (symmetric around \(0\)), while for \(\left|a\right|=1\) we get a constant DC drive.
In Eq. 2 we introduced the pulse's envelope function \(0\leq\epsilon\left(t\right)\leq 1\), which is zero at the pulse edges (\(\epsilon\left(t_{0}\right)=\epsilon\left(t_{\text{pulse}}\right)=0\)). For weak driving, a simple rectangle function is often used as the envelope (i.e., rectangular pulse shape). However, as realistic transmission lines always have limited bandwidth, a discontinuous \(\epsilon\left(t\right)\) will result in a distorted signal, and this distortion is significant for strong and short pulses. A smooth envelope function with finite rise and fall times can fit the signal into a prescribed bandwidth [40], and here specifically, we used an error-function pulse envelope [41] (see Eq. S.2 and Fig. S2(b) in the SI for a schematic pulse).
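A minimal sketch of this waveform follows. It is our illustration: the paper's exact envelope (Eq. S.2) is not reproduced here, so the error-function shape below is an assumption.

```python
import numpy as np
from scipy.special import erf

def envelope(t, t_pulse, dt):
    # Smooth error-function rise/fall, approximately zero at both pulse edges;
    # the precise form of the paper's Eq. (S.2) is assumed, not copied.
    rise = 0.5 * (1.0 + erf(4.0 * (t - dt) / dt))
    fall = 0.5 * (1.0 + erf(4.0 * (t_pulse - dt - t) / dt))
    return rise * fall

def offset_sine(t, omega0, phi_d, a, t_pulse, dt):
    # Eq. (2): f(t) = eps(t) * (a + (1 - |a|) * sin(omega0 * t + phi_d))
    return envelope(t, t_pulse, dt) * (a + (1.0 - abs(a)) * np.sin(omega0 * t + phi_d))
```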
### Optimizing control pulses based on the offset-sine waveform
We demonstrate the performance of the offset-sine waveforms by numerically calculating the state evolution of a qubit under such a drive Hamiltonian (Eq. 1 and Eq. 2). We consider as examples driving amplitudes of \(\Omega_{d}=\frac{\omega_{0}}{10},\frac{\omega_{0}}{3},\omega_{0}\). We focus on \(\pi\)-pulses, i.e., flipping the initial state \(\left|\uparrow\right>\) with the goal of maximizing the probability for \(\left|\downarrow\right>\).
As an illustration, we choose parameters inspired by the aforementioned spiral antenna, namely, a drive tilt of \(\theta_{d}=35.3^{\circ}\), and limit the signals to a bandwidth of \(\lesssim 10\omega_{0}\) using an error-function envelope with a rise-time \(\delta t=\frac{\pi}{10\omega_{0}}\). The pulse durations are extrapolated from the weak driving regime and set to be \(t_{\pi}=\frac{\pi}{\Omega_{d}}+2\delta t\), which accounts for the rise and fall times of the signal (for further details, see SI-3).
We numerically calculate the pulse fidelity according to \(\mathcal{F}=\left|\left<\psi\left(t_{\pi}\right)\mid\downarrow\right>\right|^{2}\) under the driving field for each driving amplitude, sampling various values of the phase (\(\varphi_{d}\)) and offset (\(a\)) of the signal. The results are presented in Fig. 5 (top row) in terms of infidelity \(1-\mathcal{F}\) to contrast the results. Fig. 5 (center row) shows the state evolution for various driving signals at the driving amplitude of the corresponding column. Evolutions are shown for various phases at zero offset (\(a=0\), light gray curves), with the zero-offset phase yielding the best (worst) pulse fidelity marked by dashed (dotted) curves. Evolutions under an optimal offset-sine drive are marked by red curves. The optimal offset-sines have the offset and phase corresponding to the coordinates of minimum infidelity in the diagrams of the top row, i.e., the brightest points. The bottom row shows the waveforms corresponding to the different state evolutions in the center row.

Figure 4: Fast Rabi oscillations of \({}^{1}\)H nuclear spins. (a) A diagram of the randomized XY8-N pulse sequence used to sense the \({}^{1}\)H nuclear magnetic resonance. (b) A randomized XY8-10 spectrum featuring a dip related to the \({}^{1}\)H Larmor precession. (c) Diagram of the pulse sequence used to observe \({}^{1}\)H nuclear spin Rabi oscillations. The nuclear spin precession was detected by correlating two XY8-4 dynamical decoupling blocks tuned to the \({}^{1}\)H frequency found in (b). A varying radio frequency pulse tuned to the \({}^{1}\)H frequency during the correlation time drives the \({}^{1}\)H spin state. (d) Rabi oscillations of the \({}^{1}\)H spins for different current amplitudes driven through the spiral antenna. (e) Summary of several Rabi frequencies measured, with a linear dependence on the current through the antenna.
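For readers who wish to reproduce the trends of Fig. 5, the calculation fits in a few lines. The sketch below is our code, not the authors': it uses a rectangular envelope and piecewise-constant propagation, so the numbers will differ slightly from the erf-envelope results in the paper.

```python
import numpy as np
from scipy.linalg import expm

SZ = np.diag([1.0, -1.0])
SX = np.array([[0.0, 1.0], [1.0, 0.0]])

def pi_pulse_fidelity(omega0, omega_d, theta_d, phi_d, a, n_steps=2000):
    # Evolve |up> under Eqs. (1)-(2) and return F = |<down|psi(t_pi)>|^2.
    t_pi = np.pi / omega_d
    dt = t_pi / n_steps
    psi = np.array([1.0, 0.0], dtype=complex)            # |up>
    for k in range(n_steps):
        t = (k + 0.5) * dt                               # midpoint of the step
        f = a + (1.0 - abs(a)) * np.sin(omega0 * t + phi_d)
        H = 0.5 * omega0 * SZ + omega_d * f * (SX + np.tan(theta_d) * SZ)
        psi = expm(-1j * H * dt) @ psi                   # piecewise-constant step
    return abs(psi[1]) ** 2                              # overlap with |down>

# Coarse grid scan over phase and offset, as in the optimization described here
# (slow but adequate for a 2x2 system):
phis = np.linspace(0.0, 2.0 * np.pi, 61)
offsets = np.linspace(-1.0, 1.0, 41)
fid, phi_opt, a_opt = max((pi_pulse_fidelity(1.0, 1.0, np.radians(35.3), p, a), p, a)
                          for p in phis for a in offsets)
```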
The center row of Fig. 5 illustrates how increasing the driving strength \(\Omega_{d}\) from \(\frac{\omega_{0}}{10}\) to \(\omega_{0}\) leads to increasing deviation from the standard sinusoidal evolution characteristic of the RWA. For the stronger drive amplitudes, adjusting the drive phase \(\varphi_{d}\) is crucial: for the extreme case of \(\Omega_{d}=\omega_{0}\), a correct choice of drive phase \(\varphi_{d}\) will yield \(\mathcal{F}\approx 0.94\), while the worst choice will yield \(\mathcal{F}\approx 0\). Additional optimization of the DC offset significantly impacts the final state fidelity for the strongest drive amplitudes. For example, at \(\Omega_{d}\lesssim\omega_{0}\), adding a proper offset will increase the fidelity to \(\mathcal{F}>0.999\), beyond the fault-tolerance threshold for some quantum computer architectures [42; 43].
The drive phase thus serves as a single optimization parameter with which strong drive pulses can reach fidelities over \(0.9\), sufficient for quantum sensing tasks. The optimal choice of phase depends on the driving amplitude and envelope function [39].
The DC offset we introduced as a novel optimization parameter may also be significant, particularly for a driving field tilted by \(\theta_{d}\) from \(\hat{z}\). The drive tilt even serves as an additional resource: a tilted drive can achieve higher fidelities than driving fields purely along \(\hat{x}\) when both the phase and offset are optimized (see Fig. S3(a) in the SI).
### Comparing with optimal control theory signals
We compare our strategy with control signals generated by quantum optimal control theory (OCT) [44; 45; 46; 47; 48]. In OCT, the optimization task is formulated as a maximization problem of a functional object by means of variational calculus. This yields a set of control equations, which are solved numerically by optimization algorithms. The optimization target is the maximization of the occupation of \(|\downarrow\rangle\) at a predefined final time, \(t_{\pi}=\frac{\pi}{\Omega_{d}}+2\delta t\), as defined previously.
For our experiment, additional restrictions were added to the control problem:
1. A restriction was imposed on the total energy of the drive that effectively kept the peak amplitude near \(\Omega_{d}\). However, we emphasize that in the OCT solution, there was no explicit restriction on the amplitude as in the offset-sine optimization. As a result, some OCT waveforms shown below exceed an amplitude of \(\Omega_{d}\).
2. The spectral composition of the drive was restricted to \(<10.7\omega_{0}\) to produce an experimentally realistic waveform with a smooth temporal profile. As mentioned before, this value is inspired by the spiral antenna.
3. Homogeneous boundary conditions were imposed on the drive and its temporal derivative (i.e., zero drive and zero time-derivative of the drive at \(t=0\) and \(t=t_{\pi}\)). This was done in order to obtain a realistic, smooth rise and fall of the drive.
The optimization problem was solved for six values of \(\Omega_{d}\), corresponding to different values of \(t_{\pi}\). The drive tilt was set to \(\theta_{d}=35.3^{\circ}\). We compared the OCT signals with two forms of offset-sine signals: an approximation to the OCT signal, obtained by a least-square fitting of the OCT signal to Eq. 2; and the optimal offset-sine, obtained by optimizing the offset and phase for the same parameters as the OCT signals. The offset and phase optimization was done, as previously described, by fixing \(t_{\pi}\) and the amplitude \(\Omega_{d}\), and sampling a range of offsets and phases, choosing the offset and phase set that minimizes the infidelity.
The signals for three cases are presented in Fig. 6(a)-(c). The fitted offset-sine approximates the OCT-generated signals well, supporting the offset-sine approach. Conversely, the optimal offset-sine is very similar when \(\Omega_{d}\ll\omega_{0}\), but takes on a distinct shape as \(\Omega_{d}\) approaches \(\omega_{0}\). The difference, however, does not come at the expense of fidelity.
Fig. 6(d) compares the pulse fidelity rates for the OCT signals, the approximated offset-sines, and the optimized offset-sine (for clarity, the data is presented in terms of infidelity \(1-\mathcal{F}\)). The optimized offset-sine signals differ from the OCT signals but provide comparable fidelity rates, with \(\mathcal{F}>0.999\). Interestingly, for the highest values of \(\Omega_{d}\) considered here (i.e., shortest \(t_{\pi}\) times), the fidelity rate of the optimized offset-sine even surpasses that of the OCT signal. While the difference presumably stems from the details of the optimization procedure, it underlines the potential of the optimized offset-sine waveforms as an alternative optimization strategy.
Our strategy for controlling qubits in the regime of \(\Omega_{d}\lesssim\omega_{0}\) thus relies on optimizing an offset-sine driving signal as an alternative to existing approaches for designing strong driving pulses, namely optimal control theory [20] and bang-bang control sequences [17]. The optimized offset-sine approach does not require precise driving field characterization or prior numerical optimization, and it suits a tilted driving field. Although the fidelity rates of the OCT and optimized offset-sine waveforms differ, the rates realized in practice would likely be lower due to deviations between the simulated and actual drive parameters. This emphasizes the benefit of a strategy that conveniently enables _in situ_ optimization. The offset-sine signal may be optimized experimentally by varying one or two parameters, namely the drive phase and DC offset. As such, this strategy is convenient for minimizing deviations between the real and simulated conditions, for example, due to driving noise or a limited bandwidth [20].

Figure 5: Optimizing offset-sine \(\pi\)-pulse drives at different driving strengths. Top row: pulse infidelity for several driving strengths \(\Omega_{d}\) as a function of pulse drive phase \(\varphi_{d}\) and offset \(a\). Center row: spin state evolution for pulses at the corresponding drive strengths. Trajectories for many values of \(\varphi_{d}\) at \(a=0\) are shown in light curves. The best (worst) \(\varphi_{d}\) values for \(a=0\) are marked by dashed (dotted) curves. Evolution for pulses with both optimal phase and offset are shown by solid red curves. Insets focus on the pulsesβ ends to highlight the final fidelity for each case. Bottom row: the waveforms corresponding to the trajectories drawn in the center row, with matching curve format.

Figure 6: Comparing OCT drive signals with offset-sine drives. (a)-(c) Drive signals for different drive strengths \(\Omega_{d}\) denoted above the plots, corresponding to different pulse durations \(t_{\pi}=\frac{\pi}{\Omega_{d}}+2\delta t\). The blue dashed curves are OCT waveforms. The dotted black curves are approximations of the OCT waveform by an offset-sine obtained by least-square fitting (OCT fit). The solid red curves are optimized offset-sine waveforms. (d) Infidelity rates of the driving signals for each of the shown waveforms.
## VI Summary and conclusions
We developed a broadband spiral antenna tailored for quantum sensing experiments with NV centers, such as nanoscale NMR. The antenna's bandwidth suits nuclear spins at fields of up to 0.5 T. We drive \({}^{1}\)H spins at a Rabi frequency of over 500 kHz, faster than previously reported. The field-to-current ratio of the spiral antenna is three-fold better than the state-of-the-art, and the field-to-power ratio is over ten-fold better [13]. Thus, owing to the high field-to-power ratio, it is possible to drive spins at appreciable driving frequencies with low power consumption, e.g., a Rabi frequency of over 100 kHz requires less than 2.5 W of input power, making the antenna especially appropriate for sensitive samples or cryogenic environments.
Furthermore, we discussed the issue of driving spins in a strong driving regime where \(\Omega_{d}\lesssim\omega_{0}\). We show that spins may be flipped with high fidelity by utilizing resonant offset-sine drive pulses optimized by varying the drive field's phase and offset. Our approach obtains fidelity rates comparable to optimal control-derived signals and can be conveniently optimized _in situ_, which is significant in experimental settings where the driving field is noisy or not fully characterized. Also, offset-sine signals are especially suitable for tilted driving fields. Pulse fidelities over 0.95 may be achieved by optimizing the drive phase, while varying the offset may bring fidelity rates over 0.999, above the fault-tolerance threshold.
###### Acknowledgements.
We thank Nicolas Staudenmaier and Nabeel Aslam for delightful discussions on correlation spectroscopy. We thank Yonatan Vernik and Leah Fuhrman Javitt for their contributions. A.S. gratefully acknowledges the support of the Clore Israel Foundation Scholars Programme, the Israeli Council for Higher Education, and the Milner Foundation. I.S. acknowledges financial support by the German Federal Ministry of Education and Research (BMBF), project no. 13N15929 QCStack. A.F. is the incumbent of the Elaine Blond Career Development Chair in Perpetuity and acknowledges support from the Israel Science Foundation (ISF grants 963/19 and 419/20) as well as the Abramson Family Center for Young Scientists, the Willner Family Leadership Institute for the Weizmann Institute of Science and the Helen and Martin Kimmel Institute for Magnetic Resonance Research. We are grateful for the historic generosity of the Harold Perlman Family. A.R. acknowledges the support of ERC grant QRES, project number 770929, Quantera grant MfQDS, ISF and the Schwartzmann university chair.
|
2309.14489 | RoCK blocks for double covers of symmetric groups over a complete
discrete valuation ring | Recently the authors proved the existence of RoCK blocks for double covers of
symmetric groups over an algebraically closed field of odd characteristic. In
this paper we prove that these blocks lift to RoCK blocks over a suitably
defined discrete valuation ring. Such a lift is even splendidly derived
equivalent to its Brauer correspondent. We note that the techniques used in the
current article are almost completely independent from those previously used by
the authors. In particular, we do not make use of quiver Hecke superalgebras
and the main result is proved using methods solely from the theory of
representations of finite groups. Therefore, this paper much more resembles the
work of Chuang and Kessar, where RoCK blocks for symmetric groups were
constructed. | Alexander Kleshchev, Michael Livesey | 2023-09-25T19:33:13Z | http://arxiv.org/abs/2309.14489v1 | # Rock blocks for double covers of symmetric groups over a complete discrete valuation ring
###### Abstract.
Recently the authors proved the existence of RoCK blocks for double covers of symmetric groups over an algebraically closed field of odd characteristic. In this paper we prove that these blocks lift to RoCK blocks over a suitably defined discrete valuation ring. Such a lift is even splendidly derived equivalent to its Brauer correspondent. We note that the techniques used in the current article are almost completely independent from those previously used by the authors. In particular, we do not make use of quiver Hecke superalgebras and the main result is proved using methods solely from the theory of representations of finite groups. Therefore, this paper much more resembles the work of Chuang and Kessar, where RoCK blocks for symmetric groups were constructed.
2020 Mathematics Subject Classification: 20C20, 20C25, 20C30 The first author was supported by the NSF grant DMS-2101791 and Charles Simonyi Endowment at the Institute for Advanced Study. The second author was supported by the EPSRC (grant no EP/T004606/1). Both authors would like to thank the Isaac Newton Institute for Mathematical Sciences, Cambridge, for support and hospitality during the programme 'Groups, representations and applications: new perspectives' where part of the work on this paper was undertaken.
## 1. Introduction
Throughout this article \(p\) will denote an odd prime and \((\mathbb{K},\mathcal{O},\mathbb{F})\) a \(p\)-modular system. So \(\mathbb{K}\) is a field of characteristic zero, \(\mathcal{O}\) a complete discrete valuation ring with field of fractions \(\mathbb{K}\) and residue field \(\mathbb{F}\), which, in turn, is an algebraically closed field of characteristic \(p\). We denote by \(\bar{\phantom{a}}\colon\mathcal{O}\to\mathbb{F}\) the natural surjection. Throughout, \(\tilde{\mathsf{S}}_{n}\) denotes a double cover of the symmetric group \(\mathsf{S}_{n}\), with central element \(z\) of order two, \(\mathsf{A}_{n}\) the alternating group
and \(\tilde{\mathsf{A}}_{n}\) its double cover. There is a one-to-one correspondence, induced by the algebra isomorphism \(\mathcal{O}\mathsf{S}_{n}\cong\mathcal{O}\tilde{\mathsf{S}}_{n}(1+z)/2\), between the blocks of \(\mathcal{O}\mathsf{S}_{n}\) and those of \(\mathcal{O}\tilde{\mathsf{S}}_{n}(1+z)/2\). In particular, Broué's conjecture is known for the blocks of \(\mathcal{O}\tilde{\mathsf{S}}_{n}(1+z)/2\), as it is known to hold for blocks of symmetric groups (see [**CK**] and [**CR**]). We, therefore, restrict our attention in this article to blocks of \(\mathcal{T}_{n}:=\mathcal{O}\tilde{\mathsf{S}}_{n}(1-z)/2\), also known as the _spin blocks of the symmetric group_.
Throughout the article it will be important to consider \(\mathcal{O}\tilde{\mathsf{S}}_{n}\) as a superalgebra via:
\[(\mathcal{O}\tilde{\mathsf{S}}_{n})_{\bar{0}}:=\mathcal{O}\tilde{\mathsf{A}}_ {n},\qquad(\mathcal{O}\tilde{\mathsf{S}}_{n})_{\bar{1}}:=\mathcal{O}(\tilde{ \mathsf{S}}_{n}\setminus\tilde{\mathsf{A}}_{n}).\]
We recall that the _spin superblocks of \(\mathcal{O}\tilde{\mathsf{S}}_{n}\)_ are labeled by pairs \((\rho,d)\), where \(\rho\) is a \(\bar{p}\)-core and \(d\in\mathbb{N}\) (referred to as the _weight of the block_) such that \(n=|\rho|+dp\). We denote such a superblock by \(B^{\rho,d}\). We note that the case \(d=0\) corresponds to the defect zero situations, so we will often assume that \(d>0\). In that case \(B^{\rho,d}\) is in fact a spin block (and not just a spin superblock) of \(\mathcal{O}\tilde{\mathsf{S}}_{n}\). In this case the defect group \(D\) of \(B^{\rho,d}\) is abelian if and only if \(d<p\). If it is abelian, then \(D\cong(C_{p})^{\times d}\).
In [**KL**] the authors defined the notion of a _\(d\)-Rouquier \(\bar{p}\)-core_, for \(d\in\mathbb{N}\). For the remainder of the introduction we assume \(0<d<p\) and that \(\rho\) is a \(d\)-Rouquier \(\bar{p}\)-core.
Throughout the article '\(\otimes\)' will denote a _tensor product of superalgebras_. For a superalgebra \(A\), a _twisted wreath superproduct_\(A\wr_{\mathsf{s}}\mathcal{T}_{d}\) is defined in SS3.7. For the definition of a _Morita superequivalence_, see SS3.3. Our main result (see Theorem 8.40) is as follows:
**Theorem A**.: _Let \(0<d<p\) and \(\rho\) be a \(d\)-Rouquier \(\bar{p}\)-core. Then \(B^{\rho,d}\) is Morita superequivalent to \(B^{\rho,0}\otimes(B^{\mathcal{O},1}\wr_{\mathsf{s}}\mathcal{T}_{d})\)._
The analogous result for blocks defined over \(\mathbb{F}\) was proved in [**KL**]. Indeed, that paper established Morita superequivalences in a more general setting, namely from specific (RoCK) blocks of quiver Hecke superalgebras to 'local' objects. We also use the term _RoCK block_ to describe the block \(B^{\rho,d}\) in Theorem A. Moreover, the corresponding purely even subalgebra \(B^{\rho,d}_{0}\) is a spin block of \(\mathcal{O}\tilde{\mathsf{A}}_{n}\), which is also referred to as a _RoCK block_.
Using Theorem A, we then go on to prove that \(B^{\rho,d}\) is not just derived equivalent but splendidly derived equivalent to its Brauer correspondent (see Corollary 9.8):
**Theorem B**.: _Let \(0<d<p\) and \(\rho\) be a \(d\)-Rouquier \(\bar{p}\)-core. Then \(B^{\rho,d}\) and \(B^{\rho,d}_{0}\) are splendidly derived equivalent to their respective Brauer correspondents. In particular, Broué's abelian defect group conjecture holds for RoCK blocks \(B^{\rho,d}\) and \(B^{\rho,d}_{0}\) of \(\mathcal{O}\tilde{\mathsf{S}}_{n}\) and \(\mathcal{O}\tilde{\mathsf{A}}_{n}\) respectively._
This is an improvement over the derived equivalence constructed for the corresponding blocks defined over \(\mathbb{F}\) in [**KL**].
Ebert, Lauda and Vera in [**ELV**] as well as Brundan and the first author in [**BK**] independently proved that any spin block of \(\mathbb{F}\tilde{\mathsf{S}}_{n}\) of weight \(d\) is derived equivalent to some RoCK block \(B^{\rho,d}\). Together with [**KL**], this completed the proof of Broué's conjecture for the spin blocks of \(\mathbb{F}\tilde{\mathsf{S}}_{n}\). Currently, the conjecture remains open for blocks defined over \(\mathcal{O}\).
The article is organized as follows: Section 2 consists of various preliminaries concerning combinatorics and general algebras. Section 3 states all the relevant results on superalgebras and supermodules. Section 4 introduces the double covers of the symmetric group and their block theory. Section 5 is where we first introduce RoCK blocks and in Section 6 we analyse weight one RoCK blocks in detail. In Section 7 the bisupermodule
\(\mathbf{X}\), which will ultimately induce our desired Morita superequivalence, and the related bisupermodule \(\mathbf{Y}\) are defined. Theorem A is proved in Section 8, while Theorem B is proved in Section 9.
## 2. Preliminaries
### Generalities
We denote \(\mathbb{N}:=\mathbb{Z}_{\geq 0}\). Throughout this article \(p\) is an odd prime,
\[\ell:=(p-1)/2\quad\text{and}\quad I:=\{0,1,\dots,\ell\}, \tag{2.1}\]
and \((\mathbb{K},\mathcal{O},\mathbb{F})\) is a \(p\)-modular system. So \(\mathbb{K}\) is a field of characteristic zero, \(\mathcal{O}\) a complete discrete valuation ring with field of fractions \(\mathbb{K}\) and residue field \(\mathbb{F}\), which is an algebraically closed field of characteristic \(p\). We denote by \(\bar{\phantom{a}}\colon\mathcal{O}\to\mathbb{F},\,a\mapsto\bar{a}\), the natural surjection. Many of the \(\mathcal{O}\)-algebras \(A\) appearing in this article have the property that \(\mathbb{K}A:=\mathbb{K}\otimes_{\mathcal{O}}A\) is split semisimple; key examples are group algebras of finite groups and Brauer tree algebras, both of which satisfy the above property, provided \(\mathbb{K}\) contains enough roots of unity.
We set \(\mathcal{G}_{0}(A)\) to be the Grothendieck group of (finite dimensional) \(A\)-modules and \(\mathcal{G}_{0}^{+}(A)\) the subset of classes in \(\mathcal{G}_{0}(A)\) that represent actual \(A\)-modules. So
\[\mathcal{G}_{0}(A)=\Big{\{}\sum_{M}a_{M}[M]\mid a_{M}\in\mathbb{Z}\Big{\}}\quad \text{and}\quad\mathcal{G}_{0}^{+}(A)=\Big{\{}\sum_{M}a_{M}[M]\mid a_{M}\in \mathbb{N}\Big{\}},\]
where \(M\) runs over the isomorphism classes of irreducible \(A\)-modules. We refer to the elements of \(\mathcal{G}_{0}^{+}(A)\) as the _characters of \(A\)_ and call \([M]\) an _irreducible character_ if \(M\) is an irreducible \(A\)-module. We denote by \(\mathrm{Irr}(A)\) the set of irreducible characters of \(A\). We will also often use \(\mathbb{Z}\mathrm{Irr}(A)\) and \(\mathbb{N}\mathrm{Irr}(A)\) to mean \(\mathcal{G}_{0}(A)\) and \(\mathcal{G}_{0}^{+}(A)\) respectively.
Let \(B\) be an \(\mathcal{O}\)-free \(\mathcal{O}\)-algebra such that \(\mathbb{K}B:=\mathbb{K}\otimes_{\mathcal{O}}B\) is split semisimple. We say \(B\) is \(\mathbb{K}\)_-split semisimple_. Set \(\mathrm{Irr}(B):=\mathrm{Irr}(\mathbb{K}B)\) and
\[\mathrm{Prj}(B) :=\{[\mathbb{K}\otimes_{\mathcal{O}}M]\in\mathcal{G}_{0}^{+}( \mathbb{K}B)\mid M\text{ is an indecomposable projective $B$-module}\},\] \[\mathbb{N}\,\mathrm{Prj}(B) :=\{[\mathbb{K}\otimes_{\mathcal{O}}M]\in\mathcal{G}_{0}^{+}( \mathbb{K}B)\mid M\text{ is a projective $B$-module}\}.\]
For \(\chi,\psi\in\mathbb{N}\mathrm{Irr}(B)\), we write
\[\chi\geq_{\mathrm{Irr}(B)}\psi\quad\text{and}\quad\chi\geq_{\mathrm{Prj}(B)}\psi\]
to mean \(\chi-\psi\in\mathbb{N}\mathrm{Irr}(B)\) and \(\chi-\psi\in\mathbb{N}\,\mathrm{Prj}(B)\), respectively.
Let \(B,C\) be \(\mathbb{K}\)-split semisimple algebras and \(N\) a \((B,C)\)-bimodule. Then the functor \(N\otimes_{C}\)? induces a \((\mathbb{Z}\)-)linear function \(\mathcal{G}_{0}(\mathbb{K}C)\to\mathcal{G}_{0}(\mathbb{K}B)\) that, by an abuse of notation, we also denote \(N\otimes_{C}\)?. If \(C\) is a subalgebra of \(B\), then we denote by
\[\downarrow^{B}_{C}:\mathcal{G}_{0}(\mathbb{K}B)\to\mathcal{G}_{0}(\mathbb{K}C )\qquad\text{and}\qquad\uparrow^{B}_{C}:\mathcal{G}_{0}(\mathbb{K}C)\to \mathcal{G}_{0}(\mathbb{K}B)\]
the linear functions induced by the functors \(\mathrm{Res}^{B}_{C}\) and \(\mathrm{Ind}^{B}_{C}\) respectively. Similar notation applies for algebras over \(\mathbb{K}\)--for example if \(B\) and \(C\) are split semisimple \(\mathbb{K}\)-algebras and \(N\) a \((B,C)\)-bimodule, then \(N\otimes_{C}\)? induces a \((\mathbb{Z}\)-)linear function \(\mathcal{G}_{0}(C)\to\mathcal{G}_{0}(B)\) that we also denote \(N\otimes_{C}\)?.
If \(G\) is a finite group and \(H\) a subgroup, we write
\[\downarrow^{G}_{H}:\mathcal{G}_{0}(\mathbb{K}G)\to\mathcal{G}_{0}(\mathbb{K}H )\qquad\text{and}\qquad\uparrow^{G}_{H}:\mathcal{G}_{0}(\mathbb{K}H)\to \mathcal{G}_{0}(\mathbb{K}G)\]
for the functions induced by the functors \(\mathrm{Res}^{G}_{H}\) and \(\mathrm{Ind}^{G}_{H}\) respectively. Furthermore, if \(b\) is a block idempotent of \(\mathcal{O}G\) and \(c\) of \(\mathcal{O}H\), then
\[\downarrow^{G,b}_{H,c}:\mathcal{G}_{0}(\mathbb{K}Gb)\to\mathcal{G}_{0}( \mathbb{K}Hc)\qquad\text{and}\qquad\uparrow^{G,b}_{H,c}:\mathcal{G}_{0}( \mathbb{K}Hc)\to\mathcal{G}_{0}(\mathbb{K}Gb)\]
will denote the functions induced by truncated restriction \(\mathrm{Res}^{G,b}_{H,c}\) and truncated induction \(\mathrm{Ind}^{G,b}_{H,c}\) respectively. That is, the functions induced by \(c\mathcal{O}Gb\otimes_{\mathcal{O}Gb}\)? and \(b\mathcal{O}Gc\otimes_{\mathcal{O}Hc}\)?.
### Group algebras
Throughout the article we assume standard facts about vertices and sources, see, for example, [**L\({}_{5}\)**, Chapter 5].
At several stages we will use the Brauer homomorphism, see [**L\({}_{5}\)**, SS5.4]. Let \(G\) be a finite group and \(P\) a \(p\)-subgroup. The _Brauer homomorphism_\(\mathrm{Br}_{P}\) is defined by
\[\mathrm{Br}_{P}:Z(\mathcal{O}G)\to Z(\mathbb{F}C_{G}(P)),\ \sum_{g\in G}\alpha_{g}g \mapsto\sum_{g\in C_{G}(P)}\bar{\alpha}_{g}g.\]
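As a simple illustration (our example, not taken from the source): for \(G=C_{p}=\langle g\rangle\) and \(P=G\) we have \(C_{G}(P)=G\), and \(\mathrm{Br}_{P}\) is just coefficient-wise reduction,

\[\mathrm{Br}_{P}\Big{(}\sum_{i=0}^{p-1}\alpha_{i}g^{i}\Big{)}=\sum_{i=0}^{p-1}\bar{\alpha}_{i}g^{i}.\]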
For any finite group \(G\), when we refer to a _block_\(\mathcal{O}Gb\), we implicitly mean that \(b\) is a block idempotent of \(\mathcal{O}G\). We will use Alperin's definition of _defect group_, see [**A**, Chapter
IV]. Namely, \(D\) is the defect group of a block \(\mathcal{O}Gb\) if the block has vertex \(\Delta D\) when considered as an \(\mathcal{O}(G\times G)\)-module.
Let \(D\) be a fixed finite \(p\)-group. For any subgroup \(P\leq D\) and group monomorphism \(\varphi:P\hookrightarrow D\), we set
\[\Delta_{\varphi}P:=\{(x,\varphi(x))\mid x\in P\}\leq D\times D. \tag{2.2}\]
If \(\varphi\) is the identity on \(P\), we just write \(\Delta P\).
**Lemma 2.3**.: _Let \(G\), \(H\) and \(J\) be finite groups with a common \(p\)-subgroup \(D\), and \(M\) be an indecomposable \((\mathcal{O}G,\mathcal{O}H)\)-bimodule with vertex \(\Delta_{\varphi}P\) for some \(P\leq D\) and \(\varphi:P\hookrightarrow D\)._
1. _If_ \(V\) _is an indecomposable_ \(\mathcal{O}H\)_-module with vertex_ \(Q\)_, then_ \(M\otimes_{\mathcal{O}H}V\) _is a direct sum of indecomposable_ \(\mathcal{O}G\)_-modules each with vertex contained in_ \(P\cap\varphi^{-1}({}^{h}Q\cap D)\)_, for some_ \(h\in H\)_._
2. _If_ \(N\) _is an indecomposable_ \((\mathcal{O}H,\mathcal{O}J)\)_-bimodule with vertex_ \(\Delta_{\psi}Q\) _for some_ \(Q\leq D\) _and_ \(\psi:Q\hookrightarrow D\)_, then_ \(M\otimes_{\mathcal{O}H}N\) _is a direct sum of indecomposable_ \((\mathcal{O}G,\mathcal{O}J)\)_-bimodules each with vertex of the form_ \(\Delta_{\vartheta}R\)_, for some_ \(R\leq P\) _and_ \(\vartheta:R\hookrightarrow D\)_, with_ \(\vartheta(R)\leq\psi(Q)\)_._
Proof.: (i) We have \(M\mid\operatorname{Ind}_{\Delta_{\varphi}P}^{G\times H}U\) for some indecomposable \(\mathcal{O}\Delta_{\varphi}P\)-module \(U\), and \(V\mid\operatorname{Ind}_{Q}^{H}W\) for some indecomposable \(\mathcal{O}Q\)-module \(W\). By Mackey's Theorem, we have that \(\operatorname{Res}_{\varphi(P)}^{H}\operatorname{Ind}_{Q}^{H}W\) is a direct sum of indecomposable \(\mathcal{O}\varphi(P)\)-modules each with vertex contained in \(\varphi(P)\cap{}^{h}Q\) for some \(h\in H\).
We now prove that we have \(\mathcal{O}G\)-module isomorphisms
\[(\operatorname{Ind}_{\Delta_{\varphi}P}^{G\times\varphi(P)}U)\otimes_{ \mathcal{O}\varphi(P)}\operatorname{Ind}_{\varphi(R)}^{\varphi(P)}Z\cong( \operatorname{Ind}_{\Delta_{\varphi}P}^{G\times\varphi(P)}U)\otimes_{\mathcal{ O}\varphi(R)}Z\cong\operatorname{Ind}_{R}^{G}(U\otimes Z), \tag{2.4}\]
for any \(R\leq P\) and \(\mathcal{O}\varphi(R)\)-module \(Z\), where \(U\otimes Z\) is given the structure of an \(\mathcal{O}R\)-module via
\[r\cdot(u\otimes z):=ru\varphi(r)^{-1}\otimes\varphi(r)z\qquad(\text{for $r\in R $, $u\in U$, $z\in Z$}).\]
The first isomorphism in (2.4) is immediate. For the second, we need only observe that we have an \(\mathcal{O}R\)-module isomorphism given by
\[\begin{aligned}(\operatorname{Ind}_{\Delta_{\varphi}R}^{R\times\varphi(R)}\operatorname{Res}_{\Delta_{\varphi}R}^{\Delta_{\varphi}P}U)\otimes_{\mathcal{O}\varphi(R)}Z&\to U\otimes Z\\ (r_{1}\otimes u\otimes\varphi(r_{2}))\otimes z&\mapsto r_{1}u\varphi(r_{1})^{-1}\otimes\varphi(r_{1}r_{2})z\\ u\otimes z&\mapsfrom u\otimes z,\end{aligned}\]
and an \(\mathcal{O}(G\times\varphi(R))\)-module isomorphism
\[\operatorname{Ind}_{\Delta_{\varphi}R}^{G\times\varphi(R)}\operatorname{Res} _{\Delta_{\varphi}R}^{\Delta_{\varphi}P}U\cong\operatorname{Res}_{G\times \varphi(R)}^{G\times\varphi(P)}\operatorname{Ind}_{\Delta_{\varphi}P}^{G\times \varphi(P)}U,\]
given by the Mackey decomposition formula.
We have that
\[M\otimes_{\mathcal{O}H}V\mid(\operatorname{Ind}_{\Delta_{\varphi}P}^{G\times H }U)\otimes_{\mathcal{O}H}\operatorname{Ind}_{Q}^{H}W\] \[\cong(\operatorname{Ind}_{\Delta_{\varphi}P}^{G\times\varphi(P)}U) \otimes_{\mathcal{O}\varphi(P)}\operatorname{Res}_{\varphi(P)}^{H} \operatorname{Ind}_{Q}^{H}W,\]
which, by (2.4) and the comments preceding it, is a direct sum of modules each with vertex contained in
\[\varphi^{-1}(\varphi(P)\cap{}^{h}Q)=P\cap\varphi^{-1}({}^{h}Q),\]
for some \(h\in H\).
(ii) Suppose \(N\) lies in the block \(\mathcal{O}Jb\) when considered as a right \(\mathcal{O}J\)-module. Certainly \(\mathcal{O}Jb\) is naturally an \((\mathcal{O}J,\mathcal{O}J)\)-bimodule. We consider the \((\mathcal{O}(G\times J),\mathcal{O}(H\times J))\)-bimodule \(M\otimes\mathcal{O}Jb\). Say \(\mathcal{O}Jb\) has defect group \(S\leq J\). Then \(\mathcal{O}Jb\) has vertex \(\Delta S\) and \(M\otimes\mathcal{O}Jb\) has vertex \(\Delta_{\varphi\times\operatorname{Id}_{S}}(P\times S)\leq(D\times S)\times(D\times S)\).
We have the following isomorphism of \(\mathcal{O}(G\times J)\)-modules
\[\begin{aligned}(M\otimes\mathcal{O}Jb)\otimes_{\mathcal{O}(H\times J)}N&\cong M\otimes_{\mathcal{O}H}N\\ (m\otimes jb)\otimes n&\mapsto m\otimes nbj^{-1}\\ (m\otimes b)\otimes n&\mapsfrom m\otimes n.\end{aligned}\]
Therefore, by part (i), \(M\otimes_{\mathcal{O}H}N\) is a direct sum of indecomposable \((\mathcal{O}G,\mathcal{O}J)\)-bimodules each with vertex contained in
\[(P\times S)\cap(\varphi\times\operatorname{Id}_{S})^{-1}(^{(h,j)}\Delta_{ \psi}Q),\]
for some \(h\in H\) and \(j\in J\). Since \(M\otimes_{\mathcal{O}H}N\) is a right \(\mathcal{O}J\)-module, each vertex is contained in
\[{}^{(1,j^{-1})}[(P\times S)\cap(\varphi\times\operatorname{Id}_{S})^{-1}({}^{(h,j)}\Delta_{\psi}Q)]=(P\times{}^{j^{-1}}S)\cap(\varphi\times\operatorname{Id}_{{}^{j^{-1}}S})^{-1}({}^{(h,1)}\Delta_{\psi}Q),\]
for some \(h\in H\) and \(j\in J\). Setting \(R^{\prime}=P\cap\varphi^{-1}({}^{h}Q)\) and \(\vartheta^{\prime}:R^{\prime}\hookrightarrow D\), \(x\mapsto\psi({}^{h^{-1}}\varphi(x))\), we now have
\[(P\times{}^{j^{-1}}S)\cap(\varphi\times\operatorname{Id}_{j^{-1}}S)^{-1}(^{(h,1)}\Delta_{\psi}Q)\leq\Delta_{\vartheta^{\prime}}R^{\prime}.\]
Since every subgroup of \(\Delta_{\vartheta^{\prime}}R^{\prime}\) is of the form \(\Delta_{\vartheta}R\), for suitably defined \(R\) and \(\vartheta\), the claim follows.
In the next lemma \(\operatorname{Tr}_{H}^{G}:Z(\mathcal{O}H)\to Z(\mathcal{O}G)\) is the relative trace, see e.g. [**L\({}_{5}\)**, §2.5].
**Lemma 2.5**.: _Let \(G\) be a finite group, \(H\) a normal subgroup and \(\mathcal{O}He\) a block with defect group \(D\). If \(C_{G}(e)=H\), then \(\mathcal{O}Gf\) is Morita equivalent to \(\mathcal{O}He\) with defect group \(D\), where \(f:=\operatorname{Tr}_{H}^{G}(e)\). In particular, \(f\) is a block idempotent of \(\mathcal{O}G\)._
Proof.: The Morita equivalence is just a special case of [**Ku**\({}_{1}\), Theorem C]. That \(\mathcal{O}Gf\) has defect group \(D\) follows from the following isomorphisms of \(\mathcal{O}(G\times G)\)-modules and \(\mathcal{O}(H\times H)\)-modules respectively
\[\operatorname{Ind}_{H\times H}^{G\times G}(\mathcal{O}He)\cong\mathcal{O}Gf, \qquad\operatorname{Res}_{H\times H}^{G\times G}(\mathcal{O}Gf)\cong\bigoplus _{g_{1},g_{2}\in G/H}g_{1}\mathcal{O}Heg_{2}.\]
The first isomorphism implies that \(\mathcal{O}Gf\) is relatively \(\Delta D\)-projective, while the second dictates that the vertex of \(\mathcal{O}Gf\) cannot be any smaller than \(\Delta D\).
**Lemma 2.6**.: _Let \(G\) be a finite group, \(H\) a normal subgroup and \(M\) an indecomposable \(\mathcal{O}H\)-module with vertex \(D\). If \(N_{G}(D)\leq H\), then \(\operatorname{Ind}_{H}^{G}M\) is indecomposable._
Proof.: Consider the decomposition
\[\operatorname{Res}_{H}^{G}\operatorname{Ind}_{H}^{G}M\cong\bigoplus_{g\in G/H}gM \tag{2.7}\]
of \(\operatorname{Res}_{H}^{G}\operatorname{Ind}_{H}^{G}M\) into indecomposable \(\mathcal{O}H\)-modules. Since \(M\) has vertex \(D\), \(gM\) has vertex \({}^{g}D\). Let \(g_{1},g_{2}\in G\) and suppose \({}^{g_{1}}D\) is conjugate to \({}^{g_{2}}D\) in \(H\). In other words \({}^{g_{1}}D={}^{hg_{2}}D\), for some \(h\in H\). Then \(g_{1}^{-1}hg_{2}\in N_{G}(D)\leq H\) and so \(g_{1}^{-1}g_{2}\in H\). We have now proved that all summands in (2.7) are pairwise non-isomorphic. The claim now follows from [**W**, §5, Proposition 2].
We now examine a specific application of Lemma 2.6. If \(G\) is a finite group and \(H\) a normal subgroup, we define the subgroup
\[(G\times G)_{G/H}:=\{(g_{1},g_{2})\in G\times G\mid g_{1}H=g_{2}H\}\leq G\times G. \tag{2.8}\]
**Lemma 2.9**.: _Let \(G\) be a finite group, \(H\) a normal subgroup and \(M\) an indecomposable \(\mathcal{O}(H\times H)\)-module, with vertex \(\Delta D\)._
1. _If_ \(C_{G}(D)\leq H\)_, then_ \(\mathrm{Ind}_{H\times H}^{G\times H}M\) _is an indecomposable_ \(\mathcal{O}(G\times H)\)_-module._
2. _If_ \(M\) _extends to an_ \(\mathcal{O}(G\times G)_{G/H}\)_-module, then_ \[\mathrm{Res}_{G\times H}^{G\times G}\mathrm{Ind}_{(G\times G)_{G/H}}^{G\times G }M\cong\mathrm{Ind}_{H\times H}^{G\times H}M.\]
_In particular, if the hypotheses of both (i) and (ii) hold, then \(\mathrm{Ind}_{(G\times G)_{G/H}}^{G\times G}M\) is an indecomposable \(\mathcal{O}(G\times G)\)-module._
3. _If_ \(e\in Z(\mathcal{O}G)\) _is a block idempotent of_ \(\mathcal{O}H\) _such that_ \(\mathcal{O}He\) _has defect group_ \(D\) _and_ \(C_{G}(D)\leq H\)_, then_ \(e\) _is a block idempotent of_ \(\mathcal{O}G\)_._
Proof.: (i) Suppose \((g,h)\in G\times H\) normalizes \(\Delta D\). Then
\[\Delta D={}^{(g,h)}\Delta D=\{({}^{g}x,{}^{h}x)\mid x\in D\}={}^{(g,g)}\{(x,{ }^{g^{-1}h}x)\mid x\in D\}.\]
In particular, \(g^{-1}h\in C_{G}(D)\leq H\) and so \(N_{G\times H}(\Delta D)\leq H\times H\). The claim now follows from Lemma 2.6.
(ii) This is just the Mackey decomposition formula once we note that
\[(G\times H)\cap(G\times G)_{G/H}=H\times H\]
and
\[|(G\times H)\backslash(G\times G)/(G\times G)_{G/H}|=1.\]
(iii) This just follows from (i) and (ii), as \(\mathcal{O}He\) is certainly an \(\mathcal{O}(G\times G)_{G/H}\)-module such that \(\mathcal{O}Ge\cong\mathrm{Ind}_{(G\times G)_{G/H}}^{G\times G}(\mathcal{O}He)\).
### Brauer trees
We assume the reader is familiar with Brauer trees, in particular the fact that the basic algebra of a block of a finite group with cyclic defect is isomorphic to an appropriately constructed Brauer tree algebra (see [**L\({}_{1}\)**, Proposition 3.10]). We refer the reader to the same result for the definition of the Brauer tree algebra defined over \(\mathcal{O}\). There, it is stated only for basic algebras of blocks with cyclic defect but the definition works for arbitrary Brauer trees. We briefly state some key facts used in this article. A good reference is [**L\({}_{6}\)**, §11.1].
Let \(\mathscr{T}\) be a Brauer tree with associated Brauer tree algebra \(A\) defined over \(\mathcal{O}\). It is implicitly assumed that the nodes of \(\mathscr{T}\) are labeled by \(\mathrm{Irr}(A)\). Similarly, the edges of \(\mathscr{T}\) are labeled by the isomorphism classes of irreducible \(\mathbb{F}A\)-modules. A node with multiplicity greater than one (of which there will be at most one) will label a number of irreducible characters equal to said multiplicity. Finally, the projective cover (as an \(A\)-module) of an irreducible \(\mathbb{F}A\)-module has character equal to the sum of characters associated to the nodes at either end of the appropriate edge of \(\mathscr{T}\).
It is our convention with Brauer trees that a node has multiplicity one unless it has some number of rings around it. In this case the multiplicity is always one more than
the number of rings. In this article we will only consider Brauer trees with multiplicities at most two.
Recall the notation (2.1). Consider the following Brauer tree:
\[\ell^{+}\ \text{--}\ (\ell-1)^{+}\ \text{--}\ \cdots\ \text{--}\ 1^{+}\ \text{--}\ 0\ \text{--}\ 1^{-}\ \text{--}\ \cdots\ \text{--}\ (\ell-1)^{-}\ \text{--}\ \ell^{-} \tag{2.10}\]

(a line with \(2\ell+1\) nodes, each of multiplicity one).
We use \(\mathtt{B}_{\ell}\) to signify the corresponding Brauer tree algebra defined over \(\mathcal{O}\). Note that \(\mathbb{F}\otimes_{\mathcal{O}}\mathtt{B}_{\ell}\) is the algebra denoted \(\mathtt{B}_{\ell}\) in [**KL**]. We denote by \(\chi_{i^{(\pm)}}\in\operatorname{Irr}(\mathtt{B}_{\ell})\) the element corresponding to the node \(i^{(\pm)}\), for \(i\in I\).
Next, consider the following Brauer tree:
\[\ell\ \text{--}\ (\ell-1)\ \text{--}\ \cdots\ \text{--}\ 1\ \text{--}\ 0 \tag{2.11}\]

(a line with \(\ell+1\) nodes, in which the node \(0\) is exceptional of multiplicity two).
We use \(\mathtt{A}_{\ell}\) to signify the corresponding Brauer tree algebra defined over \(\mathcal{O}\). Note that \(\mathbb{F}\otimes_{\mathcal{O}}\mathtt{A}_{\ell}\) is isomorphic to the algebra denoted \(\mathtt{A}_{\ell}\) in [**KL**]. We denote by \(\chi_{i}\in\operatorname{Irr}(\mathtt{A}_{\ell})\) the element corresponding to the node \(i\), for \(1\leq i\leq\ell\) and by \(\chi_{0^{+}},\chi_{0^{-}}\in\operatorname{Irr}(\mathtt{A}_{\ell})\) the pair corresponding to the node \(0\).
Recall from §2.2 that, if \(N\) is an \((A,B)\)-bimodule, then we use \(N\otimes_{B}\)? to denote the corresponding function \(\mathcal{G}_{0}(\mathbb{K}B)\to\mathcal{G}_{0}(\mathbb{K}A)\).
**Lemma 2.12**.: _We have the following relationships:_
* _If_ \(0\leq n\leq 4\ell-1\) _then_ \[\Omega^{n}_{\mathtt{B}_{\ell}\otimes\mathtt{B}_{\ell}^{\mathrm{op}}}(\mathtt{B}_{\ell})\otimes_{\mathtt{B}_{\ell}}\chi_{\ell^{+}}\geq_{\operatorname{Irr}(\mathtt{B}_{\ell})}\begin{cases}\chi_{(\ell-n)^{+}}&\text{ if }0\leq n\leq\ell-1,\\ \chi_{0}&\text{ if }n=\ell,\\ \chi_{(n-\ell)^{-}}&\text{ if }\ell+1\leq n\leq 2\ell,\\ \chi_{(3\ell-n)^{-}}&\text{ if }2\ell+1\leq n\leq 3\ell-1,\\ \chi_{0}&\text{ if }n=3\ell,\\ \chi_{(n-3\ell)^{+}}&\text{ if }3\ell+1\leq n\leq 4\ell-1.\end{cases}\] _Moreover, if_ \(1\leq i\leq\ell\) _then_ \[\Omega^{3\ell}_{\mathtt{B}_{\ell}\otimes\mathtt{B}_{\ell}^{\mathrm{op}}}(\mathtt{B}_{\ell})\otimes_{\mathtt{B}_{\ell}}\chi_{i^{+}}\geq_{\operatorname{Prj}(\mathtt{B}_{\ell})}\begin{cases}\chi_{(\ell-i)^{+}}\text{ and }\chi_{(\ell-i)^{-}}&\text{ if }1\leq i\leq\ell-1,\\ \chi_{0}&\text{ if }i=\ell,\end{cases}\] \[\Omega^{3\ell}_{\mathtt{B}_{\ell}\otimes\mathtt{B}_{\ell}^{\mathrm{op}}}(\mathtt{B}_{\ell})\otimes_{\mathtt{B}_{\ell}}\chi_{i^{-}}\geq_{\operatorname{Prj}(\mathtt{B}_{\ell})}\begin{cases}\chi_{(\ell-i)^{+}}\text{ and }\chi_{(\ell-i)^{-}}&\text{ if }1\leq i\leq\ell-1,\\ \chi_{0}&\text{ if }i=\ell,\end{cases}\] _and_ \[\Omega^{3\ell}_{\mathtt{B}_{\ell}\otimes\mathtt{B}_{\ell}^{\mathrm{op}}}(\mathtt{B}_{\ell})\otimes_{\mathtt{B}_{\ell}}\chi_{0}\geq_{\operatorname{Prj}(\mathtt{B}_{\ell})}\chi_{\ell^{+}}\text{ and }\chi_{\ell^{-}}.\]
* _If_ \(0\leq n\leq 2\ell-1\) _then_ \[\Omega^{n}_{\mathtt{A}_{\ell}\otimes\mathtt{A}_{\ell}^{\mathrm{op}}}( \mathtt{A}_{\ell})\otimes_{\mathtt{A}_{\ell}}\chi_{\ell}\geq_{ \operatorname{Irr}(\mathtt{A}_{\ell})}\begin{cases}\chi_{\ell-n}&\text{ if }0\leq n\leq\ell-1,\\ \chi_{0^{+}}+\chi_{0^{-}}&\text{ if }n=\ell,\\ \chi_{n-\ell}&\text{ if }\ell+1\leq n\leq 2\ell-1.\end{cases}\]
_Moreover, if \(1\leq i\leq\ell\) then_
\[\Omega^{\ell}_{\mathtt{A}_{\ell}\otimes\mathtt{A}^{\mathrm{op}}_{\ell}}(\mathtt{A}_{\ell})\otimes_{\mathtt{A}_{\ell}}\chi_{i}\geq_{\operatorname{Prj}(\mathtt{A}_{\ell})}\begin{cases}\chi_{\ell-i}&\text{if }1\leq i\leq\ell-1,\\ \chi_{0^{+}}+\chi_{0^{-}}&\text{if }i=\ell,\end{cases}\]
_and_
\[\Omega^{\ell}_{\mathtt{A}_{\ell}\otimes\mathtt{A}^{\mathrm{op}}_{\ell}}(\mathtt{A}_{\ell})\otimes_{\mathtt{A}_{\ell}}(\chi_{0^{+}}+\chi_{0^{-}})\geq_{\operatorname{Prj}(\mathtt{A}_{\ell})}\chi_{\ell}.\]
Proof.: (i) By [**G**, Theorem 2], there exists a sequence \(v_{0},\ldots,v_{4\ell-1}\) of nodes of the Brauer tree of \(\mathtt{B}_{\ell}\) (possibly with repeats) and \(\mathcal{O}\)-free \(\mathtt{B}_{\ell}\)-modules \(\mathtt{M}_{0},\ldots,\mathtt{M}_{4\ell-1}\) such that \([\mathtt{M}_{i}]=\chi_{v_{i}}\), for all \(0\leq i\leq 4\ell-1\), with the following properties: \((v_{i},v_{i+1})\) is an edge of the Brauer tree, for all \(i\in\mathbb{Z}\) (where the subscripts are considered modulo \(4\ell\)), each edge occurs exactly twice in the sequence
\[(v_{0},v_{1}),(v_{1},v_{2}),\ldots,(v_{4\ell-2},v_{4\ell-1}),(v_{4\ell-1},v_{0})\]
and \(\Omega_{\mathtt{B}_{\ell}}\mathtt{M}_{i}\cong\mathtt{M}_{i+1}\) for all \(i\in\mathbb{Z}\) (where again the subscripts are considered modulo \(4\ell\)).
Since each edge must occur exactly twice, it is clear that our sequence must be some cyclic permutation of
\[\ell^{+},(\ell-1)^{+},\ldots,1^{+},0,1^{-},\ldots,(\ell-1)^{-},\ell^{-},(\ell -1)^{-},\ldots,1^{-},0,1^{+},\ldots,(\ell-1)^{+}.\]
In fact, by shifting, we can assume our sequence is exactly as above. In other words, the sequence walks from one end of the Brauer tree to the other and then back again.
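For instance, when \(\ell=1\) the sequence is \(1^{+},0,1^{-},0\), with consecutive pairs \((1^{+},0),(0,1^{-}),(1^{-},0),(0,1^{+})\), so each of the two edges of the tree indeed occurs exactly twice.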
Next note that, for all \(\mathtt{B}_{\ell}\)-modules \(M\) and \(n\in\mathbb{N}\),
\[\Omega^{n}_{\mathtt{B}_{\ell}\otimes\mathtt{B}^{\mathrm{op}}_{\ell}}( \mathtt{B}_{\ell})\otimes_{\mathtt{B}_{\ell}}M\cong\Omega^{n}_{\mathtt{B}_{ \ell}}M\oplus P,\]
for some projective \(\mathtt{B}_{\ell}\)-module \(P\). This can be seen easily by taking a projective resolution of \(\mathtt{B}_{\ell}\) as a \(\mathtt{B}_{\ell}\otimes\mathtt{B}^{\mathrm{op}}_{\ell}\)-module and then applying \(?\otimes_{\mathtt{B}_{\ell}}M\). Therefore, for all \(i,n\in\mathbb{N}\),
\[\Omega^{n}_{\mathtt{B}_{\ell}\otimes\mathtt{B}^{\mathrm{op}}_{\ell}}( \mathtt{B}_{\ell})\otimes_{\mathtt{B}_{\ell}}\mathtt{M}_{i}\cong\mathtt{M}_{ i+n}\oplus\mathtt{P}_{i,n}, \tag{2.13}\]
for some projective \(\mathtt{B}_{\ell}\)-module \(\mathtt{P}_{i,n}\). In particular, for all \(n\in\mathbb{N}\),
\[\Omega^{n}_{\mathtt{B}_{\ell}\otimes\mathtt{B}^{\mathrm{op}}_{\ell}}( \mathtt{B}_{\ell})\otimes_{\mathtt{B}_{\ell}}\mathtt{M}_{0}\cong\mathtt{M}_{ n}\oplus\mathtt{P}_{0,n}.\]
The first claim now follows. The second claim also follows immediately from (2.13) once we have noted that all the nodes that are not \(\ell^{\pm}\) occur twice in our sequence. For example, for all \(1\leq i\leq\ell-1\), since \(v_{\ell-i}=v_{3\ell+i}=i^{+}\) and \(v_{4\ell-i}=(\ell-i)^{+}\), \(v_{6\ell+i}=v_{2\ell+i}=(\ell-i)^{-}\), we have
\[\Omega^{3\ell}_{\mathtt{B}_{\ell}\otimes\mathtt{B}^{\mathrm{op}}_{\ell}}( \mathtt{B}_{\ell})\otimes_{\mathtt{B}_{\ell}}\chi_{i^{+}}\geq_{\mathrm{P} \mathrm{rj}(\mathtt{B}_{\ell})}\chi_{(\ell-i)^{+}}\quad\text{and}\quad\chi_{( \ell-i)^{-}}.\]
(ii) The application of [**G**, Theorem 2] gives a sequence \(\mathtt{M}_{0},\ldots,\mathtt{M}_{2\ell-1}\) of \(\mathcal{O}\)-free \(\mathtt{A}_{\ell}\)-modules such that
\[[\mathtt{M}_{i}]=\begin{cases}\chi_{\ell-i}&\text{if }0\leq i\leq\ell-1\\ \chi_{0^{+}}+\chi_{0^{-}}&\text{if }i=\ell\\ \chi_{i-\ell}&\text{if }\ell+1\leq i\leq 2\ell-1\end{cases}\]
and \(\Omega_{\mathtt{A}_{\ell}}\mathtt{M}_{i}\cong\mathtt{M}_{i+1}\) for all \(i\in\mathbb{Z}\) (where the subscripts are considered modulo \(2\ell\)). The proof now proceeds as in part (i).
### Combinatorics
Let \(n\in\mathbb{N}\). We use \([n]\) to signify the set \(\{1,\ldots,n\}\). We denote by \(\mathscr{P}(n)\) the set of partitions of \(n\) and by \(\mathscr{P}_{0}(n)\) the set of strict partitions of \(n\), i.e. partitions of \(n\) without repeated parts. In addition, we set \(\mathscr{P}:=\bigsqcup_{n\in\mathbb{N}}\mathscr{P}(n)\) and \(\mathscr{P}_{0}:=\bigsqcup_{n\in\mathbb{N}}\mathscr{P}_{0}(n)\). For any partition \(\lambda\), we set \(h(\lambda):=\max\{k\mid\lambda_{k}>0\}\) to be the _length_ of \(\lambda\). If, in addition, \(\mu\in\mathscr{P}\), we write \(\mu\subseteq\lambda\) if \(h(\mu)\leq h(\lambda)\) and \(\mu_{i}\leq\lambda_{i}\), for all \(1\leq i\leq h(\mu)\).
For a partition \(\lambda\), we denote by \([\lambda]\) the _Young diagram_ of \(\lambda\), which consists of the _boxes_\((i,j)\in\mathbb{Z}_{>0}\times\mathbb{Z}_{>0}\) satisfying the following conditions:
\[[\lambda]:=\{(i,j)\in\mathbb{Z}_{>0}\times\mathbb{Z}_{>0}\mid i\leq h(\lambda )\text{ and }1\leq j\leq\lambda_{i}\}.\]
For a _strict_ partition \(\lambda\) it is often more natural to work with its _shifted diagram_, which consists of the boxes satisfying the following conditions:
\[\mathsf{sh}[\lambda]:=\{(i,j)\in\mathbb{Z}_{>0}\times\mathbb{Z}_{>0}\mid i\leq h (\lambda)\text{ and }i\leq j\leq\lambda_{i}+i-1\}.\]
For example, if \(\lambda=(6,4,2,1)\in\mathscr{P}_{0}(13)\), then the shifted diagram of \(\lambda\) is \[\mathsf{sh}[\lambda]=\{(1,1),\dots,(1,6),\;(2,2),\dots,(2,5),\;(3,3),(3,4),\;(4,4)\},\] with row \(i\) indented so that it begins in column \(i\).
Let \(\lambda\in\mathscr{P}(n)\). We define \(K_{\lambda}\) to be the number of bijections \(T:[n]\to[\lambda]\) such that \(T([m])\), for \(1\leq m\leq n\), is always the Young diagram of a partition. We also denote by \(\mathscr{P}(\lambda)^{+1}\) the set of all \(\mu\in\mathscr{P}(n+1)\) such that \([\mu]\) is obtained by adding a box to \([\lambda]\). Similarly, we denote by \(\mathscr{P}(\lambda)^{-1}\) the set of all \(\mu\in\mathscr{P}(n-1)\) such that \([\mu]\) is obtained by removing a box from \([\lambda]\).
Let \(\lambda\in\mathscr{P}_{0}(n)\). We define \(K^{\prime}_{\lambda}\) to be the number of bijections \(T:[n]\to\mathsf{sh}[\lambda]\) such that \(T([m])\), for \(1\leq m\leq n\), is always the shifted diagram of a strict partition. We also denote by \(\mathscr{P}_{0}(\lambda)^{+1}\) the set of \(\mu\in\mathscr{P}_{0}(n+1)\) such that \(\mathsf{sh}[\mu]\) is obtained by adding a box to \(\mathsf{sh}[\lambda]\). Similarly, \(\mathscr{P}_{0}(\lambda)^{-1}\) will denote the set of \(\mu\in\mathscr{P}_{0}(n-1)\) such that \(\mathsf{sh}[\mu]\) is obtained by removing a box from \(\mathsf{sh}[\lambda]\).
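For example, for \(n=3\) we have \(K_{(3)}=K_{(1,1,1)}=1\) and \(K_{(2,1)}=2\), the latter corresponding to the two standard Young tableaux of shape \((2,1)\), while \(K^{\prime}_{(3)}=K^{\prime}_{(2,1)}=1\), since in \(\mathsf{sh}[(2,1)]=\{(1,1),(1,2),(2,2)\}\) the box \((2,2)\) can only be filled last.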
The following lemma is well known and is easy to see:
**Lemma 2.14**.: _Let \(n\in\mathbb{Z}_{>0}\). We have:_
* _if_ \(\lambda\in\mathscr{P}(n)\) _then_ \(\sum_{\mu\in\mathscr{P}(\lambda)^{-1}}K_{\mu}=K_{\lambda}\);__
* _if_ \(\lambda\in\mathscr{P}_{0}(n)\) _then_ \(\sum_{\mu\in\mathscr{P}_{0}(\lambda)^{-1}}K^{\prime}_{\mu}=K^{\prime}_{\lambda}\)_._
The next lemma is proved in [KL, Lemma 4.4.13].
**Lemma 2.15**.: _Let \(n\in\mathbb{N}\). Then_
\[\sum_{\lambda\in\mathscr{P}(n)}K^{2}_{\lambda}=n!=\sum_{\lambda\in\mathscr{P}_ {0}(n)}2^{n-h(\lambda)}(K^{\prime}_{\lambda})^{2}.\]
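For instance, when \(n=3\) this reads \(K_{(3)}^{2}+K_{(2,1)}^{2}+K_{(1,1,1)}^{2}=1+4+1=6=3!\) on the left-hand side and \(2^{3-1}(K^{\prime}_{(3)})^{2}+2^{3-2}(K^{\prime}_{(2,1)})^{2}=4+2=6\) on the right-hand side.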
We will use the \(\bar{p}\)-abacus notation for strict partitions introduced in [**MY**]. For \(\lambda\in\mathscr{P}_{0}(n)\), \(\mathtt{Ab}_{\lambda}\) will signify its \(\bar{p}\)-abacus display, see [**KL**, §2.3b] for more details on this. Our convention is that the \(0^{\text{th}}\) position on the abacus display is always unoccupied, i.e. we always use \(h(\lambda)\) beads in \(\mathtt{Ab}_{\lambda}\). An _elementary slide_ up on some abacus display means simply moving a bead from position \(r\) to position \(r-p\), for some \(r\geq p\). For \(r=p\), this means removing the bead in position \(p\) entirely. There is, of course, the analogous concept of an elementary slide down on an abacus display. In particular, inserting a bead
in the empty \(p^{\text{th}}\) position in an abacus display is considered to be an elementary slide down.
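For instance, take \(p=3\) and recall that, following [**MY**], \(\mathtt{Ab}_{\lambda}\) has a bead in position \(\lambda_{i}\) for each nonzero part of \(\lambda\). Then \(\mathtt{Ab}_{(5,1)}\) has beads in positions \(5\) and \(1\), and the elementary slide up moving the bead from position \(5\) to position \(2\) yields \(\mathtt{Ab}_{(2,1)}\); for \(\lambda=(3,1)\), the elementary slide up at \(r=p=3\) removes the bead in position \(3\), yielding \(\mathtt{Ab}_{(1)}\).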
Following [**M**], we can associate to every \(\lambda\in\mathscr{P}_{0}\) its _\(\bar{p}\)-core_\(\operatorname{core}(\lambda)\in\mathscr{P}_{0}\). We can also define a non-negative integer called the _\(\bar{p}\)-weight_ of \(\lambda\):
\[\operatorname{wt}(\lambda):=(|\lambda|-|\operatorname{core}(\lambda)|)/p\in \mathbb{N}.\]
If \(\rho\) is a \(\bar{p}\)-core and \(d\in\mathbb{N}\), we define
\[\mathscr{P}_{0}(\rho,d):=\{\lambda\in\mathscr{P}_{0}\mid\operatorname{core}( \lambda)=\rho,\ \operatorname{wt}(\lambda)=d\}.\]
If \(\lambda\in\mathscr{P}_{0}(\rho,d)\), then, for \(i\in I\), we set \(\lambda^{(i)}\) to be the \(i^{\text{th}}\) _quotient of \(\lambda\)_, as defined in [**MY**, p.27]. So the \(\lambda^{(i)}\)'s are partitions, \(\lambda^{(0)}\) being strict, with \(\sum_{i=0}^{\ell}|\lambda^{(i)}|=d\). We set
\[K(\lambda):=K^{\prime}_{\lambda^{(0)}}K_{\lambda^{(1)}}\dots K_{\lambda^{( \ell)}}. \tag{2.16}\]
We denote by \(\Lambda(I,d)\) the set of all tuples \(\underline{d}=(d_{0},d_{1},\dots,d_{\ell})\) of non-negative integers such that \(d_{0}+\dots+d_{\ell}=d\). Given a tuple \(\underline{d}=(d_{0},d_{1},\dots,d_{\ell})\in\Lambda(I,d)\), we set
\[\mathscr{P}_{0}(\rho,\underline{d})=\{\lambda\in\mathscr{P}_{0}(\rho,d)\ |\ | \lambda^{(i)}|=d_{i}\text{ for all }i\in I\}. \tag{2.17}\]
We also define the multinomial coefficient
\[\binom{d}{\underline{d}}:=\binom{d}{d_{0},\dots,d_{\ell}}. \tag{2.18}\]
For \(\lambda\in\mathscr{P}_{0}(n-p)\) and \(j\in I\), \(\mathscr{P}_{0}^{j}(\lambda)^{+}\) will signify the set of \(\mu\in\mathscr{P}_{0}(n)\) such that \(\mathtt{Ab}_{\mu}\) is obtained from \(\mathtt{Ab}_{\lambda}\) by an elementary slide down on the runner \(j\) or \(p-j\) or by inserting beads at the top of the runners \(j\) and \(p-j\). It is well known that this is equivalent to \(\lambda\) and \(\mu\) having the same \(\bar{p}\)-core, \(\lambda^{(i)}=\mu^{(i)}\), for all \(i\neq j\), and \(\mu^{(0)}\in\mathscr{P}_{0}(\lambda^{(0)})^{+1}\), if \(j=0\) or \(\mu^{(j)}\in\mathscr{P}(\lambda^{(j)})^{+1}\), if \(j\neq 0\). We set \(\mathscr{P}_{0}^{\leq j}(\lambda)^{+}:=\bigsqcup_{i=0}^{j}\mathscr{P}_{0}^{i}( \lambda)^{+}\). Conversely, for \(\lambda\in\mathscr{P}_{0}(n)\), \(\mathscr{P}_{0}^{j}(\lambda)^{-}\) is the set of \(\mu\in\mathscr{P}_{0}(n-p)\) such that \(\lambda\in\mathscr{P}_{0}^{j}(\mu)^{+}\). Again, we set \(\mathscr{P}_{0}^{\leq j}(\lambda)^{-}:=\bigsqcup_{i=0}^{j}\mathscr{P}_{0}^{i}( \lambda)^{-}\).
## 3. Superalgebras and supermodules
### Superspaces and superalgebras
Most of the definitions and results in the first part of this section come directly from [**KL**, §2.2]. However, we restate a lot of the content of that article as all the superalgebras in [**KL**] are defined over fields and we are primarily concerned with \(\mathcal{O}\)-superalgebras.
Recall that \(\mathcal{R}\) denotes \(\mathbb{K}\) or \(\mathcal{O}\). By an \(\mathcal{R}\)_-superspace_ we mean a finitely generated \(\mathcal{R}\)-module \(V\) with decomposition \(V=V_{\bar{0}}\oplus V_{\bar{1}}\). For \(\varepsilon\in\mathbb{Z}/2\), we refer to the elements of \(V_{\varepsilon}\) as the homogeneous elements of _parity_\(\varepsilon\), and write \(|v|=\varepsilon\) for \(v\in V_{\varepsilon}\). In fact, whenever we write \(|v|\), for some \(v\in V\), it is always assumed that \(v\) is homogeneous.
An \(\mathcal{R}\)-subsuperspace \(W\subseteq V\) is an \(\mathcal{R}\)-submodule of \(V\) such that \(W=(W\cap V_{\bar{0}})\oplus(W\cap V_{\bar{1}})\). We can, of course, treat \(W\) as an \(\mathcal{R}\)-subsuperspace in its own right.
Let \(V,W\) be \(\mathcal{R}\)-superspaces. The direct sum \(V\oplus W\) and tensor product \(V\otimes W\) are both considered as \(\mathcal{R}\)-superspaces in the obvious way. A _homomorphism_\(f:V\to W\) of \(\mathcal{R}\)-superspaces is an \(\mathcal{R}\)-linear map satisfying \(f(V_{\varepsilon})\subseteq W_{\varepsilon}\), for \(\varepsilon\in\mathbb{Z}/2\).
An _\(\mathcal{R}\)-superalgebra_ is an \(\mathcal{R}\)-superspace \(A\) that is also an \(\mathcal{R}\)-algebra such that \(A_{\varepsilon}A_{\delta}\subseteq A_{\varepsilon+\delta}\), for all \(\varepsilon,\delta\in\mathbb{Z}/2\). Recalling our conventions that all \(\mathcal{R}\)-algebras are assumed to be finitely generated as \(\mathcal{R}\)-modules and _algebra_ refers to an \(\mathcal{O}\)-algebra, the same thus applies to superalgebras: they are all assumed to be finitely generated as \(\mathcal{R}\)-modules and _superalgebra_ refers to an \(\mathcal{O}\)-superalgebra.
A _subsuperalgebra_ of an \(\mathcal{R}\)-superalgebra \(A\) is a (unital) subalgebra that is also a subsuperspace. A _homomorphism_\(f:A\to B\) of \(\mathcal{R}\)-superalgebras is an algebra homomorphism that is also a homomorphism of \(\mathcal{R}\)-superspaces.
We define the \(\mathcal{R}\)-superalgebra homomorphism
\[\sigma=\sigma_{A}:A\to A,\ a\mapsto(-1)^{|a|}a. \tag{3.1}\]
Note that the superstructure (i.e. \(\mathbb{Z}/2\)-grading) on \(A\) is completely determined by \(\sigma_{A}\).
The tensor product \(A\otimes B\) of \(\mathcal{R}\)-superalgebras \(A\) and \(B\) is considered to be an \(\mathcal{R}\)-superalgebra via
\[(a\otimes b)(a^{\prime}\otimes b^{\prime})=(-1)^{|b||a^{\prime}|}aa^{\prime} \otimes bb^{\prime}.\]
For \(a\in A\) and \(1\leq r\leq n\), we set
\[a_{r}:=1^{\otimes(r-1)}\otimes a\otimes 1^{\otimes(n-r)}\in A^{\otimes n}. \tag{3.2}\]
We define the superalgebra \(A^{\mathrm{sop}}\) to be equal to \(A\) as an \(\mathcal{R}\)-superspace but with multiplication given by \(a.b=(-1)^{|a||b|}ba\), for all \(a,b\in A\). We define the superalgebra \(A^{\mathrm{op}}\) to be equal to \(A\) as an \(\mathcal{R}\)-superspace but with multiplication given by \(a.b=ba\), for all \(a,b\in A\).
A superalgebra \(A\) is called _supersymmetric_ if we have an \(\mathcal{R}\)-linear symmetrizing form \(\mathrm{tr}:A\to\mathcal{R}\) such that \(\mathrm{tr}(A_{\bar{1}})=\{0\}\). In this case we refer to \(\mathrm{tr}\) as a _supersymmetrizing form_.
Let \(A\) be a superalgebra. We denote by \(A^{\times}\) the set of units in \(A\). If \(A_{\bar{1}}\cap A^{\times}\neq\varnothing\), we call \(A\) a _superalgebra with superunit_, and any \(u\in A_{\bar{1}}\cap A^{\times}\) is called a _superunit_.
**Example 3.3**.: We define the rank \(n\)_Clifford superalgebra_\(\mathcal{C}_{n}\) to be the \(\mathcal{R}\)-superalgebra given by odd generators \(\mathpzc{c}_{1},\dots,\mathpzc{c}_{n}\) subject to the relations \(\mathpzc{c}_{r}^{2}=1\) for \(r=1,\dots,n\) and \(\mathpzc{c}_{r}\mathpzc{c}_{s}=-\mathpzc{c}_{s}\mathpzc{c}_{r}\) for all \(1\leq r\neq s\leq n\). We will use the well-known fact that \(\mathcal{C}_{n}\cong\mathcal{C}_{1}^{\otimes n}\).
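The following well-known identification is illustrative; it requires the additional assumption that \(i:=\sqrt{-1}\in\mathcal{R}\). One checks that
\[\mathpzc{c}_{1}\mapsto\begin{pmatrix}0&1\\ 1&0\end{pmatrix},\qquad\mathpzc{c}_{2}\mapsto\begin{pmatrix}0&-i\\ i&0\end{pmatrix}\]
defines a superalgebra isomorphism \(\mathcal{C}_{2}\stackrel{{\sim}}{{\longrightarrow}}\mathcal{M}_{1|1}(\mathcal{R})\) (see Example 3.4 below for the notation): both matrices are odd, square to the identity and anticommute, and \(1,\mathpzc{c}_{1},\mathpzc{c}_{2},\mathpzc{c}_{1}\mathpzc{c}_{2}\) are sent to an \(\mathcal{R}\)-basis of \(\mathcal{M}_{1|1}(\mathcal{R})\), since \(2\) is invertible in \(\mathcal{R}\).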
**Example 3.4**.: We denote by \(\mathcal{M}_{m\times n}(\mathcal{R})\) the set of \(m\times n\) matrices with entries in \(\mathcal{R}\). Denote by \(\mathcal{M}_{m|n}(\mathcal{R})\) the superalgebra \(\mathcal{M}_{(m+n)\times(m+n)}(\mathcal{R})\) with the usual matrix multiplication and
\[\mathcal{M}_{m|n}(\mathcal{R})_{\bar{0}} =\bigg{\{}\begin{pmatrix}W&0\\ 0&Z\end{pmatrix}\mid W\in\mathcal{M}_{m\times m}(\mathcal{R}),Z\in\mathcal{M}_ {n\times n}(\mathcal{R})\bigg{\}},\] \[\mathcal{M}_{m|n}(\mathcal{R})_{\bar{1}} =\bigg{\{}\begin{pmatrix}0&X\\ Y&0\end{pmatrix}\mid X\in\mathcal{M}_{m\times n}(\mathcal{R}),Y\in\mathcal{M}_ {n\times m}(\mathcal{R})\bigg{\}}.\]
**Example 3.5**.: The superalgebra \(\mathcal{Q}_{n}(\mathcal{R})\) equals \(\mathcal{M}_{n\times n}(\mathcal{R})\oplus\mathcal{M}_{n\times n}(\mathcal{R})\) as an algebra, with \(\sigma_{\mathcal{Q}_{n}(\mathcal{R})}:(x,y)\mapsto(y,x)\), for all \(x,y\in\mathcal{M}_{n\times n}(\mathcal{R})\), cf. (3.1).
**Example 3.6**.: The _twisted group superalgebra_\(\mathcal{T}_{n}\) over \(\mathcal{R}\) of the symmetric group \(\mathsf{S}_{n}\) is the \(\mathcal{R}\)-superalgebra given by odd generators \(\mathpzc{t}_{1},\dots,\mathpzc{t}_{n-1}\) subject to the relations
\[\mathpzc{t}_{r}^{2}=1,\quad\mathpzc{t}_{r}\mathpzc{t}_{s}=-\mathpzc{t}_{s} \mathpzc{t}_{r}\ \text{if}\ |r-s|>1,\quad(\mathpzc{t}_{r}\mathpzc{t}_{r+1})^{3}=1.\]
Choosing, for each \(w\in\mathsf{S}_{n}\), a reduced expression \(w=s_{r_{1}}\cdots s_{r_{k}}\) in terms of simple transpositions, we define \(\mathpzc{t}_{w}:=\mathpzc{t}_{r_{1}}\cdots\mathpzc{t}_{r_{k}}\) (which depends up to a sign on the choice of a reduced expression). It is well-known that \(\{\mathpzc{t}_{w}\mid w\in\mathsf{S}_{n}\}\) is an \(\mathcal{R}\)-basis of \(\mathcal{T}_{n}\) (one way to see that is to use one of the isomorphisms (4.3) below).
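For instance, in \(\mathcal{T}_{4}\) the element \(w=s_{1}s_{3}=s_{3}s_{1}\) gives \(\mathpzc{t}_{1}\mathpzc{t}_{3}=-\mathpzc{t}_{3}\mathpzc{t}_{1}\), so the two reduced expressions produce values of \(\mathpzc{t}_{w}\) that differ by a sign.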
### Supermodules and bisupermodules
Let \(A\) be an \(\mathcal{R}\)-superalgebra. By an \(A\)-_supermodule_\(M\) we mean an \(A\)-module which is also an \(\mathcal{R}\)-superspace such that \(A_{\varepsilon}M_{\delta}\subseteq M_{\varepsilon+\delta}\) for all \(\varepsilon,\delta\in\mathbb{Z}/2\). A _subsupermodule_ of \(M\) is a submodule that is also a subsuperspace. The \(A\)-supermodule \(M\) is called _irreducible_ if it has exactly two subsupermodules.
If \(B\) is another \(\mathcal{R}\)-superalgebra, an \((A,B)\)_-bisupermodule_\(M\) is an \((A,B)\)-bimodule which is also an \(\mathcal{R}\)-superspace such that \(A_{\varepsilon}M_{\delta}\subseteq M_{\varepsilon+\delta}\) and \(M_{\delta}B_{\varepsilon}\subseteq M_{\varepsilon+\delta}\) for all \(\varepsilon,\delta\in\mathbb{Z}/2\). We note that we can also view the \((A,B)\)-bisupermodule \(M\) as an \((A\otimes B^{\mathrm{sop}})\)-supermodule via
\[(a\otimes b).m=(-1)^{|b||m|}amb, \tag{3.7}\]
for all \(a\in A\), \(b\in B\) and \(m\in M\). In this way, we identify the notions of an \((A,B)\)-bisupermodule and an \((A\otimes B^{\mathrm{sop}})\)-supermodule.
If \(M\) is an \(A\)-supermodule, we denote by \(\Pi M\) the module \(M\) with parity swapped, i.e. \((\Pi M)_{\varepsilon}=M_{\varepsilon+\bar{1}}\) for \(\varepsilon\in\mathbb{Z}/2\), and the \(A\)-supermodule structure on \(\Pi M\) is given by \(a.m=am\).
A _homomorphism_\(f:M\to N\) of \(A\)-supermodules is an \(A\)-module homomorphism that is also a homomorphism of \(\mathcal{R}\)-superspaces, i.e. we are working in the (\(\Pi\)-)category \(A\)-\(\underline{\mathrm{smod}}\) of finitely generated \(A\)-supermodules and even \(A\)-supermodule homomorphisms, cf. [**BE**, Definition 1.6]. So an isomorphism of \(A\)-supermodules is also always assumed even, and we use the notation \(M\simeq N\) to signify that the \(A\)-supermodules \(M\) and \(N\) are (evenly) isomorphic. Note, for example, the regular supermodule for a superalgebra \(A\) with superunit \(u\) satisfies \(A\simeq\Pi A\) via the isomorphism \(a\mapsto au\). We write \(\hom_{A}(M,N)\) for the \(\mathcal{R}\)-space of all homomorphisms between \(A\)-supermodules \(M\) and \(N\). We also need the \(\mathcal{R}\)-superspace
\[\Hom_{A}(M,N)=\Hom_{A}(M,N)_{\bar{0}}\oplus\Hom_{A}(M,N)_{\bar{1}},\]
where
\[\Hom_{A}(M,N)_{\bar{0}}:=\hom_{A}(M,N)\quad\text{and}\quad\Hom_{A}(M,N)_{\bar {1}}:=\hom_{A}(M,\Pi N).\]
If \(V\) is an \(\mathcal{R}\)-superspace we denote by \(|V|\) the \(\mathcal{R}\)-space which is \(V\) with the grading forgotten. In particular, if \(A\) is an \(\mathcal{R}\)-superalgebra and \(M\) is an \(A\)-supermodule, we have the \(\mathcal{R}\)-algebra \(|A|\) and an \(|A|\)-module \(|M|\). Similarly, if \(B\) is another superalgebra and \(M\) is an \((A,B)\)-bisupermodule, we have an \((|A|,|B|)\)-bimodule \(|M|\). Note that
\[|\Hom_{A}(M,N)|=\Hom_{|A|}(|M|,|N|)\]
for all \(M,N\in A\)-\(\underline{\mathrm{smod}}\). For the purely even sub(super)algebra \(A_{\bar{0}}\) we do not distinguish between \(|A_{\bar{0}}|\) and \(A_{\bar{0}}\).
The category \(A\)-\(\underline{\mathrm{smod}}\) is isomorphic to the category \(\hat{A}\)-\(\mathrm{mod}\), where
\[\hat{A}:=\langle A,e_{\bar{0}},e_{\bar{1}}\mid e_{\varepsilon}e_{\delta}=\delta_{\varepsilon,\delta}e_{\varepsilon},\ e_{\bar{0}}+e_{\bar{1}}=1,\ ae_{\varepsilon}=e_{\varepsilon+|a|}a,\ \text{ for all }\varepsilon,\delta\in\mathbb{Z}/2,\ a\in A\rangle_{\mathcal{R}}.\]
Here, \(e_{\varepsilon}\) is understood to correspond to the projection \(M\twoheadrightarrow M_{\varepsilon}\), for an \(A\)-supermodule \(M\). In particular, we have the Krull-Schmidt property for \(A\)-\(\underline{\mathrm{smod}}\), see also [**NO**, Corollary 3.7.5].
Using the above identification of \((A,B)\)-bisupermodules with \((A\otimes B^{\mathrm{sop}})\)-supermodules we can also define \((A,B)\)-bisupermodule homomorphisms and isomorphisms. In particular for \((A,B)\)-bisupermodules \(M\) and \(N\) we have the \(\mathcal{R}\)-space \(\hom_{A\otimes B^{\mathrm{sop}}}(M,N)\) and the \(\mathcal{R}\)-superspace \(\Hom_{A\otimes B^{\mathrm{sop}}}(M,N)\). As with ordinary (bi)modules, we again use the notation \(M\mid N\) to signify that \(M\) is (evenly) isomorphic to a direct summand of \(N\), as (bi)supermodules.
An \(A\)-supermodule \(M\) is called _absolutely indecomposable_ if \(|M|\) is indecomposable as an \(|A|\)-module. An \((A,B)\)-bisupermodule \(M\) is called _absolutely indecomposable_ if \(|M|\) is indecomposable as an \((|A|,|B|)\)-bimodule.
For an \(|A|\)-module \(M\), we define \(M^{\sigma}\) to be the \(|A|\)-module equal to \(M\) as an \(\mathcal{R}\)-module but with the action given via \(a.m:=(-1)^{|a|}am\), for all \(a\in A\) and \(m\in M\). We call \(M\)_self-associate_ when \(M\cong M^{\sigma}\). Otherwise, we say \(M\) is _non-self-associate_.
Let \(A,B,C\) and \(D\) be \(\mathcal{R}\)-superalgebras, \(M\) an \((A,C)\)-bisupermodule and \(N\) a \((B,D)\)-bisupermodule. We define the \((A\otimes B,C\otimes D)\)-bisupermodule \(M\boxtimes N\) to be the \(\mathcal{R}\)-superspace \(M\otimes N\) with the action
\[(a\otimes b)(m\otimes n)=(-1)^{|b||m|}(am\otimes bn),\qquad(m\otimes n)(c \otimes d)=(-1)^{|c||n|}(mc\otimes nd),\]
for all \(a\in A\), \(b\in B\), \(c\in C\), \(d\in D\), \(m\in M\) and \(n\in N\). In particular, given an \(A\)-supermodule \(M\) and a \(B\)-supermodule \(N\), we have the \((A\otimes B)\)-supermodule \(M\boxtimes N\) with action \((a\otimes b)(m\otimes n)=(-1)^{|b||m|}(am\otimes bn)\).
The following Remark, taken from [KL, Remark 2.2.10], allows us to apply results concerning tensor products of supermodules to tensor products of bisupermodules.
**Remark 3.8**.: Using the above notation, the \((A\otimes C^{\mathrm{sop}})\otimes(B\otimes D^{\mathrm{sop}})\)-supermodule \(M\boxtimes N\) can be identified with the \((A\otimes B,C\otimes D)\)-bisupermodule \(M\boxtimes N\) using (3.7) and the superalgebra isomorphism
\[(A\otimes C^{\mathrm{sop}})\otimes(B\otimes D^{\mathrm{sop}}) \to(A\otimes B)\otimes(C\otimes D)^{\mathrm{sop}}\] \[(a\otimes c)\otimes(b\otimes d) \mapsto(-1)^{|b||c|}(a\otimes b)\otimes(c\otimes d).\]
**Lemma 3.9**.: _Let \(A_{1}\), \(A_{2}\), \(B_{1}\), \(B_{2}\), \(C_{1}\) and \(C_{2}\) be \(\mathcal{R}\)-superalgebras, \(M_{1}\) an \((A_{1},B_{1})\)-bisupermodule, \(M_{2}\) an \((A_{2},B_{2})\)-bisupermodule, \(N_{1}\) a \((B_{1},C_{1})\)-bisupermodule and \(N_{2}\) a \((B_{2},C_{2})\)-bisupermodule. Then_
\[(M_{1}\boxtimes M_{2})\otimes_{B_{1}\otimes B_{2}}(N_{1}\boxtimes N _{2}) \to(M_{1}\otimes_{B_{1}}N_{1})\boxtimes(M_{2}\otimes_{B_{2}}N_{2})\] \[(m_{1}\otimes m_{2})\otimes(n_{1}\otimes n_{2}) \mapsto(-1)^{|m_{2}||n_{1}|}(m_{1}\otimes n_{1})\otimes(m_{2} \otimes n_{2}),\]
_for all \(m_{1}\in M_{1}\), \(m_{2}\in M_{2}\), \(n_{1}\in N_{1}\) and \(n_{2}\in N_{2}\), is an isomorphism of \((A_{1}\otimes A_{2},C_{1}\otimes C_{2})\)-bisupermodules._
Proof.: This is just a fairly quick checking exercise.
### Morita superequivalences
Let \(A\) and \(B\) be superalgebras. A _Morita superequivalence_ between \(A\) and \(B\) is a Morita equivalence between \(A\) and \(B\) induced by an \((A,B)\)-bisupermodule \(M\) and a \((B,A)\)-bisupermodule \(N\), i.e. \(M\otimes_{B}N\simeq A\) as \((A,A)\)-bisupermodules and \(N\otimes_{A}M\simeq B\) as \((B,B)\)-bisupermodules. We write \(A\sim_{\mathrm{sMor}}B\).
A _stable superequivalence of Morita type_ between \(A\) and \(B\) is a stable equivalence of Morita type between \(A\) and \(B\) induced by an \((A,B)\)-bisupermodule \(M\) and a \((B,A)\)-bisupermodule \(N\). That is, there exist bisupermodule isomorphisms \(M\otimes_{B}N\simeq A\oplus P\) and \(N\otimes_{A}M\simeq B\oplus Q\), where \(P\) (resp. \(Q\)) is an \((A,A)\)-bisupermodule (resp. \((B,B)\)-bisupermodule) such that \(|P|\) (resp. \(|Q|\)) is projective as an \((|A|,|A|)\)-bimodule (resp. \((|B|,|B|)\)-bimodule).
Many of the following results are taken from [**KL**, §§2.2c,d], where it is assumed that the superalgebras are all defined over fields. However, all the proofs run through for \(\mathcal{O}\)-superalgebras in exactly the same way.
**Lemma 3.10**.: _Let \(A\) be a superalgebra and \(e\in A_{\bar{0}}\) an idempotent such that \(AeA=A\). Then \(A\) and \(eAe\) are Morita superequivalent via the \((eAe,A)\)-bisupermodule \(eA\) and the \((A,eAe)\)-bisupermodule \(Ae\)._
Proof.: [**KL**, Lemma 2.2.14].
**Lemma 3.11**.: _Let \(A_{1},A_{2},B_{1},B_{2}\) be superalgebras. If \(A_{i}\) and \(B_{i}\) are Morita superequivalent via the \((A_{i},B_{i})\)-bisupermodule \(M_{i}\) and the \((B_{i},A_{i})\)-bisupermodule \(N_{i}\), for \(i=1,2\), then \(A_{1}\otimes A_{2}\) and \(B_{1}\otimes B_{2}\) are Morita superequivalent via the \((A_{1}\otimes A_{2},B_{1}\otimes B_{2})\)-bisupermodule \(M_{1}\boxtimes M_{2}\) and the \((B_{1}\otimes B_{2},A_{1}\otimes A_{2})\)-bisupermodule \(N_{1}\boxtimes N_{2}\)._
Proof.: [**KL**, Lemma 2.2.17].
**Lemma 3.12**.: _Let \(A\) and \(B\) be superalgebras with superunit. If the \((A,B)\)-bisupermodule \(M\) induces a Morita superequivalence between \(A\) and \(B\), then \(M_{\bar{0}}\) induces a Morita equivalence between \(A_{\bar{0}}\) and \(B_{\bar{0}}\)._
Proof.: [**KL**, Lemma 2.2.19].
Let \(A\) be a superalgebra, \(M\) an \(A_{\bar{0}}\)-module and \(N\) an \(|A|\)-module. We give \(\mathrm{Ind}_{A_{\bar{0}}}^{A}(M)\) the structure of an \(A\)-supermodule via \(|a\otimes m|:=|a|\), for all \(a\in A\) and \(m\in M\). If, in addition, \(A\) has a superunit \(u\), we denote by \({}^{u}M\) the \(A_{\bar{0}}\)-module which is \(M\) as an \(\mathcal{O}\)-module but where the \(A_{\bar{0}}\)-action is given by \(a.m=(u^{-1}au)m\). If \(M\subseteq\mathrm{Res}_{A_{\bar{0}}}^{|A|}N\), we have the \(A_{\bar{0}}\)-submodule \(uM\) of \(\mathrm{Res}_{A_{\bar{0}}}^{|A|}N\), and \(uM\cong{}^{u}M\).
**Lemma 3.13**.: _Let \(A\) be a superalgebra with superunit \(u\)._
* _If_ \(M\) _is an_ \(A\)_-supermodule, then_ \(M\simeq\mathrm{Ind}_{A_{\bar{0}}}^{A}M_{\bar{0}}\)_._
* _If_ \(N\) _is an indecomposable_ \(|A|\)_-module, then_ \(\mathrm{Ind}_{A_{\bar{0}}}^{|A|}\mathrm{Res}_{A_{\bar{0}}}^{|A|}N\cong N \oplus N^{\sigma}\)_. Moreover, either_ \(\mathrm{Res}_{A_{\bar{0}}}^{|A|}N\) _is indecomposable or_ \(\mathrm{Res}_{A_{\bar{0}}}^{|A|}N=M\oplus uM\) _for indecomposable_ \(A_{\bar{0}}\)_-modules_ \(M\not\cong uM\)_._
Proof.: (i) The \(A\)-supermodule homomorphism \(\mathrm{Ind}_{A_{\bar{0}}}^{A}M_{\bar{0}}\to M,\ a\otimes m\mapsto am\) is an isomorphism, as \(M=M_{\bar{0}}\oplus uM_{\bar{0}}\).
(ii) We can give \(N\oplus N^{\sigma}\) the structure of an \(A\)-supermodule via
\[(N\oplus N^{\sigma})_{\bar{0}}:=\{(n,n)\mid n\in N\}\qquad\text{and}\qquad(N \oplus N^{\sigma})_{\bar{1}}:=\{(n,-n)\mid n\in N\}.\]
Since \(2\) is invertible in \(\mathcal{O}\), \(N\oplus N^{\sigma}\) is indeed a direct sum of \((N\oplus N^{\sigma})_{\bar{0}}\) and \((N\oplus N^{\sigma})_{\bar{1}}\). Moreover, \(\mathrm{Res}_{A_{\bar{0}}}^{|A|}N\cong(N\oplus N^{\sigma})_{\bar{0}}\), as \(A_{\bar{0}}\)-modules, and so, by part (i), we have that
\[\mathrm{Ind}_{A_{\bar{0}}}^{A}\mathrm{Res}_{A_{\bar{0}}}^{|A|}N\simeq N\oplus N ^{\sigma} \tag{3.14}\]
as \(A\)-supermodules. The fact that \(\mathrm{Res}_{A_{\bar{0}}}^{|A|}(N)\) is the direct sum of at most two indecomposable \(A_{\bar{0}}\)-modules follows immediately.
Suppose \(\mathrm{Res}_{A_{\bar{0}}}^{|A|}N=M\oplus M^{\prime}\), for indecomposable \(A_{\bar{0}}\)-modules \(M,M^{\prime}\). By (3.14),
\[N\oplus N^{\sigma}\cong\mathrm{Ind}_{A_{\bar{0}}}^{|A|}M\,\oplus\,\mathrm{ Ind}_{A_{\bar{0}}}^{|A|}M^{\prime},\]
as \(|A|\)-modules. Since \(N\) is indecomposable, we may assume that \(N\cong\mathrm{Ind}_{A_{\bar{0}}}^{|A|}M\) and \(N^{\sigma}\cong\mathrm{Ind}_{A_{\bar{0}}}^{|A|}M^{\prime}\). Now,
\[M\oplus M^{\prime}\cong\mathrm{Res}_{A_{\bar{0}}}^{|A|}N\cong\mathrm{Res}_{A_{ \bar{0}}}^{|A|}\mathrm{Ind}_{A_{\bar{0}}}^{|A|}M\cong M\oplus uM\]
and so \(M^{\prime}\cong uM\). In particular,
\[\operatorname{Ind}_{A_{\bar{0}}}^{|A|}M^{\prime}\cong\operatorname{Ind}_{A_{\bar{0}}}^{|A|}uM\cong\operatorname{Ind}_{A_{\bar{0}}}^{|A|}M\cong N,\]
as \(|A|\)-modules, proving all the claims except that \(M\not\cong uM\).
Suppose \(\varphi:M\to uM\) is an \(A_{\bar{0}}\)-module isomorphism. Since, \(N\cong\operatorname{Ind}_{A_{\bar{0}}}^{|A|}M\), we may identify \(N\) with \(M\oplus uM\) and consider the \(|A|\)-module automorphism \(\psi:N\to N,\ m+um^{\prime}\mapsto\varphi(m)+u\varphi(m^{\prime})\), for all \(m,m^{\prime}\in M\).
Certainly \(\psi^{2}\) is an \(|A|\)-module automorphism of \(N\). Since \(N\) is indecomposable, there is some \(c\in\mathcal{O}^{\times}\) such that \(c.\operatorname{Id}_{N}-\psi^{2}\in\operatorname{rad}\left(\operatorname{End} _{|A|}(N)\right)\). Let \(k\in\mathcal{O}^{\times}\) such that \(\bar{k}^{2}=\bar{c}^{-1}\) in \(\mathbb{F}\). Now, the images of \((\operatorname{Id}_{N}+k.\psi)/2\) and \((\operatorname{Id}_{N}-k.\psi)/2\) in \(\operatorname{End}_{|A|}(N)/\operatorname{rad}\left(\operatorname{End}_{|A|} (N)\right)\) are orthogonal idempotents. Once we have shown that they are also non-zero we will have contradicted the indecomposability of \(N\), proving the non-existence of \(\varphi\).
One can quickly check that \(\operatorname{End}_{|A|}(N)\) has an automorphism given by conjugation by the \(\mathcal{O}\)-linear map
\[N\to N,\ m+um^{\prime}\mapsto m-um^{\prime},\]
for all \(m,m^{\prime}\in M\). Note that, under this automorphism, \(\psi\) and \(-\psi\) get swapped and hence \((\operatorname{Id}_{N}+k.\psi)/2\) and \((\operatorname{Id}_{N}-k.\psi)/2\) get swapped. Therefore, \((\operatorname{Id}_{N}+k.\psi)/2\) is zero in \(\operatorname{End}_{|A|}(N)/\operatorname{rad}\left(\operatorname{End}_{|A|} (N)\right)\) if and only if \((\operatorname{Id}_{N}-k.\psi)/2\) is. Since they sum to \(\operatorname{Id}_{N}\), we have shown that they are both non-zero and the proof is complete.
### Projective supermodules
Let \(A\) be a superalgebra with a superunit. In particular, for the left regular supermodule \(A\) we have \(A\simeq\Pi A\) as noted in §3.2. A standard argument then shows that a supermodule \(P\in A\text{-}\underline{\text{smod}}\) is projective if and only if \(P\mid A^{\oplus n}\) for some \(n\).
**Lemma 3.15**.: _Let \(A\) be a superalgebra with a superunit \(u\) and \(P\in A\text{-}\underline{\text{smod}}\). Then the following are equivalent:_
1. \(P\) _is a projective_ \(A\)_-supermodule;_
2. \(|P|\) _is a projective_ \(|A|\)_-module;_
3. \(P_{\bar{0}}\) _is a projective_ \(A_{\bar{0}}\)_-module;_
4. \(P\simeq\operatorname{Ind}_{A_{\bar{0}}}^{A}Q\) _for a projective_ \(A_{\bar{0}}\)_-module_ \(Q\)_._
Proof.: (iv) \(\Longrightarrow\) (i) If \(Q\) is a projective \(A_{\bar{0}}\)-module, then \(Q\mid A_{\bar{0}}^{\oplus n}\), for some \(n\), and so \(\operatorname{Ind}_{A_{\bar{0}}}^{A}Q\mid\operatorname{Ind}_{A_{\bar{0}}}^{A} A_{\bar{0}}^{\oplus n}\simeq A^{\oplus n}\), hence \(\operatorname{Ind}_{A_{\bar{0}}}^{A}Q\) is projective.
(i) \(\Longrightarrow\) (iii),(iv) Let \(P\) be a projective \(A\)-supermodule. Then \(P\simeq\operatorname{Ind}_{A_{\bar{0}}}^{A}P_{\bar{0}}\), by Lemma 3.13(i). Moreover, \(P\mid A^{\oplus n}\), for some \(n\), and so
\[P_{\bar{0}}\mid\operatorname{Res}_{A_{\bar{0}}}^{A}P\mid\operatorname{Res}_{A_{ \bar{0}}}^{A}A^{\oplus n}\cong A_{\bar{0}}^{\oplus n}\oplus A_{\bar{1}}^{\oplus n},\]
as \(A_{\bar{0}}\)-modules. It remains to note that \(A_{\bar{0}}\cong A_{\bar{1}}\), as \(A_{\bar{0}}\)-modules, via \(a\mapsto au\), and so \(P_{\bar{0}}\) is a projective \(A_{\bar{0}}\)-module, as desired.
(iii) \(\Longrightarrow\) (i) By Lemma 3.13(i), we have \(P\simeq\operatorname{Ind}_{A_{\bar{0}}}^{A}P_{\bar{0}}\), so (i) comes from the implication (iv) \(\Longrightarrow\) (i).
(i) \(\Longrightarrow\) (ii) is clear since \(P\mid A^{\oplus n}\) implies \(|P|\ \big{|}\ |A|^{\oplus n}\).
(ii) \(\Longrightarrow\) (i) Let \(P\) be an \(A\)-supermodule with \(|P|\) being projective as an \(|A|\)-module. Then \(P\simeq\operatorname{Ind}_{A_{\bar{0}}}^{A}P_{\bar{0}}\), by Lemma 3.13(i). Moreover, \(|P|\ \big{|}\ |A|^{\oplus n}\), for some \(n\), and so
\[P_{\bar{0}}\ \big{|}\ \operatorname{Res}_{A_{\bar{0}}}^{|A|}|P|\ \big{|}\ \operatorname{Res}_{A_{\bar{0}}}^{|A|}|A|^{\oplus n}\cong A_{\bar{0}}^{\oplus n} \oplus A_{\bar{1}}^{\oplus n},\]
as \(A_{\bar{0}}\)-modules. But the \(A_{\bar{0}}\)-modules \(A_{\bar{0}}\) and \(A_{\bar{1}}\) are isomorphic, as noticed above. So \(P_{\bar{0}}\) is projective, hence \(P\) is projective by the implication (iii) \(\Longrightarrow\) (i).
A _projective cover_ of an \(A\)-supermodule is defined exactly like for modules. Our conditions on the superalgebra \(A\) ensure that a projective cover of a (finitely generated) \(A\)-supermodule \(M\) exists and is unique up to isomorphism. So we can define _Heller translate_\(\Omega_{A}(M)\) of \(M\) just like for modules.
**Lemma 3.16**.: _Let \(A\) be a superalgebra with a superunit and \(M\in A\text{-}\underline{\text{\rm{smod}}}\). If \((P_{M_{\bar{0}}},\varphi_{M_{\bar{0}}})\) is a projective cover of \(M_{\bar{0}}\) as an \(A_{\bar{0}}\)-module, then \((\operatorname{Ind}^{A}_{A_{\bar{0}}}P_{M_{\bar{0}}},\operatorname{Ind}^{A}_ {A_{\bar{0}}}\varphi_{M_{\bar{0}}})\) is a projective cover of \(M\) as an \(A\)-supermodule and of \(|M|\) as an \(|A|\)-module._
Proof.: By Lemma 3.15, \(\operatorname{Ind}^{A}_{A_{\bar{0}}}P_{M_{\bar{0}}}\) is projective as an \(A\)-supermodule and as an \(|A|\)-module. Moreover, no proper summand of \(\operatorname{Ind}^{A}_{A_{\bar{0}}}P_{M_{\bar{0}}}\) will suffice as a projective cover of \(M\) or \(|M|\), since \(\operatorname{Res}^{A}_{A_{\bar{0}}}\operatorname{Ind}^{A}_{A_{\bar{0}}}P_{M_{\bar{0}}}\cong P_{M_{\bar{0}}}\oplus{}^{u}P_{M_{\bar{0}}}\) is a projective cover of \(\operatorname{Res}^{A}_{A_{\bar{0}}}M\cong M_{\bar{0}}\oplus uM_{\bar{0}}\).
**Corollary 3.17**.: _Let \(A\) be a superalgebra with a superunit and \(M\in A\text{-}\underline{\text{\rm{smod}}}\). Then \(\Omega_{A}(M)\simeq\operatorname{Ind}^{A}_{A_{\bar{0}}}\Omega_{A_{\bar{0}}}(M_ {\bar{0}})\) and \(|\Omega_{A}(M)|\cong\Omega_{|A|}(|M|)\)._
Proof.: Follows from Lemma 3.16 and the exactness of the functor \(\operatorname{Ind}^{A}_{A_{\bar{0}}}\).
**Corollary 3.18**.: _Let \(A,B\) be superalgebras with superunits, \(M\) an \(A\)-supermodule and \(N\) a \(B\)-supermodule. If \((P_{M_{\bar{0}}},\varphi_{M_{\bar{0}}})\) is a projective cover of \(M_{\bar{0}}\) as an \(A_{\bar{0}}\)-module and \((P_{N_{\bar{0}}},\varphi_{N_{\bar{0}}})\) is a projective cover of \(N_{\bar{0}}\) as a \(B_{\bar{0}}\)-module then_
\[\Big{(}\operatorname{Ind}^{A\otimes B}_{A_{\bar{0}}\otimes B_{\bar{0}}}(P_{M_{\bar{0}}}\boxtimes P_{N_{\bar{0}}}),\,\operatorname{Ind}^{A\otimes B}_{A_{\bar{0}}\otimes B_{\bar{0}}}(\varphi_{M_{\bar{0}}}\otimes\varphi_{N_{\bar{0}}})\Big{)}\]
_is a projective cover of \(M\boxtimes N\) as an \((A\otimes B)\)-supermodule._
Proof.: This follows similarly to Lemma 3.16 once we have observed that \((P_{M_{\bar{0}}}\boxtimes P_{N_{\bar{0}}},\varphi_{M_{\bar{0}}}\otimes\varphi_ {N_{\bar{0}}})\) is a projective cover of \(M_{\bar{0}}\boxtimes N_{\bar{0}}\) as an \(A_{\bar{0}}\otimes B_{\bar{0}}\)-module.
**Lemma 3.19**.: _Let \(A\) and \(B\) be two superalgebras with superunits, \(M\in A\text{-}\underline{\text{\rm{smod}}}\) and \(N\in B\text{-}\underline{\text{\rm{smod}}}\). Then there exists a canonically defined monomorphism_
\[\Omega_{A}(M)\boxtimes\Omega_{B}(N)\hookrightarrow\Omega_{A\otimes B}(M \boxtimes N)\]
_of \((A\otimes B)\)-supermodules. Furthermore, through this monomorphism,_
\[\Omega_{A\otimes B}(M\boxtimes N)/(\Omega_{A}(M)\boxtimes\Omega_{B}(N))\simeq( \Omega_{A}(M)\boxtimes N)\oplus(M\boxtimes\Omega_{B}(N)),\]
_as \((A\otimes B)\)-supermodules._
Proof.: If \((P_{M},\varphi_{M})\) is a projective cover of \(M\) and \((P_{N},\varphi_{N})\) is a projective cover of \(N\) then, using for example Lemma 3.16 and Corollary 3.18, we can see that \((P_{M}\boxtimes P_{N},\varphi_{M}\otimes\varphi_{N})\) is a projective cover of \(M\boxtimes N\). Now,
\[\Omega_{A\otimes B}(M\boxtimes N)=\ker(\varphi_{M}\otimes\varphi_{N})=(\Omega_ {A}(M)\boxtimes P_{N})+(P_{M}\boxtimes\Omega_{B}(N))\subseteq P_{M}\boxtimes P _{N}\]
and
\[\Big{(}(\Omega_{A}(M)\boxtimes P_{N})+(P_{M}\boxtimes\Omega_{B}(N))\Big{)}\Big{/}\big{(}\Omega_{A}(M)\boxtimes\Omega_{B}(N)\big{)}\simeq(\Omega_{A}(M)\boxtimes N)\oplus(M\boxtimes\Omega_{B}(N)),\]
which implies the result.
### Crossed superproducts
Let \(G\) be a finite group. A \(G\)_-graded crossed superproduct_ will refer to a superalgebra \(A\) with a decomposition \(\bigoplus_{g\in G}A_{g}\) into subsuperspaces such that \(A_{g}A_{h}\subseteq A_{gh}\), for all \(g,h\in G\), and such that, for all \(g\in G\), we have \(A_{g}\cap A^{\times}\neq\varnothing\). If \(A\) is a \(G\)-graded crossed superproduct, then so is \(A^{\mathrm{sop}}\) by defining \((A^{\mathrm{sop}})_{g}=A_{g^{-1}}\), for all \(g\in G\). Note that \(A_{1_{G}}\) is always a subsuperalgebra of \(A\), and \((A_{1_{G}})^{\mathrm{sop}}=(A^{\mathrm{sop}})_{1_{G}}\).
A \(G\)-graded crossed superproduct \(A\) is called _supersymmetric_ if it has a supersymmetrizing form \(\mathrm{tr}:A\to\mathcal{O}\) that turns \(A\) into a supersymmetric algebra and \(\mathrm{tr}(A_{g})=0\), for all \(g\in G\setminus\{1_{G}\}\). In particular, \(\mathrm{tr}\) gives \(A_{1_{G}}\) the structure of a supersymmetric algebra.
If \(A\) and \(B\) are \(G\)-graded crossed superproducts, we define
\[(A,B)_{G}:=\sum_{g\in G}A_{g}\otimes B_{g^{-1}}=\sum_{g\in G}A_{g}\otimes(B^{ \mathrm{sop}})_{g}\subseteq A\otimes B^{\mathrm{sop}}. \tag{3.20}\]
The definition of \(G\)-graded crossed superproduct ensures that
\[A_{1_{G}}\otimes B^{\mathrm{sop}}_{1_{G}}\subseteq(A,B)_{G}\subseteq A\otimes B ^{\mathrm{sop}}\]
are subsuperalgebras. In particular, if \(A\) and \(B\) are both superalgebras with superunits, we consider \(A\) and \(B\) as \(C_{2}\)-graded crossed superproducts in the natural way, and
\[(A,B)_{C_{2}}=(A_{\bar{0}}\otimes B_{\bar{0}})\oplus(A_{\bar{1}}\otimes B_{ \bar{1}})\subseteq A\otimes B^{\mathrm{sop}}.\]
The following Proposition is a super version of [Ma, Theorem 3.4(a)], proved in [KL, Proposition 2.2.22] for superalgebras defined over a field. However, the proof is no more complicated for superalgebras defined over \(\mathcal{O}\). As long as the superalgebras are supersymmetric the proof of [KL, Proposition 2.2.22] runs through unaltered (see [Ma, Remark 3.2(e)]). Recall that one can view an \((A,B)\)-bisupermodule as an \(A\otimes B^{\mathrm{sop}}\)-supermodule via (3.7).
**Proposition 3.21**.: _Let \(G\) be a finite group, and \(A,\,B\) be supersymmetric \(G\)-graded crossed superproducts. Suppose \(M\) is an \((A_{1_{G}}\otimes B^{\mathrm{sop}}_{1_{G}})\)-supermodule inducing a Morita superequivalence between \(A_{1_{G}}\) and \(B_{1_{G}}\). If \(M\) extends to an \((A,B)_{G}\)-supermodule, then \(\mathrm{Ind}_{(A,B)_{G}}^{A\otimes B^{\mathrm{sop}}}(M)\) induces a Morita superequivalence between \(A\) and \(B\)._
### Dual supermodules
Let \(A\) be a superalgebra and \(M\in A\)-\(\underline{\mathrm{smod}}\). We define the _dual_\(M^{*}\) of \(M\) to be \(\mathrm{Hom}_{\mathcal{O}}(M,\mathcal{O})\) that we give the structure of a superspace through
\[\mathrm{Hom}_{\mathcal{O}}(M,\mathcal{O})_{\varepsilon}:=\{f\in\mathrm{Hom}_{ \mathcal{O}}(M,\mathcal{O})\mid f(M_{\varepsilon+\bar{1}})=0\}\qquad( \varepsilon\in\mathbb{Z}/2).\]
We treat \(M^{*}\) as a right \(A\)-supermodule via \((f.a)(m):=f(am)\), for all \(a\in A\), \(f\in M^{*}\) and \(m\in M\), hence also as an \(A^{\mathrm{sop}}\)-module via \((a.f)(m):=(-1)^{|a||f|}f(am)\).
If \(B\) is a another superalgebra and \(M\) an \((A,B)\)-bisupermodule, we can view \(M^{*}\) as a \((B,A)\)-bisupermodule via \((b.f.a)(m):=f(amb)\). Note that this is not always isomorphic to the bimodule obtained by considering \(M\) as an \((A\otimes B^{\mathrm{sop}})\)-supermodule and hence \(M^{*}\) as an \((A\otimes B^{\mathrm{sop}})^{\mathrm{sop}}\)-supermodule (which can also be thought of as a \((B,A)\)-bisupermodule). The reason we have chosen to define \(M^{*}\) using the former definition is that our main reason for introducing \(M^{*}\) is Lemma 3.25, which does not hold using this alternative definition of \(M^{*}\).
**Lemma 3.22**.: _Let \(A\), \(B\) and \(C\) be superalgebras, \(M\) an \((A,B)\)-bisupermodule and \(N\) a \((B,C)\)-bisupermodule. Then there is an isomorphism of \((C,A)\)-bisupermodules_
\[N^{*}\otimes_{B}M^{*}\stackrel{{\sim}}{{\longrightarrow}}(M\otimes_{B}N)^{*},\ g\otimes f\mapsto(m\otimes n\mapsto f(m)g(n)).\]
Proof.: This is a standard check.
**Lemma 3.23**.: _Let \(G,H\) be finite groups, \(A\) a \(G\)-graded crossed superproduct with \(a_{g}\in A_{g}\cap A^{\times}\) for all \(g\in G\), and \(B\) an \(H\)-graded crossed superproduct with \(b_{h}\in B_{h}\cap B^{\times}\) for all \(h\in H\). If \(M\) is an \((A_{1_{G}},B_{1_{H}})\)-bisupermodule then there is an isomorphism of \((B,A)\)-bisupermodules_
\[B\otimes_{B_{1_{H}}}M^{*}\otimes_{A_{1_{G}}}A\stackrel{{ \sim}}{{\longrightarrow}}(A\otimes_{A_{1_{G}}}M\otimes_{B_{1_{H}}}B)^{*},\] \[b_{h}\otimes f\otimes a_{g}\mapsto\bigg{(}a_{g^{\prime}}\otimes m \otimes b_{h^{\prime}}\mapsto\begin{cases}f(a_{g}a_{g^{\prime}}mb_{h^{\prime} }b_{h})&\text{if $g^{\prime}=g^{-1},h^{\prime}=h^{-1}$},\\ 0&\text{otherwise.}\end{cases}\bigg{)}.\]
Proof.: The map is certainly a bijection: as \(f\) runs over \(M^{*}\), the images of the elements \(b_{h}\otimes f\otimes a_{g}\) are precisely those elements of \((A\otimes_{A_{1_{G}}}M\otimes_{B_{1_{H}}}B)^{*}\) that vanish on \(a_{g^{\prime}}\otimes M\otimes b_{h^{\prime}}\), for all \(g^{\prime},h^{\prime}\) with \(g^{\prime}\neq g^{-1}\) or \(h^{\prime}\neq h^{-1}\). One can readily check that this bijection is a homomorphism of \((B,A)\)-bisupermodules.
**Lemma 3.24**.: _Let \(A_{1},A_{2},B_{1},B_{2}\) be superalgebras, \(M\) an \((A_{1},B_{1})\)-bisupermodule and \(N\) an \((A_{2},B_{2})\)-bisupermodule. Then there is an isomorphism of \((B_{1}\otimes B_{2},A_{1}\otimes A_{2})\)-bisupermodules_
\[M^{*}\boxtimes N^{*}\stackrel{{\sim}}{{\longrightarrow}}(M \boxtimes N)^{*},\ f\otimes g\mapsto\big{(}m\otimes n\mapsto(-1)^{|g||m|}f(m )g(n)\big{)}.\]
Proof.: Again, this is a standard check.
**Lemma 3.25**.: _Let \(A\) and \(B\) be superalgebras, with \(A\) being supersymmetric. If \(M\) is an \((A,B)\)-bisupermodule such that \(|M|\otimes_{|B|}?\) induces a Morita equivalence between \(|B|\) and \(|A|\), then \(M\) and \(M^{*}\) induce a Morita superequivalence between \(B\) and \(A\)._
Proof.: By assumption, \(\operatorname{End}_{A}(M)\cong B^{\operatorname{op}}\) as superalgebras. By Morita theory, \(|M|\) and \(\operatorname{Hom}_{|A|}(|M|,|A|)\) induce a Morita equivalence between \(|B|\) and \(|A|\). More precisely,
\[|M\otimes_{B}\operatorname{Hom}_{A}(M,A)|=|M|\otimes_{|B|}\operatorname{Hom}_{ |A|}(|M|,|A|)\to|A|,\ m\otimes\varphi\mapsto\varphi(m)\]
is an isomorphism of \((|A|,|A|)\)-bimodules and
\[|\operatorname{Hom}_{A}(M,A)\otimes_{A}M|=\operatorname{Hom}_{|A|}(|M|,|A|) \otimes_{|A|}|M| \to\operatorname{End}_{|A|}(|M|)\cong|B|,\] \[\varphi\otimes m \mapsto(m^{\prime}\mapsto\varphi(m^{\prime})m)\]
is an isomorphism of \((|B|,|B|)\)-bimodules. Now, the first isomorphism is easily checked to be an isomorphism of \((A,A)\)-bisupermodules \(M\otimes_{B}\operatorname{Hom}_{A}(M,A)\) and \(A\), and the second isomorphism is easily checked to be an isomorphism of \((B,B)\)-bisupermodules \(\operatorname{Hom}_{A}(M,A)\otimes_{A}M\) and \(B\). In other words, \(M\) and \(\operatorname{Hom}_{A}(M,A)\) induce a Morita superequivalence between \(B\) and \(A\).
Let \(\operatorname{tr}:A\to\mathcal{O}\) be a supersymmetrizing form on \(A\). Then, by [\(\mathbf{L}_{5}\), Corollary 2.12.2],
\[|\operatorname{Hom}_{A}(M,A)|=\operatorname{Hom}_{|A|}(|M|,|A|)\to|M^{*}|,\ f \mapsto\operatorname{tr}\circ f\]
is an isomorphism of \((|B|,|A|)\)-bimodules. Once again, this is easily checked to be an isomorphism of \((B,A)\)-bisupermodules \(\operatorname{Hom}_{A}(M,A)\) and \(M^{*}\).
**Lemma 3.26**.: _Let \(A\) be a supersymmetric superalgebra with supersymmetrizing form \(\operatorname{tr}:A\to\mathcal{O}\). If \(e_{1},e_{2}\in A_{\bar{0}}\) are non-zero idempotents then, considering \(e_{1}Ae_{2}\) as an \((e_{1}Ae_{1},e_{2}Ae_{2})\)-bisupermodule, we have the isomorphism of \((e_{2}Ae_{2},e_{1}Ae_{1})\)-bisupermodules_
\[e_{2}Ae_{1}\stackrel{{\sim}}{{\longrightarrow}}(e_{1}Ae_{2})^{*},\ x\mapsto(y\mapsto\operatorname{tr}(xy)).\]
Proof.: One can quickly check this is a homomorphism of \((e_{2}Ae_{2},e_{1}Ae_{1})\)-bisupermodules. That it is an isomorphism in the case \(e_{1}=e_{2}=1_{A}\) follows immediately from the fact that \(\operatorname{tr}\) is non-degenerate. Now, \(x\) lies in the kernel of the induced epimorphism \(A\to(e_{1}Ae_{2})^{*}\) if and only if \(\operatorname{tr}(xe_{1}ae_{2})=\operatorname{tr}(e_{2}xe_{1}a)=0\) for all \(a\in A\), that is, if and only if \(e_{2}xe_{1}=0\). So the kernel is \(A(1-e_{1})+(1-e_{2})A\) and \(A/[A(1-e_{1})+(1-e_{2})A]\simeq e_{2}Ae_{1}\), as desired.
### Wreath superproducts
Throughout this subsection it is assumed that both \(-1\) and \(2\) have square roots in \(\mathcal{O}\) and that \(A\) is a superalgebra.
Let \(V\) be a superspace and \(d\in\mathbb{Z}_{>0}\). The symmetric group \(\mathsf{S}_{d}\) acts on \(V^{\otimes d}\) via
\[{}^{w}(v_{1}\otimes\cdots\otimes v_{d}):=(-1)^{[w;v_{1},\ldots,v_{d}]}v_{w^{-1 }(1)}\otimes\cdots\otimes v_{w^{-1}(d)},\]
where
\[[w;v_{1},\ldots,v_{d}]:=\sum_{1\leq a<c\leq d,\,w(a)>w(c)}|v_{a}||v_{c}|.\]
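For instance, for an adjacent transposition \(s_{r}\) the only inversion pair is \((r,r+1)\), so \([s_{r};v_{1},\ldots,v_{d}]=|v_{r}||v_{r+1}|\) and

\[{}^{s_{r}}(v_{1}\otimes\cdots\otimes v_{d})=(-1)^{|v_{r}||v_{r+1}|}\,v_{1}\otimes\cdots\otimes v_{r+1}\otimes v_{r}\otimes\cdots\otimes v_{d},\]

the familiar sign rule for swapping adjacent homogeneous tensor factors.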
Following [KL, §2.2a] (where the construction was carried out over \(\mathbb{F}\)), given a superalgebra \(A\), we consider the _wreath superproduct_ \(A\wr_{\mathsf{s}}\mathsf{S}_{d}\) to be \(A^{\otimes d}\otimes\mathcal{O}\mathsf{S}_{d}\) as superspaces, with \(\mathcal{O}\mathsf{S}_{d}\) concentrated in parity \(\bar{0}\). To describe the algebra structure we identify \(A^{\otimes d}\) and \(\mathcal{O}\mathsf{S}_{d}\) as subalgebras in the obvious way and define
\[w\left(a_{1}\otimes\cdots\otimes a_{d}\right)={}^{w}(a_{1}\otimes\cdots \otimes a_{d})\,w\qquad(w\in\mathsf{S}_{d},\ a_{1},\ldots,a_{d}\in A).\]
Note that \(A\wr_{\mathsf{s}}\mathsf{S}_{d}\) is an \(\mathsf{S}_{d}\)-graded crossed superproduct via
\[A\wr_{\mathsf{s}}\mathsf{S}_{d}=\bigoplus_{w\in\mathsf{S}_{d}}A^{\otimes d}w. \tag{3.27}\]
Recall from Example 3.6 the twisted group superalgebra \(\mathcal{T}_{d}\) with basis \(\{t_{w}\mid w\in\mathsf{S}_{d}\}\). Following [KL, §5.1a], we define the _twisted wreath superproduct_ \(A\wr_{\mathsf{s}}\mathcal{T}_{d}\) as the free product \(A^{\otimes d}\star\mathcal{T}_{d}\) of superalgebras subject to the relations
\[t_{r}\left(a_{1}\otimes\cdots\otimes a_{d}\right)=(-1)^{\sum_{u\neq r,r+1}|a_ {u}|}\left({}^{s_{r}}(a_{1}\otimes\cdots\otimes a_{d})\,t_{r}\right) \tag{3.28}\]
for all \(a_{1},\ldots,a_{d}\in A\) and \(1\leq r\leq d-1\).
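In particular, if \(a_{u}=1\) for all \(u\neq r,r+1\), then the extra sign in (3.28) vanishes and the relation reads

\[t_{r}\,(1\otimes\cdots\otimes a_{r}\otimes a_{r+1}\otimes\cdots\otimes 1)=(-1)^{|a_{r}||a_{r+1}|}\,(1\otimes\cdots\otimes a_{r+1}\otimes a_{r}\otimes\cdots\otimes 1)\,t_{r},\]

exactly as in \(A\wr_{\mathsf{s}}\mathsf{S}_{d}\); the sign \((-1)^{\sum_{u\neq r,r+1}|a_{u}|}\) only intervenes when odd elements occupy positions away from \(r\) and \(r+1\).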
Recall Example 3.3 and the notation from (3.2).
**Proposition 3.29**.: _We have an isomorphism of superalgebras_
\[(A\otimes\mathcal{C}_{1})\wr_{\mathsf{s}}\mathsf{S}_{d}\stackrel{{ \sim}}{{\longrightarrow}}(A\wr_{\mathsf{s}}\mathcal{T}_{d})\otimes\mathcal{C }_{d}\]
\[(a\otimes x)_{u}\mapsto(-1)^{u|a|}a_{u}\otimes x_{u},\ \ \ s_{r}\mapsto\frac{1}{ \sqrt{-2}}t_{r}\otimes(\epsilon_{r}-\epsilon_{r+1})\]
_for all \(a\in A\), \(x\in\mathcal{C}_{1}\), \(1\leq u\leq d\) and \(1\leq r<d\)._
Proof.: This was proved in [KL, Proposition 5.1.3] when everything is defined over a field. However, the proof over \(\mathcal{O}\) is identical, since \(2\) is still invertible in \(\mathcal{O}\).
Since \((\epsilon_{r}-\epsilon_{r+1})/\sqrt{-2}\) is invertible in \(\mathcal{C}_{d}\), an immediate consequence of Proposition 3.29 is that \(A\wr_{\mathsf{s}}\mathcal{T}_{d}\) is an \(\mathsf{S}_{d}\)-graded crossed superproduct via
\[A\wr_{\mathsf{s}}\mathcal{T}_{d}=\bigoplus_{w\in\mathsf{S}_{d}}A^{\otimes d}t_ {w}. \tag{3.30}\]
Given the \(\mathsf{S}_{d}\)-grading in (3.30), recall the definition of \((A\wr_{\mathsf{s}}\mathcal{T}_{d},A\wr_{\mathsf{s}}\mathcal{T}_{d})_{\mathsf{ S}_{d}}\) from (3.20).
**Lemma 3.31**.: _Let \(M\) be an \((A,A)\)-bisupermodule. We can extend \(M^{\boxtimes d}\) to an \((A\wr_{\mathfrak{s}}\mathcal{T}_{d},A\wr_{\mathfrak{s}}\mathcal{T}_{d})_{ \mathfrak{s}_{d}}\)-supermodule, that we denote \(M^{\boxtimes d}_{\mathfrak{s}_{d}}\), via_
\[(\mathfrak{t}_{r}\otimes\mathfrak{t}_{r}^{-1}).(m_{1}\otimes\cdots\otimes m_{ d}):=(-1)^{|m_{r}|+|m_{r+1}|}\left({}^{s_{r}}\left(m_{1}\otimes\cdots \otimes m_{d}\right)\right), \tag{3.32}\]
_for all \(m_{1},\ldots,m_{d}\in M\) and \(1\leq r\leq d-1\)._
Proof.: This is essentially proved in the proof of [**KL**, Proposition 5.1.5]. In that proof the bisupermodule \(M\) is assumed to induce a Morita superequivalence and our superalgebras are defined over a field. However, neither of these two details alters the proof at all.
For an \((A,A)\)-bisupermodule \(M\), we now define
\[M\wr_{\mathfrak{s}}\mathcal{T}_{d}:=\operatorname{Ind}_{(A\wr_{\mathfrak{s}}\mathcal{T}_{d},A\wr_{\mathfrak{s}}\mathcal{T}_{d})_{\mathsf{S}_{d}}}^{A\wr_{\mathfrak{s}}\mathcal{T}_{d}\otimes(A\wr_{\mathfrak{s}}\mathcal{T}_{d})^{\mathrm{sop}}}(M^{\boxtimes d}_{\mathsf{S}_{d}}). \tag{3.33}\]
Note that
\[A\wr_{\mathfrak{s}}\mathcal{T}_{d}\otimes(A\wr_{\mathfrak{s}}\mathcal{T}_{d })^{\operatorname{\mathrm{sop}}}=\bigoplus_{w\in\mathfrak{S}_{d}}(\mathfrak{t }_{w}\otimes 1)(A\wr_{\mathfrak{s}}\mathcal{T}_{d},A\wr_{\mathfrak{s}} \mathcal{T}_{d})_{\mathfrak{s}_{d}}.\]
Therefore, using the bimodule notation, we can write
\[M\wr_{\mathfrak{s}}\mathcal{T}_{d}=\bigoplus_{w\in\mathfrak{S}_{d}}\mathfrak{ t}_{w}M^{\boxtimes d}. \tag{3.34}\]
In particular,
\[M\wr_{\mathfrak{s}}\mathcal{T}_{d}\simeq(A\wr_{\mathfrak{s}}\mathcal{T}_{d}) \otimes_{A^{\otimes d}}M^{\boxtimes d}, \tag{3.35}\]
as \((A\wr_{\mathfrak{s}}\mathcal{T}_{d},A^{\otimes d})\)-bisupermodules.
**Lemma 3.36**.: _Let \(M\) and \(N\) be \((A,A)\)-bisupermodules. We have:_
1. \(M\wr_{\mathfrak{s}}\mathcal{T}_{d}\mid(M\oplus N)\wr_{\mathfrak{s}}\mathcal{T }_{d}.\)__
2. \((M\wr_{\mathfrak{s}}\mathcal{T}_{d})\otimes_{A\wr_{\mathfrak{s}}\mathcal{T}_{ d}}(N\wr_{\mathfrak{s}}\mathcal{T}_{d})\simeq(M\otimes_{A}N)\wr_{ \mathfrak{s}}\mathcal{T}_{d}.\)__
Proof.: (i) Certainly \((M\oplus N)^{\boxtimes d}=M^{\boxtimes d}\oplus M^{\prime}\), where
\[M^{\prime}:=\bigoplus_{\begin{subarray}{c}M_{i}\in\{M,N\}\\ \text{at least one }M_{i}=N\end{subarray}}M_{1}\boxtimes\cdots\boxtimes M_{d}.\]
Moreover, \(M^{\boxtimes d}\) and \(M^{\prime}\) are both \((A\wr_{\mathfrak{s}}\mathcal{T}_{d},A\wr_{\mathfrak{s}}\mathcal{T}_{d})_{ \mathfrak{s}_{d}}\)-subsupermodules of \((M\oplus N)^{\boxtimes d}\). The claim now follows by inducing up to \(A\wr_{\mathfrak{s}}\mathcal{T}_{d}\otimes(A\wr_{\mathfrak{s}}\mathcal{T}_{d} )^{\operatorname{\mathrm{sop}}}\).
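For example, when \(d=2\) we have

\[(M\oplus N)^{\boxtimes 2}=M^{\boxtimes 2}\oplus\big((M\boxtimes N)\oplus(N\boxtimes M)\big)\oplus N^{\boxtimes 2},\]

and the action (3.32) of \(\mathfrak{t}_{1}\otimes\mathfrak{t}_{1}^{-1}\) preserves \(M^{\boxtimes 2}\) and \(N^{\boxtimes 2}\) while interchanging \(M\boxtimes N\) and \(N\boxtimes M\); the middle summand is exactly \(M^{\prime}\).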
(ii) We first note that
\[(M\wr_{\mathfrak{s}}\mathcal{T}_{d})\otimes_{A\wr_{\mathfrak{s}}\mathcal{T}_{ d}}(N\wr_{\mathfrak{s}}\mathcal{T}_{d})\simeq(M\wr_{\mathfrak{s}}\mathcal{T}_{d} )\otimes_{A^{\otimes d}}N^{\boxtimes d}=\bigoplus_{w\in\mathfrak{S}_{d}} \mathfrak{t}_{w}\big{(}M^{\boxtimes d}\otimes_{A^{\otimes d}}N^{\boxtimes d }\big{)} \tag{3.37}\]
as \((A^{\otimes d},A^{\otimes d})\)-bisupermodules, where the first isomorphism follows from (3.35) and the second from (3.34). In particular, the canonical homomorphism of \((A^{\otimes d},A^{\otimes d})\)-bisupermodules
\[M^{\boxtimes d}\otimes_{A^{\otimes d}}N^{\boxtimes d}\to(M\wr_{\mathfrak{s}} \mathcal{T}_{d})\otimes_{A\wr_{\mathfrak{s}}\mathcal{T}_{d}}(N\wr_{\mathfrak{s }}\mathcal{T}_{d})\]
is injective. Moreover, if we identify \(M^{\boxtimes d}\otimes_{A^{\otimes d}}N^{\boxtimes d}\) with its image under the above map, it also follows from (3.37) that \((M\wr_{\mathfrak{s}}\mathcal{T}_{d})\otimes_{A\wr_{\mathfrak{s}}\mathcal{T}_{ d}}(N\wr_{\mathfrak{s}}\mathcal{T}_{d})\) is \(\mathfrak{S}_{d}\)-graded. Note that
\(M^{\boxtimes d}\otimes_{A^{\otimes d}}N^{\boxtimes d}\) is even an \((A\wr_{\mathfrak{s}}\mathcal{T}_{d},A\wr_{\mathfrak{s}}\mathcal{T}_{d})_{ \mathfrak{s}_{d}}\)-subsupermodule of \((M\wr_{\mathfrak{s}}\mathcal{T}_{d})\otimes_{A\wr_{\mathfrak{s}}\mathcal{T}_{d }}(N\wr_{\mathfrak{s}}\mathcal{T}_{d})\) with
\[\begin{split}(\mathfrak{t}_{r}\otimes\mathfrak{t}_{r}^{-1}).[(m_{ 1}\otimes\cdots\otimes m_{d})\otimes(n_{1}\otimes\cdots\otimes n_{d})]\\ =&[(\mathfrak{t}_{r}\otimes\mathfrak{t}_{r}^{-1}).( m_{1}\otimes\cdots\otimes m_{d})]\otimes[(\mathfrak{t}_{r}\otimes\mathfrak{t}_{r}^{-1}).( n_{1}\otimes\cdots\otimes n_{d})],\end{split} \tag{3.38}\]
for all \(1\leq r\leq d-1\), \(m_{1},\ldots,m_{d}\in M\) and \(n_{1},\ldots,n_{d}\in N\). Therefore, by (3.37),
\[(M\wr_{\mathfrak{s}}\mathcal{T}_{d})\otimes_{A\wr_{\mathfrak{s}}\mathcal{T}_ {d}}(N\wr_{\mathfrak{s}}\mathcal{T}_{d})\simeq\operatorname{Ind}_{(A\wr_{\mathfrak{s}}\mathcal{T}_{d},A\wr_{\mathfrak{s}}\mathcal{T}_{d})_{\mathsf{S}_{d}}}^{A\wr_{\mathfrak{s}}\mathcal{T}_{d}\otimes(A\wr_{\mathfrak{s}}\mathcal{T}_{d})^{\mathrm{sop}}}\big{(}M^{\boxtimes d}\otimes_{A^{\otimes d}}N^{\boxtimes d}\big{)},\]
as \((A\wr_{\mathfrak{s}}\mathcal{T}_{d},A\wr_{\mathfrak{s}}\mathcal{T}_{d})\)-bisupermodules. The proof will be complete once we have shown that
\[\varphi:M^{\boxtimes d}\otimes_{A^{\otimes d}}N^{\boxtimes d} \rightarrow(M\otimes_{A}N)^{\boxtimes d}_{\mathcal{S}_{d}}\] \[(m_{1}\otimes\cdots\otimes m_{d})\otimes(n_{1}\otimes\cdots \otimes n_{d}) \mapsto(-1)^{\sum_{i>j}|m_{i}||n_{j}|}(m_{1}\otimes n_{1})\otimes \cdots\otimes(m_{d}\otimes n_{d})\]
is an isomorphism of \((A\wr_{\mathfrak{s}}\mathcal{T}_{d},A\wr_{\mathfrak{s}}\mathcal{T}_{d})_{ \mathfrak{s}_{d}}\)-supermodules, where \((M\otimes_{A}N)^{\boxtimes d}_{\mathfrak{s}_{d}}\) is defined via Lemma 3.31. That it is an isomorphism of \((A^{\otimes d},A^{\otimes d})\)-bisupermodules follows immediately from Lemma 3.9 and induction on \(d\). Now, for all \(1\leq r\leq d-1\), \(m_{1},\ldots,m_{d}\in M\) and \(n_{1},\ldots,n_{d}\in N\),
\[\begin{split}&(\mathfrak{t}_{r}\otimes\mathfrak{t}_{r}^{-1}).[(m_{1}\otimes\cdots\otimes m_{d})\otimes(n_{1}\otimes\cdots\otimes n_{d})]\\ =&\,[(\mathfrak{t}_{r}\otimes\mathfrak{t}_{r}^{-1}).(m_{1}\otimes\cdots\otimes m_{d})]\otimes[(\mathfrak{t}_{r}\otimes\mathfrak{t}_{r}^{-1}).(n_{1}\otimes\cdots\otimes n_{d})]\\ =&\,(-1)^{|m_{r}|+|m_{r+1}|+|n_{r}|+|n_{r+1}|}\,[{}^{s_{r}}(m_{1}\otimes\cdots\otimes m_{d})]\otimes[{}^{s_{r}}(n_{1}\otimes\cdots\otimes n_{d})]\\ =&\,(-1)^{C_{1}}(m_{s_{r}(1)}\otimes\cdots\otimes m_{s_{r}(d)})\otimes(n_{s_{r}(1)}\otimes\cdots\otimes n_{s_{r}(d)}),\end{split} \tag{3.39}\]
where
\[C_{1}=|m_{r}|+|m_{r+1}|+|n_{r}|+|n_{r+1}|+|m_{r}||m_{r+1}|+|n_{r}||n_{r+1}|,\]
while
\[\begin{split}&(\mathfrak{t}_{r}\otimes\mathfrak{t}_{r}^{-1}).[(m_{1}\otimes n_{1})\otimes\cdots\otimes(m_{d}\otimes n_{d})]\\ =&\,(-1)^{|m_{r}\otimes n_{r}|+|m_{r+1}\otimes n_{r+1}|}\left({}^{s_{r}}[(m_{1}\otimes n_{1})\otimes\cdots\otimes(m_{d}\otimes n_{d})]\right)\\ =&\,(-1)^{|m_{r}|+|n_{r}|+|m_{r+1}|+|n_{r+1}|}\left({}^{s_{r}}[(m_{1}\otimes n_{1})\otimes\cdots\otimes(m_{d}\otimes n_{d})]\right)\\ =&\,(-1)^{C_{2}}(m_{s_{r}(1)}\otimes n_{s_{r}(1)})\otimes\cdots\otimes(m_{s_{r}(d)}\otimes n_{s_{r}(d)}),\end{split} \tag{3.40}\]
where
\[C_{2}=|m_{r}|+|n_{r}|+|m_{r+1}|+|n_{r+1}|+(|m_{r}|+|n_{r}|)(|m_{r+1}|+|n_{r+1}|).\]
Note that
\[C_{2}=C_{1}+|m_{r}||n_{r+1}|+|m_{r+1}||n_{r}|.\]
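Indeed, expanding modulo \(2\),

\[(|m_{r}|+|n_{r}|)(|m_{r+1}|+|n_{r+1}|)=|m_{r}||m_{r+1}|+|n_{r}||n_{r+1}|+|m_{r}||n_{r+1}|+|m_{r+1}||n_{r}|,\]

so the quadratic part of \(C_{2}\) exceeds that of \(C_{1}\) by exactly \(|m_{r}||n_{r+1}|+|m_{r+1}||n_{r}|\).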
Finally, a direct calculation shows that
\[\begin{split}&\varphi\big{(}(m_{s_{r}(1)}\otimes\cdots\otimes m _{s_{r}(d)})\otimes(n_{s_{r}(1)}\otimes\cdots\otimes n_{s_{r}(d)})\big{)}\\ =&(-1)^{C_{3}+\sum_{i>j}|m_{i}||n_{j}|}(m_{s_{r}(1)} \otimes n_{s_{r}(1)})\otimes\cdots\otimes(m_{s_{r}(d)}\otimes n_{s_{r}(d)}), \end{split} \tag{3.41}\]
where \(C_{3}=|m_{r}||n_{r+1}|+|m_{r+1}||n_{r}|\). Putting together (3.39), (3.40) and (3.41), we have now shown that the action of \(\mathfrak{t}_{r}\otimes\mathfrak{t}_{r}^{-1}\) commutes with \(\varphi\). The claim follows.
**Lemma 3.42**.: _Let \(A\) be a superalgebra with superunit and \(M\) an \((A,A)\)-bisupermodule. Then \((M\wr_{\mathfrak{s}}\mathcal{T}_{d})^{*}\simeq M^{*}\wr_{\mathfrak{s}} \mathcal{T}_{d}\) as \((A\wr_{\mathfrak{s}}\mathcal{T}_{d},A\wr_{\mathfrak{s}}\mathcal{T}_{d})\)-bisupermodules._
Proof.: Let \(u\in A_{\bar{1}}\cap A^{\times}\). We consider the wreath product
\[C_{2}\wr\mathsf{S}_{d}=\{g_{1}^{\varepsilon_{1}}\cdots g_{d}^{\varepsilon_{d}}w \mid\varepsilon_{1},\ldots,\varepsilon_{d}\in\{0,1\},\,w\in\mathsf{S}_{d}\}\]
where \(g_{r}\) is the generator of the \(r^{\text{th}}\) \(C_{2}\) factor in the base group \(C_{2}^{\times d}\). With (3.30) in mind we can give \(A\wr_{\mathsf{s}}\mathcal{T}_{d}\) the structure of a \(C_{2}\wr\mathsf{S}_{d}\)-graded crossed superproduct with base component \(A_{\bar{0}}^{\otimes d}\) and graded components
\[(A\wr_{\mathsf{s}}\mathcal{T}_{d})_{g_{1}^{\varepsilon_{1}}\ldots g_{d}^{ \varepsilon_{d}}w}=A_{\bar{0}}^{\otimes d}u_{1}^{\varepsilon_{1}}\ldots u_{d} ^{\varepsilon_{d}}t_{w},\]
for all \(\varepsilon_{1},\ldots,\varepsilon_{d}\in\{0,1\}\) and \(w\in\mathsf{S}_{d}\), where we are utilising the notation from (3.2). We can now define \((A\wr_{\mathsf{s}}\mathcal{T}_{d},A\wr_{\mathsf{s}}\mathcal{T}_{d})_{C_{2}\wr\mathsf{S}_{d}}\) as in (3.20). Note that \((A\wr_{\mathsf{s}}\mathcal{T}_{d},A\wr_{\mathsf{s}}\mathcal{T}_{d})_{C_{2}\wr\mathsf{S}_{d}}\) is generated by \(A_{\bar{0}}^{\otimes d}\otimes(A_{\bar{0}}^{\otimes d})^{\text{op}}\) together with \(u_{i}\otimes u_{i}^{-1}\), for \(1\leq i\leq d\), and \(t_{w}\otimes t_{w}^{-1}\), for all \(w\in\mathsf{S}_{d}\). We now extend the \(A_{\bar{0}}^{\otimes d}\otimes(A_{\bar{0}}^{\otimes d})^{\text{op}}\)-module \(M_{\bar{0}}^{\boxtimes d}\) to an \((A\wr_{\mathsf{s}}\mathcal{T}_{d},A\wr_{\mathsf{s}}\mathcal{T}_{d})_{C_{2}\wr\mathsf{S}_{d}}\)-module, that we denote \((M_{\bar{0}}^{\boxtimes d})_{C_{2}\wr\mathsf{S}_{d}}\), via
\[(u_{i}\otimes u_{i}^{-1})(m_{1}\otimes\cdots\otimes m_{d}) =u_{i}(m_{1}\otimes\cdots\otimes m_{d})u_{i}^{-1} \tag{3.43}\] \[=m_{1}\otimes\cdots\otimes um_{i}u^{-1}\otimes\cdots\otimes m_{d}\] \[(t_{w}\otimes t_{w}^{-1})(m_{1}\otimes\cdots\otimes m_{d}) =t_{w}(m_{1}\otimes\cdots\otimes m_{d})t_{w}^{-1}\] \[=m_{w^{-1}(1)}\otimes\cdots\otimes m_{w^{-1}(d)},\]
for all \(1\leq i\leq d\), \(w\in\mathsf{S}_{d}\) and \(m_{1}\otimes\cdots\otimes m_{d}\in M_{\bar{0}}^{\boxtimes d}\). Note that no relations need to be checked here as this is just \(M_{\bar{0}}^{\boxtimes d}\) viewed as an \((A\wr_{\mathsf{s}}\mathcal{T}_{d},A\wr_{\mathsf{s}}\mathcal{T}_{d})_{C_{2}\wr\mathsf{S}_{d}}\)-submodule of \(M_{\mathsf{S}_{d}}^{\boxtimes d}\), as defined in (3.32). In particular,
\[M_{\mathsf{S}_{d}}^{\boxtimes d}=\bigoplus_{\varepsilon_{1},\ldots,\varepsilon_{d}\in\{0,1\}}u_{1}^{\varepsilon_{1}}\ldots u_{d}^{\varepsilon_{d}}(M_{\bar{0}}^{\boxtimes d})_{C_{2}\wr\mathsf{S}_{d}}\]
and so
\[M_{\mathsf{S}_{d}}^{\boxtimes d}\simeq\operatorname{Ind}_{(A\wr_{\mathsf{s}}\mathcal{T}_{d},A\wr_{\mathsf{s}}\mathcal{T}_{d})_{C_{2}\wr\mathsf{S}_{d}}}^{(A\wr_{\mathsf{s}}\mathcal{T}_{d},A\wr_{\mathsf{s}}\mathcal{T}_{d})_{\mathsf{S}_{d}}}(M_{\bar{0}}^{\boxtimes d})_{C_{2}\wr\mathsf{S}_{d}}.\]
Therefore, by (3.33),
\[M\wr_{\mathsf{s}}\mathcal{T}_{d}\simeq\operatorname{Ind}_{(A\wr_{\mathsf{s}}\mathcal{T}_{d},A\wr_{\mathsf{s}}\mathcal{T}_{d})_{C_{2}\wr\mathsf{S}_{d}}}^{A\wr_{\mathsf{s}}\mathcal{T}_{d}\otimes(A\wr_{\mathsf{s}}\mathcal{T}_{d})^{\mathrm{sop}}}(M_{\bar{0}}^{\boxtimes d})_{C_{2}\wr\mathsf{S}_{d}}.\]
In exactly the same way we can extend \((M_{\bar{0}}^{*})^{\boxtimes d}\) to the \((A\wr_{\mathsf{s}}\mathcal{T}_{d},A\wr_{\mathsf{s}}\mathcal{T}_{d})_{C_{2}\wr\mathsf{S}_{d}}\)-module \(((M_{\bar{0}}^{*})^{\boxtimes d})_{C_{2}\wr\mathsf{S}_{d}}\) that satisfies
\[M^{*}\wr_{\mathsf{s}}\mathcal{T}_{d}\simeq\operatorname{Ind}_{(A\wr_{\mathsf{s}}\mathcal{T}_{d},A\wr_{\mathsf{s}}\mathcal{T}_{d})_{C_{2}\wr\mathsf{S}_{d}}}^{A\wr_{\mathsf{s}}\mathcal{T}_{d}\otimes(A\wr_{\mathsf{s}}\mathcal{T}_{d})^{\mathrm{sop}}}\bigl{(}(M_{\bar{0}}^{*})^{\boxtimes d}\bigr{)}_{C_{2}\wr\mathsf{S}_{d}}. \tag{3.44}\]
To complete the proof we construct an isomorphism
\[\operatorname{Ind}_{(A\wr_{\mathsf{s}}\mathcal{T}_{d},A\wr_{\mathsf{s}}\mathcal{T}_{d})_{C_{2}\wr\mathsf{S}_{d}}}^{A\wr_{\mathsf{s}}\mathcal{T}_{d}\otimes(A\wr_{\mathsf{s}}\mathcal{T}_{d})^{\mathrm{sop}}}\bigl{(}(M_{\bar{0}}^{*})^{\boxtimes d}\bigr{)}_{C_{2}\wr\mathsf{S}_{d}}\longrightarrow\Bigl{(}\operatorname{Ind}_{(A\wr_{\mathsf{s}}\mathcal{T}_{d},A\wr_{\mathsf{s}}\mathcal{T}_{d})_{C_{2}\wr\mathsf{S}_{d}}}^{A\wr_{\mathsf{s}}\mathcal{T}_{d}\otimes(A\wr_{\mathsf{s}}\mathcal{T}_{d})^{\mathrm{sop}}}(M_{\bar{0}}^{\boxtimes d})_{C_{2}\wr\mathsf{S}_{d}}\Bigr{)}^{*}.\]
To do this we first set
\[a_{g}:=u_{1}^{\varepsilon_{1}}\ldots u_{d}^{\varepsilon_{d}}t_{w}\in(A\wr_{ \mathsf{s}}\mathcal{T}_{d})_{g},\]
for each \(g=g_{1}^{\varepsilon_{1}}\ldots g_{d}^{\varepsilon_{d}}w\in C_{2}\wr\mathsf{S}_{d}\). Next we identify \((M_{\bar{0}}^{*})^{\boxtimes d}\) and \((M_{\bar{0}}^{\boxtimes d})^{*}\) via
\[(f_{1}\otimes\cdots\otimes f_{d})(m_{1}\otimes\cdots\otimes m_{d}):=f_{1}(m_{1}) \cdot\ldots\cdot f_{d}(m_{d}),\]
for all \(f_{1}\otimes\cdots\otimes f_{d}\in(M_{\bar{0}}^{*})^{\boxtimes d}\) and \(m_{1}\otimes\cdots\otimes m_{d}\in M_{\bar{0}}^{\boxtimes d}\). It is trivial to check that this is an isomorphism of \((A\wr_{\mathsf{s}}\mathcal{T}_{d},A\wr_{\mathsf{s}}\mathcal{T}_{d})_{C_{2}\wr\mathsf{S}_{d}}\)-supermodules, as there are no signs to check in (3.43).
We now construct this isomorphism via
\[f\otimes a_{g}\mapsto\bigg{(}a_{h}\otimes m\mapsto\begin{cases}f(a_{g}a_{h}m)& \text{if }h=g^{-1}\\ 0&\text{otherwise.}\end{cases}\bigg{)},\]
for all \(g,h\in C_{2}\wr\mathsf{S}_{d}\), \(f\in(M_{\bar{0}}^{*})^{\boxtimes d}\) and \(m\in M_{\bar{0}}^{\boxtimes d}\). Much like the proof of Lemma 3.23, one can now readily check that this does indeed define an isomorphism.
### Split semisimple \(\mathbb{K}\)-superalgebras
For much of this subsection we mirror the results of [K, §12.2]. There it is assumed that \(\mathbb{K}\) is algebraically closed. We do not make that assumption here, meaning we cannot directly refer to the results from [K].
Let \(A\) be a \(\mathbb{K}\)-superalgebra. An irreducible \(A\)-supermodule \(M\) is of type \(\mathtt{M}\) if \(|M|\) is an irreducible \(|A|\)-module, and type \(\mathtt{Q}\) otherwise. \(A\) is called _split_ if \(\dim_{\mathbb{K}}\operatorname{End}_{|A|}(|M|)=1\) for all irreducible \(M\) of type \(\mathtt{M}\) and \(\dim_{\mathbb{K}}\operatorname{End}_{|A|}(|Q|)=2\) for all irreducible \(Q\) of type \(\mathtt{Q}\). An \(A\)-supermodule \(M\) is called _semisimple_ if it is isomorphic to a direct sum of irreducible \(A\)-supermodules. The superalgebra \(A\) is called _semisimple_ if every \(A\)-supermodule is semisimple.
Recall the algebra \(\hat{A}\) from §3.2. Using the isomorphism of the categories \(A\)-\(\underline{\text{smod}}\) and \(\hat{A}\)-mod, \(A\) being semisimple is equivalent to \(\hat{A}\) being semisimple as an algebra. This, in turn, is equivalent to \(\hat{A}=\hat{A}e_{\bar{0}}\oplus\hat{A}e_{\bar{1}}\) being semisimple as an \(\hat{A}\)-module. Now, \(\hat{A}e_{\bar{0}}\) and \(\hat{A}e_{\bar{1}}\) correspond to the \(A\)-supermodules \(A\) and \(\Pi A\) respectively. Therefore, \(\hat{A}\) being semisimple corresponds to \(A\) and \(\Pi A\) being semisimple \(A\)-supermodules. Certainly \(\Pi A\) is semisimple if and only if \(A\) is. We have, therefore, shown that \(A\) is semisimple if and only if \(A\) is semisimple as an \(A\)-supermodule.
For the next lemma we introduce the \(\mathbb{K}\)-linear map \(\sigma_{M}:M\to M\), \(m\mapsto(-1)^{|m|}m\), for an \(A\)-supermodule \(M\). Suppose that \(N\) is an \(|A|\)-submodule of \(|M|\). Then \(\sigma_{M}(N)\) is a submodule of \(|M|\) isomorphic to \(N^{\sigma}\). Moreover, \(N\) is a subsupermodule of \(M\) if and only if \(\sigma_{M}(N)=N\).
**Lemma 3.45**.: _Let \(A\) be a split \(\mathbb{K}\)-superalgebra._
1. _Let \(M\) be an irreducible \(A\)-supermodule of type \(\mathtt{M}\). Then: (a) \(|M|\) is an irreducible \(|A|\)-module; (b) \(\operatorname{End}_{A}(M)\cong\mathbb{K}\) as superalgebras; (c) \(|M|\cong|\Pi M|\) but \(M\not\simeq\Pi M\)._
2. _Let \(Q\) be an irreducible \(A\)-supermodule of type \(\mathtt{Q}\). Then: (a) \(|Q|\cong N\oplus N^{\sigma}\), for some irreducible \(|A|\)-module \(N\), with \(N\ncong N^{\sigma}\); (b) \(\operatorname{End}_{A}(Q)\cong\mathcal{Q}_{1}(\mathbb{K})\) as superalgebras; (c) \(Q\simeq\Pi Q\)._
3. _If \(M\) and \(N\) are irreducible \(A\)-supermodules, then precisely one of the following occurs: (a) \(M\simeq N\); (b) \(M\) and \(N\) are both of type \(\mathtt{M}\) and \(M\simeq\Pi N\); (c) \(\operatorname{Hom}_{|A|}(|M|,|N|)=\{0\}\)._
Proof.: (i) Since \(M\) is of type \(\mathtt{M}\), parts (a) and (b) are clear by definitions. Part (c) just follows by the definition of \(\Pi M\) and part (b).
(ii) We follow [**K**, Lemma 12.2.1]. Let \(N\subseteq|Q|\) be an irreducible \(|A|\)-submodule. Since \(|Q|\) is not irreducible, \(N\) cannot be an \(A\)-subsupermodule of \(Q\) and \(\sigma_{Q}(N)\neq N\). However, \(N+\sigma_{Q}(N)=N\oplus\sigma_{Q}(N)\) is \(\sigma_{Q}\)-stable, hence an \(A\)-subsupermodule of \(Q\), and so equals \(|Q|\), as \(Q\) is an irreducible supermodule. Since \(\dim_{\mathbb{K}}\operatorname{End}_{|A|}(|Q|)=2\), \(N\) and \(\sigma_{Q}(N)\cong N^{\sigma}\) must be non-isomorphic, proving part (a).
We now define \(J\in\operatorname{End}_{A}(Q)\) by \((-\operatorname{Id}_{N})\oplus\operatorname{Id}_{\sigma_{Q}(N)}\). Certainly \(J\) is an \(A\)-module isomorphism. Moreover, \(J\) anti-commutes with \(\sigma_{Q}\) and so \(J\) is odd. In particular, \(Q\simeq\Pi Q\), proving part (c). Since \(\dim_{\mathbb{K}}\operatorname{End}_{A}(Q)=\dim_{\mathbb{K}}\operatorname{End }_{|A|}(|Q|)=2\), we can now construct the isomorphism \(\operatorname{End}_{A}(Q)\cong\mathcal{Q}_{1}(\mathbb{K})\), \(J\mapsto(1,-1)\).
(iii) Suppose both (a) and (b) hold. Then \(M\simeq\Pi M\), contradicting part (i)(c). Certainly (c) cannot hold together with either (a) or (b). It remains to show that at least one of the three statements is true.
Suppose (a) and (c) fail to hold. So, \(M\not\simeq N\) and \(|\operatorname{Hom}_{A}(M,N)|=\operatorname{Hom}_{|A|}(|M|,|N|)\neq\{0\}\). Now, \(\operatorname{Hom}_{A}(M,N)_{\bar{0}}=\{0\}\), since \(M\) and \(N\) are non-isomorphic irreducible \(A\)-supermodules. Therefore, \(\operatorname{Hom}_{A}(M,N)_{\bar{1}}\neq\{0\}\) or equivalently \(\operatorname{Hom}_{A}(M,\Pi N)_{\bar{0}}\neq\{0\}\). Since \(M\) and \(\Pi N\) are irreducible \(A\)-supermodules, we must have \(M\simeq\Pi N\). Finally, if \(N\) has type \(\mathtt{Q}\), then \(M\simeq\Pi N\simeq N\), a contradiction. Similarly, \(M\) has type \(\mathtt{M}\) and (b) holds.
We now take a brief moment to describe the irreducible supermodules for \(\mathcal{M}_{m|n}(\mathbb{K})\) and \(\mathcal{Q}_{t}(\mathbb{K})\). Let \(U_{m,n}\) be the standard column vector supermodule for \(\mathcal{M}_{m|n}(\mathbb{K})\), where the first \(m\) entries are considered even and the last \(n\) odd. Clearly \(U_{m,n}\) is an irreducible \(\mathcal{M}_{m|n}(\mathbb{K})\)-supermodule of type \(\mathtt{M}\).
Let \(V_{t}:=V_{1}\oplus V_{2}\), where \(V_{1}\) and \(V_{2}\) are the standard column vector spaces for the two matrix factors in \(\mathcal{Q}_{t}(\mathbb{K})\). We identify \(V_{1}\) and \(V_{2}\) through \(\sigma_{\mathcal{Q}_{t}(\mathbb{K})}\) and give \(V_{t}\) the structure of a \(\mathcal{Q}_{t}(\mathbb{K})\)-supermodule by setting
\[(V_{t})_{\bar{0}}:=\{(v,v)\in V_{t}\},\qquad(V_{t})_{\bar{1}}:=\{(v,-v)\in V_{ t}\}.\]
Since \(V_{1}\) and \(V_{2}\) are non-isomorphic, irreducible \(|\mathcal{Q}_{t}(\mathbb{K})|\)-modules and \(\sigma_{V_{t}}(V_{1})=V_{2}\), \(V_{t}\) is an irreducible \(\mathcal{Q}_{t}(\mathbb{K})\)-supermodule of type \(\mathtt{Q}\).
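To see a concrete instance of this (a small worked case, using the two-matrix-factor description of \(\mathcal{Q}_{t}(\mathbb{K})\) above): for \(t=1\), \(|\mathcal{Q}_{1}(\mathbb{K})|=\mathbb{K}\oplus\mathbb{K}\) with the involution swapping the two coordinates, so \(\mathcal{Q}_{1}(\mathbb{K})_{\bar{0}}=\{(x,x)\}\) and \(\mathcal{Q}_{1}(\mathbb{K})_{\bar{1}}=\{(x,-x)\}\), and \(V_{t}=\mathbb{K}\oplus\mathbb{K}\) with \((x,y).(v,w)=(xv,yw)\). Then

\[(V_{t})_{\bar{0}}=\mathbb{K}(1,1),\qquad(V_{t})_{\bar{1}}=\mathbb{K}(1,-1),\]

and an odd element \((x,-x)\) sends \((v,v)\mapsto(xv,-xv)\), interchanging the two parity components as it must.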
We claim that, up to isomorphism, \(U_{m,n}\) and \(\Pi U_{m,n}\) are the only irreducible \(\mathcal{M}_{m|n}(\mathbb{K})\)-supermodules and \(V_{t}\) is the only irreducible \(\mathcal{Q}_{t}(\mathbb{K})\)-supermodule. By construction, in either case, any irreducible supermodule must admit a non-zero module homomorphism from one of the irreducible supermodules constructed above. The claim now follows from Lemma 3.45(iii).
The following lemma is well known. For example, it can be deduced from results in [**J**] and [**NO**]. However, since most texts do not deal in the generality we do here, we include a proof for the convenience of the reader.
**Lemma 3.46**.: _Let \(A\) be a \(\mathbb{K}\)-superalgebra. The following are equivalent:_
* \(A\) _is split semisimple._
* \(A\) _is a direct sum of_ \(\mathbb{K}\)_-superalgebras each of the form_ \(\mathcal{M}_{m|n}(\mathbb{K})\) _or_ \(\mathcal{Q}_{t}(\mathbb{K})\)_, for_ \(m,n,t\in\mathbb{N}\)_, with_ \(m+n,t>0\)_._
_Moreover, every irreducible \(|A|\)-module is isomorphic to a direct summand of an irreducible \(A\)-supermodule considered as an \(|A|\)-module._
Proof.: (i) \(\Longrightarrow\) (ii) Suppose \(A\) is split semisimple. In particular, the left regular supermodule \(A\) decomposes into a direct sum of irreducible \(A\)-supermodules. By Lemma 3.45,
we can write
\[A\simeq\bigoplus_{i=1}^{a}\left(M_{i}^{\oplus m_{i}}\oplus(\Pi M_{i})^{\oplus m^{ \prime}_{i}}\right)\oplus\bigoplus_{i=1}^{b}Q_{i}^{\oplus q_{i}},\]
for some \(a,b,m_{i},m^{\prime}_{i},q_{i}\in\mathbb{N}\), where the \(M_{i}\)'s and \(\Pi M_{i}\)'s form a complete list of representatives for the isomorphism classes of irreducible \(A\)-supermodules of type M and the \(Q_{i}\)'s form a complete list of representatives for the isomorphism classes of irreducible \(A\)-supermodules of type Q. In particular, by Lemma 3.45, we have
\[\operatorname{End}_{A}(A) \cong\bigoplus_{i=1}^{a}\operatorname{End}_{A}\left(M_{i}^{\oplus m_{i}}\oplus(\Pi M_{i})^{\oplus m^{\prime}_{i}}\right)\oplus\bigoplus_{i=1}^{b}\operatorname{End}_{A}(Q_{i}^{\oplus q_{i}})\] \[\cong\bigoplus_{i=1}^{a}\mathcal{M}_{m_{i}|m^{\prime}_{i}}(\mathbb{K})\oplus\bigoplus_{i=1}^{b}\mathcal{Q}_{q_{i}}(\mathbb{K}).\]
Now, \(\operatorname{End}_{A}(A)\cong A^{\operatorname{op}}\) as superalgebras, where the isomorphism is given by right multiplication. The claim now follows by the isomorphisms of superalgebras:
\[\mathcal{M}_{m_{i}|m^{\prime}_{i}}(\mathbb{K})^{\operatorname{op}} \cong\mathcal{M}_{m_{i}|m^{\prime}_{i}}(\mathbb{K}) \mathcal{Q}_{q_{i}}(\mathbb{K})^{\operatorname{op}} \cong\mathcal{Q}_{q_{i}}(\mathbb{K})\] \[x \mapsto x^{T} (x,y) \mapsto(x^{T},y^{T}).\]
(ii) \(\Longrightarrow\) (i) It is enough to prove the statement for \(A=\mathcal{M}_{m|n}(\mathbb{K})\) and \(\mathcal{Q}_{t}(\mathbb{K})\), for \(m,n,t\in\mathbb{N}\), with \(m+n,t>0\).
By the comments at the beginning of this subsection, to show \(A\) is semisimple, we need only decompose \(A\) into a direct sum of irreducible supermodules. Recall the irreducible \(\mathcal{M}_{m|n}(\mathbb{K})\)-supermodule \(U_{m,n}\) and the irreducible \(\mathcal{Q}_{t}(\mathbb{K})\)-supermodule \(V_{t}\) from the comments preceding the lemma. Now, \(\mathcal{M}_{m|n}(\mathbb{K})\simeq(U_{m,n})^{\oplus m}\oplus(\Pi U_{m,n})^{\oplus n}\), as \(\mathcal{M}_{m|n}(\mathbb{K})\)-supermodules, and \(\mathcal{Q}_{t}(\mathbb{K})\simeq V_{t}^{\oplus t}\), as \(\mathcal{Q}_{t}(\mathbb{K})\)-supermodules, proving semisimplicity. Finally,
\[\dim_{\mathbb{K}}\operatorname{End}_{|\mathcal{M}_{m|n}(\mathbb{K})|}(|U_{m,n }|)=\dim_{\mathbb{K}}\operatorname{End}_{|\mathcal{M}_{m|n}(\mathbb{K})|}(|\Pi U _{m,n}|)=1\]
and \(\dim_{\mathbb{K}}\operatorname{End}_{|\mathcal{Q}_{t}(\mathbb{K})|}(V_{t})=2\), so all our irreducible supermodules are split, completing the claim.
We now prove the final part of the lemma. Since we have decomposed \(A\) into a direct sum of irreducible \(A\)-supermodules, by Lemma 3.45(i)(a),(ii)(a), we can also decompose \(|A|\) into a direct sum of irreducible \(|A|\)-modules. Since every irreducible \(|A|\)-module is a homomorphic image of \(|A|\), the claim follows.
### (Super)characters of split semisimple superalgebras
Throughout the subsection, let \(A\) be a split semisimple \(\mathbb{K}\)-superalgebra. In particular, \(|A|\) is a split semisimple \(\mathbb{K}\)-algebra, and we have the terminology of irreducible characters of \(|A|\) from §2.2. By definition, these are elements of the Grothendieck group \(\mathcal{G}_{0}(|A|)\).
For simplicity we will write \(\mathcal{G}_{0}(A)\) instead of \(\mathcal{G}_{0}(|A|)\).
Recalling the operation \(M\mapsto M^{\sigma}\) introduced in §3.2, we denote by
\[-^{\sigma}:\mathcal{G}_{0}(A)\to\mathcal{G}_{0}(A)\]
the corresponding automorphism of the Grothendieck group.
The set of irreducible \(A\)-supermodules (up to isomorphism \(\simeq\)) can be written as
\[\{X_{\lambda},\Pi X_{\lambda}\mid\lambda\in\Lambda_{0}^{A}\}\sqcup\{X_{\lambda }\mid\lambda\in\Lambda_{1}^{A}\},\]
where the first set contains type M irreducible \(A\)-supermodules, the second set contains type Q irreducible \(A\)-supermodules, and \(\Lambda^{A}_{\bar{0}},\Lambda^{A}_{\bar{1}}\) are just labeling sets. Later on, particularly when dealing with characters of double covers of the symmetric groups, it will be natural to call elements of \(\Lambda^{A}_{\bar{0}}\)_even_ and elements of \(\Lambda^{A}_{\bar{1}}\)_odd_. We put \(\Lambda^{A}:=\Lambda^{A}_{\bar{0}}\sqcup\Lambda^{A}_{\bar{1}}\). For the corresponding classes in the Grothendieck group \(\mathcal{G}_{0}(A)\), we use the following notation:
\[\xi_{\lambda} :=[|X_{\lambda}|]=[|\Pi X_{\lambda}|]\in\mathcal{G}_{0}(A) (\lambda\in\Lambda^{A}_{\bar{0}}),\] \[\xi_{\lambda} :=[|X_{\lambda}|]\in\mathcal{G}_{0}(A) (\lambda\in\Lambda^{A}_{\bar{1}}).\]
We call the classes \(\xi_{\lambda}\) the _irreducible supercharacters_ of \(A\). We denote the set of the irreducible supercharacters of \(A\) by \(\operatorname{Irr}_{\operatorname{super}}(A)\). Thus
\[\operatorname{Irr}_{\operatorname{super}}(A)=\{\xi_{\lambda}\mid\lambda\in \Lambda^{A}\}.\]
These need to be distinguished from the irreducible characters of \(|A|\), which we denote simply by \(\operatorname{Irr}(A)\), so that \(\mathcal{G}_{0}(A)=\mathbb{Z}\operatorname{Irr}(A)\).
Let \(\lambda\in\Lambda^{A}_{\bar{1}}\). By Lemma 3.45(ii)(a), \(\xi_{\lambda}=\xi^{+}_{\lambda}+\xi^{-}_{\lambda}\), for some irreducible characters \(\xi^{+}_{\lambda},\xi^{-}_{\lambda}\in\operatorname{Irr}(A)\), with \(\xi^{+}_{\lambda}\neq\xi^{-}_{\lambda}\) and \((\xi^{\pm}_{\lambda})^{\sigma}=\xi^{\mp}_{\lambda}\). Moreover, by Lemma 3.45 and the final part of Lemma 3.46,
\[\operatorname{Irr}(A)=\{\xi_{\lambda}\mid\lambda\in\Lambda^{A}_{\bar{0}}\} \sqcup\{\xi^{+}_{\lambda},\xi^{-}_{\lambda}\mid\lambda\in\Lambda^{A}_{\bar{1}}\} \tag{3.47}\]
is a complete irredundant list of irreducible characters of \(|A|\). Adopting the language of §3.2, if \(\lambda\in\Lambda^{A}_{\bar{0}}\), we say \(\xi_{\lambda}\) is _self-associate_ and if \(\lambda\in\Lambda^{A}_{\bar{1}}\), we say \(\xi^{+}_{\lambda},\xi^{-}_{\lambda}\) are _non-self-associate_ and call them an _associate pair_. In this latter case it is usually not going to be important to specify which character is \(\xi^{+}_{\lambda}\) and which is \(\xi^{-}_{\lambda}\). However, when it is important, we will make this choice clear.
When \(A\) is a \(\mathbb{K}\)-split semisimple (\(\mathcal{O}\)-)superalgebra, we denote
\[\operatorname{Irr}_{\operatorname{super}}(A):=\operatorname{Irr}_{ \operatorname{super}}(\mathbb{K}A)\quad\text{and}\quad\operatorname{Irr}(A):= \operatorname{Irr}(\mathbb{K}A)=\operatorname{Irr}(|\mathbb{K}A|). \tag{3.48}\]
In particular, in this case we have \(\mathcal{G}_{0}(\mathbb{K}A)=\mathbb{Z}\operatorname{Irr}(A)\).
Let \(B\) and \(C\) be \(\mathbb{K}\)-split semisimple superalgebras and \(N\) a \((B,C)\)-bisupermodule. As in SS2.2, \(|\mathbb{K}N|\otimes_{|\mathbb{K}C|}?\) induces a (\(\mathbb{Z}\)-)linear function \(\mathbb{Z}\operatorname{Irr}(C)\to\mathbb{Z}\operatorname{Irr}(B)\) that, by an abuse of notation, we denote \(N\otimes_{C}?\). If \(C\) is a subsuperalgebra of \(B\), then we denote by
\[\downarrow^{B}_{C}:\mathbb{Z}\operatorname{Irr}(B)\to\mathbb{Z} \operatorname{Irr}(C)\qquad\text{and}\qquad\uparrow^{B}_{C}:\mathbb{Z} \operatorname{Irr}(C)\to\mathbb{Z}\operatorname{Irr}(B)\]
the linear functions induced by the functors \(\operatorname{Res}^{|\mathbb{K}B|}_{|\mathbb{K}C|}\) and \(\operatorname{Ind}^{|\mathbb{K}B|}_{|\mathbb{K}C|}\) respectively. Similar notation applies for superalgebras over \(\mathbb{K}\)--for example if \(B\) and \(C\) are split semisimple \(\mathbb{K}\)-superalgebras and \(N\) a \((B,C)\)-bisupermodule, then \(|N|\otimes_{|C|}?\) induces a (\(\mathbb{Z}\)-)linear function \(\mathbb{Z}\operatorname{Irr}(C)\to\mathbb{Z}\operatorname{Irr}(B)\) that, by an abuse of notation, we denote \(N\otimes_{C}?\).
**Lemma 3.49**.: _Let \(A\) be a split semisimple \(\mathbb{K}\)-superalgebra with superunit \(u\). Then \(A_{\bar{0}}\) is a split semisimple \(\mathbb{K}\)-algebra. Moreover, if \(\lambda\in\Lambda^{A}_{\bar{0}}\), then \(\tilde{\xi}_{\lambda}:=\xi_{\lambda}\downarrow^{A}_{A_{\bar{0}}}=\tilde{\xi}^{ +}_{\lambda}+\tilde{\xi}^{-}_{\lambda}\), for distinct \(\tilde{\xi}^{+}_{\lambda},\tilde{\xi}^{-}_{\lambda}\in\operatorname{Irr}(A_{ \bar{0}})\), with \({}^{u}\tilde{\xi}^{+}_{\lambda}=\tilde{\xi}^{-}_{\lambda}\). If \(\lambda\in\Lambda^{A}_{\bar{1}}\), then \(\xi^{+}_{\lambda}\downarrow^{A}_{A_{\bar{0}}}=\xi^{-}_{\lambda}\downarrow^{A}_{A_ {\bar{0}}}=\tilde{\xi}_{\lambda}\), for some \(\tilde{\xi}_{\lambda}\in\operatorname{Irr}(A_{\bar{0}})\). Furthermore,_
\[\operatorname{Irr}(A_{\bar{0}})=\{\tilde{\xi}^{+}_{\lambda},\tilde{\xi}^{-}_{ \lambda}\mid\lambda\in\Lambda^{A}_{\bar{0}}\}\sqcup\{\tilde{\xi}_{\lambda}\mid \lambda\in\Lambda^{A}_{\bar{1}}\}\]
_is a complete, irredundant list of irreducible characters of \(A_{\bar{0}}\)._
Proof.: By Lemma 3.46, it is enough to prove the result for \(A=\mathcal{M}_{m|n}(\mathbb{K})\) and \(\mathcal{Q}_{t}(\mathbb{K})\), for \(m,n,t\in\mathbb{N}\), with \(m+n,t>0\). Recall the irreducible \(\mathcal{M}_{m|n}(\mathbb{K})\)-supermodule \(U_{m,n}\) and the irreducible \(\mathcal{Q}_{t}(\mathbb{K})\)-supermodule \(V_{t}=V_{1}\oplus V_{2}\) from the comments preceding Lemma 3.46.
We first deal with \(A\cong\mathcal{M}_{m|n}(\mathbb{K})\). Note that \(U_{m,n}\) is of type \(\mathtt{M}\), meaning we are in the \(\Lambda^{A}_{\bar{0}}\) case. If \(m\neq n\), then every element of \(\mathcal{M}_{m|n}(\mathbb{K})_{\bar{1}}\) has determinant zero. This contradicts \(A\) having a superunit and so we must have \(m=n\). Therefore, \(A_{\bar{0}}\cong\mathcal{M}_{m\times m}(\mathbb{K})\oplus\mathcal{M}_{m\times m}(\mathbb{K})\). Moreover, the two matrix factors get swapped by conjugation by
\[u:=\begin{pmatrix}0&I_{m}\\ I_{m}&0\end{pmatrix}\in\mathcal{M}_{m|m}(\mathbb{K})^{\times}\cap\mathcal{M}_{m|m}(\mathbb{K})_{\bar{1}}.\]
Certainly \(\operatorname{Res}^{|A|}_{A_{\bar{0}}}|U_{m,m}|\) decomposes as a direct sum of two irreducible \(A_{\bar{0}}\)-modules, one for each isomorphism class, proving all the claims.
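Explicitly, as \(A_{\bar{0}}\)-modules,

\[\operatorname{Res}^{|A|}_{A_{\bar{0}}}|U_{m,m}|=(U_{m,m})_{\bar{0}}\oplus(U_{m,m})_{\bar{1}},\]

where the first matrix factor of \(A_{\bar{0}}\) acts on the even coordinates, the second on the odd coordinates, and conjugation by \(u\) interchanges the two isomorphism classes.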
Next we consider \(A\cong\mathcal{Q}_{t}(\mathbb{K})\). This time \(V_{t}\) is of type \(\mathtt{Q}\), meaning we are in the \(\Lambda^{A}_{\bar{1}}\) case. Now, \(A_{\bar{0}}\cong\mathcal{M}_{t\times t}(\mathbb{K})\) and \(\operatorname{Res}^{|A|}_{A_{\bar{0}}}|V_{1}|\cong\operatorname{Res}^{|A|}_{A_{\bar{0}}}|V_{2}|\) is, up to isomorphism, the unique irreducible \(A_{\bar{0}}\)-module, proving all the claims.
From now on we will adopt the labeling of \(\operatorname{Irr}(A_{\bar{0}})\) as in Lemma 3.49. As with \(\xi^{+}_{\lambda},\xi^{-}_{\lambda}\in\operatorname{Irr}(|A|)\) when \(\lambda\) is odd, most of the time we will not be careful about distinguishing between \(\tilde{\xi}^{+}_{\lambda}\) and \(\tilde{\xi}^{-}_{\lambda}\), for \(\lambda\) even.
**Remark 3.50**.: As a consequence of Lemma 3.49 and the fact that \(\operatorname{Ind}^{|A|}_{A_{\bar{0}}}\) and \(\operatorname{Res}^{|A|}_{A_{\bar{0}}}\) are adjoint functors, we can describe \(\uparrow^{A}_{A_{\bar{0}}}:\mathbb{Z}\mathrm{Irr}(A_{\bar{0}})\to\mathbb{Z}\mathrm{Irr}(A)\). If \(\lambda\in\Lambda^{A}_{\bar{0}}\), then \(\tilde{\xi}^{+}_{\lambda}\uparrow^{A}_{A_{\bar{0}}}=\tilde{\xi}^{-}_{\lambda}\uparrow^{A}_{A_{\bar{0}}}=\xi_{\lambda}\). If \(\lambda\in\Lambda^{A}_{\bar{1}}\), then \(\tilde{\xi}_{\lambda}\uparrow^{A}_{A_{\bar{0}}}=\xi^{+}_{\lambda}+\xi^{-}_{\lambda}\).
Let \(A\) and \(B\) be split semisimple \(\mathbb{K}\)-superalgebras, \(U\) an \(A\)-supermodule and \(V\) a \(B\)-supermodule such that \(|U|\) has character \(\chi\in\mathbb{Z}\mathrm{Irr}(A)\) and \(|V|\) has character \(\psi\in\mathbb{Z}\mathrm{Irr}(B)\). We write \(\chi\boxtimes\psi\in\mathbb{Z}\mathrm{Irr}(A\otimes B)\) for the character of the \(|A\otimes B|\)-module \(|U\boxtimes V|\).
**Lemma 3.51**.: _Let \(A\) and \(B\) be split semisimple \(\mathbb{K}\)-superalgebras._
* \(A\otimes B\) _is a split semisimple_ \(\mathbb{K}\)_-superalgebra._
* _If_ \(\lambda\in\Lambda^{A}\) _and_ \(\mu\in\Lambda^{B}\)_, then_ \(\xi_{\lambda}\boxtimes\xi_{\mu}\in\mathrm{Irr}_{\mathrm{super}}(A\otimes B)\) _unless_ \(\lambda\in\Lambda^{A}_{\bar{1}}\) _and_ \(\mu\in\Lambda^{B}_{\bar{1}}\)_. In the latter case_ \(\xi_{\lambda}\boxtimes\xi_{\mu}\) _is the sum of two copies of the same irreducible supercharacter of_ \(A\otimes B\)_._
_In all cases we write \(\xi_{\lambda,\mu}\) for the unique irreducible constituent of \(\xi_{\lambda}\boxtimes\xi_{\mu}\)._
* \(\mathrm{Irr}_{\mathrm{super}}(A\otimes B)=\{\xi_{\lambda,\mu}\mid\lambda\in \Lambda^{A},\mu\in\Lambda^{B}\}\) _is a complete irredundant set of irreducible supercharacters of_ \(A\otimes B\)_. Moreover,_ \(\xi_{\lambda,\mu}\) _corresponds to an irreducible supermodule of type_ M _if_ \(\xi_{\lambda}\) _and_ \(\xi_{\mu}\) _have the same type and type_ Q _if_ \(\xi_{\lambda}\) _and_ \(\xi_{\mu}\) _have opposite type._
Proof.: All claims, except that our set in (iii) is irredundant, follow from [**K**, Lemma 12.2.13]. There it is assumed that \(\mathbb{K}\) is algebraically closed. However, the proof runs through in exactly the same manner once we have established Lemma 3.46.
For the irredundancy claim, if \(\xi_{\lambda_{1},\mu_{1}}=\xi_{\lambda_{2},\mu_{2}}\), then, by restricting to \(A\) and to \(B\) we can show that \(\lambda_{1}=\lambda_{2}\) and \(\mu_{1}=\mu_{2}\).
**Remark 3.52**.: Occasionally, instead of \(\xi_{\lambda,\mu}\) we use the notation \(\xi_{\lambda}\circledast\xi_{\mu}\). This corresponds to the operation '\(\circledast\)' on supermodules as in [**K**, §12.2].
Using the above lemma we make the following identifications
\[\Lambda^{A\otimes B}=\Lambda^{A}\times\Lambda^{B},\quad\Lambda_{\bar{0}}^{A \otimes B}=\Lambda_{\bar{0}}^{A}\times\Lambda_{\bar{0}}^{B}\sqcup\Lambda_{\bar {1}}^{A}\times\Lambda_{\bar{1}}^{B},\quad\Lambda_{\bar{1}}^{A\otimes B}= \Lambda_{\bar{0}}^{A}\times\Lambda_{\bar{1}}^{B}\sqcup\Lambda_{\bar{1}}^{A} \times\Lambda_{\bar{0}}^{B},\]
so that
\[\operatorname{Irr}(A\otimes B)=\{\xi_{\lambda,\mu}\mid(\lambda,\mu)\in\Lambda _{\bar{0}}^{A\otimes B}\}\sqcup\{\xi_{\lambda,\mu}^{+},\xi_{\lambda,\mu}^{-} \mid(\lambda,\mu)\in\Lambda_{\bar{1}}^{A\otimes B}\}\]
is a complete, irredundant list of irreducible characters of \(|A\otimes B|\) and
\[\xi_{\lambda,\mu}^{\sigma}=\xi_{\lambda,\mu} \text{if }(\lambda,\mu)\in\Lambda_{\bar{0}}^{A\otimes B},\] \[(\xi_{\lambda,\mu}^{\pm})^{\sigma}=\xi_{\lambda,\mu}^{\mp} \text{if }(\lambda,\mu)\in\Lambda_{\bar{1}}^{A\otimes B}.\]
More generally, if \(A_{1},\ldots,A_{n}\) are split semisimple \(\mathbb{K}\)-superalgebras, we can identify
\[\Lambda^{A_{1}\otimes\cdots\otimes A_{n}} =\Lambda^{A_{1}}\times\cdots\times\Lambda^{A_{n}},\] \[\Lambda_{\bar{0}}^{A_{1}\otimes\cdots\otimes A_{n}} =\{(\lambda^{1},\ldots,\lambda^{n})\in\Lambda^{A_{1}}\times \cdots\times\Lambda^{A_{n}}\mid\text{number of the odd }\lambda^{i}\text{ is even}\},\] \[\Lambda_{\bar{1}}^{A_{1}\otimes\cdots\otimes A_{n}} =\{(\lambda^{1},\ldots,\lambda^{n})\in\Lambda^{A_{1}}\times \cdots\times\Lambda^{A_{n}}\mid\text{number of the odd }\lambda^{i}\text{ is odd}\},\]
so that
\[\operatorname{Irr}_{\text{super}}(A_{1}\otimes\cdots\otimes A_{n}) =\{\xi_{\lambda^{1},\ldots,\lambda^{n}}\mid(\lambda^{1},\ldots, \lambda^{n})\in\Lambda^{A_{1}}\times\cdots\times\Lambda^{A_{n}}\}, \tag{3.53}\] \[\operatorname{Irr}(A_{1}\otimes\cdots\otimes A_{n}) =\{\xi_{\lambda^{1},\ldots,\lambda^{n}}\mid(\lambda^{1},\ldots, \lambda^{n})\in\Lambda_{\bar{0}}^{A_{1}\otimes\cdots\otimes A_{n}}\}\] (3.54) \[\sqcup\{\xi_{\lambda^{1},\ldots,\lambda^{n}}^{+},\xi_{\lambda^{1 },\ldots,\lambda^{n}}^{-}\mid(\lambda^{1},\ldots,\lambda^{n})\in\Lambda_{\bar {1}}^{A_{1}\otimes\cdots\otimes A_{n}}\}.\]
**Lemma 3.55**.: _Let \(A\) and \(B\) be split semisimple \(\mathbb{K}\)-superalgebras with superunits. Then \(A\otimes B\) is also a split semisimple \(\mathbb{K}\)-superalgebra with superunit. Furthermore, we can label the elements of \(\operatorname{Irr}(A_{\bar{0}})\), \(\operatorname{Irr}(B)\) and \(\operatorname{Irr}(A\otimes B)\) such that_
1. \[\xi_{\lambda,\mu}\downarrow_{A_{\bar{0}}\otimes B}^{A\otimes B}=(\tilde{\xi}_{\lambda}^{+}\boxtimes\xi_{\mu})+(\tilde{\xi}_{\lambda}^{-}\boxtimes\xi_{\mu}) \text{if }\lambda\in\Lambda_{\bar{0}}^{A}\text{ and }\mu\in\Lambda_{\bar{0}}^{B}\text{;}\] \[\xi_{\lambda,\mu}^{\pm}\downarrow_{A_{\bar{0}}\otimes B}^{A\otimes B}=(\tilde{\xi}_{\lambda}^{+}\boxtimes\xi_{\mu}^{\pm})+(\tilde{\xi}_{\lambda}^{-}\boxtimes\xi_{\mu}^{\mp}) \text{if }\lambda\in\Lambda_{\bar{0}}^{A}\text{ and }\mu\in\Lambda_{\bar{1}}^{B}\text{;}\] \[\xi_{\lambda,\mu}^{\pm}\downarrow_{A_{\bar{0}}\otimes B}^{A\otimes B}=\tilde{\xi}_{\lambda}\boxtimes\xi_{\mu} \text{if }\lambda\in\Lambda_{\bar{1}}^{A}\text{ and }\mu\in\Lambda_{\bar{0}}^{B}\text{;}\] \[\xi_{\lambda,\mu}\downarrow_{A_{\bar{0}}\otimes B}^{A\otimes B}=\tilde{\xi}_{\lambda}\boxtimes\xi_{\mu}^{+}+\tilde{\xi}_{\lambda}\boxtimes\xi_{\mu}^{-} \text{if }\lambda\in\Lambda_{\bar{1}}^{A}\text{ and }\mu\in\Lambda_{\bar{1}}^{B}\text{.}\]
2. \[(\tilde{\xi}_{\lambda}^{\pm}\boxtimes\xi_{\mu})\uparrow_{A_{\bar{0}}\otimes B}^{A\otimes B}=\xi_{\lambda,\mu} \text{if }\lambda\in\Lambda_{\bar{0}}^{A}\text{ and }\mu\in\Lambda_{\bar{0}}^{B}\text{;}\] \[(\tilde{\xi}_{\lambda}^{\pm}\boxtimes\xi_{\mu}^{\pm})\uparrow_{A_{\bar{0}}\otimes B}^{A\otimes B}=\xi_{\lambda,\mu}^{+} \text{if }\lambda\in\Lambda_{\bar{0}}^{A}\text{ and }\mu\in\Lambda_{\bar{1}}^{B}\text{;}\] \[(\tilde{\xi}_{\lambda}^{\pm}\boxtimes\xi_{\mu}^{\mp})\uparrow_{A_{\bar{0}}\otimes B}^{A\otimes B}=\xi_{\lambda,\mu}^{-} \text{if }\lambda\in\Lambda_{\bar{0}}^{A}\text{ and }\mu\in\Lambda_{\bar{1}}^{B}\text{;}\] \[(\tilde{\xi}_{\lambda}\boxtimes\xi_{\mu})\uparrow_{A_{\bar{0}}\otimes B}^{A\otimes B}=\xi_{\lambda,\mu} \text{if }\lambda\in\Lambda_{\bar{1}}^{A}\text{ and }\mu\in\Lambda_{\bar{0}}^{B}\text{;}\] \[(\tilde{\xi}_{\lambda}\boxtimes\xi_{\mu}^{\pm})\uparrow_{A_{\bar{0}}\otimes B}^{A\otimes B}=\xi_{\lambda,\mu} \text{if }\lambda\in\Lambda_{\bar{1}}^{A}\text{ and }\mu\in\Lambda_{\bar{1}}^{B}\text{.}\]
_We, of course, have the corresponding equalities for \(\downarrow_{A\otimes B_{\bar{0}}}^{A\otimes B}\) and \(\uparrow_{A\otimes B_{\bar{0}}}^{A\otimes B}\)._
Proof.: We already know from Lemma 3.51 that \(A\otimes B\) is split semisimple. Let \(u_{A}\in A^{\times}\cap A_{\bar{1}}\) and \(u_{B}\in B^{\times}\cap B_{\bar{1}}\). Clearly \(u_{A}\otimes 1\) is a superunit of \(A\otimes B\), proving the first part of the lemma.
For parts (i) and (ii) we prove the hardest case, that is, \(\lambda\in\Lambda^{A}_{\bar{0}}\) and \(\mu\in\Lambda^{B}_{\bar{1}}\). The other cases are similar but easier as any choices made about the labeling of characters are unimportant.
Let \(U\) be an irreducible \(A\)-supermodule with irreducible supercharacter \(\xi_{\lambda}\in\operatorname{Irr}_{\operatorname{super}}(A)\) and \(V\) an irreducible \(B\)-supermodule with irreducible supercharacter \(\xi_{\mu}\in\operatorname{Irr}_{\operatorname{super}}(B)\). By Lemma 3.51, \(U\boxtimes V\) is an irreducible \((A\otimes B)\)-supermodule. By Remark 3.50, \(\operatorname{Ind}^{A}_{A_{\bar{0}}}U_{\bar{0}}\simeq U\). Therefore, \(\operatorname{Ind}^{A\otimes B}_{A_{\bar{0}}\otimes B}(U_{\bar{0}}\boxtimes V)\simeq U\boxtimes V\).
Note that we made a choice when we selected \(U\), as we could have chosen \(\Pi U\not\simeq U\). Let's say we picked \(U\) such that \(U_{\bar{0}}\) has character \(\tilde{\xi}_{\lambda}^{+}\). In the previous paragraph we showed that
\[(\tilde{\xi}_{\lambda}^{+}\boxtimes\xi_{\mu})\uparrow^{A\otimes B}_{A_{\bar{0 }}\otimes B}=\xi_{\lambda,\mu}=\xi_{\lambda,\mu}^{+}+\xi_{\lambda,\mu}^{-}.\]
We can now choose the labeling of the appropriate elements of \(\operatorname{Irr}(A\otimes B)\) such that
\[(\tilde{\xi}_{\lambda}^{+}\boxtimes\xi_{\mu}^{+})\uparrow^{A\otimes B}_{A_{ \bar{0}}\otimes B}=\xi_{\lambda,\mu}^{+},\qquad(\tilde{\xi}_{\lambda}^{+} \boxtimes\xi_{\mu}^{-})\uparrow^{A\otimes B}_{A_{\bar{0}}\otimes B}=\xi_{ \lambda,\mu}^{-}.\]
We can think of this last equation as a definition, once the labelings of \(\operatorname{Irr}(A_{\bar{0}})\) and \(\operatorname{Irr}(B)\) have been fixed. Now,
\[(\tilde{\xi}_{\lambda}^{-}\boxtimes\xi_{\mu}^{+})\uparrow^{A \otimes B}_{A_{\bar{0}}\otimes B}=\left({}^{u_{A}}(\tilde{\xi}_{\lambda}^{-} \boxtimes\xi_{\mu}^{+})\right)\uparrow^{A\otimes B}_{A_{\bar{0}}\otimes B}=( \tilde{\xi}_{\lambda}^{+}\boxtimes\xi_{\mu}^{-})\uparrow^{A\otimes B}_{A_{ \bar{0}}\otimes B}=\xi_{\lambda,\mu}^{-},\] \[(\tilde{\xi}_{\lambda}^{-}\boxtimes\xi_{\mu}^{-})\uparrow^{A \otimes B}_{A_{\bar{0}}\otimes B}=\left({}^{u_{A}}(\tilde{\xi}_{\lambda}^{-} \boxtimes\xi_{\mu}^{-})\right)\uparrow^{A\otimes B}_{A_{\bar{0}}\otimes B}=( \tilde{\xi}_{\lambda}^{+}\boxtimes\xi_{\mu}^{+})\uparrow^{A\otimes B}_{A_{ \bar{0}}\otimes B}=\xi_{\lambda,\mu}^{+}\]
and part (ii) is proved. Part (i) follows from the fact that \(\operatorname{Ind}^{A\otimes B}_{A_{\bar{0}}\otimes B}\) and \(\operatorname{Res}^{A\otimes B}_{A_{\bar{0}}\otimes B}\) are an adjoint pair.
Let \(A\) be a split semisimple \(\mathbb{K}\)-superalgebra and \(\lambda\in\Lambda^{A}\). We set
\[\varepsilon_{\lambda}:=\begin{cases}1&\text{if }\lambda\in\Lambda^{A}_{\bar{0}}, \\ \sqrt{2}&\text{if }\lambda\in\Lambda^{A}_{\bar{1}}.\end{cases} \tag{3.56}\]
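Note that, since \(A\) is split, this quantity records the endomorphism algebra of the corresponding irreducible supermodule:

\[\varepsilon_{\lambda}^{2}=\dim_{\mathbb{K}}\operatorname{End}_{|A|}(|X_{\lambda}|)\qquad(\lambda\in\Lambda^{A}),\]

which is one way to remember where the factors \(\varepsilon_{\lambda}^{2}\) in Lemma 3.58(ii) below come from.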
More generally, if \(A_{1},\ldots,A_{n}\) are split semisimple \(\mathbb{K}\)-superalgebras then we have identified \(\Lambda^{A_{1}\otimes\cdots\otimes A_{n}}=\Lambda^{A_{1}}\times\cdots\times \Lambda^{A_{n}}\) so that \((\lambda^{1},\ldots,\lambda^{n})\in\Lambda^{A_{1}\otimes\cdots\otimes A_{n}}_{ \bar{0}}\) if and only if the number of the odd \(\lambda^{i}\) is even. Now, set
\[\varepsilon_{\lambda^{1},\ldots,\lambda^{n}}:=\varepsilon_{(\lambda^{1}, \ldots,\lambda^{n})}=\begin{cases}1&\text{if }(\lambda^{1},\ldots,\lambda^{n})\in\Lambda^{A_{1}\otimes\cdots\otimes A_{n}} _{\bar{0}},\\ \sqrt{2}&\text{if }(\lambda^{1},\ldots,\lambda^{n})\in\Lambda^{A_{1}\otimes \cdots\otimes A_{n}}_{\bar{1}}.\end{cases} \tag{3.57}\]
Note that \(\varepsilon_{\lambda^{1},\ldots,\lambda^{n}}\) does not depend on the order of \(\lambda^{1},\ldots,\lambda^{n}\) (unlike \(\xi_{\lambda^{1},\ldots,\lambda^{n}}\)).
**Lemma 3.58**.: _Let \(A\) and \(B\) be split semisimple \(\mathbb{K}\)-superalgebras with superunits and let \(M\) be an \((A,B)\)-bisupermodule. Then:_
* \(M\otimes_{B}\xi^{\sigma}=(M\otimes_{B}\xi)^{\sigma},\) _for all_ \(\xi\in\operatorname{Irr}(B)\)_._
* _We can write, for each_ \(\mu\in\Lambda^{B}\)_,_ \[M\otimes_{B}\xi_{\mu}=\sum_{\lambda\in\Lambda^{A}}a_{\mu,\lambda}\xi_{ \lambda},\] _for some_ \(a_{\mu,\lambda}\in\mathbb{N}\)_. Similarly, we can write, for each_ \(\lambda\in\Lambda^{A}\)_,_ \[M^{*}\otimes_{A}\xi_{\lambda}=\sum_{\mu\in\Lambda^{B}}b_{\lambda,\mu}\xi_{\mu},\] _for some_ \(b_{\lambda,\mu}\in\mathbb{N}\)_. Moreover,_ \(\varepsilon_{\lambda}^{2}a_{\mu,\lambda}=\varepsilon_{\mu}^{2}b_{\lambda,\mu}\)_, for all_ \(\lambda\in\Lambda^{A}\) _and_ \(\mu\in\Lambda^{B}\)_._
Proof.: (i) Let \({}_{\sigma_{A}}M\) be the \((A,B)\)-bisupermodule equal to \(M\) as a superspace but with \((A,B)\)-bimodule structure given by \(a.m.b:=\sigma_{A}(a)mb\), for all \(a\in A\), \(b\in B\) and \(m\in M\). Define \(M_{\sigma_{B}}\) analogously. We have the \((A,B)\)-bisupermodule isomorphism \({}_{\sigma_{A}}M\stackrel{{\sim}}{{\longrightarrow}}M_{\sigma_{ B}},\ m\mapsto(-1)^{|m|}m.\) The claim follows as
\[M\otimes_{B}\xi^{\sigma}=M_{\sigma_{B}}\otimes_{B}\xi={}_{\sigma_{A}}M\otimes _{B}\xi=(M\otimes_{B}\xi)^{\sigma}.\]
(ii) The expressions involving \(M\) and \(M^{*}\) just follow from part (i).
For the second part we simply run through all the possibilities of \(\lambda\) and \(\mu\) being even/odd. For example, let \(\lambda\) be even and \(\mu\) odd. Part (i) implies that \(\xi_{\lambda}\) must appear with coefficient \(a_{\mu,\lambda}/2\) in both \(M\otimes_{B}\xi_{\mu}^{+}\) and \(M\otimes_{B}\xi_{\mu}^{-}\). Therefore, \(\xi_{\mu}^{+}\) and \(\xi_{\mu}^{-}\) must both appear with coefficient \(a_{\mu,\lambda}/2\) in \(M^{*}\otimes_{A}\xi_{\lambda}\). Therefore, \(b_{\lambda,\mu}=a_{\mu,\lambda}/2\), as desired. The other cases are proved similarly.
Using the notation of the above lemma, we say \(\xi_{\lambda}\) appears as an _irreducible superconstituent_ (or just _superconstituent_) with coefficient \(a_{\mu,\lambda}\) in \(M\otimes_{B}\xi_{\mu}\).
**Lemma 3.59**.: _Let \(A\), \(B\) and \(C\) be split semisimple \(\mathbb{K}\)-superalgebras with superunit, \(M\) a \((B,C)\)-bisupermodule and \(\lambda\in\Lambda^{A}\), \(\mu\in\Lambda^{C}\). If \(M\otimes_{C}\xi_{\mu}=\sum_{\nu\in\Lambda^{B}}a_{\mu,\nu}\xi_{\nu}\) then_
\[(A\boxtimes M)\otimes_{A\otimes C}\xi_{\lambda,\mu}=\sum_{\nu\in\Lambda^{B}} \frac{\varepsilon_{\nu}\varepsilon_{(\lambda,\mu)}}{\varepsilon_{\mu} \varepsilon_{(\lambda,\nu)}}a_{\mu,\nu}\xi_{\lambda,\nu}.\]
Proof.: This is just a case of running through all the possibilities of \(\lambda\), \(\mu\) and \(\nu\) being even/odd. For example, if \(\lambda\) and \(\mu\) are both odd and \(\nu\) is even, then, by Lemma 3.58(i), \(\xi_{\nu}\) appears with coefficient \(a_{\mu,\nu}/2\) in both \(M\otimes_{C}\xi_{\mu}^{+}\) and \(M\otimes_{C}\xi_{\mu}^{-}\). Therefore, \(\tilde{\xi}_{\lambda}\boxtimes\xi_{\nu}\) appears with coefficient \(a_{\mu,\nu}/2\) in both \((A_{\bar{0}}\boxtimes M)\otimes_{A_{\bar{0}}\otimes C}(\tilde{\xi}_{\lambda} \boxtimes\xi_{\mu}^{+})\) and \((A_{\bar{0}}\boxtimes M)\otimes_{A_{\bar{0}}\otimes C}(\tilde{\xi}_{\lambda} \boxtimes\xi_{\mu}^{-})\). Using Lemma 3.55(ii),
\[(A\boxtimes B)\otimes_{A_{\bar{0}}\otimes B}(\tilde{\xi}_{\lambda}\boxtimes \xi_{\nu})=\xi_{\lambda,\nu}^{+}+\xi_{\lambda,\nu}^{-}=\xi_{\lambda,\nu}\]
appears as a superconstituent with coefficient \(a_{\mu,\nu}/2\) in
\[(A\boxtimes B)\otimes_{A_{\bar{0}}\otimes B}(A_{\bar{0}}\boxtimes M )\otimes_{A_{\bar{0}}\otimes C}(\tilde{\xi}_{\lambda}\boxtimes\xi_{\mu}^{+})= (A\boxtimes M)\otimes_{A_{\bar{0}}\otimes C}(\tilde{\xi}_{\lambda}\boxtimes \xi_{\mu}^{+})\] \[= (A\boxtimes M)\otimes_{A\otimes C}(A\boxtimes C)\otimes_{A_{\bar{0 }}\otimes C}(\tilde{\xi}_{\lambda}\boxtimes\xi_{\mu}^{+})=(A\boxtimes M) \otimes_{A\otimes C}\xi_{\lambda,\mu},\]
where, above, we are applying Lemma 3.9 twice and Lemma 3.55(ii) once.
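To illustrate the coefficient in Lemma 3.59: if \(\lambda\) is even, then \(\varepsilon_{(\lambda,\mu)}=\varepsilon_{\mu}\) and \(\varepsilon_{(\lambda,\nu)}=\varepsilon_{\nu}\), so

\[\frac{\varepsilon_{\nu}\varepsilon_{(\lambda,\mu)}}{\varepsilon_{\mu}\varepsilon_{(\lambda,\nu)}}=1\]

and \(A\boxtimes M\) reproduces the multiplicities of \(M\) unchanged; in the case treated in the proof above (\(\lambda,\mu\) odd and \(\nu\) even) the factor is \((1\cdot 1)/(\sqrt{2}\cdot\sqrt{2})=1/2\), matching the coefficient \(a_{\mu,\nu}/2\) computed there.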
### Super group algebras
Throughout this subsection \(G\) will denote a finite group with an index \(2\) subgroup \(G_{\bar{0}}\). We give \(\mathcal{R}G\) the structure of an \(\mathcal{R}\)-superalgebra with superunit via
\[(\mathcal{R}G)_{\bar{0}}:=\langle g\mid g\in G_{\bar{0}}\rangle_{\mathcal{R}} \quad\text{and}\quad(\mathcal{R}G)_{\bar{1}}:=\langle g\mid g\in G\setminus G_ {\bar{0}}\rangle_{\mathcal{R}}.\]
In particular, for \(g\in G\), we have \(|g|=\bar{0}\) if \(g\in G_{\bar{0}}\) and \(|g|=\bar{1}\) if \(g\not\in G_{\bar{0}}\).
For an absolutely indecomposable \(\mathcal{R}G\)-supermodule \(M\), when we speak of a _vertex_ of \(M\), we always mean a vertex of \(|M|\). The same applies to relative projectivity with respect to a subgroup and other standard notions of finite group theory.
**Lemma 3.60**.: _If \(\mathbb{K}\) contains a primitive \(|G|^{\text{th}}\) root of unity and \(e\) is a non-zero idempotent in \(Z(\mathbb{K}G)\cap\mathbb{K}G_{\bar{0}}\), then \(\mathbb{K}Ge\) is a split semisimple \(\mathbb{K}\)-superalgebra with superunit._
Proof.: We first prove \(\mathbb{K}G\) is split semisimple. Since \(\mathbb{K}\) contains a primitive \(|G|^{\text{th}}\) root of unity, it is well known that \(\mathbb{K}G_{\bar{0}}\) and \(|\mathbb{K}G|\) are split semisimple \(\mathbb{K}\)-algebras. We make use of this fact at several points during the proof.
Let \(U\) be an irreducible \(\mathbb{K}G_{\bar{0}}\)-module and set \(M:=\operatorname{Ind}_{G_{\bar{0}}}^{G}U\). We claim that \(M\) is an irreducible \(\mathbb{K}G\)-supermodule. If \(U\) is not \(G\)-stable, then, by standard Clifford theory, \(|M|\) is an irreducible \(|\mathbb{K}G|\)-module. Therefore, \(M\) is an irreducible \(\mathbb{K}G\)-supermodule of type \(\mathtt{M}\) and \(\dim_{\mathbb{K}}\operatorname{End}_{|\mathbb{K}G|}(|M|)=1\). On the other hand, if \(U\) is \(G\)-stable, then, again by standard Clifford theory, \(|M|=M_{1}\oplus M_{2}\), for two non-isomorphic, irreducible \(|\mathbb{K}G|\)-submodules \(M_{1},M_{2}\subseteq|M|\). Furthermore, \(M_{1}\) and \(M_{2}\) are related by tensoring with the unique non-trivial, linear character of \(G\) with kernel \(G_{\bar{0}}\). In other words, \(M_{1}^{\sigma}\cong M_{2}\). Therefore, neither \(M_{1}\) nor \(M_{2}\) is \(\sigma_{M}\)-invariant and \(M\) must be an irreducible \(\mathbb{K}G\)-supermodule. Moreover, \(\dim_{\mathbb{K}}\operatorname{End}_{|\mathbb{K}G|}(|M|)=2\) and \(M\) is of type \(\mathtt{Q}\).
In fact, every irreducible \(\mathbb{K}G\)-supermodule is of the form \(\operatorname{Ind}_{G_{\bar{0}}}^{G}U\), for some irreducible \(\mathbb{K}G_{\bar{0}}\)-module \(U\). Indeed, if \(M\) is an irreducible \(\mathbb{K}G\)-supermodule, then \(M\simeq\operatorname{Ind}_{G_{\bar{0}}}^{G}M_{\bar{0}}\), similar to Lemma 3.13(i). We have now shown that \(\mathbb{K}G\) is split.
Since we can decompose \(\mathbb{K}G_{\bar{0}}\) into a direct sum of irreducible \(\mathbb{K}G_{\bar{0}}\)-modules, we can decompose \(\mathbb{K}G\simeq\operatorname{Ind}_{G_{\bar{0}}}^{G}\mathbb{K}G_{\bar{0}}\) into a direct sum of irreducible \(\mathbb{K}G\)-supermodules, proving semisimplicity.
The statement for \(\mathbb{K}Ge\) now follows from Lemma 3.46, since truncating by \(e\) translates into deleting some of the factors in statement (ii) of said lemma.
**Remark 3.61**.: We note that it is not true that a \(\mathbb{K}\)-superalgebra \(A\) is split semisimple if and only if the algebra \(|A|\) is split semisimple. For example, let \(A=M_{2}(\mathbb{R})\) with superstructure determined by \(\sigma_{A}\), which is given by conjugation with \(\begin{pmatrix}0&1\\ -1&0\end{pmatrix}\). Then \(A_{\bar{0}}\cong\mathbb{C}\) and it follows easily from Lemma 3.46 that \(A\) is not split over \(\mathbb{R}\). On the other hand \(|A|=M_{2}(\mathbb{R})\) is split over \(\mathbb{R}\).
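Concretely, a matrix commutes with \(\begin{pmatrix}0&1\\ -1&0\end{pmatrix}\) if and only if it is of the form

\[\begin{pmatrix}a&b\\ -b&a\end{pmatrix}\qquad(a,b\in\mathbb{R}),\]

and, writing \(J\) for the above matrix, \(aI+bJ\mapsto a+bi\) gives the isomorphism \(A_{\bar{0}}\cong\mathbb{C}\), since \(J^{2}=-I\).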
For the next lemma, let \(H\) be another finite group with an index \(2\) subgroup \(H_{\bar{0}}\).
**Lemma 3.62**.: _Let \(M\) be an absolutely indecomposable \(\mathcal{O}\)-free \((\mathcal{O}G,\mathcal{O}H)\)-bisupermodule. If \(M\) has vertex \(P\leq G\times H\) then \(M^{*}\) has vertex \(P^{*}:=\{(y,x)\in H\times G\mid(x,y)\in P\}\)._
Proof.: Since \(|M^{*}|=|M|^{*}\), this is just a result about the usual (non-super) dual. Taking duals commutes with induction. Therefore, \(M^{*}\) is relatively \(P\)-projective as a right \(\mathcal{O}(G\times H)\)-module. Equivalently, it is relatively \(P^{*}\)-projective as a left \(\mathcal{O}(H\times G)\)-module. That it has vertex \(P^{*}\) is just a consequence of the fact that \(|M^{*}|^{*}\cong|M|\), since \(M\) is \(\mathcal{O}\)-free.
For any subgroup \(N\leq G\) we will use \(N_{\bar{0}}\) to denote \(N\cap G_{\bar{0}}\) and \(\mathcal{O}N\) will inherit its superalgebra structure from that of \(\mathcal{O}G\). In particular, if \(N\nleq G_{\bar{0}}\), then, by Lemma 3.60, \(\mathbb{K}N\) is also a split semisimple \(\mathbb{K}\)-superalgebra with superunit.
We now state and prove a super version of the well-known Green correspondence.
**Theorem 3.63**.: _Let \(P\) be a \(p\)-subgroup of \(G\) and \(N_{G}(P)\leq N\leq G\) be such that \(N\) is not contained in \(G_{\bar{0}}\). Then there exists a correspondence between the absolutely indecomposable \(\mathcal{O}G\)-supermodules with vertex \(P\) and the absolutely indecomposable \(\mathcal{O}N\)-supermodules with vertex \(P\). Suppose \(U\) is an absolutely indecomposable \(\mathcal{O}G\)-supermodule with vertex \(P\) corresponding to an absolutely indecomposable \(\mathcal{O}N\)-supermodule \(V\) with vertex \(P\). Then_
\[\operatorname{Ind}_{N}^{G}V\simeq U\oplus M_{G}\qquad\text{and}\qquad\operatorname {Res}_{N}^{G}U\simeq V\oplus M_{N}, \tag{3.64}\]
_for some \(\mathcal{O}G\)-supermodule \(M_{G}\) and some \(\mathcal{O}N\)-supermodule \(M_{N}\). Moreover, as an \(\mathcal{O}G\)-module, \(M_{G}\) is a direct sum of modules each with vertex contained in \(P\cap{}^{g}P\), for some \(g\in G\setminus N\) and, as an \(\mathcal{O}N\)-module, \(M_{N}\) is a direct sum of modules each with vertex contained in \(N\cap{}^{g}P\), for some \(g\in G\setminus N\)._
_Furthermore, given \(U\) (resp. \(V\)), the decompositions of supermodules in (3.64) uniquely determine \(V\) (resp. \(U\)) up to isomorphism, subject to all the above conditions._
Proof.: All of this is the well-known Green correspondence except that (3.64) holds as supermodules (and not just modules) and that \(U\) and \(V\) uniquely determine one another as supermodules (again, not just as modules), up to isomorphism.
Let \(V\) be an absolutely indecomposable \(\mathcal{O}N\)-supermodule with vertex \(P\). Then, by Lemma 3.13(i), \(V\simeq\operatorname{Ind}_{N_{\bar{0}}}^{N}V_{\bar{0}}\) and \(V_{\bar{0}}\) is necessarily an indecomposable \(\mathcal{O}N_{\bar{0}}\)-module with vertex \(P\) or \({}^{x}P\), for some \(x\in N\setminus N_{\bar{0}}\). (We are using the fact that \(p>2\) here.) Now, \(N_{G_{\bar{0}}}(P)\leq N_{\bar{0}}\) and so, by conjugating everything by \(x\), we also have that \(N_{G_{\bar{0}}}({}^{x}P)\leq N_{\bar{0}}\). We can, therefore, consider the Green correspondent \(W\) of \(V_{\bar{0}}\) in \(G_{\bar{0}}\).
Note that, by Lemma 3.13(ii), \(V_{\bar{0}}\) is not \(N\)-stable. Therefore, \(W\) is not \(N\)-stable, which is equivalent to not being \(G\)-stable, and \(\operatorname{Ind}_{G_{\bar{0}}}^{G}W\) is an absolutely indecomposable \(\mathcal{O}G\)-supermodule, with vertex \(P\).
Now, \(\operatorname{Ind}_{N_{\bar{0}}}^{G_{\bar{0}}}V_{\bar{0}}\cong W\oplus Z_{G}\), where \(Z_{G}\) is a direct sum of \(\mathcal{O}G_{\bar{0}}\)-modules each with vertex contained in \(P\cap{}^{g}P\) (or, in the case that \(V_{\bar{0}}\) has vertex \({}^{x}P\), each with vertex contained in \({}^{x}P\cap{}^{gx}P\)), for some \(g\in G_{\bar{0}}\setminus N_{\bar{0}}\). Consequently, \(\operatorname{Ind}_{G_{\bar{0}}}^{G}Z_{G}\) is a direct sum of modules each with vertex contained in \(P\cap{}^{g}P\), for some \(g\in G_{\bar{0}}\setminus N_{\bar{0}}\).
Setting \(U:=\operatorname{Ind}_{G_{\bar{0}}}^{G}W\) now gives
\[\operatorname{Ind}_{N}^{G}V\simeq\operatorname{Ind}_{N_{\bar{0}}}^{G}V_{\bar{0}}\simeq(\operatorname{Ind}_{G_{\bar{0}}}^{G}W)\oplus(\operatorname{Ind}_{G_{\bar{0}}}^{G}Z_{G})=U\oplus\operatorname{Ind}_{G_{\bar{0}}}^{G}Z_{G},\]
as \(\mathcal{O}G\)-supermodules, as desired.
We now show uniqueness of \(U\). Say \(\operatorname{Ind}_{N}^{G}V\simeq U\oplus M_{G}\), as in (3.64), where \(U\) and \(M_{G}\) have the desired properties. Then, taking the even part of both sides, \(\operatorname{Ind}_{N_{\bar{0}}}^{G_{\bar{0}}}V_{\bar{0}}\cong U_{\bar{0}}\oplus(M_{G})_{\bar{0}}\). Since \(V\) has vertex \(P\), \(V_{\bar{0}}\) has vertex \(P\) or \({}^{x}P\), for some \(x\in N\setminus N_{\bar{0}}\). Similarly, \(U_{\bar{0}}\) has vertex \(P\) or \({}^{y}P\), for some \(y\in G\setminus G_{\bar{0}}\). However, \(U_{\bar{0}}\mid\operatorname{Ind}_{N_{\bar{0}}}^{G_{\bar{0}}}V_{\bar{0}}\) and so \(V_{\bar{0}}\) and \(U_{\bar{0}}\) both have vertices \(P\) or \({}^{x}P\). Therefore, \(U_{\bar{0}}\) is the Green correspondent of \(V_{\bar{0}}\) in \(G_{\bar{0}}\) (as before we have \(N_{G_{\bar{0}}}(P),N_{G_{\bar{0}}}({}^{x}P)\leq N_{\bar{0}}\)) and \(U\simeq\operatorname{Ind}_{G_{\bar{0}}}^{G}U_{\bar{0}}\) is uniquely determined as an \(\mathcal{O}G\)-supermodule.
Showing that the second equation in (3.64) holds and that \(U\) uniquely determines \(V\), as an \(\mathcal{O}N\)-supermodule, is completely analogous to the reverse argument.
In the context of Theorem 3.63 we say \(U\) and \(V\) are _super Green correspondents_. We will sometimes say \(U\) is the _super Green correspondent of \(V\) in \(G\)_ or \(V\) is the _super Green correspondent of \(U\) in \(N\)_.
For the following lemma recall that, for a block \(\mathcal{O}Gb\) with defect group \(D\), we can consider \(\mathcal{O}Gb\) as an \(\mathcal{O}(G\times G)\)-supermodule with vertex \(\Delta D\).
**Lemma 3.65**.: _Let \(b\in\mathcal{O}G_{\bar{0}}\) be a block idempotent in \(\mathcal{O}G\) such that \(\mathcal{O}Gb\) has defect group \(D\) and \(N_{G}(D)\leq N\leq G\) with \(N\nleq G_{\bar{0}}\). Let \(f\in\mathcal{O}N\) be such that \(\mathcal{O}Nf\) is the Brauer correspondent of \(\mathcal{O}Gb\) in \(N\). Then \(f\in\mathcal{O}N_{\bar{0}}\). Moreover, there is a unique absolutely indecomposable summand \(U\) of \(b\mathcal{O}Gf\), as an \(\mathcal{O}(G\times N)\)-supermodule, with vertex \(\Delta D\), and all other summands have vertices strictly contained in \(\Delta D\). Furthermore, \(U\) is the super Green correspondent of both \(\mathcal{O}Gb\) and \(\mathcal{O}Nf\) in \(G\times N\). In particular, \(\mathcal{O}Nf\) is the super Green correspondent of \(\mathcal{O}Gb\) in \(N\times N\)._
Proof.: Since \(D\leq N_{\bar{0}}\), \(\mathcal{O}N\sigma_{\mathcal{O}N}(f)\) also has defect group \(D\) and
\[\operatorname{Br}_{D}(b)=\operatorname{Br}_{D}(\sigma_{\mathcal{O}G}(b))= \operatorname{Br}_{D}(\sigma_{\mathcal{O}G}(f))=\operatorname{Br}_{D}(\sigma_ {\mathcal{O}N}(f)).\]
Therefore, \(\mathcal{O}Nf\) and \(\mathcal{O}N\sigma_{\mathcal{O}N}(f)\) are both the Brauer correspondent of \(\mathcal{O}Gb\) in \(N\). In particular, \(\sigma_{\mathcal{O}N}(f)=f\) and so \(f\in\mathcal{O}N_{\bar{0}}\).
It is well known that \(\mathcal{O}Gb\) and \(\mathcal{O}Nf\) are (non-super) Green correspondents. We set \(U\) to be the super Green correspondent of \(\mathcal{O}Nf\) in \(G\times N\). In particular, \(U\mid\operatorname{Ind}_{N\times N}^{G\times N}\mathcal{O}Nf\simeq\mathcal{ O}Gf\) and all other summands have vertex strictly contained in \(\Delta D\). Due to the first paragraph, \(U\) must be at least the (non-super) Green correspondent of \(\mathcal{O}Gb\) in \(G\times N\). Therefore, \(U\mid\operatorname{Res}_{G\times N}^{G\times G}\mathcal{O}Gb\simeq b\mathcal{ O}G\), as an \(\mathcal{O}(G\times N)\)-module and so \(bU=U\).
We have now shown that \(U\) is the unique indecomposable summand of \(b\mathcal{O}Gf\) with vertex \(\Delta D\) and all other summands have vertex strictly contained in \(\Delta D\). In particular, \(U\) is the super Green correspondent of \(\mathcal{O}Gb\) in \(G\times N\). The proof is now complete.
If \(K\leq G\), \(g\in G\) and \(M\) is an \(\mathcal{O}K\)-supermodule, then \(gM\) is the \(\mathcal{O}(^{g}K)\)-supermodule defined in the obvious way, that is \(|gm|=|g|+|m|\), for all \(m\in M\). In particular, if \(H\leq G\), \(k\in K\) and \(h\in H\), then we have a canonical isomorphism of \(\mathcal{O}H\)-supermodules
\[\operatorname{Ind}_{H\cap^{g}K}^{H}\operatorname{Res}_{H\cap^{g}K}^{gK}gM \stackrel{{\sim}}{{\longrightarrow}}\operatorname{Ind}_{H\cap^{hgk }K}^{H}\operatorname{Res}_{H\cap^{hgk}K}^{{}^{hgk}K}hgkM,\,\,h^{\prime}\otimes gm \mapsto h^{\prime}h^{-1}\otimes hgk(k^{-1}m).\]
This isomorphism ensures the right-hand side of the following 'super Mackey decomposition formula' is well-defined.
**Theorem 3.66**.: _Let \(H,K\leq G\) and \(M\) an \(\mathcal{O}K\)-supermodule. We have the following isomorphisms of \(\mathcal{O}H\)-supermodules_
\[\operatorname{Res}_{H}^{G}\operatorname{Ind}_{K}^{G}M\simeq\bigoplus_{g\in H \backslash G/K}\operatorname{Ind}_{H\cap^{g}K}^{H}\operatorname{Res}_{H\cap^{ g}K}^{gK}gM.\]
Proof.: In the proof of the original (non-super) version of the Mackey decomposition formula the isomorphism is given by \(g\otimes m\mapsto gm\in gM\), for each double coset representative \(g\in H\backslash G/K\). This clearly respects the superstructure.
### Vertices of super tensor products
Throughout this subsection \(G\) and \(H\) will denote finite groups with subgroups \(G_{\bar{0}}\leq G\) and \(H_{\bar{0}}\leq H\) of index at most \(2\). For \(g\in G\), we write \(|g|=\bar{0}\) if \(g\in G_{\bar{0}}\) and \(|g|=\bar{1}\) otherwise. Similarly for \(h\in H\). Note that, unlike in §3.10, we do not exclude the cases \(G=G_{\bar{0}}\) and \(H=H_{\bar{0}}\).
We also suppose that \(G_{\bar{0}}\) and \(H_{\bar{0}}\) both contain the canonical central element \(z\) of order \(2\) (note we do not distinguish between the \(z\) in \(G\) and that in \(H\) as these elements will be identified anyway). We set \(G\times_{z}H\) to be the free product \(G\star H\) of groups subject to this identification of \(z\)'s and the relation \([g,h]=z^{|g||h|}\), for all \(g\in G\) and \(h\in H\). We have a natural surjection \(\pi:G\times_{z}H\to G/\langle z\rangle\times H/\langle z\rangle\) with kernel \(\{1,z\}\). We now also have the subgroup \((G\times_{z}H)_{\bar{0}}:=\pi^{-1}(\{(g\langle z\rangle,h\langle z\rangle)\mid|g|=|h|\})\) of index at most \(2\). The main source of examples for this construction will become clear in §4.2.
As in §3.10, we have the super group algebras \(\mathcal{O}G\), \(\mathcal{O}H\) and \(\mathcal{O}(G\times_{z}H)\), except that now it is possible that some of these superalgebras could be purely even.
Let \(e_{z}:=(1-z)/2\). Since \(e_{z}\) is an even central idempotent, \(\mathcal{O}Ge_{z}\), \(\mathcal{O}He_{z}\) and \(\mathcal{O}(G\times_{z}H)e_{z}\) inherit superalgebra structure from \(\mathcal{O}G\), \(\mathcal{O}H\) and \(\mathcal{O}(G\times_{z}H)\), respectively.
Moreover, we have the superalgebra isomorphism
\[\mathcal{O}(G\times_{z}H)e_{z}\cong\mathcal{O}Ge_{z}\otimes\mathcal{O}He_{z}.\]
So, given an \(\mathcal{O}Ge_{z}\)-supermodule \(M\) and an \(\mathcal{O}He_{z}\)-supermodule \(N\), we can form the \(\mathcal{O}(G\times_{z}H)e_{z}\)-supermodule \(M\boxtimes N\).
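Note that, after truncating by \(e_{z}\), the defining relation \([g,h]=z^{|g||h|}\) becomes precisely the sign rule of the super tensor product: since \(gh=hgz^{|g||h|}\) and \(ze_{z}=-e_{z}\), we have
\[(ge_{z})(he_{z})=(-1)^{|g||h|}(he_{z})(ge_{z}),\]
matching \((g\otimes 1)(1\otimes h)=(-1)^{|g||h|}(1\otimes h)(g\otimes 1)\) in \(\mathcal{O}Ge_{z}\otimes\mathcal{O}He_{z}\).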
Since \(p\) is odd, if \(P\) is a \(p\)-subgroup of \(G\) and \(Q\) a \(p\)-subgroup of \(H\), then \(P\leq G_{\bar{0}}\), \(Q\leq H_{\bar{0}}\) and \(P\) and \(Q\) commute with one another in \(G\times_{z}H\). Furthermore, since the subgroup of \(G\times_{z}H\) generated by \(G_{\bar{0}}\) and \(H_{\bar{0}}\) is isomorphic to \((G_{\bar{0}}\times H_{\bar{0}})/\langle(z,z)\rangle\), the subgroup of \(G\times_{z}H\) generated by \(P\) and \(Q\) is isomorphic to \(P\times Q\). We will usually simply denote said subgroup by \(P\times Q\).
**Lemma 3.67**.: _Adopting the above notation, suppose \(M\) (resp. \(N\)) is \(\mathcal{O}\)-free and absolutely indecomposable with vertex \(P\) and source \(U\) (resp. vertex \(Q\) and source \(V\)). Then \(M\boxtimes N\) is an \(\mathcal{O}\)-free absolutely indecomposable \(\mathcal{O}(G\times_{z}H)e_{z}\)-supermodule with vertex \(P\times Q\leq G\times_{z}H\) and source \(U\boxtimes V\)._
Proof.: That \(M\boxtimes N\) is \(\mathcal{O}\)-free is clear.
We now assume that \(G_{\bar{0}}\) and \(H_{\bar{0}}\) are proper subgroups of \(G\) and \(H\) respectively. (We deal with the possibilities of either \(G_{\bar{0}}=G\) or \(H_{\bar{0}}=H\) at the end of the proof.)
Set \(L\) to be the subgroup of \(G\times_{z}H\) generated by \(G_{\bar{0}}\) and \(H_{\bar{0}}\), so \(L\cong(G_{\bar{0}}\times H_{\bar{0}})/\langle(z,z)\rangle\). Consider the \(\mathcal{O}(G_{\bar{0}}\times H_{\bar{0}})\)-module \(M_{\bar{0}}\boxtimes N_{\bar{0}}\). Since \((z,z)\) acts as the identity on this module, we can view \(M_{\bar{0}}\boxtimes N_{\bar{0}}\) as an \(\mathcal{O}L\)-module. Moreover,
\[\operatorname{Res}_{L}^{G\times_{z}H}M\boxtimes N\simeq\bigoplus_{\varepsilon_ {g},\varepsilon_{h}\in\{0,1\}}u_{g}^{\varepsilon_{g}}u_{h}^{\varepsilon_{h}}( M_{\bar{0}}\boxtimes N_{\bar{0}}), \tag{3.68}\]
as \(\mathcal{O}L\)-modules, where \(u_{g}\in G_{\bar{1}}\) and \(u_{h}\in H_{\bar{1}}\), and
\[M\boxtimes N\simeq\operatorname{Ind}_{L}^{G\times_{z}H}M_{\bar{0}}\boxtimes N _{\bar{0}}, \tag{3.69}\]
as \(\mathcal{O}(G\times_{z}H)\)-supermodules. By Lemma 3.13(i), \(M_{\bar{0}}\) is an \(\mathcal{O}\)-free indecomposable \(\mathcal{O}G_{\bar{0}}\)-module and \(N_{\bar{0}}\) is an \(\mathcal{O}\)-free indecomposable \(\mathcal{O}H_{\bar{0}}\)-module. Therefore, \(M_{\bar{0}}\boxtimes N_{\bar{0}}\) is \(\mathcal{O}\)-free and, by [**Ku\({}_{2}\)**, Proposition 1.1], is indecomposable as an \(\mathcal{O}(G_{\bar{0}}\times H_{\bar{0}})\)-module and hence as an \(\mathcal{O}L\)-module. (Note that in [**Ku\({}_{2}\)**] the algebras are defined over an algebraically closed field. However, that proof runs through for algebras defined over \(\mathcal{O}\), as long as the modules are \(\mathcal{O}\)-free.)
Lemma 3.13(ii) tells us that \(M_{\bar{0}}\ncong u_{g}M_{\bar{0}}\) as \(\mathcal{O}G_{\bar{0}}\)-modules and \(N_{\bar{0}}\ncong u_{h}N_{\bar{0}}\) as \(\mathcal{O}H_{\bar{0}}\)-modules. Therefore, all four indecomposable summands in (3.68) are pairwise non-isomorphic and it follows from (3.69) and [**W**, §5, Proposition 2] that \(M\boxtimes N\) is indecomposable as an \(\mathcal{O}(G\times_{z}H)\)-module.
Next, we note that \(M_{\bar{0}}\) has vertex and source a \(G\)-conjugate of the pair \((P,U)\). Similarly, \(N_{\bar{0}}\) has vertex and source an \(H\)-conjugate of the pair \((Q,V)\). Therefore, by [**Ku\({}_{2}\)**, Proposition 1.2], \(M_{\bar{0}}\boxtimes N_{\bar{0}}\) has vertex and source a \((G\times H)\)-conjugate of the pair \((P\times Q,U\boxtimes V)\), as an \(\mathcal{O}(G_{\bar{0}}\times H_{\bar{0}})\)-module. (Again, the reference to [**Ku\({}_{2}\)**] holds without complication for \(\mathcal{O}\)-free modules.) Since \(p\) is odd and \(\langle(z,z)\rangle\) has order 2, \(M_{\bar{0}}\boxtimes N_{\bar{0}}\) has vertex and source a \((G\times_{z}H)\)-conjugate of the pair \((P\times Q,U\boxtimes V)\), as an \(\mathcal{O}L\)-module. That \(M\boxtimes N\) has vertex \(P\times Q\) and source \(U\boxtimes V\) now follows from (3.68) and (3.69).
Finally, we observe that, if \(G=G_{\bar{0}}\) or \(H=H_{\bar{0}}\), the proof runs through in a very similar manner except that there will be only one or two summands in (3.68).
## 4. Double covers of symmetric groups
Throughout this section it is assumed that \(\mathbb{K}\) (and hence \(\mathcal{O}\)) contains a primitive \((2n!)^{\text{th}}\) root of unity. To be able to utilize our results from §3.7, we also assume that \(-1\) and \(2\) have square roots in \(\mathbb{K}\) (and hence \(\mathcal{O}\)). We note that this last condition is automatic if \(n\geq 4\) and so is not a terribly onerous assumption.
### The groups \(\tilde{\mathsf{S}}_{n}\) and \(\tilde{\mathsf{A}}_{n}\)
The double covers of \(\mathsf{S}_{n}\) are given by
\[\begin{split}\tilde{\mathsf{S}}_{n}^{+}:=\langle z,\ t_{1}, \ldots,t_{n-1}\ |\ z^{2}=1,\ t_{i}z=zt_{i},\ t_{i}^{2}=1,\\ t_{i}t_{j}=zt_{j}t_{i}\text{ if }|i-j|>1,\ (t_{i}t_{i+1})^{3}=1 \rangle,\\ \tilde{\mathsf{S}}_{n}^{-}:=\langle z,\ t_{1},\ldots,t_{n-1}\ |\ z^{2}=1,\ t_{i}z=zt_{i},\ t_{i}^{2}=z,\\ t_{i}t_{j}=zt_{j}t_{i}\text{ if }|i-j|>1,\ (t_{i}t_{i+1})^{3}=1 \rangle.\end{split} \tag{4.1}\]
We will use \(\tilde{\mathsf{S}}_{n}\) to simultaneously represent both double covers. The natural group homomorphism \(\tilde{\mathsf{S}}_{n}\to\mathsf{S}_{n}\), \(z\mapsto 1\), \(t_{i}\mapsto s_{i}\) is denoted by \(\pi_{n}\) (recall that we denote by \(s_{i}\) the transposition \((i,i+1)\in\mathsf{S}_{n}\)). We will sometimes consider \(\tilde{\mathsf{S}}_{n}\) acting on \([n]\) through \(\pi_{n}\). We denote the double cover of the alternating group \(\pi_{n}^{-1}(\mathsf{A}_{n})\) by \(\tilde{\mathsf{A}}_{n}\). We define \(|-|:\tilde{\mathsf{S}}_{n}\to\mathbb{Z}/2\) to be the unique group homomorphism with kernel \(\tilde{\mathsf{A}}_{n}\). If \(n>1\) we consider \(\mathcal{R}\tilde{\mathsf{S}}_{n}\) as a superalgebra corresponding to \((\tilde{\mathsf{S}}_{n})_{\bar{0}}=\tilde{\mathsf{A}}_{n}\), see §3.10. In the exceptional case \(n=1\), we consider \(\mathcal{R}\tilde{\mathsf{S}}_{n}\) as a purely even superalgebra.
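For example, when \(n=2\), the two double covers are already non-isomorphic: by (4.1),
\[\tilde{\mathsf{S}}_{2}^{+}=\langle z,t_{1}\mid z^{2}=t_{1}^{2}=1,\ t_{1}z=zt_{1}\rangle\cong\mathbb{Z}/2\times\mathbb{Z}/2,\qquad\tilde{\mathsf{S}}_{2}^{-}=\langle t_{1}\mid t_{1}^{4}=1\rangle\cong\mathbb{Z}/4,\]
where in the second case \(z=t_{1}^{2}\); in both cases \(\tilde{\mathsf{A}}_{2}=\{1,z\}\).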
We set
\[e_{z}:=(1-z)/2\in\mathcal{R}\tilde{\mathsf{S}}_{n}.\]
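Since \(z\) is central with \(z^{2}=1\), one checks directly that \(e_{z}\) is an even central idempotent and that
\[e_{z}^{2}=\tfrac{1}{4}(1-2z+z^{2})=\tfrac{1}{2}(1-z)=e_{z},\qquad ze_{z}=\tfrac{1}{2}(z-z^{2})=-e_{z};\]
the second identity explains why \(ze_{z}\mapsto-1\) under the isomorphisms (4.3) below.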
We will use the adjective 'spin' when referring to objects associated to \(\mathcal{R}\tilde{\mathsf{S}}_{n}e_{z}\) rather than the whole of \(\mathcal{R}\tilde{\mathsf{S}}_{n}\). For example, the spin characters of \(\tilde{\mathsf{S}}_{n}\) will refer to the ordinary characters \(\chi\) of \(\tilde{\mathsf{S}}_{n}\) satisfying \(\chi(z)=-\chi(1)\), while the spin blocks of \(\mathcal{O}\tilde{\mathsf{S}}_{n}\) refer to the blocks of \(\mathcal{O}\tilde{\mathsf{S}}_{n}e_{z}\). Note that \(\mathcal{R}\tilde{\mathsf{S}}_{n}e_{z}\) inherits the structure of a superalgebra from \(\mathcal{R}\tilde{\mathsf{S}}_{n}\) since the idempotent \(e_{z}\) is even.
If \(n>1\) then \(\mathbb{K}\tilde{\mathsf{S}}_{n}e_{z}\) is a split semisimple \(\mathbb{K}\)-superalgebra with superunit. If \(n=1\), then \(\mathbb{K}\tilde{\mathsf{S}}_{n}e_{z}\cong\mathbb{K}\) is split semisimple but is without a superunit. We can, therefore, still utilize many of the results from §3.9. However, several later proofs in this article require us to consider this case separately.
Let \(G\leq\tilde{\mathsf{S}}_{n}\) with \(z\in G\). Following the notation from §3.10, we set
\[G_{\bar{0}}:=G\cap\tilde{\mathsf{A}}_{n}\quad\text{and}\quad G_{\bar{1}}:=G \setminus\tilde{\mathsf{A}}_{n} \tag{4.2}\]
and treat \(\mathcal{R}Ge_{z}\) as a superalgebra in the appropriate way. (Note that, unlike in §3.10, we are not yet assuming that \(G_{\bar{0}}\) is a proper subgroup of \(G\).) If \(G\nleq\tilde{\mathsf{A}}_{n}\), then, due to Lemma 3.60, \(\mathbb{K}Ge_{z}\) is a split semisimple \(\mathbb{K}\)-superalgebra with superunit.
Recall the twisted group superalgebra \(\mathcal{T}_{n}\) from Example 3.6. We have the following isomorphisms of superalgebras
\[\begin{split}\mathcal{O}\tilde{\mathsf{S}}_{n}^{+}e_{z}& \stackrel{{\sim}}{{\longrightarrow}}\mathcal{T}_{n},\ t_{i}e_{z} \mapsto\,t_{i},\ ze_{z}\mapsto-1\\ \mathcal{O}\tilde{\mathsf{S}}_{n}^{-}e_{z}&\stackrel{{ \sim}}{{\longrightarrow}}\mathcal{T}_{n},\ t_{i}e_{z}\mapsto(-1)^{i}\sqrt{-1 }t_{i},\ ze_{z}\mapsto-1.\end{split} \tag{4.3}\]
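As a consistency check, note that the first isomorphism in (4.3) forces \(t_{i}^{2}=1\) in \(\mathcal{T}_{n}\), and then the second isomorphism respects the relation \(t_{i}^{2}=z\) of \(\tilde{\mathsf{S}}_{n}^{-}\), since
\[\big((-1)^{i}\sqrt{-1}\,t_{i}\big)^{2}=(\sqrt{-1})^{2}\,t_{i}^{2}=-1,\]
which is precisely the image of \((t_{i}e_{z})^{2}=t_{i}^{2}e_{z}=ze_{z}\).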
### Special subgroups
Consider the subgroups \(G\leq\tilde{\mathsf{S}}_{m}\) and \(H\leq\tilde{\mathsf{S}}_{n}\), both containing the canonical central element \(z\) (note we do not distinguish between the \(z\) in \(G\) and that in \(H\) as these elements will be identified anyway). As in §3.11, we set \(G\times_{z}H\) to be the free product \(G\star H\) of groups subject to this identification of \(z\)'s and the relation
\([g,h]=z^{|g||h|}\), for all \(g\in G\) and \(h\in H\). Equivalently, \(G\times_{z}H\) is isomorphic to the subgroup of \(\tilde{\mathsf{S}}_{m+n}\) generated by \(G\) and \(H\), where we view \(H\leq\tilde{\mathsf{S}}_{n}\) as a subgroup of \(\tilde{\mathsf{S}}_{m+n}\) via \(z\mapsto z\), \(t_{i}\mapsto t_{m+i}\).
If \(\mathtt{X}\subseteq[n]\), then \(\mathsf{S}_{\mathtt{X}}\) will signify the subgroup of \(\mathsf{S}_{n}\) consisting of all elements that fix \([n]\backslash\mathtt{X}\) pointwise and \(\tilde{\mathsf{S}}_{\mathtt{X}}\) will denote \(\pi_{n}^{-1}(\mathsf{S}_{\mathtt{X}})\leq\tilde{\mathsf{S}}_{n}\). Moreover, for disjoint \(\mathtt{X}_{1},\ldots,\mathtt{X}_{k}\subseteq[n]\), \(\tilde{\mathsf{S}}_{\mathtt{X}_{1},\ldots,\mathtt{X}_{k}}\) is the subgroup of \(\tilde{\mathsf{S}}_{n}\) generated by the \(\tilde{\mathsf{S}}_{\mathtt{X}_{i}}\)'s. In particular,
\[\tilde{\mathsf{S}}_{\mathtt{X}_{1},\ldots,\mathtt{X}_{k}}\cong\tilde{ \mathsf{S}}_{\mathtt{X}_{1}}\times_{z}\cdots\times_{z}\tilde{\mathsf{S}}_{ \mathtt{X}_{k}}\]
and
\[\mathcal{O}\tilde{\mathsf{S}}_{\mathtt{X}_{1},\ldots,\mathtt{X}_{k}}e_{z} \cong\mathcal{O}\tilde{\mathsf{S}}_{\mathtt{X}_{1}}e_{z}\otimes\cdots\otimes \mathcal{O}\tilde{\mathsf{S}}_{\mathtt{X}_{k}}e_{z}\cong\mathcal{T}_{| \mathtt{X}_{1}|}\otimes\cdots\otimes\mathcal{T}_{|\mathtt{X}_{k}|} \tag{4.4}\]
as superalgebras. We also define the corresponding subgroups of \(\tilde{\mathsf{A}}_{n}\):
\[\tilde{\mathsf{A}}_{\mathtt{X}}:=\tilde{\mathsf{S}}_{\mathtt{X}}\cap\tilde{ \mathsf{A}}_{n}\quad\text{and}\quad\tilde{\mathsf{A}}_{\mathtt{X}_{1},\ldots, \mathtt{X}_{k}}:=\tilde{\mathsf{S}}_{\mathtt{X}_{1},\ldots,\mathtt{X}_{k}} \cap\tilde{\mathsf{A}}_{n}.\]
In the important special case where \(n=n_{1}+\cdots+n_{k}\) and \(\mathtt{Y}_{l}=[n_{1}+\cdots+n_{l}]\backslash[n_{1}+\cdots+n_{l-1}]\) for \(l=1,\ldots,k\), we have the _standard Young subgroups_
\[\tilde{\mathsf{S}}_{n_{1},\ldots,n_{k}}:=\tilde{\mathsf{S}}_{\mathtt{Y}_{1}, \ldots,\mathtt{Y}_{k}}\cong\tilde{\mathsf{S}}_{n_{1}}\times_{z}\cdots\times_{ z}\tilde{\mathsf{S}}_{n_{k}}\leq\tilde{\mathsf{S}}_{n}\quad\text{and}\quad \tilde{\mathsf{A}}_{n_{1},\ldots,n_{k}}=\tilde{\mathsf{S}}_{n_{1},\ldots,n_{k} }\cap\tilde{\mathsf{A}}_{n}\leq\tilde{\mathsf{A}}_{n}. \tag{4.5}\]
If some of the \(n_{l}\)'s are equal to each other, we use the exponential notation, as for example for the standard Young subgroup \(\tilde{\mathsf{S}}_{r+kp,p^{d-k}}=\tilde{\mathsf{S}}_{r+kp,p,\ldots,p}\) defined in (4.16) below.
### Irreducible characters of \(\tilde{\mathsf{S}}_{n}\) and \(\tilde{\mathsf{A}}_{n}\)
For the following theorem, originally due to Schur [**S**], we adopt the notation of §3.9.
**Theorem 4.6**.: _We can identify \(\Lambda^{\mathbb{K}\tilde{\mathsf{S}}_{n}e_{z}}\) with \(\mathscr{P}_{0}(n)\), where \(\lambda\in\mathscr{P}_{0}(n)\) is even/odd (in the sense of SS3.9) if \(|\lambda|-h(\lambda)\) is even/odd._
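For example, \(\mathscr{P}_{0}(4)=\{(4),(3,1)\}\) and
\[|(4)|-h((4))=4-1=3,\qquad|(3,1)|-h((3,1))=4-2=2,\]
so \((4)\) is odd while \((3,1)\) is even.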
To label the irreducible spin characters of \(\tilde{\mathsf{S}}_{n}\), we now use (3.47), but note that for the moment, we are not being precise about distinguishing between \(\xi_{\lambda}^{+}\) and \(\xi_{\lambda}^{-}\) when \(\lambda\) is odd. To label the irreducible spin characters of \(\tilde{\mathsf{A}}_{n}\), if \(n=1\), then \(\tilde{\xi}_{(1)}\) will denote the unique element of \(\operatorname{Irr}(\mathbb{K}\tilde{\mathsf{A}}_{1}e_{z})\). For \(n>1\), we use the notation of Lemma 3.49. Again, for the moment, we are not being precise about distinguishing between \(\tilde{\xi}_{\lambda}^{+}\) and \(\tilde{\xi}_{\lambda}^{-}\) when \(\lambda\) is even.
If \(m,n>1\), we can label the irreducible characters of
\[\mathbb{K}(\tilde{\mathsf{S}}_{m}\times_{z}\tilde{\mathsf{S}}_{n})e_{z}\cong \mathbb{K}\tilde{\mathsf{S}}_{m}e_{z}\otimes\mathbb{K}\tilde{\mathsf{S}}_{n}e_ {z}\]
using Lemmas 3.51 and 3.55. If \(m=1\), then Lemma 3.55 no longer applies. However, to unify our \(m=1\) and \(m>1\) theory, we set
\[\xi_{(1),\lambda}^{(\pm)}:=\xi_{(1)}\boxtimes\xi_{\lambda}^{(\pm)}\in \operatorname{Irr}(\mathbb{K}(\tilde{\mathsf{S}}_{1}\times_{z}\tilde{\mathsf{S }}_{n})e_{z}),\]
for any \(\lambda\in\mathscr{P}_{0}(n)\). If \(n=1\) we proceed similarly to the case \(m=1\).
### Shifted tableaux and branching rules
If \(\mu\in\mathscr{P}_{0}(m)\) and \(\lambda\in\mathscr{P}_{0}(n)\) are such that \(\mu\subseteq\lambda\), we consider the shifted skew diagram \(\mathsf{sh}[\lambda\setminus\mu]:=\mathsf{sh}[\lambda]\setminus\mathsf{sh}[\mu]\) of \(\lambda\setminus\mu\). We set \(\mathbb{N}^{\prime}\) to be the totally ordered set \(\{1^{\prime}<1<2^{\prime}<2<\dots\}\). The letters \(1^{\prime},2^{\prime},3^{\prime},\dots\) are said to be _marked_. A _shifted tableau_\(\mathtt{T}\) of shape \(\lambda\setminus\mu\) is an assignment \(\mathtt{T}:\mathsf{sh}[\lambda\setminus\mu]\to\mathbb{N}^{\prime}\) satisfying
1. \(\mathtt{T}(i,j)\leq\mathtt{T}(i+1,j)\), \(\mathtt{T}(i,j)\leq\mathtt{T}(i,j+1)\), for all permissible \(i,j\).
2. Each column has at most one \(k\), for each \(k\in\mathbb{Z}_{>0}\).
3. Each row has at most one \(k^{\prime}\), for each \(k\in\mathbb{Z}_{>0}\).
We say \(\mathtt{T}\) has _content_ \(\nu=(\nu_{1},\nu_{2},\dots)\), where \(\nu_{k}=|\{(i,j)\mid\mathtt{T}(i,j)=k\text{ or }k^{\prime}\}|\). The _word_ \(w(\mathtt{T})\) of \(\mathtt{T}\) is the sequence obtained by reading the rows of \(\mathtt{T}\) from left to right, starting at the bottom row and working upwards. If \(w(\mathtt{T})=w_{1},w_{2},\dots,w_{n-m}\) and \(i\in\mathbb{Z}_{>0}\), we define
\[m_{i}(j)=\text{ multiplicity of }i\text{ among }w_{n-m-j+1},\dots,w_{n-m},\]
for \(0\leq j\leq n-m\) and
\[m_{i}(n-m+j)=m_{i}(n-m)+\text{ multiplicity of }i^{\prime}\text{ among }w_{1},\dots,w_{j},\]
for \(0<j\leq n-m\).
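To illustrate these definitions, take \(m=0\), \(n=4\), \(\mu=\varnothing\), \(\lambda=(3,1)\) and let \(\mathtt{T}\) be the shifted tableau of shape \(\lambda\) with first row \(1,1,1\) and second row \(2\). Then \(\mathtt{T}\) has content \(\nu=(3,1)\) and
\[w(\mathtt{T})=2,1,1,1,\]
reading the bottom row first. For instance, \(m_{1}(1)=1\) (the multiplicity of \(1\) among \(w_{4}\)) and \(m_{1}(4)=3\).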
The word \(w\) is said to satisfy the lattice property if, whenever \(m_{i}(j)=m_{i-1}(j)\), for permissible \(i\) and \(j\) with \(j<2(n-m)\), we have
\[\begin{aligned}w_{n-m-j}&\neq i,i^{\prime}&&\text{if }0\leq j<n-m,\\ w_{j-n+m+1}&\neq i-1,i^{\prime}&&\text{if }n-m\leq j<2(n-m).\end{aligned}\]
We write \(|w|\) to denote the word \(w\) but with all the \(k^{\prime}\)'s replaced by \(k\)'s, for all \(k\in\mathbb{Z}_{>0}\).
If \(\nu\in\mathscr{P}_{0}(n-m)\), then we define \(\mathfrak{f}_{\nu}(\lambda\setminus\mu)\) to be the number of shifted tableaux \(\mathtt{T}:\mathsf{sh}[\lambda\setminus\mu]\to\mathbb{N}^{\prime}\) of content \(\nu\) such that
1. \(w(\mathtt{T})\) satisfies the lattice property.
2. The leftmost \(i\) of \(|w(\mathtt{T})|\) is unmarked in \(w(\mathtt{T})\), for all \(1\leq i\leq h(\nu)\).
Recall the notation (3.56),(3.57).
**Theorem 4.7**.: [**St**, Theorems 8.1, 8.3] _Let \(m,n\in\mathbb{Z}_{>0}\), with \(m<n\), \(\mu\in\mathscr{P}_{0}(m)\) and \(\nu\in\mathscr{P}_{0}(n-m)\). Then_
\[\xi_{\mu,\nu}\uparrow_{\tilde{\mathsf{S}}_{m,n-m}}^{\tilde{\mathsf{S}}_{n}}=\sum_{\lambda\in\mathscr{P}_{0}(n)}\,\frac{\varepsilon_{\mu,\nu}}{\varepsilon_{\lambda}}\,2^{(h(\mu)+h(\nu)-h(\lambda))/2}\,\mathfrak{f}_{\nu}(\lambda\setminus\mu)\,\xi_{\lambda}.\]
_Furthermore, if \((\mu,\nu)\) and \(\lambda\) are both odd, then, unless \(\lambda=\mu\sqcup\nu\), the coefficients of \(\xi_{\lambda}^{+}\) and \(\xi_{\lambda}^{-}\) are equal in both \(\xi_{\mu,\nu}^{+}\uparrow_{\tilde{\mathsf{S}}_{m,n-m}}^{\tilde{\mathsf{S}}_{n}}\) and \(\xi_{\mu,\nu}^{-}\uparrow_{\tilde{\mathsf{S}}_{m,n-m}}^{\tilde{\mathsf{S}}_{n}}\)._
Proof.: In [**St**] this is stated for the character on the left-hand side being irreducible and not an irreducible supercharacter (as is the case here). This is the reason our right-hand side appears to be multiplied by \(\varepsilon_{\mu,\nu}^{2}\). There is also an exception mentioned in [**St**], namely when \(\lambda\) is odd and \(\lambda=\mu\sqcup\nu\). In this case the appropriate coefficient is \(1\). As noted in the proof in [**St**], we have \(\mathfrak{f}_{\nu}(\lambda\setminus\mu)=1\). Therefore, since \(\varepsilon_{\mu,\nu}=\varepsilon_{\lambda}\) and \(h(\lambda)=h(\mu)+h(\nu)\), our formula also gives coefficient \(1\).
### Blocks of \(\tilde{\mathsf{S}}_{n}\) and \(\tilde{\mathsf{A}}_{n}\)
In this section we describe the spin blocks of \(\tilde{\mathfrak{S}}_{n}\) and \(\tilde{\mathfrak{A}}_{n}\), their defect groups and Brauer correspondents. Parts (i),(ii) of the following theorem are [**H**, Theorem 1.1], while parts (iii),(iv) are [**Ca**, Theorem A, Corollary 26].
**Theorem 4.8**.: _Let \(n\in\mathbb{Z}_{>0}\)._
1. _The spin blocks of_ \(\mathcal{O}\tilde{\mathsf{S}}_{n}\) _are labelled by pairs_ \((\rho,d)\)_, where_ \(\rho\) _is a_ \(\bar{p}\)_-core and_ \(d\in\mathbb{N}\)_, with_ \(n=|\rho|+dp\)_. If_ \(d=0\) _and_ \(\rho\) _is odd then_ \((\rho,0)\) _labels two spin blocks of_ \(\mathcal{O}\tilde{\mathsf{S}}_{n}\) _with corresponding block idempotents_ \(\mathsf{e}_{\rho,0}^{+},\mathsf{e}_{\rho,0}^{-}\in\mathcal{O}\tilde{\mathsf{S}}_{n}e_{z}\)_. In all cases we denote the block idempotent (or sum of two block idempotents) corresponding to_ \((\rho,d)\) _by_ \(\mathsf{e}_{\rho,d}\in\mathcal{O}\tilde{\mathsf{S}}_{n}e_{z}\)_._
2. _The character_ \(\xi_{\lambda}^{(\pm)}\) _lies in the (sum of) block(s)_ \(\mathcal{O}\tilde{\mathsf{S}}_{n}\mathsf{e}_{\rho,d}\)_, where_ \(\rho\) _is the_ \(\bar{p}\)_-core of_ \(\lambda\)_. If_ \(d=0\) _and_ \(\rho\) _is odd, we choose the labeling so that_ \(\xi_{\rho}^{\pm}\) _lies in the block_ \(\mathcal{O}\tilde{\mathsf{S}}_{n}\mathsf{e}_{\rho,0}^{\pm}\)_._
_For the remainder of this theorem we fix a \(\bar{p}\)-core \(\rho\) and \(d\in\mathbb{N}\) with \(n=|\rho|+dp\), and set \(\mathtt{R}:=[|\rho|]\) and \(\mathtt{P}:=[n]\setminus\mathtt{R}\)._
* _If_ \(d=0\)_, each block of_ \(\mathcal{O}\tilde{\mathsf{S}}_{n}\mathsf{e}_{\rho,d}\) _has trivial defect group. If_ \(d>0\)_, any Sylow_ \(p\)_-subgroup_ \(\mathsf{D}\) _of_ \(\tilde{\mathsf{S}}_{\mathtt{P}}\leq\tilde{\mathsf{S}}_{n}\) _is a defect group of_ \(\mathcal{O}\tilde{\mathsf{S}}_{n}\mathsf{e}_{\rho,d}\)_._
* _If_ \(d=0\)_, then each block of_ \(\mathcal{O}\tilde{\mathsf{S}}_{n}\mathsf{e}_{\rho,d}\) _is its own Brauer correspondent. If_ \(d>0\)_, then, setting_ \(\mathsf{D}\) _to be as in (iii), the Brauer correspondent of_ \(\mathcal{O}\tilde{\mathsf{S}}_{n}\mathsf{e}_{\rho,d}\) _in_ \(N_{\tilde{\mathsf{S}}_{n}}(\mathsf{D})\) _is_ \(\mathcal{O}N_{\tilde{\mathsf{S}}_{n}}(\mathsf{D})\mathsf{e}_{\rho,0}\)_. Here, we view_ \(\mathsf{e}_{\rho,0}\) _as an element of_ \(\mathcal{O}N_{\tilde{\mathsf{S}}_{n}}(\mathsf{D})\) _via_ \(\mathsf{e}_{\rho,0}\in\mathcal{O}\tilde{\mathsf{S}}_{\mathtt{R}}{\hookrightarrow }\mathcal{O}N_{\tilde{\mathsf{S}}_{n}}(\mathsf{D})\)_. If_ \(\rho=\varnothing\)_, we interpret_ \(\mathsf{e}_{\varnothing,0}\) _as_ \(e_{z}\)_._
**Remark 4.9**.: The \(\mathcal{O}\tilde{\mathsf{S}}_{n}\mathsf{e}_{\rho,d}\)'s are precisely the superblocks of \(\mathcal{O}\tilde{\mathsf{S}}_{n}e_{z}\). That is, the finest decomposition of \(\mathcal{O}\tilde{\mathsf{S}}_{n}e_{z}\) into a direct sum of two-sided ideals that are invariant under the involution \(\sigma_{\mathcal{O}\tilde{\mathsf{S}}_{n}}\). This follows from the distribution of characters given in Theorem 4.8(ii). In particular, we always have \(\mathsf{e}_{\rho,d}\in\mathcal{O}\tilde{\mathsf{A}}_{n}e_{z}\).
The spin blocks of \(\mathcal{O}\tilde{\mathsf{A}}_{n}\) are described in [**Ke**, Proposition 3.16]:
**Theorem 4.10**.: _For each \(n>1\), \(\bar{p}\)-core \(\rho\) and \(d\in\mathbb{N}\) with \(n=|\rho|+dp\), \(\mathcal{O}\tilde{\mathsf{A}}_{n}\mathsf{e}_{\rho,d}\) is a single block of \(\mathcal{O}\tilde{\mathsf{A}}_{n}\) unless \(d=0\) and \(\rho\) is even. In this latter case \(\mathcal{O}\tilde{\mathsf{A}}_{n}\mathsf{e}_{\rho,d}\) is a direct sum of two blocks of \(\mathcal{O}\tilde{\mathsf{A}}_{n}\). If \(n=1\), then \(\mathcal{O}\tilde{\mathsf{A}}_{1}\mathsf{e}_{(1),0}=\mathcal{O}\tilde{\mathsf{ S}}_{1}\mathsf{e}_{(1),0}=\mathcal{O}e_{z}\) is a single block of \(\mathcal{O}\tilde{\mathsf{A}}_{1}\)._
_If \(d=0\), once again, the defect group is trivial. In all other cases the defect group of \(\mathcal{O}\tilde{\mathsf{A}}_{n}\mathsf{e}_{\rho,d}\) is the same as that of \(\mathcal{O}\tilde{\mathsf{S}}_{n}\mathsf{e}_{\rho,d}\)._
We set
\[B^{\rho,d}:=\mathcal{O}\tilde{\mathsf{S}}_{n}\mathsf{e}_{\rho,d}\quad\text{and }\quad B^{\rho,d}_{\bar{0}}:=\mathcal{O}\tilde{\mathsf{A}}_{n}\mathsf{e}_{\rho,d}\]
(it is easy to see that \(\mathcal{O}\tilde{\mathsf{A}}_{n}\mathsf{e}_{\rho,d}\) is indeed the purely even subalgebra of the superalgebra \(B^{\rho,d}\)). We will often assume \(d>0\) to ensure that blocks and superblocks coincide. However, the corresponding statements for trivial defect blocks are completely elementary.
Fix a \(\bar{p}\)-core \(\rho\) and a non-negative integer \(d\). Let \(r:=|\rho|\) and \(n:=r+dp\). As in Theorem 4.8, we set \(\mathtt{R}:=[r]\) and \(\mathtt{P}:=[n]\setminus\mathtt{R}\). For \(k=1,\ldots,d\) we also set \(\mathtt{P}_{k}:=[r+kp]\setminus[r+(k-1)p].\) In other words, we have:
\[\underbrace{1,\ldots,r}_{\mathtt{R}}\overbrace{r+1,\ldots,r+p}_{\mathtt{P}_{1} }\underbrace{r+p+1,\ldots,r+2p}_{\mathtt{P}_{2}}\cdots\underbrace{r+p(d-1)+1, \ldots,r+dp}_{\mathtt{P}_{d}} \tag{4.11}\]
With this notation, the following now follows easily from Theorem 4.8, cf. [**KL**, Proposition 5.2.13]:
**Lemma 4.12**.: _Let \(\mathsf{D}\) be the defect group of the block \(B^{\rho,d}\). Then \(\mathsf{D}\) is abelian if and only if \(d<p\). In this case we can choose \(\mathsf{D}=\mathsf{D}_{1}\times\cdots\times\mathsf{D}_{d}\), where each \(\mathsf{D}_{k}\) is a Sylow \(p\)-subgroup of \(\tilde{\mathsf{S}}_{\mathtt{P}_{k}}\)._
Let \(d<p\). We continue with the notation (4.11). Let \(\iota:\mathsf{S}_{d}\to\mathsf{S}_{\mathtt{P}}\) be the natural permutation action of \(\mathsf{S}_{d}\) on the \(\mathtt{P}_{k}\)'s. More precisely,
\[\iota(w)\cdot(r+(k-1)p+t)=r+(w(k)-1)p+t,\]
for all \(1\leq k\leq d\), \(1\leq t\leq p\) and \(w\in\mathsf{S}_{d}\). We choose a lift \(T_{w}\) of \(\iota(w)\) to \(\tilde{\mathsf{S}}_{\mathtt{P}}\), for each \(w\in\mathsf{S}_{d}\) and set \(T_{k}:=T_{s_{k}}\), for all \(1\leq k\leq d-1\).
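For example, if \(d=2\) and \(w=s_{1}\), then
\[\iota(s_{1})\cdot(r+t)=r+p+t\quad\text{and}\quad\iota(s_{1})\cdot(r+p+t)=r+t,\qquad\text{for all }1\leq t\leq p,\]
so \(\iota(s_{1})\) interchanges \(\mathtt{P}_{1}\) and \(\mathtt{P}_{2}\) pointwise.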
Note that every Sylow \(p\)-subgroup of \(\mathsf{S}_{\mathtt{P}_{k}}\) lifts uniquely and isomorphically to a Sylow \(p\)-subgroup of \(\tilde{\mathsf{S}}_{\mathtt{P}_{k}}\). We may, therefore, choose the \(\mathsf{D}_{k}\)'s in Lemma 4.12 such that \(T_{w}\mathsf{D}_{k}T_{w}^{-1}=\mathsf{D}_{w(k)}\), for all \(1\leq k\leq d\) and \(w\in\mathsf{S}_{d}\). Indeed, one can first construct Sylow \(p\)-subgroups of each of the \(\mathsf{S}_{\mathtt{P}_{k}}\)'s that get permuted by the \(\iota(w)\)'s and then lift to \(\tilde{\mathsf{S}}_{\mathtt{P}_{k}}\). By the construction of the \(T_{w}\)'s we also have \(T_{w}\tilde{\mathsf{S}}_{\mathtt{P}_{k}}T_{w}^{-1}=\tilde{\mathsf{S}}_{\mathtt{P}_{w(k)}}\), for all \(1\leq k\leq d\) and \(w\in\mathsf{S}_{d}\).
Note that \(\mathcal{O}\tilde{\mathsf{S}}_{\mathtt{P}_{1}}e_{z}\cong\mathcal{O}\tilde{\mathsf{S}}_{p}e_{z}\) via \(t_{r+m}e_{z}\mapsto t_{m}e_{z}\). We now fix isomorphisms between the \(\mathcal{O}\tilde{\mathsf{S}}_{\mathtt{P}_{k}}e_{z}\)'s. For each \(1<k\leq d\), we identify \(\mathcal{O}\tilde{\mathsf{S}}_{\mathtt{P}_{k}}e_{z}\) with \(\mathcal{O}\tilde{\mathsf{S}}_{\mathtt{P}_{1}}e_{z}\cong\mathcal{O}\tilde{\mathsf{S}}_{p}e_{z}\) via the isomorphism
\[\mathcal{O}\tilde{\mathsf{S}}_{\mathtt{P}_{1}}e_{z}\to\mathcal{O}\tilde{\mathsf{S}}_{\mathtt{P}_{k}}e_{z},\quad a\mapsto(-1)^{k|a|}T_{(1,k)}aT_{(1,k)}^{-1}. \tag{4.13}\]
(Note this does not depend on our choice of \(T_{(1,k)}\), since \(T_{(1,k)}\) is uniquely determined up to multiplication by \(z\).) Through these isomorphisms and using (4.4), we can identify the superalgebra
\[\mathcal{O}\tilde{\mathsf{S}}_{\mathtt{P}_{1},\ldots,\mathtt{P}_{d}}e_{z}\cong\mathcal{O}\tilde{\mathsf{S}}_{\mathtt{P}_{1}}e_{z}\otimes\cdots\otimes\mathcal{O}\tilde{\mathsf{S}}_{\mathtt{P}_{d}}e_{z}\]
with \((\mathcal{O}\tilde{\mathsf{S}}_{p}e_{z})^{\otimes d}\).
**Lemma 4.14**.: _There exist \(\kappa_{k}\in\mathcal{O}^{\times}\), for \(1\leq k\leq d-1\), satisfying the following properties:_
1. \((\kappa_{k}T_{k}e_{z})(a_{1}\otimes\cdots\otimes a_{d})(\kappa_{k}^{-1}T_{k}^{-1}e_{z})=(-1)^{\sum_{j\neq k,k+1}|a_{j}|}\,{}^{s_{k}}(a_{1}\otimes\cdots\otimes a_{d}),\) _for all_ \(1\leq k\leq d-1\) _and_ \(a_{1}\otimes\cdots\otimes a_{d}\in(\mathcal{O}\tilde{\mathsf{S}}_{p}e_{z})^{\otimes d}\)_._
2. \((\kappa_{k}T_{k}e_{z})^{2}=e_{z},\) _for all_ \(1\leq k\leq d-1\)_._
3. \((\kappa_{k}T_{k}e_{z})(\kappa_{l}T_{l}e_{z})=-(\kappa_{l}T_{l}e_{z})(\kappa_{ k}T_{k}e_{z}),\) _for all_ \(1\leq k,l\leq d-1\) _with_ \(|k-l|>1\)_._
4. \((\kappa_{k}T_{k}e_{z})(\kappa_{k+1}T_{k+1}e_{z})(\kappa_{k}T_{k}e_{z})=(\kappa _{k+1}T_{k+1}e_{z})(\kappa_{k}T_{k}e_{z})(\kappa_{k+1}T_{k+1}e_{z}),\) _for all_ \(1\leq k\leq d-2\)_._
Proof.: This is essentially contained in the proof of [**KL**, Proposition 5.2.13] but we outline the main points.
Property (i) does not depend on the \(\kappa_{k}\)'s. Indeed, if \(a_{1}\otimes\cdots\otimes a_{d}\in(\mathcal{O}\tilde{\mathsf{S}}_{p}e_{z})^{ \otimes d}\), we have
\[(\kappa_{k}T_{k}e_{z})(a_{1}\otimes\cdots\otimes a_{d})(\kappa_ {k}^{-1}T_{k}^{-1}e_{z})=T_{k}(a_{1}\otimes\cdots\otimes a_{d})T_{k}^{-1}\] \[= T_{(1,k)}T_{(1,k+1)}T_{(1,k)}(a_{1}\otimes\cdots\otimes a_{d})T_{ (1,k)}^{-1}T_{(1,k+1)}^{-1}T_{(1,k)}^{-1}.\]
Property (i) now follows from a direct calculation using (4.13).
Property (iii) also does not depend on the \(\kappa_{k}\)'s and follows immediately from (4.4).
Since \(T_{k}\) is a lift of \(\iota(s_{k})\), for each \(1\leq k\leq d-1\), either \(T_{k}^{2}=1\) or \(z\). Furthermore, for each \(1\leq k,l\leq d-1\), \(T_{k}\) is conjugate in \(\tilde{\mathsf{S}}_{\mathfrak{p}}\) to either \(T_{l}\) or \(zT_{l}\). Therefore, property (ii) holds with either all \(\kappa_{k}=1\) or all \(\kappa_{k}=\sqrt{-1}\).
Finally, since the \(\pi_{n}(T_{k})\)'s satisfy the braid relations, property (iv) holds up to a sign, for each \(1\leq k\leq d-2\). We may therefore replace some of the \(\kappa_{k}\)'s with \(-\kappa_{k}\) to ensure that property (iv) holds. Note that this last reassignment will not stop property (ii) being satisfied.
### Special subgroups and their blocks
Throughout the subsection we fix a \(\bar{p}\)-core \(\rho\) and an integer \(d\) satisfying \(0\leq d<p\) (the assumption \(d<p\) is to guarantee that we are in the abelian defect group case, see Lemma 4.12, although it is not always needed.)
We adopt the notation (4.11), in particular \(r:=|\rho|\) and \(n:=r+dp\). For \(0\leq k\leq d\), we define the following subgroups of \(\tilde{\mathsf{S}}_{n}\):
\[\mathsf{G}_{k} :=\tilde{\mathsf{S}}_{\mathsf{R}\cup\mathsf{P}_{1}\cup\cdots\cup \mathsf{P}_{k}}\cong\tilde{\mathsf{S}}_{r+kp}, \tag{4.15}\] \[\mathsf{H}_{k} :=\tilde{\mathsf{S}}_{\mathsf{R}\cup\mathsf{P}_{1}\cup\cdots\cup \mathsf{P}_{k},\mathsf{P}_{k+1},\ldots,\mathsf{P}_{d}}\cong\tilde{\mathsf{S}}_ {r+kp,p^{d-k}},\] (4.16) \[\mathsf{L}_{k} :=\tilde{\mathsf{S}}_{\mathsf{R},\mathsf{P}_{1},\ldots,\mathsf{P} _{k}}\cong\tilde{\mathsf{S}}_{r,p^{k}},\] (4.17) \[\mathsf{N}_{k} :=N_{\mathsf{G}_{k}}(\tilde{\mathsf{S}}_{\mathsf{P}_{1},\ldots, \mathsf{P}_{k}}), \tag{4.18}\]
and set \(\mathsf{G}:=\mathsf{G}_{d}=\tilde{\mathsf{S}}_{n}\), \(\mathsf{H}:=\mathsf{H}_{d-1}=\tilde{\mathsf{S}}_{n-p,p}\) (if \(d>0\)), \(\mathsf{L}:=\mathsf{L}_{d}=\tilde{\mathsf{S}}_{r,p^{d}}\) and \(\mathsf{N}:=\mathsf{N}_{d}\). Note that \(\mathsf{N}\) must permute \(\mathsf{P}_{1},\ldots,\mathsf{P}_{d}\). In particular, \(\mathsf{N}\leq\tilde{\mathsf{S}}_{n}\) is the subgroup generated by \(\mathsf{L}=\tilde{\mathsf{S}}_{\mathsf{R},\mathsf{P}_{1},\ldots,\mathsf{P}_{d}}\) and the \(T_{w}\)'s from §4.5.
Recalling the notation (4.2), we will also use the subgroups \(\mathsf{G}_{\bar{0}}=\tilde{\mathsf{A}}_{n}\), \(\mathsf{N}_{\bar{0}}=\mathsf{N}\cap\tilde{\mathsf{A}}_{n}\), etc. (not to be confused with \(\mathsf{G}_{0}\), \(\mathsf{N}_{0}\), etc.)
We make the following identifications using (4.4):
\[\begin{split}\mathcal{O}\mathsf{H}_{k}e_{z}&\cong \mathcal{O}\tilde{\mathsf{S}}_{\mathsf{R}\cup\mathsf{P}_{1}\cup\cdots\cup \mathsf{P}_{k}}e_{z}\otimes\mathcal{O}\tilde{\mathsf{S}}_{\mathsf{P}_{k+1}}e _{z}\otimes\cdots\otimes\mathcal{O}\tilde{\mathsf{S}}_{\mathsf{P}_{d}}e_{z}\\ \mathcal{O}\mathsf{L}_{k}e_{z}&\cong\mathcal{O} \tilde{\mathsf{S}}_{\mathsf{R}}e_{z}\otimes\mathcal{O}\tilde{\mathsf{S}}_{ \mathsf{P}_{1}}e_{z}\otimes\cdots\otimes\mathcal{O}\tilde{\mathsf{S}}_{ \mathsf{P}_{k}}e_{z},\end{split} \tag{4.19}\]
for all \(0\leq k\leq d\). Furthermore, for \(2\leq k\leq d\), we identify \(\mathcal{O}\tilde{\mathsf{S}}_{\mathsf{P}_{k}}e_{z}\) with \(\mathcal{O}\tilde{\mathsf{S}}_{\mathsf{P}_{1}}e_{z}\cong\mathcal{O}\tilde{ \mathsf{S}}_{p}e_{z}\) via (4.13). Through these identifications we set \(\mathsf{e}_{\varnothing,1}^{(k)}\in\mathcal{O}\tilde{\mathsf{S}}_{\mathsf{P}_ {k}}e_{z}\) to be the image of \(\mathsf{e}_{\varnothing,1}\in\mathcal{O}\tilde{\mathsf{S}}_{p}e_{z}\). We now define central idempotents
\[\mathsf{b}_{k} :=\mathsf{e}_{\rho,k}\in\mathcal{O}\mathsf{G}_{k}e_{z}, \tag{4.20}\] \[\mathsf{c}_{k} :=\mathsf{e}_{\rho,k}\otimes\mathsf{e}_{\varnothing,1}^{(k+1)} \otimes\cdots\otimes\mathsf{e}_{\varnothing,1}^{(d)}\in\mathcal{O}\mathsf{H} _{k}e_{z},\] (4.21) \[\mathsf{f}_{k} :=\mathsf{e}_{\rho,0}\otimes\mathsf{e}_{\varnothing,1}^{(1)} \otimes\cdots\otimes\mathsf{e}_{\varnothing,1}^{(k)}\in\mathcal{O}\mathsf{L} _{k}e_{z}, \tag{4.22}\]
for all \(0\leq k\leq d\) and set
\[\mathsf{b}:=\mathsf{b}_{d}=\mathsf{c}_{d}\in\mathcal{O}\mathsf{G}e_{z},\quad \mathsf{c}:=\mathsf{c}_{d-1}\in\mathcal{O}\mathsf{H}e_{z},\quad\mathsf{f}:= \mathsf{f}_{d}=\mathsf{c}_{0}\in\mathcal{O}\mathsf{L}e_{z}. \tag{4.23}\]
For \(0\leq k\leq l\leq d\), noting that the idempotents \(\mathsf{c}_{m}\) commute, we define the idempotent
\[\mathsf{c}_{k,l}:=\mathsf{c}_{k}\mathsf{c}_{k+1}\cdots\mathsf{c}_{l}, \tag{4.24}\]
which we consider as an element of \(\mathcal{O}\mathsf{G}e_{z}\). Occasionally, we will also need
\[\mathsf{H}^{\prime}_{k} :=\tilde{\mathsf{S}}_{\mathsf{R}\cup\mathsf{P}_{1}\cup\ldots\cup \mathsf{P}_{k},\mathsf{P}_{k+1},\ldots,\mathsf{P}_{d-1}}\cong\tilde{\mathsf{S }}_{r+kp,p^{d-k-1}}\leq\mathsf{G}_{d-1}, \tag{4.25}\] \[\mathsf{c}^{\prime}_{k} :=\mathsf{e}_{\rho,k}\otimes\mathsf{e}_{\varnothing,1}^{(k+1)} \otimes\cdots\otimes\mathsf{e}_{\varnothing,1}^{(d-1)}\in\mathcal{O}\mathsf{H} ^{\prime}_{k}e_{z},\] (4.26) \[\mathsf{c}^{\prime}_{k,l} :=\mathsf{c}^{\prime}_{k}\mathsf{c}^{\prime}_{k+1}\cdots\mathsf{c}^ {\prime}_{l} \tag{4.27}\]
for \(0\leq k\leq l\leq d-1\). We note that, by Remark 4.9, all the \(\mathsf{b}_{k}\)'s, \(\mathsf{f}_{k}\)'s, \(\mathsf{c}_{k}\)'s, \(\mathsf{c}^{\prime}_{k}\)'s, etc. are in \(\mathcal{O}\mathsf{G}_{\bar{0}}\).
Using Lemma 4.14 we can now precisely express \(\mathcal{O}\mathsf{N}\mathsf{f}\) in terms of a twisted wreath product. More precisely, since we know \(\mathsf{N}\) is generated by \(\mathsf{L}\) and the \(T_{w}\)'s, the isomorphism \(\mathcal{O}\mathsf{L}e_{z}\cong\mathcal{O}\tilde{\mathsf{S}}_{\mathsf{R}}e_{z}\otimes(\mathcal{O}\tilde{\mathsf{S}}_{p}e_{z})^{\otimes d}\) from (4.19) extends to an isomorphism
\[\mathcal{O}\mathsf{N}e_{z}\cong\mathcal{O}\tilde{\mathsf{S}}_{\mathsf{R}}e_{z}\otimes(\mathcal{O}\tilde{\mathsf{S}}_{p}e_{z}\wr_{\mathsf{s}}\mathcal{T}_{d}),\ T_{k}e_{z}\mapsto\kappa_{k}^{-1}(1\otimes\imath_{k}),\]
where the \(\kappa_{k}\)'s are those from Lemma 4.14. In particular, we have the isomorphisms of superalgebras
\[\begin{split}\mathcal{O}\mathsf{Lf}&\cong\mathcal{O} \tilde{\mathsf{S}}_{\mathsf{R}}\mathsf{e}_{\rho,0}\otimes(\mathcal{O}\tilde{ \mathsf{S}}_{p}\mathsf{e}_{\varnothing,1})^{\otimes d}\cong B^{\rho,0} \otimes(B^{\varnothing,1})^{\otimes d}\\ \mathcal{O}\mathsf{Nf}&\cong\mathcal{O}\tilde{ \mathsf{S}}_{\mathsf{R}}\mathsf{e}_{\rho,0}\otimes(\mathcal{O}\tilde{ \mathsf{S}}_{p}\mathsf{e}_{\varnothing,1}\wr_{\mathsf{s}}\mathcal{T}_{d}) \cong B^{\rho,0}\otimes(B^{\varnothing,1}\wr_{\mathsf{s}}\mathcal{T}_{d}). \end{split} \tag{4.28}\]
As in Lemma 4.12 and the subsequent comments, for \(k=1,\ldots,d\), we set \(\mathsf{D}_{k}\) to be a Sylow \(p\)-subgroup of \(\tilde{\mathsf{S}}_{\mathsf{P}_{k}}\) such that \(T_{w}\mathsf{D}_{k}T_{w}^{-1}=\mathsf{D}_{w(k)}\), for all \(k=1,\ldots,d\) and \(w\in\mathsf{S}_{d}\), and define
\[\mathsf{D}:=\mathsf{D}_{1}\times\cdots\times\mathsf{D}_{d}\leq\tilde{\mathsf{ S}}_{\mathsf{P}}. \tag{4.29}\]
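Note that, since \(|\mathsf{P}_{k}|=p\), each \(\mathsf{D}_{k}\) is cyclic of order \(p\) (being the unique lift of a Sylow \(p\)-subgroup of \(\mathsf{S}_{\mathsf{P}_{k}}\cong\mathsf{S}_{p}\)), so that
\[\mathsf{D}\cong C_{p}\times\cdots\times C_{p}\qquad(d\text{ factors})\]
is elementary abelian of order \(p^{d}\), in accordance with Lemma 4.12.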
**Lemma 4.30**.: _We have:_
1. \(C_{\mathsf{G}}(\mathsf{D})=\tilde{\mathsf{S}}_{\mathsf{R}}\times_{z}C_{ \tilde{\mathsf{S}}_{\mathsf{P}_{1}}}(\mathsf{D}_{1})\times_{z}\cdots\times_{ z}C_{\tilde{\mathsf{S}}_{\mathsf{P}_{d}}}(\mathsf{D}_{d})\)_._
2. \(N_{\mathsf{G}}(\mathsf{D})\) _is generated by_ \(\tilde{\mathsf{S}}_{\mathsf{R}}\times_{z}N_{\tilde{\mathsf{S}}_{\mathsf{P}_{1 }}}(\mathsf{D}_{1})\times_{z}\cdots\times_{z}N_{\tilde{\mathsf{S}}_{\mathsf{ P}_{d}}}(\mathsf{D}_{d})\) _and_ \(\{T_{w}\mid w\in\mathsf{S}_{d}\}\)_. In particular,_ \(N_{\mathsf{G}}(\mathsf{D})\leq\mathsf{N}\)_._
3. \(N_{\mathsf{G}\times\mathsf{L}}(\Delta\mathsf{D})\leq\mathsf{L}\times\mathsf{L}\) _and_ \(N_{\mathsf{G}\times\mathsf{H}}(\Delta\mathsf{D})\leq\mathsf{H}\times\mathsf{H}\)_._
Proof.: (i) Since \(\mathsf{D}_{k}\leq\tilde{\mathsf{A}}_{\mathsf{P}_{k}}\), for each \(k\), we have that \(\tilde{\mathsf{S}}_{\mathsf{R}}\) and all the \(C_{\tilde{\mathsf{S}}_{\mathsf{P}_{k}}}(\mathsf{D}_{k})\)'s centralize \(\mathsf{D}\). The reverse inclusion is easily seen by passing to \(\mathsf{S}_{n}\), as centralizers decompose according to an element's cycle decomposition.
(ii) Since \(\mathsf{D}\leq\mathsf{G}_{\bar{0}}\), it is easy to check that \(\tilde{\mathsf{S}}_{\mathsf{R}}\), all the \(N_{\tilde{\mathsf{S}}_{\mathsf{P}_{k}}}(\mathsf{D}_{k})\)'s and all the \(T_{w}\)'s normalize \(\mathsf{D}\). To show the reverse inclusion we again pass to \(\mathsf{S}_{n}\). By inspecting cycle types, if \(g\in N_{\mathsf{G}}(\mathsf{D})\), then it must permute the \(\mathsf{D}_{k}\)'s and consequently the \(\mathsf{P}_{k}\)'s. The claim follows.
(iii) Suppose \((g,h)\in N_{\mathsf{G}\times\mathsf{L}}(\Delta\mathsf{D})\) and \(1\leq k\leq d\). Then \({}^{(g,h)}\Delta\mathsf{D}_{k}\leq\Delta\mathsf{D}\). However, since \(h\in\mathsf{L}\), we must have \({}^{(g,h)}\Delta\mathsf{D}_{k}=\Delta\mathsf{D}_{k}\). Therefore,
\[g\in N_{\mathsf{G}}(\mathsf{D}_{k})\leq\tilde{\mathsf{S}}_{\mathsf{R}\cup \bigcup_{m\neq k}\mathsf{P}_{m},\mathsf{P}_{k}}\]
where the containment follows by passing to \(\mathsf{S}_{n}\) and considering cycle types. Taking all such \(k\)'s simultaneously gives that \(g\in\mathsf{L}\), as desired.
The second inclusion is proved similarly.
**Lemma 4.31**.: _Let \(0<d<p\)._
1. \(\mathcal{O}\mathsf{Gb}\) _is a block with defect group_ \(\mathsf{D}\)_._
2. \(\mathcal{O}\mathsf{Hc}\) _is a block with defect group_ \(\mathsf{D}\)_._
3. \(\mathcal{O}\mathsf{Lf}\) _and_ \(\mathcal{O}\mathsf{Nf}\) _are both blocks with defect group_ \(\mathsf{D}\)_._
Proof.: (i) This follows immediately from Theorem 4.8(i),(iii).
(ii) We first assume \(d=1\) and \(\rho\) is odd and set \(\mathsf{H}_{\mathsf{A}}:=\tilde{\mathsf{S}}_{\mathsf{R}}\times_{z}\tilde{\mathsf{A}}_{\mathsf{P}_{1}}\leq\mathsf{H}\). Now, \(\mathsf{e}_{\rho,d-1}=\mathsf{e}_{\rho,0}=\mathsf{e}_{\rho,0}^{+}+\mathsf{e}_{\rho,0}^{-}\) and \(\mathsf{e}_{\varnothing,1}^{(d)}=\mathsf{e}_{\varnothing,1}^{(1)}\). Furthermore, by Theorem 4.8(ii),(iii), \(\sigma_{\mathcal{O}\tilde{\mathsf{S}}_{\mathsf{R}}}(\mathsf{e}_{\rho,0}^{+})=\mathsf{e}_{\rho,0}^{-}\) and \(\mathcal{O}\tilde{\mathsf{S}}_{\mathsf{R}}\mathsf{e}_{\rho,0}^{+}\) is a block with trivial defect group and, by Theorem 4.10, \(\mathcal{O}\tilde{\mathsf{A}}_{\mathsf{P}_{1}}\mathsf{e}_{\varnothing,1}^{(1)}\) is a block with defect group \(\mathsf{D}_{1}=\mathsf{D}\). Moreover, as \(\tilde{\mathsf{S}}_{\mathsf{R}}\) and \(\tilde{\mathsf{A}}_{\mathsf{P}_{1}}\) commute and \(|\tilde{\mathsf{S}}_{\mathsf{R}}\cap\tilde{\mathsf{A}}_{\mathsf{P}_{1}}|=2\), \(\mathcal{O}\mathsf{H}_{\mathsf{A}}(\mathsf{e}_{\rho,0}^{+}\otimes\mathsf{e}_{\varnothing,1}^{(1)})\) is a block with defect group \(\mathsf{D}\).
If \(g\in\tilde{\mathsf{S}}_{\mathsf{P}_{1}}\setminus\tilde{\mathsf{A}}_{\mathsf{P}_{1}}\), then \({}^{g}\mathsf{e}_{\rho,0}^{+}=\mathsf{e}_{\rho,0}^{-}\). Therefore, Lemma 2.5 now gives that
\[\operatorname{Tr}_{\mathsf{H}_{\mathsf{A}}}^{\mathsf{H}}(\mathsf{e}_{\rho,0}^{+} \otimes\mathsf{e}_{\varnothing,1}^{(1)})=\mathsf{e}_{\rho,0}\otimes\mathsf{e}_{ \varnothing,1}^{(1)}=\mathsf{c}\]
is a block idempotent of \(\mathcal{O}\mathsf{H}\) and \(\mathcal{O}\mathsf{Hc}\) has defect group \(\mathsf{D}\).
We now assume that \(d>1\) or \(\rho\) is even. Then \(\mathcal{O}\mathsf{G}_{d-1}\mathsf{e}_{\rho,d-1}\) is a block with defect group \(\mathsf{D}_{1}\times\cdots\times\mathsf{D}_{d-1}\) and \(\mathcal{O}\tilde{\mathsf{S}}_{\mathsf{p}_{d}}\mathsf{e}_{\varnothing,1}^{(d)}\) a block with defect group \(\mathsf{D}_{d}\). The claim now follows from Remark 3.8 and Lemma 3.67.
(iii) We first prove \(\mathsf{f}\) is a block idempotent of \(\mathcal{O}\mathsf{L}\). We have already proved the claim for \(d=1\) in part (ii). For \(d>1\), note that, by induction on \(d\), \(\mathcal{O}\mathsf{L}_{d-1}\mathsf{f}_{d-1}\) is a block with defect group \(\mathsf{D}_{1}\times\cdots\times\mathsf{D}_{d-1}\) and so the claim holds, as in part (ii), by Remark 3.8 and Lemma 3.67.
To show that \(\mathcal{O}\mathsf{N}\mathsf{f}\) is a block we simply apply Lemmas 4.30(i) and 2.9. That it has defect group \(\mathsf{D}\) is immediate since \(p\nmid[\mathsf{N}:\mathsf{L}]\).
**Lemma 4.32**.: _Let \(0<d<p\). Then \(\mathcal{O}\mathsf{N}\mathsf{f}\) (resp. \(\mathcal{O}\mathsf{N}_{\bar{0}}\mathsf{f}\)) is the Brauer correspondent of \(\mathcal{O}\mathsf{G}\mathsf{b}\) (resp. \(\mathcal{O}\mathsf{G}_{\bar{0}}\mathsf{b}\)) in \(\mathsf{N}\) (resp. \(\mathsf{N}_{\bar{0}}\))._
Proof.: By Lemma 4.31, \(\mathcal{O}\mathsf{N}\mathsf{f}\) and \(\mathcal{O}\mathsf{G}\mathsf{b}\) both have defect group \(\mathsf{D}\) and, by Lemma 4.30(ii), \(N_{\mathsf{G}}(\mathsf{D})\leq\mathsf{N}\). We prove that \(\mathcal{O}\mathsf{N}\mathsf{f}\) and \(\mathcal{O}\mathsf{G}\mathsf{b}\) are Brauer correspondents by showing that they have a common Brauer correspondent in \(N_{\mathsf{G}}(\mathsf{D})\).
By Theorem 4.8(iv), the Brauer correspondent of \(\mathcal{O}\mathsf{G}\mathsf{b}\) in \(N_{\mathsf{G}}(\mathsf{D})\) is \(\mathcal{O}N_{\mathsf{G}}(\mathsf{D})\mathsf{e}_{\rho,0}\). Now,
\[\mathrm{Br}_{\mathsf{D}}(\mathsf{f}) =\mathrm{Br}_{\mathsf{D}_{1}\times\cdots\times\mathsf{D}_{d}}(\mathsf{e}_{\rho,0}\otimes\mathsf{e}_{\varnothing,1}^{(1)}\otimes\cdots\otimes\mathsf{e}_{\varnothing,1}^{(d)})\] \[=\bar{\mathsf{e}}_{\rho,0}\otimes\mathrm{Br}_{\mathsf{D}_{1}}(\mathsf{e}_{\varnothing,1}^{(1)})\otimes\cdots\otimes\mathrm{Br}_{\mathsf{D}_{d}}(\mathsf{e}_{\varnothing,1}^{(d)})\] \[=\bar{\mathsf{e}}_{\rho,0}\otimes\bar{e}_{z}\otimes\cdots\otimes\bar{e}_{z}=\bar{\mathsf{e}}_{\rho,0},\]
where the second equality follows from the decomposition of \(C_{\mathsf{G}}(\mathsf{D})\) in Lemma 4.30(i) and the third from Theorem 4.8(iv) applied to each \(\mathcal{O}\tilde{\mathsf{S}}_{\mathsf{P}_{k}}\mathsf{e}_{\varnothing,1}^{(k)}\). The claim for \(\mathcal{O}\mathsf{N}\mathsf{f}\) and \(\mathcal{O}\mathsf{G}\mathsf{b}\) follows.
For \(\mathcal{O}\mathsf{N}_{\bar{0}}\mathsf{f}\) and \(\mathcal{O}\mathsf{G}_{\bar{0}}\mathsf{b}\), we first note that, by Theorem 4.10, \(\mathsf{b}\) is a block idempotent of \(\mathcal{O}\mathsf{G}_{\bar{0}}\) and certainly \(N_{\mathsf{G}_{\bar{0}}}(\mathsf{D})\leq\mathsf{N}_{\bar{0}}\). Next, by Lemma 3.65, we have that \(\mathcal{O}\mathsf{N}\mathsf{f}\) and \(\mathcal{O}\mathsf{G}\mathsf{b}\) are super Green correspondents. In particular,
\[\mathcal{O}\mathsf{N}\mathsf{f}\mid\mathrm{Res}_{\mathsf{N}\times\mathsf{N}}^ {\mathsf{G}\times\mathsf{G}}(\mathcal{O}\mathsf{G}\mathsf{b})\quad\text{and} \quad\mathcal{O}\mathsf{G}\mathsf{b}\mid\mathrm{Ind}_{\mathsf{N}\times\mathsf{ N}}^{\mathsf{G}\times\mathsf{G}}(\mathcal{O}\mathsf{N}\mathsf{f})\]
as superbimodules. Taking the even part of both expressions gives that
\[\mathcal{O}\mathsf{N}_{\bar{0}}\mathsf{f}\mid\mathrm{Res}_{\mathsf{N}_{\bar{0}}\times\mathsf{N}_{\bar{0}}}^{\mathsf{G}_{\bar{0}}\times\mathsf{G}_{\bar{0}}}(\mathcal{O}\mathsf{G}_{\bar{0}}\mathsf{b})\quad\text{and}\quad\mathcal{O}\mathsf{G}_{\bar{0}}\mathsf{b}\mid\mathrm{Ind}_{\mathsf{N}_{\bar{0}}\times\mathsf{N}_{\bar{0}}}^{\mathsf{G}_{\bar{0}}\times\mathsf{G}_{\bar{0}}}(\mathcal{O}\mathsf{N}_{\bar{0}}\mathsf{f}).\]
The claim follows.
## 5. RoCK blocks
To keep in line with Section 4, for the remainder of this article, we assume that \(\mathbb{K}\) contains a primitive \((2n!)^{\mathrm{th}}\) root of unity. We will also assume that \(n\geq 4\), so, by the comments at the beginning of Section 4, it is automatic that \(-1\) and \(2\) have square roots in \(\mathbb{K}\). This is harmless since, when working with RoCK blocks, the assumption \(n\geq 4\) will hold automatically as we normally assume \(d\geq 1\) in the setup below.
### Rouquier cores and RoCK blocks
Let \(d\in\mathbb{N}\). We take the following definition from our paper [**KL**, §4.1a].
**Definition 5.1**.: A _\(d\)-Rouquier \(\bar{p}\)-core_ \(\rho\) is a \(\bar{p}\)-core such that \(\mathtt{Ab}_{\rho}\) has the following properties:
1. The \(1^{\mathrm{st}}\) runner has at least \(d\) beads.
2. The \((i+1)^{\rm th}\) runner has at least \(d-1\) more beads than the \(i^{\rm th}\) runner, for \(1\leq i\leq\ell-1\).
If \(\rho\) is a \(d\)-Rouquier \(\bar{p}\)-core, we refer to \(B^{\rho,d}\) as a _RoCK block_ of _weight_ \(d\).
If \(d\geq 1\) and \(\rho\) is a \(d\)-Rouquier \(\bar{p}\)-core, then it is automatic that the \(i^{\rm th}\) runner of \(\mathtt{Ab}_{\rho}\) is empty, for \(\ell<i<p\).
We note that, for each \(d\), there exist infinitely many even and infinitely many odd \(d\)-Rouquier \(\bar{p}\)-cores, as one can add arbitrarily many beads onto the \(\ell^{\rm th}\) runner.
The following trivial observation will be very useful throughout this paper.
**Remark 5.2**.: If \(\rho\) is a \(d\)-Rouquier \(\bar{p}\)-core then \(\rho\) is a \(k\)-Rouquier \(\bar{p}\)-core for all \(k\leq d\).
The key properties of Rouquier \(\bar{p}\)-cores are contained in the following lemma, mostly taken from [**KL**, Lemma 4.1.2].
**Lemma 5.3**.: _Let \(d\in\mathbb{Z}_{>0}\), \(\rho\) a \(d\)-Rouquier \(\bar{p}\)-core and \(\lambda\in\mathscr{P}_{0}(\rho,d)\). Then \(\mathtt{Ab}_{\lambda}\) is obtained from \(\mathtt{Ab}_{\rho}\) by \(d\) consecutive elementary slides down on runners \(0,1,\ldots,\ell\). Moreover, if \(\mu\in\mathscr{P}_{0}(\rho,d-1)\) is such that \(\mu\subseteq\lambda\), then there exists \(0\leq i\leq\ell\) such that:_
1. \(\mathtt{Ab}_{\lambda}\) _is obtained from_ \(\mathtt{Ab}_{\mu}\) _by an elementary slide down on runner_ \(i\)_._
2. \(\mathsf{sh}[\lambda\setminus\mu]\) _is of the form_ \(\mathfrak{h}_{i}\) _(diagram omitted)._
3. \(\mu\) _and_ \(\lambda\) _have the same parity if and only if_ \(i=0\) _and_ \(\lambda=\mu\sqcup(p)\)_._
Proof.: The statement concerning \(\mathtt{Ab}_{\lambda}\) and \(\mathtt{Ab}_{\rho}\) is just [**KL**, Lemma 4.1.1(i)]. Part (i) is stated in Lemma 4.1.2 from the same paper. There it is stated for \(\lambda,\mu\in\mathscr{P}_{p}\) (\(p\)-strict partitions) instead of \(\mathscr{P}_{0}\), so the above is just a specific case. Part (ii) follows as a consequence of Remark 4.1.3, again from the same paper. Note that the box diagrams look slightly different in [**KL**, Remark 4.1.3], since it deals with non-shifted diagrams.
To prove part (iii) we note that, since the \(i^{\rm th}\) runner of \(\mathtt{Ab}_{\rho}\) has at least \(d\) beads on it, for all \(1\leq i\leq\ell\), \(\lambda\) must have the same length as \(\mu\), unless \(i=0\) and \(\lambda=\mu\sqcup(p)\). Since \(|\lambda|\) and \(|\mu|\) certainly have opposite parity, the claim follows.
### Induction and restriction of characters in RoCK blocks
As the characters of \(B^{\varnothing,1}\) and \(B^{\varnothing,1}_{\bar{0}}\) will be so important throughout this paper, we introduce special notation for them. Note that \(\mathscr{P}_{0}(\varnothing,1)=\{(p-j,j)\mid j\in I\}\). In view of Theorem 4.6, the partition \((p)\) is even, while the partitions \((p-j,j)\), with \(1\leq j\leq\ell\), are odd.
Notationally, we often identify any such partition \((p-j,j)\) with \(j\). For example, recalling the notation of §§3.9, 4.3, especially (3.56) and (3.57), we denote
\[\begin{split}&\xi_{j}:=\xi_{(p-j,j)},\ \xi_{j}^{\pm}:=\xi_{(p-j,j)}^{\pm},\ \tilde{\xi}_{j}^{(\pm)}:=\tilde{\xi}_{(p-j,j)}^{(\pm)},\ \xi_{\mu,j_{1},\ldots,j_{k}}:=\xi_{\mu,(p-j_{1},j_{1}),\ldots,(p-j_{k},j_{k})}\\ &\varepsilon_{j}:=\varepsilon_{(p-j,j)},\ \varepsilon_{j_{1},\ldots,j_{k}}:=\varepsilon_{(p-j_{1},j_{1}),\ldots,(p-j_{k},j_{k})},\ \varepsilon_{\mu,j_{1},\ldots,j_{k}}:=\varepsilon_{\mu,(p-j_{1},j_{1}),\ldots,(p-j_{k},j_{k})},\end{split} \tag{5.4}\]
whenever these make sense. In particular, we have
\[\operatorname{Irr}_{\operatorname{super}}(B^{\varnothing,1}) =\{\xi_{0},\xi_{1},\dots,\xi_{\ell}\}, \tag{5.5}\] \[\operatorname{Irr}(B^{\varnothing,1}) =\{\xi_{0},\xi_{1}^{\pm},\dots,\xi_{\ell}^{\pm}\},\] (5.6) \[\operatorname{Irr}(B^{\varnothing,1}_{\bar{0}}) =\{\tilde{\xi}_{0}^{\pm},\tilde{\xi}_{1},\dots,\tilde{\xi}_{\ell }\}. \tag{5.7}\]
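For example, for \(p=5\) (so that \(\ell=2\)) we have \(\mathscr{P}_{0}(\varnothing,1)=\{(5),(4,1),(3,2)\}\) and the lists above read
\[\operatorname{Irr}_{\operatorname{super}}(B^{\varnothing,1})=\{\xi_{0},\xi_{1},\xi_{2}\},\quad\operatorname{Irr}(B^{\varnothing,1})=\{\xi_{0},\xi_{1}^{\pm},\xi_{2}^{\pm}\},\quad\operatorname{Irr}(B^{\varnothing,1}_{\bar{0}})=\{\tilde{\xi}_{0}^{\pm},\tilde{\xi}_{1},\tilde{\xi}_{2}\},\]
with \(\xi_{j}\) labelled by the partition \((5-j,j)\).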
In Section 6 we will be more precise about distinguishing between \(\xi_{j}^{+}\) and \(\xi_{j}^{-}\), when \(j>0\).
Recall the notation used in Theorem 4.7, especially (3.56),(3.57). Thus \(\varepsilon_{\lambda}=1\) if \(\lambda\) is even (i.e. \(|\lambda|-h(\lambda)\) is even), and \(\varepsilon_{\lambda}=\sqrt{2}\) if \(\lambda\) is odd. In particular, \(\varepsilon_{j}=1\) if \(j=0\) and \(\varepsilon_{j}=\sqrt{2}\) if \(1\leq j\leq\ell\). Moreover, \(\varepsilon_{\mu,\lambda}=1\) if \((\mu,\lambda)\) is even (i.e. \(\mu\) and \(\lambda\) have the same parity), and \(\varepsilon_{\mu,\lambda}=\sqrt{2}\) if \((\mu,\lambda)\) is odd (i.e. \(\mu\) and \(\lambda\) have opposite parities).
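As an illustration of these conventions, take \(p=5\), so that \(\ell=2\): then \(\mathscr{P}_{0}(\varnothing,1)=\{(5),(4,1),(3,2)\}\), with \(|(5)|-h((5))=4\) even and \(|(4,1)|-h((4,1))=|(3,2)|-h((3,2))=3\) odd, so that \(\varepsilon_{0}=1\) and \(\varepsilon_{1}=\varepsilon_{2}=\sqrt{2}\).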
Recall also the notation of §§4.6, 2.5.
**Lemma 5.8**.: _Let \(d\geq 1\) and \(\rho\) be a \(d\)-Rouquier \(\bar{p}\)-core._
1. _If_ \(\mu\in\mathscr{P}_{0}(\rho,d-1)\) _and_ \(j\in I\) _then_ \[\xi_{\mu,j}\uparrow_{\mathsf{H},\mathsf{c}}^{\mathsf{G},\mathsf{b}}=\sum_{ \lambda\in\mathscr{P}_{0}^{\leq\ell-j}(\mu)^{+}}\frac{\varepsilon_{\mu,j} \varepsilon_{\mu,\lambda}\varepsilon_{j}}{\varepsilon_{\lambda}}\,\xi_{\lambda}.\]
2. _If_ \(\lambda\in\mathscr{P}_{0}(\rho,d)\) _then_ \[\xi_{\lambda}\downarrow_{\mathsf{H},\mathsf{c}}^{\mathsf{G},\mathsf{b}}=\sum_{ j\in I}\sum_{\mu\in\mathscr{P}_{0}^{\leq\ell-j}(\lambda)^{-}}\frac{\varepsilon_{ \lambda}\varepsilon_{\mu,\lambda}\varepsilon_{j}}{\varepsilon_{\mu,j}}\,\xi_{ \mu,j}.\]
_Furthermore, if \((\mu,j)\) and \(\lambda\) are both odd, then, unless \(j=0\) and \(\lambda=\mu\sqcup(p)\), the coefficients of \(\xi_{\lambda}^{+}\) and \(\xi_{\lambda}^{-}\) are equal in both \(\xi_{\mu,j}^{+}\uparrow_{\tilde{\mathsf{S}}_{\mathsf{R},\mathsf{P}}}^{\tilde{\mathsf{S}}_{n}}\) and \(\xi_{\mu,j}^{-}\uparrow_{\tilde{\mathsf{S}}_{\mathsf{R},\mathsf{P}}}^{\tilde{\mathsf{S}}_{n}}\)._
Proof.: Since, by Lemma 3.26, \((\mathsf{b}\mathcal{O}\mathsf{G}\mathsf{c})^{*}\simeq\mathsf{c}\mathcal{O}\mathsf{G}\mathsf{b}\), we can apply Lemma 3.58(ii) to see that parts (i) and (ii) are equivalent. We will prove part (i).
Let \(\mu\in\mathscr{P}_{0}(\rho,d-1)\), \(j\in I\) and suppose \(\xi_{\lambda}\) appears with non-zero coefficient in \(\xi_{\mu,j}\uparrow_{\mathsf{H},\mathsf{c}}^{\mathsf{G},\mathsf{b}}\). Since \(\xi_{\lambda}\) lies in the block \(\mathcal{O}\mathsf{G}\mathsf{b}\), we have \(\lambda\in\mathscr{P}_{0}(\rho,d)\). Furthermore, Theorem 4.7 dictates that \(\mu\subseteq\lambda\) and so, by Lemma 5.3(i), \(\mathtt{Ab}_{\lambda}\) is obtained from \(\mathtt{Ab}_{\mu}\) by an elementary slide down on runner \(i\), for some \(i\in I\), and \(\mathtt{sh}[\lambda\setminus\mu]\) is as in Lemma 5.3(ii).
Now
\[\mathfrak{f}_{(p-j,j)}(\mathfrak{h}_{i})=\begin{cases}1&\text{if $i+j\leq\ell$},\\ 0&\text{if $i+j>\ell$}.\end{cases}\]
Indeed, one can readily check that \(\mathfrak{f}_{(p-j,j)}(\mathfrak{h}_{i})=0\) if \(i+j>\ell\) and that, if \(i+j\leq\ell\), there is precisely one legal way of filling in \(\mathfrak{h}_{i}\) with content \((p-j,j)\) (diagram omitted).
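For instance, reading off the displayed formula: the case \(j=0\) gives \(\mathfrak{f}_{(p)}(\mathfrak{h}_{i})=1\) for every \(i\in I\), while the case \(j=\ell\) forces \(i=0\).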
Therefore, by Theorem 4.7, we have
\[\xi_{\mu,j}\uparrow_{\mathsf{H},\mathsf{c}}^{\mathsf{G},\mathsf{b}}=\sum_{ \lambda\in\mathscr{P}_{0}^{\leq\ell-j}(\mu)^{+}}\frac{\varepsilon_{\mu,j}}{ \varepsilon_{\lambda}}2^{(h(\mu)+h((p-j,j))-h(\lambda))/2}\xi_{\lambda},\]
for all \(\mu\in\mathscr{P}_{0}(\rho,d-1)\) and \(j\in I\).
To get the coefficient into the desired form, note that \(2^{h((p-j,j))/2}=\sqrt{2}\varepsilon_{j}\). Furthermore, \(h(\lambda)=h(\mu)\) unless \(\mathtt{Ab}_{\lambda}\) is obtained from \(\mathtt{Ab}_{\mu}\) by adding a new bead on runner \(0\), in which case \(h(\lambda)=h(\mu)+1\). By Lemma 5.3(iii), this case happens precisely when \(\varepsilon_{\lambda}=\varepsilon_{\mu}\). Therefore, \(2^{(h(\mu)-h(\lambda))/2}=\varepsilon_{\mu,\lambda}/\sqrt{2}\) and
\[2^{(h(\mu)+h((p-j,j))-h(\lambda))/2}=\big{(}\varepsilon_{\mu,\lambda}/\sqrt{2 }\big{)}\big{(}\sqrt{2}\varepsilon_{j}\big{)}=\varepsilon_{\mu,\lambda} \varepsilon_{j},\]
as desired.
The final statement follows from the corresponding statement in Theorem 4.7 once we observe that, by Lemma 5.3(i), \(\lambda=\mu\sqcup(p-j,j)\) if and only if \(j=0\) and \(\lambda=\mu\sqcup(p)\).
## 6. Weight one RoCK blocks
Recall that it is now assumed that \(\mathbb{K}\) contains a primitive \((2n!)^{\mathrm{th}}\) root of unity.
Throughout this section let \(\rho\) be a \(1\)-Rouquier \(\bar{p}\)-core and \(d=1\). We adopt the notation (4.11) and all the notation of §4.6. So
\[r=|\rho|,\ n=r+p,\ \mathtt{R}=[r],\ \mathtt{P}=[n]\setminus[r],\ \mathtt{G}=\tilde{ \mathsf{S}}_{n},\ \mathsf{L}=\mathsf{N}=\tilde{\mathsf{S}}_{\mathtt{R},\mathtt{P}} \cong\tilde{\mathsf{S}}_{r,p},\]
\[\mathsf{b}=\mathsf{e}_{\rho,1}\in\mathcal{OG},\ \mathsf{f}=\mathsf{e}_{\rho,0} \otimes\mathsf{e}_{\varnothing,1}^{(1)}\in\mathcal{ON}.\]
We identify \(\mathcal{ON}\mathsf{f}\) with \(B^{\rho,0}\otimes B^{\varnothing,1}\) via (4.28). By Lemma 4.31, \(\mathsf{D}=\mathsf{D}_{1}\) is simultaneously a defect group of \(\mathcal{O}\tilde{\mathsf{S}}_{\mathsf{P}}\mathsf{e}_{\varnothing,1}^{(1)}\cong B^{\varnothing,1}\) and of \(\mathcal{OG}\mathsf{b}=B^{\rho,1}\).
We recall the notation Irr and Irr\({}_{\mathrm{super}}\) from (3.48) and \(\xi_{j}^{(\pm)},\tilde{\xi}_{j}^{(\pm)},\xi_{\rho,j}^{(\pm)}\), etc. from (5.4).
### Brauer trees of weight one RoCK blocks
For \(j\in I\), we denote by
\[\rho^{j}\in\mathscr{P}_{0}(\rho,1) \tag{6.1}\]
the partition whose abacus \(\mathtt{Ab}_{\rho^{j}}\) is obtained from \(\mathtt{Ab}_{\rho}\) by sliding a bead down the \(j^{\mathrm{th}}\) runner.
**Lemma 6.2**.: _Let \(\rho\) be a \(1\)-Rouquier \(\bar{p}\)-core and \(d=1\)._
1. _If_ \(\rho\) _is even, there exists a Morita superequivalence between_ \(\mathcal{ON}\mathsf{f}\) _and_ \(B^{\varnothing,1}\) _with the corresponding bijection_ \[\mathrm{Irr}_{\mathrm{super}}(\mathcal{ON}\mathsf{f})\to\mathrm{Irr}_{ \mathrm{super}}(B^{\varnothing,1}),\ \xi_{\rho,j}\mapsto\xi_{j}.\]
2. _If_ \(\rho\) _is odd, there exists a Morita equivalence between_ \(\mathcal{O}\mathsf{N}\mathsf{f}\) _and_ \(B^{\varnothing,1}_{\bar{0}}\) _with the corresponding bijection_ \[\mathrm{Irr}(\mathcal{O}\mathsf{N}\mathsf{f})\to\mathrm{Irr}(B^{\varnothing,1}_{ \bar{0}}),\ \left\{\begin{array}{ll}\xi_{\rho,j}\mapsto\tilde{\xi}_{j}&\text{ if }1\leq j\leq\ell,\\ \xi^{\pm}_{\rho,0}\mapsto\tilde{\xi}^{\pm}_{0}.\end{array}\right.\] _for an appropriate choice of irreducible characters_ \((-)^{\pm}\)_._
_The Morita equivalences in (i) and (ii) can be chosen to have trivial source._
3. _The irreducible characters of_ \(B^{\varnothing,1}\) _can be labelled such that_ \(B^{\varnothing,1}\) _has Brauer tree:_ \[\xi^{+}_{\ell}-\xi^{+}_{\ell-1}-\cdots-\xi^{+}_{1}-\xi_{0}-\xi^{-}_{1}-\cdots-\xi^{-}_{\ell-1}-\xi^{-}_{\ell}\] _The irreducible characters of_ \(B^{\varnothing,1}_{\bar{0}}\) _can be labelled such that_ \(B^{\varnothing,1}_{\bar{0}}\) _has Brauer tree:_ \[\tilde{\xi}^{+}_{0}+\tilde{\xi}^{-}_{0}-\tilde{\xi}_{1}-\cdots-\tilde{\xi}_{\ell-1}-\tilde{\xi}_{\ell}\]
4. _If_ \(\rho\) _is odd, the irreducible characters of_ \(\mathcal{O}\mathsf{G}\mathsf{b}\) _can be labelled such that_ \(\mathcal{O}\mathsf{G}\mathsf{b}\) _has Brauer tree:_ \[\xi^{+}_{\rho^{\ell}}-\xi^{+}_{\rho^{\ell-1}}-\cdots-\xi^{+}_{\rho^{1}}-\xi_{\rho^{0}}-\xi^{-}_{\rho^{1}}-\cdots-\xi^{-}_{\rho^{\ell-1}}-\xi^{-}_{\rho^{\ell}}\]
5. _If_ \(\rho\) _is odd, the irreducible characters of_ \(\mathcal{O}\mathsf{G}\mathsf{b}\) _can be labelled such that_ \(\mathcal{O}\mathsf{G}\mathsf{b}\) _has Brauer tree:_ \[\xi^{+}_{\rho^{0}}+\xi^{-}_{\rho^{0}}-\xi_{\rho^{1}}-\cdots-\xi_{\rho^{\ell-1}}-\xi_{\rho^{\ell}}\]
Proof.: Suppose \(\rho\) is even. If \(r=1\), then \(\mathcal{O}\mathsf{N}\mathsf{f}=\mathcal{O}\tilde{\mathsf{S}}_{\mathsf{P}}\mathsf{e}^{(1)}_{\varnothing,1}\cong B^{\varnothing,1}\) and the Morita superequivalence is clear. Let \(r>1\). By Theorems 4.8 and 4.10, \(|B^{\rho,0}|\cong\mathcal{M}_{2m\times 2m}(\mathcal{O})\) and \(B^{\rho,0}_{\bar{0}}\cong\mathcal{M}_{m\times m}(\mathcal{O})\oplus\mathcal{M}_{m\times m}(\mathcal{O})\), for some \(m\in\mathbb{Z}_{>0}\). In particular, any primitive idempotent \(e\in B^{\rho,0}_{\bar{0}}\) remains primitive in \(B^{\rho,0}\). Viewing \(e\) as an element of \(\mathcal{O}\mathsf{N}\mathsf{f}\cong B^{\rho,0}\otimes B^{\varnothing,1}\), we have the isomorphism of superalgebras
\[B^{\varnothing,1}\stackrel{{\sim}}{{\longrightarrow}}e\mathcal{O} \mathsf{N}\mathsf{f}e\cong eB^{\rho,0}e\otimes B^{\varnothing,1},\ x\mapsto e \otimes x.\]
Furthermore, since \(B^{\rho,0}eB^{\rho,0}=B^{\rho,0}\), we have \((\mathcal{O}\mathsf{N}\mathsf{f})e(\mathcal{O}\mathsf{N}\mathsf{f})=\mathcal{ O}\mathsf{N}\mathsf{f}\) and so, by Lemma 3.10, \(e\mathcal{O}\mathsf{N}\mathsf{f}\) induces a Morita superequivalence between \(\mathcal{O}\mathsf{N}\mathsf{f}\) and \(B^{\varnothing,1}\).
If \(\rho\) is odd, then, by Theorem 4.8, \(|B^{\rho,0}|\cong\mathcal{M}_{m\times m}(\mathcal{O})\oplus\mathcal{M}_{m\times m}(\mathcal{O})\), for some \(m\in\mathbb{Z}_{>0}\). Furthermore, \(\sigma_{\mathcal{O}\tilde{\mathsf{S}}_{\mathsf{R}}}\) swaps these two factors. (In other words, \(B^{\rho,0}\cong\mathcal{Q}_{m}(\mathcal{O})\).) Let \(e\in B^{\rho,0}\) be a primitive idempotent. The above shows that \(e\sigma_{\mathcal{O}\tilde{\mathsf{S}}_{\mathsf{R}}}(e)=0\). Let \(u\in B^{\varnothing,1}_{\bar{1}}\cap(B^{\varnothing,1})^{\times}\). Viewing \(e\) as an element of \(\mathcal{O}\mathsf{N}\mathsf{f}\cong B^{\rho,0}\otimes B^{\varnothing,1}\), we have that \({}^{u}e=\sigma_{\mathcal{O}\tilde{\mathsf{S}}_{\mathsf{R}}}(e)\). Therefore, \(e(B^{\rho,0}\otimes B^{\varnothing,1})e=eB^{\rho,0}e\otimes B^{\varnothing,1}_{\bar{0}}\) and we have the isomorphism of algebras
\[B^{\varnothing,1}_{\bar{0}}\stackrel{{\sim}}{{\longrightarrow}}e \mathcal{O}\mathsf{N}\mathsf{f}e\cong eB^{\rho,0}e\otimes B^{\varnothing,1}_{ \bar{0}},\ x\mapsto e\otimes x.\]
Now,
\[B^{\rho,0}eB^{\rho,0}+B^{\rho,0}ueu^{-1}B^{\rho,0}=B^{\rho,0}(e+\sigma_{\mathcal{O}\tilde{\mathsf{S}}_{\mathsf{R}}}(e))B^{\rho,0}=B^{\rho,0}.\]
Therefore, \((\mathcal{O}\mathsf{N}\mathsf{f})e(\mathcal{O}\mathsf{N}\mathsf{f})=\mathcal{O} \mathsf{N}\mathsf{f}\) and we have shown that \(e\mathcal{O}\mathsf{N}\mathsf{f}\) induces a Morita equivalence between \(\mathcal{O}\mathsf{N}\mathsf{f}\) and \(B^{\varnothing,1}_{\bar{0}}\).
The bijections of characters given in parts (i) and (ii) are both just a consequence of the fact that \(e\mathcal{ON}\mathsf{f}\otimes_{\mathcal{ON}}?\) is a summand of the restriction \(\operatorname{Res}^{\tilde{\mathsf{S}}_{\mathsf{R},\mathsf{P}}}_{\tilde{\mathsf{S}}_{\mathsf{P}}}\). Therefore, by Lemma 3.55(ii), nothing other than \(\xi_{j}^{(\pm)}\), when \(\rho\) is even, or \(\tilde{\xi}_{j}^{(\pm)}\), when \(\rho\) is odd, can occur as an irreducible constituent of the image of \(\xi_{\rho}\otimes\xi_{j}\).
To show that the relevant bimodule always has trivial source we note that, in both cases, \(\mathcal{ON}\) has trivial source as an \((\mathcal{ON},\mathcal{ON})\)-bimodule and so \(e\mathcal{ON}\) is a direct sum of trivial source bimodules when considered as an \((\mathcal{OD},\mathcal{OD})\)-bimodule. The claim follows.
Parts (iii), (iv) and (v) all follow immediately from [**Mu**, Theorem 4.4].
### Morita superequivalences for weight one RoCK blocks
For the following lemma we treat \(\mathsf{b}\mathcal{O}\mathsf{G}\mathsf{f}\) as an \((\mathcal{OG}\mathsf{b},\mathcal{ON}\mathsf{f})\)-bisupermodule and \(\mathsf{f}\mathcal{O}\mathsf{G}\mathsf{b}\) as an \((\mathcal{ON}\mathsf{f},\mathcal{OG}\mathsf{b})\)-bisupermodule.
**Lemma 6.3**.: _Let \(\rho\) be a \(1\)-Rouquier \(\bar{p}\)-core and \(d=1\)._
1. _As a bisupermodule,_ \(\mathsf{b}\mathcal{O}\mathsf{G}\mathsf{f}\) _(resp._ \(\mathsf{f}\mathcal{O}\mathsf{G}\mathsf{b}\)_) has a unique non-projective, absolutely indecomposable summand_ \(U\) _(resp._ \(U^{*}\)_). Furthermore,_ \(U\) _and_ \(U^{*}\) _induce a stable superequivalence of Morita type between_ \(\mathcal{ON}\mathsf{f}\) _and_ \(\mathcal{OG}\mathsf{b}\)_._
2. _The_ \((\mathcal{OG}\mathsf{b},\mathcal{ON}\mathsf{f})\)_-bisupermodule_ \(\Omega^{\ell}_{\mathcal{OG}\mathsf{b}\otimes(\mathcal{ON}\mathsf{f})^{\mathrm{sop}}}(\mathsf{b}\mathcal{O}\mathsf{G}\mathsf{f})\) _induces a Morita superequivalence between_ \(\mathcal{ON}\mathsf{f}\) _and_ \(\mathcal{OG}\mathsf{b}\)_. Moreover, we can choose the labeling of_ \(\operatorname{Irr}(\mathcal{ON}\mathsf{f})\) _and_ \(\operatorname{Irr}(\mathcal{OG}\mathsf{b})\) _such that the corresponding bijection of irreducible characters is given by_ \[\xi_{\rho,0}\mapsto\xi_{\rho^{0}},\qquad\xi_{\rho,j}^{\pm}\mapsto\xi_{\rho^{j}}^{\pm}\ \ (1\leq j\leq\ell),\] _if_ \(\rho\) _is even, and_ \[\xi_{\rho,0}^{\pm}\mapsto\xi_{\rho^{0}}^{\pm},\qquad\xi_{\rho,j}\mapsto\xi_{\rho^{j}}\ \ (1\leq j\leq\ell),\] _if_ \(\rho\) _is odd._
3. _If_ \(\rho\) _is even then the bisupermodules_ \(\mathsf{b}\mathcal{O}\mathsf{G}\mathsf{f}\) _and_ \(\mathsf{f}\mathcal{O}\mathsf{G}\mathsf{b}\) _are absolutely indecomposable._
Proof.: By Lemma 4.32, \(\mathcal{OG}\mathsf{b}\) and \(\mathcal{ON}\mathsf{f}\) are Brauer correspondents. Since \(\mathsf{f}\in\mathcal{ON}_{\bar{0}}\) and \(\mathsf{b}\in\mathcal{OG}_{\bar{0}}\), using Lemma 3.65, we may set \(U\) to be the common super Green correspondent of \(\mathcal{ON}\mathsf{f}\) and \(\mathcal{OG}\mathsf{b}\) in \(\mathsf{G}\times\mathsf{N}\). In particular, \(U\) is isomorphic to the unique non-projective, absolutely indecomposable summand of \(\mathsf{b}\mathcal{O}\mathsf{G}\mathsf{f}\).
Now, by Lemma 3.26, \((\mathsf{b}\mathcal{O}\mathsf{G}\mathsf{f})^{*}\simeq\mathsf{f}\mathcal{O}\mathsf{G}\mathsf{b}\) and so, with Lemma 3.62 in mind, \(U^{*}\) is isomorphic to the unique non-projective, absolutely indecomposable summand of \(\mathsf{f}\mathcal{O}\mathsf{G}\mathsf{b}\). In particular, \(U^{*}\) is the super Green correspondent of both \(\mathcal{ON}\mathsf{f}\) and \(\mathcal{OG}\mathsf{b}\) in \(\mathsf{N}\times\mathsf{G}\).
By Theorem 3.63 and the fact that \(U\mathsf{f}=U\), \(\mathcal{OG}\mathsf{b}\) is isomorphic to the unique non-projective, absolutely indecomposable summand of \(U\otimes_{\mathcal{ON}}\mathsf{f}\mathcal{O}\mathsf{G}\mathsf{b}\). Since \(U^{*}\) is isomorphic to the unique non-projective, absolutely indecomposable summand of \(\mathsf{f}\mathcal{O}\mathsf{G}\mathsf{b}\), Lemma 2.3(ii) gives that \(\mathcal{OG}\mathsf{b}\) is isomorphic to the unique non-projective, absolutely indecomposable summand of \(U\otimes_{\mathcal{ON}}U^{*}\). So \(U\otimes_{\mathcal{ON}}U^{*}\simeq\mathcal{OG}\mathsf{b}\oplus P,\) for some projective \((\mathcal{OG}\mathsf{b},\mathcal{OG}\mathsf{b})\)-bisupermodule \(P\). Similarly, \(U^{*}\otimes_{\mathcal{OG}}U\simeq\mathcal{ON}\mathsf{f}\oplus Q\), for some projective \((\mathcal{ON}\mathsf{f},\mathcal{ON}\mathsf{f})\)-bisupermodule \(Q\). We have now shown part (i).
Until further notice we assume \(\rho\) is even. We address the case where \(\rho\) is odd at the end of this proof. By Lemma 6.2(i),(iii),(iv), \(\mathcal{OG}\mathsf{b}\) and \(\mathcal{ON}\mathsf{f}\) are both Morita equivalent to \(\mathsf{B}_{\ell}\), introduced in §2.4. Moreover, we may assume the corresponding bijections of characters are given by
\[\operatorname{Irr}(\mathcal{O}\mathsf{Nf})\to\operatorname{Irr}(\mathsf{B}_{ \ell}),\quad\xi_{\rho,0}\mapsto\chi_{0},\quad\xi_{\rho,j}^{\pm}\mapsto\chi_{j^ {\pm}}\ \ (1\leq j\leq\ell). \tag{6.4}\]
We use these Morita equivalences with the above assumed bijections on characters at multiple stages throughout the remainder of the proof.
Note that
\[\operatorname{Br}_{\mathsf{D}}(\mathsf{f})=\operatorname{Br}_{\mathsf{D}}( \mathsf{b})=\bar{\mathsf{e}}_{\rho,0}\in Z(\mathbb{F}C_{\mathsf{G}}(\mathsf{D })),\]
where the first equality follows from Lemma 4.32 and the second from Theorem 4.8(iv). Furthermore, \(\bar{\mathsf{e}}_{\rho,0}\) is primitive in \(Z(\mathbb{F}C_{\mathsf{G}}(\mathsf{D}))\), as, by Lemma 4.30(i), we have \(\mathbb{F}C_{\mathsf{G}}(\mathsf{D})\bar{\mathsf{e}}_{\rho,0}\cong\mathbb{F} \bar{\mathsf{S}}_{\mathsf{R}}\bar{\mathsf{e}}_{\rho,0}\otimes\mathbb{F} \mathsf{D}\), where \(\mathbb{F}\mathsf{D}\) is totally even. Now, by Lemma 4.30(ii),
\[N_{\mathsf{G}}(\mathsf{D},\bar{\mathsf{e}}_{\rho,0})/C_{\mathsf{G}}(\mathsf{D })=N_{\mathsf{N}}(\mathsf{D},\bar{\mathsf{e}}_{\rho,0})/C_{\mathsf{N}}( \mathsf{D})\cong N_{\bar{\mathsf{S}}_{\mathsf{P}}}(\mathsf{D})/C_{\bar{ \mathsf{S}}_{\mathsf{P}}}(\mathsf{D})\cong C_{2\ell}.\]
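Recall that, for a block with cyclic defect groups, the number of edges of its Brauer tree equals the order of the inertial quotient; here this order is \(2\ell\), matching the \(2\ell\) edges of the tree in Lemma 6.2(iv), and it is the source of the \(\Omega\)-period \(4\ell=2\cdot 2\ell\) appearing below.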
Therefore, by [\(\mathbf{L}_{4}\), 5.2,5.4] and [\(\mathbf{L}_{2}\), Proposition 5.1], there exists \(0\leq m<4\ell\) such that
\[U\simeq\Omega^{m}_{\mathcal{O}\mathsf{G}\mathsf{b}\otimes(\mathcal{O} \mathsf{Nf})^{\mathrm{sop}}}(V),\]
for some \((\mathcal{O}\mathsf{G}\mathsf{b},\mathcal{O}\mathsf{Nf})\)-bimodule \(V\) inducing a Morita equivalence between \(\mathcal{O}\mathsf{Nf}\) and \(\mathcal{O}\mathsf{G}\mathsf{b}\). Also
\[\Omega^{4\ell}_{\mathcal{O}\mathsf{G}\mathsf{b}\otimes(\mathcal{O}\mathsf{Nf })^{\mathrm{sop}}}(V)\simeq V.\]
(Note we need to know that \(\mathcal{O}\mathsf{Nf}\) and \(\mathcal{O}\mathsf{G}\mathsf{b}\) are already Morita equivalent to be able to apply [\(\mathbf{L}_{2}\), Proposition 5.1], as this result concerns stable and Morita auto-equivalences.) Via the discussion in §3.4, Lemma 3.25 tells us that \(V\) and \(V^{*}\) even induce a Morita superequivalence between \(\mathcal{O}\mathsf{Nf}\) and \(\mathcal{O}\mathsf{G}\mathsf{b}\). Therefore,
\[\begin{split} V^{*}\otimes_{\mathcal{O}\mathsf{G}\mathsf{b}}U& \simeq V^{*}\otimes_{\mathcal{O}\mathsf{G}\mathsf{b}}\Omega^{m}_{ \mathcal{O}\mathsf{G}\mathsf{b}\otimes(\mathcal{O}\mathsf{Nf})^{\mathrm{sop }}}(V)\simeq\Omega^{m}_{\mathcal{O}\mathsf{Nf}\otimes(\mathcal{O}\mathsf{Nf}) ^{\mathrm{sop}}}(V^{*}\otimes_{\mathcal{O}\mathsf{G}\mathsf{b}}V)\\ &\simeq\Omega^{m}_{\mathcal{O}\mathsf{Nf}\otimes(\mathcal{O} \mathsf{Nf})^{\mathrm{sop}}}(\mathcal{O}\mathsf{Nf}).\end{split} \tag{6.5}\]
By Lemma 6.2(i),(iii),(iv) and using the bijection of characters in (6.4), we may assume that
\[V\otimes_{\mathcal{O}\mathsf{Nf}}\xi_{\rho,0}=\xi_{\rho^{0}},\quad V\otimes_{ \mathcal{O}\mathsf{Nf}}\xi_{\rho,j}^{\pm}=\xi_{\rho^{j}}^{\pm}\ \ (1\leq j\leq\ell). \tag{6.6}\]
(One can think of this as fixing the labeling of the characters of \(\mathcal{O}\mathsf{G}\mathsf{b}\) given that of \(\mathcal{O}\mathsf{Nf}\).)
Now, by Lemmas 5.8 and 3.58(i), we have for all \(i\in I\)
\[\mathsf{b}\mathcal{O}\mathsf{G}\mathsf{f}\otimes_{\mathcal{O}\mathsf{Nf}}\xi_ {\rho,i}^{(\pm)}=\mathsf{b}\mathcal{O}\mathsf{G}\mathsf{c}\otimes_{\mathcal{O} \mathsf{H}}\xi_{\rho,i}^{(\pm)}=\sum_{j=0}^{\ell-i}\xi_{\rho^{j}}. \tag{6.7}\]
(Here and for the remainder of the proof, \(\xi_{\rho,i}^{(\pm)}\) denotes \(\xi_{\rho,i}^{\pm}\) if \(i\neq 0\) and \(\xi_{\rho,0}\) if \(i=0\).) In particular,
\[\mathsf{b}\mathcal{O}\mathsf{G}\mathsf{f}\otimes_{\mathcal{O}\mathsf{Nf}}\xi_ {\rho,\ell}^{+}=\mathsf{b}\mathcal{O}\mathsf{G}\mathsf{f}\otimes_{\mathcal{O} \mathsf{Nf}}\xi_{\rho,\ell}^{-}=\xi_{\rho^{0}}.\]
As \(U\) is isomorphic to the unique non-projective, absolutely indecomposable summand of \(\mathsf{b}\mathcal{O}\mathsf{G}\mathsf{f}\) and \(\xi_{\rho^{0}}\notin\mathbb{N}\operatorname{Prj}(\mathcal{O}\mathsf{G}\mathsf{ b})\), Lemma 2.3(i) now implies that
\[U\otimes_{\mathcal{O}\mathsf{Nf}}\xi_{\rho,\ell}^{+}=\xi_{\rho^{0}}.\]
Therefore, using (6.5) and (6.6),
\[\Omega^{m}_{\mathcal{O}\mathsf{Nf}\otimes(\mathcal{O}\mathsf{Nf})^{\mathrm{sop }}}(\mathcal{O}\mathsf{Nf})\otimes_{\mathcal{O}\mathsf{Nf}}\xi_{\rho,\ell}^{+}=V ^{*}\otimes_{\mathcal{O}\mathsf{G}\mathsf{b}}U\otimes_{\mathcal{O}\mathsf{Nf}} \xi_{\rho,\ell}^{+}=\xi_{\rho,0}.\]
Since \(\mathcal{O}\mathsf{N}\mathsf{f}\) is Morita equivalent to \(\mathsf{B}_{\ell}\) with the bijection of characters given in (6.4), Lemma 2.12(i) now implies that \(m=\ell\) or \(3\ell\). We can, in fact, take either \(m=\ell\) or \(m=3\ell\), since implicit in our application of [\(\mathbf{L}_{2}\), Proposition 5.1] is that we have a bijection of irreducible Brauer characters, \(\mathrm{IBr}(\mathcal{O}\mathsf{N}\mathsf{f})\to\mathrm{IBr}(\mathcal{O}\mathsf{G}\mathsf{b})\), induced by some Morita equivalence. Then \(V\) and \(m\) are unique with respect to the condition that \(V\) must induce this same bijection \(\mathrm{IBr}(\mathcal{O}\mathsf{N}\mathsf{f})\to\mathrm{IBr}(\mathcal{O}\mathsf{G}\mathsf{b})\). By the structure of the Brauer tree, there are clearly two possible such bijections and hence we can take \(m=\ell\) or \(m=3\ell\). We set \(m=3\ell\). Now,
\[V\simeq\Omega^{4\ell}_{\mathcal{O}\mathsf{G}\mathsf{b}\otimes(\mathcal{O}\mathsf{N}\mathsf{f})^{\mathrm{sop}}}(V)\simeq\Omega^{\ell}_{\mathcal{O}\mathsf{G}\mathsf{b}\otimes(\mathcal{O}\mathsf{N}\mathsf{f})^{\mathrm{sop}}}(U)\simeq\Omega^{\ell}_{\mathcal{O}\mathsf{G}\mathsf{b}\otimes(\mathcal{O}\mathsf{N}\mathsf{f})^{\mathrm{sop}}}(\mathsf{b}\mathcal{O}\mathsf{G}\mathsf{f}),\]
proving part (ii).
To show that \(\mathsf{b}\mathcal{O}\mathsf{G}\mathsf{f}\) is absolutely indecomposable it is enough to show that \(U\simeq\mathsf{b}\mathcal{O}\mathsf{G}\mathsf{f}\) which, in turn, follows once we have proved that
\[U\otimes_{\mathcal{O}\mathsf{N}\mathsf{f}}\xi^{(\pm)}_{\rho,i}=\mathsf{b} \mathcal{O}\mathsf{G}\mathsf{f}\otimes_{\mathcal{O}\mathsf{N}\mathsf{f}}\xi^ {(\pm)}_{\rho,i}=\sum_{j=0}^{\ell-i}\xi_{\rho^{j}},\]
for all \(i\in I\). By (6.6), this is equivalent to
\[V^{*}\otimes_{\mathcal{O}\mathsf{G}\mathsf{b}}U\otimes_{\mathcal{O}\mathsf{N} \mathsf{f}}\xi^{(\pm)}_{\rho,i}=\sum_{j=0}^{\ell-i}\xi_{\rho,j}, \tag{6.8}\]
for all \(i\in I\). Certainly \(U\oplus R\simeq\mathsf{b}\mathcal{O}\mathsf{G}\mathsf{f}\), for some projective \((\mathcal{O}\mathsf{G}\mathsf{b},\mathcal{O}\mathsf{N}\mathsf{f})\)-bisupermodule \(R\). Therefore, by Lemma 2.3(i), (6.7) and (6.6), we have
\[V^{*}\otimes_{\mathcal{O}\mathsf{G}\mathsf{b}}U\otimes_{\mathcal{O}\mathsf{N} \mathsf{f}}\xi^{(\pm)}_{\rho,i}\leq_{\mathrm{P}\mathrm{rj}(\mathcal{O}\mathsf{ N})}V^{*}\otimes_{\mathcal{O}\mathsf{G}\mathsf{b}}\mathsf{b}\mathcal{O} \mathsf{G}\mathsf{f}\otimes_{\mathcal{O}\mathsf{N}\mathsf{f}}\xi^{(\pm)}_{\rho,i}=\sum_{j=0}^{\ell-i}\xi_{\rho,j},\]
for all \(i\in I\). However, by (6.5) and Lemma 2.12(i), we also know that
\[V^{*}\otimes_{\mathcal{O}\mathsf{G}\mathsf{b}}U\otimes_{\mathcal{O}\mathsf{N }\mathsf{f}}\xi^{(\pm)}_{\rho,i}=\Omega^{3\ell}_{\mathcal{O}\mathsf{N}\mathsf{ f}\otimes(\mathcal{O}\mathsf{N}\mathsf{f})^{\mathrm{sop}}}(\mathcal{O} \mathsf{N}\mathsf{f})\otimes_{\mathcal{O}\mathsf{N}\mathsf{f}}\xi^{(\pm)}_{ \rho,i}\ \geq_{\mathrm{P}\mathrm{rj}(\mathcal{O}\mathsf{N})}\ \xi^{+}_{\rho,\ell-i},\xi^{-}_{\rho,\ell-i},\]
for all \(i\in I\). Therefore, claim (6.8) will follow once we have proved that, for each \(i\in I\), it is not possible to subtract a non-zero \(\chi\in\mathbb{N}\operatorname{Prj}(\mathcal{O}\mathsf{N}\mathsf{f})\) from \(\sum_{j=0}^{\ell-i}\xi_{\rho,j}\) and satisfy
\[\sum_{j=0}^{\ell-i}\xi_{\rho,j}-\chi\ \geq_{\mathrm{P}\mathrm{rj}(\mathcal{O} \mathsf{N})}\ \xi^{+}_{\rho,\ell-i},\xi^{-}_{\rho,\ell-i}.\]
For a contradiction, suppose such a \(\chi\) does exist. Now, by inspecting the Brauer tree of \(\mathsf{B}_{\ell}\) and using the bijection of characters in (6.4), \(\mathbb{N}\operatorname{Prj}(\mathcal{O}\mathsf{N}\mathsf{f})\) consists precisely of the \(\mathbb{N}\)-linear combinations of characters of the form
\[\xi^{(\pm)}_{\rho,k}+\xi^{(\pm)}_{\rho,k+1}, \tag{6.9}\]
for \(0\leq k\leq\ell-1\). Say \(\xi^{+}_{\rho,k}+\xi^{+}_{\rho,k+1}\) appears with positive coefficient in \(\chi\) when expressed as a linear combination of the characters in (6.9), for some \(0\leq k\leq\ell-i-1\). (The argument for \(\xi^{-}_{\rho,k}+\xi^{-}_{\rho,k+1}\) is completely analogous.) Then we must have
\[\sum_{j=0}^{\ell-i}\xi_{\rho,j}-(\xi^{+}_{\rho,k}+\xi^{+}_{\rho,k+1})-\xi^{+}_{\rho,\ell-i}\in\mathbb{N}\operatorname{Prj}(\mathcal{O}\mathsf{N}\mathsf{f})\]
and
\[\sum_{j=0}^{\ell-i}\xi_{\rho,j}-(\xi_{\rho,k}^{+}+\xi_{\rho,k+1}^{+})-\xi_{\rho, \ell-i}^{-}\in\mathbb{N}\operatorname{Prj}(\mathcal{O}\mathsf{N}\mathsf{f}).\]
The first expression immediately rules out \(k=\ell-i-1\), as otherwise \(\xi_{\rho,\ell-i}^{+}\) appears with negative coefficient. In all other cases, one of the two expressions above is the sum of the characters corresponding to two disconnected paths in the Brauer tree, both of odd length. (If \(k\equiv\ell-i\) modulo \(2\) it is the first expression, and if \(k\not\equiv\ell-i\) modulo \(2\) it is the second.) A sum of such characters can never lie in \(\mathbb{N}\operatorname{Prj}(\mathcal{O}\mathsf{N}\mathsf{f})\). We have now reached our desired contradiction and proved part (iii).
We now outline the proof for when \(\rho\) is odd. (Note that, in this case, we are not yet claiming that \(\mathsf{b}\mathcal{O}\mathsf{G}\mathsf{f}\) is indecomposable.) This time, by Lemma 6.2(ii),(iii),(v), \(\mathcal{O}\mathsf{G}\mathsf{b}\) and \(\mathcal{O}\mathsf{N}\mathsf{f}\) are both Morita equivalent to \(\mathsf{A}_{\ell}\), introduced in §2.4. We have
\[\operatorname{Br}_{\mathsf{D}}(\mathsf{f})=\operatorname{Br}_{\mathsf{D}}( \mathsf{b})=\bar{\mathsf{e}}_{\rho,0}=\bar{\mathsf{e}}_{\rho,0}^{+}+\bar{ \mathsf{e}}_{\rho,0}^{-}\in\mathbb{F}C_{\mathsf{G}}(\mathsf{D})\]
and
\[N_{\mathsf{G}}(\mathsf{D},\bar{\mathsf{e}}_{\rho,0}^{+})/C_{\mathsf{G}}( \mathsf{D})=N_{\mathsf{N}}(\mathsf{D},\bar{\mathsf{e}}_{\rho,0}^{+})/C_{ \mathsf{N}}(\mathsf{D})\cong N_{\tilde{\mathsf{A}}_{\mathsf{P}}}(\mathsf{D})/ C_{\tilde{\mathsf{A}}_{\mathsf{P}}}(\mathsf{D})\cong C_{\ell}.\]
Once again, we use [\(\mathbf{L}_{4}\), 5.2,5.4] and [\(\mathbf{L}_{2}\), Proposition 5.1] to prove that there exists \(0\leq m<2\ell\) such that \(U\simeq\Omega^{m}_{\mathcal{O}\mathsf{G}\mathsf{b}\otimes(\mathcal{O}\mathsf{ N}\mathsf{f})^{\mathrm{sop}}}(V)\), for some \((\mathcal{O}\mathsf{G}\mathsf{b},\mathcal{O}\mathsf{N}\mathsf{f})\)-bimodule \(V\) inducing a Morita equivalence between \(\mathcal{O}\mathsf{N}\mathsf{f}\) and \(\mathcal{O}\mathsf{G}\mathsf{b}\). Also, \(\Omega^{2\ell}_{\mathcal{O}\mathsf{G}\mathsf{b}\otimes(\mathcal{O}\mathsf{ N}\mathsf{f})^{\mathrm{sop}}}(V)\simeq V.\) As in the even case, we can choose the labeling of \(\operatorname{Irr}(\mathcal{O}\mathsf{N}\mathsf{f})\) and \(\operatorname{Irr}(\mathcal{O}\mathsf{G}\mathsf{b})\) to ensure that \(V\) induces the desired bijection \(\operatorname{Irr}(\mathcal{O}\mathsf{N}\mathsf{f})\to\operatorname{Irr}( \mathcal{O}\mathsf{G}\mathsf{b})\). We use Lemma 2.12(ii) to prove that \(m=\ell\). Therefore,
\[V\simeq\Omega^{2\ell}_{\mathcal{O}\mathsf{G}\mathsf{b}\otimes(\mathcal{O} \mathsf{N}\mathsf{f})^{\mathrm{sop}}}(V)\simeq\Omega^{\ell}_{\mathcal{O} \mathsf{G}\mathsf{b}\otimes(\mathcal{O}\mathsf{N}\mathsf{f})^{\mathrm{sop}}}( U)\simeq\Omega^{\ell}_{\mathcal{O}\mathsf{G}\mathsf{b}\otimes(\mathcal{O}\mathsf{N} \mathsf{f})^{\mathrm{sop}}}(\mathsf{b}\mathcal{O}\mathsf{G}\mathsf{f}),\]
as desired.
### The bisupermodule \(\mathbf{M}\)
Throughout the remainder of the article we set \(\mathbf{M}\) to be the \((B^{\varnothing,1},B^{\varnothing,1})\)-bisupermodule
\[\mathbf{M}:=\Omega^{\ell}_{B^{\varnothing,1}\otimes(B^{\varnothing,1})^{ \mathrm{sop}}}(B^{\varnothing,1}). \tag{6.10}\]
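Here, as elsewhere, \(\Omega\) denotes the Heller (syzygy) operator: \(\Omega_{A}(V)\) is the kernel of a projective cover \(P_{V}\twoheadrightarrow V\), and \(\Omega^{\ell}\) is its \(\ell\)-fold iterate.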
It is well known that \(\mathbf{M}\) is absolutely indecomposable. For example, one can apply [\(\mathbf{L}_{4}\), 5.2,5.4] and [\(\mathbf{L}_{2}\), Proposition 5.1] to show that \(\Omega^{4\ell}_{B^{\varnothing,1}\otimes(B^{\varnothing,1})^{\mathrm{sop}}}(B^{\varnothing,1})\simeq B^{\varnothing,1}\) and use that \(\Omega\) commutes with direct sums. We will also need the fact that
\[\mathbf{M}_{\bar{0}}\simeq\Omega^{\ell}_{B_{\bar{0}}^{\varnothing,1}\otimes(B _{\bar{0}}^{\varnothing,1})^{\mathrm{sop}}}(B_{\bar{0}}^{\varnothing,1}), \tag{6.11}\]
as \((B_{\bar{0}}^{\varnothing,1},B_{\bar{0}}^{\varnothing,1})\)-bimodules and that \(\mathbf{M}_{\bar{0}}\) is indecomposable, again as a \((B_{\bar{0}}^{\varnothing,1},B_{\bar{0}}^{\varnothing,1})\)-bimodule. First note that, as a consequence of Corollary 3.17,
\[\mathbf{M}_{\bar{0}}\simeq\Omega^{\ell}_{(B^{\varnothing,1}\otimes(B^{ \varnothing,1})^{\mathrm{sop}})_{\bar{0}}}(B_{\bar{0}}^{\varnothing,1}), \tag{6.12}\]
as \((B^{\varnothing,1}\otimes(B^{\varnothing,1})^{\mathrm{sop}})_{\bar{0}}\)-modules. Next, using the notation of (2.8), we have that \(p\nmid[(\tilde{\mathsf{S}}_{p}\times\tilde{\mathsf{S}}_{p})_{\tilde{\mathsf{S}}_{p}/\tilde{\mathsf{A}}_{p}}:\tilde{\mathsf{A}}_{p}\times\tilde{\mathsf{A}}_{p}]=2\). Therefore, \(\operatorname{Res}^{(\tilde{\mathsf{S}}_{p}\times\tilde{\mathsf{S}}_{p})_{\tilde{\mathsf{S}}_{p}/\tilde{\mathsf{A}}_{p}}}_{\tilde{\mathsf{A}}_{p}\times\tilde{\mathsf{A}}_{p}}P\) is projective, for any projective \((B^{\varnothing,1}\otimes(B^{\varnothing,1})^{\mathrm{sop}})_{\bar{0}}\)-module \(P\). Moreover, for any indecomposable \((B^{\varnothing,1}\otimes(B^{\varnothing,1})^{\mathrm{sop}})_{\bar{0}}\)-module \(N\), if \(\operatorname{Res}^{(\tilde{\mathsf{S}}_{p}\times\tilde{\mathsf{S}}_{p})_{\tilde{\mathsf{S}}_{p}/\tilde{\mathsf{A}}_{p}}}_{\tilde{\mathsf{A}}_{p}\times\tilde{\mathsf{A}}_{p}}(N)\) has any non-zero, projective, indecomposable summand, then \(N\) is projective. Now (6.11) just follows from (6.12). That \(\mathbf{M}_{\bar{0}}\) is indecomposable
as a \((B_{\bar{0}}^{\varnothing,1},B_{\bar{0}}^{\varnothing,1})\)-bimodule is now proved in the same way as the analogous statement for \(\mathbf{M}\) was proved above.
**Proposition 6.13**.: _The bisupermodule \(\mathbf{M}\) satisfies the following properties:_
1. \(\mathbf{M}\) _is absolutely indecomposable with vertex_ \(\Delta\mathsf{D}\) _and source_ \(\Omega^{\ell}_{\mathcal{O}\Delta\mathsf{D}}(\mathcal{O})\)_. In particular,_ \(\mathbf{M}\) _has endopermutation source._
2. _The_ \((\mathcal{O}\mathsf{Nf},\mathcal{O}\mathsf{Nf})\)_-bisupermodule_ \(B^{\rho,0}\boxtimes\mathbf{M}\) _is absolutely indecomposable with vertex_ \(\Delta\mathsf{D}\)_. Moreover,_ \(\mathsf{b}\mathcal{O}\mathsf{G}\mathsf{f}\otimes_{\mathcal{O}\mathsf{Nf}}(B^{\rho,0}\boxtimes\mathbf{M})\) _has a unique non-projective, absolutely indecomposable summand_ \(V\)_, and this summand induces a Morita superequivalence between_ \(\mathcal{O}\mathsf{Nf}\) _and_ \(\mathcal{O}\mathsf{Gb}\)_. Furthermore, for all_ \(i\in I\)_, we have_ \(V\otimes_{\mathcal{O}\mathsf{Nf}}\xi_{\rho,i}=\xi_{\rho^{i}}\)_._
3. \(\mathbf{M}\) _and_ \(\mathbf{M}^{*}\) _induce a stable auto-superequivalence of Morita type of_ \(B^{\varnothing,1}\)_._
4. _For all_ \(i\in I\)_, we have_ \[\mathbf{M}\otimes_{B^{\varnothing,1}}\xi_{i}=\mathbf{M}^{*}\otimes_{B^{ \varnothing,1}}\xi_{i}=\varepsilon_{i}^{2}\sum_{j=0}^{\ell-i}\xi_{j}.\]
Proof.: (i) Since \(B^{\varnothing,1}\) has vertex \(\Delta\mathsf{D}\) and trivial source, \(\mathbf{M}\) also has vertex \(\Delta\mathsf{D}\) with source isomorphic to \(\Omega^{\ell}_{\mathcal{O}\Delta\mathsf{D}}(\mathcal{O})\). It now follows from [\(\mathbf{L}_{6}\), Proposition 7.3.4] that \(\mathbf{M}\) has endopermutation source.
(ii) We first prove that \(B^{\rho,0}\boxtimes\mathbf{M}\) is absolutely indecomposable with vertex \(\Delta\mathsf{D}\). For \(r=1\), \(\mathcal{O}\mathsf{Nf}\cong B^{\varnothing,1}\) and we just apply part (i).
For \(\rho\) even with \(r>1\), by Theorem 4.8, \(B^{\rho,0}\) is an absolutely indecomposable \((B^{\rho,0},B^{\rho,0})\)-bisupermodule and so the claim follows from Remark 3.8 and Lemma 3.67.
For \(\rho\) odd, we look more carefully at the proof of Lemma 6.2(ii). If we set \(e\in B^{\rho,0}\) to be a primitive idempotent, then \(e\sigma_{\mathcal{O}\tilde{\mathsf{S}}_{\mathsf{R}}}(e)=0\) and \(e\mathcal{O}\mathsf{Nf}\otimes_{\mathcal{O}\mathsf{Nf}}?\) induces a Morita equivalence between \(\mathcal{O}\mathsf{Nf}\) and \(B_{\bar{0}}^{\varnothing,1}\). Therefore, to show indecomposability it is enough to show that \(e(B^{\rho,0}\boxtimes\mathbf{M})e\) is absolutely indecomposable as a \((B_{\bar{0}}^{\varnothing,1},B_{\bar{0}}^{\varnothing,1})\)-bimodule. Now, similarly to the proof of Lemma 6.2(ii),
\[\mathbf{M}_{\bar{0}}\to e(B^{\rho,0}\boxtimes\mathbf{M})e,\ m\mapsto e\otimes m\]
is an isomorphism of \((B_{\bar{0}}^{\varnothing,1},B_{\bar{0}}^{\varnothing,1})\)-bimodules and \(\mathbf{M}_{\bar{0}}\) is an indecomposable \((B_{\bar{0}}^{\varnothing,1},B_{\bar{0}}^{\varnothing,1})\)-bimodule, as noted in the comments preceding the Proposition.
We now prove that \(B^{\rho,0}\boxtimes\mathbf{M}\) has vertex \(\Delta\mathsf{D}\). By Theorem 4.10, \(B_{\bar{0}}^{\rho,0}\) has trivial defect and, by part (i), \(\mathbf{M}\) has vertex \(\Delta\mathsf{D}\). Therefore, by Remark 3.8 and Lemma 3.67, \(B_{\bar{0}}^{\rho,0}\boxtimes\mathbf{M}\) has vertex \(\Delta\mathsf{D}\), as a \((B_{\bar{0}}^{\rho,0}\otimes B^{\varnothing,1},B_{\bar{0}}^{\rho,0}\otimes B^{\varnothing,1})\)-bimodule. The claim follows.
By Lemma 6.3(i), \(\mathsf{b}\mathcal{O}\mathsf{G}\mathsf{f}\) induces a stable superequivalence of Morita type between \(\mathcal{O}\mathsf{Nf}\) and \(\mathcal{O}\mathsf{Gb}\). Therefore, \(\mathsf{b}\mathcal{O}\mathsf{G}\mathsf{f}\otimes_{\mathcal{O}\mathsf{Nf}}(B^{\rho,0}\boxtimes\mathbf{M})\) has a unique non-projective, absolutely indecomposable summand \(V\). Furthermore,
\[\begin{split}\mathsf{b}\mathcal{O}\mathsf{G}\mathsf{f}\otimes_{\mathcal{O}\mathsf{Nf}}(B^{\rho,0}\boxtimes\mathbf{M})\simeq&\ \mathsf{b}\mathcal{O}\mathsf{G}\mathsf{f}\otimes_{\mathcal{O}\mathsf{Nf}}(B^{\rho,0}\boxtimes\Omega^{\ell}_{B^{\varnothing,1}\otimes(B^{\varnothing,1})^{\mathrm{sop}}}(B^{\varnothing,1}))\\ \simeq&\ \mathsf{b}\mathcal{O}\mathsf{G}\mathsf{f}\otimes_{\mathcal{O}\mathsf{Nf}}\Omega^{\ell}_{\mathcal{O}\mathsf{Nf}\otimes(\mathcal{O}\mathsf{Nf})^{\mathrm{sop}}}(\mathcal{O}\mathsf{Nf})\\ \simeq&\ \Omega^{\ell}_{\mathcal{O}\mathsf{Gb}\otimes(\mathcal{O}\mathsf{Nf})^{\mathrm{sop}}}(\mathsf{b}\mathcal{O}\mathsf{G}\mathsf{f}\otimes_{\mathcal{O}\mathsf{Nf}}\mathcal{O}\mathsf{Nf})\oplus P\\ \simeq&\ \Omega^{\ell}_{\mathcal{O}\mathsf{Gb}\otimes(\mathcal{O}\mathsf{Nf})^{\mathrm{sop}}}(\mathsf{b}\mathcal{O}\mathsf{G}\mathsf{f})\oplus P,\end{split}\]
for some projective \((\mathcal{O}\mathsf{Gb},\mathcal{O}\mathsf{Nf})\)-bisupermodule \(P\). Here, the second isomorphism follows from Lemma 3.19 (unless \(r=1\), in which case it is actually an equality), as \(\Omega_{B^{\rho,0}\otimes(B^{\rho,0})^{\mathrm{sop}}}(B^{\rho,0})=\{0\}\), and the fourth isomorphism from the fact that \(\mathsf{b}\mathcal{O}\mathsf{G}\mathsf{f}\) induces a stable superequivalence between \(\mathcal{O}\mathsf{Nf}\) and \(\mathcal{O}\mathsf{Gb}\). So, \(V\simeq\Omega^{\ell}_{\mathcal{O}\mathsf{Gb}\otimes(\mathcal{O}\mathsf{Nf})^ {\mathrm{sop}}}(\mathsf{b}\mathcal{O}\mathsf{G}\mathsf{f})\) and all the remaining results now follow from Lemma 6.3(ii).
(iii) We now assume \(\rho\) is even with \(r>1\). Due to the comment following Definition 5.1, this is always possible. By Lemma 6.3(i),(iii), \(\mathsf{b}\mathcal{O}\mathsf{G}\mathsf{f}\) and \(\mathsf{f}\mathcal{O}\mathsf{Gb}\) are both absolutely indecomposable and induce a stable superequivalence of Morita type between \(\mathcal{O}\mathsf{Nf}\) and \(\mathcal{O}\mathsf{Gb}\). Therefore,
\[\mathsf{f}\mathcal{O}\mathsf{Gb}\otimes_{\mathcal{O}\mathsf{Gb}}\mathsf{b} \mathcal{O}\mathsf{G}\mathsf{f}\otimes_{\mathcal{O}\mathsf{Nf}}(B^{\rho,0} \boxtimes\mathbf{M})\]
has a unique non-projective, absolutely indecomposable summand isomorphic to \(B^{\rho,0}\boxtimes\mathbf{M}\). However, \(V\) from part (ii) is the unique non-projective, absolutely indecomposable summand of \(\mathsf{b}\mathcal{O}\mathsf{G}\mathsf{f}\otimes_{\mathcal{O}\mathsf{Nf}}(B^ {\rho,0}\boxtimes\mathbf{M})\) and so
\[B^{\rho,0}\boxtimes\mathbf{M}\simeq\mathsf{f}\mathcal{O}\mathsf{Gb}\otimes_{ \mathcal{O}\mathsf{Gb}}V. \tag{6.14}\]
Now,
\[B^{\rho,0}\boxtimes\mathbf{M}^{*}\simeq(B^{\rho,0})^{*}\boxtimes\mathbf{M}^{* }\simeq(B^{\rho,0}\boxtimes\mathbf{M})^{*}\simeq V^{*}\otimes_{\mathcal{O} \mathsf{Gb}}\mathsf{b}\mathcal{O}\mathsf{G}\mathsf{f},\]
where the first isomorphism follows from Lemma 3.26, the second from Lemma 3.24 and the third from Lemmas 3.22 and 3.26. Furthermore, since \(V\) induces a Morita superequivalence between \(\mathcal{O}\mathsf{Nf}\) and \(\mathcal{O}\mathsf{Gb}\), Lemmas 3.25 and 6.3(i),(iii) give that \(B^{\rho,0}\boxtimes\mathbf{M}\) and \(B^{\rho,0}\boxtimes\mathbf{M}^{*}\) induce a stable auto-superequivalence of Morita type of \(\mathcal{O}\mathsf{Nf}\).
We now set \(e\) to be a primitive idempotent in \(B^{\rho,0}_{\bar{0}}\), as in the proof of Lemma 6.2(i), where we showed that \(e\mathcal{O}\mathsf{Nf}\) and \(\mathsf{f}\mathcal{O}\mathsf{N}e\) induce a Morita superequivalence between \(\mathcal{O}\mathsf{Nf}\) and \(B^{\varnothing,1}\). We now have that
\[e\mathcal{O}\mathsf{Nf}\otimes_{\mathcal{O}\mathsf{Nf}}(B^{\rho,0}\boxtimes \mathbf{M})\otimes_{\mathcal{O}\mathsf{Nf}}\mathsf{f}\mathcal{O}\mathsf{N}e \simeq e(B^{\rho,0}\boxtimes\mathbf{M})e\simeq\mathbf{M}\]
and
\[e\mathcal{O}\mathsf{Nf}\otimes_{\mathcal{O}\mathsf{Nf}}(B^{\rho,0}\boxtimes \mathbf{M}^{*})\otimes_{\mathcal{O}\mathsf{Nf}}\mathsf{f}\mathcal{O}\mathsf{N}e \simeq e(B^{\rho,0}\boxtimes\mathbf{M}^{*})e\simeq\mathbf{M}^{*}\]
induce a stable auto-superequivalence of Morita type of \(B^{\varnothing,1}\), as desired.
(iv) We continue to assume that \(\rho\) is even with \(r>1\). We have already seen in part (iii) that \(\mathsf{f}\mathcal{O}\mathsf{Gb}\otimes_{\mathcal{O}\mathsf{Gb}}V\simeq B^ {\rho,0}\boxtimes\mathbf{M}\). Now,
\[\mathsf{f}\mathcal{O}\mathsf{Gb}\otimes_{\mathcal{O}\mathsf{Gb}}V\otimes_{ \mathcal{O}\mathsf{Nf}}\xi_{\rho,i}=\mathsf{f}\mathcal{O}\mathsf{Gb}\otimes_{ \mathcal{O}\mathsf{Gb}}\xi_{\rho^{i}}=\varepsilon_{i}^{2}\sum_{j=0}^{\ell-i} \xi_{\rho,j},\]
for all \(i\in I\), where the first equality holds by part (ii) and the second due to Lemma 5.8(ii). (Note that \(\varepsilon_{\rho^{i}}=\varepsilon_{i}\), see Lemma 6.2(iii),(iv).) Therefore,
\[(B^{\rho,0}\boxtimes\mathbf{M})\otimes_{\mathcal{O}\mathsf{Nf}}\xi_{\rho,i}=\varepsilon_{i}^{2}\sum_{j=0}^{\ell-i}\xi_{\rho,j},\]
for all \(i\in I\), and the claim follows from Lemma 3.59.
The claim for \(\mathbf{M}^{*}\) follows from Lemma 3.58(ii).
## 7. The bisupermodules \(\mathbf{X}\) and \(\mathbf{Y}\)
Throughout this section we set \(d\) to be an integer with \(1\leq d<p\) and \(\rho\) a \(d\)-Rouquier \(\bar{p}\)-core. We adopt all the notation of SS4.6, in particular, \(r=|\rho|\), \(n=r+dp\),
\[\mathsf{G}=\tilde{\mathsf{S}}_{n},\quad\mathsf{L}=\tilde{\mathsf{S}}_{\mathsf{ R},\mathsf{P}_{1},\ldots,\mathsf{P}_{d}},\quad\mathsf{N}=N_{\mathsf{G}}( \tilde{\mathsf{S}}_{\mathsf{P}_{1},\ldots,\mathsf{P}_{d}}),\quad\mathsf{H}= \tilde{\mathsf{S}}_{\mathsf{R}\cup\mathsf{P}_{1}\cup\cdots\cup\mathsf{P}_{d-1 },\mathsf{P}_{d}},\]
\[\mathsf{b}=\mathsf{e}_{\rho,d}\in\mathcal{O}\mathsf{G},\quad\mathsf{f}=\mathsf{ e}_{\rho,0}\otimes\mathsf{e}_{\varnothing,1}^{(1)}\otimes\cdots\otimes\mathsf{e}_{ \varnothing,1}^{(d)}\in\mathcal{O}\mathsf{L}e_{z},\quad\mathsf{c}=\mathsf{e}_{ \rho,d-1}\otimes\mathsf{e}_{\varnothing,1}^{(d)}\in\mathcal{O}\mathsf{H}e_{z},\]
and the defect group
\[\mathsf{D}=\mathsf{D}_{1}\times\cdots\times\mathsf{D}_{d}\]
is chosen as in (4.29). We identify \(\mathcal{O}\mathsf{L}\mathsf{f}\) with \(B^{\rho,0}\otimes(B^{\varnothing,1})^{\otimes d}\) and \(\mathcal{O}\mathsf{N}\mathsf{f}\) with \(B^{\rho,0}\otimes(B^{\varnothing,1}\wr_{\mathsf{s}}\mathcal{T}_{d})\) as in (4.28) via Lemma 4.14. We continue with all our assumptions on \(\mathcal{O}\) from Section 4.
### D-small subgroups
Recall that \(\mathsf{G}\) acts on \([n]\) via \(\pi_{n}\), see §4.1. In particular, \(\mathtt{R}\) is the set of fixed points of \(\mathsf{D}\). For \(Q\leq\mathsf{D}\), we write \(Q<_{\mathsf{s}}\mathsf{D}\) if the set of fixed points of \(Q\) on \([n]\) strictly contains \(\mathtt{R}\). In other words, \(Q<_{\mathsf{s}}\mathsf{D}\) if and only if \(Q\leq\mathsf{D}_{1}\times\cdots\times\hat{\mathsf{D}}_{k}\times\cdots\times\mathsf{D}_{d}\), for some \(1\leq k\leq d\). (Here, \(\hat{\mathsf{D}}_{k}\) means that \(\mathsf{D}_{k}\) is omitted from the direct product.)
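For example, when \(d=2\) the definition says that \(Q<_{\mathsf{s}}\mathsf{D}\) if and only if \(Q\leq\mathsf{D}_{1}\) or \(Q\leq\mathsf{D}_{2}\).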
Recall the notation (2.2).
**Lemma 7.1**.: _We have:_
1. _Let_ \(g\in\mathsf{G}\setminus\mathsf{N}\)_. Then_ \(\mathsf{D}\cap^{g}\mathsf{D}<_{\mathsf{s}}\mathsf{D}\)_. In particular, for any_ \((g_{1},g_{2})\in(\mathsf{G}\times\mathsf{G})\setminus(\mathsf{N}\times \mathsf{N})\)_, we have_ \(\Delta\mathsf{D}\cap^{(g_{1},g_{2})}\Delta\mathsf{D}=\Delta Q\)_, for some_ \(Q<_{\mathsf{s}}\mathsf{D}\)_._
2. _Let_ \((g,h)\in(\mathsf{G}\times\mathsf{N})\setminus(\mathsf{N}\times\mathsf{N})\)_. Then_ \((\mathsf{D}\times\mathsf{D})\cap{}^{(g,h)}\Delta\mathsf{D}=\Delta_{\varphi}Q\)_, for some_ \(Q<_{\mathsf{s}}\mathsf{D}\) _and_ \(\varphi:Q\to\mathsf{D}\)_, with_ \(\varphi(Q)<_{\mathsf{s}}\mathsf{D}\)_._
Proof.: (i) Let \(g\in\mathsf{G}\setminus\mathsf{N}\). Then, by Lemma 4.30(ii), we have \(g\notin N_{\mathsf{G}}(\mathsf{D})\), so \({}^{g^{-1}}\mathsf{D}_{k}\not\subseteq\mathsf{D}\) for some \(1\leq k\leq d\). We claim that \({}^{g}\mathsf{D}\cap\mathsf{D}\) fixes \(\mathsf{P}_{k}\) pointwise. Say \(\alpha\in{}^{g}\mathsf{D}\cap\mathsf{D}\) does not fix \(\mathsf{P}_{k}\) pointwise. Then, since \(\alpha\in\mathsf{D}\), we can write \(\alpha=h_{1}h\), for some \(h_{1}\in\mathsf{D}_{k}\setminus\{1\}\) and \(h\in\mathsf{D}_{1}\times\cdots\times\hat{\mathsf{D}}_{k}\times\cdots\times\mathsf{D}_{d}\). As \(h_{1}h\in{}^{g}\mathsf{D}\), we have \({}^{g^{-1}}{h_{1}}{}^{g^{-1}}h\in\mathsf{D}\). Therefore, \({}^{g^{-1}}h_{1}\in\mathsf{D}\), as \(\pi_{n}({}^{g^{-1}}h_{1})\) and \(\pi_{n}({}^{g^{-1}}h)\) have disjoint cycle decompositions. Since \(h_{1}\) generates \(\mathsf{D}_{k}\) (a cyclic group of order \(p\)), this contradicts \({}^{g^{-1}}\mathsf{D}_{k}\not\subseteq\mathsf{D}\).
For the second part we set \(Q:=\{x\in\mathsf{D}\mid(x,x)\in\Delta\mathsf{D}\cap{}^{(g_{1},g_{2})}\Delta \mathsf{D}\}.\) Certainly \(\Delta Q=\Delta\mathsf{D}\cap{}^{(g_{1},g_{2})}\Delta\mathsf{D}\). To prove that \(Q<_{\mathsf{s}}\mathsf{D}\), we apply the first part to the first or second coordinate depending on whether \(g_{1}\notin\mathsf{N}\) or \(g_{2}\notin\mathsf{N}\).
(ii) Certainly \((\mathsf{D}\times\mathsf{D})\cap{}^{(g,h)}\Delta\mathsf{D}=\Delta_{\varphi}Q\), for some \(Q\leq\mathsf{D}\) and \(\varphi:Q\to\mathsf{D}\). We just need to show that \(Q,\varphi(Q)<_{\mathsf{s}}\mathsf{D}\). Since \(g\notin\mathsf{N}\), it follows from part (i) that \(Q\leq\mathsf{D}\cap{}^{g}\mathsf{D}<_{\mathsf{s}}\mathsf{D}\). Now suppose \((x,y)\in(\mathsf{D}\times\mathsf{D})\cap{}^{(g,h)}\Delta\mathsf{D}=\Delta_{ \varphi}Q\). Then \(({}^{g^{-1}}x,{}^{h^{-1}}y)\in\Delta\mathsf{D}\). In particular, \({}^{g^{-1}}x={}^{h^{-1}}y\) and so \({}^{hg^{-1}}x=y\). So, since \(hg^{-1}\notin\mathsf{N}\), \(y\in\mathsf{D}\cap{}^{hg^{-1}}\mathsf{D}<_{\mathsf{s}}\mathsf{D}\), by part (i). As this holds for all such \(y\), we have \(\varphi(Q)<_{\mathsf{s}}\mathsf{D}\), as desired.
In the remainder of this section we will often encounter the following situation. Let \(\mathsf{D}\leq K_{1},K_{2}\leq\mathsf{G}\), with \(z\in K_{1},K_{2}\) and \(K_{1},K_{2}\not\subseteq\mathsf{G}_{\bar{0}}\), and let \(U,V\) be \((\mathcal{O}K_{1},\mathcal{O}K_{2})\)-bisupermodules.
If \(V\) is isomorphic to an absolutely indecomposable summand of \(U\) with vertex \(\Delta\mathsf{D}\) and all other indecomposable summands of \(U\), as an \((\mathcal{O}K_{1},\mathcal{O}K_{2})\)-bimodule, have vertex contained in some \(\Delta Q\), with \(Q<_{\mathsf{s}}\mathsf{D}\), then we write
\[V\,|_{\mathsf{D}}\,U. \tag{7.2}\]
If, instead, all other indecomposable summands of \(U\), as an \((\mathcal{O}K_{1},\mathcal{O}K_{2})\)-bimodule, have vertex contained in some \(\Delta_{\varphi}Q\), with \(Q,\varphi(Q)<_{\mathsf{s}}\mathsf{D}\) for some \(\varphi:Q\to\mathsf{D}\), then we write
\[V\,|^{\mathsf{D}}\,U. \tag{7.3}\]
Moreover, we refer to such \(\Delta_{\varphi}Q\) as \(\mathsf{D}\)_-small_.
### The bisupermodules \(\mathbf{M}_{\mathsf{L}}\) and \(\mathbf{M}_{\mathsf{N}}\)
Recall the \((B^{\varnothing,1},B^{\varnothing,1})\)-bisupermodule \(\mathbf{M}\) from (6.10). We have the \(((B^{\varnothing,1})^{\otimes d},(B^{\varnothing,1})^{\otimes d})\)-bisupermodule \(\mathbf{M}^{\boxtimes d}\). For \(1\leq k\leq d\), we set \((B^{\varnothing,1})^{(k)}:=\mathcal{O}\tilde{\mathsf{S}}_{\mathsf{P}_{k}}\mathsf{e}_{\varnothing,1}^{(k)}\) and identify each \((B^{\varnothing,1})^{(k)}\) with \(B^{\varnothing,1}\) via (4.13), as in §4.6. We denote by \(\mathbf{M}^{(k)}\) the \(k^{\text{th}}\) factor in \(\mathbf{M}^{\boxtimes d}\). That is, \(\mathbf{M}^{(k)}\) is a \(((B^{\varnothing,1})^{(k)},(B^{\varnothing,1})^{(k)})\)-bisupermodule that we identify with \(\mathbf{M}\). We denote by \(\mathbf{M}_{\mathsf{S}_{d}}^{\boxtimes d}\) the \((B^{\varnothing,1}\wr_{\mathsf{s}}\mathcal{T}_{d},B^{\varnothing,1}\wr_{\mathsf{s}}\mathcal{T}_{d})_{\mathsf{S}_{d}}\)-supermodule from Lemma 3.31.
Recalling the identification \(\mathcal{O}\mathsf{Lf}=B^{\rho,0}\otimes(B^{\varnothing,1})^{\otimes d}\), define the \((\mathcal{O}\mathsf{Lf},\mathcal{O}\mathsf{Lf})\)-bisupermodule
\[\mathbf{M}_{\mathsf{L}}:=B^{\rho,0}\boxtimes\mathbf{M}^{\boxtimes d}.\]
We have also identified \(\mathcal{O}\mathsf{Nf}\) with \(B^{\rho,0}\otimes(B^{\varnothing,1}\wr_{\mathsf{s}}\mathcal{T}_{d})\). Recalling (3.33), we define the \((\mathcal{O}\mathsf{Nf},\mathcal{O}\mathsf{Nf})\)-bisupermodule
\[\mathbf{M}_{\mathsf{N}}:=B^{\rho,0}\boxtimes(\mathbf{M}\wr_{\mathsf{s}}\mathcal{T}_{d})=B^{\rho,0}\boxtimes\Big{(}\mathrm{Ind}_{(B^{\varnothing,1}\wr_{\mathsf{s}}\mathcal{T}_{d},B^{\varnothing,1}\wr_{\mathsf{s}}\mathcal{T}_{d})_{\mathsf{S}_{d}}}\mathbf{M}_{\mathsf{S}_{d}}^{\boxtimes d}\Big{)}.\]
Recall the subgroup \((\mathsf{N}\times\mathsf{N})_{\mathsf{N}/\mathsf{L}}\leq\mathsf{N}\times \mathsf{N}\) from (2.8). To simplify the notation, we denote
\[(\mathsf{N}\times\mathsf{N})_{\mathsf{S}_{d}}:=(\mathsf{N}\times\mathsf{N})_{ \mathsf{N}/\mathsf{L}}.\]
Note that the \(T_{w}\)'s, introduced just before Lemma 4.14, are group elements. Therefore, through our identification \(\mathcal{O}\mathsf{Nf}=B^{\rho,0}\otimes(B^{\varnothing,1}\wr_{\mathfrak{s}} \mathcal{T}_{d})\), we have
\[(B^{\varnothing,1}\wr_{\mathsf{s}}\mathcal{T}_{d},B^{\varnothing,1}\wr_{\mathsf{s}}\mathcal{T}_{d})_{\mathsf{S}_{d}}=\mathcal{O}(\mathsf{N}\times\mathsf{N})_{\mathsf{S}_{d}}(\mathsf{f}\otimes\mathsf{f}).\]
In particular,
\[\mathbf{M}_{\mathsf{N}}\simeq\mathrm{Ind}_{(\mathsf{N}\times\mathsf{N})_{\mathsf{S}_{d}}}^{\mathsf{N}\times\mathsf{N}}(\mathbf{M}_{\mathsf{L}})_{\mathsf{S}_{d}}, \tag{7.4}\]
where
\[(\mathbf{M}_{\mathsf{L}})_{\mathsf{S}_{d}}:=B^{\rho,0}\boxtimes\mathbf{M}_{ \mathsf{S}_{d}}^{\boxtimes d}.\]
We will sometimes consider \((\mathbf{M}_{\mathsf{L}})_{\mathsf{S}_{d}}\) as an \(\mathcal{O}((\mathsf{N}\cap\mathsf{H})\times(\mathsf{N}\cap\mathsf{H}))_{( \mathsf{N}\cap\mathsf{H})/\mathsf{L}}\)-module via the inclusion \((\mathsf{N}\cap\mathsf{H})\hookrightarrow\mathsf{N}\). In this case we set
\[((\mathsf{N}\cap\mathsf{H})\times(\mathsf{N}\cap\mathsf{H}))_{\mathsf{S}_{d-1 }}:=((\mathsf{N}\cap\mathsf{H})\times(\mathsf{N}\cap\mathsf{H}))_{(\mathsf{ N}\cap\mathsf{H})/\mathsf{L}}.\]
**Lemma 7.5**.: _We have:_
1. \(\mathbf{M}_{\mathsf{L}}\) _is an absolutely indecomposable_ \((\mathcal{O}\mathsf{Lf},\mathcal{O}\mathsf{Lf})\)_-bisupermodule with vertex_ \(\Delta\mathsf{D}\)_._
2. \(\mathbf{M}_{\mathsf{N}}\) _is an absolutely indecomposable_ \((\mathcal{O}\mathsf{Nf},\mathcal{O}\mathsf{Lf})\)_-bisupermodule with vertex_ \(\Delta\mathsf{D}\)_. In particular,_ \(\mathbf{M}_{\mathsf{N}}\) _is an absolutely indecomposable_ \((\mathcal{O}\mathsf{Nf},\mathcal{O}\mathsf{Nf})\)_-bisupermodule with vertex_ \(\Delta\mathsf{D}\)_._
3. \(\mathrm{Res}_{\mathsf{N}\times\mathsf{L}}^{\mathsf{N}\times\mathsf{N}}\mathbf{M} _{\mathsf{N}}\simeq\mathrm{Ind}_{\mathsf{L}\times\mathsf{L}}^{\mathsf{N}\times \mathsf{L}}\mathbf{M}_{\mathsf{L}}\)_._
Proof.: (i) is proved via induction. For \(d=1\) this is just Proposition 6.13(ii). The inductive step is now proved using Remark 3.8 and Lemma 3.67. Note that this induction is valid due to Remark 5.2.
(ii),(iii) To show that \(\mathbf{M}_{\mathsf{N}}\) is indecomposable (as an \((\mathcal{O}\mathsf{N}\mathsf{f},\mathcal{O}\mathsf{L}\mathsf{f})\)-bisupermodule and therefore as an \((\mathcal{O}\mathsf{N}\mathsf{f},\mathcal{O}\mathsf{N}\mathsf{f})\)-bimodule) and that
\[\operatorname{Res}_{\mathsf{N}\times\mathsf{L}}^{\mathsf{N}\times\mathsf{N}} \mathbf{M}_{\mathsf{N}}\simeq\operatorname{Ind}_{\mathsf{L}\times\mathsf{L}}^ {\mathsf{N}\times\mathsf{L}}\mathbf{M}_{\mathsf{L}},\]
we just apply Lemmas 2.9 and 4.30(i). (Note that Lemma 2.9 is proved using the Mackey formula. However, we can use Theorem 3.66 instead to obtain that the above does indeed hold as bisupermodules.) That \(\mathbf{M}_{\mathsf{N}}\) has vertex \(\Delta\mathsf{D}\) follows from the fact that \(\mathbf{M}_{\mathsf{L}}\) does and that \(p\nmid[\mathsf{N}:\mathsf{L}]\).
Let \(1\leq k\leq d\). We define the \((\mathcal{O}\mathsf{L}_{k}\mathsf{f}_{k},\mathcal{O}\mathsf{L}_{k}\mathsf{f}_{k})\)-bisupermodule
\[\mathbf{M}_{\mathsf{L}_{k}}:=B^{\rho,0}\boxtimes\mathbf{M}^{\boxtimes k}\]
and the \((\mathcal{O}\mathsf{N}_{k}\mathsf{f}_{k},\mathcal{O}\mathsf{N}_{k}\mathsf{f }_{k})\)-bisupermodule
\[\mathbf{M}_{\mathsf{N}_{k}}=B^{\rho,0}\boxtimes(\mathbf{M}\wr_{\mathsf{s}} \mathcal{T}_{k}).\]
That is, with Remark 5.2 in mind, we do the same constructions as for \(\mathbf{M}_{\mathsf{L}}=\mathbf{M}_{\mathsf{L}_{d}}\) and \(\mathbf{M}_{\mathsf{N}_{d}}=\mathbf{M}_{\mathsf{N}}\), but with \(d\) replaced by \(k\). Analogously to (7.4), we have
\[\mathbf{M}_{\mathsf{N}_{k}}\simeq\operatorname{Ind}_{(\mathsf{N}_{k}\times \mathsf{N}_{k})\mathsf{S}_{k}}^{\mathsf{N}_{k}\times\mathsf{N}_{k}}(\mathbf{M }_{\mathsf{L}_{k}})_{\mathsf{S}_{k}},\]
where \((\mathsf{N}_{k}\times\mathsf{N}_{k})_{\mathsf{S}_{k}}:=(\mathsf{N}_{k}\times\mathsf{N}_{k})_{\mathsf{N}_{k}/\mathsf{L}_{k}}\) and \((\mathbf{M}_{\mathsf{L}_{k}})_{\mathsf{S}_{k}}:=B^{\rho,0}\boxtimes\mathbf{M}_{\mathsf{S}_{k}}^{\boxtimes k}\).
**Lemma 7.6**.: _The following supermodules are absolutely indecomposable with vertex \(\Delta\mathsf{D}\):_
1. _the_ \(\mathcal{O}(\mathsf{H}\times\mathsf{H})\)_-supermodule_ \(\mathcal{O}\mathsf{G}_{d-1}\mathsf{b}_{d-1}\boxtimes\mathbf{M}^{(d)}\)_;_
2. _the_ \(\mathcal{O}((\mathsf{N}\cap\mathsf{H})\times(\mathsf{N}\cap\mathsf{H}))\)_-supermodule_ \(\mathcal{O}\mathsf{N}_{d-1}\mathsf{f}_{d-1}\boxtimes\mathbf{M}^{(d)}\)_;_
3. _the_ \(\mathcal{O}(\mathsf{L}\times\mathsf{L})\)_-supermodule_ \(\mathcal{O}\mathsf{L}_{d-1}\mathsf{f}_{d-1}\boxtimes\mathbf{M}^{(d)}\)_._
Proof.: We prove (i). Parts (ii) and (iii) are proved similarly. If \(d=1\), this is contained in Proposition 6.13(ii). Let \(d>1\). By Lemma 4.31(i) and Remark 5.2, \(\mathcal{O}\mathsf{G}_{d-1}\mathsf{b}_{d-1}\) is indecomposable with vertex \(\Delta(\mathsf{D}_{1}\times\cdots\times\mathsf{D}_{d-1})\). Moreover, by Proposition 6.13(i), \(\mathbf{M}^{(d)}\) is indecomposable with vertex \(\Delta\mathsf{D}_{d}\). Remark 3.8 and Lemma 3.67 complete the proof.
### The bisupermodules \(\mathbf{X}\) and \(\mathbf{Y}\)
In this subsection, we will define the bisupermodule \(\mathbf{X}\) that will ultimately induce a Morita superequivalence between \(\mathcal{O}\mathsf{N}\mathsf{f}\) and \(\mathcal{O}\mathsf{G}\mathsf{b}\). We will also introduce a related bisupermodule \(\mathbf{Y}\), which will aid with the inductive arguments in Section 8. Recall the super Green correspondence of Theorem 3.63.
By Lemma 7.5(ii), the \(\mathcal{O}(\mathsf{N}\times\mathsf{N})\)-supermodule \(\mathbf{M}_{\mathsf{N}}\) is absolutely indecomposable with vertex \(\Delta\mathsf{D}\), and Lemma 4.30(ii) gives that \(N_{\mathsf{G}\times\mathsf{N}}(\Delta\mathsf{D})\leq\mathsf{N}\times\mathsf{N}\). We now define \(\mathbf{X}\) to be the super Green correspondent of \(\mathbf{M}_{\mathsf{N}}\) in \(\mathsf{G}\times\mathsf{N}\).
By Lemma 7.6(i), the \(\mathcal{O}(\mathsf{H}\times\mathsf{H})\)-supermodule \(\mathcal{O}\mathsf{G}_{d-1}\mathsf{b}_{d-1}\boxtimes\mathbf{M}^{(d)}\) is absolutely indecomposable with vertex \(\Delta\mathsf{D}\), and by Lemma 4.30(iii), we have \(N_{\mathsf{G}\times\mathsf{H}}(\Delta\mathsf{D})\leq\mathsf{H}\times\mathsf{H}\). We now define \(\mathbf{Y}\) to be the super Green correspondent of the \(\mathcal{O}(\mathsf{H}\times\mathsf{H})\)-supermodule \(\mathcal{O}\mathsf{G}_{d-1}\mathsf{b}_{d-1}\boxtimes\mathbf{M}^{(d)}\) in \(\mathsf{G}\times\mathsf{H}\).
We need a technical lemma before continuing.
**Lemma 7.7**.: _The \((\mathcal{O}\mathsf{N},\mathcal{O}(\mathsf{N}\cap\mathsf{H}))\)-bisupermodule \(\mathcal{O}\mathsf{N}\otimes_{\mathcal{O}(\mathsf{N}\cap\mathsf{H})}(\mathcal{O }\mathsf{N}_{d-1}\mathsf{f}_{d-1}\boxtimes\mathbf{M}^{(d)})\) is absolutely indecomposable with vertex \(\Delta\mathsf{D}\). In particular, it is the super Green correspondent of \(\mathbf{Y}\) in \(\mathsf{N}\times(\mathsf{N}\cap\mathsf{H})\)._
Proof.: Note that, for \(d=1\),
\[\mathcal{ON}\otimes_{\mathcal{O}(\mathsf{N}\cap\mathsf{H})}(\mathcal{ON}_{d-1}\mathsf{f}_{d-1}\boxtimes\mathbf{M}^{(d)})\simeq\mathcal{OG}_{d-1}\mathsf{b}_{d-1}\boxtimes\mathbf{M}^{(d)},\]
so the statement is immediate in this case. From now on we assume that \(d>1\). We have
\[\begin{split}&\operatorname{Res}_{\mathsf{N}\times\mathsf{L}}^{\mathsf{N}\times(\mathsf{N}\cap\mathsf{H})}\big{(}\mathcal{ON}\otimes_{\mathcal{O}(\mathsf{N}\cap\mathsf{H})}(\mathcal{ON}_{d-1}\mathsf{f}_{d-1}\boxtimes\mathbf{M}^{(d)})\big{)}\\ \simeq&\operatorname{Res}_{\mathsf{N}\times\mathsf{L}}^{\mathsf{N}\times(\mathsf{N}\cap\mathsf{H})}\operatorname{Ind}_{(\mathsf{N}\cap\mathsf{H})\times(\mathsf{N}\cap\mathsf{H})}^{\mathsf{N}\times(\mathsf{N}\cap\mathsf{H})}(\mathcal{ON}_{d-1}\mathsf{f}_{d-1}\boxtimes\mathbf{M}^{(d)})\\ \simeq&\operatorname{Ind}_{(\mathsf{N}\cap\mathsf{H})\times\mathsf{L}}^{\mathsf{N}\times\mathsf{L}}\operatorname{Res}_{(\mathsf{N}\cap\mathsf{H})\times\mathsf{L}}^{(\mathsf{N}\cap\mathsf{H})\times(\mathsf{N}\cap\mathsf{H})}(\mathcal{ON}_{d-1}\mathsf{f}_{d-1}\boxtimes\mathbf{M}^{(d)})\\ \simeq&\operatorname{Ind}_{(\mathsf{N}\cap\mathsf{H})\times\mathsf{L}}^{\mathsf{N}\times\mathsf{L}}\operatorname{Ind}_{\mathsf{L}\times\mathsf{L}}^{(\mathsf{N}\cap\mathsf{H})\times\mathsf{L}}(\mathcal{O}\mathsf{L}_{d-1}\mathsf{f}_{d-1}\boxtimes\mathbf{M}^{(d)})\\ \simeq&\operatorname{Ind}_{\mathsf{L}\times\mathsf{L}}^{\mathsf{N}\times\mathsf{L}}(\mathcal{O}\mathsf{L}_{d-1}\mathsf{f}_{d-1}\boxtimes\mathbf{M}^{(d)}),\end{split}\]
where the second isomorphism follows from Theorem 3.66 and the third from Lemma 2.9(ii). It now follows from Lemmas 7.6, 2.9(i) and 4.30(i) that
\[\mathcal{ON}\otimes_{\mathcal{O}(\mathsf{N}\cap\mathsf{H})}(\mathcal{ON}_{d-1}\mathsf{f}_{d-1}\boxtimes\mathbf{M}^{(d)})\]
is absolutely indecomposable as an \((\mathcal{ON},\mathcal{O}(\mathbf{N}\cap\mathsf{H}))\)-bisupermodule with vertex \(\Delta\mathsf{D}\). (We have even shown it is absolutely indecomposable as an \((\mathcal{ON},\mathcal{OL})\)-bisupermodule.)
Next note that, by Lemma 4.30(ii),(iii), \(N_{\mathsf{G}\times\mathsf{H}}(\Delta\mathsf{D})\leq(\mathsf{N}\cap\mathsf{H})\times(\mathsf{N}\cap\mathsf{H})\) and so, taking into account Lemma 7.6, it makes sense to consider the super Green correspondents of \(\mathcal{OG}_{d-1}\mathsf{b}_{d-1}\boxtimes\mathbf{M}^{(d)}\) and \(\mathcal{ON}\otimes_{\mathcal{O}(\mathsf{N}\cap\mathsf{H})}(\mathcal{ON}_{d-1}\mathsf{f}_{d-1}\boxtimes\mathbf{M}^{(d)})\) in \((\mathsf{N}\cap\mathsf{H})\times(\mathsf{N}\cap\mathsf{H})\).
By Lemma 4.32 and Remark 5.2, \(\mathcal{OG}_{d-1}\mathsf{b}_{d-1}\) and \(\mathcal{ON}_{d-1}\mathsf{f}_{d-1}\) are Brauer correspondents. Hence, by Lemma 3.65, they are super Green correspondents. Therefore,
\[\begin{split}\mathcal{OG}_{d-1}\mathsf{b}_{d-1}\boxtimes\mathbf{M}^{(d)}&\mid\;(\mathcal{OG}_{d-1}\otimes_{\mathcal{ON}_{d-1}}\mathcal{ON}_{d-1}\mathsf{f}_{d-1}\otimes_{\mathcal{ON}_{d-1}}\mathcal{OG}_{d-1})\boxtimes\mathbf{M}^{(d)}\\ &\simeq\mathcal{OH}\otimes_{\mathcal{O}(\mathsf{N}\cap\mathsf{H})}(\mathcal{ON}_{d-1}\mathsf{f}_{d-1}\boxtimes\mathbf{M}^{(d)})\otimes_{\mathcal{O}(\mathsf{N}\cap\mathsf{H})}\mathcal{OH},\end{split}\]
where the isomorphism is two applications of Lemma 3.9.
We have now shown that \(\mathcal{OG}_{d-1}\mathsf{b}_{d-1}\boxtimes\mathbf{M}^{(d)}\) and \(\mathcal{ON}\otimes_{\mathcal{O}(\mathsf{N}\cap\mathsf{H})}(\mathcal{ON}_{d-1}\mathsf{f}_{d-1}\boxtimes\mathbf{M}^{(d)})\) both have super Green correspondent \(\mathcal{ON}_{d-1}\mathsf{f}_{d-1}\boxtimes\mathbf{M}^{(d)}\) in \((\mathsf{N}\cap\mathsf{H})\times(\mathsf{N}\cap\mathsf{H})\). In particular, \(\mathbf{Y}\) is isomorphic to the unique absolutely indecomposable summand of
\[\begin{split}&\mathcal{OG}\otimes_{\mathcal{OH}}\mathcal{OH}\otimes_{\mathcal{O}(\mathsf{N}\cap\mathsf{H})}(\mathcal{ON}_{d-1}\mathsf{f}_{d-1}\boxtimes\mathbf{M}^{(d)})\otimes_{\mathcal{O}(\mathsf{N}\cap\mathsf{H})}\mathcal{OH}\\ \simeq&\mathcal{OG}\otimes_{\mathcal{O}(\mathsf{N}\cap\mathsf{H})}(\mathcal{ON}_{d-1}\mathsf{f}_{d-1}\boxtimes\mathbf{M}^{(d)})\otimes_{\mathcal{O}(\mathsf{N}\cap\mathsf{H})}\mathcal{OH}\\ \simeq&\mathcal{OG}\otimes_{\mathcal{ON}}\mathcal{ON}\otimes_{\mathcal{O}(\mathsf{N}\cap\mathsf{H})}(\mathcal{ON}_{d-1}\mathsf{f}_{d-1}\boxtimes\mathbf{M}^{(d)})\otimes_{\mathcal{O}(\mathsf{N}\cap\mathsf{H})}\mathcal{OH}\end{split}\]
with vertex \(\Delta\mathsf{D}\), as desired.
**Lemma 7.8**.: _We have:_
1. \(\mathbf{X}\) _is an_ \((\mathcal{OG}\mathsf{b},\mathcal{ON}\mathsf{f})\)_-bisupermodule._
2. \(\mathbf{Y}\) _is an_ \((\mathcal{OG}\mathsf{b},\mathcal{OH}\mathsf{c})\)_-bisupermodule._
Proof.: (i) Since \(\mathbf{X}\mid\mathcal{OG}\otimes_{\mathcal{ON}}\mathbf{M}_{\mathbf{N}}\), the right action of \(\mathcal{ON}\mathsf{f}\) is clear.
By Lemma 4.32, \(\mathcal{OG}\mathsf{b}\) and \(\mathcal{ON}\mathsf{f}\) are Brauer correspondents. Since \(\mathsf{b}\in\mathcal{OG}_{\bar{0}}\) and \(\mathsf{f}\in\mathcal{ON}_{\bar{0}}\), we may apply Lemma 3.65 and we set \(U=\mathsf{b}U\mathsf{f}\) to be the common Green correspondent of \(\mathcal{OG}\mathsf{b}\) and \(\mathcal{ON}\mathsf{f}\) in \(\mathsf{G}\times\mathbf{N}\). So, \(\mathcal{OG}\otimes_{\mathcal{ON}}\mathcal{ON}\mathsf{f}\) is a direct sum of \(U\), which
has vertex \(\Delta\mathsf{D}\), and \(V\), a direct sum of indecomposable bimodules each with vertex properly contained in \(\Delta\mathsf{D}\). Now,
\[\mathbf{X}\mid\mathcal{OG}\otimes_{\mathcal{ON}}\mathbf{M}_{ \mathbf{N}} \simeq\mathcal{OG}\otimes_{\mathcal{ON}}\mathcal{ON}\mathsf{f}\otimes_{ \mathcal{ON}}\mathbf{M}_{\mathbf{N}}\simeq(U\oplus V)\otimes_{\mathcal{ON}} \mathbf{M}_{\mathbf{N}}\] \[\simeq(U\otimes_{\mathcal{ON}}\mathbf{M}_{\mathbf{N}})\oplus(V \otimes_{\mathcal{ON}}\mathbf{M}_{\mathbf{N}}).\]
Lemma 2.3(ii) implies that \(V\otimes_{\mathcal{ON}}\mathbf{M}_{\mathbf{N}}\) is a direct sum of bimodules each with vertex of strictly smaller order than \(\Delta\mathsf{D}\). Therefore, \(\mathbf{X}\mid U\otimes_{\mathcal{ON}}\mathbf{M}_{\mathbf{N}}\). The claim follows.
(ii) Since \(\mathbf{Y}\mid\mathcal{OG}\otimes_{\mathcal{OH}}(\mathcal{OG}_{d-1}\mathsf{b }_{d-1}\boxtimes\mathbf{M}^{(d)})\), the right \(\mathcal{OH}\mathsf{c}\)-action is clear.
For the left action we first note that, by Lemma 7.7, \(\mathbf{Y}\) is actually the unique indecomposable summand of
\[\mathcal{OG}\otimes_{\mathcal{ON}}(\mathcal{ON}\otimes_{\mathcal{O}( \mathsf{N}\cap\mathsf{H})}(\mathcal{ON}_{d-1}\mathsf{f}_{d-1}\boxtimes\mathbf{M }^{(d)}))\otimes_{\mathcal{O}(\mathsf{N}\cap\mathsf{H})}\mathcal{OH}\]
\[\simeq\mathcal{OG}\otimes_{\mathcal{ON}}(\mathcal{ON}\mathsf{f}\otimes_{ \mathcal{O}(\mathsf{N}\cap\mathsf{H})}(\mathcal{ON}_{d-1}\mathsf{f}_{d-1} \boxtimes\mathbf{M}^{(d)}))\otimes_{\mathcal{O}(\mathsf{N}\cap\mathsf{H})} \mathcal{OH}\]
with vertex \(\Delta\mathsf{D}\). As in part (i), this implies
\[\mathbf{Y}\mid U\otimes_{\mathcal{ON}}\mathcal{ON}\otimes_{\mathcal{O}( \mathsf{N}\cap\mathsf{H})}(\mathcal{ON}_{d-1}\mathsf{f}_{d-1}\boxtimes \mathbf{M}^{(d)})\otimes_{\mathcal{O}(\mathsf{N}\cap\mathsf{H})}\mathcal{OH}.\]
The claim follows.
**Lemma 7.9**.: \(\mathbf{X}\) _is absolutely indecomposable as an \((\mathcal{OG}\mathsf{b},\mathcal{OL})\)-bisupermodule. In particular, \(\operatorname{Res}^{\mathsf{G}\times\mathsf{N}}_{\mathsf{G}\times\mathsf{L}}( \mathbf{X})\) is the super Green correspondent of \(\mathbf{M}_{\mathsf{L}}\) in \(\mathsf{G}\times\mathsf{L}\)._
Proof.: By Lemma 4.30(iii), we have \(N_{\mathsf{G}\times\mathsf{L}}(\Delta\mathsf{D})\leq\mathsf{L}\times\mathsf{L}\) and so one can consider the super Green correspondent of \(\mathbf{M}_{\mathsf{L}}\) in \(\mathsf{G}\times\mathsf{L}\).
Certainly every summand of \(\operatorname{Res}^{\mathsf{G}\times\mathsf{N}}_{\mathsf{G}\times\mathsf{L}}( \mathbf{X})\) has vertex conjugate to \(\Delta\mathsf{D}\) in \(\mathsf{G}\times\mathsf{N}\). Now, by definition,
\[\operatorname{Res}^{\mathsf{G}\times\mathsf{N}}_{\mathsf{G}\times\mathsf{L}} \mathbf{X}\mid\operatorname{Res}^{\mathsf{G}\times\mathsf{N}}_{\mathsf{G} \times\mathsf{L}}\mathrm{Ind}^{\mathsf{G}\times\mathsf{N}}_{\mathsf{N}\times \mathsf{N}}\mathbf{M}_{\mathsf{N}}\simeq\mathrm{Ind}^{\mathsf{G}\times\mathsf{ L}}_{\mathsf{N}\times\mathsf{L}}\operatorname{Res}^{\mathsf{N}\times\mathsf{N}}_{\mathsf{N} \times\mathsf{L}}\mathbf{M}_{\mathsf{N}}\simeq\mathrm{Ind}^{\mathsf{G}\times \mathsf{L}}_{\mathsf{N}\times\mathsf{L}}\mathrm{Ind}^{\mathsf{N}\times \mathsf{L}}_{\mathsf{L}\times\mathsf{L}}\mathbf{M}_{\mathsf{L}}\simeq\mathrm{ Ind}^{\mathsf{G}\times\mathsf{L}}_{\mathsf{L}\times\mathsf{L}}\mathbf{M}_{\mathsf{L}},\]
where the first isomorphism is due to Theorem 3.66 and the second to Lemma 7.5. However, by Theorem 3.63, \(\mathrm{Ind}^{\mathsf{G}\times\mathsf{L}}_{\mathsf{L}\times\mathsf{L}}\mathbf{M}_ {\mathsf{L}}\) has a unique indecomposable summand whose vertex is not strictly contained in \(\Delta\mathsf{D}\). The claim follows.
We will use the restriction \(\operatorname{Res}^{\mathsf{G}\times\mathsf{N}}_{\mathsf{G}\times\mathsf{L}} \mathbf{X}\) extensively, so to simplify the notation, we set
\[{}_{\mathsf{G}}\mathbf{X}_{\mathsf{L}}:=\operatorname{Res}^{\mathsf{G}\times \mathsf{N}}_{\mathsf{G}\times\mathsf{L}}\mathbf{X}.\]
Recall the notation '\(|_{\mathsf{D}}\)' from (7.2).
**Lemma 7.10**.: _We have:_
1. \(\mathbf{X}\mid_{\mathsf{D}}(\mathcal{OG}\otimes_{\mathcal{ON}}\mathbf{M}_{ \mathsf{N}})\)_._
2. \({}_{\mathsf{G}}\mathbf{X}_{\mathsf{L}}\mid_{\mathsf{D}}(\mathcal{OG}\otimes_{\mathcal{OL}}\mathbf{M}_{\mathsf{L}})\)_._
3. \(\mathbf{Y}\mid_{\mathsf{D}}\big{(}\mathcal{OG}\otimes_{\mathcal{OH}}( \mathcal{OG}_{d-1}\mathsf{b}_{d-1}\boxtimes\mathbf{M}^{(d)})\big{)}\)_._
Proof.: (i) This follows from the definition of \(\mathbf{X}\), Theorem 3.63 and Lemma 7.1(i).
(ii) Similar to part (i), this follows from Lemma 7.9, Theorem 3.63 and Lemma 7.1(i).
(iii) By Lemma 7.7, \(\mathbf{Y}\) and \(\mathcal{ON}\otimes_{\mathcal{O}(\mathsf{N}\cap\mathsf{H})}(\mathcal{ON}_{d-1} \mathsf{f}_{d-1}\boxtimes\mathbf{M}^{(d)})\) are super Green correspondents. In particular, by Theorem 3.63 and Lemma 7.1(i),
\[\mathbf{Y}\mid_{\mathsf{D}}\big{(}\mathcal{OG}\otimes_{\mathcal{ON}}( \mathcal{ON}\otimes_{\mathcal{O}(\mathsf{N}\cap\mathsf{H})}(\mathcal{ON}_{d-1} \mathsf{f}_{d-1}\boxtimes\mathbf{M}^{(d)}))\otimes_{\mathcal{O}(\mathsf{N}\cap \mathsf{H})}\mathcal{OH}\big{)} \tag{7.11}\] \[\simeq\mathcal{OG}\otimes_{\mathcal{O}(\mathsf{N}\cap\mathsf{H})}( \mathcal{ON}_{d-1}\mathsf{f}_{d-1}\boxtimes\mathbf{M}^{(d)})\otimes_{\mathcal{O}( \mathsf{N}\cap\mathsf{H})}\mathcal{OH}\] \[\simeq\mathcal{OG}\otimes_{\mathcal{OH}}(\mathcal{OH}\otimes_{ \mathcal{O}(\mathsf{N}\cap\mathsf{H})}(\mathcal{ON}_{d-1}\mathsf{f}_{d-1} \boxtimes\mathbf{M}^{(d)})\otimes_{\mathcal{O}(\mathsf{N}\cap\mathsf{H})} \mathcal{OH}).\]
However, as noted in the proof of Lemma 7.7, \(\mathcal{O}\mathsf{G}_{d-1}\mathsf{b}_{d-1}\boxtimes\mathbf{M}^{(d)}\) and \(\mathcal{O}\mathsf{N}_{d-1}\mathsf{f}_{d-1}\boxtimes\mathbf{M}^{(d)}\) are also super Green correspondents. In particular,
\[\mathcal{O}\mathsf{G}_{d-1}\mathsf{b}_{d-1}\boxtimes\mathbf{M}^{(d)}\mid \mathcal{O}\mathsf{H}\otimes_{\mathcal{O}(\mathbf{N}\cap\mathsf{H})}( \mathcal{O}\mathsf{N}_{d-1}\mathsf{f}_{d-1}\boxtimes\mathbf{M}^{(d)})\otimes _{\mathcal{O}(\mathbf{N}\cap\mathsf{H})}\mathcal{O}\mathsf{H}.\]
As we already know that \(\mathbf{Y}\mid\mathcal{O}\mathsf{G}\otimes_{\mathcal{O}\mathsf{H}}(\mathcal{O} \mathsf{G}_{d-1}\mathsf{b}_{d-1}\boxtimes\mathbf{M}^{(d)})\), the claim now follows from (7.11).
Let \(1\leq k\leq d\). With Remark 5.2 in mind, we define \(\mathbf{X}_{k}\) and \({}_{\mathsf{G}_{k}}\mathbf{X}_{\mathsf{L}_{k}}\) in the same way as \(\mathbf{X}\) and \({}_{\mathsf{G}}\mathbf{X}_{\mathsf{L}}\) were defined. (Of course, by Lemma 7.8(i), \(\mathbf{X}_{k}\) is an \((\mathcal{O}\mathsf{G}_{k}\mathsf{b}_{k},\mathcal{O}\mathsf{N}_{k}\mathsf{f} _{k})\)-bimodule.)
### Dualizing the special bisupermodules
The special bisupermodules defined so far can be dualized in the sense of §3.6 to get the bisupermodules \(\mathbf{M}_{\mathsf{L}}^{*}:=(\mathbf{M}_{\mathsf{L}})^{*}\), \(\mathbf{M}_{\mathbf{N}}^{*}:=(\mathbf{M}_{\mathbf{N}})^{*}\), \(\mathbf{X}^{*}\), \(\mathbf{Y}^{*}\), \({}_{\mathsf{L}}\mathbf{X}_{\mathsf{G}}^{*}:=({}_{\mathsf{G}}\mathbf{X}_{ \mathsf{L}})^{*}\), and similarly \(\mathbf{M}_{\mathsf{L}_{k}}^{*}\), \(\mathbf{M}_{\mathsf{N}_{k}}^{*}\), \({}_{\mathsf{L}_{k}}\mathbf{X}_{\mathsf{G}_{k}}^{*}\) for \(1\leq k\leq d\).
**Lemma 7.12**.: _We have:_
1. \(\mathbf{M}_{\mathsf{L}}^{*}\simeq B^{\rho,0}\boxtimes(\mathbf{M}^{*})^{ \boxtimes d}\) _and_ \(\mathbf{M}_{\mathsf{L}}^{*}\) _has vertex_ \(\Delta\mathsf{D}\)_._
2. \(\mathbf{M}_{\mathsf{N}}^{*}\simeq B^{\rho,0}\boxtimes(\mathbf{M}^{*}\wr_{ \mathsf{s}}\mathcal{T}_{d})\) _and_ \(\mathbf{M}_{\mathsf{N}}^{*}\) _has vertex_ \(\Delta\mathsf{D}\)_._
3. \(\mathbf{X}^{*}\) _is the super Green correspondent of_ \(\mathbf{M}_{\mathsf{N}}^{*}\) _in_ \(\mathsf{N}\times\mathsf{G}\) _and_ \({}_{\mathsf{L}}\mathbf{X}_{\mathsf{G}}^{*}\) _is the super Green correspondent of_ \(\mathbf{M}_{\mathsf{L}}^{*}\) _in_ \(\mathsf{L}\times\mathsf{G}\)_._
Proof.: (i) By Lemma 3.26, \((B^{\rho,0})^{*}\simeq B^{\rho,0}\), and so the first claim just follows from Lemma 3.24. That \(\mathbf{M}_{\mathsf{L}}\) and \(\mathbf{M}_{\mathsf{L}}^{*}\) have the same vertex follows from Lemma 3.62.
(ii) This is proved in exactly the same way as (i) once we note that \(\mathbf{M}^{*}\wr_{\mathsf{s}}\mathcal{T}_{d}\simeq(\mathbf{M}\wr_{\mathsf{s}}\mathcal{T}_{d})^{*}\) via Lemma 3.42.
(iii) By Lemma 3.23, we have \(\mathbf{X}^{*}\mid(\mathcal{O}\mathsf{G}\otimes_{\mathcal{O}\mathsf{N}} \mathbf{M}_{\mathsf{N}})^{*}\simeq\mathbf{M}_{\mathsf{N}}^{*}\otimes_{\mathcal{O }\mathsf{N}}\mathcal{O}\mathsf{G}.\) That \(\mathbf{X}^{*}\) has vertex \(\Delta\mathsf{D}\) is, again, due to Lemma 3.62. With Lemma 7.9 in mind, the statement for \({}_{\mathsf{L}}\mathbf{X}_{\mathsf{G}}^{*}\) is proved in an identical fashion.
Recall the notation '\(\mid^{\mathsf{D}}\)' from (7.3).
**Lemma 7.13**.: _We have:_
1. \(\mathcal{O}\mathsf{N}\mathsf{f}\mid^{\mathsf{D}}(\mathbf{X}^{*}\otimes_{ \mathcal{O}\mathsf{G}}\mathbf{X})\)_._
2. \(\mathcal{O}\mathsf{G}\mathsf{b}\mid\mathbf{X}\otimes_{\mathcal{O}\mathsf{N}} \mathbf{X}^{*}\)_._
Proof.: We first note that
\[\mathbf{M}_{\mathsf{N}}\otimes_{\mathcal{O}\mathsf{N}\mathsf{f}} \mathbf{M}_{\mathsf{N}}^{*} \simeq\left(B^{\rho,0}\boxtimes(\mathbf{M}\wr_{\mathsf{s}}\mathcal{T }_{d})\right)\otimes_{\mathcal{O}\mathsf{N}\mathsf{f}}\left(B^{\rho,0} \boxtimes(\mathbf{M}^{*}\wr_{\mathsf{s}}\mathcal{T}_{d})\right)\] \[\simeq(B^{\rho,0}\otimes_{B^{\rho,0}}B^{\rho,0})\boxtimes\left(( \mathbf{M}\wr_{\mathsf{s}}\mathcal{T}_{d})\otimes_{B^{\varnothing,1}\wr_{\mathsf{s}} \mathcal{T}_{d}}(\mathbf{M}^{*}\wr_{\mathsf{s}}\mathcal{T}_{d})\right)\] \[\simeq B^{\rho,0}\boxtimes\left((\mathbf{M}\otimes_{B^{\varnothing,1}} \mathbf{M}^{*})\wr_{\mathsf{s}}\mathcal{T}_{d}\right),\]
where the second isomorphism follows from Lemma 3.9 and the third from Lemma 3.36(ii). By Proposition 6.13(iii) and Lemma 3.36(i), this last bisupermodule has a direct summand isomorphic to \(\mathcal{O}\mathsf{N}\mathsf{f}\).
Dualizing Lemma 7.5, using Lemma 7.12(i),(ii) and Lemma 2.9(ii), we obtain
\[\operatorname{Res}_{\mathbf{N}\times\mathsf{L}}^{\mathsf{N}\times\mathsf{N}} \mathbf{M}_{\mathsf{N}}^{*}\simeq\operatorname{Ind}_{\mathsf{L}\times\mathsf{L}}^ {\mathsf{N}\times\mathsf{L}}\mathbf{M}_{\mathsf{L}}^{*}.\]
Therefore,
\[\operatorname{Res}^{\mathbf{N}\times\mathbf{N}}_{\mathbf{N}\times \mathsf{L}}(\mathbf{M}_{\mathsf{N}}\otimes_{\mathcal{O}\mathsf{N}}\mathbf{M}_{ \mathsf{N}}^{*}) \simeq\mathbf{M}_{\mathsf{N}}\otimes_{\mathcal{O}\mathsf{N}} \operatorname{Res}^{\mathbf{N}\times\mathbf{N}}_{\mathbf{N}\times\mathsf{L}} \mathbf{M}_{\mathsf{N}}^{*}\simeq\mathbf{M}_{\mathsf{N}}\otimes_{\mathcal{O} \mathsf{N}}\operatorname{Ind}_{\mathsf{L}\times\mathsf{L}}^{\mathbf{N}\times \mathsf{L}}\mathbf{M}_{\mathsf{L}}^{*}\] \[\simeq(\operatorname{Res}^{\mathbf{N}\times\mathbf{N}}_{\mathbf{N} \times\mathsf{L}}\mathbf{M}_{\mathsf{N}})\otimes_{\mathcal{O}\mathsf{L}} \mathbf{M}_{\mathsf{L}}^{*}\simeq(\operatorname{Ind}_{\mathsf{L}\times \mathsf{L}}^{\mathbf{N}\times\mathsf{L}}\mathbf{M}_{\mathsf{L}})\otimes_{ \mathcal{O}\mathsf{L}}\mathbf{M}_{\mathsf{L}}^{*}\] \[\simeq\mathcal{O}\mathsf{N}\otimes_{\mathcal{O}\mathsf{L}} \mathbf{M}_{\mathsf{L}}\otimes_{\mathcal{O}\mathsf{L}}\mathbf{M}_{\mathsf{L}}^{*},\]
where the fourth isomorphism is Lemma 7.5(iii). Now, by Lemma 3.9 as well as Remark 3.8 and Lemma 3.67, we have \(\mathcal{O}\mathsf{L}\mathsf{f}\mid_{\mathsf{D}}(\mathbf{M}_{\mathsf{L}}\otimes_ {\mathcal{O}\mathsf{L}}\mathbf{M}_{\mathsf{L}}^{\ast})\). Therefore,
\[\mathcal{O}\mathsf{N}\mathsf{f}\mid_{\mathsf{D}}\bigl{(}\operatorname{Res}_{ \mathsf{N}\times\mathsf{L}}^{\mathsf{N}\times\mathsf{N}}(\mathbf{M}_{\mathsf{ L}}\otimes_{\mathcal{O}\mathsf{L}}\mathbf{M}_{\mathsf{L}}^{\ast})\bigr{)},\]
as \((\mathcal{O}\mathsf{N},\mathcal{O}\mathsf{L})\)-bisupermodules. Putting this together with the first paragraph gives that \(\mathcal{O}\mathsf{N}\mathsf{f}\mid_{\mathsf{D}}(\mathbf{M}_{\mathsf{N}} \otimes_{\mathcal{O}\mathsf{N}}\mathbf{M}_{\mathsf{N}}^{\ast})\). Similarly, we have \(\mathcal{O}\mathsf{N}\mathsf{f}\mid_{\mathsf{D}}(\mathbf{M}_{\mathsf{N}}^{ \ast}\otimes_{\mathcal{O}\mathsf{N}}\mathbf{M}_{\mathsf{N}})\).
(i) First note that \(\mathbf{M}_{\mathsf{N}}^{\ast}\otimes_{\mathcal{O}\mathsf{N}}\mathbf{M}_{\mathsf{N}}\), and consequently \(\mathcal{O}\mathsf{N}\mathsf{f}\), is a direct summand of
\[\mathbf{M}_{\mathsf{N}}^{\ast}\otimes_{\mathcal{O}\mathsf{N}}\mathcal{O} \mathsf{G}\otimes_{\mathcal{O}\mathsf{G}}\mathcal{O}\mathsf{G}\otimes_{ \mathcal{O}\mathsf{N}}\mathbf{M}_{\mathsf{N}}\simeq\mathbf{M}_{\mathsf{N}}^{ \ast}\otimes_{\mathcal{O}\mathsf{N}}\mathcal{O}\mathsf{G}\otimes_{\mathcal{O} \mathsf{N}}\mathbf{M}_{\mathsf{N}}. \tag{7.14}\]
Now, by Lemma 7.10(i), we have \(\mathbf{X}\mid_{\mathsf{D}}(\mathcal{O}\mathsf{G}\otimes_{\mathcal{O}\mathsf{ N}}\mathbf{M}_{\mathsf{N}})\). Applying Lemmas 3.22 and 3.62, we deduce that \(\mathbf{X}^{\ast}\mid_{\mathsf{D}}(\mathbf{M}_{\mathsf{N}}^{\ast}\otimes_{ \mathcal{O}\mathsf{N}}\mathcal{O}\mathsf{G})\). Therefore, \(\mathbf{X}^{\ast}\otimes_{\mathcal{O}\mathsf{G}}\mathbf{X}\mid\mathbf{M}_{ \mathsf{N}}^{\ast}\otimes_{\mathcal{O}\mathsf{N}}\mathcal{O}\mathsf{G}\otimes_ {\mathcal{O}\mathsf{N}}\mathbf{M}_{\mathsf{N}}\) and, by Lemma 2.3(ii), every direct summand of \(\mathbf{M}_{\mathsf{N}}^{\ast}\otimes_{\mathcal{O}\mathsf{N}}\mathcal{O} \mathsf{G}\otimes_{\mathcal{O}\mathsf{N}}\mathbf{M}_{\mathsf{N}}\), as an \((\mathcal{O}\mathsf{N},\mathcal{O}\mathsf{N})\)-bimodule, with vertex \(\Delta\mathsf{D}\), must also be a summand of \(\mathbf{X}^{\ast}\otimes_{\mathcal{O}\mathsf{G}}\mathbf{X}\). In particular, \(\mathcal{O}\mathsf{N}\mathsf{f}\mid\mathbf{X}^{\ast}\otimes_{\mathcal{O} \mathsf{G}}\mathbf{X}\), as an \((\mathcal{O}\mathsf{N},\mathcal{O}\mathsf{N})\)-bisupermodule.
All that remains to show is that all other summands of \(\mathbf{X}^{\ast}\otimes_{\mathcal{O}\mathsf{G}}\mathbf{X}\) have \(\mathsf{D}\)-small vertex in the sense of §7.1. We can, therefore, forget about the superstructure for the time being. Given (7.14) and the comments at the beginning of the proof, we need only show that
\[\mathbf{M}_{\mathsf{N}}^{\ast}\otimes_{\mathcal{O}\mathsf{N}}\mathcal{O} \mathsf{G}\otimes_{\mathcal{O}\mathsf{N}}\mathbf{M}_{\mathsf{N}}\simeq( \mathbf{M}_{\mathsf{N}}^{\ast}\otimes_{\mathcal{O}\mathsf{N}}\mathbf{M}_{ \mathsf{N}})\oplus U, \tag{7.15}\]
where, as an \((\mathcal{O}\mathsf{N},\mathcal{O}\mathsf{N})\)-bimodule, \(U\) is a direct sum of bimodules each with \(\mathsf{D}\)-small vertex.
We have already seen that \(\mathbf{X}\mid_{\mathsf{D}}(\mathcal{O}\mathsf{G}\otimes_{\mathcal{O}\mathsf{ N}}\mathbf{M}_{\mathsf{N}})\). Therefore,
\[\mathcal{O}\mathsf{N}\mathsf{f}\otimes_{\mathcal{O}\mathsf{N}}\mathcal{O} \mathsf{G}\otimes_{\mathcal{O}\mathsf{N}}\mathbf{M}_{\mathsf{N}}\simeq\mathsf{ f}(\operatorname{Res}_{\mathsf{N}\times\mathsf{N}}^{\mathsf{G}\times\mathsf{N}} \mathbf{X})\mathsf{f}\oplus V, \tag{7.16}\]
where \(V\) is a direct sum of \((\mathcal{O}\mathsf{N}\mathsf{f},\mathcal{O}\mathsf{N})\)-bimodules each with vertex contained in some \({}^{(g,1)}\Delta Q\), with \(g\in\mathsf{G}\) and \(Q<_{\mathsf{s}}\mathsf{D}\). Furthermore, Theorem 3.63 implies that
\[\operatorname{Res}_{\mathsf{N}\times\mathsf{N}}^{\mathsf{G}\times\mathsf{N}} \mathbf{X}\simeq\mathbf{M}_{\mathsf{N}}\oplus W, \tag{7.17}\]
where \(W\) is a direct sum of \((\mathcal{O}\mathsf{N},\mathcal{O}\mathsf{N})\)-bimodules each with vertex contained in \((\mathsf{N}\times\mathsf{N})\cap{}^{(g,h)}\Delta\mathsf{D}\), for some \((g,h)\in(\mathsf{G}\times\mathsf{N})\setminus(\mathsf{N}\times\mathsf{N})\).
Now \(\mathcal{O}\mathsf{N}\mathsf{f}\) has defect group \(\mathsf{D}\) and so all the bimodules in (7.16) can be chosen to have vertex contained in \(\mathsf{D}\times\mathsf{D}\) (possibly after conjugating by an element of \(\mathsf{N}\times\mathsf{N}\)).
In particular, \(V\) is a direct sum of \((\mathcal{O}\mathsf{N}\mathsf{f},\mathcal{O}\mathsf{N}\mathsf{f})\)-bimodules each with vertex contained in some \((\mathsf{D}\times\mathsf{D})\cap{}^{(g,h)}\Delta Q\), with \((g,h)\in\mathsf{G}\times\mathsf{N}\) and \(Q<_{\mathsf{s}}\mathsf{D}\). By looking at cycle types via \(\pi_{n}\), certainly \(\mathsf{D}\cap{}^{g}Q,\mathsf{D}\cap{}^{h}Q<_{\mathsf{s}}\mathsf{D}\) and so \(V\) is a direct sum of \((\mathcal{O}\mathsf{N}\mathsf{f},\mathcal{O}\mathsf{N}\mathsf{f})\)-bimodules each with \(\mathsf{D}\)-small vertex.
Similarly, \(\mathsf{f}W\mathsf{f}\) is a direct sum of \((\mathcal{O}\mathsf{N}\mathsf{f},\mathcal{O}\mathsf{N}\mathsf{f})\)-bimodules each with vertex contained in \((\mathsf{D}\times\mathsf{D})\cap{}^{(g,h)}\Delta\mathsf{D}\), for some \((g,h)\in(\mathsf{G}\times\mathsf{N})\setminus(\mathsf{N}\times\mathsf{N})\). By Lemma 7.1(ii), \(\mathsf{f}W\mathsf{f}\) is a direct sum of \((\mathcal{O}\mathsf{N}\mathsf{f},\mathcal{O}\mathsf{N}\mathsf{f})\)-bimodules each with \(\mathsf{D}\)-small vertex.
Together, (7.16) and (7.17) now imply that
\[\mathbf{M}_{\mathsf{N}}\mid^{\mathsf{D}}(\mathcal{O}\mathsf{N}\mathsf{f}\otimes_{ \mathcal{O}\mathsf{N}}\mathcal{O}\mathsf{G}\otimes_{\mathcal{O}\mathsf{N}} \mathbf{M}_{\mathsf{N}}).\]
By applying \(\mathbf{M}_{\mathsf{N}}^{\ast}\otimes_{\mathcal{O}\mathsf{N}}\)? to both sides and utilizing Lemma 2.3(ii), we can show that \(U\) in (7.15) is a direct sum of \((\mathcal{O}\mathsf{N},\mathcal{O}\mathsf{N})\)-bimodules each with vertex of the form \(\Delta_{\varphi}Q\), for some \(Q\leq\mathsf{D}\) and \(\varphi:Q\to\mathsf{D}\), with \(\varphi(Q)<_{\mathsf{s}}\mathsf{D}\).
In an entirely analogous way we can prove that
\[\mathbf{M}_{\mathsf{N}}\mid^{\mathsf{D}}(\mathbf{M}_{\mathsf{N}}\otimes_{ \mathcal{O}\mathsf{N}}\mathcal{O}\mathsf{G}\otimes_{\mathcal{O}\mathsf{N}} \mathcal{O}\mathsf{N}\mathsf{f}).\]
Once more, applying \(?\otimes_{\mathcal{ON}}\mathbf{M}_{\mathbf{N}}^{*}\) to both sides, we deduce that \(U\) in (7.15) is a direct sum of \((\mathcal{ON},\mathcal{ON})\)-bimodules each with vertex of the form \(\Delta_{\varphi}Q\), for some \(Q<_{\mathsf{s}}\mathsf{D}\) and \(\varphi:Q\to\mathsf{D}\).
Since we can conjugate each \(Q\) independently from \(\varphi(Q)\), taking the last two paragraphs together gives that \(U\) is a direct sum of \((\mathcal{ON},\mathcal{ON})\)-bimodules each with \(\mathsf{D}\)-small vertex, as desired.
(ii) As in part (i) we have that \(\mathbf{X}\mid_{\mathsf{D}}(\mathcal{OG}\otimes_{\mathcal{ON}}\mathbf{M}_{ \mathbf{N}})\) and \(\mathbf{X}^{*}\mid_{\mathsf{D}}(\mathbf{M}_{\mathbf{N}}^{*}\otimes_{\mathcal{ON }}\mathcal{OG})\). Therefore,
\[\mathbf{X}\otimes_{\mathcal{ON}}\mathbf{X}^{*}\mid\mathcal{OG}\otimes_{ \mathcal{ON}}\mathbf{M}_{\mathbf{N}}\otimes_{\mathcal{ON}}\mathbf{M}_{ \mathbf{N}}^{*}\otimes_{\mathcal{ON}}\mathcal{OG} \tag{7.18}\]
and, by Lemma 2.3(ii), all other summands, as an \((\mathcal{OG},\mathcal{OG})\)-bimodule, have vertex of order strictly smaller than that of \(\Delta\mathsf{D}\). Moreover, by the comments at the beginning of the proof, the right-hand side of (7.18) has a summand isomorphic to \(\mathcal{OG}\otimes_{\mathcal{ON}}\mathcal{ON}\mathsf{f}\otimes_{\mathcal{ON}}\mathcal{OG}\). Since \(\mathcal{OG}\mathsf{b}\) and \(\mathcal{ON}\mathsf{f}\) are super Green correspondents by Lemmas 4.32 and 3.65, we have \(\mathcal{OG}\mathsf{b}\mid\mathcal{OG}\otimes_{\mathcal{ON}}\mathcal{ON}\mathsf{f}\otimes_{\mathcal{ON}}\mathcal{OG}\). Since \(\mathcal{OG}\)b has vertex \(\Delta\mathsf{D}\), the claim follows.
### Relating \(\mathbf{X}_{d}\) and \(\mathbf{X}_{d-1}\)
We now establish a lemma that will be key to the inductive arguments in Section 8, as it allows us to relate \(\mathbf{X}=\mathbf{X}_{d}\) and \(\mathbf{X}_{d-1}\) using \(\mathbf{Y}\).
**Lemma 7.19**.: _Let \(d>1\)._
1. _As_ \((\mathcal{OG},\mathcal{O}(\mathsf{N}\cap\mathsf{H})\mathsf{f})\)_-bisupermodules and as_ \((\mathcal{OG},\mathcal{OL})\)_-bisupermodules,_ \[\mathbf{X}\mid_{\mathsf{D}}\big{(}\mathbf{Y}\otimes_{\mathcal{OH}}(\mathbf{ X}_{d-1}\boxtimes(B^{\varnothing,1})^{(d)})\big{)}.\]
2. _If_ \(\mathbf{X}_{d-1}\) _induces a Morita superequivalence between_ \(\mathcal{ON}_{d-1}\mathsf{f}_{d-1}\) _and_ \(\mathcal{OG}_{d-1}\mathsf{b}_{d-1}\)_, then as_ \((\mathcal{OG},\mathcal{O}(\mathsf{N}\cap\mathsf{H})\mathsf{f})\)_-bisupermodules and as_ \((\mathcal{OG},\mathcal{OL})\)_-bisupermodules,_ \[\mathbf{Y}\otimes_{\mathcal{OH}}\big{(}\mathbf{X}_{d-1}\boxtimes(B^{ \varnothing,1})^{(d)}\big{)}\simeq\mathbf{X}.\]
Proof.: (i) Note by Lemma 3.9, that
\[\mathcal{OG}\otimes_{\mathcal{OH}}(\mathcal{OG}_{d-1}\mathsf{b}_{d-1} \boxtimes\mathbf{M}^{(d)})\otimes_{\mathcal{OH}}\big{(}\mathbf{X}_{d-1} \boxtimes(B^{\varnothing,1})^{(d)}\big{)}\simeq\mathcal{OG}\otimes_{\mathcal{ OH}}(\mathbf{X}_{d-1}\boxtimes\mathbf{M}^{(d)}).\]
Since by definition, we have \(\mathbf{Y}\mid\mathcal{OG}\otimes_{\mathcal{OH}}(\mathcal{OG}_{d-1} \mathsf{b}_{d-1}\boxtimes\mathbf{M}^{(d)})\), it now follows that
\[\mathbf{Y}\otimes_{\mathcal{OH}}\big{(}\mathbf{X}_{d-1}\boxtimes(B^{ \varnothing,1})^{(d)}\big{)}\mid\mathcal{OG}\otimes_{\mathcal{OH}}(\mathbf{ X}_{d-1}\boxtimes\mathbf{M}^{(d)}), \tag{7.20}\]
as \((\mathcal{OG},\mathcal{O}(\mathsf{N}\cap\mathsf{H}))\)-bisupermodules.
Since by definition, we have \(\mathbf{X}_{d-1}\mid\mathcal{OG}_{d-1}\otimes_{\mathcal{ON}_{d-1}}\mathbf{M} _{\mathbf{N}_{d-1}}\), it follows that
\[\mathcal{OG}\otimes_{\mathcal{OH}}(\mathbf{X}_{d-1}\boxtimes \mathbf{M}^{(d)}) \mid\mathcal{OG}\otimes_{\mathcal{OH}}((\mathcal{OG}_{d-1} \otimes_{\mathcal{ON}_{d-1}}\mathbf{M}_{\mathbf{N}_{d-1}})\boxtimes\mathbf{M} ^{(d)}) \tag{7.21}\] \[\simeq\mathcal{OG}\otimes_{\mathcal{OH}}\mathcal{OH}\otimes_{ \mathcal{O}(\mathsf{N}\cap\mathsf{H})}(\mathbf{M}_{\mathbf{N}_{d-1}}\boxtimes \mathbf{M}^{(d)})\] \[\simeq\mathcal{OG}\otimes_{\mathcal{O}(\mathsf{N}\cap\mathsf{H})} (\mathbf{M}_{\mathbf{N}_{d-1}}\boxtimes\mathbf{M}^{(d)}),\]
as \((\mathcal{OG},\mathcal{O}(\mathsf{N}\cap\mathsf{H}))\)-bisupermodules, where the first isomorphism follows from Lemma 3.9.
We also have
\[\mathcal{ON}\otimes_{\mathcal{O}(\mathsf{N}\cap\mathsf{H})}(\mathbf{ M}_{\mathbf{N}_{d-1}}\boxtimes\mathbf{M}^{(d)})\simeq \mathcal{ON}\otimes_{\mathcal{O}(\mathsf{N}\cap\mathsf{H})}(\operatorname{ Ind}_{(\mathbf{N}_{d-1}\times\mathbf{N}_{d-1})_{\mathsf{S}_{d-1}}}^{ \mathbf{N}_{d-1}\times\mathbf{N}_{d-1}}(\mathbf{M}_{\mathbf{L}_{d-1}})_{\mathsf{ S}_{d-1}}\boxtimes\mathbf{M}^{(d)})\] \[\simeq \operatorname{Ind}_{((\mathsf{N}\cap\mathsf{H})\times(\mathsf{N} \cap\mathsf{H}))_{\mathsf{S}_{d-1}}}^{\mathbf{N}\times(\mathsf{N}\cap\mathsf{H}) }(\mathbf{M}_{\mathbf{L}})_{\mathsf{S}_{d}}\] \[\simeq \operatorname{Res}^{\mathbf{N}\times\mathbf{N}}_{\mathsf{N}\times(\mathsf{N}\cap\mathsf{H})} \operatorname{Ind}_{(\mathsf{N}\times\mathsf{N})_{\mathsf{S}_{d}}}^{\mathbf{N} \times\mathbf{N}}(\mathbf{M}_{\mathbf{L}})_{\mathsf{S}_{d}}\simeq \operatorname{Res}_{\mathsf{N}\times(\mathsf{N}\cap\mathsf{H})}^{\mathbf{N}\times \mathbf{N}}\mathbf{M}_{\mathbf{N}},\]
as \((\mathcal{O}\mathsf{N},\mathcal{O}(\mathsf{N}\cap\mathsf{H}))\)-bisupermodules, where the fourth isomorphism follows from Theorem 3.66. Therefore,
\[\mathcal{OG}\otimes_{\mathcal{O}(\mathsf{N}\cap\mathsf{H})}(\mathbf{M}_{ \mathsf{N}_{d-1}}\boxtimes\mathbf{M}^{(d)})\simeq\mathrm{Ind}_{\mathsf{N} \times(\mathsf{N}\cap\mathsf{H})}^{\mathsf{G}\times(\mathsf{N}\cap\mathsf{H} )}\mathrm{Res}_{\mathsf{N}\times(\mathsf{N}\cap\mathsf{H})}^{\mathsf{N}\times \mathsf{N}}\mathbf{M}_{\mathsf{N}}\simeq\mathrm{Res}_{\mathsf{G}\times(\mathsf{ N}\cap\mathsf{H})}^{\mathsf{G}\times\mathsf{N}}\mathrm{Ind}_{\mathsf{N}\times \mathsf{N}}^{\mathsf{G}\times\mathsf{N}}\mathbf{M}_{\mathsf{N}}, \tag{7.22}\]
as \((\mathcal{OG},\mathcal{O}(\mathsf{N}\cap\mathsf{H}))\)-bisupermodules, where the second isomorphism is, once again, an application of Theorem 3.66.
Putting (7.20), (7.21) and (7.22) together yields
\[\mathbf{Y}\otimes_{\mathcal{OH}}\big{(}\mathbf{X}_{d-1}\boxtimes(B^{\varnothing,1})^{(d)}\big{)}\mid\mathrm{Res}_{\mathsf{G}\times(\mathsf{N}\cap\mathsf{H}) }^{\mathsf{G}\times\mathsf{N}}\mathrm{Ind}_{\mathsf{N}\times\mathsf{N}}^{ \mathsf{G}\times\mathsf{N}}\mathbf{M}_{\mathsf{N}}. \tag{7.23}\]
By the definition of \(\mathbf{X}\) and Lemma 7.9, \(\mathrm{Res}_{\mathsf{G}\times(\mathsf{N}\cap\mathsf{H})}^{\mathsf{G}\times \mathsf{N}}(\mathbf{X})\) is an indecomposable summand of the right-hand side of (7.23). We claim that
\[\mathrm{Res}_{\mathsf{G}\times(\mathsf{N}\cap\mathsf{H})}^{\mathsf{G}\times \mathsf{N}}\mathbf{X}\mid_{\mathsf{D}}\big{(}\mathrm{Res}_{\mathsf{G}\times( \mathsf{N}\cap\mathsf{H})}^{\mathsf{G}\times\mathsf{N}}\mathrm{Ind}_{\mathsf{ N}\times\mathsf{N}}^{\mathsf{G}\times\mathsf{N}}\mathbf{M}_{\mathsf{N}}\big{)}. \tag{7.24}\]
We will actually prove the stronger statement
\[{}_{\mathsf{G}}\mathbf{X}_{\mathsf{L}}\mid_{\mathsf{D}}\big{(}\mathrm{Res}_{ \mathsf{G}\times\mathsf{L}}^{\mathsf{G}\times\mathsf{N}}\mathrm{Ind}_{\mathsf{ N}\times\mathsf{N}}^{\mathsf{G}\times\mathsf{N}}\mathbf{M}_{\mathsf{N}}\big{)}. \tag{7.25}\]
Indeed, as in the proof of Lemma 7.9,
\[\mathrm{Res}_{\mathsf{G}\times\mathsf{L}}^{\mathsf{G}\times\mathsf{N}}\mathrm{ Ind}_{\mathsf{N}\times\mathsf{N}}^{\mathsf{G}\times\mathsf{N}}\mathbf{M}_{\mathsf{N}} \simeq\mathrm{Ind}_{\mathsf{N}\times\mathsf{L}}^{\mathsf{G}\times\mathsf{L}} \mathrm{Res}_{\mathsf{N}\times\mathsf{L}}^{\mathsf{N}\times\mathsf{N}}\mathbf{M }_{\mathsf{N}}\simeq\mathrm{Ind}_{\mathsf{N}\times\mathsf{L}}^{\mathsf{G} \times\mathsf{L}}\mathrm{Ind}_{\mathsf{L}\times\mathsf{L}}^{\mathsf{N}\times \mathsf{L}}\mathbf{M}_{\mathsf{L}}\simeq\mathrm{Ind}_{\mathsf{L}\times\mathsf{L }}^{\mathsf{G}\times\mathsf{L}}\mathbf{M}_{\mathsf{L}},\]
where the first isomorphism is due to Theorem 3.66 and the second to Lemma 7.5. The claim (7.25), and hence (7.24), now follows from the statement of Lemma 7.9 and Lemma 7.1(i).
With (7.23) in mind, we now need only show that \(\mathbf{Y}\otimes_{\mathcal{OH}}\big{(}\mathbf{X}_{d-1}\boxtimes(B^{\varnothing,1})^{(d)}\big{)}\) has some summand with vertex \(\Delta\mathsf{D}\), when treated as an \((\mathcal{OG},\mathcal{OL})\)-bimodule.
By Lemma 7.9 and also Remark 3.8 and Lemma 3.67, \(\mathbf{X}_{d-1}\boxtimes\mathbf{M}^{(d)}\) is indecomposable, as an \((\mathcal{OG},\mathcal{OL})\)-bimodule, with vertex \(\Delta\mathsf{D}\). Now, as we have seen in (7.20),
\[\mathbf{Y}\otimes_{\mathcal{OH}}\big{(}\mathbf{X}_{d-1}\boxtimes(B^{\varnothing,1})^{(d)}\big{)}\mid\mathcal{OG}\otimes_{\mathcal{OH}}(\mathbf{X}_{d-1} \boxtimes\mathbf{M}^{(d)}),\]
as \((\mathcal{OG},\mathcal{OL})\)-bimodules. Hence, every summand of \(\mathbf{Y}\otimes_{\mathcal{OH}}\big{(}\mathbf{X}_{d-1}\boxtimes(B^{\varnothing,1})^{(d)}\big{)}\) has vertex contained in \(\Delta\mathsf{D}\), when treated as an \((\mathcal{OG},\mathcal{OL})\)-bimodule. If every summand had vertex strictly contained in \(\Delta\mathsf{D}\), then, by Lemmas 3.62, 3.67 and 2.3(ii), every summand of
\[\mathbf{Y}\otimes_{\mathcal{OH}}\big{(}\mathbf{X}_{d-1}\boxtimes(B^{\varnothing,1})^{(d)}\big{)}\otimes_{\mathcal{OL}}\big{(}\mathbf{X}_{d-1}^{*}\boxtimes(B^{ \varnothing,1})^{(d)}\big{)}\]
has vertex of strictly smaller order than \(\Delta\mathsf{D}\). However, by Lemma 3.9, Remark 5.2 and Lemma 7.13(ii), \(\mathbf{Y}\) is a summand, a contradiction.
(ii) If \(\mathbf{X}_{d-1}\) induces a Morita superequivalence between \(\mathcal{ON}_{d-1}\mathsf{f}_{d-1}\) and \(\mathcal{OG}_{d-1}\mathsf{b}_{d-1}\) then, by Lemma 3.11, \(\mathbf{X}_{d-1}\boxtimes(B^{\varnothing,1})^{(d)}\) induces a Morita superequivalence between \(\mathcal{ON}_{d-1}\mathsf{f}_{d-1}\otimes(B^{\varnothing,1})^{(d)}\cong\mathcal{O} (\mathsf{N}\cap\mathsf{H})\mathsf{f}\) and \(\mathcal{OG}_{d-1}\mathsf{b}_{d-1}\otimes(B^{\varnothing,1})^{(d)}\cong\mathcal{ OH}\mathsf{c}.\) In particular, \(\mathbf{Y}\otimes_{\mathcal{OH}}\big{(}\mathbf{X}_{d-1}\boxtimes(B^{\varnothing,1}) ^{(d)}\big{)}\) is indecomposable as an \((\mathcal{OG},\mathcal{O}(\mathsf{N}\cap\mathsf{H}))\)-bimodule, since
\[\mathbf{Y}\otimes_{\mathcal{OH}}\big{(}\mathbf{X}_{d-1}\boxtimes(B^{\varnothing,1} )^{(d)}\big{)}\otimes_{\mathcal{O}(\mathsf{N}\cap\mathsf{H})}\big{(}\mathbf{X}_{d- 1}\boxtimes(B^{\varnothing,1})^{(d)}\big{)}^{*}\simeq\mathbf{Y}\]
is certainly indecomposable as an \((\mathcal{OG},\mathcal{OH})\)-module. Now (ii) follows from (i).
Recall the notation (4.21),(4.24),(4.27).
**Corollary 7.26**.: _We have \({}_{\mathsf{G}}\mathbf{X}_{\mathsf{L}}\mid_{\mathsf{D}}(\mathcal{OG}\mathsf{c}_{0,d} \otimes_{\mathcal{OL}}\mathbf{M}_{\mathsf{L}})\)._
Proof.: We already know from Lemma 7.10(ii) that \({{}_{\mathsf{G}}\mathbf{X}_{\mathsf{L}}}\mid{{}_{\mathsf{D}}}({\mathcal{O}}{ \mathsf{G}}\otimes_{{\mathcal{O}}\mathsf{L}}{\mathbf{M}_{\mathsf{L}}})\). We, therefore, need only show that \({{}_{\mathsf{G}}\mathbf{X}_{\mathsf{L}}}\mid{\mathcal{O}}{\mathsf{G}}{\mathsf{ c}}_{0,d}\otimes_{{\mathcal{O}}\mathsf{L}}{\mathbf{M}_{\mathsf{L}}}\). The proof proceeds via induction on \(d\). As always, our inductive argument is valid due to Remark 5.2.
When \(d=1\), \(\mathsf{c}_{0}=\mathsf{f}\) and \(\mathsf{c}_{1}=\mathsf{b}\). The result follows via Lemma 7.8(i). Suppose the result is true for \(d-1\). Then, by Lemma 7.19(i),
\[{}_{\mathsf{G}}\mathbf{X}_{\mathsf{L}}\mid\mathbf{Y}\otimes_{\mathcal{O} \mathsf{H}}\big((\mathcal{O}\mathsf{G}_{d-1}\mathsf{c}_{0,d-1}^{\prime}\otimes_{\mathcal{O}\mathsf{L}_{d-1}}\mathbf{M}_{\mathsf{L}_{d-1}})\boxtimes(B^{\varnothing,1})^{(d)}\big),\]
as an \((\mathcal{O}\mathsf{G},\mathcal{O}\mathsf{L})\)-bisupermodule. This, in turn, by the definition of \(\mathbf{Y}\) and Lemma 7.8(ii), is isomorphic to a direct summand of
\[\begin{split}&\quad\mathcal{O}\mathsf{G}\mathsf{b}\otimes_{\mathcal{O}\mathsf{H}}(\mathcal{O}\mathsf{G}_{d-1}\mathsf{b}_{d-1}\boxtimes\mathbf{M}^{(d)})\otimes_{\mathcal{O}\mathsf{H}}\big((\mathcal{O}\mathsf{G}_{d-1}\mathsf{c}_{0,d-1}^{\prime}\otimes_{\mathcal{O}\mathsf{L}_{d-1}}\mathbf{M}_{\mathsf{L}_{d-1}})\boxtimes(B^{\varnothing,1})^{(d)}\big)\\ \simeq&\quad\mathcal{O}\mathsf{G}\mathsf{b}\otimes_{\mathcal{O}\mathsf{H}}\mathcal{O}\mathsf{H}\mathsf{c}_{0,d-1}\otimes_{\mathcal{O}\mathsf{L}}(\mathbf{M}_{\mathsf{L}_{d-1}}\boxtimes\mathbf{M}^{(d)})\\ \simeq&\quad\mathcal{O}\mathsf{G}\mathsf{b}\otimes_{\mathcal{O}\mathsf{H}}\mathcal{O}\mathsf{H}\mathsf{c}_{0,d-1}\otimes_{\mathcal{O}\mathsf{L}}\mathbf{M}_{\mathsf{L}}\\ \simeq&\quad\mathcal{O}\mathsf{G}\mathsf{b}\mathsf{c}_{0,d-1}\otimes_{\mathcal{O}\mathsf{L}}\mathbf{M}_{\mathsf{L}}=\mathcal{O}\mathsf{G}\mathsf{c}_{0,d}\otimes_{\mathcal{O}\mathsf{L}}\mathbf{M}_{\mathsf{L}},\end{split}\]
where the first and second isomorphisms follow from Lemma 3.9.
## 8. Main theorem
We continue with all our notation from Section 7, in particular, \(\rho\) a \(d\)-Rouquier \(\bar{p}\)-core. Recall also the irreducible supercharacters \(\xi_{0},\xi_{1},\ldots,\xi_{\ell}\) of \(B^{\varnothing,1}\) from (5.5) and the irreducible characters \(\xi_{0},\xi_{1}^{\pm},\ldots,\xi_{\ell}^{\pm}\) of \(B^{\varnothing,1}\) from (5.6).
Recall the notation (5.4). We also refer to the tuple \((\mu,j_{1},\ldots,j_{k})\) as even or odd according to whether \((\mu,(p-j_{1},j_{1}),\ldots,(p-j_{k},j_{k}))\) is even or odd, i.e. according to whether the number of odd partitions among \(\mu,(p-j_{1},j_{1}),\ldots,(p-j_{k},j_{k})\) is even or odd.
### Character calculations
In this subsection we gather together all the preparatory results about characters needed to ultimately prove Theorem 8.40.
**Lemma 8.1**.: _Let \(\mu\in\mathscr{P}_{0}(\rho,d-1)\) and \(i\in I\). Then in \(\mathcal{G}_{0}(\mathbb{K}\mathsf{H}\mathsf{c})\) we have_
\[({\mathcal{O}}{\mathsf{G}}_{d-1}{\mathsf{b}}_{d-1}\boxtimes{\mathbf{M}}^{(d)} )\otimes_{{\mathcal{O}\mathsf{H}}}\xi_{\mu,i}=({\mathcal{O}}{\mathsf{G}}_{d-1 }{\mathsf{b}}_{d-1}\boxtimes({\mathbf{M}}^{(d)})^{*})\otimes_{{\mathcal{O} \mathsf{H}}}\xi_{\mu,i}=\sum_{j=0}^{\ell-i}\frac{\varepsilon_{\mu,i} \varepsilon_{i}\varepsilon_{j}}{\varepsilon_{\mu,j}}\xi_{\mu,j}.\]
Proof.: If \(r=1\) and \(d=1\), then \(\mathcal{O}\mathsf{H}\mathsf{c}\cong B^{\varnothing,1}\) and the equalities follow from Proposition 6.13(iv). Otherwise the equalities follow from Proposition 6.13(iv) and Lemma 3.59.
Throughout the rest of this subsection we construct several congruences of characters modulo characters of modules with specific vertices. Given that many of the bimodule isomorphisms we constructed in Section 7.3 were only modulo bimodules with certain vertices (see Lemmas 7.10, 7.13, 7.19(i) and Corollary 7.26) these congruences of characters are the best we can hope for at the moment. However, in §8.2 we will eventually determine the character of \({}_{\mathsf{G}}\mathbf{X}_{\mathsf{L}}\) (see Hypothesis 8.19, proved in Theorem 8.40).
An indecomposable \(\mathcal{O}\mathsf{L}\mathsf{f}\)-module (resp. \(\mathcal{O}\mathsf{N}\mathsf{f}\)-module, \(\mathcal{O}\mathsf{H}\mathsf{c}\)-module or \(\mathcal{O}\mathsf{G}\mathsf{b}\)-module) with vertex \(Q<_{\mathsf{s}}\mathsf{D}\) is said to have vertex of _non-maximal support_. We define \(\operatorname{Irr}_{<_{\mathsf{s}}\mathsf{D}}(\mathcal{O}\mathsf{L}{ \mathsf{f}})\subseteq\operatorname{Irr}(\mathcal{O}\mathsf{L}{\mathsf{f}})\) (resp. \(\operatorname{Irr}_{<_{\mathsf{s}}\mathsf{D}}(\mathcal{O}\mathsf{N}{ \mathsf{f}})\subseteq\operatorname{Irr}(\mathcal{O}\mathsf{N}{\mathsf{f}})\), \(\operatorname{Irr}_{<_{\mathsf{s}}\mathsf{D}}(\mathcal{O}\mathsf{H}{ \mathsf{c}})\subseteq\operatorname{Irr}(\mathcal{O}\mathsf{H}{\mathsf{c}})\) and \(\operatorname{Irr}_{<_{\mathsf{s}}\mathsf{D}}(\mathcal{O}\mathsf{G}{ \mathsf{b}})\subseteq\operatorname{Irr}(\mathcal{O}\mathsf{G}{\mathsf{b}})\)) to be the set of characters of irreducible \(\mathbb{K}\mathsf{L}\mathsf{f}\)-modules (resp. \(\mathbb{K}\mathsf{N}\mathsf{f}\)-modules, \(\mathbb{K}\mathsf{H}\mathsf{c}\)-modules and \(\mathbb{K}\mathsf{G}\mathsf{b}\)-modules) with vertex of non-maximal support.
For any \(Q\leq\mathsf{D}\), we denote by \(\mathbb{Z}\operatorname{Prj}_{Q}(\mathcal{OLf})\) the set of all \(\mathbb{Z}\)-linear combinations of characters of relatively \(Q\)-projective \(\mathcal{OLf}\)-modules.
In what follows we use the identification of \(\mathcal{OLf}\) with \(B^{\rho,0}\otimes(B^{\varnothing,1})^{\otimes d}\) from (4.28) and (3.54),(5.4) to label \(\operatorname{Irr}(\mathcal{OLf})\), i.e.
\[\operatorname{Irr}(\mathcal{OLf})=\{\xi_{\rho,j_{1},\ldots,j_{d}}\mid(\rho,j_ {1},\ldots,j_{d})\text{ is even}\}\sqcup\{\xi_{\rho,j_{1},\ldots,j_{d}}^{\pm} \mid(\rho,j_{1},\ldots,j_{d})\text{ is odd}\}.\]
Note that for \(1\leq k\leq d\) and \(0\leq j_{k}<\ell\) we have that \((\rho,j_{1},\ldots,j_{k},\ldots,j_{d})\) and \((\rho,j_{1},\ldots,j_{k}+1,\ldots,j_{d})\) have the same parity if \(j_{k}\neq 0\) and opposite parities if \(j_{k}=0\).
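For instance (a small worked illustration, using only the fact, recorded again after (8.10) below, that the partition \((p-j,j)\) is odd for \(j>0\) while the \(j=0\) component is even): if \(\ell\geq 3\) and \(d=2\), then
\[(\rho,0,3)\ \text{and}\ (\rho,1,3)\ \text{have opposite parities, whereas}\ (\rho,1,3)\ \text{and}\ (\rho,2,3)\ \text{have the same parity},\]
since incrementing \(j_{k}\) changes the number of odd components precisely when \(j_{k}=0\).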
**Lemma 8.2**.: _Let \(1\leq k\leq d\) and \(Q=\mathsf{D}_{1}\times\cdots\times\hat{\mathsf{D}}_{k}\times\cdots\times \mathsf{D}_{d}\leq\mathsf{D}\). Then \(\mathbb{Z}\operatorname{Prj}_{Q}(\mathcal{OLf})\) is precisely the \(\mathbb{Z}\)-linear span of all_
\[\begin{split}&\xi_{\rho,j_{1},\ldots,j_{k},\ldots,j_{d}}^{(\pm)}+\xi_{\rho,j_{1},\ldots,j_{k}+1,\ldots,j_{d}}^{(\pm)}\qquad\text{with }0<j_{k}<\ell,\\ &\xi_{\rho,j_{1},\ldots,j_{k},\ldots,j_{d}}+\xi_{\rho,j_{1},\ldots,j_{k}+1,\ldots,j_{d}}^{\pm}\qquad\text{with }j_{k}=0\text{ and }(\rho,j_{1},\ldots,j_{d})\text{ even},\\ &\xi_{\rho,j_{1},\ldots,j_{k},\ldots,j_{d}}^{+}+\xi_{\rho,j_{1},\ldots,j_{k},\ldots,j_{d}}^{-}+\xi_{\rho,j_{1},\ldots,j_{k}+1,\ldots,j_{d}}\qquad\text{with }j_{k}=0\text{ and }(\rho,j_{1},\ldots,j_{d})\text{ odd}.\end{split}\]
_In particular, \(\mathbb{Z}\operatorname{Irr}_{<_{\mathsf{s}}\mathsf{D}}(\mathcal{OLf})\subseteq\ker(\varphi)\), where \(\varphi\) is the \(\mathbb{Z}\)-linear map defined by_
\[\varphi:\mathbb{Z}\operatorname{Irr}(\mathcal{OLf})\to\mathbb{Z},\ \xi_{\rho,j_{1},\ldots,j_{d}}^{(\pm)}\mapsto\frac{\varepsilon_{\rho}\prod_{k=1}^{d}\big((-1)^{j_{k}}\varepsilon_{j_{k}}\big)}{\varepsilon_{\rho,j_{1},\ldots,j_{d}}}.\]
Proof.: We first note, by inspecting the Brauer tree of \(B^{\varnothing,1}\) from Lemma 6.2(iii), that \(\mathbb{Z}\operatorname{Prj}(B^{\varnothing,1})\) is the \(\mathbb{Z}\)-linear span of \(\xi_{0}+\xi_{1}^{\pm}\) and \(\xi_{j}^{\pm}+\xi_{j+1}^{\pm}\), for \(0<j<\ell\), and that every projective indecomposable \(B^{\varnothing,1}\)-module is non-self-associate.
For \(r=1\) and \(d=1\), the first claim now follows as, in this case, \(k=1\), \(Q=1\), \(\operatorname{Prj}_{Q}(\mathcal{OLf})=\operatorname{Prj}(\mathcal{OLf})\) and \(B^{\varnothing,1}\cong\mathcal{OLf}\) with corresponding bijection \(\operatorname{Irr}(B^{\varnothing,1})\to\operatorname{Irr}(\mathcal{OLf})\), \(\xi_{j}^{(\pm)}\mapsto\xi_{(1),j}^{(\pm)}\). From now on we assume \(r>1\) or \(d>1\).
Recalling the notation (4.11), we set \(\mathsf{L}_{\hat{\mathsf{A}}_{k}}:=\mathsf{L}\cap\tilde{\mathsf{A}}_{[n] \setminus\mathfrak{P}_{k}}\), so \(\mathsf{L}_{\hat{\mathsf{A}}_{k}}\times_{z}\tilde{\mathsf{S}}_{\mathfrak{P}_{k}}\) is an index \(2\) subgroup of \(\mathsf{L}\). We also set
\[\hat{e}_{\varnothing,1}^{(k)}:=\mathsf{e}_{\rho,0}\otimes\mathsf{e}_{ \varnothing,1}^{(1)}\otimes\cdots\otimes\mathsf{e}_{\varnothing,1}^{(k-1)} \otimes\mathsf{e}_{\varnothing,1}^{(k+1)}\otimes\cdots\otimes\mathsf{e}_{ \varnothing,1}^{(d)}\in\mathcal{OL}_{\hat{\mathsf{A}}_{k}}.\]
(Note this really is in \(\mathcal{OL}_{\hat{\mathsf{A}}_{k}}\) since, by Remark 4.9, \(\mathsf{e}_{\rho,0}\in\mathcal{O}\tilde{\mathsf{A}}_{\mathsf{R}}\) and \(\mathsf{e}_{\varnothing,1}^{(l)}\in\mathcal{O}\tilde{\mathsf{A}}_{\mathfrak{P}_{l}}\), for each \(l\neq k\).)
As \(p\nmid[\mathsf{L}:\mathsf{L}_{\hat{\mathsf{A}}_{k}}\times_{z}\tilde{\mathsf{S}}_{\mathfrak{P}_{k}}]\), for any indecomposable, relatively \(Q\)-projective \(\mathcal{OLf}\)-module \(U\), we have that \(U\mid\operatorname{Ind}^{\mathsf{L}}_{\mathsf{L}_{\hat{\mathsf{A}}_{k}}\times_{z}\tilde{\mathsf{S}}_{\mathfrak{P}_{k}}}\tilde{U}\), for some indecomposable, relatively \(Q\)-projective \(\mathcal{O}(\mathsf{L}_{\hat{\mathsf{A}}_{k}}\times_{z}\tilde{\mathsf{S}}_{\mathfrak{P}_{k}})\mathsf{f}\)-module \(\tilde{U}\). Now, \(\mathcal{O}(\mathsf{L}_{\hat{\mathsf{A}}_{k}}\times_{z}\tilde{\mathsf{S}}_{\mathfrak{P}_{k}})\mathsf{f}\cong\mathcal{OL}_{\hat{\mathsf{A}}_{k}}\hat{e}_{\varnothing,1}^{(k)}\otimes(B^{\varnothing,1})^{(k)}\) and \(Q\leq\mathsf{L}_{\hat{\mathsf{A}}_{k}}\). Therefore, \(\tilde{U}\cong\hat{U}_{k}\boxtimes U_{k}\), for some indecomposable \(\mathcal{OL}_{\hat{\mathsf{A}}_{k}}\hat{e}_{\varnothing,1}^{(k)}\)-module \(\hat{U}_{k}\) and projective, indecomposable \((B^{\varnothing,1})^{(k)}\)-module \(U_{k}\).
Recall that \((B^{\varnothing,1})^{(k)}\cong B^{\varnothing,1}\). Therefore, due to comments in the first paragraph, \(U_{k}\) is non-self-associate. It follows that \(U_{k}\) is not \(\tilde{\mathsf{S}}_{[n]\setminus\mathfrak{P}_{k}}\)-stable and consequently that \(\hat{U}_{k}\boxtimes U_{k}\) is not \(\mathsf{L}\)-stable. (Here, we are using the fact that \(r>1\) or \(d>1\) and so \(\tilde{\mathsf{A}}_{[n]\setminus\mathfrak{P}_{k}}\lneq\tilde{\mathsf{S}}_{[n]\setminus\mathfrak{P}_{k}}\).) Therefore, \(U\) is actually isomorphic to \(\operatorname{Ind}^{\mathsf{L}}_{\mathsf{L}_{\hat{\mathsf{A}}_{k}}\times_{z}\tilde{\mathsf{S}}_{\mathfrak{P}_{k}}}(\hat{U}_{k}\boxtimes U_{k})\).
We have now shown that \(\mathbb{Z}\operatorname{Prj}_{Q}(\mathcal{OLf})\) is precisely the \(\mathbb{Z}\)-linear span of characters of the form \((\hat{\chi}_{k}\boxtimes\chi_{k})\uparrow_{\mathsf{L}_{\hat{\mathsf{A}}_{k}}\times_{z}\tilde{\mathsf{S}}_{\mathfrak{P}_{k}}}^{\mathsf{L}}\), where \(\hat{\chi}_{k}\in\operatorname{Irr}(\mathcal{OL}_{\hat{\mathsf{A}}_{k}}\hat{e}_{\varnothing,1}^{(k)})\) and \(\chi_{k}\in\operatorname{Prj}(\mathcal{O}\tilde{\mathsf{S}}_{\mathfrak{P}_{k}}\mathsf{e}_{\varnothing,1}^{(k)})\).
In particular, by the first paragraph of this proof, \(\mathbb{Z}\operatorname{Prj}_{Q}(\mathcal{O}\mathsf{Lf})\) is the \(\mathbb{Z}\)-linear span of characters of the form \(\big(\hat{\chi}_{k}\otimes(\xi_{j}^{(\pm)}+\xi_{j+1}^{\pm})\big)\uparrow_{\mathsf{L}_{\hat{\mathsf{A}}_{k}}\times_{z}\tilde{\mathsf{S}}_{\mathfrak{P}_{k}}}^{\mathsf{L}}\), where \(\hat{\chi}_{k}\in\operatorname{Irr}(\mathcal{O}\mathsf{L}_{\hat{\mathsf{A}}_{k}}\hat{e}_{\varnothing,1}^{(k)})\) and \(0\leq j<\ell\). The first claim now follows from Lemma 3.55(iii) by considering all four possibilities of \(\hat{\chi}_{k}\) having an even or odd label and \(j=0\) or \(j>0\).
For the second claim we note that an indecomposable \(\mathcal{O}\mathsf{Lf}\)-module has non-maximal vertex if and only if it is relatively \(Q\)-projective, for some \(Q\) of the form considered above. One can now simply check that, for all \(k\), the given characters are in the kernel of \(\varphi\).
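To illustrate the check for one family (a sketch; we use here, without restating it, the convention that \(\varepsilon\) takes the value \(1\) on even labels and \(\sqrt{2}\) on odd ones, as in the manipulations (8.8)–(8.10) below): for the second family in the statement, with \(j_{k}=0\) and \((\rho,j_{1},\ldots,j_{d})\) even, write \(c:=\varepsilon_{\rho}\prod_{l\neq k}((-1)^{j_{l}}\varepsilon_{j_{l}})\) for the factors unaffected by the \(k\)th entry. Then
\[\varphi\big(\xi_{\rho,j_{1},\ldots,0,\ldots,j_{d}}+\xi_{\rho,j_{1},\ldots,1,\ldots,j_{d}}^{\pm}\big)=c\cdot\frac{(-1)^{0}\varepsilon_{0}}{1}+c\cdot\frac{(-1)^{1}\varepsilon_{1}}{\sqrt{2}}=c-c=0,\]
the denominators being \(\varepsilon_{\rho,j_{1},\ldots,0,\ldots,j_{d}}=1\) and \(\varepsilon_{\rho,j_{1},\ldots,1,\ldots,j_{d}}=\sqrt{2}\) by the parity observation preceding Lemma 8.2.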
The next lemma is proved in much the same way as Lemma 8.2 and we leave the proof to the reader.
**Lemma 8.3**.: _Let \(\mu\in\mathscr{P}_{0}(\rho,d-1)\). The following are in \(\mathbb{Z}\mathrm{Irr}_{<_{s}\mathsf{D}}(\mathcal{O}\mathsf{Hc})\):_
* \(\xi_{\mu,j}^{(\pm)}+\xi_{\mu,j+1}^{(\pm)}\)_, for_ \(0<j<\ell\)_._
* \(\xi_{\mu,0}+\xi_{\mu,1}^{\pm}\)_, for_ \(\mu\) _even._
* \(\xi_{\mu,0}^{+}+\xi_{\mu,0}^{-}+\xi_{\mu,1}\)_, for_ \(\mu\) _odd._
**Remark 8.4**.: Note that Lemmas 8.2 and 8.3 are consistent with the comments preceding Lemma 8.2 concerning the labeling of associate pairs. For example, in the case that \(\mu\in\mathscr{P}_{0}(\rho,d-1)\) is even and \(j>0\), we can deduce that
\[\xi_{\mu,j}^{+}+\xi_{\mu,j+1}^{-}=\sum_{\begin{subarray}{c}1\leq i<j\\ j-i\text{ odd}\end{subarray}}(\xi_{\mu,i}^{+}+\xi_{\mu,i+1}^{+})-\sum_{ \begin{subarray}{c}1\leq i<j\\ j-i\text{ even}\end{subarray}}(\xi_{\mu,i}^{+}+\xi_{\mu,i+1}^{+})-(-1)^{j}(\xi_ {\mu,0}+\xi_{\mu,1}^{+})\]
which, by Lemma 8.3, is in \(\mathbb{Z}\mathrm{Irr}_{<_{s}\mathsf{D}}(\mathcal{O}\mathsf{Hc})\). Therefore, our choice of labeling of associate pairs for Lemma 8.3(i) was unimportant.
For the following lemma recall the notation (4.21),(4.24),(4.27).
**Lemma 8.5**.: _The maps_
\[\begin{split}\mathbf{M}_{\mathsf{L}}^{*}\otimes_{\mathcal{O} \mathsf{L}}\mathsf{c}_{0,d}\mathcal{O}\mathsf{G}\otimes_{\mathcal{O}\mathsf{G}}?&:\mathbb{Z}\mathrm{Irr}(\mathcal{O}\mathsf{G}\mathsf{b})\to \mathbb{Z}\mathrm{Irr}(\mathcal{O}\mathsf{Lf})\\ {}_{\mathsf{L}}\mathbf{X}_{\mathsf{G}}^{*}\otimes_{\mathcal{O}\mathsf{G}}?&:\mathbb{Z}\mathrm{Irr}(\mathcal{O}\mathsf{G}\mathsf{b})\to \mathbb{Z}\mathrm{Irr}(\mathcal{O}\mathsf{Lf})\\ \big((\mathbf{M}_{\mathsf{L}_{d-1}}^{*}\otimes_{\mathcal{O} \mathsf{L}_{d-1}}\mathsf{c}_{0,d-1}^{\prime}\mathcal{O}\mathsf{G}_{d-1}) \boxtimes(B^{\varnothing,1})^{(d)}\big)\otimes_{\mathcal{O}\mathsf{H}}?&: \mathbb{Z}\mathrm{Irr}(\mathcal{O}\mathsf{Hc})\to\mathbb{Z}\mathrm{Irr}( \mathcal{O}\mathsf{Lf})\\ \mathcal{O}\mathsf{Lf}\otimes_{\mathcal{O}\mathsf{L}}\mathcal{O} \mathsf{N}\otimes_{\mathcal{O}\mathsf{N}}?&:\mathbb{Z}\mathrm{Irr}( \mathcal{O}\mathsf{Nf})\to\mathbb{Z}\mathrm{Irr}(\mathcal{O}\mathsf{Lf})\end{split}\]
_restrict to maps_
\[\begin{split}\mathbf{M}_{\mathsf{L}}^{*}\otimes_{\mathcal{O} \mathsf{L}}\mathsf{c}_{0,d}\mathcal{O}\mathsf{G}\otimes_{\mathcal{O}\mathsf{G}}?&:\mathbb{Z}\mathrm{Irr}_{<_{\mathsf{s}}\mathsf{D}}(\mathcal{O}\mathsf{G} \mathsf{b})\to\mathbb{Z}\mathrm{Irr}_{<_{\mathsf{s}}\mathsf{D}}(\mathcal{O}\mathsf{Lf}) \\ {}_{\mathsf{L}}\mathbf{X}_{\mathsf{G}}^{*}\otimes_{\mathcal{O}\mathsf{G}}?&: \mathbb{Z}\mathrm{Irr}_{<_{\mathsf{s}}\mathsf{D}}(\mathcal{O}\mathsf{G}\mathsf{b})\to \mathbb{Z}\mathrm{Irr}_{<_{\mathsf{s}}\mathsf{D}}(\mathcal{O}\mathsf{Lf})\\ \big((\mathbf{M}_{\mathsf{L}_{d-1}}^{*}\otimes_{\mathcal{O} \mathsf{L}_{d-1}}\mathsf{c}_{0,d-1}^{\prime}\mathcal{O}\mathsf{G}_{d-1}) \boxtimes(B^{\varnothing,1})^{(d)}\big)\otimes_{\mathcal{O}\mathsf{H}}?&: \mathbb{Z}\mathrm{Irr}_{<_{\mathsf{s}}\mathsf{D}}(\mathcal{O}\mathsf{Hc})\to \mathbb{Z}\mathrm{Irr}_{<_{\mathsf{s}}\mathsf{D}}(\mathcal{O}\mathsf{Lf})\\ \mathcal{O}\mathsf{Lf}\otimes_{\mathcal{O}\mathsf{L}}\mathcal{O} \mathsf{N}\otimes_{\mathcal{O}\mathsf{N}}?&:\mathbb{Z}\mathrm{Irr}_{<_{\mathsf{s} }\mathsf{D}}(\mathcal{O}\mathsf{Nf})\to\mathbb{Z}\mathrm{Irr}_{<_{\mathsf{s}}\mathsf{D}}( \mathcal{O}\mathsf{Lf})\end{split}\]
_(For the third map we are assuming that \(d>1\).) Furthermore, the induced maps,_
\[\begin{split}\mathbf{M}_{\mathsf{L}}^{*}\otimes_{\mathcal{O} \mathsf{L}}\mathsf{c}_{0,d}\mathcal{O}\mathsf{G}\otimes_{\mathcal{O}\mathsf{G}}?&:\mathbb{Z}\mathrm{Irr}(\mathcal{O}\mathsf{G}\mathsf{b})/ \mathbb{Z}\mathrm{Irr}_{<_{\mathsf{s}}\mathsf{D}}(\mathcal{O}\mathsf{G}\mathsf{b})\to \mathbb{Z}\mathrm{Irr}(\mathcal{O}\mathsf{Lf})/\mathbb{Z}\mathrm{Irr}_{<_{\mathsf{s}} \mathsf{D}}(\mathcal{O}\mathsf{Lf})\\ {}_{\mathsf{L}}\mathbf{X}_{\mathsf{G}}^{*}\otimes_{\mathcal{O}\mathsf{G}}?&:\mathbb{Z}\mathrm{Irr}(\mathcal{O}\mathsf{G}\mathsf{b})/ \mathbb{Z}\mathrm{Irr}_{<_{\mathsf{s}}\mathsf{D}}(\mathcal{O}\mathsf{G}\mathsf{b})\to \mathbb{Z}\mathrm{Irr}(\mathcal{O}\mathsf{Lf})/\mathbb{Z}\mathrm{Irr}_{<_{\mathsf{s}} \mathsf{D}}(\mathcal{O}\mathsf{Lf}),\end{split}\]
_coincide._
Proof.: We first show that each of our four bimodules is a direct sum of bimodules each with vertex contained in \(\Delta\mathsf{D}\). By Lemma 7.12(i), \(\mathbf{M}_{\mathsf{L}}^{*}\otimes_{\mathcal{O}\mathsf{L}}\mathsf{c}_{0,d}\mathcal{O}\mathsf{G}\) is a direct sum of \((\mathcal{O}\mathsf{L},\mathcal{O}\mathsf{G})\)-bimodules each with vertex contained in \(\Delta\mathsf{D}\) and, by Lemma 7.12(iii), \({}_{\mathsf{L}}\mathbf{X}_{\mathsf{G}}^{*}\) has vertex \(\Delta\mathsf{D}\). By Remarks 5.2, 3.8 and Lemma 3.67,
\[(\mathbf{M}_{\mathsf{L}_{d-1}}^{*}\otimes_{\mathcal{O}\mathsf{L}_{d-1}}\mathsf{ c}_{0,d-1}^{\prime}\mathsf{OG}_{d-1})\boxtimes(B^{\varnothing,1})^{(d)}\]
is also a direct sum of \((\mathcal{O}\mathsf{L},\mathcal{O}\mathsf{H})\)-bimodules each with vertex contained in \(\Delta\mathsf{D}\). Finally, \(\mathcal{O}\mathsf{L}\mathsf{f}\) is an \((\mathcal{O}\mathsf{L}\mathsf{f},\mathcal{O}\mathsf{L}\mathsf{f})\)-bimodule with vertex \(\Delta\mathsf{D}\) and so \(\mathcal{O}\mathsf{L}\mathsf{f}\otimes_{\mathcal{O}\mathsf{L}}\mathsf{ON}\) is a direct sum of \((\mathcal{O}\mathsf{L}\mathsf{f},\mathcal{O}\mathsf{M}\mathsf{f})\)-bimodules each with vertex contained in \(\Delta\mathsf{D}\).
Lemma 2.3(i) now tells us that all four bimodules take a module with vertex \(Q\leq\mathsf{D}\) to a direct sum of modules each with vertex contained in \(\mathsf{D}\cap{}^{g}Q\), for some \(g\in\mathsf{G}\). We claim that \(\mathsf{D}\cap{}^{g}Q<_{\mathsf{s}}\mathsf{D}\), if \(Q<_{\mathsf{s}}\mathsf{D}\). This will complete the proof of the first part of the lemma. If \(g\notin\mathsf{N}\), then \(\mathsf{D}\cap{}^{g}Q\leq\mathsf{D}\cap{}^{g}\mathsf{D}<_{\mathsf{s}}\mathsf{D}\) by Lemma 7.1(i). If \(g\in\mathsf{N}\), then the set of fixed points of \({}^{g}Q\), and hence \(\mathsf{D}\cap{}^{g}Q\), on \([n]\) strictly contains \(\mathsf{R}\).
Now, dualizing Corollary 7.26 using Lemma 3.62, we have
\[{}_{\mathsf{L}}\mathbf{X}_{\mathsf{G}}^{*}\mid_{\mathsf{D}}(\mathcal{OG}\mathsf{c}_{0,d}\otimes_{\mathcal{O}\mathsf{L}}\mathbf{M}_{\mathsf{L}})^{*}\simeq\mathbf{M}_{\mathsf{L}}^{*}\otimes_{\mathcal{O}\mathsf{L}}\mathsf{c}_{0,d}\mathcal{OG},\]
where the isomorphism follows from Lemmas 3.26, 7.12(i) and 3.22. The second part of the lemma now follows from Lemma 2.3(i).
**Lemma 8.6**.: _Let \(\lambda\in\mathscr{P}_{0}(\rho,d)\). The following congruence holds modulo \(\mathbb{Z}\mathrm{Irr}_{<_{\mathsf{s}}\mathsf{D}}(\mathcal{O}\mathsf{H}\mathsf{ c})\):_
\[(\mathcal{OG}_{d-1}\mathsf{b}_{d-1}\boxtimes(\mathbf{M}^{(d)})^{*})\otimes_{ \mathcal{O}\mathsf{H}}\big{(}\xi_{\lambda}\downarrow_{\mathsf{H},\mathsf{c}}^ {\mathsf{G},\mathsf{b}}\big{)}\equiv\sum_{j\in I}\sum_{\mu\in\mathscr{P}_{0}^{ j}(\lambda)^{-}}\frac{\varepsilon_{\mu,\lambda}\varepsilon_{\lambda,j}}{ \varepsilon_{\mu}\varepsilon_{j}}\,\xi_{\mu,j}.\]
Proof.: First note that, by Lemma 5.8(ii), every superconstituent of \(\xi_{\lambda}\downarrow_{\mathsf{H},\mathsf{c}}^{\mathsf{G},\mathsf{b}}\) must be of the form \(\xi_{\mu,i}\), for some \(\mu\in\mathscr{P}_{0}^{j}(\lambda)^{-}\) and \(i,j\in I\). Moreover, the coefficient is
\[\begin{cases}\frac{\varepsilon_{\lambda}\varepsilon_{\mu,\lambda}\varepsilon_{i} }{\varepsilon_{\mu,i}}&\text{if }j\leq\ell-i\\ 0&\text{otherwise}.\end{cases}\]
Let \(j\in I\) and \(\mu\in\mathscr{P}_{0}^{j}(\lambda)^{-}\). By Lemma 8.1, we now have that, for \(m\in I\), \(\xi_{\mu,m}\) appears as a superconstituent in \((\mathcal{OG}_{d-1}\mathsf{b}_{d-1}\boxtimes(\mathbf{M}^{(d)})^{*})\otimes_{ \mathcal{O}\mathsf{H}}\big{(}\xi_{\lambda}\downarrow_{\mathsf{H},\mathsf{c}}^ {\mathsf{G},\mathsf{b}}\big{)}\) with coefficient
\[\sum_{i=0}^{\min\{\ell-j,\ell-m\}}\frac{\varepsilon_{\lambda} \varepsilon_{\mu,\lambda}\varepsilon_{i}}{\varepsilon_{\mu,i}}\frac{\varepsilon _{\mu,i}\varepsilon_{i}\varepsilon_{m}}{\varepsilon_{\mu,m}} = \sum_{i=0}^{\min\{\ell-j,\ell-m\}}\frac{\varepsilon_{\lambda} \varepsilon_{\mu,\lambda}\varepsilon_{i}^{2}\varepsilon_{m}}{\varepsilon_{\mu,m}}\] \[= \begin{cases}\sum_{i=0}^{\ell-j}\frac{\varepsilon_{\lambda} \varepsilon_{\mu,\lambda}\varepsilon_{i}^{2}\varepsilon_{m}}{\varepsilon_{\mu,m}} &\text{if }0\leq m\leq j\\ \sum_{i=0}^{\ell-m}\frac{\varepsilon_{\lambda}\varepsilon_{\mu,\lambda} \varepsilon_{i}^{2}\varepsilon_{m}}{\varepsilon_{\mu,m}}&\text{if }j<m\leq\ell \end{cases} = \begin{cases}(2\ell-2j+1)\frac{\varepsilon_{\lambda}\varepsilon_{\mu, \lambda}\varepsilon_{m}}{\varepsilon_{\mu,m}}&\text{if }0\leq m\leq j\\ (2\ell-2m+1)\frac{\varepsilon_{\lambda}\varepsilon_{\mu,\lambda}\varepsilon_{m}}{ \varepsilon_{\mu,m}}&\text{if }j<m\leq\ell.\end{cases}\]
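(The evaluation of these inner sums uses only that \(\varepsilon_{0}^{2}=1\) and \(\varepsilon_{i}^{2}=2\) for \(i>0\) — the partition \((p-i,i)\) being odd precisely when \(i>0\), as noted after (8.10) below — so that, for example,
\[\sum_{i=0}^{\ell-j}\varepsilon_{i}^{2}=1+2(\ell-j)=2\ell-2j+1,\]
and similarly \(\sum_{i=0}^{\ell-m}\varepsilon_{i}^{2}=2\ell-2m+1\).)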
We have now shown that
\[\big{(}\mathcal{OG}_{d-1}\mathsf{b}_{d-1}\boxtimes(\mathbf{M}^{(d)})^{* }\big{)}\otimes_{\mathcal{O}\mathsf{H}}\big{(}\xi_{\lambda}\downarrow_{\mathsf{H}, \mathsf{c}}^{\mathsf{G},\mathsf{b}}\big{)} \tag{8.7}\] \[= \sum_{j\in I}\sum_{\mu\in\mathscr{P}_{0}^{j}(\lambda)^{-}}\Bigl{(} \sum_{m=0}^{j}(2\ell-2j+1)\frac{\varepsilon_{\lambda}\varepsilon_{\mu,\lambda} \varepsilon_{m}}{\varepsilon_{\mu,m}}\xi_{\mu,m}+\sum_{m=j+1}^{\ell}(2\ell-2m+1) \frac{\varepsilon_{\lambda}\varepsilon_{\mu,\lambda}\varepsilon_{m}}{\varepsilon_{\mu,m} }\xi_{\mu,m}\Bigr{)}.\]
If \(\mu\) is even, we have the following congruence modulo \(\mathbb{Z}\mathrm{Irr}_{<_{\mathsf{s}}\mathsf{D}}(\mathcal{O}\mathsf{Hc})\):
\[\sum_{m=0}^{j}(2\ell-2j+1)\frac{\varepsilon_{\lambda}\varepsilon_{\mu,\lambda} \varepsilon_{m}}{\varepsilon_{\mu,m}}\xi_{\mu,m}=\sum_{m=0}^{j}(2\ell-2j+1) \varepsilon_{\lambda}^{2}\xi_{\mu,m}\equiv(2\ell-2j+1)\frac{\varepsilon_{ \lambda}^{2}}{\varepsilon_{j}^{2}}\xi_{\mu,j}, \tag{8.8}\]
where the congruence follows by subtracting
\[\sum_{m=0}^{j-1}(2\ell-2j+1)\frac{\varepsilon_{\lambda}^{2}}{\varepsilon_{j}^ {2}}\Big{(}\tfrac{2}{\varepsilon_{m}^{2}}\xi_{\mu,m}+\xi_{\mu,m+1}\Big{)},\]
which, by Lemma 8.3(i),(ii), is in \(\mathbb{Z}\mathrm{Irr}_{<_{\mathsf{s}}\mathsf{D}}(\mathcal{O}\mathsf{Hc})\). (Note that, since \(\mu\) is even, by Lemma 5.3(iii), \(\lambda\) must be odd, unless \(j=0\). Therefore, \(\varepsilon_{\lambda}^{2}/\varepsilon_{j}^{2}\) is an integer.)
If \(\mu\) is odd, we have the following congruence modulo \(\mathbb{Z}\mathrm{Irr}_{<_{\mathsf{s}}\mathsf{D}}(\mathcal{O}\mathsf{Hc})\):
\[\sum_{m=0}^{j}(2\ell-2j+1)\frac{\varepsilon_{\lambda}\varepsilon_{\mu, \lambda}\varepsilon_{m}}{\varepsilon_{\mu,m}}\xi_{\mu,m}=\sum_{m=0}^{j}(2\ell -2j+1)\varepsilon_{m}^{2}\xi_{\mu,m}\equiv(2\ell-2j+1)\xi_{\mu,j}, \tag{8.9}\]
where the congruence follows by subtracting
\[\sum_{m=0}^{j-1}(2\ell-2j+1)(\xi_{\mu,m}+\xi_{\mu,m+1}),\]
which, by Lemma 8.3(i),(iii), is in \(\mathbb{Z}\mathrm{Irr}_{<_{\mathsf{s}}\mathsf{D}}(\mathcal{O}\mathsf{Hc})\).
Next note that, for \(j<\ell\), we have the following congruence modulo \(\mathbb{Z}\mathrm{Irr}_{<_{\mathsf{s}}\mathsf{D}}(\mathcal{O}\mathsf{Hc})\):
\[\sum_{m=j+1}^{\ell}(2\ell-2m+1)\frac{\varepsilon_{\lambda} \varepsilon_{\mu,\lambda}\varepsilon_{m}}{\varepsilon_{\mu,m}}\xi_{\mu,m} \tag{8.10}\] \[= \sum_{m=j+1}^{\ell}(2\ell-2m+1)\varepsilon_{\lambda}\varepsilon_{ \mu,\lambda}\varepsilon_{\mu}\xi_{\mu,m}\equiv(\ell-j)\varepsilon_{\lambda} \varepsilon_{\mu,\lambda}\varepsilon_{\mu}\xi_{\mu,j+1},\]
where the equality holds since \((p-m,m)\) is odd for \(m>0\) and the congruence holds by subtracting
\[\sum_{m=j+1}^{\ell-1}(\ell-m)\varepsilon_{\lambda}\varepsilon_{\mu,\lambda} \varepsilon_{\mu}(\xi_{\mu,m}+\xi_{\mu,m+1}),\]
which, by Lemma 8.3(i), is in \(\mathbb{Z}\mathrm{Irr}_{<_{\mathsf{s}}\mathsf{D}}(\mathcal{O}\mathsf{Hc})\).
Denote by \(\mathscr{P}_{0}^{j}(\lambda)_{\bar{0}}^{-}\) (resp. \(\mathscr{P}_{0}^{j}(\lambda)_{\bar{1}}^{-}\)) the set of all even (resp. odd) partitions in \(\mathscr{P}_{0}^{j}(\lambda)^{-}\). Putting (8.7), (8.8), (8.9) and (8.10) together, we now have that
\[\begin{split}(\mathcal{O}\mathsf{G}_{d-1}\mathsf{b}_{d-1}\boxtimes(\mathbf{M}^{(d)})^{*})\otimes_{\mathcal{O}\mathsf{H}}\big(\xi_{\lambda}\downarrow_{\mathsf{H},\mathsf{c}}^{\mathsf{G},\mathsf{b}}\big) \equiv&\sum_{j\in I}\sum_{\mu\in\mathscr{P}_{0}^{j}(\lambda)_{\bar{0}}^{-}}\Big((2\ell-2j+1)\frac{\varepsilon_{\lambda}^{2}}{\varepsilon_{j}^{2}}\,\xi_{\mu,j}+(\ell-j)\varepsilon_{\lambda}\varepsilon_{\mu,\lambda}\varepsilon_{\mu}\,\xi_{\mu,j+1}\Big)\\ &+\sum_{j\in I}\sum_{\mu\in\mathscr{P}_{0}^{j}(\lambda)_{\bar{1}}^{-}}\Big((2\ell-2j+1)\,\xi_{\mu,j}+(\ell-j)\varepsilon_{\lambda}\varepsilon_{\mu,\lambda}\varepsilon_{\mu}\,\xi_{\mu,j+1}\Big)\\ \equiv&\sum_{j\in I}\sum_{\mu\in\mathscr{P}_{0}^{j}(\lambda)^{-}}\frac{\varepsilon_{\mu,\lambda}\varepsilon_{\lambda,j}}{\varepsilon_{\mu}\varepsilon_{j}}\,\xi_{\mu,j}\end{split}\]
modulo \(\mathbb{Z}\mathrm{Irr}_{<_{\mathsf{s}}\mathsf{D}}(\mathcal{O}\mathsf{H}\mathsf{c})\). This completes the claim, noting that, by Lemma 5.3(iii), \(\lambda\) and \(\mu\) must have opposite parity unless \(j=0\).
Recall the notation \(\Lambda(I,d)\) and (2.18),(2.17) from §2.5. For \(k\in I\), we denote
\[\underline{\delta}_{k}:=(0,\ldots,0,1,0,\ldots,0)\in\Lambda(I,1)\]
with \(1\) in the \(k\)th position. We can add the compositions coordinate-wise. For example, for \(\underline{d}=(d_{0},\ldots,d_{\ell})\in\Lambda(I,d)\) we have
\[\underline{d}+\underline{\delta}_{k}=(d_{0},\ldots,d_{k-1},d_{k}+1,d_{k+1}, \ldots,d_{\ell})\in\Lambda(I,d+1).\]
Similarly, \(\underline{d}-\underline{\delta}_{k}\in\Lambda(I,d-1)\) makes sense if \(d_{k}>0\).
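For example (an illustrative computation only): if \(\ell=2\), so that \(I=\{0,1,2\}\), and \(\underline{d}=(2,1,0)\in\Lambda(I,3)\), then
\[\underline{d}+\underline{\delta}_{2}=(2,1,1)\in\Lambda(I,4),\qquad\underline{d}-\underline{\delta}_{0}=(1,1,0)\in\Lambda(I,2),\]
while \(\underline{d}-\underline{\delta}_{2}\) does not make sense since \(d_{2}=0\).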
For \(\underline{d}=(d_{0},\ldots,d_{\ell})\in\Lambda(I,d)\), we now introduce the special notation
\[\xi_{\rho,\underline{d}}^{(\pm)}:=\xi_{\rho,0^{d_{0}},\ldots,\ell^{d_{\ell}}}^{(\pm)}:=\xi_{\rho,0,\ldots,0,\,\ldots,\,\ell,\ldots,\ell}^{(\pm)}\in\operatorname{Irr}(\mathbb{K}\mathsf{L}\mathsf{f})\text{ and }\varepsilon_{\rho,\underline{d}}:=\varepsilon_{\rho,0^{d_{0}},\ldots,\ell^{d_{\ell}}}:=\varepsilon_{\rho,0,\ldots,0,\,\ldots,\,\ell,\ldots,\ell}\]
where \(i\) is repeated \(d_{i}\) times for all \(i\in I\).
Similarly, for \(\underline{d}\in\Lambda(I,d-1)\) and \(k\in I\), we can make sense of \(\xi_{\rho,\underline{d},k}^{(\pm)}\), \(\xi_{\rho,k,\underline{d}}^{(\pm)}\), \(\varepsilon_{\rho,\underline{d},k}\), etc.:
\[\xi_{\rho,\underline{d},k}^{(\pm)}:=\xi_{\rho,0^{d_{0}},\ldots,\ell^{d_{\ell}},k}^{(\pm)}\in\operatorname{Irr}(\mathbb{K}\mathsf{L}\mathsf{f})\text{ and }\varepsilon_{\rho,\underline{d},k}:=\varepsilon_{\rho,0^{d_{0}},\ldots,\ell^{d_{\ell}},k}.\]
Note that \(\varepsilon_{\rho,\underline{d},k}=\varepsilon_{\rho,\underline{d}+\underline{\delta}_{k}}\), but the analogous equality is not in general true for the \(\xi\)'s.
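For example (illustrative only): with \(\ell=1\), \(\underline{d}=(1,1)\in\Lambda(I,2)\) and \(k=0\), we have \(\xi_{\rho,\underline{d},0}^{(\pm)}=\xi_{\rho,0,1,0}^{(\pm)}\) while \(\xi_{\rho,\underline{d}+\underline{\delta}_{0}}^{(\pm)}=\xi_{\rho,0,0,1}^{(\pm)}\); the two labels list the same entries in different orders, so the corresponding \(\varepsilon\)'s agree, but the characters themselves need not be equal.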
For \(\underline{d}\in\Lambda(I,d)\), we denote by \(\xi_{\rho,\underline{d}}^{\tilde{\mathcal{S}}_{d}}\) the sum of the irreducible character \(\xi_{\rho,\underline{d}}^{(\pm)}\) with all its \(\mathsf{N}\)-conjugates and their associates. Note that this is the sum of exactly \(\binom{d}{\underline{d}}\varepsilon_{\rho,\underline{d}}^{2}\) irreducible characters of \(\mathbb{K}\mathsf{L}\mathsf{f}\). Similarly, for \(\underline{d}\in\Lambda(I,d-1)\), we define \(\xi_{\rho,\underline{d},k}^{\tilde{\mathcal{S}}_{d-1}}\) to be the sum of the irreducible character \(\xi_{\rho,\underline{d},k}^{(\pm)}\) with all its \(\mathbf{N}_{d-1}\)-conjugates and their associates. Note that it is the sum of exactly \(\binom{d-1}{\underline{d}}\varepsilon_{\rho,\underline{d}+\underline{\delta}_{k}}^{2}\) irreducible characters of \(\mathbb{K}\mathsf{L}\mathsf{f}\).
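As a sanity check on this count (assuming, per (2.18), that \(\binom{d}{\underline{d}}\) denotes the multinomial coefficient \(d!/\prod_{i\in I}d_{i}!\)): take \(\ell=1\), \(d=2\) and \(\underline{d}=(1,1)\), and suppose the tuple \((\rho,0,1)\) is odd, so that \(\varepsilon_{\rho,\underline{d}}^{2}=2\). The \(\mathsf{N}\)-conjugates of \(\xi_{\rho,0,1}^{\pm}\) are labeled by the \(\binom{2}{\underline{d}}=2\) arrangements \((0,1)\) and \((1,0)\) of the entries, and each label carries an associate pair, so \(\xi_{\rho,\underline{d}}^{\tilde{\mathcal{S}}_{2}}\) is a sum of \(2\cdot 2=4=\binom{d}{\underline{d}}\varepsilon_{\rho,\underline{d}}^{2}\) irreducible characters.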
Recall the notation (2.16) and other combinatorial notation from §2.5.
**Lemma 8.11**.: _Let \(\underline{d}\in\Lambda(I,d)\) and \(\lambda\in\mathscr{P}_{0}(\rho,\underline{d})\). Then_
\[{}_{\mathsf{L}}\mathbf{X}_{\mathsf{G}}^{*}\otimes_{\mathcal{O}\mathsf{G}}\xi_{\lambda}\equiv\frac{\varepsilon_{\lambda}\,2^{(|\lambda^{(0)}|-h(\lambda^{(0)}))/2}}{\varepsilon_{\rho,\underline{d}}}\,K(\lambda)\,\xi_{\rho,\underline{d}}^{\tilde{\mathsf{S}}_{d}}\,\,(\mathrm{mod}\,\,\mathbb{Z}\mathrm{Irr}_{<_{\mathsf{s}}\mathsf{D}}(\mathbb{KLf}))\,.\]
Proof.: Throughout this proof all congruences will be assumed to be modulo \(\mathbb{Z}\mathrm{Irr}_{<_{\mathsf{s}}\mathsf{D}}(\mathbb{KLf})\).
We note that, with the last part of Lemma 8.5 in mind, it is enough to prove that
\[\mathbf{M}_{\mathsf{L}}^{*}\otimes_{\mathsf{OL}}{\mathsf{c}}_{0,d}\mathcal{OG }\otimes_{\mathsf{OG}}\xi_{\lambda}\equiv\frac{\varepsilon_{\lambda}2^{(| \lambda^{(0)}|-h(\lambda^{(0)}))/2}}{\varepsilon_{\rho,\underline{d}}}K(\lambda )\xi_{\rho,\underline{d}}^{\tilde{\mathcal{S}}_{d}}. \tag{8.12}\]
We prove (8.12) by induction on \(d\). We first check that the statement holds for \(d=1\). In this case \(|\lambda^{(0)}|=h(\lambda^{(0)})\), \(K^{\prime}_{\lambda^{(0)}}=K_{\lambda^{(1)}}=\cdots=K_{\lambda^{(\ell)}}=1\) and \(\underline{d}=\underline{\delta}_{k}\) for some \(k\in I\). By Lemma 5.3(iii), \(\varepsilon_{\lambda}=\varepsilon_{\rho,k}\). We also have \({\mathsf{c}}_{0}={\mathsf{c}}\) and \({\mathsf{c}}_{1}={\mathsf{b}}\). We, therefore, need to show that
\[\mathbf{M}_{\mathsf{L}}^{*}\otimes_{\mathsf{OL}}\mathsf{c}\mathcal{O}\mathsf{G}\mathsf{b}\otimes_{\mathsf{OG}}\xi_{\lambda}\equiv\frac{\varepsilon_{\lambda}}{\varepsilon_{\rho,k}}\xi_{\rho,k}=\xi_{\rho,k}.\]
This now agrees with taking \(d=1\) in Lemma 8.6, as applying Lemma 5.3(iii) again gives
\[\frac{\varepsilon_{\rho,\lambda}\varepsilon_{\lambda,k}}{\varepsilon_{\rho} \varepsilon_{k}}=\frac{\varepsilon_{k}\varepsilon_{\rho}}{\varepsilon_{\rho} \varepsilon_{k}}=1.\]
We now assume \(d>1\) and that (8.12) holds for all \(\mu\in\mathscr{P}(\rho,d-1)\). Note that, by Remark 5.2, the inductive steps we make below are valid.
Now,
\[\begin{split}&\mathbf{M}_{\mathsf{L}}^{*}\otimes_{\mathcal{O}\mathsf{L}} \mathsf{c}_{0,d}\mathcal{OG}\\ &\simeq(\mathbf{M}_{\mathsf{L}}^{*}\otimes_{\mathcal{O}\mathsf{L}} \mathsf{c}_{0,d-1}\mathcal{O}\mathsf{H})\otimes_{\mathcal{O}\mathsf{H}} \mathcal{OG}_{d}\\ &\simeq(\mathbf{M}_{\mathsf{L}_{d-1}}^{*}\boxtimes(\mathbf{M}^{( d)})^{*})\otimes_{\mathcal{O}\mathsf{L}}\left(\mathsf{c}_{0,d-1}^{\prime} \mathcal{OG}_{d-1}\boxtimes(B^{\varnothing,1})^{(d)}\right)\otimes_{\mathcal{O }\mathsf{H}}\mathcal{OG}_{d}\\ &\simeq\left((\mathbf{M}_{\mathsf{L}_{d-1}}^{*}\otimes_{\mathcal{O }\mathsf{L}_{d-1}}\mathsf{c}_{0,d-1}^{\prime}\mathcal{OG}_{d-1})\boxtimes( \mathbf{M}^{(d)})^{*}\right)\otimes_{\mathcal{O}\mathsf{H}}\mathcal{OG}_{d}\\ &\simeq\left((\mathbf{M}_{\mathsf{L}_{d-1}}^{*}\otimes_{\mathcal{O }\mathsf{L}_{d-1}}\mathsf{c}_{0,d-1}^{\prime}\mathcal{OG}_{d-1})\boxtimes(B^{ \varnothing,1})^{(d)}\right)\otimes_{\mathcal{O}\mathsf{H}}(\mathcal{OG}_{d- 1}\mathsf{c}_{d-1}^{\prime}\boxtimes(\mathbf{M}^{(d)})^{*})\otimes_{\mathcal{ O}\mathsf{H}}\mathcal{OG}_{d},\end{split}\]
where the second isomorphism follows from Lemmas 7.12(i) and 3.24 and the third and fourth from Lemma 3.9. Since \(\mathsf{c}_{d}=\mathsf{b}\), \(\mathsf{c}_{d-1}=\mathsf{c}\) and \(\mathsf{c}_{d-1}^{\prime}=\mathsf{b}_{d-1}\), Lemma 8.6 now gives
\[\begin{split}&\mathbf{M}_{\mathsf{L}}^{*}\otimes_{\mathcal{O} \mathsf{L}}\mathsf{c}_{0,d}\mathcal{OG}\otimes_{\mathcal{OG}}\xi_{\lambda}\\ &=\left((\mathbf{M}_{\mathsf{L}_{d-1}}^{*}\otimes_{\mathcal{O} \mathsf{L}_{d-1}}\mathsf{c}_{0,d-1}^{\prime}\mathcal{OG}_{d-1})\boxtimes(B^{ \varnothing,1})^{(d)}\right)\otimes_{\mathcal{O}\mathsf{H}}\left(\mathcal{OG} _{d-1}\mathsf{b}_{d-1}\boxtimes(\mathbf{M}^{(d)})^{*}\right)\otimes_{\mathcal{ O}\mathsf{H}}\left(\xi_{\lambda}\downarrow_{\mathsf{H},\mathsf{c}}^{\mathsf{G}, \mathsf{b}}\right)\\ &\equiv\left((\mathbf{M}_{\mathsf{L}_{d-1}}^{*}\otimes_{\mathcal{O }\mathsf{L}_{d-1}}\mathsf{c}_{0,d-1}^{\prime}\mathcal{OG}_{d-1})\boxtimes(B^{ \varnothing,1})^{(d)}\right)\otimes_{\mathcal{O}\mathsf{H}}\Big{(}\sum_{k\in I }\sum_{\mu\in\mathscr{P}_{0}^{k}(\lambda)^{-}}\frac{\varepsilon_{\mu, \lambda}\varepsilon_{\lambda,k}}{\varepsilon_{\mu}\varepsilon_{k}}\xi_{\mu,k} \Big{)}.\end{split}\]
Implicit in the above calculation is that, by Lemma 8.5,
\[\left((\mathbf{M}_{\mathsf{L}_{d-1}}^{*}\otimes_{\mathcal{O}\mathsf{L}_{d-1}} \mathsf{c}_{0,d-1}^{\prime}\mathcal{OG}_{d-1})\boxtimes(B^{\varnothing,1})^{ (d)}\right)\otimes_{\mathcal{O}\mathsf{H}}?\]
maps \(\mathbb{Z}\mathrm{Irr}_{<_{s}\mathsf{D}}(\mathbb{K}\mathsf{H}\mathsf{c})\) to \(\mathbb{Z}\mathrm{Irr}_{<_{s}\mathsf{D}}(\mathbb{K}\mathsf{L}\mathsf{f})\). Now, by the inductive hypothesis and Lemma 3.59,
\[\begin{split}&\mathbf{M}_{\mathsf{L}}^{*}\otimes_{\mathcal{O} \mathsf{L}}\mathsf{c}_{0,d}\mathcal{OG}\otimes_{\mathcal{OG}}\xi_{\lambda}\\ \equiv&\sum_{k\in I}\sum_{\mu\in\mathscr{P}_{0}^{k}( \lambda)^{-}}\frac{\varepsilon_{\mu,\lambda}\varepsilon_{\lambda,k}}{ \varepsilon_{\mu}\varepsilon_{k}}\frac{\varepsilon_{\rho,\underline{d}- \underline{\delta}_{k}}\varepsilon_{\mu,k}}{\varepsilon_{\mu}\varepsilon_{\rho, \underline{d}-\underline{\delta}_{k},k}}\frac{\varepsilon_{\mu}}{\varepsilon_ {\rho,\underline{d}-\underline{\delta}_{k}}}K(\mu)2^{(|\mu^{(0)}|-h(\mu^{(0)}) )/2}\xi_{\rho,\underline{d}-\underline{\delta}_{k},k}^{\tilde{\mathsf{S}}_{d-1} }\\ =&\sum_{k\in I}\sum_{\mu\in\mathscr{P}_{0}^{k}( \lambda)^{-}}\frac{\varepsilon_{\mu,\lambda}\varepsilon_{\lambda,k}\varepsilon_{ \mu,k}}{\varepsilon_{\mu}\varepsilon_{k}\varepsilon_{\rho,\underline{d}}}\ K(\mu)2^{(|\mu^{(0)}|-h(\mu^{(0)}))/2}\xi_{\rho, \underline{d}-\underline{\delta}_{k},k}^{\tilde{\mathsf{S}}_{d-1}}.\end{split} \tag{8.13}\]
We now split this sum into two summands, one for \(k=0\) and one for \(k\neq 0\). For the \(k=0\) summand, by Lemma 5.3(iii), we have \(2^{(|\lambda^{(0)}|-h(\lambda^{(0)}))/2}=2^{(|\mu^{(0)}|-h(\mu^{(0)}))/2} \varepsilon_{\mu,\lambda}\). Noting also that in this case \((p-0,0)\) is even, we get
\[\begin{split}&\sum_{\mu\in\mathscr{P}_{0}^{0}(\lambda)^{-}}\frac{\varepsilon_{\mu,\lambda}\varepsilon_{\lambda,0}\varepsilon_{\mu,0}}{\varepsilon_{\mu}\varepsilon_{0}\varepsilon_{\rho,\underline{d}}}\ K(\mu)2^{(|\mu^{(0)}|-h(\mu^{(0)}))/2}\xi_{\rho,\underline{d}-\underline{\delta}_{0},0}^{\tilde{\mathsf{S}}_{d-1}}\\ =&\sum_{\mu^{(0)}\in\mathscr{P}_{0}(\lambda^{(0)})^{-}}\frac{\varepsilon_{\lambda}}{\varepsilon_{\rho,\underline{d}}}K_{\mu^{(0)}}^{\prime}K_{\lambda^{(1)}}\dots K_{\lambda^{(\ell)}}2^{(|\lambda^{(0)}|-h(\lambda^{(0)}))/2}\xi_{\rho,\underline{d}-\underline{\delta}_{0},0}^{\tilde{\mathsf{S}}_{d-1}}\\ =&\frac{\varepsilon_{\lambda}}{\varepsilon_{\rho,\underline{d}}}K(\lambda)2^{(|\lambda^{(0)}|-h(\lambda^{(0)}))/2}\xi_{\rho,\underline{d}-\underline{\delta}_{0},0}^{\tilde{\mathsf{S}}_{d-1}},\end{split} \tag{8.14}\]
where the second equality follows from the comments at the end of §2.5 and the third from Lemma 2.14(ii).
For the \(k>0\) summands, we have \(2^{(|\lambda^{(0)}|-h(\lambda^{(0)}))/2}=2^{(|\mu^{(0)}|-h(\mu^{(0)}))/2}\) and \((p-k,k)\) is always odd. Also, by Lemma 5.3(iii), \(\mu\) and \(\lambda\) always have opposite parity. Therefore,
we get for the \(k\)th summand:
\[\sum_{\mu\in\mathscr{P}^{k}_{0}(\lambda)^{-}}\frac{\varepsilon_{\mu,\lambda}\varepsilon_{\lambda,k}\varepsilon_{\mu,k}}{\varepsilon_{\mu}\varepsilon_{k}\varepsilon_{\rho,\underline{d}}}\ K(\mu)2^{(|\mu^{(0)}|-h(\mu^{(0)}))/2}\xi_{\rho,\underline{d}-\underline{\delta}_{k},k}^{\tilde{\mathsf{S}}_{d-1}} \tag{8.15}\] \[=\sum_{\mu\in\mathscr{P}^{k}_{0}(\lambda)^{-}}\frac{\varepsilon_{\mu,k}}{\varepsilon_{\rho,\underline{d}}}K^{\prime}_{\lambda^{(0)}}K_{\lambda^{(1)}}\ldots K_{\mu^{(k)}}\ldots K_{\lambda^{(\ell)}}2^{(|\lambda^{(0)}|-h(\lambda^{(0)}))/2}\xi_{\rho,\underline{d}-\underline{\delta}_{k},k}^{\tilde{\mathsf{S}}_{d-1}}\] \[=\sum_{\mu^{(k)}\in\mathscr{P}(\lambda^{(k)})^{-}}\frac{\varepsilon_{\lambda}}{\varepsilon_{\rho,\underline{d}}}K^{\prime}_{\lambda^{(0)}}K_{\lambda^{(1)}}\ldots K_{\mu^{(k)}}\ldots K_{\lambda^{(\ell)}}2^{(|\lambda^{(0)}|-h(\lambda^{(0)}))/2}\xi_{\rho,\underline{d}-\underline{\delta}_{k},k}^{\tilde{\mathsf{S}}_{d-1}}\] \[=\frac{\varepsilon_{\lambda}}{\varepsilon_{\rho,\underline{d}}}K(\lambda)2^{(|\lambda^{(0)}|-h(\lambda^{(0)}))/2}\xi_{\rho,\underline{d}-\underline{\delta}_{k},k}^{\tilde{\mathsf{S}}_{d-1}},\]
where, again, the second equality follows from the comments at the end of §2.5 and the third from Lemma 2.14(i).
Putting (8.13), (8.14) and (8.15) together, we have now shown that
\[\mathbf{M}_{\mathsf{L}}^{*}\otimes_{\mathcal{O}\mathsf{L}}\mathsf{c}_{0,d} \mathcal{O}\mathsf{G}\otimes_{\mathcal{O}\mathsf{G}}\xi_{\lambda}\equiv\quad \sum_{k\in I}\frac{\varepsilon_{\lambda}}{\varepsilon_{\rho,\underline{d}}}K( \lambda)\,2^{(|\lambda^{(0)}|-h(\lambda^{(0)}))/2}\xi_{\rho,\underline{d}- \underline{\delta}_{k},k}^{\tilde{\mathsf{S}}_{d-1}}=\frac{\varepsilon_{ \lambda}}{\varepsilon_{\rho,\underline{d}}}K(\lambda)\xi_{\rho,\underline{d}} ^{\tilde{\mathsf{S}}_{d}},\]
as desired.
**Lemma 8.16**.: _Let \(\mu\in\mathscr{P}_{0}(\rho,d-1)\), \(\lambda\in\mathscr{P}_{0}(\rho,d)\) and \(\underline{d}=(d_{0},\ldots,d_{\ell})\in\Lambda(I,d)\)._
1. _Let_ \(i\in I\)_. If_ \(\xi_{\lambda}\) _appears as a superconstituent with non-zero coefficient in_ \(\mathbf{Y}\otimes_{\mathcal{O}\mathsf{H}}\xi_{\mu,i},\) _then_ \(\lambda\in\mathscr{P}^{j}_{0}(\mu)^{+}\)_, for some_ \(j\in I\)_._
2. _Say_ \(\xi_{\lambda}\) _appears with non-zero coefficient in_ \(\mathsf{{}_{G}\mathbf{X}_{\mathsf{L}}\otimes_{\mathcal{O}\mathsf{L}}\xi_{\rho, \underline{d}}}.\) _If_ \(d_{i}>0\)_, for some_ \(i\in I\)_, then_ \(|\lambda^{(i)}|>0\)_._
3. _Suppose_ \(\mathbf{X}_{d-1}\) _induces a Morita superequivalence between_ \(\mathcal{O}\mathsf{N}_{d-1}\mathsf{f}_{d-1}\) _and_ \(\mathcal{O}\mathsf{G}_{d-1}\mathsf{b}_{d-1}\)_. If_ \(\xi_{\lambda}\) _appears with non-zero coefficient in_ \(\mathbf{Y}\otimes_{\mathcal{O}\mathsf{H}}\xi_{\mu,i},\) _for some_ \(i\in I\)_, then_ \(|\lambda^{(i)}|>0\)_._
Proof.: (i) By its definition and Lemma 7.8(ii), \(\mathbf{Y}\) is a direct summand of
\[\mathcal{O}\mathsf{G}\mathsf{b}\otimes_{\mathcal{O}\mathsf{H}}(\mathcal{O} \mathsf{G}_{d-1}\mathsf{b}_{d-1}\boxtimes\mathbf{M}^{(d)}).\]
The statement now follows from Lemma 8.1 and Lemma 5.8(i).
(ii) By Corollary 7.26 and Remark 5.2, \(\mathsf{{}_{G_{1}}\mathbf{X}_{\mathsf{L}_{1}}}\) is the unique indecomposable summand of \(\mathsf{b}_{1}\mathcal{O}\mathsf{G}_{1}\otimes_{\mathcal{O}\mathsf{L}_{1}} \mathbf{M}_{\mathsf{L}_{1}}\) with vertex \(\Delta\mathsf{D}_{1}\) and all other summands are projective. Therefore, \(\mathsf{{}_{G_{1}}\mathbf{X}_{\mathsf{L}_{1}}}\) is isomorphic to the \((\mathcal{O}\mathsf{G}_{1}\mathsf{b}_{1},\mathcal{O}\mathsf{L}_{1}\mathsf{f}_{1})\)-bisupermodule \(V\) from Proposition 6.13(ii). Now,
\[\mathcal{O}\mathsf{H}_{1}\mathsf{c}_{0,1}=\mathsf{b}_{1}\mathcal{O}\mathsf{G}_{1 }\mathsf{f}_{1}\otimes(B^{\varnothing,1})^{(2)}\otimes\cdots\otimes(B^{ \varnothing,1})^{(d)}. \tag{8.17}\]
Therefore,
\[\mathcal{O}\mathsf{G}\mathsf{c}_{0,d}\otimes_{\mathcal{O}\mathsf{L}} \mathbf{M}_{\mathsf{L}}\simeq \mathcal{O}\mathsf{G}\mathsf{c}_{0,d}\otimes_{\mathcal{O}\mathsf{L}} \left(\mathbf{M}_{\mathsf{L}_{1}}\boxtimes\mathbf{M}^{(2)}\boxtimes\cdots \boxtimes\mathbf{M}^{(d)}\right)\] \[\simeq \mathcal{O}\mathsf{G}\mathsf{c}_{2,d}\otimes_{\mathcal{O}\mathsf{H }_{1}}\mathcal{O}\mathsf{H}_{1}\mathsf{c}_{0,1}\otimes_{\mathcal{O}\mathsf{L}} \left(\mathbf{M}_{\mathsf{L}_{1}}\boxtimes\mathbf{M}^{(2)}\boxtimes\cdots \boxtimes\mathbf{M}^{(d)}\right)\] \[\simeq \mathcal{O}\mathsf{G}\mathsf{c}_{2,d}\otimes_{\mathcal{O}\mathsf{H }_{1}}\left((\mathsf{b}_{1}\mathcal{O}\mathsf{G}_{1}\otimes_{\mathcal{O}\mathsf{L}_{1 }}\mathbf{M}_{\mathsf{L}_{1}})\boxtimes\mathbf{M}^{(2)}\boxtimes\cdots \boxtimes\mathbf{M}^{(d)}\right),\]
where the final isomorphism follows from (8.17) and Lemma 3.9.
Since each \(\mathbf{M}^{(k)}\) has vertex \(\Delta\mathsf{D}_{k}\), it follows from Remark 3.8 and Lemma 3.67 that this last bimodule has
\[\mathcal{O}\mathsf{G}\mathsf{c}_{2,d}\otimes_{\mathcal{OH}_{1}}(\mathbf{X}_{1}\boxtimes\mathbf{M}^{(2)}\boxtimes\cdots\boxtimes\mathbf{M}^{(d)}) \tag{8.18}\]
as a direct summand and all other summands have vertex strictly smaller than \(\Delta\mathsf{D}\). In particular, by Corollary 7.26, \({}_{\mathsf{G}}\mathbf{X}_{\mathsf{L}}\) is a direct summand of the bimodule in (8.18). We now move towards proving the claim.
Since \(\mathbf{X}\) is actually an \((\mathcal{OG},\mathcal{ON})\)-bimodule, we can assume that \(\xi_{i}\) appears in the first position. (If it doesn't, then apply an appropriate element of \(\mathsf{N}\) to ensure that it does.) In other words, \(\xi_{\lambda}\) appears with non-zero coefficient in \({}_{\mathsf{G}}\mathbf{X}_{\mathsf{L}}\otimes_{\mathcal{OL}}\xi_{\rho,i, \underline{d}-\underline{\delta}_{i}}.\) Therefore, by the comments following (8.18), \(\xi_{\lambda}\) appears with non-zero coefficient in
\[\mathcal{O}\mathsf{G}\mathsf{c}_{2,d}\otimes_{\mathcal{OH}_{1}}\left(\mathbf{X}_{1}\boxtimes\mathbf{M}^{(2)}\boxtimes\cdots\boxtimes\mathbf{M}^{(d)}\right)\otimes_{\mathcal{OL}}\xi_{\rho,i,\underline{d}-\underline{\delta}_{i}}.\]
Proposition 6.13(ii) and Lemma 3.59 now imply that \(\xi_{\lambda}\) appears with non-zero coefficient in
\[\mathcal{O}\mathsf{G}\mathsf{c}_{2,d}\otimes_{\mathcal{OH}_{1}}\left(\mathcal{OG}_{1}\boxtimes\mathbf{M}^{(2)}\boxtimes\cdots\boxtimes\mathbf{M}^{(d)}\right)\otimes_{\mathcal{OL}}\xi_{\rho^{i},\underline{d}-\underline{\delta}_{i}},\]
where we adopt the notation \(\rho^{i}\) from Section 6. The claim now follows by repeatedly applying Lemmas 6.13(iv) and 3.59, and then repeatedly applying Lemmas 5.8 and 3.59.
(iii) As \(\mathbf{X}_{d-1}\) induces a Morita superequivalence between \(\mathcal{ON}_{d-1}\mathsf{f}_{d-1}\) and \(\mathcal{OG}_{d-1}\mathbf{b}_{d-1}\), \({}_{\mathsf{L}_{d-1}}\mathbf{X}_{\mathsf{G}_{d-1}}^{*}\otimes_{\mathcal{OG}_{ d-1}}[\xi_{\mu}]\) must be non-zero. So, by Lemma 3.58(ii), there exist some \(j_{1},\ldots,j_{d-1}\in I\) such that \([\xi_{\mu}]\) appears as a superconstituent with non-zero coefficient in \({}_{\mathsf{G}_{d-1}}\mathbf{X}_{\mathsf{L}_{d-1}}\otimes_{\mathcal{OL}_{d-1} }\xi_{\rho,j_{1},\ldots,j_{d-1}}.\) Now, by Lemmas 7.19(ii) and 3.59, \(\xi_{\lambda}\) appears with non-zero coefficient in
\[{}_{\mathsf{G}}\mathbf{X}_{\mathsf{L}}\otimes_{\mathcal{OL}}\xi_{\rho,j_{1}, \ldots,j_{d-1},i}=\mathbf{Y}\otimes_{\mathcal{OH}}\left({}_{\mathsf{G}_{d-1}} \mathbf{X}_{\mathsf{L}_{d-1}}\boxtimes(B^{\varnothing,1})^{(d)}\right)\otimes_ {\mathcal{OL}}\xi_{\rho,j_{1},\ldots,j_{d-1},i}.\]
The claim now follows from part (ii).
### Main argument
The remainder of this section is concerned with proving that \(\mathbf{X}\) induces a Morita equivalence between \(\mathcal{ON}\mathsf{f}\) and \(\mathcal{OG}\mathsf{b}\). In fact, we will prove the following pair of statements via induction on \(m\).
**Hypothesis 8.19**.: Let \(1\leq m\leq d\).
1. \(\mathbf{X}_{m}\) induces a Morita superequivalence between \(\mathcal{ON}_{m}\mathsf{f}_{m}\) and \(\mathcal{OG}_{m}\mathsf{b}_{m}\).
2. We have the following character identities: 1. for all \(\underline{d}\in\Lambda(I,m)\): \[{}_{\mathsf{G}_{m}}\mathbf{X}_{\mathsf{L}_{m}}\otimes_{\mathcal{OL}_{m}}\xi_{ \rho,\underline{d}}=\sum_{\lambda\in\mathscr{P}(\rho,\underline{d})}\frac{ \varepsilon_{\rho,\underline{d}}}{\varepsilon_{\lambda}}\,2^{(|\lambda^{(0)}|-h( \lambda^{(0)}))/2}K(\lambda)\xi_{\lambda}.\] 2. for all \(\underline{d}\in\Lambda(I,m)\) and all \(\lambda\in\mathscr{P}_{0}(\rho,\underline{d})\): \[{}_{\mathsf{L}_{m}}\mathbf{X}_{\mathsf{G}_{m}}^{*}\otimes_{\mathcal{OG}_{m}} \xi_{\lambda}=\frac{\varepsilon_{\lambda}}{\varepsilon_{\rho,\underline{d}}} \,2^{(|\lambda^{(0)}|-h(\lambda^{(0)}))/2}K(\lambda)\xi_{\rho,\underline{d}}^{ \tilde{\mathsf{S}}_{m}}.\]
**Remark 8.20**.: The two conditions in Hypothesis 8.19(ii) are easily seen to be equivalent using Lemma 3.58(ii) and the fact that \(\mathbf{X}\) is actually an \((\mathcal{OG},\mathcal{ON})\)-bimodule.
Our main argument involves proving Hypothesis 8.19 by induction. The next two propositions do most of the work in the subsequent inductive argument.
**Proposition 8.21**.: _Let \(d>1\). If Hypothesis 8.19 holds for \(1\leq m\leq d-1\), then 8.19(ii) holds for \(m=d\)._
Proof.: We will assume throughout this proof that Hypothesis 8.19 holds for \(1\leq m\leq d-1\). In particular, we are assuming that \(\mathbf{X}_{m}\) induces a Morita superequivalence between \(\mathcal{O}\mathsf{N}_{m}\mathsf{f}_{m}\) and \(\mathcal{OG}_{m}\mathsf{b}_{m}\), for all \(1\leq m\leq d-1\). We will implicitly use this fact at several points during the proof.
For \(\lambda\in\mathscr{P}_{0}(\rho,m)\), we set
\[\tilde{K}_{\lambda^{(i)}}:=\begin{cases}K^{\prime}_{\lambda^{(0)}}&\text{ if }i=0\\ K_{\lambda^{(i)}}&\text{ if }i>0.\end{cases}\]
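In particular, comparing with (8.14) and (8.15), \(K(\lambda)\) factors as
\[K(\lambda)=\prod_{i\in I}\tilde{K}_{\lambda^{(i)}}=K^{\prime}_{\lambda^{(0)}}K_{\lambda^{(1)}}\cdots K_{\lambda^{(\ell)}}.\]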
The majority of the remaining argument is dedicated to showing that
\[{}_{\mathsf{G}}\mathbf{X}_{\mathsf{L}}\otimes_{\mathcal{O}\mathsf{L}}\xi_{ \rho,i^{d}}=\sum_{\begin{subarray}{c}\lambda\in\mathscr{P}(\rho,d)\\ |\lambda^{(i)}|=d\end{subarray}}\frac{\varepsilon_{\rho,i^{d}}}{\varepsilon_{ \lambda}}2^{(|\lambda^{(0)}|-h(\lambda^{(0)}))/2}\tilde{K}_{\lambda^{(i)}}\xi _{\lambda}, \tag{8.22}\]
for all \(i\in I\). Until further notice we fix \(i\in I\) and some \(\mu\in\mathscr{P}_{0}(\rho,d-1)\) with \(|\mu^{(i)}|=d-1\) and \(|\mu^{(j)}|=0\), for all \(j\in I\setminus\{i\}\). Let \(k\in I\setminus\{i\}\). By Lemmas 3.58(i) and 8.16(i),(iii),
\[\mathbf{Y}\otimes_{\mathcal{O}\mathsf{H}}\xi_{\mu,k}=C\xi_{\lambda}, \tag{8.23}\]
for some \(C\in\mathbb{N}\), where \(\lambda\) is the unique partition obtained from \(\mu\) by sliding a bead down the \(k^{\text{th}}\) runner. In other words, \(\lambda^{(i)}=\mu^{(i)}\), \(\lambda^{(k)}=(1)\) and \(|\lambda^{(j)}|=0\), for all \(j\neq i,k\).
Using Lemma 3.25 we define \(\chi\in\mathrm{Irr}_{\sigma}(\mathcal{O}\mathsf{N}_{d-1}\mathsf{f}_{d-1})\) via
\[\chi:=\mathbf{X}_{d-1}^{*}\otimes_{\mathcal{OG}_{d-1}}\xi_{\mu}.\]
Then, recalling the notation from Remark 3.52, by Lemma 3.59, we get
\[\psi:=(\chi\vartriangleleft\xi_{k})\uparrow_{\mathsf{N}\cap\mathsf{H}}^{\mathsf{N}}\in\mathrm{Irr}(\mathbb{K}\mathsf{N}\mathsf{f}), \tag{8.24}\]
together with
\[\mathbf{X}\otimes_{\mathcal{O}\mathsf{N}}\psi=C\,\xi_{\lambda} \tag{8.25}\]
and
\[{}_{\mathsf{L}}\mathbf{X}_{\mathsf{G}}^{*}\otimes_{\mathcal{O}\mathsf{G}}\mathbf{X}\otimes_{\mathcal{O}\mathsf{N}}\psi\equiv\psi\downarrow_{\mathsf{L}}^{\mathsf{N}} \tag{8.26}\]
modulo \(\mathbb{Z}\mathrm{Irr}_{<_{\mathsf{s}}\mathsf{D}}(\mathcal{O}\mathsf{L}\mathsf{f})\). By Lemma 8.11,
\[{}_{\mathsf{L}}\mathbf{X}_{\mathsf{G}}^{*}\otimes_{\mathcal{O}\mathsf{G}}\xi_{\lambda}\equiv\frac{\varepsilon_{\lambda}}{\varepsilon_{\rho,i^{d-1},k}}2^{(|\lambda^{(0)}|-h(\lambda^{(0)}))/2}\tilde{K}_{\lambda^{(i)}}\xi_{\rho,i^{d-1},k}^{\tilde{\mathsf{S}}_{d}} \tag{8.27}\]
modulo \(\mathbb{Z}\mathrm{Irr}_{<_{\mathsf{s}}\mathsf{D}}(\mathcal{O}\mathsf{L}\mathsf{f})\). Applying \({}_{\mathsf{L}}\mathbf{X}_{\mathsf{G}}^{*}\otimes_{\mathcal{O}\mathsf{G}}\)? to both sides of (8.25) and substituting using (8.26) and (8.27) gives
\[\psi\downarrow_{\mathsf{L}}^{\mathsf{N}}\equiv C\frac{\varepsilon_{\lambda}}{\varepsilon_{\rho,i^{d-1},k}}2^{(|\lambda^{(0)}|-h(\lambda^{(0)}))/2}\tilde{K}_{\lambda^{(i)}}\xi_{\rho,i^{d-1},k}^{\tilde{\mathsf{S}}_{d}} \tag{8.28}\]
modulo \(\mathbb{Z}\mathrm{Irr}_{<_{\mathsf{s}}\mathsf{D}}(\mathcal{O}\mathsf{L} \mathsf{f})\).
Since we are assuming Hypothesis 8.19 holds for \(m=d-1\),
\[\chi\downarrow_{{}_{\mathsf{L}_{d-1}}}^{\mathsf{N}_{d-1}}={}_{{}_{\mathsf{L} _{d-1}}}\mathbf{X}_{\mathsf{G}_{d-1}}^{*}\otimes_{\mathcal{O}\mathsf{G}_{d-1} }\xi_{\mu}=\frac{\varepsilon_{\mu}}{\varepsilon_{\rho,i^{d-1}}}2^{(|\mu^{(0)} |-h(\mu^{(0)}))/2}\tilde{K}_{\mu^{(i)}}\xi_{\rho,i^{d-1}}.\]
Therefore, using Lemma 3.59,
\[(\chi\vartriangleleft\xi_{k})\downarrow_{\mathsf{L}}^{\mathsf{N}\cap\mathsf{ H}}=\frac{\varepsilon_{\rho,i^{d-1}}\varepsilon_{\mu,k}}{\varepsilon_{\mu} \varepsilon_{\rho,i^{d-1},k}}\frac{\varepsilon_{\mu}}{\varepsilon_{\rho,i^{d- 1}}}2^{(|\mu^{(0)}|-h(\mu^{(0)}))/2}\tilde{K}_{\mu^{(i)}}\xi_{\rho,i^{d-1},k}.\]
(Here, when applying Lemma 3.59, we are again using the fact that \(\chi\) and \(\xi_{\mu}\) have the same parity.) Simplifying, we get
\[(\chi\vartriangleleft\xi_{k})\downarrow_{\mathsf{L}}^{\mathsf{N}\cap\mathsf{ H}}=\frac{\varepsilon_{\mu,k}}{\varepsilon_{\rho,i^{d-1},k}}2^{(|\mu^{(0)}|-h(\mu^{(0) }))/2}\tilde{K}_{\mu^{(i)}}\xi_{\rho,i^{d-1},k}.\]
Therefore,
\[\psi\downarrow_{\mathsf{L}}^{\mathsf{N}} =(\chi\vartriangleleft\xi_{k})\uparrow_{\mathsf{N}\cap\mathsf{H}}^{\mathsf{N}}\downarrow_{\mathsf{L}}^{\mathsf{N}}=\sum_{g\in\mathsf{N}/\mathsf{N}\cap\mathsf{H}}{}^{g}((\chi\vartriangleleft\xi_{k})\downarrow_{\mathsf{L}}^{\mathsf{N}\cap\mathsf{H}}) \tag{8.29}\] \[=\frac{\varepsilon_{\mu,k}}{\varepsilon_{\rho,i^{d-1},k}}2^{(|\mu^{(0)}|-h(\mu^{(0)}))/2}\tilde{K}_{\mu^{(i)}}\xi_{\rho,i^{d-1},k}^{\tilde{\mathsf{S}}_{d}}.\]
Taking the \(\varphi\) from Lemma 8.2, we have that \(\varphi(\xi_{\rho,i^{d-1},k}^{\tilde{\mathsf{S}}_{d}})\) is either strictly positive or strictly negative. Therefore, we can equate (8.28) and (8.29) to obtain
\[C\varepsilon_{\lambda}2^{(|\lambda^{(0)}|-h(\lambda^{(0)}))/2}\tilde{K}_{ \lambda^{(i)}}=\varepsilon_{\mu,k}2^{(|\mu^{(0)}|-h(\mu^{(0)}))/2}\tilde{K}_{ \mu^{(i)}}.\]
Note that \(\lambda^{(i)}=\mu^{(i)}\) and so \(\tilde{K}_{\lambda^{(i)}}=\tilde{K}_{\mu^{(i)}}\). In every case we also have \(2^{(|\lambda^{(0)}|-h(\lambda^{(0)}))/2}=2^{(|\mu^{(0)}|-h(\mu^{(0)}))/2}\). (Either \(k=0\), in which case \(\lambda^{(0)}=(1)\) and \(\mu^{(0)}=\varnothing\) or \(k\neq 0\), in which case \(\lambda^{(0)}=\mu^{(0)}\).) So
\[C=\frac{\varepsilon_{\mu,k}}{\varepsilon_{\lambda}}=1,\]
where the second equality follows from Lemma 5.3(iii).
From now on \(\mu\) will no longer denote a fixed partition. We have
\[{}_{\mathsf{G}}\mathbf{X}_{\mathsf{L}}\otimes_{\mathcal{O}\mathsf{L}}\xi_{ \rho,i^{d-1},k}=\mathbf{Y}\otimes_{\mathcal{O}\mathsf{H}}\left(\mathbf{X}_{d-1 }\boxtimes(B^{\varnothing,1})^{(d)}\right)\otimes_{\mathcal{O}\mathsf{L}}\xi_{ \rho,i^{d-1},k} \tag{8.30}\] \[=\sum_{\begin{subarray}{c}\mu\in\mathscr{P}(\rho,d-1)\\ |\mu^{(i)}|=d-1\end{subarray}}\frac{\varepsilon_{\mu}\varepsilon_{\rho,i^{d-1},k}}{\varepsilon_{\rho,i^{d-1}}\varepsilon_{\mu,k}}\frac{\varepsilon_{\rho,i^{d- 1}}}{\varepsilon_{\mu}}2^{(|\mu^{(0)}|-h(\mu^{(0)}))/2}\tilde{K}_{\mu^{(i)}} \left(\mathbf{Y}\otimes_{\mathcal{O}\mathsf{H}}\xi_{\mu,k}\right)\] \[=\sum_{\begin{subarray}{c}\lambda\in\mathscr{P}(\rho,d)\\ |\lambda^{(i)}|=d-1,|\lambda^{(k)}|=1\end{subarray}}\frac{\varepsilon_{\rho,i^{ d-1},k}}{\varepsilon_{\lambda}}2^{(|\lambda^{(0)}|-h(\lambda^{(0)}))/2}\tilde{K}_{\lambda^{(i)}} \xi_{\lambda}.\]
The first equality follows from Lemma 7.19(ii). The second equality uses Lemma 3.59 and the fact that we are assuming that Hypothesis 8.19(ii) holds for \(m=d-1\). Finally, the third equality is a consequence of (8.23) and, once again, the fact that \(\varepsilon_{\mu,k}=\varepsilon_{\lambda}\), \(2^{(|\lambda^{(0)}|-h(\lambda^{(0)}))/2}=2^{(|\mu^{(0)}|-h(\mu^{(0)}))/2}\) and that \(\mu^{(i)}=\lambda^{(i)}\).
We now determine \({}_{\mathsf{L}}\mathbf{X}_{\mathsf{G}}^{*}\otimes_{\mathcal{OG}}[\xi_{\lambda}]\), for any \(\lambda\in\mathscr{P}_{0}(\rho,d)\) with \(|\lambda^{(i)}|=d-1\) and \(|\lambda^{(k)}|=1\). By Lemmas 3.58(ii) and 8.16(i),(iii),
\[\mathbf{Y}^{*}\otimes_{\mathcal{OG}}\xi_{\lambda}=\sum_{j,s\in\{i,k\}}\sum_{ \mu\in\mathscr{P}_{0}^{j}(\lambda)^{-}}C_{\mu,s}\xi_{\mu,s},\]
for some \(C_{\mu,s}\in\mathbb{N}\). Therefore, Lemmas 7.19(ii), 3.22, 3.24 and 3.26 imply
\[{}_{\mathsf{L}}\mathbf{X}_{\mathsf{G}}^{*}\otimes_{\mathcal{OG}} \xi_{\lambda}= \big{(}{}_{\mathsf{L}_{d-1}}\mathbf{X}_{\mathsf{G}_{d-1}}^{*} \boxtimes(B^{\varnothing,1})^{(d)}\big{)}\otimes_{\mathcal{OH}}\mathbf{Y}^{*} \otimes_{\mathcal{OG}}\xi_{\lambda}\] \[= \sum_{j,s\in\{i,k\}}\sum_{\mu\in\mathscr{P}_{0}^{j}(\lambda)^{-}} C_{\mu,s}\big{(}{}_{\mathsf{L}_{d-1}}\mathbf{X}_{\mathsf{G}_{d-1}}^{*}\boxtimes(B^{ \varnothing,1})^{(d)}\big{)}\otimes_{\mathcal{OH}}\xi_{\mu,s}.\]
Using Lemma 3.59, the fact that we are assuming Hypothesis 8.19(ii) for \(m=d-1\) and that \(\mathbf{X}^{*}\) is actually an \((\mathcal{ON},\mathcal{OG})\)-bimodule yields
\[{}_{\mathsf{L}}\mathbf{X}_{\mathsf{G}}^{*}\otimes_{\mathcal{OG}}\xi_{\lambda}= C_{0}\xi_{\rho,i^{d}}^{\tilde{\mathsf{S}}_{d}}+C_{1}\xi_{\rho,i^{d-1},k}^{\tilde{ \mathsf{S}}_{d}}+C_{2}\xi_{\rho,i^{d-2},k^{2}}^{\tilde{\mathsf{S}}_{d}},\]
for some \(C_{0},C_{1},C_{2}\in\mathbb{N}\). By (8.30) and Lemma 3.58(ii) we know that
\[C_{1}=\sum_{\begin{subarray}{c}\lambda\in\mathscr{P}(\rho,d)\\ |\lambda^{(i)}|=d-1,|\lambda^{(k)}|=1\end{subarray}}\frac{\varepsilon_{\lambda }}{\varepsilon_{\rho,i^{d-1},k}}2^{(|\lambda^{(0)}|-h(\lambda^{(0)}))/2}\tilde{ K}_{\lambda^{(i)}}\]
and, by Lemma 8.11,
\[C_{0}\xi_{\rho,i^{d}}^{\tilde{\mathsf{S}}_{d}}+C_{1}\xi_{\rho,i^{d-1},k}^{ \tilde{\mathsf{S}}_{d}}+C_{2}\xi_{\rho,i^{d-2},k^{2}}^{\tilde{\mathsf{S}}_{d} }\equiv\sum_{\begin{subarray}{c}\lambda\in\mathscr{P}(\rho,d)\\ |\lambda^{(i)}|=d-1,|\lambda^{(k)}|=1\end{subarray}}\frac{\varepsilon_{ \lambda}}{\varepsilon_{\rho,i^{d-1},k}}2^{(|\lambda^{(0)}|-h(\lambda^{(0)}))/2 }\tilde{K}_{\lambda^{(i)}}\xi_{\rho,i^{d-1},k}^{\tilde{\mathsf{S}}_{d}}\]
modulo \(\mathbb{Z}\mathrm{Irr}_{<s\mathsf{D}}(\mathcal{OL}\mathsf{f})\). Therefore,
\[C_{0}\xi_{\rho,i^{d}}^{\tilde{\mathsf{S}}_{d}}+C_{2}\xi_{\rho,i^{d-2},k^{2}}^{ \tilde{\mathsf{S}}_{d}}\in\mathbb{Z}\mathrm{Irr}_{<s\mathsf{D}}(\mathcal{OL} \mathsf{f}).\]
Once again, taking the \(\varphi\) from Lemma 8.2, we have that \(\varphi(\xi_{\rho,i^{d}}^{\tilde{\mathsf{S}}_{d}})\) and \(\varphi(\xi_{\rho,i^{d-2},k^{2}}^{\tilde{\mathsf{S}}_{d}})\) are either both strictly positive or both strictly negative. Therefore, \(C_{0}=C_{2}=0\) and so
\[{}_{\mathsf{L}}\mathbf{X}_{\mathsf{G}}^{*}\otimes_{\mathcal{OG}}\xi_{\lambda}= \frac{\varepsilon_{\lambda}}{\varepsilon_{\rho,i^{d-1},k}}2^{(|\lambda^{(0)}|-h( \lambda^{(0)}))/2}\tilde{K}_{\lambda^{(i)}}\xi_{\rho,i^{d-1},k}^{\tilde{ \mathsf{S}}_{d}}. \tag{8.31}\]
We are finally in a position to prove (8.22). We first have that
\[{}_{\mathsf{G}}\mathbf{X}_{\mathsf{L}}\otimes_{\mathcal{OL}}\xi_{\rho,i^{d}}= \mathbf{Y}\otimes_{\mathcal{OH}}\big{(}\mathbf{X}_{d-1}\boxtimes(B^{\varnothing,1 })^{(d)}\big{)}\otimes_{\mathcal{OL}}\xi_{\rho,i^{d}}=\sum_{\begin{subarray}{c} \mu\in\mathscr{P}_{0}(\rho,d-1)\\ |\mu^{(i)}|=d-1\end{subarray}}C_{\mu}(\mathbf{Y}\otimes_{\mathcal{OH}}\xi_{ \mu,i}),\]
for some \(C_{\mu}\in\mathbb{N}\). Here, the first equality follows from Lemma 7.19(ii) and the second from Lemma 3.59 and the fact that we are assuming Hypothesis 8.19(ii) for \(m=d-1\).
(Note we could already explicitly write down the expression for the \(C_{\mu}\)'s but this is not necessary at this stage.) Therefore, by Lemma 8.16(i),
\[{}_{\mathsf{G}}\mathbf{X_{\mathsf{L}}}\otimes_{\mathcal{O}\mathsf{L}}\xi_{\rho,i ^{d}}=\sum_{\begin{subarray}{c}\lambda\in\mathscr{P}_{0}(\rho,d)\\ |\lambda^{(i)}|=d\end{subarray}}C_{\lambda}\xi_{\lambda}+\sum_{j\neq i}\sum_{ \begin{subarray}{c}\lambda\in\mathscr{P}_{0}(\rho,d)\\ |\lambda^{(i)}|=d-1,|\lambda^{(j)}|=1\end{subarray}}C_{\lambda}\xi_{\lambda},\]
for some \(C_{\lambda}\in\mathbb{N}\). However, (8.31) and Lemma 3.58(ii) give that \(C_{\lambda}=0\) unless \(|\lambda^{(i)}|=d\). In other words,
\[{}_{\mathsf{G}}\mathbf{X_{\mathsf{L}}}\otimes_{\mathcal{O}\mathsf{L}}\xi_{\rho,i^{d}}=\sum_{\begin{subarray}{c}\lambda\in\mathscr{P}_{0}(\rho,d)\\ |\lambda^{(i)}|=d\end{subarray}}C_{\lambda}\xi_{\lambda}. \tag{8.32}\]
We are now in a position to prove (8.22), that is, that each
\[C_{\lambda}=\frac{\varepsilon_{\rho,i^{d}}}{\varepsilon_{\lambda}}2^{(| \lambda^{(0)}|-h(\lambda^{(0)}))/2}\tilde{K}_{\lambda^{(i)}}.\]
However, this is actually unnecessary and we go ahead and prove the more general statement of Hypothesis 8.19(ii) for \(m=d\).
Consider \(\underline{d}\in\Lambda(I,d)\). We set
\[{}_{\mathsf{G}}\mathbf{X_{\mathsf{L}}}\otimes_{\mathcal{O}\mathsf{L}}\xi_{\rho,\underline{d}}=\sum_{\lambda\in\mathscr{P}_{0}(\rho,d)}\kappa_{\lambda}\xi_{ \lambda}, \tag{8.33}\]
for some \(\kappa_{\lambda}\in\mathbb{N}\). Suppose \(\kappa_{\lambda}\neq 0\), for some \(\lambda\in\mathscr{P}_{0}(\rho,d)\). We claim that \(|\lambda^{(j)}|\geq d_{j}\), for each \(j\in I\). For \(d_{j}=d\), this is just (8.32). If \(d_{j}<d\), then
\[{}_{\mathsf{G}}\mathbf{X_{\mathsf{L}}}\otimes_{\mathcal{O}\mathsf{L}}\xi_{\rho,\underline{d}}={}_{\mathsf{G}}\mathbf{X_{\mathsf{L}}}\otimes_{\mathcal{O} \mathsf{L}}\xi_{\rho,j^{d_{j}},\underline{d}-d_{j}\underline{\delta}_{j}}= \mathbf{Y}\otimes_{\mathcal{O}\mathsf{H}}\left(\mathbf{X}_{d-1}\boxtimes(B^ {\varnothing,1})^{(d)}\right)\otimes_{\mathcal{O}\mathsf{L}}\xi_{\rho,j^{d_{j }},\underline{d}-d_{j}\underline{\delta}_{j}},\]
where the first equality follows from the fact that \(\mathbf{X}\) is actually an \((\mathcal{O}\mathsf{G},\mathcal{O}\mathsf{N})\)-bimodule and the second from Lemma 7.19(ii). The claim now holds by the fact that we are assuming Hypothesis 8.19(ii) for \(m=d-1\) and Lemmas 3.59 and 8.16(i).
Since \(d_{0}+d_{1}+\cdots+d_{\ell}=d\), in fact \(|\lambda^{(j)}|=d_{j}\), for all \(\lambda\) with \(\kappa_{\lambda}\neq 0\) and \(j\in I\). Therefore, dualizing (8.33) using Lemma 3.58(ii) and noting that \(\mathbf{X}^{*}\) is an \((\mathcal{O}\mathsf{N},\mathcal{O}\mathsf{G})\)-bimodule, we have that for all \(\lambda\in\mathscr{P}_{0}(\rho,\underline{d})\),
\[{}_{\mathsf{L}}\mathbf{X}^{*}_{\mathsf{G}}\otimes_{\mathcal{O}\mathsf{G}}\xi_{ \lambda}=\alpha_{\lambda}\xi_{\rho,\underline{d}}^{\tilde{\mathsf{S}}_{d}}, \tag{8.34}\]
for some \(\alpha_{\lambda}\in\mathbb{N}\). Also, by Lemma 8.11,
\[{}_{\mathsf{L}}\mathbf{X}^{*}_{\mathsf{G}}\otimes_{\mathcal{O}\mathsf{G}}\xi_{ \lambda}\equiv\frac{\varepsilon_{\lambda}}{\varepsilon_{\rho,\underline{d}}} \,2^{(|\lambda^{(0)}|-h(\lambda^{(0)}))/2}K(\lambda)\xi_{\rho,\underline{d}}^{ \tilde{\mathsf{S}}_{d}} \tag{8.35}\]
modulo \(\mathbb{Z}\mathrm{Irr}_{<_{\mathsf{G}}\mathsf{D}}(\mathcal{O}\mathsf{L} \mathsf{f})\). Once more, taking \(\varphi\) from Lemma 8.2, we have that \(\varphi(\xi_{\rho,\underline{d}}^{\tilde{\mathsf{S}}_{d}})\) is either strictly positive or strictly negative. Therefore, considering (8.34) and (8.35) together gives \(\alpha_{\lambda}=\frac{\varepsilon_{\lambda}}{\varepsilon_{\rho,\underline{d}}} \,2^{(|\lambda^{(0)}|-h(\lambda^{(0)}))/2}K(\lambda)\), for all \(\lambda\in\mathscr{P}_{0}(\rho,\underline{d})\), and the result is proved.
**Proposition 8.36**.: _Let \(d>1\). If Hypothesis 8.19 holds for all \(1\leq m\leq d-1\), then Hypothesis 8.19(i) holds for \(m=d\)._
Proof.: By Lemma 7.5, \(\mathbf{M}_{\mathsf{L}}\) has vertex \(\Delta\mathsf{D}\). In particular, by the Mackey decomposition formula, \(\mathbf{M}_{\mathsf{L}}\) is projective as both a left and right \(\mathcal{O}\mathsf{L}\)-module. By Lemma 7.9, \({}_{\mathsf{G}}\mathbf{X}_{\mathsf{L}}\) is a direct summand of \(\mathcal{OG}\otimes_{\mathcal{O}\mathsf{L}}\mathbf{M}_{\mathsf{L}}\) and so \({}_{\mathsf{G}}\mathbf{X}_{\mathsf{L}}\) is projective as a left \(\mathcal{O}\mathsf{G}\)-module and as a right \(\mathcal{O}\mathsf{L}\)-module. Moreover, since \(p\nmid[\mathsf{N}:\mathsf{L}]\), \(\mathbf{X}\) is also projective as a right \(\mathcal{O}\mathsf{N}\)-module.
By Lemma 7.13(ii), \(\mathcal{OG}\mathsf{b}\) is a direct summand of \(\mathbf{X}\otimes_{\mathcal{O}\mathsf{N}}\mathbf{X}^{*}\) as an \((\mathcal{OG},\mathcal{OG})\)-bimodule. As seen above, \(\mathbf{X}\) is projective as a right \(\mathcal{O}\mathsf{N}\)-module and so \(\mathbf{X}^{*}\) is projective as a left \(\mathcal{O}\mathsf{N}\)-module. It follows that every indecomposable summand of \(\mathbf{X}\otimes_{\mathcal{O}\mathsf{N}}\mathbf{X}^{*}\), as a left \(\mathcal{OG}\)-module, appears as a summand of a direct sum of copies of \(\mathbf{X}\simeq\mathbf{X}\otimes_{\mathcal{O}\mathsf{N}}\mathcal{O}\mathsf{N}\). In particular, \(\mathbf{X}\) is a progenerator as a left \(\mathcal{OG}\)-module. As a consequence, \(\operatorname{End}_{\mathcal{OG}}\mathbf{X}\) is free as an \(\mathcal{O}\)-module.
Similar to the above argument, using part (i) of Lemma 7.13 rather than part (ii), we may prove that \(\mathbf{X}\) is also a progenerator as a right \(\mathcal{O}\mathsf{N}\)-module.
Consider the natural algebra homomorphism
\[(\mathcal{O}\mathsf{N}\mathsf{f})^{\mathrm{op}}\to\operatorname{End}_{ \mathcal{OG}}\mathbf{X} \tag{8.37}\]
given by right multiplication. As \(\mathbf{X}\) is a progenerator as a right \(\mathcal{O}\mathsf{N}\)-module, this is actually a monomorphism. We claim that this homomorphism is an \(\mathcal{O}\)-split monomorphism. Indeed, by Lemma 7.13(i), \(\mathbf{X}^{*}\otimes_{\mathcal{OG}}\mathbf{X}\simeq\mathcal{O}\mathsf{N} \oplus U\), for some \((\mathcal{O}\mathsf{N}\mathsf{f},\mathcal{O}\mathsf{N}\mathsf{f})\)-bisupermodule \(U\). Therefore, we have the sequence of \(\mathcal{O}\)-module homomorphisms
\[\operatorname{End}_{\mathcal{OG}}\mathbf{X} \xrightarrow{\varphi}\operatorname{End}_{\mathcal{O}\mathsf{N}}( \mathbf{X}^{*}\otimes_{\mathcal{OG}}\mathbf{X})\cong\operatorname{End}_{ \mathcal{O}\mathsf{N}}(\mathcal{O}\mathsf{N}\mathsf{f}\oplus U)\] \[\xrightarrow{\vartheta}\operatorname{End}_{\mathcal{O}\mathsf{N}}( \mathcal{O}\mathsf{N}\mathsf{f})\cong(\mathcal{O}\mathsf{N}\mathsf{f})^{ \mathrm{op}},\]
where \(\varphi\) is induced by \(\mathbf{X}^{*}\otimes_{\mathcal{OG}}\)? and \(\vartheta\) is induced by the projection of \(\mathcal{O}\mathsf{N}\mathsf{f}\oplus U\) onto \(\mathcal{O}\mathsf{N}\mathsf{f}\). It is now a trivial task to prove that the resulting composition \(\operatorname{End}_{\mathcal{OG}}\mathbf{X}\to(\mathcal{O}\mathsf{N}\mathsf{f})^{\mathrm{op}}\) splits the homomorphism in (8.37), as an \(\mathcal{O}\)-module homomorphism.
Once we have shown that \(\mathcal{O}\mathsf{N}\mathsf{f}\) and \(\operatorname{End}_{\mathcal{OG}}\mathbf{X}\) have the same \(\mathcal{O}\)-rank, we will have that the monomorphism in (8.37) is actually an isomorphism and consequently that \(\mathbf{X}\) induces a Morita equivalence between \(\mathcal{O}\mathsf{N}\mathsf{f}\) and \(\mathcal{OG}\mathsf{b}\). The majority of the remainder of this proof is dedicated to proving this equality of ranks.
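Indeed, since the monomorphism in (8.37) is \(\mathcal{O}\)-split and both sides are free \(\mathcal{O}\)-modules, equality of ranks forces the complement to vanish, giving
\[(\mathcal{O}\mathsf{N}\mathsf{f})^{\mathrm{op}}\xrightarrow{\ \sim\ }\operatorname{End}_{\mathcal{O}\mathsf{G}}\mathbf{X}.\]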
As \(\operatorname{End}_{\mathcal{OG}}\mathbf{X}\) is free as an \(\mathcal{O}\)-module, we can prove the equality of \(\mathcal{O}\)-ranks by showing that
\[\dim_{\mathbb{K}}(\operatorname{End}_{\mathbb{K}\mathsf{G}}\mathbb{K}\mathbf{X })=d!\dim_{\mathbb{K}}(\mathbb{K}\mathbb{L}\mathsf{f})=\dim_{\mathbb{K}}( \mathbb{K}\mathbb{N}\mathsf{f}). \tag{8.38}\]
The second equality is immediate since
\[\dim_{\mathbb{K}}(\mathbb{K}\mathsf{N}\mathsf{f})=\dim_{\mathbb{K}}(\mathbb{K }\mathsf{N}\otimes_{\mathbb{K}\mathsf{L}}\mathbb{K}\mathsf{L}\mathsf{f})=[ \mathsf{N}:\mathsf{L}]\dim_{\mathbb{K}}(\mathbb{K}\mathbb{L}\mathsf{f})=d! \dim_{\mathbb{K}}(\mathbb{K}\mathbb{L}\mathsf{f}).\]
We now calculate the degrees of the characters in \(\operatorname{Irr}(\mathcal{O}\mathsf{L}\mathsf{f})\). [**St**, Proposition 4.2(a)] immediately gives that
\[\xi_{\rho,j_{1},\ldots,j_{d}}^{(\pm)}(1) =\frac{\varepsilon_{\rho}\prod_{k=1}^{d}\varepsilon_{j_{k}}}{ \varepsilon_{\rho,j_{1},\ldots,j_{d}}}\xi_{\rho}^{(\pm)}(1)\xi_{j_{1}}^{(\pm)}(1) \ldots\xi_{j_{d}}^{(\pm)}(1) \tag{8.39}\] \[=\frac{\varepsilon_{\rho}\,2^{\frac{d-d_{0}}{2}}}{\varepsilon_{\rho,j_{1},\ldots,j_{d}}}\xi_{\rho}^{(\pm)}(1)\xi_{j_{1}}^{(\pm)}(1)\ldots\xi_{j_{ d}}^{(\pm)}(1),\]
for any \(j_{1},\ldots,j_{d}\in I\), where \(0\) appears \(d_{0}\) times amongst the \(j_{k}\)'s.
Since we are assuming that Hypothesis 8.19 holds for \(1\leq m\leq d-1\), we can apply Proposition 8.21 to obtain that the character of
\[\mathbb{K}\mathbf{X}\cong\mathbb{K}\mathbf{X}\otimes_{\mathbb{K}\mathsf{L} \mathsf{f}}\mathbb{K}\mathsf{L}\mathsf{f},\]
as a left \(\mathbb{K}\mathsf{G}\)-module, is
\[\sum_{\chi\in\operatorname{Irr}(\mathcal{O}\mathsf{L}\mathsf{f})}\chi(1)(\mathbf{X}\otimes_{\mathcal{O}\mathsf{L}}\chi)\] \[=\sum_{\underline{d}\in\Lambda(I,d)}\frac{\varepsilon_{\rho}}{\varepsilon_{\rho,\underline{d}}}2^{\frac{d-d_{0}}{2}}\xi_{\rho}^{(\pm)}(1)\xi_{0}^{(\pm)}(1)^{d_{0}}\cdots\xi_{\ell}^{(\pm)}(1)^{d_{\ell}}(\mathbf{X}\otimes_{\mathcal{O}\mathsf{L}}\xi_{\rho,\underline{d}}^{\tilde{\mathsf{S}}_{d}})\] \[=\sum_{\underline{d}\in\Lambda(I,d)}\sum_{\lambda\in\mathscr{P}_{0}(\rho,\underline{d})}\binom{d}{\underline{d}}\frac{\varepsilon_{\rho}}{\varepsilon_{\lambda}}2^{\frac{d-d_{0}}{2}}\xi_{\rho}^{(\pm)}(1)\xi_{0}^{(\pm)}(1)^{d_{0}}\cdots\xi_{\ell}^{(\pm)}(1)^{d_{\ell}}2^{(|\lambda^{(0)}|-h(\lambda^{(0)}))/2}K(\lambda)\xi_{\lambda},\]
where the first equality follows from (8.39) and the second is due to Proposition 8.21.
Therefore,
\[\dim_{\mathbb{K}}(\operatorname{End}_{\mathbb{K}\mathsf{G}}\mathbb{K}\mathbf{X})=\sum_{\underline{d}\in\Lambda(I,d)}\sum_{\lambda\in\mathscr{P}_{0}(\rho,\underline{d})}\binom{d}{\underline{d}}^{2}\varepsilon_{\rho}^{2}2^{d-d_{0}}\big{[}\xi_{\rho}^{(\pm)}(1)\xi_{0}^{(\pm)}(1)^{d_{0}}\cdots\xi_{\ell}^{(\pm)}(1)^{d_{\ell}}\big{]}^{2}2^{|\lambda^{(0)}|-h(\lambda^{(0)})}K(\lambda)^{2}\] \[=\sum_{\underline{d}\in\Lambda(I,d)}\binom{d}{\underline{d}}^{2}\frac{d!}{\binom{d}{\underline{d}}}\varepsilon_{\rho}^{2}2^{d-d_{0}}\big{[}\xi_{\rho}^{(\pm)}(1)\xi_{0}^{(\pm)}(1)^{d_{0}}\cdots\xi_{\ell}^{(\pm)}(1)^{d_{\ell}}\big{]}^{2}\] \[=d!\sum_{\underline{d}\in\Lambda(I,d)}\binom{d}{\underline{d}}\varepsilon_{\rho}^{2}2^{d-d_{0}}\big{[}\xi_{\rho}^{(\pm)}(1)\xi_{0}^{(\pm)}(1)^{d_{0}}\cdots\xi_{\ell}^{(\pm)}(1)^{d_{\ell}}\big{]}^{2}\] \[=d!\sum_{\chi\in\operatorname{Irr}(\mathbb{K}\mathsf{L}\mathsf{f})}\chi(1)^{2}=d!\dim_{\mathbb{K}}(\mathbb{K}\mathsf{L}\mathsf{f}),\]
where the first equality follows from the above expression for the character of \(\mathbb{K}\mathbf{X}\). (Note that the \(\varepsilon_{\lambda}^{2}\) comes from the fact that \([\xi_{\lambda}]\) is actually the sum of \(\varepsilon_{\lambda}^{2}\) irreducibles.) The second equality follows from Lemma 2.15 and the fourth from (8.39). (Note that the \(\varepsilon_{\rho,j_{1},\ldots,j_{d}}\) from (8.39) disappears as \(\xi_{\rho,j_{1},\ldots,j_{d}}\) is actually the sum of \(\varepsilon_{\rho,j_{1},\ldots,j_{d}}^{2}\) irreducibles.) We have now proved the first equality in (8.38) and the proof is complete.
That \(\mathbf{X}\) induces a Morita superequivalence (and not just a Morita equivalence) is now an immediate consequence of Lemma 3.25.
**Theorem 8.40**.: _Hypothesis 8.19 holds for all \(1\leq m\leq d\). In particular, \(\mathbf{X}\otimes_{\mathcal{O}\mathsf{N}\mathsf{f}}?\) induces a Morita superequivalence between \(\mathcal{O}\mathsf{N}\mathsf{f}\) and \(\mathcal{O}\mathsf{G}\mathsf{b}\)._
Proof.: The proof will proceed via induction. The statement for \(m=1\) is just Proposition 6.13(ii). Note that, in this case, the coefficients in Hypothesis 8.19(ii) really are all \(1\), as in Proposition 6.13(ii).
Let \(1<k\leq d\). We assume the statement is true for all \(1\leq m<k\). We first apply Proposition 8.21 with \(d\) replaced by \(k\). (This is valid due to Remark 5.2.) We immediately get that Hypothesis 8.19(ii) holds for \(m=k\).
Next, we apply Proposition 8.36 with \(d\) replaced by \(k\). (Again, this is valid via Remark 5.2.) This time we get that Hypothesis 8.19(i) holds for \(m=k\). This completes the proof.
The following corollary is now just a consequence of Theorem 8.40 and Lemma 3.12.
**Corollary 8.41**.: \(\mathbf{X}_{\bar{0}}\otimes_{\mathcal{O}\mathsf{N}_{\bar{0}}\mathsf{f}}\)\(?\) _induces a Morita equivalence between \(\mathcal{O}\mathsf{N}_{\bar{0}}\mathsf{f}\) and \(\mathcal{O}\mathsf{G}_{\bar{0}}\mathsf{b}\)._
Of course, \(\mathcal{O}\mathsf{G}_{\bar{0}}\mathsf{b}\) is a spin block of a double cover of an alternating group. Furthermore, by Lemma 4.32, \(\mathcal{O}\mathsf{N}_{\bar{0}}\mathsf{f}\) is the Brauer correspondent of \(\mathcal{O}\mathsf{G}_{\bar{0}}\mathsf{b}\) in \(\mathsf{N}_{\bar{0}}\). We can, therefore, interpret Corollary 8.41 as a result concerning RoCK blocks for double covers of alternating groups.
**Remark 8.42**.: Note that, for the \(\mathsf{G}_{\bar{0}}\) in Corollary 8.41, \(|\mathsf{G}_{\bar{0}}|=n!\). In order to be consistent with the convention from the introduction, we would like to know that the Corollary holds under the condition that \(\mathbb{K}\) (and hence \(\mathcal{O}\)) contains a primitive \((n!)^{\mathrm{th}}\) root of unity and not necessarily a primitive \((2n!)^{\mathrm{th}}\) root of unity. We claim that even Theorem 8.40 holds under these weaker assumptions on \(\mathbb{K}\).
We have \(n\geq 4\) and so we still automatically have that \(-1\) and \(2\) have square roots in \(\mathbb{K}\). Next note we are able to define \(\mathsf{f}\) and \(\mathsf{b}\) since they live in \(\mathcal{O}\mathsf{G}_{\bar{0}}\) and \(\mathcal{O}\mathsf{N}_{\bar{0}}\) respectively. The \(\mathbf{M}\) from Proposition 6.13 is defined entirely using \(\mathcal{O}\mathsf{S}_{p}\). Since \(2p!\leq n!\), this does not cause any issues. Similarly, \(2r!\leq n!\) and so \(B^{\rho,0}\), and hence \(\mathbf{M}_{\mathbf{N}}\), can be constructed.
We can now construct \(\mathbf{X}\) as the super Green correspondence of \(\mathbf{M}_{\mathbf{N}}\) and we have an algebra homomorphism \((\mathcal{O}\mathsf{N}\mathsf{f})^{\mathrm{op}}\to\mathrm{End}_{\mathcal{O} \mathsf{G}}\,\mathbf{X}\) given by the right action of \(\mathcal{O}\mathsf{N}\mathsf{f}\) on \(\mathbf{X}\). That this is an isomorphism follows from the fact that the corresponding homomorphism obtained by extending \(\mathcal{O}\) is an isomorphism.
## 9. Vertices and source of bimodules
We continue to adopt the notation from Section 7; in particular, \(\rho\) is a \(d\)-Rouquier \(\bar{p}\)-core.
### Splendid derived equivalences
In this section we show that the Morita equivalences between \(\mathcal{O}\mathsf{N}\mathsf{f}\) and \(\mathcal{O}\mathsf{G}\mathsf{b}\) from Theorem 8.40 and between \(\mathcal{O}\mathsf{N}_{\bar{0}}\mathsf{f}\) and \(\mathcal{O}\mathsf{G}_{\bar{0}}\mathsf{b}\) from Corollary 8.41 give rise to splendid derived equivalences.
Throughout this section all isomorphisms and direct summands will be assumed to be as bimodules (and not bisupermodules).
Let \(K\) be a finite group and \(e_{K}\) a block idempotent of \(\mathcal{O}K\) with corresponding defect group \(Q\leq K\). Recall that a _source idempotent_ of \(\mathcal{O}Ke_{K}\) is a primitive idempotent \(i\in(\mathcal{O}Ke_{K})^{Q}\) such that \(\mathrm{Br}_{Q}(i)\neq 0\). Let \(J\) be another finite group and \(e_{J}\) a block idempotent of \(\mathcal{O}J\) with corresponding defect group isomorphic to \(Q\) (that we identify with \(Q\)) and \(j\in(\mathcal{O}Je_{J})^{Q}\) a source idempotent of \(\mathcal{O}Je_{J}\). A _splendid derived equivalence_ between \(\mathcal{O}Ke_{K}\) and \(\mathcal{O}Je_{J}\) is a derived equivalence induced by a complex \(\mathcal{Y}\) of \((\mathcal{O}Ke_{K},\mathcal{O}Je_{J})\)-bimodules such that in each degree we have a finite direct sum of summands of the \((\mathcal{O}Ke_{K},\mathcal{O}Je_{J})\)-bimodules \(\mathcal{O}Ki\otimes_{\mathcal{O}R}j\mathcal{O}J\), where \(R\) runs over the subgroups of \(Q\), cf. [\(\mathbf{L}_{5}\), 1.10].
For our purposes it will be useful to think of source idempotents in the following way outlined in [**ALR**]. An idempotent \(i\in(\mathcal{O}Ke_{K})^{Q}\) is a source idempotent of \(\mathcal{O}Ke_{K}\) if and only if \(\mathcal{O}Ki\) is an indecomposable \(\mathcal{O}(K\times Q)\)-module with vertex \(\Delta Q\) (see [**ALR**, Remark 3]). Equally, an idempotent \(i\in(\mathcal{O}Ke_{K})^{Q}\) is a source idempotent of \(\mathcal{O}Ke_{K}\) if and only if \(i\mathcal{O}K\) is an indecomposable \(\mathcal{O}(Q\times K)\)-module with vertex \(\Delta Q\).
We set \(\mathbf{U}\) to be the \(\mathcal{O}\Delta\mathsf{D}\)-module
\[\mathbf{U}:=\Omega_{\mathcal{O}\Delta\mathsf{D}_{1}}^{\ell}(\mathcal{O})\boxtimes \cdots\boxtimes\Omega_{\mathcal{O}\Delta\mathsf{D}_{d}}^{\ell}(\mathcal{O}).\]
**Lemma 9.1**.: _As \((\mathcal{O}\mathsf{N},\mathcal{O}\mathsf{N})\)-bimodules,_
\[\mathbf{M}_{\mathbf{N}}\mid\mathcal{O}\mathsf{N}\otimes_{\mathcal{O}\mathsf{D }}(\mathrm{Ind}_{\Delta\mathsf{D}}^{\mathsf{D}\times\mathbf{D}}\mathbf{U}) \otimes_{\mathcal{O}\mathsf{D}}\mathcal{O}\mathsf{N}.\]
Proof.: We first note that, by the definition of \(\mathbf{M}_{\mathsf{L}}\), Proposition 6.13(i), Remark 3.8 and Lemma 3.67,
\[\mathbf{M}_{\mathsf{L}}\mid\operatorname{Ind}_{\Delta\mathsf{D}}^{\mathsf{L}\times\mathsf{L}}\mathbf{U}\simeq\mathcal{O}\mathsf{L}\otimes_{\mathcal{O}\mathsf{D}}(\operatorname{Ind}_{\Delta\mathsf{D}}^{\mathsf{D}\times\mathsf{D}}\mathbf{U})\otimes_{\mathcal{O}\mathsf{D}}\mathcal{O}\mathsf{L}.\]
Therefore, by Lemma 7.5,
\[\operatorname{Res}_{\mathsf{N}\times\mathsf{L}}^{\mathsf{N}\times\mathsf{N}}\mathbf{M}_{\mathsf{N}}\simeq\operatorname{Ind}_{\mathsf{L}\times\mathsf{L}}^{\mathsf{N}\times\mathsf{L}}\mathbf{M}_{\mathsf{L}}\mid\mathcal{O}\mathsf{N}\otimes_{\mathcal{O}\mathsf{D}}(\operatorname{Ind}_{\Delta\mathsf{D}}^{\mathsf{D}\times\mathsf{D}}\mathbf{U})\otimes_{\mathcal{O}\mathsf{D}}\mathcal{O}\mathsf{L}.\]
Since \(p\nmid[\mathsf{N}:\mathsf{L}]\), the claim follows.
**Lemma 9.2**.: _As \((\mathcal{O}\mathsf{G},\mathcal{O}\mathsf{N})\)-bimodules,_
\[\mathbf{X}\mid\mathcal{O}\mathsf{G}i\otimes_{\mathcal{O}\mathsf{D}}(\operatorname{Ind}_{\Delta\mathsf{D}}^{\mathsf{D}\times\mathsf{D}}\mathbf{U})\otimes_{\mathcal{O}\mathsf{D}}j\mathcal{O}\mathsf{N},\]
_for some source idempotents \(i\in(\mathcal{O}\mathsf{G}\mathsf{b})^{\mathsf{D}}\) and \(j\in(\mathcal{O}\mathsf{N}\mathsf{f})^{\mathsf{D}}\)._
Proof.: By the definition of \(\mathbf{X}\) and Lemmas 9.1 and 7.8(i),
\[\mathbf{X}\mid\mathcal{O}\mathsf{G}\mathsf{b}\otimes_{\mathcal{O}\mathsf{D}}(\operatorname{Ind}_{\Delta\mathsf{D}}^{\mathsf{D}\times\mathsf{D}}\mathbf{U})\otimes_{\mathcal{O}\mathsf{D}}\mathcal{O}\mathsf{N}.\]
Since \(\mathbf{X}\) is indecomposable, it is isomorphic to a direct summand of
\[V\otimes_{\mathcal{O}\mathsf{D}}(\operatorname{Ind}_{\Delta\mathsf{D}}^{\mathsf{D}\times\mathsf{D}}\mathbf{U})\otimes_{\mathcal{O}\mathsf{D}}W, \tag{9.3}\]
for some indecomposable summand \(V\) of \(\mathcal{O}\mathsf{G}\mathsf{b}\) as an \(\mathcal{O}(\mathsf{G}\times\mathsf{D})\)-module and some indecomposable summand \(W\) of \(\mathcal{O}\mathsf{N}\mathsf{f}\) as an \(\mathcal{O}(\mathsf{D}\times\mathsf{N})\)-module.
By Mackey's decomposition formula, \(V\) is relatively \(\Delta Q\)-projective, where \(\Delta Q={}^{(g,g)}\Delta\mathsf{D}\cap(\mathsf{G}\times\mathsf{D})\), for some \(g\in\mathsf{G}\). In particular, \(Q\leq\mathsf{D}\). Similarly, \(W\) is relatively \(\Delta R\)-projective, where \(\Delta R={}^{(h,h)}\Delta\mathsf{D}\cap(\mathsf{D}\times\mathsf{N})\), for some \(h\in\mathsf{N}\). Again, \(R\leq\mathsf{D}\). If \(g\notin N_{\mathsf{G}}(\mathsf{D})\) (resp. \(h\notin N_{\mathsf{N}}(\mathsf{D})\)), then \(Q<\mathsf{D}\) (resp. \(R<\mathsf{D}\)). Therefore, by Lemma 2.3(ii), the bimodule in (9.3) is a direct sum of bimodules each with vertex of order strictly smaller than that of \(\Delta\mathsf{D}\). Since \(\mathbf{X}\) has vertex \(\Delta\mathsf{D}\), this is a contradiction. From now on we assume \(g\in N_{\mathsf{G}}(\mathsf{D})\) and \(h\in N_{\mathsf{N}}(\mathsf{D})\).
We now have that \(Q=R=\mathsf{D}\). In fact, \(V\) and \(W\) must actually both have vertex \(\Delta\mathsf{D}\). Indeed, if either vertex is strictly contained in \(\Delta\mathsf{D}\), then Lemma 2.3(ii) once again gives that the bimodule in (9.3) is a direct sum of bimodules each with vertex of order strictly smaller than that of \(\Delta\mathsf{D}\).
By the [**ALR**] description of source idempotents, we have now shown that \(V\cong\mathcal{O}\mathsf{G}i\), for some source idempotent \(i\in(\mathcal{O}\mathsf{G}\mathsf{b})^{D}\) and \(W\cong j\mathcal{O}\mathsf{N}\), for some source idempotent \(j\in(\mathcal{O}\mathsf{N}\mathsf{f})^{\mathsf{D}}\), as desired.
For a finite group \(K\) and an \(\mathcal{O}K\)-module \(Z\), an _endosplit \(p\)-permutation resolution_ of \(Z\) is a bounded complex \(\mathcal{Y}\) of finitely generated \(p\)-permutation \(\mathcal{O}K\)-modules with homology concentrated in degree \(0\) isomorphic to \(Z\) such that the complex \(\mathcal{Y}\otimes_{\mathcal{O}}\mathcal{Y}^{*}\) is a split complex of \(\mathcal{O}K\)-modules, where \(K\) acts diagonally on the tensor product.
For the following lemma, we treat \(\mathbf{U}\) as an \(\mathcal{O}\mathsf{D}\)-module via the obvious group isomorphism \(\mathsf{D}\cong\Delta\mathsf{D}\).
**Lemma 9.4**.: \(\mathbf{U}\) _has an \(N_{\mathsf{G}}(\mathsf{D})\)-stable endosplit \(p\)-permutation resolution._
Proof.: Until further notice we fix some \(k\), with \(1\leq k\leq d\), and let \(g\) be a generator of \(\mathsf{D}_{k}\). We first claim that \(\Omega_{\mathcal{O}\mathsf{D}_{k}}^{2}(\mathcal{O})\cong\mathcal{O}\). Indeed, once we have made the identification
\[\Omega_{\mathcal{O}\mathsf{D}_{k}}(\mathcal{O})\cong\big{\{}\sum_{x\in\mathsf{ D}_{k}}\alpha_{x}x\mid\sum_{x\in\mathsf{D}_{k}}\alpha_{x}=0\big{\}}\subseteq \mathcal{O}\mathsf{D}_{k},\]
the kernel of the \(\mathcal{O}\mathsf{D}_{k}\)-module epimorphism given by
\[\mathcal{O}\mathsf{D}_{k}\twoheadrightarrow\Omega_{\mathcal{O}\mathsf{D}_{k}}( \mathcal{O}),\ x\mapsto(1-g)x\]
is \(\langle\sum_{x\in\mathsf{D}_{k}}x\rangle_{\mathcal{O}}\cong\mathcal{O}\), as claimed.
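In other words, we have constructed a short exact sequence of \(\mathcal{O}\mathsf{D}_{k}\)-modules
\[0\to\mathcal{O}\to\mathcal{O}\mathsf{D}_{k}\xrightarrow{\;x\mapsto(1-g)x\;}\Omega_{\mathcal{O}\mathsf{D}_{k}}(\mathcal{O})\to 0,\]
which exhibits \(\Omega^{2}_{\mathcal{O}\mathsf{D}_{k}}(\mathcal{O})\cong\mathcal{O}\).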
We have now shown that
\[\Omega_{\mathcal{O}\mathsf{D}_{k}}^{\ell}(\mathcal{O})\cong\begin{cases} \mathcal{O}&\text{if $\ell$ is even,}\\ \Omega_{\mathcal{O}\mathsf{D}_{k}}(\mathcal{O})&\text{if $\ell$ is odd.}\end{cases}\]
Therefore, if \(\ell\) is even, \(\mathbf{U}\cong\mathcal{O}\) and we can just take the complex with \(\mathcal{O}\) concentrated in degree \(0\). From now on we assume \(\ell\) is odd and consequently that \(\Omega_{\mathcal{O}\mathsf{D}_{k}}^{\ell}(\mathcal{O})\cong\Omega_{\mathcal{O }\mathsf{D}_{k}}(\mathcal{O})\).
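To fix ideas on small primes (using only the fact, noted in the subsection on source algebra equivalences below, that \(\ell\) is even precisely when \(p\equiv 1\bmod 4\)):
\[p=5:\ \mathbf{U}\cong\mathcal{O},\qquad p=7:\ \mathbf{U}\cong\Omega_{\mathcal{O}\Delta\mathsf{D}_{1}}(\mathcal{O})\boxtimes\cdots\boxtimes\Omega_{\mathcal{O}\Delta\mathsf{D}_{d}}(\mathcal{O}).\]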
Certainly
\[\mathcal{Y}_{k}:=\cdots\to 0\to\mathcal{O}\mathsf{D}_{k}\to \mathcal{O}\to 0\to\ldots,\]
where the \(\mathcal{O}\mathsf{D}_{k}\) is in degree \(0\) and the non-zero boundary map is given by \(x\mapsto 1\), for all \(x\in\mathsf{D}_{k}\), has homology concentrated in degree \(0\) isomorphic to \(\Omega_{\mathcal{O}\mathsf{D}_{k}}(\mathcal{O})\). Moreover, by [\(\mathbf{L}_{6}\), Proposition 7.11.8], this is an endosplit \(p\)-permutation resolution of \(\Omega_{\mathcal{O}\mathsf{D}_{k}}(\mathcal{O})\). It is also clear that this resolution is \(N_{\tilde{\mathsf{S}}_{\tilde{\mathsf{T}}_{k}}}(\mathsf{D}_{k})\)-stable.
We now drop our assumption that \(k\) is fixed. By [**Ri**, Lemma 7.4], \(\mathcal{Y}:=\mathcal{Y}_{1}\otimes\cdots\otimes\mathcal{Y}_{d}\) is an endosplit \(p\)-permutation resolution of \(\mathbf{U}\).
All that remains to show is that \(\mathcal{Y}\) is \(N_{\mathsf{G}}(\mathsf{D})\)-stable. For this we refer to Lemma 4.30(ii). We note that, since \(\mathsf{D}\leq\mathsf{G}_{\bar{0}}\), all the conjugation actions by elements of \(N_{\mathsf{G}}(\mathsf{D})\) are particularly easy to write down. For example, \(N_{\tilde{\mathsf{S}}_{\tilde{\mathsf{T}}_{k}}}(\mathsf{D}_{k})\) commutes with every \(\mathsf{D}_{l}\), with \(l\neq k\), and, through our identification of all the \(\mathcal{O}\mathsf{D}_{k}\)'s via (4.13),
\[T_{w}(\alpha_{1}\otimes\cdots\otimes\alpha_{d})T_{w}^{-1}=\alpha_{w^{-1}(1)} \otimes\cdots\otimes\alpha_{w^{-1}(d)}, \tag{9.5}\]
for all \(w\in\mathsf{S}_{d}\) and \(\alpha_{k}\in\mathcal{O}\mathsf{D}_{k}\).
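For example, for \(d=2\) and the transposition \(w=(1\,2)\), (9.5) simply swaps the two tensor factors:
\[T_{(1\,2)}(\alpha_{1}\otimes\alpha_{2})T_{(1\,2)}^{-1}=\alpha_{2}\otimes\alpha_{1}.\]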
Firstly, \(\tilde{\mathsf{S}}_{\mathsf{R}}\) commutes with \(\mathsf{D}\), so \(\mathcal{Y}\) is \(\tilde{\mathsf{S}}_{\mathsf{R}}\)-stable. That \(\mathcal{Y}\) is \(N_{\tilde{\mathsf{S}}_{\tilde{\mathsf{T}}_{k}}}(\mathsf{D}_{k})\)-stable, for all \(1\leq k\leq d\), just follows from our construction. Therefore, the only non-trivial thing to show is that \(\mathcal{Y}\) is stable under the action of each \(T_{w}\). However, this is an immediate consequence of (9.5) and [**Ma**, Lemma 4.1(b)].
**Theorem 9.6**.: \(\mathcal{O}\mathsf{N}\mathsf{f}\) _is splendidly derived equivalent to \(\mathcal{O}\mathsf{G}\mathsf{b}\)._
Proof.: With Lemmas 9.2, 9.4 and Theorem 8.40 in mind, this can just be concluded from [\(\mathbf{L}_{6}\), Proposition 9.11.5(i)], originally stated in [\(\mathbf{L}_{3}\), Theorem 1.3].
We also have the analogous theorem for the RoCK blocks of double covers of alternating groups from Corollary 8.41.
**Theorem 9.7**.: \(\mathcal{O}\mathsf{N}_{\bar{0}}\mathsf{f}\) _is splendidly derived equivalent to \(\mathcal{O}\mathsf{G}_{\bar{0}}\mathsf{b}\)._
Proof.: Once we have replaced Theorem 8.40 with Corollary 8.41, the proof of this result is identical to that of Theorem 9.6 except that the analogue of Lemma 9.2 requires us to be slightly more careful than in the original lemma. More precisely, it is not clear that \(\mathbf{X}_{\bar{0}}\) has vertex \(\Delta\mathsf{D}\) and source \(\mathbf{U}\). However, since \(p\nmid[\mathsf{G}\times\mathsf{N}:\mathsf{G}_{\bar{0}}\times\mathsf{N}_{\bar{0}}]\) and \(\operatorname{Res}_{\mathsf{G}_{\bar{0}}\times\mathsf{N}_{\bar{0}}}^{\mathsf{G}\times\mathsf{N}}\mathbf{X}\cong\mathbf{X}_{\bar{0}}\oplus\mathbf{X}_{\bar{1}}\), we must at least have one of \(\mathbf{X}_{\bar{0}}\) or \(\mathbf{X}_{\bar{1}}\) having vertex \(\Delta\mathsf{D}\) and source \(\mathbf{U}\).
The proof will be complete once we have proved that \(\mathbf{X}_{\bar{1}}\) induces a Morita equivalence between \(\mathcal{O}\mathsf{N}_{\bar{0}}\mathsf{f}\) and \(\mathcal{O}\mathsf{G}_{\bar{0}}\mathsf{b}\). Now, \(\mathbf{X}_{\bar{1}}\cong g\mathbf{X}_{\bar{0}}\), for any \(g\in\mathsf{G}_{\bar{1}}\). Therefore, as functors, \(\mathbf{X}_{\bar{1}}\otimes_{\mathcal{O}\mathsf{N}_{\bar{0}}\mathsf{f}}\)? is isomorphic to \(\mathbf{X}_{\bar{0}}\otimes_{\mathcal{O}\mathsf{N}_{\bar{0}}\mathsf{f}}\)? composed with the Morita auto-equivalence of \(\mathcal{O}\mathsf{G}_{\bar{0}}\mathsf{b}\) given by conjugation by \(g\). This completes the claim.
We can now construct splendid derived equivalences from our RoCK blocks to their true Brauer correspondents, that is, their Brauer correspondents in the normalizer of the defect group.
**Corollary 9.8**.: \(\mathcal{O}\mathsf{Gb}\) _is splendidly derived equivalent to its Brauer correspondent in \(N_{\mathsf{G}}(\mathsf{D})\) and \(\mathcal{O}\mathsf{G}_{\bar{0}}\mathsf{b}\) is splendidly derived equivalent to its Brauer correspondent in \(N_{\mathsf{G}_{\bar{0}}}(\mathsf{D})\)._
Proof.: In [KL, Proposition 5.2.33] a splendid derived equivalence between \(\mathbb{F}\mathsf{N}\bar{\mathsf{f}}\) (resp. \(\mathbb{F}\mathsf{N}_{\bar{0}}\bar{\mathsf{f}}\)) and the Brauer correspondent of \(\mathbb{F}\mathsf{G}\bar{\mathsf{b}}\) (resp. \(\mathbb{F}\mathsf{G}_{\bar{0}}\bar{\mathsf{b}}\)) in \(N_{\mathsf{G}}(\mathsf{D})\) (resp. \(N_{\mathsf{G}_{\bar{0}}}(\mathsf{D})\)) was constructed. By [Ri, Theorem 5.2], all these splendid derived equivalences lift to splendid derived equivalences for the appropriate blocks defined over \(\mathcal{O}\). The result now follows from Theorems 9.6, 9.7 and [Ro, Lemma 10.2.6].
### Source algebra equivalences
A _source algebra equivalence_ between two blocks is a Morita equivalence induced by a bimodule with trivial source. As noted in the proof of Lemma 9.4, \(\mathbf{U}\cong\mathcal{O}\), when \(\ell\) is even, that is, when \(p\equiv 1\bmod 4\). We, therefore, have the following theorem:
**Theorem 9.9**.: _If \(p\equiv 1\bmod 4\), \(\mathcal{O}\mathsf{N}\mathsf{f}\) is source algebra equivalent to \(\mathcal{O}\mathsf{Gb}\) and \(\mathcal{O}\mathsf{N}_{\bar{0}}\mathsf{f}\) is source algebra equivalent to \(\mathcal{O}\mathsf{G}_{\bar{0}}\mathsf{b}\)._
If \(p\equiv 3\bmod 4\), our bimodule \(\mathbf{X}\) no longer has trivial source. However, it is still unclear whether or not an appropriate trivial source bimodule can be found:
**Question 9.10**.: If \(p\equiv 3\bmod 4\), is \(\mathcal{O}\mathsf{N}\mathsf{f}\) source algebra equivalent to \(\mathcal{O}\mathsf{Gb}\) and is \(\mathcal{O}\mathsf{N}_{\bar{0}}\mathsf{f}\) source algebra equivalent to \(\mathcal{O}\mathsf{G}_{\bar{0}}\mathsf{b}\)?
We compare Theorem 9.9 and Question 9.10 with the analogous situation for RoCK blocks of symmetric groups in [CK, Theorem 2]. In that article, the analogue of the bimodule \(\mathbf{X}\) from this paper was taken to be the Green correspondent of the local block itself, rather than something more complicated like our bimodule \(\mathbf{M}_{\mathbf{N}}\). It, therefore, automatically had trivial source.
|
2309.05584 | Distributional Probabilistic Model Checking | Probabilistic model checking can provide formal guarantees on the behavior of
stochastic models relating to a wide range of quantitative properties, such as
runtime, energy consumption or cost. But decision making is typically with
respect to the expected value of these quantities, which can mask important
aspects of the full probability distribution such as the possibility of
high-risk, low-probability events or multimodalities. We propose a
distributional extension of probabilistic model checking, applicable to
discrete-time Markov chains (DTMCs) and Markov decision processes (MDPs). We
formulate distributional queries, which can reason about a variety of
distributional measures, such as variance, value-at-risk or conditional
value-at-risk, for the accumulation of reward until a co-safe linear temporal
logic formula is satisfied. For DTMCs, we propose a method to compute the full
distribution to an arbitrary level of precision, based on a graph analysis and
forward analysis of the model. For MDPs, we approximate the optimal policy with
respect to expected value or conditional value-at-risk using distributional
value iteration. We implement our techniques and investigate their performance
and scalability across a range of benchmark models. Experimental results
demonstrate that our techniques can be successfully applied to check various
distributional properties of large probabilistic models. | Ingy Elsayed-Aly, David Parker, Lu Feng | 2023-09-11T16:12:03Z | http://arxiv.org/abs/2309.05584v3 | # Distributional Probabilistic Model Checking
###### Abstract
Probabilistic model checking can provide formal guarantees on the behavior of stochastic models relating to a wide range of quantitative properties, such as runtime, energy consumption or cost. But decision making is typically with respect to the _expected_ value of these quantities, which can mask important aspects of the full probability distribution such as the possibility of high-risk, low-probability events or multimodalities. We propose a _distributional_ extension of probabilistic model checking, applicable to discrete-time Markov chains (DTMCs) and Markov decision processes (MDPs). We formulate distributional queries, which can reason about a variety of distributional measures, such as variance, value-at-risk or conditional value-at-risk, for the accumulation of reward until a co-safe linear temporal logic formula is satisfied. For DTMCs, we propose a method to compute the full distribution to an arbitrary level of precision, based on a graph analysis and forward analysis of the model. For MDPs, we approximate the optimal policy with respect to expected value or conditional value-at-risk using distributional value iteration. We implement our techniques and investigate their performance and scalability across a range of benchmark models. Experimental results demonstrate that our techniques can be successfully applied to check various distributional properties of large probabilistic models.
Keywords: Probabilistic Model Checking · Risk-Aware Verification · Markov Decision Processes.
## 1 Introduction
Computer systems are increasingly being integrated seamlessly with sensing, control and actuation of the physical world. Many of these systems (e.g., robotics) exhibit probabilistic and non-deterministic behavior due to inherent uncertainty (e.g., sensor noise, human interactions), which pose significant challenges for ensuring their safe, reliable, timely and resource efficient execution.
_Probabilistic model checking_ offers a collection of techniques for modelling systems that exhibit probabilistic and non-deterministic behavior. It supports not only their verification against specifications in temporal logic, but also synthesis of optimal controllers (policies). Commonly used models include discrete-time Markov chains (DTMCs) and Markov decision processes (MDPs). A range of verification techniques for these, and other models, are supported by widely used probabilistic model checkers such as PRISM [23] and Storm [13].
To capture the range of quantitative correctness specifications needed in practice, it is common to reason about _rewards_ (or, conversely, _costs_) associated with a model. Examples include checking the worst-case execution time of a distributed coordination algorithm, or synthesizing a controller that guarantees the minimal energy consumption for a robot to complete a sequence of tasks. Typically the _expected_ (average) value of these quantities is computed; however, in some situations it is necessary to consider the full probability distribution. Notably, in safety-critical applications, it can be important to synthesize _risk-sensitive_ policies that avoid high-cost, low-probability events, which can still arise under policies that minimize the expected value. Risk-aware distributional measures such as _conditional value-at-risk_ (CVaR) [26] address this by minimizing the costs that occur above a specified point in the tail of the distribution. Within probabilistic model checking, the use of _quantiles_ has been proposed [33, 29, 20, 17] to reason about cost or reward distributions.
In this paper, we develop a _distributional probabilistic model checking_ approach, which computes and reasons about the full distribution over the reward associated with a DTMC or MDP. More precisely, we consider the reward accumulated until a specification in co-safe LTL is satisfied, the latter providing an expressive means to specify, for example, a multi-step task to be executed by a robot [21], or a sequence of events leading to a system failure. We propose a temporal logic based specification for such distributional queries.
For a DTMC, we perform model checking of these queries by generating a precise representation of the distribution, up to an arbitrary, pre-specified level of accuracy (the distribution is discrete, but with possibly countable infinite support, so at least some level of truncation is typically required). This is based on a graph analysis followed by a forward numerical computation. From this, we can precisely compute a wide range of useful properties, such as the mean, variance, mode or various risk-based measures.
For an MDP, we instead aim to optimize such properties over all policies. In this paper, we focus on optimizing the expected value or CVaR, whilst generating the full reward distribution for each state of the MDP. This is done using _distributional value iteration_ (DVI) [4], which can be seen as a generalization of classical value iteration. Rather than computing a single scalar value (e.g., representing the optimal expected reward) for each MDP state, DVI associates a full distribution with each state, replacing the standard Bellman equation with a distributional Bellman equation.
We consider two types of DVI algorithms, namely _risk-neutral_ DVI for optimizing the expected value and _risk-sensitive_ DVI for optimizing CVaR. Risk-neutral DVI can be shown to converge to a deterministic, memoryless optimal policy, if a unique one exists [4]. For CVaR, memoryless policies do not suffice for optimality, but risk-sensitive DVI does converge for a product MDP that incorporates a (continuous) slack variable representing a reward budget [2]. To improve computational efficiency, we present a risk-sensitive DVI algorithm based on a discretization of the slack variable, and show that the algorithm converges to a CVaR optimal policy for increasingly precise discretizations.
For both DVI algorithms, in practice it is necessary to use approximate distributional representations. We consider the use of categorical and quantile representations. This can impact both optimality and the precision of computed distributions but, for the latter, we can construct the DTMC induced by generated MDP policies and use our precise approach to generate the correct distribution. Finally, we implement our distributional probabilistic model checking framework as an extension of the PRISM model checker [23] and explore the feasibility and performance of the techniques on a range of benchmarks.
### Related Work
#### 1.1.1 Distributional properties.
In the context of probabilistic model checking, there is various prior work that considers properties of reward/cost distributions other than the expected value. Notably, this includes _quantiles_, i.e., optimal reward thresholds which guarantee that the maximal or minimal probability of a reward-bounded reachability formula meets a certain threshold, studied for MDPs in [33, 29, 20, 17]. While [33] and [29] focus on complexity results, [20] and [17] consider practical implementations to compute quantiles, for single- and multi-objective variants, respectively, using model unfoldings over "cost epochs"; [17] also proposes the use of interval iteration to provide error bounds. By contrast, our methods derive the full distribution, rather than targeting quantiles specifically, and our DTMC approach derives error bounds from a forward computation. We also mention [8], which computes probability distributions in a forwards manner, but for infinite-state probabilistic programs and using generating functions, and [7], which proposes an algorithm (but not implementation) to compute policies that trade off expected mean payoff and variance.
#### 1.1.2 Risk-aware objectives.
For MDPs, we focus in particular on _conditional value-at-risk_ (CVaR). There are alternatives, such as mean-variance [32] and value-at-risk [15] but, as discussed in [26], these are not _coherent risk metrics_, which may make them unsuitable for rational decision-making. [22] studies the complexity of decision problems for CVaR, but for mean-payoff rewards and without considering practical implementations. An algorithm for approximate minimization of CVaR is presented in [10], based on repeated solving of concave piecewise linear maximization problems, but it has limited scalability, taking over 2 hours to solve an MDP with about 3,000 states. More recently, [27] proposes both linear programming and value iteration methods to solve CVaR for MDPs and DTMCs. Our approach differs in that it computes the full distribution, allowing multiple distributional properties to be considered. Nevertheless, we present a brief empirical comparison in Section 5.
Other CVaR variants have also been tackled: building on ideas from [10], [30] presents a lexicographic approach which minimizes the expected cost subject to the constraint that the CVaR of the total cost is optimal, and [6, 9] address constrained problems where CVaR has to be lower than a given threshold; neither variant is directly comparable to our solution methods. We also note that all the above work focuses directly on reachability problems, rather than using temporal logic specifications. Alternative temporal logic based approaches to risk-aware
control include [11], which proposes risk-aware verification of MDPs using cumulative prospect theory, and [18] which proposes chance constrained temporal logic for control of deterministic dynamical systems.
**Distributional reinforcement learning.** Our work is based on probabilistic model checking, which fully explores and solves models, but our use of DVI is inspired by _distributional reinforcement learning_[4] which has been attracting increasing interest due to its improved sample efficiency as well as its ability to learn risk sensitive policies in stochastic environments. Various learning algorithms have been proposed, using categorical [3] and quantile [12] representations to approximate distributions, and a comparative analysis of expected and distributional reinforcement learning is provided in [25]. We use the categorical and quantile representations for distributional approximation and our risk-neutral DVI algorithm is a minimization variant adapted from [4]. Risk-sensitive DVI is also sketched in [4], based on [2], but only a theoretical analysis of the method is given, without considering practical implementation aspects, such as how to discretize slack variables for computational efficiency, and how such approximations would affect the correctness of model checking. We extend the risk-sensitive DVI with discretized slack variables and show their effects theoretically in Section 4.3 and empirically via computational experiments in Section 5.
## 2 Background
We begin with some background on random variables, probability distributions, and the probabilistic models used in this paper. We let \(\mathbb{N}\), \(\mathbb{R}\), and \(\mathbb{Q}\) denote the sets of naturals, reals and rationals, respectively, and write \(\mathbb{N}_{\infty}=\mathbb{N}\cup\{\infty\}\).
### Random Variables and Probability Distributions
Let \(X:\Omega\to\mathbb{R}\) be a random variable over a probability space \((\Omega,\mathcal{F},\Pr)\). The _cumulative distribution function_ (CDF) of \(X\) is denoted by \(\mathcal{F}_{X}(x):=\Pr(X\leq x)\), and the inverse CDF is \(\mathcal{F}_{X}^{-1}(\tau):=\inf\{x\in\mathbb{R}:\mathcal{F}_{X}(x)\geq\tau\}\). Common properties of interest for \(X\) include, e.g., the _expected value_\(\mathbb{E}(X)\), the _variance_\(\mathrm{Var}(X)\) which is the square of the _standard deviation_ (s.d.), or the _mode_.
In this paper, we also consider several _risk_-related measures. The _value-at-risk_ of \(X\) at level \(\alpha\in(0,1)\) is defined by \(\mathsf{VaR}_{\alpha}(X):=\mathcal{F}_{X}^{-1}(\alpha)\), which measures risk as the minimum value encountered in the tail of the distribution with respect to a risk level \(\alpha\). The _conditional value-at-risk_ of \(X\) at level \(\alpha\in(0,1)\) is given by \(\mathsf{CVaR}_{\alpha}(X):=\frac{1}{1-\alpha}\int_{\alpha}^{1}\mathsf{VaR}_{ \nu}(X)d\nu\), representing the expected loss given that the loss is greater or equal to \(\mathsf{VaR}_{\alpha}\).
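As a concrete illustration, the following minimal Python sketch (a hypothetical helper, not taken from the paper's implementation) computes \(\mathsf{VaR}_{\alpha}\) and \(\mathsf{CVaR}_{\alpha}\) of a discrete distribution directly from the inverse-CDF definitions above:

```
import numpy as np

def var_cvar(values, probs, alpha):
    """VaR_alpha and CVaR_alpha of a discrete distribution.

    VaR_alpha = F^{-1}(alpha) = inf{x : F(x) >= alpha};
    CVaR_alpha = (1/(1-alpha)) * integral of VaR_nu for nu in (alpha, 1),
    i.e. the probability-weighted mean of the upper (1-alpha) tail.
    """
    order = np.argsort(values)
    v = np.asarray(values, dtype=float)[order]
    p = np.asarray(probs, dtype=float)[order]
    cdf = np.cumsum(p)
    k = int(np.searchsorted(cdf, alpha))   # first index with F(v[k]) >= alpha
    tail = np.where(np.arange(len(v)) > k, p, 0.0)
    tail[k] = cdf[k] - alpha               # part of atom k lying above alpha
    return float(v[k]), float(tail @ v) / (1.0 - alpha)

# var_cvar([1, 2, 10], [0.5, 0.4, 0.1], 0.9) returns (2.0, 10.0):
# VaR_0.9 = 2 (the 0.9-quantile), CVaR_0.9 = 10 (mean of the worst 10%).
```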
**Example 1**.: Figure 1(a) illustrates an example probability distribution of a random variable \(X\), annotated with its expected value \(\mathbb{E}(X)\), value-at-risk \(\mathsf{VaR}_{0.9}(X)\) and conditional value-at-risk \(\mathsf{CVaR}_{0.9}(X)\). \(\blacksquare\)
When working with the probability distributions for random variables, we write distributional equations as \(X_{1}:\stackrel{{ D}}{{=}}X_{2}\), denoting equality of probability laws (i.e., the random variable \(X_{1}\) is distributed according to the same law as \(X_{2}\)). We
use \(\delta_{\theta}\) to denote the Dirac delta distribution that assigns probability \(1\) to outcome \(\theta\in\mathbb{R}\). In practice, even when distributions are discrete, we require approximate, finite representations for them. In this paper, we consider _categorical_ and _quantile_ distributional representations, both of which provide desirable characteristics such as tractability and expressiveness [4].
Definition 1 (Categorical representation): A _categorical representation_ parameterizes the probability of \(m\) _atoms_ as a collection of evenly-spaced locations \(\theta_{1}<\cdots<\theta_{m}\in\mathbb{R}\). Its distributions are of the form \(\sum_{i=1}^{m}p_{i}\delta_{\theta_{i}}\) where \(p_{i}\geq 0\) and \(\sum_{i=1}^{m}p_{i}=1\). We define the _stride_ between successive atoms as \(\varsigma_{m}=\frac{\theta_{m}-\theta_{1}}{m-1}\).
Definition 2 (Quantile representation): A _quantile representation_ parameterizes the location of \(m\) equally-weighted atoms. Its distributions are of the form \(\frac{1}{m}\sum_{i=1}^{m}\delta_{\theta_{i}}\) for \(\theta_{i}\in\mathbb{R}\). Multiple atoms may share the same value.
Next, we introduce two metrics for measuring the distance between probability distributions. Let \(p\in[1,\infty)\) and \(\mu,\mu^{\prime}\) be two distributions. The _p-Wasserstein distance_\(w_{p}(\mu,\mu^{\prime})\) and the _Cramer distance_\(\ell_{2}(\mu,\mu^{\prime})\) are defined as:
\[w_{p}(\mu,\mu^{\prime}) :=\left(\int_{0}^{1}\lvert\mathcal{F}_{\mu}^{-1}(\tau)-\mathcal{ F}_{\mu^{\prime}}^{-1}(\tau)\rvert^{p}d\tau\right)^{\frac{1}{p}}\] \[\ell_{2}(\mu,\mu^{\prime}) :=\left(\int_{\mathbb{R}}\lvert\mathcal{F}_{\mu}(x)-\mathcal{F}_{ \mu^{\prime}}(x)\rvert^{2}dx\right)^{\frac{1}{2}}\]
Given two multi-dimensional distributions \(\eta,\eta^{\prime}\in\mathit{Dist}(\mathbb{R})^{S}\), we define the _supremum p-Wasserstein distance_ between them as \(\overline{w}_{p}(\eta,\eta^{\prime}):=\sup_{s\in S}w_{p}(\eta(s),\eta^{\prime}(s))\), and the _supremum Cramer distance_ as \(\overline{\ell}_{2}(\eta,\eta^{\prime}):=\sup_{s\in S}\ell_{2}(\eta(s),\eta^{\prime}(s))\).
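For discrete distributions on a common sorted support, these metrics reduce to finite sums, since the CDF difference is piecewise constant between support points. A minimal sketch for the Cramer distance (our own illustration, not from the paper's codebase):

```
import numpy as np

def cramer_distance(xs, p, q):
    """l2 (Cramer) distance between discrete distributions p and q on a
    common, sorted support xs: F_p - F_q is constant on each interval
    (xs[j], xs[j+1]) and zero outside the support, so the integral is a
    finite sum of (CDF gap)^2 times interval length."""
    xs = np.asarray(xs, dtype=float)
    dF = np.cumsum(np.asarray(p, dtype=float) - np.asarray(q, dtype=float))[:-1]
    return float(np.sqrt(np.sum(dF ** 2 * np.diff(xs))))
```

Replacing the squared gap by its absolute value (and dropping the square root) gives \(w_{1}\) in the same discrete setting.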
We can then describe how to _project_ a distribution onto a representation.
Proposition 1 (Categorical projection [4]): _For a probability distribution \(\mu\), there exists a (unique) projection of \(\mu\) in the Cramer distance (\(\ell_{2}\)) onto the categorical representation denoted by \(\Pi_{\mathrm{C}}\mu=\sum_{i=1}^{m}p_{i}\delta_{\theta_{i}}\) with parameters \(p_{i}=\mathbb{E}_{X\sim\mu}\big{(}h_{i}(\frac{X-\theta_{i}}{\varsigma_{m}}) \big{)}\), where \(h_{i}(x)\) are (half-)triangular kernel functions defined as:_
\[h_{1}(x)=\begin{cases}1&x\leq 0\\ \max(0,1-\lvert x\rvert)&x>0\end{cases}\quad h_{m}(x)=\begin{cases}\max(0,1- \lvert x\rvert)&x\leq 0\\ 1&x>0\end{cases}\]
_and \(h_{i}(x)=\max(0,1-\lvert x\rvert)\) for \(i=2,\ldots,m-1\)._
Figure 1: An example distribution with its categorical and quantile representations.
Proposition 2 (Quantile projection [4]): _For a probability distribution \(\mu\), a projection of \(\mu\) in the 1-Wasserstein distance (\(w_{1}\)) onto the quantile representation is given by \(\Pi_{\mathrm{Q}}\mu=\frac{1}{m}\sum_{i=1}^{m}\delta_{\theta_{i}}\) with parameters \(\theta_{i}=\mathcal{F}_{\mu}^{-1}(\frac{2i-1}{2m})\)._
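Both projections are easy to realize in code. The following Python sketch is our own illustration: `xs`/`ps` encode an input distribution as atoms and probabilities, and `theta` is a NumPy array of evenly-spaced atom locations as in Definition 1.

```
import numpy as np

def categorical_projection(xs, ps, theta):
    """Cramer-optimal projection onto evenly-spaced atoms theta (Prop. 1):
    p_i = E_{X~mu}[ h_i((X - theta_i) / stride) ], with half-triangular
    kernels at the two boundary atoms."""
    theta = np.asarray(theta, dtype=float)
    stride = theta[1] - theta[0]
    out = np.zeros(len(theta))
    for x, p in zip(xs, ps):
        z = (x - theta) / stride
        h = np.clip(1.0 - np.abs(z), 0.0, 1.0)  # interior triangular kernels
        if z[0] <= 0:
            h[0] = 1.0                           # mass at or below theta_1
        if z[-1] >= 0:
            h[-1] = 1.0                          # mass at or above theta_m
        out += p * h
    return out

def quantile_projection(xs, ps, m):
    """w1-optimal projection onto m equally-weighted atoms (Prop. 2):
    theta_i = F^{-1}((2i - 1) / (2m))."""
    order = np.argsort(xs)
    v = np.asarray(xs, dtype=float)[order]
    cdf = np.cumsum(np.asarray(ps, dtype=float)[order])
    taus = (2.0 * np.arange(1, m + 1) - 1.0) / (2.0 * m)
    return v[np.searchsorted(cdf, taus)]
```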
**Example 2**: _Figures 1(b) and 1(c) show categorical and quantile representations, respectively, approximating the distribution shown in Figure 1(a). \(\blacksquare\)_
### Markov Chains and Markov Decision Processes
In this paper, we work with both discrete-time Markov chains (DTMCs) and Markov decision processes (MDPs).
Definition 3 (DTMC): A _discrete-time Markov chain_ (DTMC) is a tuple \(\mathcal{D}=(S,s_{0},P,\mathit{AP},L)\), where \(S\) is a set of states, \(s_{0}\in S\) is an initial state, \(P:S\times S\to[0,1]\) is a probabilistic transition matrix satisfying \(\forall s\in S:\sum_{s^{\prime}\in S}P(s,s^{\prime})=1\), \(\mathit{AP}\) is a set of atomic propositions and \(L:S\to 2^{AP}\) is a labelling function. \({}_{\blacksquare}\)_
A DTMC \(\mathcal{D}\) evolves between states, starting in \(s_{0}\), and the probability of taking a transition from \(s\) to \(s^{\prime}\) is \(P(s,s^{\prime})\). An (infinite) _path_ through \(\mathcal{D}\) is a sequence of states \(s_{0}s_{1}s_{2}\dots\) such that \(s_{i}\in S\) and \(P(s_{i},s_{i+1})>0\) for all \(i\geq 0\), and a finite path is a prefix of an infinite path. The sets of all infinite and finite paths in \(\mathcal{D}\) are denoted \(\mathit{IPaths_{\mathcal{D}}}\) and \(\mathit{FPaths_{\mathcal{D}}}\), respectively. In standard fashion [19], we define a probability measure \(\Pr_{\mathcal{D}}\) over the set of paths \(\mathit{IPaths_{\mathcal{D}}}\).
Definition 4 (Mdp): A _Markov decision process_ (MDP) is a tuple \(\mathcal{M}=(S,s_{0},A,P,\mathit{AP},L)\), where states \(S\), initial state \(s_{0}\), atomic propositions \(\mathit{AP}\) and labelling \(L\) are as for a DTMC, and \(P:S\times A\times S\to[0,1]\) is a probabilistic transition function satisfying \(\forall s\in S,\forall a\in A:\sum_{s^{\prime}\in S}P(s,a,s^{\prime})\in\{0,1\}\). \({}_{\blacksquare}\)_
In each state \(s\) of an MDP \(\mathcal{M}\), there are one or more _available_ actions which can be taken, denoted \(A(s)=\{a\in A\,|\,P(s,a,s^{\prime})>0\text{ for some }s^{\prime}\}\). If action \(a\) is taken in \(s\), the probability of taking a transition from \(s\) to \(s^{\prime}\) is \(P(s,a,s^{\prime})\), also denoted \(P(s^{\prime}|s,a)\). Paths are defined in similar fashion to DTMCs but are now alternating sequences of states and actions \(s_{0}a_{0}s_{1}a_{1}s_{2}\dots\) where \(a_{i}\in A(s_{i})\) and \(P(s_{i},a_{i},s_{i+1})>0\) for all \(i\geq 0\), and the sets of all infinite and finite paths are \(\mathit{IPaths_{\mathcal{M}}}\) and \(\mathit{FPaths_{\mathcal{M}}}\), respectively.
The choice of actions in each state is resolved by a _policy_ (or _strategy_), based on the execution of the MDP so far. Formally, a policy takes the form \(\pi:\mathit{FPaths}\to A\). We say that \(\pi\) is _memoryless_ if the mapping \(\pi(\omega)\) depends only on \(\mathit{last}(\omega)\), the final state of \(\omega\), and _finite-memory_ if it depends only on \(\mathit{last}(\omega)\) and the current memory value, selected from a finite set and updated at each step of execution. The set of all policies for MDP \(\mathcal{M}\) is denoted \(\Sigma_{\mathcal{M}}\).
Under a given policy \(\pi\), the resulting set of (infinite) paths has, as for DTMCs, an associated probability measure, which we denote \(\Pr_{\mathcal{M}}^{\pi}\). Furthermore, for both memoryless and finite-memory policies, we can build a (finite) _induced DTMC_ which is equivalent to \(\mathcal{M}\) acting under \(\pi\).
**Rewards.** We associate both DTMCs and MDPs with _reward structures_, which are annotations of the states or transitions of a model with numerical values. For consistency with the literature on probabilistic model checking and temporal logics, we use the terminology _rewards_, although in practice these can (and often do) represent _costs_, such as time elapsed or energy consumed. For the purposes of our algorithms, we assume that rewards are integer-valued, but we note that these could be defined as rationals, using appropriate scaling.
Definition 5 (Reward structure): A _reward structure_ is, for a DTMC \(\mathcal{D}\), a function \(r:S\rightarrow\mathbb{N}\) and, for an MDP \(\mathcal{M}\), a function \(r:S\times A\rightarrow\mathbb{N}\).
For an infinite path \(\omega\), we also write \(r(\omega,k)\) for the sum of the reward values over the first \(k\) steps of the path, i.e., \(r(s_{0}s_{1}s_{2}\dots,k)=\sum_{i=0}^{k-1}r(s_{i})\) for a DTMC and \(r(s_{0}a_{0}s_{1}a_{1}s_{2}\dots,k)=\sum_{i=0}^{k-1}r(s_{i},a_{i})\) for an MDP. To reason about rewards, we define random variables over the executions (infinite paths) of a model, typically defined as the total reward accumulated along a path, up until some event occurs. Formally, for a DTMC \(\mathcal{D}\), such a random variable is defined as a function of the form \(X:\mathit{IPaths}_{\mathcal{D}}\rightarrow\mathbb{R}\), with respect to the probability measure \(\Pr_{\mathcal{D}}\) over \(\mathit{IPaths}_{\mathcal{D}}\). For an MDP \(\mathcal{M}\) and policy \(\pi\in\Sigma_{\mathcal{M}}\), a random variable is defined as a function \(X:\mathit{IPaths}_{\mathcal{M}}\rightarrow\mathbb{R}\), with respect to the probability measure \(\Pr_{\mathcal{M}}^{\pi}\).
## 3 Distributional Probabilistic Model Checking
We formulate our approach as a _distributional_ extension of probabilistic model checking, which is a widely used framework for formally specifying and verifying quantitative properties of probabilistic models. In particular, we build upon existing temporal logics in common use. The core property we consider is the probability distribution over the amount of reward (or cost) that has been accumulated until some specified sequence of events occurs (which could constitute, for example, the successful completion of a task by a robot, or a combination of events that leads to a system failure). To represent events, we use the co-safe fragment of linear temporal logic (LTL).
Definition 6 (Co-safe LTL): Formulae in (syntactically) _co-safe LTL_, over a set of atomic propositions \(\mathit{AP}\), are defined by the grammar:
\[\psi\,:=\,\texttt{true}\,\mid\,a\,\mid\,\neg a\,\mid\,\psi\wedge\psi\,\mid\,\psi\vee\psi\,\mid\,\texttt{X}\,\psi\,\mid\,\psi\,\texttt{U}\,\psi\,\mid\,\texttt{F}\,\psi\]

where \(a\in\mathit{AP}\) is an atomic proposition and \(\texttt{X}\) (next), \(\texttt{U}\) (until) and \(\texttt{F}\) (eventually) are the usual temporal operators. Although satisfaction of an LTL formula is defined over infinite paths,
any satisfying path \((\omega\!\models\!\psi)\) has a _good prefix_, i.e., a finite path prefix \(\omega^{\prime}\) such that \(\omega^{\prime}\omega^{\prime\prime}\!\models\!\psi\) for any suffix \(\omega^{\prime\prime}\). For simplicity, Definition 6 defines a syntactic subset of LTL (where negation occurs only at the level of atomic propositions and the _globally_ operator is omitted) which is guaranteed to be co-safe.
The key ingredient of our temporal logic specifications is a _distributional query_, which gives a property (such as the expected value, or variance) of the distribution over the accumulated reward until an event's occurrence.
Definition 7 (Distributional query): For a DTMC, a _distributional query_ takes the form \(\mathtt{R}_{=?}^{\mathtt{f}(r)}\!\left[\,\psi\,\right]\), where \(r\) is a reward structure, \(\mathtt{f}\) is a random variable property (e.g., \(\mathbb{E}\), \(\mathrm{Var},\mathrm{s.d.},\mathrm{mode},\mathtt{VaR},\mathtt{CVaR}\)), and \(\psi\) is a formula in co-safe LTL.
Examples of properties that can be expressed in this framework include:
* the expected time until an algorithm terminates;
* the variance in energy consumption until a robot visits location \(\mathit{goal}_{1}\) followed by location \(\mathit{goal}_{2}\).
* the most likely number of packet collisions before a communication protocol successfully sends one of two messages.
For an MDP, the goal is to optimize some random variable property \(\mathtt{f}\) over the policies of the MDP. In this paper, we restrict our attention to two particular cases, expected value (\(\mathbb{E}\)) and conditional value-at-risk (\(\mathtt{CVaR}\)), and call these _distributional optimization queries_.
Definition 8 (Distributional optimization query): For an MDP, a _distributional optimization query_ takes the form \(\mathtt{R}_{\mathrm{opt}=?}^{\mathtt{f}(r)}\!\left[\,\psi\,\right]\), where \(r\) is a reward structure, \(\mathtt{f}\in\{\mathbb{E},\mathtt{CVaR}\}\), \(\mathrm{opt}\in\{\min,\max\}\) and \(\psi\) is a formula in co-safe LTL. For the resulting policy, we can perform _policy evaluation_ on the induced DTMC using one or more other distributional queries \(\mathtt{R}_{=?}^{\mathtt{f}(r^{\prime})}\!\left[\,\psi^{\prime}\,\right]\).
An example optimization query is \(\mathtt{R}_{\min=?}^{\mathtt{CVaR}_{0,9}(r_{\mathit{time}})}\!\left[\,\mathtt{ F}\ \mathit{goal}\,\right]\), which minimizes the conditional value-at-risk with respect to the time for a robot to reach its goal.
**Semantics.** A distributional query \(\mathtt{R}_{=?}^{\mathtt{f}(r)}\!\left[\,\psi\,\right]\) is evaluated on a DTMC \(\mathcal{D}\), and a distributional optimization query \(\mathtt{R}_{\mathrm{opt}=?}^{\mathtt{f}(r)}\!\left[\,\psi\,\right]\) on an MDP \(\mathcal{M}\), in each case via a random variable for the reward accumulated from its initial state:
\[\mathtt{R}_{=?}^{\mathtt{f}(r)}\!\left[\,\psi\,\right] =\mathtt{f}(X_{\mathcal{D}}^{r,\psi})\] \[\mathtt{R}_{\min=?}^{\mathtt{f}(r)}\!\left[\,\psi\,\right] =\inf_{\pi\in\Sigma_{\mathcal{M}}}\mathtt{f}(X_{\mathcal{M},\pi}^{r, \psi})\] \[\mathtt{R}_{\max=?}^{\mathtt{f}(r)}\!\left[\,\psi\,\right] =\sup_{\pi\in\Sigma_{\mathcal{M}}}\mathtt{f}(X_{\mathcal{M},\pi}^{r, \psi})\]
where the random variables \(X_{\mathcal{D}}^{r,\psi}:\mathit{IPaths}_{\mathcal{D}}\rightarrow\mathbb{R}\) and \(X_{\mathcal{M},\pi}^{r,\psi}:\mathit{IPaths}_{\mathcal{M}}\rightarrow\mathbb{R}\) are:
\[X_{\mathcal{D}}^{r,\psi}(\omega)\ =\ X_{\mathcal{M},\pi}^{r,\psi}(\omega)\ =\ \begin{cases}r(\omega,k_{\psi}-1)&\text{if }\omega\! \models\!\psi\\ \infty&\text{otherwise}\end{cases}\]
and \(k_{\psi}=\min\{k\mid(\omega,k)\models\psi\}\) is the length of the shortest good prefix for \(\psi\).
**Example 3**.: We illustrate our framework with an example of an autonomous robot navigating within a risky environment ("mud & nails"), modelled as an MDP. Figure 2 illustrates the scenario: the robot starts in the leftmost location (blue circle), and may pass through two types of terrain, mud (orange zigzag) and ground littered with nails (purple hatching). Obstacles are drawn as grey blocks. The cost of navigation is, by default, 1 per step, with obstacles incurring an additional cost of 35. In the "nails" terrain, there is a probability of 0.2 of hitting a nail, which then incurs a cost of 5; navigating the "mud" terrain is safer but slower: it incurs a cost of 3 per step. Consider the total cost to visit \(g_{1}\) and then \(g_{2}\). Given a reward structure _cost_ encoding individual costs as above, we can aim to minimize either the expected cost or the conditional value-at-risk, using queries \(\mathtt{R}_{\min=?}^{\mathbb{E}(cost)}[\mathtt{F}\,\left(g_{1}\wedge\mathtt{F}\,g_{2}\right)]\) or \(\mathtt{R}_{\min=?}^{\mathtt{CVaR}_{0.7}(cost)}[\mathtt{F}\,\left(g_{1}\wedge\mathtt{F}\,g_{2}\right)]\). Figure 2 also shows the resulting policies, plotted on the map in purple and orange, respectively, and the corresponding probability distributions over cost. We can analyze each policy with further distributional queries, e.g., \(\mathtt{R}_{=?}^{\mathtt{f}(cost)}[\mathtt{F}\,g_{1}]\) for \(\mathtt{f}\in\{\mathbb{E},\operatorname{Var}\}\) to evaluate the mean and variance of the cost to reach \(g_{1}\). \(\blacksquare\)
## 4 Distributional Model Checking Algorithms
We now describe algorithms for distributional probabilistic model checking, i.e., to evaluate distributional queries of the form \(\mathtt{R}_{=?}^{\mathtt{f}(r)}[\,\psi\,]\) for a DTMC or \(\mathtt{R}_{\operatorname{opt}=?}^{\mathsf{f}(r)}[\,\psi\,]\) for an MDP. Following the semantics given in Section 3, for a DTMC \(\mathcal{D}\), this necessitates generating the probability distribution of the random variable \(X_{\mathcal{D}}^{r,\psi}\), corresponding to reward structure \(r\) and LTL formula \(\psi\), on \(\mathcal{D}\). The value \(\mathsf{f}(X_{\mathcal{D}}^{r,\psi})\) can then be evaluated on the distribution for any \(\mathsf{f}\). For an MDP \(\mathcal{M}\), we aim to find a policy \(\pi^{*}\) which optimizes the value \(\mathsf{f}(X_{\mathcal{M},\pi}^{r,\psi})\) over policies \(\pi\).
For both classes of model, in standard fashion, we reduce the problem to the simpler case in which \(\psi\) is a _reachability_ formula by constructing a product with an automaton. More precisely, for co-safe LTL formula \(\psi\), we can construct a deterministic finite automaton (DFA) \(\mathcal{A}_{\psi}\) representing the "good" prefixes of \(\psi\).
Figure 2: The "mud & nails" example. Left: Map of the terrain to navigate, with two policies that minimize expected cost and conditional value-at-risk to visit \(g_{1}\) and then \(g_{2}\). Right: The corresponding distributions over cost.

We then construct a DTMC-DFA product \(\mathcal{D}\otimes\mathcal{A}_{\psi}\) or MDP-DFA product \(\mathcal{M}\otimes\mathcal{A}_{\psi}\) with state space \(S\times Q\), where \(S\) is the state space of the original model and \(Q\) the set of states of the DFA, and where the atomic proposition _acc_ labels the states \(\langle s,q\rangle\) for which \(q\) is accepting in \(\mathcal{A}_{\psi}\). There is a one-to-one correspondence between paths in the original model and the product model (and, for MDPs, a one-to-one correspondence between policies) [1]. The product construction also preserves both probability spaces over paths and rewards. So we also have a direct correspondence between the following pairs of random variables
\[X_{\mathcal{D}}^{r,\psi}\ \text{and}\ X_{\mathcal{D}\otimes\mathcal{A}_{\psi}}^{r,\mathtt{F}\,\mathit{acc}}\qquad\quad X_{\mathcal{M},\pi}^{r,\psi}\ \text{and}\ X_{\mathcal{M}\otimes\mathcal{A}_{\psi},\pi^{\prime}}^{r,\mathtt{F}\,\mathit{acc}}\]
for corresponding policies \(\pi\) and \(\pi^{\prime}\), and we reduce the problem to analyzing the product. We can also convert optimal policies from \(\mathcal{M}\otimes\mathcal{A}_{\psi}\) to \(\mathcal{M}\), e.g., a memoryless policy \(\pi^{\prime}\) of \(\mathcal{M}\otimes\mathcal{A}_{\psi}\) becomes a finite-memory policy \(\pi\) of \(\mathcal{M}\).
Hence, in what follows, we restrict our attention to computing the probability distributions for random variables defined as the reward to reach a target set of states \(T\subseteq S\), describing first the case for a DTMC and then the cases for risk-neutral (\(\mathsf{f}=\mathbb{E}\)) and risk-sensitive (\(\mathsf{f}=\mathsf{CVaR}\)) optimization for an MDP. For the latter two, for presentational simplicity, we focus on the case of minimization, which is also more commonly presented for risk-based measures, but it is straightforward to adapt the algorithms to the maximizing case.
### Forward Distribution Generation for DTMCs
We fix a DTMC \(\mathcal{D}\), reward structure \(r\) and set of target states \(T\). In this section, we describe how to compute the probability distribution for the reward \(r\) accumulated in \(\mathcal{D}\) until \(T\) is reached, i.e., for the random variable \(X_{\mathcal{D}}^{r,\mathtt{F}\,T}\). We denote this distribution by \(\mu\). Note that, since individual rewards are integer-valued, and are summed along paths, \(\mu\) is a discrete distribution.
We compute the distribution in a forward manner, up to a pre-specified accuracy \(\varepsilon\), using Algorithm 1. First, note that the reward accumulated along a path that never reaches the target \(T\) is defined to be \(\infty\) (see Section 3). Probabilistic model checking algorithms typically compute the _expected_ reward to reach a target \(T\) from a state \(s\), which is therefore infinite if \(s\) has a non-zero probability of not reaching \(T\). Here, we have to take slightly more care since there may be states from which there is a non-zero probability of both accumulating finite and infinite reward. This means that \(\mu\) is a distribution over \(\mathbb{N}_{\infty}\).
Algorithm 1 first identifies the states \(S_{\infty}\) of \(\mathcal{D}\) from which the probability of accumulating infinite reward is \(1\), which are those in bottom strongly connected components (BSCCs) of \(\mathcal{D}\) that do not intersect with \(T\). It then computes a discrete distribution \(\mu_{\times}\) over \(S\times\mathbb{N}_{\infty}\) where, at the \(k\)th iteration, \(\mu_{\times}(s,i)\) is the probability of being in state \(s\) and having accumulated reward \(i\) after \(k\) steps. A new version \(\mu_{\times}^{\prime}\) is computed at each step. Abusing notation, we write distributions as lists \(\{x_{1}\mapsto p_{1},\dots\}\) of the elements \(x_{j}\) of their support and their probabilities \(p_{j}\). We also keep track of the probabilities \(p_{\overline{T}}\) and \(p_{\infty}\) of, by the \(k\)th iteration, _not_ having reached the target set \(T\) and being in \(S_{\infty}\), respectively. The distribution \(\mu\) is finally computed by summing \(\mu_{\times}(s,i)\) values over all states and can be analyzed with additional distributional properties.
**Correctness and convergence.** Let \(\mu\) be the exact distribution for \(X_{\mathcal{D}}^{r,\mathtt{F}\,T}\) and \(\hat{\mu}\) be the one returned by Algorithm 1, using accuracy \(\varepsilon>0\). We have:
\[\mu(i)\leq\hat{\mu}(i)\leq\mu(i)+\varepsilon\quad\text{for all }i\in\mathbb{N}_{\infty} \tag{1}\]
Note that the support of \(\mu\) may be (countably) infinite, but \(\hat{\mu}\) is finite by construction. In this case, the total truncation error is also bounded by \(\varepsilon\): if \(\hat{k}\in\mathbb{N}\) is the maximum finite value in the support of \(\hat{\mu}\), then \(\sum_{\hat{k}<i<\infty}\mu(i)\leq\varepsilon\).
To see the correctness of Equation (1), observe that \(\hat{\mu}(i)\) is ultimately computed from the sum of the values \(\sum_{s}\mu_{\times}(s,i)\) in Algorithm 1, the total value of which is non-decreasing since rewards are non-negative. In any iteration, at most \(p_{\overline{T}}-p_{\infty}\) will be added to any value \(\mu_{\times}(s,i)\) and, on termination, \(p_{\overline{T}}-p_{\infty}\leq\varepsilon\). Convergence is guaranteed for any \(\varepsilon>0\): since we separate the states \(S_{\infty}\) in non-target BSCCs, within \(k\) iterations, the combined probability of having reached \(T\) (i.e., \(1-p_{\overline{T}}\)) or reaching \(S_{\infty}\) (i.e., \(p_{\infty}\)) tends to \(1\) as \(k\to\infty\).
```
Input : DTMC \(\mathcal{D}=(S,s_{0},P,AP,L)\), reward structure \(r\), target set \(T\subseteq S\), accuracy \(\varepsilon\in\mathbb{R}_{>0}\)
Output : the discrete probability distribution \(\mu\) for \(X_{\mathcal{D}}^{r,\mathtt{F}\,T}\)

1:  \(S_{\infty}\leftarrow\{s\in S\mid s\text{ is in a BSCC }C\subseteq S\text{ with }C\cap T=\emptyset\}\)
2:  \(\mu_{\times}\leftarrow\delta_{(s_{0},0)}\); \(p_{\overline{T}}\leftarrow 1\); \(p_{\infty}\leftarrow 0\)
3:  while \(p_{\overline{T}}-p_{\infty}>\varepsilon\) do
4:      \(\mu^{\prime}_{\times}\leftarrow\{\}\); \(p_{\overline{T}}\leftarrow 0\)
5:      for \(((s,i)\mapsto p_{s,i})\in\mu_{\times}\) do
6:          if \(s\in T\) then
7:              \(\mu^{\prime}_{\times}(s,i)\leftarrow\mu^{\prime}_{\times}(s,i)+p_{s,i}\)
8:          else
9:              for \((s^{\prime}\mapsto p_{s^{\prime}})\in P(s,\cdot)\) do
10:                 if \(s^{\prime}\notin T\) then \(p_{\overline{T}}\leftarrow p_{\overline{T}}+p_{s,i}\cdot p_{s^{\prime}}\)
11:                 if \(s^{\prime}\notin S_{\infty}\) then
12:                     \(\mu^{\prime}_{\times}(s^{\prime},i+r(s))\leftarrow\mu^{\prime}_{\times}(s^{\prime},i+r(s))+p_{s,i}\cdot p_{s^{\prime}}\)
13:                 else
14:                     \(p_{\infty}\leftarrow p_{\infty}+p_{s,i}\cdot p_{s^{\prime}}\)
15:     \(\mu_{\times}\leftarrow\mu^{\prime}_{\times}\)
16: return \(\{i\mapsto p_{i}\mid p_{i}=\sum_{s}\mu_{\times}(s,i)\}\cup\{\infty\mapsto p_{\infty}\}\)
```
**Algorithm 1** Forward distribution generation for DTMCs
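As a companion to the pseudocode, here is a minimal Python sketch of Algorithm 1, assuming a dictionary-based DTMC encoding of our own devising, and assuming that the non-target BSCC states (`S_inf`, line 1 of the algorithm) have been precomputed by a standard graph analysis:

```
from collections import defaultdict

def forward_distribution(P, r, T, S_inf, s0, eps=1e-6):
    """Algorithm 1 sketch: distribution of reward accumulated until
    reaching T. P: state -> list of (successor, probability);
    r: state -> int reward; T: set of target states; S_inf: states of
    non-target BSCCs (precomputed); eps: accuracy epsilon."""
    mu = {(s0, 0): 1.0}            # mu_x: probability over (state, reward)
    p_not_T, p_inf = 1.0, 0.0
    while p_not_T - p_inf > eps:
        new_mu = defaultdict(float)
        p_not_T = 0.0
        for (s, i), p_si in mu.items():
            if s in T:
                new_mu[(s, i)] += p_si          # absorbed in the target
                continue
            for s_next, p_tr in P[s]:
                if s_next not in T:
                    p_not_T += p_si * p_tr
                if s_next in S_inf:
                    p_inf += p_si * p_tr        # this mass accumulates infinite reward
                else:
                    new_mu[(s_next, i + r[s])] += p_si * p_tr
        mu = new_mu
    dist = defaultdict(float)
    for (s, i), p_si in mu.items():
        dist[i] += p_si
    dist[float("inf")] += p_inf
    return dict(dist)
```

Note that `p_inf` accumulates across iterations while `p_not_T` is recomputed in each one, exactly as in the pseudocode, so the loop guard bounds the probability mass still in flight.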
### Risk-Neutral Distributional Value Iteration for MDPs
In this section, we present a _risk-neutral_ DVI method, for computing value distributions of states of an MDP \(\mathcal{M}\) under an optimal policy that minimizes the _expected_ cumulative reward to reach a target set \(T\subseteq S\), i.e., minimizes \(\mathbb{E}(X_{\mathcal{M},\pi}^{r,\mathtt{F}\,T})\) for random variables \(X_{\mathcal{M},\pi}^{r,\mathtt{F}\,T}\) of MDP policies \(\pi\).
In contrast to the case for DTMCs, since we now consider expected values, we assume that there exists an optimal policy with finite expected reward, i.e., which reaches the target set \(T\) with probability \(1\). This can be checked efficiently with an analysis of the underlying graph of the MDP [5].
The risk-neutral DVI method is shown in Algorithm 2. For each MDP state \(s\in S\), it initializes its value distribution \(\mu_{s}\) to Dirac distribution \(\delta_{0}\). The algorithm loops through lines 3-11 to update value distributions of any non-target state \(s\in S\setminus T\) as follows. For each available action \(a\in A(s)\) in state \(s\), a value distribution is obtained via the distributional Bellman equation shown in line 6 then projected to \(\eta(s,a)\) to match the chosen representation (see Def. 1, 2). The optimal action \(\pi^{*}(s)\) in state \(s\) is the one that achieves the minimal expected value of \(\eta(s,a)\). The updated value distribution \(\mu^{\prime}_{s}\) of state \(s\) is given by \(\eta(s,\pi^{*}(s))\). The algorithm terminates when the supremum of distributional distance \(d(\mu_{s},\mu^{\prime}_{s})\) across all states (the choice of metrics is discussed later) is less than the convergence threshold \(\epsilon\). Unlike the accuracy \(\varepsilon\) for Algorithm 1, this threshold \(\epsilon\) does _not_ provide a guarantee on the precision of the result after convergence (similar issues occur in classical value iteration for MDPs [16]).
```
Input : MDP \(\mathcal{M}=(S,s_{0},A,P,AP,L)\), reward structure \(r\), target set \(T\subseteq S\), convergence threshold \(\epsilon\in\mathbb{R}_{>0}\)
Output : optimal policy \(\pi^{*}\) for query \(\mathtt{R}_{\min=?}^{\mathbb{E}(r)}[\,\mathtt{F}\,T\,]\), distribution \(\mu_{s_{0}}\) under \(\pi^{*}\)

1:  foreach \(s\in S\) do
2:      \(\mu_{s}\leftarrow\delta_{0}\)
3:  while \(e>\epsilon\) do
4:      foreach \(s\in S\setminus T\) do
5:          foreach \(a\in A(s)\) do
6:              \(\eta(s,a):\stackrel{D}{=}\mathtt{proj}\big(r(s,a)+\sum_{s^{\prime}\in S}P(s,a,s^{\prime})\cdot\mu_{s^{\prime}}\big)\)
7:          \(\pi^{*}(s)\leftarrow\arg\min_{a\in A(s)}\mathbb{E}\big(X\mid X\sim\eta(s,a)\big)\)
8:          \(\mu^{\prime}_{s}\leftarrow\eta(s,\pi^{*}(s))\)
9:      \(e\leftarrow\sup_{s\in S\setminus T}d(\mu_{s},\mu^{\prime}_{s})\)
10:     foreach \(s\in S\setminus T\) do
11:         \(\mu_{s}\leftarrow\mu^{\prime}_{s}\)
12: return \(\pi^{*}\) and \(\mu_{s_{0}}\)
```
**Algorithm 2** Risk-Neutral Distributional Value Iteration
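To make the update concrete for the categorical representation, a single backup at a state \(s\) (lines 6-8 of Algorithm 2) could be sketched as follows; this is our own illustration, not the PRISM implementation, and it reuses the hypothetical `categorical_projection` helper sketched in Section 2:

```
import numpy as np

def risk_neutral_backup(s, P, r, mu, theta):
    """One distributional Bellman backup at state s (Algorithm 2, lines 6-8).
    P[(s, a)]: list of (successor, probability); r[(s, a)]: reward;
    mu[s']: categorical probability vector over the atom grid theta.
    Returns the minimizing action and its projected distribution."""
    theta = np.asarray(theta, dtype=float)
    best = None
    actions = sorted(a for (s_, a) in P if s_ == s)
    for a in actions:
        xs, ps = [], []
        for s_next, p_tr in P[(s, a)]:
            xs.extend(r[(s, a)] + theta)   # successor atoms shifted by the reward
            ps.extend(p_tr * mu[s_next])   # mixture over successors
        dist = categorical_projection(np.array(xs), np.array(ps), theta)
        mean = float(dist @ theta)         # E[X] for X ~ eta(s, a)
        if best is None or mean < best[2]:
            best = (a, dist, mean)
    return best[0], best[1]                # pi*(s) and mu'_s
```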
**Policy convergence.** When there exists a unique risk-neutral optimal policy, Algorithm 2 is guaranteed to converge to the optimal policy (following [4, Theorem 7.9]). However, when there are multiple optimal policies, risk-neutral DVI may fail to converge (see [4, Section 7.5]).
### Risk-Sensitive Distributional Value Iteration for MDPs
In contrast to risk-neutral policies, which seek to minimize the expected reward, _risk-sensitive_ policies make decisions that account for risk. In this section, we present a risk-sensitive DVI method for minimizing the _conditional value-at-risk_ of the reward accumulated to reach a target set in an MDP \(\mathcal{M}\), i.e., minimizing \(\mathsf{CVaR}_{\alpha}(X_{\mathcal{M},\pi}^{r,\mathtt{F}\,T})\) for random variables \(X_{\mathcal{M},\pi}^{r,\mathtt{F}\,T}\) of MDP policies \(\pi\). As in the previous section, we assume the existence of a policy with finite expected reward. Our method follows a key insight from [2, 31] that conditional value-at-risk can be represented as the solution of a convex optimization problem.
Lemma 1 (Dual Representation of \(\mathsf{CVaR}\)[2, 31]): _Let \([x]^{+}\) denote the function that is 0 if \(x<0\), and \(x\) otherwise. Given a random variable \(X\) over the probability space \((\Omega,\mathcal{F},\Pr)\), it holds that:_
\[\mathsf{CVaR}_{\alpha}(X)=\min_{b\in\mathbb{R}}\left\{b+\frac{1}{1-\alpha} \mathbb{E}\big{(}\left[X-b\right]^{+}\big{)}\right\}, \tag{2}\]
_and the minimum-point is given by \(b^{*}=\mathsf{VaR}_{\alpha}(X)\)._
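Lemma 1 suggests a direct numerical scheme, which is exactly what the algorithm below discretizes: sweep candidate values of \(b\) over a grid and evaluate the objective. A hypothetical sketch:

```
import numpy as np

def cvar_dual(values, probs, alpha, B):
    """Evaluate b + E[(X - b)^+] / (1 - alpha) over a grid B (Lemma 1).
    Returns the minimizing b (approximately VaR_alpha) and the minimum
    value (approximately CVaR_alpha); both are exact whenever VaR_alpha
    lies on the grid."""
    v = np.asarray(values, dtype=float)
    p = np.asarray(probs, dtype=float)
    obj = [b + float(p @ np.clip(v - b, 0.0, None)) / (1.0 - alpha) for b in B]
    k = int(np.argmin(obj))
    return B[k], obj[k]

# e.g. B = np.linspace(V_min, V_max, n) for an evenly-spaced grid of n atoms
```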
Intuitively, the _slack variable_\(b\in[\,V_{\min},\,V_{\max}]\) encodes the risk budget and possible \(\mathsf{VaR}_{\alpha}(X)\) values. Since \(\mathsf{VaR}_{\alpha}(X)\in[\,V_{\min},\,V_{\max}]\), the slack variable is similarly bounded by the minimum and maximum possible accumulated reward within the MDP, respectively. We assume that the reward values are bounded and the probability of reaching the target states is 1, therefore \(\,V_{\min}\) and \(\,V_{\max}\) are also bounded. To enable efficient computation, we consider a discrete number of values for \(b\). More precisely, we define a set \(B\) with \(n\) evenly-spaced atoms \(b_{1}<\cdots<b_{n}\) such that \(b_{1}=V_{\min}\), \(b_{n}=V_{\max}\), and the stride between two successive atoms is \(\varsigma_{n}=\frac{V_{\max}-V_{\min}}{n-1}\). Based on Lemma 1, determining the optimal slack variable value \(b^{*}\) requires computation of \(\mathsf{VaR}_{\alpha}\) for the distribution, which cannot be obtained _a priori_. Thus, we consider all possible risk budgets in the risk-sensitive DVI.
Algorithm 3 illustrates the proposed method. We construct a product MDP model \(\mathcal{M}^{\mathsf{b}}=(S\times B,\{s_{0}\}\times B,A,P^{\mathsf{b}},AP,L^{ \mathsf{b}})\). Unlike the product MDP defined in Section 3, this MDP has multiple initial states, one state \(\langle s_{0},\bar{b}\rangle\) for each risk budget \(\bar{b}\in B\), where \(s_{0}\) is the initial state of the MDP \(\mathcal{M}\). For each transition \(s\xrightarrow{a}s^{\prime}\) in \(\mathcal{M}\) with \(P(s,a,s^{\prime})>0\), there is a corresponding transition \(\langle s,b\rangle\xrightarrow{a}\langle s^{\prime},b^{\prime}\rangle\) in \(\mathcal{M}^{\mathsf{b}}\), where \(b^{\prime}\) is obtained by rounding down the value of \(b-r(s,a)\) to the nearest smaller atom in \(B\) and \(P^{\mathsf{b}}(\langle s,b\rangle,a,\langle s^{\prime},b^{\prime}\rangle)=P(s, a,s^{\prime})\). The labelling function is given by \(L^{\mathsf{b}}(\langle s,b\rangle)=L(s)\).
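For illustration, the rounding of the residual budget in this construction can be written as a one-line helper; clamping at \(b_{1}\) once the budget is exhausted is our assumption, since the behaviour below the grid is not spelled out above:

```
import math

def next_budget(b, reward, b1, stride):
    """Round b - r(s,a) down to the nearest atom of the evenly-spaced grid
    b_1 < ... < b_n (spacing `stride`), clamping at b_1 (assumed)."""
    k = math.floor((b - reward - b1) / stride)
    return b1 + stride * max(0, k)
```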
Next, in lines 2-12, Algorithm 3 initializes and updates the value distribution of each augmented state \(\langle s,b\rangle\in S\times B\) in the product MDP \(\mathcal{M}^{\mathsf{b}}\) in a similar fashion to the risk-neutral DVI described in Section 4.2. However, when choosing the
optimal action (line 8), Algorithm 3 adopts a different criterion that minimizes \(\mathbb{E}([X-b]^{+})\) based on the dual representation of CVaR (see Equation 2).
```
Input : MDP \(\mathcal{M}=(S,s_{0},A,P,AP,L)\), reward structure \(r\), target set \(T\subseteq S\), risk level \(\alpha\), slack variable set \(B\), convergence threshold \(\epsilon\in\mathbb{R}_{>0}\)
Output : optimal policy \(\pi^{*}\) for query \(\mathtt{R}_{\min=?}^{\mathtt{CVaR}_{\alpha}(r)}[\,\mathtt{F}\,T\,]\), distribution \(\mu_{s_{0}}\) under \(\pi^{*}\)

1:  Construct product MDP \(\mathcal{M}^{\mathtt{b}}=(S\times B,\{s_{0}\}\times B,A,P^{\mathtt{b}},AP,L^{\mathtt{b}})\)
2:  foreach \(\langle s,b\rangle\in S\times B\) do
3:      \(\mu_{\langle s,b\rangle}\leftarrow\delta_{0}\)
4:  while \(e>\epsilon\) do
5:      foreach \(\langle s,b\rangle\in(S\setminus T)\times B\) do
6:          foreach \(a\in A(s)\) do
7:              \(\eta(\langle s,b\rangle,a):\stackrel{D}{=}\mathtt{proj}\big(r(s,a)+\sum_{\langle s^{\prime},b^{\prime}\rangle\in S\times B}P^{\mathtt{b}}(\langle s,b\rangle,a,\langle s^{\prime},b^{\prime}\rangle)\cdot\mu_{\langle s^{\prime},b^{\prime}\rangle}\big)\)
8:          \(\pi^{\mathtt{b}}(\langle s,b\rangle)\leftarrow\arg\min_{a\in A(s)}\mathbb{E}\big([X-b]^{+}\mid X\sim\eta(\langle s,b\rangle,a)\big)\)
9:          \(\mu^{\prime}_{\langle s,b\rangle}\leftarrow\eta(\langle s,b\rangle,\pi^{\mathtt{b}}(\langle s,b\rangle))\)
10:     \(e\leftarrow\sup_{\langle s,b\rangle\in(S\setminus T)\times B}d(\mu_{\langle s,b\rangle},\mu^{\prime}_{\langle s,b\rangle})\)
11:     foreach \(\langle s,b\rangle\in(S\setminus T)\times B\) do
12:         \(\mu_{\langle s,b\rangle}\leftarrow\mu^{\prime}_{\langle s,b\rangle}\)
13: foreach \(\bar{b}\in B\) do
14:     \(\upsilon(\bar{b})\leftarrow\mathtt{CVaR}_{\alpha}(X\mid X\sim\mu_{\langle s_{0},\bar{b}\rangle})\)
15: \(\bar{b}^{*}\leftarrow\arg\min_{\bar{b}\in B}\upsilon(\bar{b})\)
16: \(\pi^{*}\leftarrow\) policy \(\pi^{\mathtt{b}}\) of the product MDP \(\mathcal{M}^{\mathtt{b}}\) with initial state fixed to \(\langle s_{0},\bar{b}^{*}\rangle\)
17: return \(\pi^{*}\) and \(\mu_{\langle s_{0},\bar{b}^{*}\rangle}\)
```
**Algorithm 3** Risk-Sensitive Distributional Value Iteration
Once DVI on the product MDP \(\mathcal{M}^{\texttt{b}}\) converges, the algorithm computes the CVaR\({}_{\alpha}\) value of each initial state's distribution \(\mu_{\langle s_{0},\bar{b}\rangle}\). Different choices of the risk budget \(\bar{b}\) lead to various initial value distributions. The algorithm selects the optimal risk budget, denoted by \(\bar{b}^{*}\), that yields the minimum CVaR of all possible initial value distributions. Finally, the algorithm sets the optimal policy \(\pi^{*}\) as the policy \(\pi^{\texttt{b}}\) resulting from the risk-sensitive DVI on the product MDP \(\mathcal{M}^{\texttt{b}}\) with initial state fixed to \(\langle s_{0},\bar{b}^{*}\rangle\), and returns the distribution \(\mu_{\langle s_{0},\bar{b}^{*}\rangle}\).
**Example 5**.: For our running example, the query \(\mathtt{R}_{\min=?}^{\mathtt{CVaR}_{0.7}(cost)}[\,\mathtt{F}\,\left(g_{1}\wedge\mathtt{F}\,g_{2}\right)\,]\) runs the risk-sensitive DVI method (with \(\alpha=0.7\)) on the _cost_ reward structure and with a target set corresponding to visiting \(g_{1}\) then \(g_{2}\). The resulting optimal policy and distribution are shown in orange in Figure 2.
**Correctness and convergence.** Following [2, Theorem 3.6], when the slack variable \(b\) is continuous (i.e., \(B=\mathbb{R}\)), there exists a solution \(b^{*}\) of Equation 2 and the optimal policy \(\pi^{\texttt{b}}\) of product MDP \(\mathcal{M}^{\texttt{b}}\) with initial state fixed to \(\langle s_{0},b^{*}\rangle\) is the CVaR optimal policy of MDP \(\mathcal{M}\). Next, we show that Algorithm 3 with a discretized slack variable (i.e., \(|B|\) is finite) converges to the CVaR optimal policy as \(|B|\) increases, stated as follows and a proof is provided in the appendix.
Lemma 2: _Let \(\pi_{1}\) denote the optimal policy for minimizing \(\mathsf{CVaR}_{\alpha}(X^{r,\mathsf{F}T}_{\mathcal{M},\pi})\), which is obtained with a continuous slack variable. Let \(\pi_{2}\) denote the optimal policy returned by Algorithm 3 where \(B\) is a finite set of \(n\) evenly-spaced atoms with stride \(\varsigma_{n}\). It holds that \(\mathsf{CVaR}_{\alpha}(X^{r,\mathsf{F}T}_{\mathcal{M},\pi_{2}})-\mathsf{CVaR}_ {\alpha}(X^{r,\mathsf{F}T}_{\mathcal{M},\pi_{1}})=\mathcal{O}(\varsigma_{n})\). As \(\varsigma_{n}\) tends to 0 (i.e., \(|B|\) increases), \(\pi_{2}\) converges to the \(\mathsf{CVaR}\) optimal policy. \({}_{\blacksquare}\)_
## 5 Experiments
We built and evaluated a prototype implementation\({}^{3}\) of our distributional probabilistic model checking approach based on PRISM [23], extending its Java-based explicit-state engine. The main benchmarks used are described in Section 5.1 and the experimental results are discussed in the following sections. We focus initially on MDP models, solving them using the DVI methods of Sections 4.2 and 4.3, and then evaluating the resulting policies using the DTMC method of Section 4.1. We then further evaluate the DTMC method on a set of DTMC models from the PRISM Benchmark Suite [24], in particular comparing this (forward, exact) approach to an alternative solution obtained via DVI. Finally, we compare our risk-sensitive method to the \(\mathsf{CVaR}\) techniques from [27], using the benchmarks from that paper. All experiments were run on a machine with an AMD Ryzen 7 CPU and 14 GB of RAM allocated to the Java virtual machine.
Footnote 3: Code, instructions and models are available from this Github repo.
### Case Studies
**Betting Game.** This case study is taken from [30]. The MDP models an agent with an amount of money, initially set to 5, which can repeatedly place a bet of amount \(\lambda\in\{0,1,2,3,4,5\}\). The probability of winning a bet is 0.7, the probability of losing a bet is 0.25, and the probability of hitting a jackpot (i.e., winning \(10\lambda\)) is 0.05. The game ends after 10 stages. The reward function is given by the maximal allowance (e.g., 100) minus the final amount of money that the agent owns.
**Deep Sea Treasure.** This case study is also taken from [30]. The model represents a submarine exploring an area to collect one of several treasures. At each time step, the agent chooses to move to a neighbouring location; it succeeds with probability 0.6, otherwise moves to another adjacent location with probability 0.2. The agent stops when it finds a treasure or has explored for 15 steps. The reward function is defined based on the travel cost (e.g., 5 per step) and opportunity cost (i.e., maximal treasure value minus collected treasure value).
**Obstacle.** This case study is inspired by the gridworld navigation example in [10]. We consider an MDP model of an \(N\times N\) gridworld with a set of scattered obstacles. The agent's goal is to navigate to a destination, while avoiding obstacles which would cause a delay. At each time step, the agent moves in a selected direction with probability 0.9 and an unintended direction with probability 0.1. The reward function is given by the time spent to reach the destination.
**Human-UAV Interaction.** This case study is adapted from the MDP model of the interaction between a human and an unmanned aerial vehicle (UAV) from [14]. A UAV performs road network surveillance missions with the assistance of a human operator. The road network is shown in Figure 2(a), and the UAV is given a mission specified with LTL formula \(\psi=(\mathtt{F}\ w_{2})\wedge(\mathtt{F}\ w_{5})\wedge(\mathtt{F}\ w_{6})\), which translates into covering waypoints \(w_{2}\), \(w_{5}\) and \(w_{6}\) in any order. The reward function is given by the mission completion time.
**Energy.** This case study considers a robot navigating an \(N\times N\) gridworld with energy constraints (Figure 2(b) shows an example for \(N=5\)). At each time step, the robot moves to an adjacent grid location with probability 0.7 or ends up in an unintended adjacent location with probability 0.3. The robot starts with a fixed amount of energy and consumes 1 unit per step. The robot can only recharge its battery in the charging station. When the energy is depleted, the robot is transported with a delay to the charging station. The robot is asked to complete a mission specified with LTL formula \(\psi=(\mathtt{F}\ w_{1})\wedge(\mathtt{F}\ w_{2})\wedge(\mathtt{F}\ w_{3})\). The reward function represents the mission completion time.
### Results Analysis
**Method comparison.** Table 1 summarizes our experimental results across the benchmarks described above. For each MDP, we run both the risk-neutral and risk-sensitive variants of distributional value iteration (DVI), optimizing expected value and \(\mathtt{CVaR}\), as described in Section 4.2 and Section 4.3, respectively. For the risk-neutral case we also run standard value iteration (VI), as implemented in PRISM. For all three methods, we then evaluate the resulting policy, computing the full reward distribution using the forward distribution generation method described in Section 4.1, allowing us to compute more precise results for the expected value and \(\mathtt{CVaR}\) on those policies.
The table shows the time to run each algorithm and the values computed during optimization (the value for the objective being optimized is shown in bold). Additionally, the table shows the time to run the forward distribution method on the induced DTMC, and the (percentage) relative error when comparing the VI/DVI results with the forward distribution outcomes.
Figure 3: Maps used in the UAV and Energy case studies.
For each case study, we also report the number of states in the (product) MDP that is solved. The UAV and Energy benchmarks use non-trivial co-safe LTL formulae for the mission specification (the others are reachability specifications) and so the MDP is an MDP-DFA product. For risk-sensitive DVI, the state space is also augmented with a slack variable, resulting in larger product MDPs. We set the slack variable size to \(|B|=51\) for the Energy model, and \(|B|=101\) for the rest. We use the categorical representation with \(m=101\) for DVI, with \(\epsilon=0.01\) for the convergence metric. For policy evaluation, we use precision \(\varepsilon=10^{-3}\) for the Obstacle and Energy case studies and \(\varepsilon=10^{-5}\) for the others.
_Our DVI methods successfully optimise their respective objectives on a range of large MDPs._ Generally, the policy resulting from the risk-neutral method has a lower expected value, while the policy obtained with the risk-sensitive method has a lower \(\mathsf{CVaR}_{\alpha}\), and the risk-neutral method yields the same optimal policy as the baseline VI method. As expected, DVI methods are more expensive than VI, since they work with distributions, not scalar values, but the DVI methods are successfully applied to MDPs with several million states. Additionally, the baseline VI method can only provide information about expected reward values, while the distribution returned by our methods can be used to compute additional distributional properties (variance, \(\mathsf{VaR}\), etc.).
The DTMC forward computation also works on all models. It is often very fast (under a second in 3 cases), but grows expensive on models where the support of the distribution is large. From its results, we see that both DVI methods produce approximate distributions that are close to the true distribution.
Note that in the last three case studies, the \(\mathit{V}_{\max}\) value is higher, resulting in a larger stride and thus coarser representations for both the value distributions and the slack variable (for risk-sensitive DVI). This results in more approximation errors when computing metrics from the value distributions generated using DVI. This can be seen in the case of the UAV model where the risk-neutral method underestimates \(\mathsf{CVaR}_{\alpha}\) (168.8 compared to 169.6 from the true distribution generated by the DTMC method for the same policy). The following experiments aim to evaluate how the parameters of the distributional representation affect the resulting approximate distributions generated by DVI.
\begin{table}
\begin{tabular}{l l c|c c c|c c c} \hline \hline \multicolumn{1}{c}{ Model} & Method & MDP & Time (s) & \(\mathbb{E}\) & \(\mathsf{CVaR}_{\alpha}\) & \(\mathrm{Time}_{\mathtt{dtmc}}\) (s) & \(\Delta_{\mathbb{E}}^{\%}\) & \(\Delta_{\mathsf{CVaR}}^{\%}\) \\ \hline \multirow{2}{*}{Betting Game} & risk-neut. VI & \(8.9\cdot 10^{2}\) & \(<1\) & **61.9** & - & \(<1\) & - & - \\ & risk-neut. DVI & \(8.9\cdot 10^{2}\) & \(<1\) & **61.9** & 98.0 & \(<1\) & 0.0 & 0.0 \\ & risk-sens. DVI & \(9.0\cdot 10^{4}\) & 36 & 85.3 & **92.2** & \(<1\) & 0.0 & 0.0 \\ \hline \multirow{2}{*}{DS} & risk-neut. VI & \(1.2\cdot 10^{3}\) & \(<1\) & **359.3** & - & \(<1\) & - & - \\ & risk-neut. DVI & \(1.2\cdot 10^{3}\) & \(<1\) & **359.3** & 474.6 & \(<1\) & 0.0 & 0.33 \\ & risk-sens. DVI & \(1.2\cdot 10^{5}\) & 72 & 370.1 & **458.6** & \(<1\) & 0.0 & 0.32 \\ \hline \multirow{2}{*}{Obstacle (\(N=150\))} & risk-neut. VI & \(2.3\cdot 10^{4}\) & \(<1\) & **402.8** & - & 1,838 & - & - \\ & risk-neut. DVI & \(2.3\cdot 10^{4}\) & 97 & **402.7** & 479.2 & 1,838 & 0.01 & 1.95 \\ & risk-sens. DVI & \(2.3\cdot 10^{6}\) & 15,051 & 402.9 & **478.4** & 1,673 & 0.01 & 2.00 \\ \hline \multirow{2}{*}{UAV} & risk-neut. VI & \(1.7\cdot 10^{4}\) & \(<1\) & **124.1** & - & \(<1\) & - & - \\ & risk-neut. DVI & \(1.7\cdot 10^{4}\) & 4 & **123.8** & 168.8 & \(<1\) & 0.2 & 0.47 \\ & risk-sens. DVI & \(1.7\cdot 10^{6}\) & 2,366 & 134.9 & **169.1** & \(<1\) & 0.0 & 0.01 \\ \hline \multirow{2}{*}{Energy (\(N=15\))} & risk-neut. VI & \(2.6\cdot 10^{4}\) & 10 & **184.3** & - & 251 & - & - \\ & risk-neut. DVI & \(2.6\cdot 10^{4}\) & 108 & **184.0** & 382.0 & 234 & 0.17 & 0.47 \\ \cline{1-1} & risk-sens. DVI & \(1.3\cdot 10^{6}\) & 9,384 & 184.6 & **380.9** & 122 & 0.16 & 0.33 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Experimental results: Timing and accuracy of each method.
**Effects on distributional approximation.** Figure 4 plots the effects of varying the number of atoms in categorical and quantile representations, in terms of the \(\ell_{2}\) distance between the approximate distribution resulting from the risk-neutral DVI and the ground truth (obtained via applying the DTMC forward distribution generation method with \(\varepsilon=10^{-5}\) on the resulting optimal policy).
_For both representations, the \(\ell_{2}\) distance approaches 0 as the number of atoms increases, indicating that the approximate distributions become very close to the ground truth._ We observe similar effects with the risk-sensitive method and thus omit the resulting plot. Note that the Deep Sea Treasure model has a larger \(\mathit{V}_{\max}\) (about 800) and thus the resulting \(\ell_{2}\) is higher than other models when using a maximum of 101 atoms in the categorical representation.
A larger number of atoms (\(m\) value) leads to higher computational cost, thus we consider smaller models for the obstacle and energy case studies with \(N=10\) for plotting Figure 4. As an illustrative example of the trade-off between the accuracy and computational cost, for the Energy 10 model (1,796 states), the runtime of using categorical representations with 11 atoms (resp. 101 atoms) is 0.3s (resp. 0.63s), while the runtime when using quantile representations with 10 atoms (resp. 100 atoms) is 0.9s (resp. 5s). The quantile projection is more expensive than the categorical projection, resulting in higher runtime.
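For intuition about the cheaper of the two projection steps, the following is a minimal sketch of a categorical projection onto \(m\) evenly-spaced atoms over \([0,V_{\max}]\): each probability mass is split between its two neighbouring atoms, in proportion to its distance from them. Variable names and the clipping of out-of-range values to \([0,V_{\max}]\) are our assumptions, not details of the implementation.

```python
import numpy as np

def categorical_project(values, probs, m, v_max):
    """Project a distribution with finite support onto m evenly-spaced
    atoms on [0, v_max], splitting each mass between the two nearest
    atoms.  A sketch only, not the paper's implementation."""
    stride = v_max / (m - 1)
    out = np.zeros(m)
    for v, p in zip(values, probs):
        x = np.clip(v, 0.0, v_max) / stride  # position in units of atoms
        lo = int(np.floor(x))
        hi = min(lo + 1, m - 1)
        w = x - lo                           # weight toward the upper atom
        out[lo] += p * (1.0 - w)
        out[hi] += p * w
    return out

# Mass at 2.5 splits evenly between atoms 2 and 3 (stride 1, m = 11):
print(categorical_project([2.5], [1.0], m=11, v_max=10.0))
```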
**Effects of slack variable atoms.** Figure 5 illustrates the effects of varying the number of atoms used in slack variables (\(|B|\)) in risk-sensitive DVI. _The results show that increasing \(|B|\) generally leads to better policies with smaller \(\mathsf{CVaR}\) values._ This is in part because the algorithm would check a larger set of initial risk budgets \(\bar{b}\in B\). But there is a trade-off since the computational cost grows with an increasing \(|B|\). For example, in the Energy 10 model, the runtime using the categorical representation with 101 atoms for \(|B|=11\) (resp. \(|B|=101\)) is 7.8s (resp. 78.6s), whereas the runtime of using the quantile representation with 1,000 atoms for \(|B|=11\) (resp. \(|B|=101\)) is 477s (resp. 5,163s).
Figure 4: Experimental results of risk-neutral DVI.
### DTMC Performance Analysis
Next, we further evaluate the forward computation method for DTMCs from Section 4.1 on a range of common DTMC benchmarks from the PRISM benchmark suite [24]. In particular, we compare to an alternative computation using the risk-neutral DVI method of Section 4.2, treating DTMCs as a special case of MDPs. Table 2 shows the performance of the two methods. For each model, we indicate the parameters used for the benchmark and the DTMC size (states and transitions). For the DVI method, we use the categorical representation with a stride of 1 and a value of \(\mathit{V}_{\max}\) large enough to represent the distribution (also shown in the table).
_In two out of the three benchmarks, the DTMC forward computation is much faster._ This is because the DVI method calculates a reward distribution for every state of the model. However, for the third example, where \(\mathit{V}_{\max}\) is significantly higher, DVI is actually faster (the same can be seen for the Obstacle and Energy models in Table 1). The DTMC method computes the distribution to a pre-specified accuracy, but DVI may incur approximation errors, primarily due to convergence. The (relative) errors for the expected value and \(\mathsf{CVaR}\) metrics are also shown for every benchmark in Table 2.
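Since Section 4.1 is not reproduced here, the following Python fragment is only our reconstruction of the idea behind the forward computation: propagate probability mass over pairs (state, accumulated reward) until the mass not yet absorbed in the target set drops below the precision \(\varepsilon\). It also illustrates why the cost grows with the support of the distribution: the number of (state, reward) pairs tracked can grow with \(\mathit{V}_{\max}\).

```python
def forward_distribution(P, rew, init, target, eps=1e-5):
    """Forward reward-distribution generation for a DTMC (a sketch of the
    idea only, not the paper's algorithm).  P[s] maps successor states to
    probabilities, rew[s] is the reward of state s, `target` the absorbing
    goal set.  Returns {reward: probability}, exact up to a remaining
    unabsorbed mass < eps."""
    frontier = {(init, 0): 1.0}    # mass carried on (state, accumulated reward)
    dist = {}
    while sum(frontier.values()) >= eps:
        nxt = {}
        for (s, r), p in frontier.items():
            if s in target:        # absorb: record the accumulated reward
                dist[r] = dist.get(r, 0.0) + p
                continue
            for s2, q in P[s].items():
                key = (s2, r + rew[s])
                nxt[key] = nxt.get(key, 0.0) + p * q
        frontier = nxt
    return dist

# Reach state 1 from state 0 with probability 0.5 per step, paying 1 per step:
P = {0: {0: 0.5, 1: 0.5}, 1: {1: 1.0}}
print(forward_distribution(P, rew={0: 1, 1: 0}, init=0, target={1}, eps=1e-3))
```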
\begin{table}
\begin{tabular}{l l|l l|l l|l l} \hline \hline Model & Param.s & States & Transitions & \(\mathit{V}_{\max}\) & DVI(s) & DTMC(s) & \(\Delta_{\mathbb{E}}^{\%}\) & \(\Delta_{\mathsf{CVaR}}^{\%}\) \\ \hline EGL & \(\texttt{N=8,L=3}\) & \(5.4\cdot 10^{6}\) & \(5.5\cdot 10^{6}\) & 40 & 439 & 1 & 0.4 & 0.5 \\ & \(\texttt{N=8,L=4}\) & \(7.5\cdot 10^{6}\) & \(7.6\cdot 10^{6}\) & 40 & 897 & 1 & 0.4 & 0.4 \\ & \(\texttt{N=8,L=5}\) & \(9.6\cdot 10^{6}\) & \(9.7\cdot 10^{6}\) & 50 & 4,345 & 1 & 0.3 & 0.4 \\ \hline Leader & \(\texttt{N=8,K=5}\) & \(2.7\cdot 10^{6}\) & \(3.1\cdot 10^{6}\) & 20 & 41 & 2 & 0.0 & 0.0 \\ & \(\texttt{N=10,K=4}\) & \(9.4\cdot 10^{6}\) & \(1.0\cdot 10^{7}\) & 30 & 577 & 15 & 0.3 & 0.6 \\ & \(\texttt{N=8,K=6}\) & \(1.2\cdot 10^{7}\) & \(1.3\cdot 10^{7}\) & 20 & 163 & 9 & 0.0 & 0.1 \\ \hline Herman & \(\texttt{N=13}\) & \(8.2\cdot 10^{3}\) & \(1.6\cdot 10^{6}\) & 100 & 4 & 14 & 0.6 & 0.9 \\ & \(\texttt{N=15}\) & \(3.3\cdot 10^{4}\) & \(1.4\cdot 10^{7}\) & 120 & 57 & 190 & 0.5 & 0.9 \\ & \(\texttt{N=17}\) & \(1.3\cdot 10^{5}\) & \(1.3\cdot 10^{8}\) & 140 & 1,234 & 2,369 & 0.8 & 1.2 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance comparison for DTMC forward computation
Figure 5: Experimental results of risk-sensitive DVI.
### Performance comparison with risk-aware SSP
Lastly, we compare our risk-sensitive DVI method with the "risk-aware SSP" (stochastic shortest path) methods of MDP CVaR optimisation from [27], which presents both a linear programming (LP) and a VI approach. Note that these are CVaR-specific, whereas our approach generates the full reward distribution, allowing for the subsequent computation of other distributional properties. Moreover, they only consider the case where the reward for every step is one, compared to the more general reward functions we support. Hence, we cannot compare using the models from Table 1 and instead use the benchmarks from [27].
Table 3 shows the time for the LP and VI methods of [27] and the time to run our risk-sensitive DVI method (Section 4.3) followed by the DTMC method (Section 4.1). We use the same values of \(\alpha\) (correctly resulting in the same CVaR\({}_{\alpha}\) values) and the same timeout of 1 hour. Considering VI (the better performing of the two risk-aware SSP methods), we see that it is significantly faster than our method on the FireWire and WLAN benchmarks, at least in part because it uses PRISM's more efficient symbolic engines. However, for the Grid example, as the model size increases, our approach is faster.
## 6 Conclusion
In this paper, we present a distributional probabilistic model checking approach, which supports a rich set of distributional queries for DTMCs and MDPs. Experiments on a range of benchmark case studies demonstrate that our approach can be successfully applied to check various distributional properties (e.g., CVaR, VaR, variances) of large MDP and DTMC models. We believe that this work paves the way for applying distributional probabilistic model checking in many safety-critical and risk-averse domains (e.g., human-robot interaction, autonomous vehicles). For future work, we will explore distributional queries with multiple objectives and in multi-agent environments.
\begin{table}
\begin{tabular}{l|c c|c c|c} \hline \hline Model & States & Transitions & SSP-LP (s) & SSP-VI (s) & Ours (s) \\ \hline
**Grid**\((x=4)\) & \(1.3\cdot 10^{3}\) & \(1.2\cdot 10^{4}\) & 2 & 0 & 8 \\
**Grid**\((x=8)\) & \(3.2\cdot 10^{3}\) & \(3.8\cdot 10^{4}\) & 159 & 1 & 19 \\
**Grid**\((x=16)\) & \(6.4\cdot 10^{3}\) & \(8.6\cdot 10^{4}\) & 1,915 & 6 & 46 \\
**Grid**\((x=32)\) & \(1.3\cdot 10^{4}\) & \(1.8\cdot 10^{5}\) & timeout & 143 & 101 \\ \hline
**FireWire** & \(1.4\cdot 10^{5}\) & \(1.8\cdot 10^{5}\) & memout & 2 & 920 \\
**WLAN** & \(8.7\cdot 10^{4}\) & \(3.0\cdot 10^{5}\) & memout & 1 & 550 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance comparison with risk-aware SSP |
2306.17705 | A global invariant for path structures and second order differential
equations | We study a global invariant for path structures. The invariant is obtained as
a secondary invariant from a Cartan connection on a canonical bundle associated
to a path structure. It is computed in examples which are defined in terms of
reductions of the path structure. In particular we give a formula for this
global invariant for second order differential equations defined on a torus
$T^2$. | Elisha Falbel, Jose Miguel Veloso | 2023-06-30T14:42:34Z | http://arxiv.org/abs/2306.17705v1 | # A global invariant for path structures and second order differential equations
###### Abstract
We study a global invariant for path structures. The invariant is obtained as a secondary invariant from a Cartan connection on a canonical bundle associated to a path structure. It is computed in examples which are defined in terms of reductions of the path structure. In particular we give a formula for this global invariant for second order differential equations defined on a torus \(T^{2}\).
## 1 Introduction
Path structures on a 3-manifold are defined by a choice of contact structure and a decomposition of the contact plane bundle as a direct sum of two line bundles. This structure was thoroughly studied in the 19th century (see in particular [T]) as it appears in the description of second order differential equations and their equivalence under certain transformations (see Section 2 and references [A, IL, BGH]).
In Section 2 we collect definitions and examples. In particular we explain the relation with ordinary second order equations. In the following section we define the most important reductions of path structures. The first one is obtained by fixing a global contact form and it is called a strict path structure. There exists a Cartan bundle \(Y_{1}\) and a connection adapted to that structure (see 2.5) which was used in [FMMV] to obtain a classification of compact 3-manifolds with non-compact automorphism group preserving the strict path structure. We recall the construction in Proposition 2.3. The second one we call an enriched path structure, following [MM], where it was used by Mion-Mouton to classify certain classes of partially hyperbolic diffeomorphisms of three-manifolds. It consists of path structures where we fix a line transverse to the contact distribution. We define an adapted Cartan bundle \(Y_{2}\) and a canonical connection adapted to this structure (see 2.6 and Proposition 2.5). There exists a natural embedding \(Y_{1}\to Y_{2}\) (Section 2.6.2, Proposition 2.7).
In Section 3 we recall the construction of the Cartan bundle \(Y\) and the canonical adapted connection to a path structure on a 3-manifold (see Proposition 3.3). This construction is due to Cartan in [Car]. Although one can find modern treatments of this topic in several references (in particular [IL, BGH]), we include this section for the sake of completeness and because the conventions we use might differ from others. We obtain a natural embedding \(Y_{2}\to Y\) (see 3.4, Proposition 3.4) and compute the curvature of the bundle \(Y\) in terms of the curvature of \(Y_{2}\) (see 3.4.1). The formulas are used in the computation of the global invariant
in the next section. We also recall the computations by Cartan of the invariants of a second order differential equation.
In the following section we define the global invariant when \(Y_{2}\) admits a global section (see Definition 4.2). This construction is inspired by an analogous construction of a Chern-Simons invariant in the case of CR manifolds given in [BE] (see also [CL] for a relative version which does not depend on the existence of a global section). In [FV] we defined a similar invariant for flag structures. Those are manifolds equipped with a decomposition of a complex contact structure defined on the complexified tangent bundle of a 3-manifold. In this paper we restrict the definition to path structures. We obtain the expression of the invariant in terms of a reduction \(Y_{2}\) or \(Y_{1}\) of the Cartan bundle \(Y\) of the path structure (see Proposition 4.5). We also give a formula for the invariant in the case of a second order differential equation on the torus (Proposition 4.10). It involves an integration of fifth order derivatives of the function defining the ordinary equation in the form \(y^{\prime\prime}=F(x,y,y^{\prime})\). We use coordinates in the projective cotangent bundle over a surface as explained in section 4.1. We characterize certain families of differential equations on the torus which have vanishing global invariant in Corollary 4.11. We then compute the invariant for a family of path structures on tight contact structures on the torus (see Proposition 5.3) and characterize those structures with vanishing global invariant; they turn out to be flat. Finally we compute the global invariant for homogeneous path structures on \({\bf SU}(2)\) (see Proposition 6.1) and identify the flat structure on the sphere where the global invariant is maximal.
The authors thank Martin Mion-Mouton for useful discussions.
## 2 Path structures in dimension 3
Path geometries are closely related to the theory of second order differential equations. See a modern treatment in section 8.6 of [IL] and in [BGH], where the relation to second order differential equations is also explained. Let \(M\) be a real three dimensional manifold and \(TM\) be its tangent bundle.
**Definition 2.1**: _A path structure on \(M\) is a choice of two sub-bundles \(T^{1}\) and \(T^{2}\) in \(TM\) such that \(T^{1}\cap T^{2}=\{0\}\) and such that \(T^{1}\oplus T^{2}\) is a contact distribution._
The condition that \(T^{1}\oplus T^{2}\) be a contact distribution means that, locally, there exists a one form \(\theta\in T^{*}M\) such that \(\ker\theta=T^{1}\oplus T^{2}\) and \(d\theta\wedge\theta\) is never zero.
One can choose a contact form \(\theta\) up to a scalar function. One can interpret this as follows: one has a \(\mathbb{R}^{*}\)-bundle over the manifold given by the choice of \(\theta\) at each point (one might keep only positive multiples for simplicity). Over this line bundle one defines the tautological form \(\omega_{x}=\pi^{*}(\theta_{\pi(x)})\). This bundle is trivial if and only if there exists a global contact form \(\theta\). If the contact distribution is oriented, then there exists a global contact form. Indeed, using a global metric on the distribution one can define locally a transversal vector to the distribution taking a Lie bracket of orthonormal vectors in the distribution. This defines a global 1-form.
Fix \(\theta\) and local forms \(Z^{1}\) and \(Z^{2}\) defining the lines as above such that \(d\theta=Z^{1}\wedge Z^{2}\). There exist global forms \(Z^{1}\) and \(Z^{2}\) if and only if there exist global vector fields along the
lines. Clearly, if the contact distribution is oriented, it suffices that there exists a global vector field along one of the foliations by lines.
Local equivalence (also called point equivalence) between path structures happens when there exists a local diffeomorphism which gives a correspondence between the lines defining each structure.
### The flat model space
Flat path geometry is the geometry of real flags in \(\mathbb{R}^{3}\). That is the geometry of the space of all couples \((p,l)\) where \(p\in\mathbb{R}P^{2}\) and \(l\) is a real projective line containing \(p\). The space of flags is identified to the quotient
\[\mathbf{SL}(3,\mathbb{R})/B\]
where \(B\) is the Borel group of all real upper triangular matrices.
The Lie algebra of \(\mathbf{SL}(3,\mathbb{R})\) decomposes into the following direct sum of vector subspaces:
\[\mathfrak{sl}(3,\mathbb{R})=\mathfrak{g}^{-2}\oplus\mathfrak{g}^{-1}\oplus \mathfrak{g}^{0}\oplus\mathfrak{g}^{1}\oplus\mathfrak{g}^{2},\]
where
\[\mathfrak{g}^{-2}=\left\{\left(\begin{array}{ccc}0&0&0\\ 0&0&0\\ z&0&0\end{array}\right)\right\},\quad\mathfrak{g}^{-1}=\left\{\left(\begin{array} []{ccc}0&0&0\\ x&0&0\\ 0&y&0\end{array}\right)\right\},\]
\[\mathfrak{g}^{0}=\left\{\left(\begin{array}{ccc}u+v&0&0\\ 0&-2v&0\\ 0&0&-u+v\end{array}\right)\right\},\]
\[\mathfrak{g}^{1}=\left\{\left(\begin{array}{ccc}0&a&0\\ 0&0&b\\ 0&0&0\end{array}\right)\right\},\quad\mathfrak{g}^{2}=\left\{\left(\begin{array} []{ccc}0&0&c\\ 0&0&0\\ 0&0&0\end{array}\right)\right\}.\]
That is the graded decomposition of \(\mathfrak{sl}(3,\mathbb{R})\) where \(\mathfrak{b}=\mathfrak{g}^{0}\oplus\mathfrak{g}^{1}\oplus\mathfrak{g}^{2}\) corresponds to upper triangular matrices with null trace. The tangent space of \(\mathbf{SL}(3,\mathbb{R})/B\) at \([B]\) is identified to
\[\mathfrak{sl}(3,\mathbb{R})/\mathfrak{b}=\mathfrak{g}^{-2}\oplus\mathfrak{g}^{ -1}.\]
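As a concrete instance of the grading relation \([\mathfrak{g}^{i},\mathfrak{g}^{j}]\subset\mathfrak{g}^{i+j}\), a direct computation shows that the bracket of two elements of \(\mathfrak{g}^{-1}\) lands in \(\mathfrak{g}^{-2}\):

\[\left[\left(\begin{array}{ccc}0&0&0\\ x&0&0\\ 0&y&0\end{array}\right),\left(\begin{array}{ccc}0&0&0\\ x^{\prime}&0&0\\ 0&y^{\prime}&0\end{array}\right)\right]=\left(\begin{array}{ccc}0&0&0\\ 0&0&0\\ yx^{\prime}-y^{\prime}x&0&0\end{array}\right)\in\mathfrak{g}^{-2}.\]

This is the infinitesimal counterpart of the contact condition: the two line directions in \(\mathfrak{g}^{-1}\) bracket onto the transverse direction \(\mathfrak{g}^{-2}\).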
### Examples
**Example I** Consider the Heisenberg group
\[\mathbf{Heis}(3)=\{\ (z,t)\ |\ z\in\mathbb{C},\,t\in\mathbb{R}\ \}\]
with multiplication defined by \((z_{1},t_{1})\star(z_{2},t_{2})=(z_{1}+z_{2},t_{1}+t_{2}+2\mathrm{Im}\,z_{1} \overline{z_{2}})\). The contact form
\[\theta=dt-xdy-ydx\]
is invariant under left multiplications (also called Heisenberg translations). If \(\Lambda\subset\mathbf{Heis}(3)\) is a lattice then the quotient \(\Lambda\setminus\mathbf{Heis}(3)\) is a circle bundle over the torus with a globally defined contact form.
A lattice \(\Lambda\) determines a lattice \(\Gamma\subset\mathbb{C}\) corresponding to projection in the exact sequence
\[0\rightarrow\mathbb{R}\rightarrow\mathbf{Heis}(3)\rightarrow\mathbb{C} \to 0.\]
There are many global vector fields in the distribution defined by \(\theta\) invariant under \(\Lambda\); it suffices to lift a vector field on \(\mathbb{C}\) invariant under \(\Gamma\). All circle bundles obtained in this way are non-trivial and the fibers are transverse to the distribution.
**Example II**. Here we consider the torus \(T^{3}\) with coordinates \((x,y,t)\) ( \(\mod 1\)) and the global contact form
\[\theta_{n}=\cos(2\pi nt)dx-\sin(2\pi nt)dy.\]
There are two canonical global vector fields on the distribution given by \(\frac{\partial}{\partial t}\) and \(\sin(2\pi nt)\frac{\partial}{\partial x}+\cos(2\pi nt)\frac{\partial}{\partial y}\). In this example, the fiber given by the coordinate \(t\) has tangent space contained in the distribution.
**Example III**. An homogeneous example is the Lie group \(\mathbf{SU}(2)\) with left invariant vector fields \(X\) and \(Y\) with \(Z=[X,Y]\) and cyclic commutation relations. The vector fields \(X\) and \(Y\) define a path structure on \(\mathbf{SU}(2)\).
**Example IV**. Another homogeneous example is the Lie group \(\mathbf{SL}(2,\mathbb{R})\) with left invariant vector fields \(X\) and \(Y\), where \(Z=[X,Y]\) satisfies \([Z,X]=X\) and \([Z,Y]=-Y\), given by generators
\[X=\left(\begin{array}{cc}0&1\\ 0&0\end{array}\right),\ Y=\left(\begin{array}{cc}0&0\\ 1&0\end{array}\right),\ Z=\left(\begin{array}{cc}1&0\\ 0&-1\end{array}\right).\]
The path structure defined by \(X\) and \(Y\) induces a path structure on the quotient \(\Gamma\setminus\mathbf{SL}(2,\mathbb{R})\) by a discrete torsion free subgroup \(\Gamma\subset\mathbf{SL}(2,\mathbb{R})\). This structure is invariant under the flow defined by right multiplication by \(e^{tZ}\).
**Example V**. Let \(\Sigma\) be a surface equipped with a Riemannian metric. The geodesic flow on the unit tangent bundle \(T^{1}\Sigma\) defines a distribution which, together with the distribution defined by the vertical fibers of the projection of the unit tangent bundle on \(\Sigma\), defines a path structure which is not invariant under the geodesic flow. For \(\Sigma=H^{2}_{\mathbb{R}}\), the hyperbolic space, we obtain \(T^{1}\Sigma=\mathbf{PSL}(2,\mathbb{R})\) with distributions defined by the left invariant distributions \(X-Y\) and \(Z\) (using the same generators of the Lie algebra as in the previous example).
**Example VI** Let \(M\) be a three manifold equipped with a path structure \(D=T^{1}\oplus T^{2}\subset TM\). Suppose \(D\) is orientable and choose a section \(u\) of \(T^{1}\). Each section \(v\) of \(T^{2}\) such that \((u,v)\) is positive gives rise to a CR structure. Indeed we define \(Ju=v\) and \(Jv=-u\). The choice of \(v\) corresponds to a section of an \(\mathbb{R}^{*}_{+}\)-bundle over \(M\). Conversely, given a CR structure on \(M\), defined by \(J:D\to D\), one can associate path structures corresponding to a choice \(T^{1}\subset D\) and defining then \(T^{2}=J(T^{1})\).
### Path structures and second order differential equations
This has been studied for a long time (see [T], [IL] and [BGH]). It turns out that path structures can be obtained by putting together second order differential equations in one variable. Indeed, a second order differential equation in one variable is described locally as
\[\frac{d^{2}y}{dx^{2}}=F(x,y,\frac{dy}{dx}).\]
This defines a path structure on a neighborhood of a point in \(\mathbb{R}^{3}\) with coordinates \((x,y,p)\):
\[L_{1}=\ker\{dp-Fdx\}\cap\ker\{dy-pdx\},\quad L_{2}=\ker dx\cap\ker dy.\]
The contact structure is defined by the form
\[\theta=dy-pdx.\]
Defining the forms \(Z^{1}=dx\) and \(Z^{2}=dp-Fdx\), one has that \(d\theta=Z^{1}\wedge Z^{2}\).
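Indeed, since \(dx\wedge dx=0\), a one-line computation verifies the last identity:

\[d\theta=d(dy-pdx)=-dp\wedge dx=dx\wedge dp=dx\wedge(dp-Fdx)=Z^{1}\wedge Z^{2}.\]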
One can show easily that every path structure is, in fact, locally equivalent to a second order equation. That is, there exists local coordinates such that \(L_{1}\) and \(L_{2}\) are defined via a second order ODE as above.
### Reductions of path structures
We will describe two reductions of path geometry corresponding to subgroups \(G_{1}\subset G_{2}\subset SL(3,\mathbb{R})\) where
\[G_{1}=\left\{\left(\begin{array}{ccc}a&0&0\\ \star&\frac{1}{a^{2}}&0\\ \star&\star&a\end{array}\right)\right\}\]
and
\[G_{2}=\left\{\left(\begin{array}{ccc}a&0&0\\ \star&\frac{1}{ab}&0\\ \star&\star&b\end{array}\right)\right\}.\]
The models are \(G_{1}/\mathbb{R}^{*}\) and \(G_{2}/\mathbb{R}^{*2}\) and correspond to the Heisenberg group where in the first model we fix a contact form and, in the second, a transverse line to the contact distribution.
Other reductions of the \(G_{2}\)-structure might occur, namely by choosing other embeddings of \(\mathbb{R}^{*}\) into \(G_{2}\). They appear naturally when certain components of the curvature of the Cartan connections on \(Y_{2}\) or \(Y\) are non-vanishing.
We will construct coframe bundles \(Y_{1},Y_{2}\) and a principal bundle \(Y\) over \(M\) with structure groups \(\mathbb{R}^{*},\mathbb{R}^{*2}\) and the Borel group \(B\) together with Cartan connections and canonical embeddings
\[Y_{1}\to Y_{2}\to Y.\]
They correspond to a strict path structure, an enriched path structure (see the next sections for definitions) and, finally, a path structure on the manifold \(M\).
### Path structures with a fixed contact form: strict path structures.
In this section we fix a contact form and recall the reduction of the structure group of a path geometry obtained in [FV] where we called the path structure with a fixed contact form a pseudo flag structure. This structure is called strict path structure in [FMMV].
\(G_{1}\) denotes from now on the subgroup of \(\mathbf{SL}(3,\mathbb{R})\) defined by
\[G_{1}=\left\{\left(\begin{array}{ccc}a&0&0\\ x&\frac{1}{a^{2}}&0\\ z&y&a\end{array}\right)\ |\ a\in\mathbb{R}^{*},(x,y,z)\in\mathbb{R}^{3}\right\}\]
and \(P_{1}\subset G_{1}\) the subgroup defined by
\[P_{1}=\left\{\left(\begin{array}{ccc}a&0&0\\ 0&\frac{1}{a^{2}}&0\\ 0&0&a\end{array}\right)\right\}.\]
Writing the Maurer-Cartan form of \(G_{1}\) as the matrix
\[\left(\begin{array}{ccc}w&0&0\\ \theta^{1}&-2w&0\\ \theta&\theta^{2}&w\end{array}\right)\]
one obtains the Maurer-Cartan equations:
\[d\theta+\theta^{2}\wedge\theta^{1}=0\] \[d\theta^{1}-3w\wedge\theta^{1}=0\] \[d\theta^{2}+3w\wedge\theta^{2}=0\] \[dw=0.\]
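These equations are obtained entry by entry from \(d\pi+\pi\wedge\pi=0\); for instance, the \((3,1)\) entry gives

\[0=d\theta+(\pi\wedge\pi)_{31}=d\theta+\theta\wedge w+\theta^{2}\wedge\theta^{1}+w\wedge\theta=d\theta+\theta^{2}\wedge\theta^{1},\]

since \(\theta\wedge w+w\wedge\theta=0\); the remaining equations follow in the same way.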
\(G_{1}\) is the automorphism group of the canonical left-invariant strict path structure of \({\bf Heis}(3)\), and its action induces an identification of \({\bf Heis}(3)\) with the homogeneous space \(X=G_{1}/P_{1}\).
Let \(M\) be a three-manifold equipped with a strict path structure \((E^{1},E^{2},\theta)\) defined by two one dimensional bundles \(E^{1}\) and \(E^{2}\) and contact form \(\theta\). We let \(R\) be the associated Reeb vector field (satisfying \(\iota_{R}d\theta=0\) and \(\theta(R)=1\)). Now let \(X_{1}\in E^{1}\), \(X_{2}\in E^{2}\) be such that \(d\theta(X_{1},X_{2})=1\). The dual coframe of \((X_{1},X_{2},R)\) is \((\theta^{1},\theta^{2},\theta)\), for two 1-forms \(\theta^{1}\) and \(\theta^{2}\) verifying \(d\theta=\theta^{1}\wedge\theta^{2}\).
At any point \(x\in M\), one can look at the coframes of the form
\[\omega^{1}=a^{3}\theta^{1}(x),\ \omega^{2}=\frac{1}{a^{3}}\theta^{2}(x),\ \omega= \theta(x)\]
for \(a\in\mathbb{R}^{*}\).
**Definition 2.2**: _We denote by \(p_{1}:Y_{1}\to M\) the \(\mathbb{R}^{*}\)-coframe bundle over \(M\) given by the set of coframes \((\omega,\omega^{1},\omega^{2})\) of the above form._
We will denote the tautological forms defined by \(\omega^{1},\omega^{2},\omega\) using the same letters. That is, we write \(\omega^{i}\) at the coframe \((\omega^{1},\omega^{2},\omega)\) to be \(p_{1}^{*}(\omega^{i})\).
**Proposition 2.3**: _There exists a unique Cartan connection on \(Y_{1}\)_
\[\pi_{1}=\left(\begin{array}{ccc}w&0&0\\ \omega^{1}&-2w&0\\ \omega&\omega^{2}&w\end{array}\right)\]
_such that its curvature form is of the form_
\[\Pi_{1}=d\pi_{1}+\pi_{1}\wedge\pi_{1}=\left(\begin{array}{ccc}dw&0&0\\ \omega\wedge\tau^{1}&-2dw&0\\ 0&-\omega\wedge\tau^{2}&dw\end{array}\right)\]
_with \(\tau^{1}\wedge\omega^{2}=\tau^{2}\wedge\omega^{1}=0\)._
Observe that the condition \(\tau^{1}\wedge\omega^{2}=\tau^{2}\wedge\omega^{1}=0\) implies that we may write \(\tau^{1}=\tau^{1}_{2}\omega^{2}\) and \(\tau^{2}=\tau^{2}_{1}\omega^{1}\). The structure equations are
\[d\omega+\omega^{2}\wedge\omega^{1}=0,\]
\[d\omega^{1}-3w\wedge\omega^{1}=\omega\wedge\tau^{1},\]
\[d\omega^{2}+3w\wedge\omega^{2}=-\omega\wedge\tau^{2}.\]
The proof of the proposition is given in [FMMV] and [FV].
Bianchi identities are obtained differentiating the structure equations. They are described in the following equations:
\[dw=C\omega\wedge\omega^{1}+D\omega\wedge\omega^{2}+S\omega^{1} \wedge\omega^{2}, \tag{1}\] \[d\tau^{1}_{2}-6\tau^{1}_{2}w+3D\omega^{1}=\tau^{1}_{20}\omega+ \tau^{1}_{22}\omega^{2}\] (2) \[d\tau^{2}_{1}+6\tau^{2}_{1}w+3C\omega^{2}=\tau^{2}_{10}\omega+ \tau^{2}_{11}\omega^{1} \tag{3}\]
### Path structures with a fixed transverse line: enriched path structures.
In this section we introduce a coframe bundle and a Cartan connection associated to a path structure with a fixed transverse line to the contact distribution.
The model space is the homogeneous space which is the quotient of the group of lower triangular matrices in \(SL(3,\mathbb{R})\) by the subgroup of diagonal matrices. The Maurer-Cartan form is the Lie algebra valued form which can be represented by
\[\pi=\left(\begin{array}{ccc}\varphi+w&0&0\\ \omega^{1}&-2w&0\\ \omega&\omega^{2}&-\varphi+w\end{array}\right)\]
The Maurer-Cartan equations \(d\pi+\pi\wedge\pi=0\) are given by
\[d\omega=2\varphi\wedge\omega+\omega^{1}\wedge\omega^{2}\]
\[d\omega^{1}=\varphi\wedge\omega^{1}+3w\wedge\omega^{1}\]
\[d\omega^{2}=\varphi\wedge\omega^{2}-3w\wedge\omega^{2}.\]
Let \(M\) be a three manifold equipped with a path structure \(D=T^{1}\oplus T^{2}\subset TM\). We fix a transverse line \(L\) so that \(TM=T^{1}\oplus T^{2}\oplus L\).
We suppose \(X_{1}\in T^{1}\), \(X_{2}\in T^{2}\) and \(X\in L\) form a frame. The dual coframe is \(\theta^{1}\), \(\theta^{2}\) and \(\theta\). Observe that \(\theta\) is simply a form with \(\ker\theta=D\). One can define a coframe bundle defined by all coframes:
\[\omega^{1}=a^{1}\theta^{1},\ \omega^{2}=a^{2}\theta^{2},\ \omega=\lambda\theta.\]
where we will suppose, for simplicity, that \(a^{1},a^{2},\lambda>0\).
A reduction of this coframe bundle is obtained by imposing that each coframe verifies
\[d\omega_{|D}=(\omega^{1}\wedge\omega^{2})_{|D}\]
for an extension of the 1-form such that \(\ker\omega=D\). This relation does not depend on the particular extension of a form \(\omega\) defined at a point because \(d\omega_{|D}(X,Y)=-\omega([X,Y])\) for any vector fields \(X\) and \(Y\) which are sections of the distribution \(D\).
**Definition 2.4**: _We denote by \(p_{2}:Y_{2}\to M\) the \(\mathbb{R}^{*2}\)-coframe bundle over \(M\) given by the set of 1-forms \((\omega,\omega^{1},\omega^{2})\) defined above. The structure group \(\mathbb{R}^{*2}\) acts as follows_
\[(\omega^{\prime},\omega^{\prime 1},\omega^{\prime 2})=(\omega,\omega^{1}, \omega^{2})\left(\begin{array}{ccc}\lambda&0&0\\ 0&a^{1}&0\\ 0&0&a^{2}\end{array}\right)\]
_where \(\lambda,a^{1},a^{2}\in\mathbb{R}^{*}_{+}\) with \(a^{1}a^{2}=\lambda\)._
In order to define a Cartan connection on \(Y_{2}\) we start by taking the tautological forms corresponding to the forms \(\omega,\omega^{1},\omega^{2}\), which we will denote by the same letters by abuse of notation.
Using a coframe section \((\theta,\theta^{1},\theta^{2})\) on \(M\) one can express the tautological forms as
\[\omega=\lambda p_{2}^{*}(\theta),\ \omega^{1}=a^{1}p_{2}^{*}(\theta^{1}),\ \omega^{2}=a^{2}p_{2}^{*}(\theta^{2}),\]
with \(a^{1}a^{2}=\lambda\).
We need to define two forms \(\varphi\) and \(w\) corresponding to the vertical directions
Observe first that we have
\[d\omega=\frac{d\lambda}{\lambda}\wedge\omega+\omega^{1}\wedge\omega^{2}\ \ \mathrm{mod}(\omega)\]
and therefore one may write
\[d\omega=2\varphi\wedge\omega+\omega^{1}\wedge\omega^{2} \tag{4}\]
where \(\varphi\) restricted to the vertical fiber is \(\frac{d\lambda}{2\lambda}\). The form \(\varphi\) is not yet fixed, and any other form \(\varphi^{\prime}\) satisfying the equation verifies
\[\varphi-\varphi^{\prime}=s\omega\]
where \(s\) is a function on \(Y_{2}\).
Differentiating the forms \(\omega^{1}\) and \(\omega^{2}\) we obtain new forms which correspond to the coordinates \(a^{1},a^{2}\) :
\(d\omega^{1}=\frac{da^{1}}{a^{1}}\wedge\omega^{1}+a^{1}d\theta^{1}\) and \(d\omega^{2}=\frac{da^{2}}{a^{2}}\wedge\omega^{2}+a^{2}d\theta^{2}\).
Observing that
\[\frac{d\lambda}{\lambda}=\frac{da^{1}}{a^{1}}+\frac{da^{2}}{a^{2}}\]
we can write
\[d\omega^{1}=\frac{d\lambda}{2\lambda}\wedge\omega^{1}+\frac{1}{2}\left(\frac{ da^{1}}{a^{1}}-\frac{da^{2}}{a^{2}}\right)\wedge\omega^{1}+a^{1}d\theta^{1}\]
\[d\omega^{2}=\frac{d\lambda}{2\lambda}\wedge\omega^{2}-\frac{1}{2}\left(\frac{ da^{1}}{a^{1}}-\frac{da^{2}}{a^{2}}\right)\wedge\omega^{2}+a^{2}d\theta^{2}\]
Now we can make the first right-hand term of each equation equal to \(\varphi\wedge\omega^{1}\) and \(\varphi\wedge\omega^{2}\) respectively by adding terms in \(\omega,\omega^{1},\omega^{2}\) to \(\frac{d\lambda}{2\lambda}\). The terms in \(\omega^{1}\wedge\omega^{2}\) not appearing in these first terms can be absorbed in the second term of each equation. There remains a last term in each equation, which we denote by \(\omega\wedge\tau^{1}\) and \(-\omega\wedge\tau^{2}\) respectively. We have proved the following:
**Lemma 2.1**: _There exists forms \(w,\tau^{1},\tau^{2}\) defined on \(Y_{2}\) such that_
\[d\omega^{1}=\varphi\wedge\omega^{1}+3w\wedge\omega^{1}+\omega\wedge\tau^{1}\ \mbox{and}\ \ d\omega^{2}=\varphi\wedge\omega^{2}-3w\wedge\omega^{2}-\omega\wedge\tau^{2}. \tag{5}\]
_The forms \(\tau^{1}\) and \(\tau^{2}\) are horizontal, that is, they vanish on vectors tangent to the fibers of \(Y_{2}\to M\). Moreover, writing \(\omega^{1}=a^{1}\theta^{1}\), \(\omega^{2}=a^{2}\theta^{2}\), \(\omega=\lambda\theta\) for a choice of sections on \(M\), one has \(\varphi=\frac{d\lambda}{2\lambda}\) and \(6w=\frac{da^{1}}{a^{1}}-\frac{da^{2}}{a^{2}}\) modulo the tautological forms of the fiber bundle \(Y_{2}\)._
Let \(\varphi^{\prime},w^{\prime},\tau^{\prime 1}\) and \(\tau^{\prime 2}\) be other forms satisfying equations above. Taking the difference we obtain
\[0=(\varphi-\varphi^{\prime})\wedge\omega^{1}+3(w-w^{\prime})\wedge\omega^{1}+ \omega\wedge(\tau^{1}-\tau^{\prime 1})\]
and
\[0=(\varphi-\varphi^{\prime})\wedge\omega^{2}-3(w-w^{\prime})\wedge\omega^{2}- \omega\wedge(\tau^{2}-\tau^{\prime 2})\]
Therefore, as \(\varphi-\varphi^{\prime}=s\omega\), we can write
\[0=-3\omega^{1}\wedge(w-w^{\prime})+\omega\wedge(s\omega^{1}+\tau^{1}-\tau^{ \prime 1})\]
and
\[0=3\omega^{2}\wedge(w-w^{\prime})-\omega\wedge(-s\omega^{2}+\tau^{2}-\tau^{ \prime 2}).\]
By Cartan's lemma we obtain
\[w-w^{\prime}=a\omega,\]
\[\tau^{1}-\tau^{\prime 1}=-3a\omega^{1}-s\omega^{1}+b^{1}\omega,\]
\[\tau^{2}-\tau^{\prime 2}=-3a\omega^{2}+s\omega^{2}+b^{2}\omega.\]
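Here Cartan's lemma is used in the following standard form: if \(\alpha^{1},\dots,\alpha^{p}\) are pointwise linearly independent 1-forms and \(\beta_{1},\dots,\beta_{p}\) are 1-forms satisfying

\[\sum_{i=1}^{p}\alpha^{i}\wedge\beta_{i}=0,\]

then \(\beta_{i}=\sum_{j}h_{ij}\alpha^{j}\) for functions \(h_{ij}\) with \(h_{ij}=h_{ji}\). Applied to the two relations above, with the linearly independent forms \(\omega,\omega^{1},\omega^{2}\), it yields the stated expressions.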
Now, we can impose that \(\tau^{1}=\tau^{1}_{1}\omega^{1}+\tau^{1}_{2}\omega^{2}\) and \(\tau^{2}=\tau^{2}_{1}\omega^{1}+\tau^{2}_{2}\omega^{2}\) by choosing convenient \(b^{1}\) and \(b^{2}\) (or by simply considering, from the beginning, \(\tau^{1}\) and \(\tau^{2}\) with no terms in \(\omega\)). Moreover, one can choose unique \(a\) and \(s\) so that \(\tau^{1}_{1}=0\) and \(\tau^{2}_{2}=0\). We conclude that
**Lemma 2.2**: _There exist unique forms \(\varphi,w,\tau^{1},\tau^{2}\) defined on \(Y_{2}\) such that_
\[d\omega=2\varphi\wedge\omega+\omega^{1}\wedge\omega^{2}\]
\[d\omega^{1}=\varphi\wedge\omega^{1}+3w\wedge\omega^{1}+\omega\wedge\tau^{1}\]
\[d\omega^{2}=\varphi\wedge\omega^{2}-3w\wedge\omega^{2}-\omega\wedge\tau^{2}\]
_with \(\tau^{1}\wedge\omega^{2}=\tau^{2}\wedge\omega^{1}=0\)._
Bianchi identities are obtained differentiating the above equations:
**Lemma 2.3**: _There exists a 1-form \(\psi\) such that_
\[d\varphi=\omega\wedge\psi \tag{6}\]
_The form \(\psi\) may be chosen satisfying \(\psi=A\omega^{1}+B\omega^{2}\) and \(d\psi=-2\varphi\wedge\psi+\omega\wedge\alpha\) where \(A,B\) are functions on \(Y_{2}\) and \(\alpha\) is a 1-form on \(Y_{2}\)._
Proof.: Differentiating equation \(d\omega=2\varphi\wedge\omega+\omega^{1}\wedge\omega^{2}\) one obtains, using equations 5, that \(d\varphi\wedge\omega=0\), that is,
\[d\varphi=\omega\wedge\psi \tag{7}\]
for a 1-form \(\psi\) defined on \(Y_{2}\).
Differentiating \(d\varphi=\omega\wedge\psi\) one has
\[0=d\omega\wedge\psi-\omega\wedge d\psi=(2\varphi\wedge\omega+\omega^{1}\wedge \omega^{2})\wedge\psi-\omega\wedge d\psi=\omega^{1}\wedge\omega^{2}\wedge\psi- \omega\wedge(d\psi+2\varphi\wedge\psi).\]
Using Cartan's lemma, \(\psi=A\omega^{1}+B\omega^{2}\) modulo \(\omega\), and we can certainly choose \(\psi\) satisfying \(d\varphi=\omega\wedge\psi\) with \(\psi=A\omega^{1}+B\omega^{2}\). We conclude that
\[d\psi+2\varphi\wedge\psi=\omega\wedge\alpha.\]
\(\Box\)
Equation \(dd\omega^{1}=0\) gives after simplifications
\[0=d(\varphi+3w)\wedge\omega^{1}+\omega\wedge\omega^{2}(d\tau_{2}^{1}+2\tau_{2 }^{1}(\varphi-3w)). \tag{8}\]
Analogously, \(dd\omega^{2}=0\) simplifies to
\[0=d(\varphi-3w)\wedge\omega^{2}-\omega\wedge\omega^{1}(d\tau_{1}^{2}+2\tau_{1 }^{2}(\varphi+3w)). \tag{9}\]
Using the previous lemma we may write
\[dw=C\omega\wedge\omega^{1}+D\omega\wedge\omega^{2}+S\omega^{1}\wedge\omega^{2},\]
where \(C,D\) and \(S\) are functions on \(Y_{2}\).
We can represent the equations above as a matrix equation whose entries are differential forms. The forms are disposed in the Lie algebra \(\mathfrak{b}\subset\mathfrak{sl}(3,\mathbb{R})\) (the Lie algebra of lower triangular matrices) and we obtain the following Proposition.
**Proposition 2.5**: _Let \(Y_{2}\) be the adapted principal bundle constructed above associated to an enriched path structure on a manifold \(M\). Then there exists a unique Cartan's connection with values in \(\mathfrak{b}\):_
\[\pi_{2}=\left(\begin{array}{ccc}\varphi+w&0&0\\ \omega^{1}&-2w&0\\ \omega&\omega^{2}&-\varphi+w\end{array}\right)\]
_with curvature:_
\[\Pi_{2}=d\pi_{2}+\pi_{2}\wedge\pi_{2}=\left(\begin{array}{ccc}\omega\wedge \psi+W&0&0\\ \tau_{2}^{1}\omega\wedge\omega^{2}&-2W&0\\ 0&-\tau_{1}^{2}\omega\wedge\omega^{1}&-\omega\wedge\psi+W\end{array}\right) \tag{10}\]
_where \(W=C\omega\wedge\omega^{1}+D\omega\wedge\omega^{2}+S\omega^{1}\wedge\omega^{2}\) and \(\psi=A\omega^{1}+B\omega^{2}\)._
#### 2.6.1 More Bianchi identities
* Substituting the expressions above in equations 8 and 9 we obtain \[d\tau_{2}^{1}+2\tau_{2}^{1}(\varphi-3w)+(B+3D)\omega^{1}=\tau_{20}^{1}\omega+\tau _{22}^{1}\omega^{2}.\] (11)
* Analogously we obtain \[d\tau_{1}^{2}+2\tau_{1}^{2}(\varphi+3w)-(A-3C)\omega^{2}=\tau_{10}^{2}\omega+ \tau_{11}^{2}\omega^{1}.\] (12) From the last two equations we obtain the following **Proposition 2.6**: _If the adapted connection of \(Y_{2}\) has null torsion and_ \[dw=S\omega^{1}\wedge\omega^{2},\] _then \(d\varphi=0\)._
* Analogously, \(dd\varphi=0\) simplifies to \[0=\omega\wedge\omega^{1}(dA+3A(\varphi+w))+\omega\wedge\omega^{2}(dB+3B( \varphi-w))\] and we obtain \[dA+3A(\varphi+w)=A_{0}\omega+A_{1}\omega^{1}+A_{2}\omega^{2},\] (13) \[dB+3B(\varphi-w)=B_{0}\omega+B_{1}\omega^{1}+B_{2}\omega^{2},\] (14) with \(A_{2}=B_{1}\).
* Also, \(ddw=0\) simplifies to \[0=\omega\wedge\omega^{1}(dC+3C(\varphi+w))+\omega\wedge\omega^{2}(dD+3D( \varphi-w))+\omega^{1}\wedge\omega^{2}(dS+2S\varphi)\] and we obtain \[dC+3C(\varphi+w)=C_{0}\omega+C_{1}\omega^{1}+C_{2}\omega^{2},\] (15) \[dD+3D(\varphi-w)=D_{0}\omega+D_{1}\omega^{1}+D_{2}\omega^{2},\] (16) \[dS+2S\varphi=S_{0}\omega+S_{1}\omega^{1}+S_{2}\omega^{2},\] (17) with \(C_{2}-D_{1}+S_{0}=0\).
**Lemma 2.4**: _If \(\tau^{1}=\tau^{2}=C=D=0\)_
\[d\varphi=0.\]
_Proof_. From the last formulae we obtain that \(\psi\) is a multiple of \(\omega\) and the result follows. \(\Box\)
#### 2.6.2 The embedding \(\iota_{1}:Y_{1}\to Y_{2}\)
Given a path structure with a fixed contact form \(\omega\), we first obtained a coframe bundle \(Y_{1}\); one can also obtain a canonical transverse direction by considering the Reeb vector field associated to \(\omega\). One then obtains the coframe bundle \(Y_{2}\) of the last section.
Given a coframe \((\omega,\omega^{1},\omega^{2})\in Y_{1}\) one can view the same coframe as a coframe of \(Y_{2}\). This gives an embedding
\[\iota_{1}:Y_{1}\to Y_{2}.\]
By abuse of language we may write the connection forms of \(Y_{1}\) and \(Y_{2}\) using the same letters and then obtain:
**Proposition 2.7**: _There exists a unique embedding \(\iota_{1}:Y_{1}\to Y_{2}\) satisfying \(\iota_{1}^{*}(\omega)=\omega\), \(\iota_{1}^{*}(\omega^{1})=\omega^{1}\) and \(\iota_{1}^{*}(\omega^{2})=\omega^{2}\). Moreover, for this embedding, \(\iota_{1}^{*}(\varphi)=0\) and \(\iota_{1}^{*}(w)=w\)._
_Proof._ If uniqueness were not satisfied one could obtain the same forms by pulling back a different coframe. But from the transformations of the coframe,
\[\tilde{\omega} =\frac{a}{b}\,\omega\] \[\tilde{\omega}^{1} =a^{2}b\,\omega^{1}\] \[\tilde{\omega}^{2} =\frac{1}{ab^{2}}\,\omega^{2}.\]
We obtain then that \(a=b=1\) and the embedding is uniquely determined by the conditions.
Comparing the structure equations of both structures we further get the equations \(\iota_{1}^{*}(\varphi)=0\) and \(\iota_{1}^{*}(w)=w\). \(\Box\)
## 3 The Cartan connection of a path structure
We review in this section the construction of a Cartan connection. The construction is due to E. Cartan in [Car] and one can read a modern description of it in [IL]. We include this section in order to fix our conventions and describe the embedding of \(Y_{2}\) into the corresponding fiber bundle associated to a path geometry (see 3.4 and 3.4.1) which will be used to define the global invariant.
The Maurer-Cartan form on \(SL(3,\mathbb{R})\) is given by a form with values in the Lie algebra \(\mathfrak{sl}(3,\mathbb{R})\) :
\[\pi=\left(\begin{array}{ccc}\varphi+w&\varphi^{2}&\psi\\ \omega^{1}&-2w&\varphi^{1}\\ \omega&\omega^{2}&-\varphi+w\end{array}\right)\]
satisfying the equation \(d\pi+\pi\wedge\pi=0\). That is
\[d\omega=\omega^{1}\wedge\omega^{2}+2\varphi\wedge\omega\] \[d\omega^{1}=\varphi\wedge\omega^{1}+3w\wedge\omega^{1}+\omega \wedge\varphi^{1}\]
\[d\omega^{2}=\varphi\wedge\omega^{2}-3w\wedge\omega^{2}-\omega\wedge\varphi^{2}\]
\[dw=-\frac{1}{2}\varphi^{2}\wedge\omega^{1}+\frac{1}{2}\varphi^{1}\wedge\omega^{2}\]
\[d\varphi=\omega\wedge\psi-\frac{1}{2}\varphi^{2}\wedge\omega^{1}-\frac{1}{2} \varphi^{1}\wedge\omega^{2}\]
\[d\varphi^{1}=\psi\wedge\omega^{1}-\varphi\wedge\varphi^{1}+3w\wedge\varphi^{1}\]
\[d\varphi^{2}=-\psi\wedge\omega^{2}-\varphi\wedge\varphi^{2}-3w\wedge\varphi^{2}\]
\[d\psi=\varphi^{1}\wedge\varphi^{2}+2\psi\wedge\varphi.\]
### The \(\mathbb{R}^{*}\)-bundle of contact forms and an adapted coframe bundle
We recall the construction of the \(\mathbb{R}^{*}\)-bundle of contact forms. Define \(E\) to be the \(\mathbb{R}^{*}\)-bundle of all forms \(\theta\) on \(TM\) such that \(\ker\theta=T^{1}\oplus T^{2}\). Remark that this bundle is trivial if and only if there exists a globally defined non-vanishing form \(\theta\). Define the set of forms \(\theta^{1}\) and \(\theta^{2}\) on \(M\) satisfying
\[\theta^{1}(T^{1})\neq 0\ \ \text{and}\ \theta^{2}(T^{2})\neq 0.\]
\[\ker\theta^{1}_{|\ker\theta}=T^{2}\ \ \text{and}\ \ \ker\theta^{2}_{|\ker \theta}=T^{1}.\]
Fixing one choice, all others are given by \(\theta^{\prime i}=a^{i}\theta^{i}+v^{i}\theta\).
On \(E\) we define the tautological form \(\omega\). That is \(\omega_{\theta}=\pi^{*}(\theta)\) where \(\pi:E\to M\) is the natural projection. We also consider the tautological forms defined by the forms \(\theta^{1}\) and \(\theta^{2}\) over the line bundle \(E\). That is, for each \(\theta\in E\) we let \(\omega^{i}_{\theta}=\pi^{*}(\theta^{i})\). At each point \(\theta\in E\) we have the family of forms defined on \(E\):
\[\omega^{\prime}=\omega\]
\[\omega^{\prime 1}=a^{1}\omega^{1}+v^{1}\omega\]
\[\omega^{\prime 2}=a^{2}\omega^{2}+v^{2}\omega\]
We may, moreover, suppose that
\[d\theta=\theta^{1}\wedge\theta^{2}\ \ \text{modulo}\ \theta\]
and therefore
\[d\omega=\omega^{1}\wedge\omega^{2}\ \ \text{modulo}\ \omega.\]
This imposes that \(a^{1}a^{2}=1\).
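The constraint on \(a^{1}a^{2}\) comes from comparing the two expressions for \(d\omega\): modulo \(\omega\),

\[\omega^{\prime 1}\wedge\omega^{\prime 2}=(a^{1}\omega^{1}+v^{1}\omega)\wedge(a^{2}\omega^{2}+v^{2}\omega)=a^{1}a^{2}\,\omega^{1}\wedge\omega^{2},\]

so requiring \(d\omega=\omega^{\prime 1}\wedge\omega^{\prime 2}\) modulo \(\omega\) for every coframe forces \(a^{1}a^{2}=1\).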
Those forms vanish on vertical vectors, that is, vectors in the kernel of the map \(TE\to TM\). In order to define non-horizontal 1-forms we let \(\theta\) be a section of \(E\) over \(M\) and introduce the coordinate \(\lambda\in\mathbb{R}^{*}\) in \(E\). By abuse of notation, let \(\theta\) denote the tautological form on the section \(\theta\). We then write the tautological form \(\omega\) over \(E\) as
\[\omega_{\lambda\theta}=\lambda\theta.\]
Differentiating this formula we obtain
\[d\omega=2\varphi\wedge\omega+\omega^{1}\wedge\omega^{2} \tag{18}\]
where \(\varphi=\frac{d\lambda}{2\lambda}\) modulo \(\omega,\omega^{1},\omega^{2}\). Here \(\frac{d\lambda}{2\lambda}\) is a form intrinsically defined on \(E\) up to horizontal forms. We obtain in this way a coframe bundle satisfying equation 18 over \(E\):
\[\omega^{\prime}=\omega\]
\[\omega^{\prime 1}=a^{1}\omega^{1}+v^{1}\omega\]
\[\omega^{\prime 2}=a^{2}\omega^{2}+v^{2}\omega\]
\[\varphi^{\prime}=\varphi-\frac{1}{2}a^{1}v^{2}\omega^{1}+\frac{1}{2}a^{2}v^{ 1}\omega^{2}+s\omega\]
\(v^{1},v^{2},s\in\mathbb{R}\) and \(a^{1},a^{2}\in\mathbb{R}^{*}\) such that \(a^{1}a^{2}=1\).
**Definition 3.1**: _We denote by \(Y\) the coframe bundle \(Y\to E\) given by the set of 1-forms \(\omega,\omega^{1},\omega^{2},\varphi\) as above. Two coframes are related by_
\[(\omega^{\prime},\omega^{\prime 1},\omega^{\prime 2},\varphi^{\prime})=( \omega,\omega^{1},\omega^{2},\varphi)\left(\begin{array}{cccc}1&v^{1}&v^{2}& s\\ 0&a^{1}&0&-\frac{1}{2}a^{1}v^{2}\\ 0&0&a^{2}&\frac{1}{2}a^{2}v^{1}\\ 0&0&0&1\end{array}\right)\]
_where \(s,v^{1},v^{2}\in\mathbb{R}\) and \(a^{1},a^{2}\in\mathbb{R}^{*}\) satisfy \(a^{1}a^{2}=1\)._
The bundle \(Y\) can also be fibered over the manifold \(M\). In order to describe the bundle \(Y\) as a principal fiber bundle over \(M\) observe that choosing a local section \(\theta\) of \(E\) and forms \(\theta^{1}\) and \(\theta^{2}\) on \(M\) such that \(d\theta=\theta^{1}\wedge\theta^{2}\) one can write a trivialization of the fiber
\[\omega=\lambda\theta\]
\[\omega^{1}=a^{1}\theta^{1}+v^{1}\lambda\theta\]
\[\omega^{2}=a^{2}\theta^{2}+v^{2}\lambda\theta\]
\[\varphi=\frac{d\lambda}{2\lambda}-\frac{1}{2}a^{1}v^{2}\theta^{1}+\frac{1}{2 }a^{2}v^{1}\theta^{2}+s\theta,\]
where \(v^{1},v^{2},s\in\mathbb{R}\) and \(a^{1},a^{2}\in\mathbb{R}^{*}\) such that \(a^{1}a^{2}=\lambda\). Here the coframe \(\omega,\omega^{1},\omega^{2},\varphi\) is seen as composed of tautological forms.
The group \(H\) acting on the right of this bundle is
\[H=\left\{\left(\begin{array}{cccc}\lambda&v^{1}\lambda&v^{2}\lambda&s\\ 0&a^{1}&0&-\frac{1}{2}a^{1}v^{2}\\ 0&0&a^{2}&\frac{1}{2}a^{2}v^{1}\\ 0&0&0&1\end{array}\right)\text{ where }s,v^{1},v^{2}\in\mathbb{R}\text{ and }a^{1},a^{2}\in\mathbb{R}^{*}\text{ satisfy }a^{1}a^{2}=\lambda\ \right\}.\]
Consider the homomorphism from the Borel group \(B\subset\mathbf{SL}(3,\mathbb{R})\) of upper triangular matrices with determinant one into \(H\)
\[j:B\to H\]
given by
\[\left(\begin{array}{ccc}a&c&e\\ 0&\frac{1}{a\bar{b}}&f\\ 0&0&b\end{array}\right)\longrightarrow\left(\begin{array}{ccc}\frac{a}{b}&-a^{2} f&\frac{c}{b}&-eb+\frac{1}{2}acf\\ 0&a^{2}b&0&-\frac{1}{2}abc\\ 0&0&\frac{1}{a\bar{b}^{2}}&-\frac{f}{2\bar{b}}\\ 0&0&0&1\end{array}\right)\]
One verifies that the homomorphism is surjective so that \(H\) is isomorphic to the Borel group of upper triangular matrices in \({\bf SL}(3,\mathbb{R})\).
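Indeed, matching the entries of the image of \(j\) with the parametrization of \(H\) (writing \(a^{1}_{H},a^{2}_{H}\) for the parameters of \(H\) to avoid a clash of notation) gives

\[\lambda=\frac{a}{b},\quad a^{1}_{H}=a^{2}b,\quad a^{2}_{H}=\frac{1}{ab^{2}},\quad v^{1}=-abf,\quad v^{2}=\frac{c}{a},\quad s=-eb+\frac{1}{2}acf,\]

with \(a^{1}_{H}a^{2}_{H}=\lambda\) as required; these relations can be solved for \(a,b,c,e,f\), which gives the surjectivity.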
**Proposition 3.2**: _The bundle \(Y\to M\) is a principal bundle with structure group \(H\)._
### Construction of connection forms on the bundle \(Y\)
The goal of this section is to review the construction of canonical forms defined on the coframe bundle \(Y\to E\) as in [FV]. They give rise to a Cartan connection on \(Y\) with values in \(\mathfrak{sl}(3,\mathbb{R})\).
A local section of the coframe bundle over \(M\) may be given by three forms
\[\theta,\ \ \theta^{1},\ \ \theta^{2}\]
satisfying \(d\theta=\theta^{1}\wedge\theta^{2}\), with \(\ker\theta^{1}_{|\ker\theta}=T^{2}\) and \(\ker\theta^{2}_{|\ker\theta}=T^{1}.\) They give coordinates on the cotangent bundle over \(E\). Indeed, at \(\lambda\theta\in E\), the coframes of \(Y\) are parametrized as follows:
\[\omega=\lambda\theta\]
\[\omega^{i}=a^{i}\theta^{i}+v^{i}\lambda\theta\]
with \(a^{1}a^{2}=\lambda\) and
\[d\omega=2\varphi\wedge\omega+\omega^{1}\wedge\omega^{2}, \tag{19}\]
where \(\varphi=\frac{d\lambda}{2\lambda}\mod\omega^{1},\omega^{2},\omega\).
Differentiating the forms \(\omega^{1}\) and \(\omega^{2}\) we obtain new forms which correspond to the coordinates \(a^{1},v^{1},a^{2},v^{2}\) (recall that \(a^{1}\) and \(a^{2}\) are not independent):
**Lemma 3.1**: _There exist linearly independent forms \(w,\varphi^{1},\varphi^{2}\) defined on \(T^{*}Y\) such that_
\[d\omega^{1}=\varphi\wedge\omega^{1}+3w\wedge\omega^{1}+\omega\wedge\varphi^{1} \mbox{ and }\ d\omega^{2}=\varphi\wedge\omega^{2}-3w\wedge\omega^{2}-\omega\wedge\varphi^ {2} \tag{20}\]
_with \(w=\frac{1}{6}(\frac{da^{1}}{a^{1}}-\frac{da^{2}}{a^{2}})\) mod \((\omega,\omega^{1},\omega^{2})\) and \(\varphi^{1}=-dv^{1}\), \(\varphi^{2}=dv^{2}\) mod \((\omega,\omega^{1},\omega^{2})\)._
The coordinate \(s\) in the bundle \(Y\) is associated to a new form:
**Lemma 3.2**: _There exists a 1-form \(\psi\) such that_
\[d\varphi=\omega\wedge\psi-\frac{1}{2}(\varphi^{2}\wedge\omega^{1}+\varphi^{1} \wedge\omega^{2}) \tag{21}\]
The forms \(w,\varphi^{1},\varphi^{2}\) and \(\psi\) are not yet determined. Define
\[W=dw+\frac{1}{2}\omega^{2}\wedge\varphi^{1}-\frac{1}{2}\omega^{1}\wedge\varphi^{2}\]
\[\Phi^{1}=d\varphi^{1}+3\varphi^{1}\wedge w+\omega^{1}\wedge\psi+\varphi\wedge \varphi^{1}\]
\[\Phi^{2}=d\varphi^{2}-3\varphi^{2}\wedge w-\omega^{2}\wedge\psi+\varphi\wedge \varphi^{2}\]
**Lemma 3.3**: _There exist unique forms \(w,\varphi^{1},\varphi^{2}\) and \(\psi\) such that \(W=0\), \(\Phi^{1}=Q^{1}\omega\wedge\omega^{2}\) and \(\Phi^{2}=Q^{2}\omega\wedge\omega^{1}\) where \(Q^{1}\) and \(Q^{2}\) are functions on \(Y\)._
We can represent the structure equations 19, 20, 21 as a matrix equation whose entries are differential forms. The forms are arranged in the Lie algebra \(\mathfrak{sl}(3,\mathbb{R})\) and define a Cartan connection on \(Y\).
**Proposition 3.3**: _There exists a unique Cartan connection \(\pi:TY\to\mathfrak{sl}(3,\mathbb{R})\) defined on \(Y\) of the form_
\[\pi=\left(\begin{array}{ccc}\varphi+w&\varphi^{2}&\psi\\ \omega^{1}&-2w&\varphi^{1}\\ \omega&\omega^{2}&-\varphi+w\end{array}\right).\]
_such that its curvature satisfies_
\[\Pi=d\pi+\pi\wedge\pi=\left(\begin{array}{ccc}0&\Phi^{2}&\Psi\\ 0&0&\Phi^{1}\\ 0&0&0\end{array}\right)\]
_with \(\Phi^{1}=Q^{1}\omega\wedge\omega^{2}\), \(\Phi^{2}=Q^{2}\omega\wedge\omega^{1}\) and \(\Psi=(U_{1}\omega^{1}+U_{2}\omega^{2})\wedge\omega\)._
### Curvature forms and Bianchi identities
Curvature forms appear as differentials of connection forms and are used implicitly in order to fix the connection forms.
We recall:
\[W=dw+\frac{1}{2}\omega^{2}\wedge\varphi^{1}-\frac{1}{2}\omega^{1}\wedge\varphi^{2}=0, \tag{22}\]
\[\Phi^{1}=d\varphi^{1}+3\varphi^{1}\wedge w+\omega^{1}\wedge\psi+\varphi\wedge \varphi^{1}=Q^{1}\omega\wedge\omega^{2}, \tag{23}\]
\[\Phi^{2}=d\varphi^{2}-3\varphi^{2}\wedge w-\omega^{2}\wedge\psi+\varphi\wedge \varphi^{2}=Q^{2}\omega\wedge\omega^{1}, \tag{24}\]
\[\Psi:=d\psi-\varphi^{1}\wedge\varphi^{2}+2\varphi\wedge\psi=(U_{1}\omega^{1}+U_{2}\omega^{2})\wedge\omega. \tag{25}\]
where \(Q^{1},Q^{2},U^{1}\) and \(U^{2}\) are functions on \(Y\).
Equation \(d(d\varphi^{1})=0\) obtained differentiating \(\Phi^{1}\) above implies
\[dQ^{1}-6Q^{1}w+4Q^{1}\varphi=S^{1}\omega+U_{2}\omega^{1}+T^{1}\omega^{2}, \tag{26}\]
where we introduced functions \(S^{1}\) and \(T^{1}\).
Analogously, equation \(d(d\varphi^{2})=0\) obtained differentiating \(\Phi^{2}\) above implies
\[dQ^{2}+6Q^{2}w+4Q^{2}\varphi=S^{2}\omega-U_{1}\omega^{2}+T^{2}\omega^{1}, \tag{27}\]
where we introduced new functions \(S^{2}\) and \(T^{2}\).
Equation \(d(d\psi)=0\) obtained from 25 implies
\[dU_{1}+5U_{1}\varphi+3U_{1}w+Q^{2}\varphi^{1}=A\omega+B\omega^{1}+C\omega^{2} \tag{28}\]
and
\[dU_{2}+5U_{2}\varphi-3U_{2}w-Q^{1}\varphi^{2}=D\omega+C\omega^{1}+E\omega^{2}. \tag{29}\]
### Embedding \(\iota_{2}:Y_{2}\to Y\)
The goal now is to obtain an immersion \(\iota_{2}:Y_{2}\to Y\). One can construct the bundle \(Y_{2}\) using the bundle \(E\) of contact forms as a first step. Then \(Y_{2}\) is a coframe bundle over \(E\) obtained by the tautological forms \(\omega,\omega^{1},\omega^{2}\) corresponding to forms \(\theta,\theta^{1},\theta^{2}\) satisfying \(d\theta=\theta^{1}\wedge\theta^{2}+2\varphi\wedge\omega\) with an appropriate \(\varphi\).
By abuse of language again, as for \(\iota_{1}:Y_{1}\to Y_{2}\), we may write the connection forms of \(Y_{2}\) and \(Y\) using the same letters and then obtain:
**Proposition 3.4**: _There exists a unique embedding \(\iota_{2}:Y_{2}\to Y\) satisfying \(\iota_{2}^{*}(\omega)=\omega\), \(\iota_{2}^{*}(\omega^{1})=\omega^{1}\), \(\iota_{2}^{*}(\omega^{2})=\omega^{2}\), \(\iota_{2}^{*}(\varphi)=\varphi\)._
_Proof._ As \(Y_{2}\) and \(Y\) are both coframe bundles over the line bundle \(E\) of all contact forms, we can assume that the embedding projects to the identity map on \(E\). Over \(E\), \(Y\) is a coframe bundle with structure group
\[\left\{\left(\begin{array}{ccc}a&c&e\\ 0&\frac{1}{a^{2}}&f\\ 0&0&a\end{array}\right)\right\}.\]
In order to determine the embedding we need to choose functions \(c\), \(e\) and \(f\). The diagonal part corresponds to the fiber of \(Y_{2}\) and does not need to be fixed. Consider then a map from \(M\) to the group above given by
\[h=\left(\begin{array}{ccc}1&c&e\\ 0&1&f\\ 0&0&1\end{array}\right).\]
Recall that
\[{R_{h}}^{*}\pi=h^{-1}d\,h+Ad_{h^{-1}}\pi.\]
We obtain, neglecting the terms of the connection of \(Y\) which are not relevant, the following transformation formulae. Remark that the term \(h^{-1}d\,h\) does not appear in the transformation of these components.
\[\tilde{\omega} =\omega\] \[\tilde{\omega}^{1} =\omega^{1}-f\,\omega\] \[\tilde{\omega}^{2} =\omega^{2}+c\,\omega\] \[\tilde{\varphi} =\varphi-\frac{1}{2}c\,\omega^{1}-f\,\omega^{2}+(\frac{1}{2}cf-e)\,\omega \tag{30}\]
The forms \(\omega^{1}\) and \(\omega^{2}\) defined at each point of \(Y_{2}\) define corresponding forms \(\omega^{1}\) and \(\omega^{2}\) in \(Y\). We observe then that the functions \(f\) and \(c\) must be zero in order that \(\iota_{2}^{*}(\omega^{1})=\omega^{1}\), \(\iota_{2}^{*}(\omega^{2})=\omega^{2}\). Finally the form \(\varphi\) on \(Y_{2}\) defines a corresponding form on \(Y\) and we conclude that \(e=0\) if we impose that \(\iota_{2}^{*}(\varphi)=\varphi\).
\(\Box\)
#### 3.4.1 The curvature of \(Y\) in terms of the curvature of \(Y_{2}\)
We obtain the following equations by pulling back to \(Y_{2}\) the structure equations on \(Y\) through the embedding \(\iota_{2}\):
\[d\omega=2\varphi\wedge\omega+\omega^{1}\wedge\omega^{2}\] \[d\omega^{1}=\varphi\wedge\omega^{1}+3\tilde{w}\wedge\omega^{1}+ \omega\wedge\varphi^{1} \tag{31}\] \[d\omega^{2}=\varphi\wedge\omega^{2}-3\tilde{w}\wedge\omega^{2}- \omega\wedge\varphi^{2}\] (32) \[d\varphi=\omega\wedge\tilde{\psi}-\frac{1}{2}(\varphi^{2}\wedge \omega^{1}+\varphi^{1}\wedge\omega^{2})\] (33) \[d\tilde{w}=-\frac{1}{2}\varphi^{2}\wedge\omega^{1}+\frac{1}{2} \varphi^{1}\wedge\omega^{2}\] (34) \[d\varphi^{1}+3\varphi^{1}\wedge\tilde{w}+\omega^{1}\wedge\tilde {\psi}+\varphi\wedge\varphi^{1}=Q^{1}\omega\wedge\omega^{2}\] (35) \[d\varphi^{2}-3\varphi^{2}\wedge\tilde{w}-\omega^{2}\wedge\tilde {\psi}+\varphi\wedge\varphi^{2}=Q^{2}\omega\wedge\omega^{1}\] \[d\tilde{\psi}-\varphi^{1}\wedge\varphi^{2}-2\varphi\wedge\tilde {\psi}=(U_{1}\omega^{1}+U_{2}\omega^{2})\wedge\omega.\]
In the formulae above we write the pull back of any form \(\alpha\) defined on \(Y\) using the same notation \(\alpha\), except for the pull backs \(\tilde{w}=\iota_{2}^{*}w\) and \(\tilde{\psi}=\iota_{2}^{*}\psi\). We now compare with the structure equations of \(Y_{2}\) and obtain expressions for \(Q^{1}\) and \(Q^{2}\):
\[d\omega=2\varphi\wedge\omega+\omega^{1}\wedge\omega^{2}\] \[d\omega^{1}=\varphi\wedge\omega^{1}+3w\wedge\omega^{1}+\omega \wedge\tau^{1}\] \[d\omega^{2}=\varphi\wedge\omega^{2}-3w\wedge\omega^{2}-\omega \wedge\tau^{2}\]
with \(\tau^{1}\wedge\omega^{2}=\tau^{2}\wedge\omega^{1}=0\).
Recall also that \(d\varphi=\omega\wedge\psi\) with \(\psi=A\omega^{1}+B\omega^{2}\) and \(dw=C\omega\wedge\omega^{1}+D\omega\wedge\omega^{2}+S\omega^{1}\wedge\omega^{2}\), where \(A,B,C,D\) and \(S\) are functions on \(Y_{2}\).
* The differences between the pull back equations and the structure equations for \(d\omega^{1}\) and \(d\omega^{2}\) give, respectively, \[3(\tilde{w}-w)\wedge\omega^{1}+\omega\wedge(\varphi^{1}-\tau^{1})=0\] and \[3(\tilde{w}-w)\wedge\omega^{2}+\omega\wedge(\varphi^{2}-\tau^{2})=0.\] Therefore, by Cartan's lemma \[\tilde{w}-w=m\omega\] for a function \(m\) to be determined and \[\varphi^{1}-\tau^{1}=-3m\omega^{1}+n\omega\text{ and }\varphi^{2}-\tau^{2}=-3m\omega^{2}+P\omega,\] where \(n\) and \(P\) are functions to be determined.
* The difference between the equation \(d\varphi=\omega\wedge\psi\) and the pull back equation for \(d\varphi\) above is \[\omega\wedge(\tilde{\psi}-\psi)-\frac{1}{2}(\varphi^{2}\wedge\omega^{1}+ \varphi^{1}\wedge\omega^{2})=0.\] Substituting the expressions for \(\varphi^{1}\) and \(\varphi^{2}\) obtained in the item above we obtain \[\omega\wedge(\tilde{\psi}-A\omega^{1}-B\omega^{2})-\frac{1}{2}((\tau^{2}-3m \omega^{2}+P\omega)\wedge\omega^{1}+(\tau^{1}-3m\omega^{1}+n\omega)\wedge \omega^{2})=0,\] which simplifies to \[\omega\wedge(\tilde{\psi}-A\omega^{1}-B\omega^{2}-\frac{P}{2}\omega^{1}-\frac {n}{2}\omega^{2})=0.\] This implies that \[\tilde{\psi}=(A+\frac{P}{2})\omega^{1}+(B+\frac{n}{2})\omega^{2}+q\omega,\] where \(q\) is a function to be determined.
* The difference between the equations for \(d\tilde{w}\) and \(dw\) is then \[d(m\omega)=d\tilde{w}-dw=-\frac{1}{2}\varphi^{2}\wedge\omega^{1}+\frac{1}{2} \varphi^{1}\wedge\omega^{2}-S\omega^{1}\wedge\omega^{2}-C\omega\wedge\omega^{1 }-D\omega\wedge\omega^{2}.\] Substituting in this formula the expressions for \(\varphi^{1}\) and \(\varphi^{2}\) in terms of the enriched structure we obtain \[dm\wedge\omega+m(2\varphi\wedge\omega+\omega^{1}\wedge\omega^{2})=(-S-3m) \omega^{1}\wedge\omega^{2}+(-\frac{P}{2}-C)\omega\wedge\omega^{1}+(\frac{n}{2} -D)\omega\wedge\omega^{2}.\]
That is, \[(S+4m)\omega^{1}\wedge\omega^{2}+\omega\wedge\left(-dm-2m\varphi+(\frac{P}{2}+C) \omega^{1}-(\frac{n}{2}-D)\omega^{2}\right)=0.\] Therefore \[m=-S/4\] and so \[\frac{1}{4}dS+\frac{1}{2}S\varphi+(\frac{P}{2}+C)\omega^{1}-(\frac{n}{2}-D) \omega^{2}+E\omega=0\] where \(E\) is a function determined by the derivative of \(S\). Writing \(dS+2S\varphi=S_{0}\omega+S_{1}\omega^{1}+S_{2}\omega^{2}\) and comparing with the above expression, we obtain \[S_{0}=-4E,\ \ S_{1}=-2(P+2C),\ \ S_{2}=-2(-n+2D).\] (36) Therefore the functions \(P\) and \(n\) are determined. It remains to determine the function \(q\).
* Computing \(d\varphi^{1}=d(\frac{3S}{4}\omega^{1}+n\omega+\tau^{1})\) and equating to the structure equation \(d\varphi^{1}=-3\varphi^{1}\wedge\tilde{w}-\omega^{1}\wedge\tilde{\psi}-\varphi \wedge\varphi^{1}+Q^{1}\omega\wedge\omega^{2}\) we obtain, after a computation writing \[dn+3n(\varphi-w)=n_{0}\omega+n_{1}\omega^{1}+n_{2}\omega^{2},\] (37) \[n_{1}=-\tau_{2}^{1}\tau_{1}^{2}-q+\frac{9}{16}S^{2}-3E\ \,n_{2}=-Q^{1}+\frac{3}{2}S\tau_{2}^{1}+\tau_{20}^{1}.\] (38) Recalling that \(n=S_{2}/2+4D\), \(S\) and \(D\) are determined by \(Y_{2}\), we obtained an expression for \(Q^{1}\) in terms of the enriched structure. Note also that \(q\) is determined by the first equation.
* Analogously, computing \(d\varphi^{2}=d(\frac{3S}{4}\omega^{2}+P\omega+\tau^{2})\) and equating to the structure equation \(d\varphi^{2}-3\varphi^{2}\wedge\tilde{w}-\omega^{2}\wedge\tilde{\psi}+\varphi\wedge\varphi^{2}=Q^{2}\omega\wedge\omega^{1}\) we obtain, after a computation, writing \(dP+3P(\varphi+w)=P_{0}\omega+P_{1}\omega^{1}+P_{2}\omega^{2}\), \[P_{1}=-Q^{2}-\frac{3}{2}S\tau_{1}^{2}+\tau_{10}^{2}\ \,P_{2}=\tau_{2}^{1}\tau_{1}^{2}+q-\frac{9}{16}S^{2}-3E.\] (39) Recalling that \(P=-S_{1}/2-2C\), \(S\) and \(C\) are determined by \(Y_{2}\), we obtained an expression for \(Q^{2}\) in terms of the enriched structure.
The following proposition follows directly from the computations above.
**Proposition 3.5**: _Suppose \(Y_{2}\) with its adapted Cartan connection has null torsion, that is, satisfies \(\tau^{1}=\tau^{2}=0\) and \(dw=S\omega^{1}\wedge\omega^{2}\). Then_
\[Q^{2}=\frac{1}{2}S_{11}\]
_and_
\[Q^{1}=-\frac{1}{2}S_{22},\]
_where \(S_{11}\) is the \(\omega^{1}\) component of the form \(dS_{1}\) and \(S_{22}\) is the \(\omega^{2}\) component of the form \(dS_{2}\)._
_Proof._ From Proposition 2.6, null torsion and the condition that \(dw=S\omega^{1}\wedge\omega^{2}\) (that is, \(C=D=0\)) imply that \(P=-S_{1}/2\) and \(n=S_{2}/2\). The result then follows from the previous formulas. \(\Box\)
### The embedding \(Y_{1}\to Y\)
Recall that \(Y_{1}\) is a coframe bundle of forms \((\theta,\theta^{1},\theta^{2})\) over \(M\). Choosing a local section, the pullback forms over \(M\) are also denoted by \((\theta,\theta^{1},\theta^{2})\). We recall here an embedding \(Y_{1}\to Y\) obtained in [FV].
A section \((\theta,\theta^{1},\theta^{2})\) of the coframe bundle \(Y_{1}\) clearly defines a path geometry on \(M\). We obtain then a line bundle \(E\) and a principal bundle \(Y\) with its associated Cartan connection. Also, \((\theta,\theta^{1},\theta^{2})\) defines, up to the action by the group of matrices
\[\left(\begin{array}{ccc}1&c&e\\ 0&1&f\\ 0&0&1\end{array}\right)\]
sections of the tautological forms \((\omega,\omega^{1},\omega^{2})\) on \(Y\). In order to define a canonical section we use the following
**Proposition 3.6**: _Let \(\theta,\theta^{1},\theta^{2}\) be a coframe section of \(Y_{1}\) and consider the principal bundle \(Y\) defined by this coframe. Then there exists a unique section \(s:M\to Y\) such that \(s^{*}\omega=\theta\), \(s^{*}\omega^{1}=\theta^{1}\), \(s^{*}\omega^{2}=\theta^{2}\) and \(s^{*}\varphi=0\)._
It is easy to verify that this definition is equivariant with respect to the action of \(G_{1}\), the one-parameter group of the strict contact structure. This then defines the embedding \(Y_{1}\to Y\).
### The equivalence problem for a second order differential equation
In this section we recall the treatment by Cartan of the point equivalence between second order differential equations. It is included in order to fix conventions and to compare with the invariants defined in the next section.
Recall that for a second order differential equation we define
\[\theta=dy-pdx,\]
and
\[L_{1}=\ker\{dp-Fdx\}\cap\ker\{dy-pdx\},\ \ \ L_{2}=\ker dx\cap\ker dy.\]
For \(Z^{1}=dx\) and \(Z^{2}=dp-Fdx\), one has then \(d\theta=Z^{1}\wedge Z^{2}\). The general forms defining the lines at each tangent space may be described by
\[\omega^{1}=a_{1}Z^{1},\ \ \omega^{2}=a_{2}Z^{2},\omega=a_{1}a_{2}\theta\]
where \(a_{1},a_{2}\) are non-vanishing positive functions on the manifold, so that we have always
\[2\varphi\wedge\omega+\omega^{1}\wedge\omega^{2}=d\omega=(\frac{da_{1}}{a_{1}}+ \frac{da_{2}}{a_{2}})\wedge\omega+a_{1}Z^{1}\wedge a_{2}Z^{2},\]
and we obtain comparing with 4
\[\varphi=\frac{1}{2}(\frac{da_{1}}{a_{1}}+\frac{da_{2}}{a_{2}})+r\omega.\]
One computes
\[(\varphi+3w)\wedge\omega^{1}+\omega\wedge\tau^{1}=d\omega^{1}=da_{1}\wedge Z^{ 1}+a_{1}.0=\left(\frac{1}{2}(\frac{da_{1}}{a_{1}}+\frac{da_{2}}{a_{2}})+\frac{ 1}{2}(\frac{da_{1}}{a_{1}}-\frac{da_{2}}{a_{2}})\right)\wedge\omega^{1} \tag{40}\]
and obtain
\[3w=\frac{1}{2}(\frac{da_{1}}{a_{1}}-\frac{da_{2}}{a_{2}})-r\omega+s\omega^{1}, \tau_{2}^{1}=0.\]
Observe that, if \(f\) is a function of \((x,y,p)\), then
\[df=\frac{1}{a_{1}}\frac{df}{dx}\omega^{1}+\frac{1}{a_{2}}f_{p}\omega^{2}+\frac {1}{a_{1}a_{2}}f_{y}\omega,\]
where \(\frac{df}{dx}=f_{x}+f_{y}p+f_{p}F\). Also
\[\begin{array}{lll}d\omega^{2}&=&\frac{da_{2}}{a_{2}}\wedge\omega^{2}+a_{2}( -\frac{1}{a_{2}}F_{p}\omega^{2}-\frac{1}{a_{1}a_{2}}F_{y}\omega)\wedge\frac{1 }{a_{1}}\omega^{1}\\ &=&(\frac{1}{2}(\frac{da_{1}}{a_{1}}+\frac{da_{2}}{a_{2}})-\frac{1}{2}(\frac{ da_{1}}{a_{1}}-\frac{da_{2}}{a_{2}}))\wedge\omega^{2}+\frac{1}{a_{1}}F_{p}\omega^{1} \wedge\omega^{2}-\frac{1}{(a_{1})^{2}}F_{y}\omega\wedge\omega^{1}\\ &=&(\varphi-3w-2r\omega+(s+\frac{1}{a_{1}}F_{p})\omega^{1})\wedge\omega^{2}- \omega\wedge\frac{F_{y}}{(a_{1})^{2}}\omega^{1}\end{array} \tag{41}\]
Comparing with \(d\omega^{2}=(\varphi-3w)\wedge\omega^{2}-\omega\wedge\tau^{2}\) we obtain \(r=0\), \(s=-\frac{F_{p}}{a_{1}}\), \(\tau_{1}^{2}=\frac{F_{y}}{a_{1}^{2}}\),
\[\varphi=\frac{1}{2}(\frac{da_{1}}{a_{1}}+\frac{da_{2}}{a_{2}}),3w=\frac{1}{2}( \frac{da_{1}}{a_{1}}-\frac{da_{2}}{a_{2}})-\frac{F_{p}}{a_{1}}\omega^{1}.\]
From \(d\varphi=0\), we obtain \(\psi=0\), or \(A=B=0\), and from \(\tau_{2}^{1}=0\) it follows \(D=\tau_{20}^{1}=\tau_{22}^{1}=0\).
From above we get
\[3dw=-dF_{p}\wedge\frac{1}{a_{1}}\omega^{1}=-(\frac{F_{pp}}{a_{2}}\omega^{2}+ \frac{F_{py}}{a_{1}a_{2}}\omega)\wedge\frac{1}{a_{1}}\omega^{1}\]
and comparing with \(dw=C\omega\wedge\omega^{1}+S\omega^{1}\wedge\omega^{2}\) we obtain
\[3C=-\frac{F_{py}}{a_{1}^{2}a_{2}},\quad 3S=\frac{F_{pp}}{a_{1}a_{2}}.\]
Also
\[d\tau_{1}^{2}=d(\frac{F_{y}}{a_{1}^{2}})=-\frac{2}{a_{1}^{2}}(\varphi+3w+ \frac{F_{p}}{a_{1}}\omega^{1})F_{y}+\frac{1}{a_{1}^{2}}(\frac{1}{a_{1}}\frac{ dF_{y}}{dx}\omega^{1}+\frac{1}{a_{1}a_{2}}F_{yy}\omega+\frac{1}{a_{2}}F_{yp}\omega^{2}).\]
Comparing with \(d\tau_{1}^{2}=-2\tau_{1}^{2}(\varphi+3w)-3C\omega^{2}+\tau_{10}^{2}\omega+\tau_{1 1}^{2}\omega^{1}\) we obtain
\[\tau_{11}^{2}=\frac{1}{a_{1}^{3}}(-2F_{p}F_{y}+\frac{dF_{y}}{dx}),\quad\tau_{10 }^{2}=\frac{1}{a_{1}^{3}a_{2}}F_{yy}.\]
Now
\[3dS=-2(3S\varphi)+\frac{1}{a_{1}a_{2}}(\frac{1}{a_{1}}\frac{dF_{pp}}{dx}\omega^{1}+\frac{1}{a_{2}}F_{ppp}\omega^{2}+\frac{1}{a_{1}a_{2}}F_{ppy}\omega)\]
and comparing with \(dS=-2S\varphi+S_{0}\omega+S_{1}\omega^{1}+S_{2}\omega^{2}\) we obtain
\[S_{1}=\frac{1}{3a_{1}^{2}a_{2}}\frac{dF_{pp}}{dx},\quad S_{2}=\frac{1}{3a_{1}a_{2}^{2}}F_{ppp},\quad S_{0}=\frac{1}{3a_{1}^{2}a_{2}^{2}}F_{ppy}.\]
It follows from \(S_{1}=-2P-4C\) that \(6P=\frac{1}{a_{1}^{2}a_{2}}(4F_{yp}-\frac{dF_{pp}}{dx})\). Then
\[6dP=-\frac{1}{a_{1}^{2}a_{2}}(3\varphi+3w+\frac{F_{p}}{a_{1}}\omega^{1})(4F_{ yp}-\frac{dF_{pp}}{dx})+\frac{4}{a_{1}^{2}a_{2}}(\frac{1}{a_{1}}\frac{dF_{yp}}{ dx}\omega^{1}+\frac{1}{a_{2}}F_{ypp}\omega^{2}+\frac{1}{a_{1}a_{2}}F_{yyp}\omega)\]
\[-\frac{1}{a_{1}^{2}a_{2}}(\frac{1}{a_{1}}\frac{d^{2}F_{pp}}{dx^{2}}\omega^{1} +\frac{1}{a_{2}}(\frac{dF_{ppp}}{dx}+F_{ypp}F_{p})\omega^{2}+\frac{1}{a_{1}a _{2}}(\frac{dF_{ppp}}{dx}+F_{ppp}F_{y})\omega)\]
Comparing with \(dP=-(3\varphi+3w)P+P_{0}\omega+P_{1}\omega^{1}+P_{2}\omega^{2}\) we obtain
\[P_{0}=\frac{1}{6a_{1}^{3}a_{2}^{2}}(4F_{ypp}-\frac{dF_{ppp}}{dx}-F_{ppp}F_{y} ),\quad\ P_{1}=\frac{1}{6a_{1}^{3}a_{2}}(-4F_{p}F_{yp}+F_{p}\frac{dF_{pp}}{dx }+4\frac{dF_{yp}}{dx}-\frac{d^{2}F_{pp}}{dx^{2}}),\]
\[P_{2}=\frac{1}{6a_{1}^{2}a_{2}^{2}}(4F_{ypp}-\frac{dF_{ppp}}{dx}-F_{ypp}-F_{ppp }F_{p}).\]
From \(Q^{2}=\tau_{10}^{2}-\frac{3}{2}S\tau_{1}^{2}-P_{1}\) it follows
\[Q^{2}=\frac{1}{6a_{1}^{3}a_{2}}(6F_{yy}-3F_{y}F_{pp}+4F_{p}F_{yp}-F_{p}\frac{ dF_{pp}}{dx}-4\frac{dF_{yp}}{dx}+\frac{d^{2}F_{pp}}{dx^{2}}). \tag{42}\]
It follows from \(S_{2}=2n-4D\) that \(6n=\frac{1}{a_{1}a_{2}^{2}}F_{ppp}\). Then
\[6dn=-\frac{1}{a_{1}a_{2}^{2}}(3\varphi-3w-\frac{F_{p}}{a_{1}}\omega^{1})F_{ppp}+\frac{1}{a_{1}a_{2}^{2}}(\frac{1}{a_{1}}\frac{dF_{ppp}}{dx}\omega^{1}+\frac{1}{a_{2}}F_{pppp}\omega^{2}+\frac{1}{a_{1}a_{2}}F_{pppy}\omega).\]
Comparing with \(dn=-n(3\varphi-3w)+n_{0}\omega+n_{1}\omega^{1}+n_{2}\omega^{2}\) we obtain
\[n_{0}=\frac{1}{6a_{1}^{2}a_{2}^{3}}F_{pppy},\quad n_{1}=\frac{1}{6a_{1}^{2}a_{2}^{2}}(\frac{dF_{ppp}}{dx}+F_{p}F_{ppp}),\quad n_{2}=\frac{1}{6a_{1}a_{2}^{3}}F_{pppp}.\]
From \(Q^{1}=\tau_{20}^{1}+\frac{3}{2}S\tau_{2}^{1}-n_{2}\) it follows
\[Q^{1}=-\frac{1}{6a_{1}a_{2}^{3}}F_{pppp}. \tag{43}\]
Formulas 43 and 42 are in [Car].
## 4 A global invariant
In this section we define the global invariant for path structures. It has a very similar definition to the global invariant obtained in [FV] in the context of a structure defined on the complexified tangent space of a 3-manifold. But we make the definition explicit in the case of path structures for the sake of clarity and to accommodate differences of conventions with our previous paper.
Define the second Chern class of the bundle \(Y\) with connection form \(\pi\) as
\[c_{2}(Y,\pi)=\frac{1}{8\pi^{2}}\mbox{tr}\,(\Pi\wedge\Pi).\]
\[\left(\begin{array}{ccc}0&\Phi^{2}&\Psi\\ 0&0&\Phi^{1}\\ 0&0&0\end{array}\right)\wedge\left(\begin{array}{ccc}0&\Phi^{2}&\Psi\\ 0&0&\Phi^{1}\\ 0&0&0\end{array}\right)=\left(\begin{array}{ccc}0&0&\Phi^{1}\wedge\Phi^{2} \\ 0&0&0\\ 0&0&0\end{array}\right).\]
As \(\Phi^{1}=Q^{1}\omega\wedge\omega^{2}\) and \(\Phi^{2}=Q^{2}\omega\wedge\omega^{1}\) we have \(\Pi\wedge\Pi=0\) and therefore
\[c_{2}(Y,\pi)=0.\]
**Definition 4.1**: _The transgression form is defined as_
\[TC_{2}(\pi)=\frac{1}{8\pi^{2}}\left(\mbox{tr}\,(\pi\wedge\Pi)+\frac{1}{3} \mbox{tr}\,(\pi\wedge\pi\wedge\pi)\right)=\frac{1}{24\pi^{2}}\mbox{tr}\,(\pi \wedge\pi\wedge\pi).\]
**Lemma 4.1**: _The transgression form is closed, that is, \(d\,TC_{2}(\pi)=c_{2}(Y,\pi)=0\)._
_Proof_. We compute first, using the expressions of \(\Phi^{1}\), \(\Phi^{2}\) and \(\Psi\), that
\[\mbox{tr}\,(\Pi\wedge\pi)=\Phi^{2}\wedge\omega^{1}+\Phi^{1}\wedge\omega^{2}+ \Psi\wedge\omega=0.\]
Differentiating the curvature form we obtain \(d\,\Pi=\Pi\wedge\pi-\pi\wedge\Pi\) and therefore
\[0=d\,\mbox{tr}\,(\Pi\wedge\pi)=\mbox{tr}\,(d\,\Pi\wedge\pi+\Pi\wedge d\,\pi)= \mbox{tr}\,((\Pi\wedge\pi-\pi\wedge\Pi)\wedge\pi+\Pi\wedge(\Pi-\pi\wedge\pi))\]
\[=-\mbox{tr}\,(\pi\wedge\Pi\wedge\pi).\]
Note that \(\mbox{tr}\,(\alpha\wedge\beta)=(-1)^{kl}\mbox{tr}\,(\beta\wedge\alpha)\) if \(\alpha\) and \(\beta\) are two matrices of forms of degree \(k\) and \(l\) respectively. Therefore, computing
\[\frac{1}{3}d\,\mbox{tr}\,(\pi\wedge\pi\wedge\pi) =\mbox{tr}\,(d\,\pi\wedge\pi\wedge\pi)=\mbox{tr}\,((\Pi-\pi\wedge \pi)\wedge\pi\wedge\pi)\] \[=-\mbox{tr}\,(\pi\wedge\pi\wedge\pi\wedge\pi)=0.\]
\(\Box\)
**Definition 4.2**: _Suppose that the fiber bundle \(Y\to M\) is trivial and let \(s:M\to Y\) be a section. We then define_
\[\mu=\int_{M}s^{*}TC_{2}(\pi)=\frac{1}{24\pi^{2}}\int_{M}s^{*}\mathrm{tr}\,(\pi \wedge\pi\wedge\pi).\]
In principle that integral depends on the section but the following proposition shows that the integrand
\[s^{*}TC_{2}(\pi)\]
defines an element in \(H^{3}(M,\mathbb{R})\) which does not depend on the section.
**Proposition 4.3**: _Suppose \(s\) and \(\tilde{s}\) are two sections. Then_
\[\tilde{s}^{*}TC_{2}(\pi)-s^{*}TC_{2}(\pi)=-\frac{1}{8\pi^{2}}d\,s^{*}\mathrm{ tr}\,(h^{-1}\pi\wedge d\,h).\]
_where \(h:M\to H\) is a map such that \(\tilde{s}=R_{h}\circ s\)._
_Proof_. Fix the section \(s\). Then there exists a map \(h:M\to H\) such that \(\tilde{s}=R_{h}\circ s\). We have then
\[\tilde{s}^{*}TC_{2}(\pi)=\frac{1}{24\pi^{2}}s^{*}\mathrm{tr}\,(R_{h}^{*}\pi \wedge R_{h}^{*}\pi\wedge R_{h}^{*}\pi).\]
From the formula
\[R_{h}\,^{*}\pi=h^{-1}d\,h+Ad_{h^{-1}}\pi,\]
we obtain
\[\mathrm{tr}\,(R_{h}^{*}\pi\wedge R_{h}^{*}\pi\wedge R_{h}^{*}\pi)=\]
\[\mathrm{tr}\,\left(h^{-1}d\,h\wedge h^{-1}d\,h\wedge h^{-1}d\,h+3h^{-1}d\,h \wedge h^{-1}\pi\wedge d\,h+3h^{-1}\pi\wedge\pi\wedge d\,h+\pi\wedge\pi\wedge\pi\right)\]
\[=\mathrm{tr}\,\left(-h^{-1}d\,h\wedge d\,h^{-1}\wedge d\,h-3d\,h^{-1}\wedge \pi\wedge d\,h+3h^{-1}\pi\wedge\pi\wedge d\,h+\pi\wedge\pi\wedge\pi\right).\]
Observe that the first term in the right hand side vanishes. Indeed, \(d\,h^{-1}\wedge d\,h\) is upper triangular with null diagonal. Moreover \(h^{-1}d\,h\) is upper triangular and therefore the Lie algebra valued form also has zero diagonal. Therefore
\[\mathrm{tr}\,(h^{-1}d\,h\wedge d\,h^{-1}\wedge d\,h)=0.\]
By the same argument \(\mathrm{tr}\,\left(h^{-1}\Pi\wedge d\,h\right)=0\).
Now we show that
\[d\,\mathrm{tr}\,(h^{-1}\pi\wedge d\,h)=\mathrm{tr}\,\left(d\,h^{-1}\wedge\pi \wedge d\,h-h^{-1}\pi\wedge\pi\wedge d\,h\right).\]
Compute \(d\mathrm{tr}\,(h^{-1}\pi\wedge d\,h)=\mathrm{tr}\,\left(d\,h^{-1}\wedge\pi \wedge d\,h+h^{-1}d\pi\wedge d\,h\right)\)
\[=\mathrm{tr}\,\left(d\,h^{-1}\wedge\pi\wedge d\,h+h^{-1}(\Pi-\pi\wedge\pi) \wedge d\,h\right),\]
which gives, using that \(\mathrm{tr}\,\left(h^{-1}\Pi\wedge d\,h\right)=0\),
\[d\mathrm{tr}\,(h^{-1}\pi\wedge d\,h)=\mathrm{tr}\,\left(d\,h^{-1}\wedge\pi \wedge d\,h-h^{-1}\pi\wedge\pi\wedge d\,h\right).\]
We obtained therefore that
\[\tilde{s}^{*}TC_{2}(\pi)=s^{*}TC_{2}(\pi)-\frac{1}{8\pi^{2}}d\,s^{*}{\rm tr}\,(h^{-1}\pi\wedge d\,h)\]
and this completes the proof of the proposition.
\(\Box\)
Let \(\mu(t)\) be the invariant defined as a function of a parameter describing the deformation of the structure on a closed manifold \(M\) and define \(\delta\mu=\frac{d}{dt}\mu(0)\). One can interpret the flat structures as giving critical points of the global invariant \(\mu\) through the first variation formula, for which we refer to [FV] for a proof.
**Proposition 4.4**: \[\delta\mu=-\frac{1}{4\pi^{2}}\int_{M}s^{*}{\rm tr}\,(\dot{\pi}\wedge\Pi).\]
The global invariant can be computed most easily for a path structure induced by an enriched or strict path structure.
**Proposition 4.5**: _Let \(M\) be an enriched path structure and \(Y_{2}\to Y\) be the canonical embedding of the enriched geometry into the induced path geometry. Then_
\[8\pi^{2}\mu(Y)=(n_{1}-\frac{3}{4}S_{0}+2\tau_{2}^{1}\tau_{1}^{2})\omega\wedge \omega^{1}\wedge\omega^{2}+\omega\wedge\omega^{1}\wedge(-2A\varphi-(\frac{3}{2 }S_{1}+6C)w)\]
\[+\omega\wedge\omega^{2}\wedge(-2B\varphi-(\frac{3}{2}S_{2}+6D)w)-\frac{9}{2}Sw\wedge\omega^{1}\wedge\omega^{2}.\]
_Proof_. One computes first the following formula.
\[\frac{1}{3}{\rm tr}\,(\pi\wedge\pi\wedge\pi)=(2\omega\wedge\varphi-\omega^{1} \wedge\omega^{2})\wedge\psi-\omega\wedge\varphi^{1}\wedge\varphi^{2}+\omega^ {1}\wedge(\varphi+3w)\wedge\varphi^{2}+\omega^{2}\wedge(\varphi-3w)\wedge \varphi^{1}.\]
Therefore using the embedding of \(Y_{2}\to Y\) in the previous section we obtain by a computation:
\[\frac{1}{3}{\rm tr}\,(\pi\wedge\pi\wedge\pi)=(n_{1}-\frac{3}{4}S_{0}+2\tau_{2 }^{1}\tau_{1}^{2})\omega\wedge\omega^{1}\wedge\omega^{2}+\omega\wedge\omega^{ 1}\wedge(-2A\varphi-(\frac{3}{2}S_{1}+6C)w)\]
\[+\omega\wedge\omega^{2}\wedge(-2B\varphi-(\frac{3}{2}S_{2}+6D)w)-\frac{9}{2}Sw\wedge\omega^{1}\wedge\omega^{2}.\]
\(\Box\)
Using the embedding of \(Y_{1}\to Y\) in the previous section we obtain by a similar computation:
**Proposition 4.6**: _Let \(M\) be a strict path structure and \(Y_{1}\to Y\) be the canonical embedding of the strict geometry into the induced path geometry. Then_
\[8\pi^{2}\mu(Y)=(n_{1}-\frac{3}{4}S_{0}+2\tau_{2}^{1}\tau_{1}^{2})\omega\wedge \omega^{1}\wedge\omega^{2}-w\wedge((\frac{3}{2}S_{1}+6C)\omega\wedge\omega^{ 1}+(\frac{3}{2}S_{2}+6D)\omega\wedge\omega^{2}+\frac{9}{2}S\omega^{1}\wedge \omega^{2}) \tag{44}\]
### The global invariant for second order differential equations on the torus
In this section we obtain formulas for the global invariant in the case of an ordinary differential equation defined on the torus. Recall that the projectivized cotangent bundle \(\pi:PT^{*}S\to S\) of a surface \(S\) is described locally by \((x,y,[p,q])\) where \((x,y)\) are local coordinates on the surface and \(pdx+qdy\) is a form at \((x,y)\). The Liouville form \(\theta\) on \(T^{*}S\) is defined to be the tautological form \(\theta(x,y,-pdx+qdy)=\pi^{*}(-pdx+qdy)\). It induces a contact distribution on \(PT^{*}S\), which in the chart \((x,y,p)\to(x,y,[p,1])\) is given by the kernel of the form \(dy-pdx\). On the chart \((x,y,q)\to(x,y,[1,q])\) the contact distribution is the kernel of \(dx-qdy\). One can also consider, fixing a metric on the surface, the unit cotangent bundle \((T^{*})^{1}S\) which is a double cover of \(PT^{*}S\).
The fibers of the bundle \(PT^{*}S\) give a canonical field of directions on the Liouville distribution. Observe that, in local coordinates \((x,y,p)\), it is described by \(\ker dx\cap\ker dy=\ker dx\cap\ker(dy-pdx)\). Choosing another direction on the contact distribution amounts to defining a form, in local coordinates \((x,y,p)\), \(dp-G(x,y,p)dx\), where \(G(x,y,p)\) is a function. On the chart \((x,y,q)\) one writes then
\[d(\frac{1}{q})-G(x,y,\frac{1}{q})dx=-\frac{1}{q^{2}}dq-G(x,y,\frac{1}{q})dx.\]
Therefore, the direction is determined by \(dq+G(x,y,\frac{1}{q})q^{3}dy\) (the contact distribution is \(\ker(dx-qdy)\)). In order to have a well defined direction we need that the function \(G(x,y,\frac{1}{q})q^{3}\) has a differentiable extension for \(q=0\).
**Definition 4.7**: _A second order differential equation on a surface \(S\) is a path structure on the projective cotangent bundle with contact structure induced by the Liouville form and such that one of the directions is given by the fibers._
It is convenient to introduce a new coordinate in the fiber \(\alpha\in]-\pi,\pi]\) through the formula \(p=\tan\alpha/2\). The contact distribution is defined by a globally defined form on the coordinates \((x,y,\alpha)\):
\[\theta=\cos\alpha/2dy-\sin\alpha/2dx.\]
The fiber direction is defined by the equations \(dx=dy=0\) which can also be described by, defining \(\theta^{1}=\sin\alpha/2dy+\cos\alpha/2dx\), as \(\ker\theta^{1}\cap\ker\theta\). The last form, which depends on a choice of a function, is
\[\theta^{2}=d\alpha-F(x,y,\alpha)\theta^{1}.\]
Observe that
\[d\theta=\frac{1}{2}\theta^{1}\wedge\theta^{2}.\]
The relation with the differential equation given on the chart \((x,y,p)\) is given writing
\[dp=\frac{1}{2}(1+p^{2})d\alpha\]
and therefore, as \(dy=pdx\) in that chart,
\[d\alpha-F(x,y,\alpha)\theta^{1}=\frac{2}{1+p^{2}}dp-F(x,y,2\arctan p)(\sin\alpha/ 2dy+\cos\alpha/2dx)\]
\[=\frac{2}{1+p^{2}}dp-F(x,y,2\arctan p)(\sin\alpha/2.pdx+\cos\alpha/2dx)\]
\[=\frac{2}{1+p^{2}}dp-F(x,y,2\arctan p)(\sin\alpha/2.\tan\alpha/2+\cos\alpha/2 )dx.\]
and recalling that \(\cos\alpha/2=\frac{1}{\sqrt{1+p^{2}}}\),
\[=\frac{2}{1+p^{2}}dp-F(x,y,2\arctan p)(1+p^{2})^{1/2}dx.\]
Therefore \(2G(x,y,p)=F(x,y,2\arctan p)(1+p^{2})^{3/2}\).
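The trigonometric simplification in the last step can be checked symbolically. The following SymPy snippet (an illustrative verification, not part of the original text) works on the branch where \(\cos\alpha/2>0\):

```python
import sympy as sp

p = sp.symbols('p', real=True)
alpha = 2 * sp.atan(p)  # the change of coordinates p = tan(alpha/2)

# sin(alpha/2)*tan(alpha/2) + cos(alpha/2) should equal sqrt(1 + p^2)
lhs = sp.sin(alpha / 2) * sp.tan(alpha / 2) + sp.cos(alpha / 2)
print(sp.simplify(lhs - sp.sqrt(1 + p**2)))  # prints 0
```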
#### 4.1.1 The strict and enriched structure of a differential equation on the torus
Here we will work with a double cover of the projective cotangent bundle of the torus. We define the path structure associated to a differential equation on the torus through a strict path structure defined by
\[\theta=\cos\alpha dy-\sin\alpha dx.\]
\[\theta^{1}=\sin\alpha dy+\cos\alpha dx\]
and
\[\theta^{2}=d\alpha-F(x,y,\alpha)\theta^{1}.\]
Here \(F(x,y,\alpha)\) is a function defined on the torus. Observe that
\[d\theta=\theta^{1}\wedge\theta^{2}.\]
In the following we will write, for a function \(f:T^{3}\rightarrow\mathbb{R}\), defined on the torus,
\[df=f_{0}\theta+f_{1}\theta^{1}+f_{2}\theta^{2},\]
so that \(f_{x}=-f_{0}\sin\alpha+(f_{1}-f_{2}F)\cos\alpha\), \(f_{y}=f_{0}\cos\alpha+(f_{1}-f_{2}F)\sin\alpha\) and \(f_{\alpha}=f_{2}\). Compute
\[d\theta^{1}=(\theta^{2}+F\theta^{1})\wedge\theta\]
and
\[d\theta^{2}=((F_{0}-F^{2})\theta^{1}-F\theta^{2})\wedge\theta+F_{2}\theta^{1} \wedge\theta^{2}.\]
Consider now the enriched structure defined by \(\theta,\theta^{1}\) and \(\theta^{2}\) and the tautological forms \(\omega=a_{1}a_{2}\theta,\omega^{1}=a_{1}\theta^{1}\) and \(\omega^{2}=a_{2}\theta^{2}\).
We first compute
\[d\omega=2\varphi\wedge\omega+\omega^{1}\wedge\omega^{2},\]
where \(\varphi=\frac{1}{2}(\frac{da_{1}}{a_{1}}+\frac{da_{2}}{a_{2}})\). Next we compute \(d\omega^{1}\) and \(d\omega^{2}\):
\[d\omega^{1}=\frac{da_{1}}{a_{1}}\wedge a_{1}\theta^{1}+a_{1}(\theta^{2}+F \theta^{1})\wedge\theta\]
\[d\omega^{2}=\frac{da_{2}}{a_{2}}\wedge a_{2}\theta^{2}+a_{2}(((F_{0}-F^{2}) \theta^{1}-F\theta^{2})\wedge\theta+F_{2}\theta^{1}\wedge\theta^{2})\]
Comparing with the structure equations of the enriched structure in 5 we may write
\[d\omega^{1}=\varphi\wedge\omega^{1}+3w\wedge\omega^{1}+\omega\wedge\tau^{1} \quad d\omega^{2}=\varphi\wedge\omega^{2}-3w\wedge\omega^{2}-\omega\wedge\tau^ {2}. \tag{45}\]
with
\[3w=\frac{1}{2}(\frac{da_{1}}{a_{1}}-\frac{da_{2}}{a_{2}})-\frac{1}{a_{1}a_{2}} F\omega-\frac{1}{a_{1}}F_{2}\omega^{1},\]
\[\tau^{1}=-\frac{1}{a_{2}^{2}}\omega^{2}\]
and
\[\tau^{2}=\frac{1}{a_{1}^{2}}(F_{0}-F^{2})\omega^{1}.\]
#### 4.1.2 Curvatures
We compute now \(d\varphi=\omega\wedge(A\omega^{1}+B\omega^{2})\) and \(dw=C\omega\wedge\omega^{1}+D\omega\wedge\omega^{2}+S\omega^{1}\wedge\omega^{2}\)(see Proposition 2.5). From \(d\varphi=0\) we obtain \(A=B=0\). Computing \(dw\) and comparing to the formula above we obtain
\[C=\frac{F_{1}-F_{20}+FF_{2}}{3a_{1}^{2}a_{2}},\ \ D=\frac{2F_{2}}{3a_{1}a_{2}^{2} },\ \ S=\frac{F_{22}-F}{3a_{1}a_{2}}.\]
In order to compute the global invariant we need to compute the coefficients \(S_{0},S_{1}\) and \(S_{2}\) in equation 17 (a Bianchi identity) : \(dS+2S\varphi=S_{0}\omega+S_{1}\omega^{1}+S_{2}\omega^{2}\). One obtains
\[S_{0}=\frac{F_{220}-F_{0}}{3a_{1}^{2}a_{2}^{2}},\ \ S_{1}=\frac{F_{221}-F_{1}}{3 a_{1}^{2}a_{2}},\ \ S_{2}=\frac{F_{222}-F_{2}}{3a_{1}a_{2}^{2}}.\]
Now we use the expressions obtained in section 3.4.1 of the curvatures of \(Y\) in terms of the curvature of \(Y_{2}\). In order to compute \(\mu(Y)\) we need to compute \(n\) and its derivatives (see 36 and 37).
We have \(2n=S_{2}+4D=\frac{F_{222}+7F_{2}}{3a_{2}^{2}a_{1}}\) and compute the left hand side of \(dn+3(\varphi-w)n=n_{0}\omega+n_{1}\omega^{1}+n_{2}\omega^{2}\) (formula 37) to obtain then
\[n_{0}=\frac{1}{6a_{1}^{2}a_{2}^{3}}\left(F(F_{222}+F_{2})+F_{2220}+7F_{20} \right),\]
\[n_{1}=\frac{1}{6a_{1}^{2}a_{2}^{2}}\left(F_{2}(F_{222}+F_{2})+F_{2221}+7F_{21}\right)\]
\[n_{2}=\frac{1}{6a_{1}a_{2}^{3}}\left(F_{2222}+7F_{22}\right)\]
We use equations 38 and 39 to compute the curvature functions \(Q^{1}\) and \(Q^{2}\). We have \(Q^{1}=-n_{2}+\frac{3}{2}S\tau_{2}^{1}+\tau_{20}^{1}\) and \(Q^{2}=-P_{1}-\frac{3}{2}S\tau_{1}^{2}+\tau_{10}^{2}\). For that sake, we compute first the derivatives of the torsion, \(\tau_{20}^{1}\) and \(\tau_{10}^{2}\), using formulas 11 and 12. Computing the left hand side of the equation \(d\tau_{2}^{1}+2\tau_{2}^{1}(\varphi-3w)+(B+3D)\omega^{1}=\tau_{20}^{1}\omega+ \tau_{22}^{1}\omega^{2}\) and comparing to the right hand side we obtain
\[\tau_{20}^{1}=-\frac{2F}{a_{1}a_{2}^{3}}.\]
Analogously, computing the left hand side of the equation \(d\tau_{1}^{2}+2\tau_{1}^{2}(\varphi+3w)-(A-3C)\omega^{2}=\tau_{10}^{2}\omega+ \tau_{11}^{2}\omega^{1}\) we obtain
\[\tau_{10}^{2}=\frac{F_{00}-4FF_{0}+2F^{3}}{a_{1}^{3}a_{2}}.\]
**Proposition 4.8**: _Given a (local) differential equation as a path structure induced by the forms \(\theta,\theta^{1}\) and \(\theta^{2}\) as above one computes the curvature functions in terms of the enriched structure:_
\[Q^{1}=-\frac{1}{6a_{1}a_{2}^{3}}(F_{2222}+10F_{22}+9F)\]
_and_
\[Q^{2}=\frac{1}{6a_{1}^{3}a_{2}}(F_{2}F_{221}+3F_{2}F_{1}^{2}-4F_{2}F_{20}+4FF_ {2}^{2}+F_{2211}+6F_{1}F_{11}\]
\[-4F_{201}+4F_{2}F_{1}+4FF_{21}-3F_{22}F_{0}+3F_{22}F^{2}-21FF_{0}+9F^{3}+6F_{0 0}).\]
_Proof._ Recall from formula 36 that \(P=-S_{1}/2-2C\) and we write \(dP+3P(\varphi+w)=P_{0}\omega+P_{1}\omega^{1}+P_{2}\omega^{2}\). Computing the left side and comparing the right side we obtain the expression of \(P_{1}\) which we use in the formulas above. \(\Box\)
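As a cross-check of the first formula, one may assemble \(Q^{1}=-n_{2}+\frac{3}{2}S\tau_{2}^{1}+\tau_{20}^{1}\) from the quantities computed above, namely \(\tau_{2}^{1}=-1/a_{2}^{2}\), \(S=(F_{22}-F)/(3a_{1}a_{2})\), \(\tau_{20}^{1}=-2F/(a_{1}a_{2}^{3})\) and \(n_{2}=(F_{2222}+7F_{22})/(6a_{1}a_{2}^{3})\):

\[Q^{1}=-\frac{F_{2222}+7F_{22}}{6a_{1}a_{2}^{3}}-\frac{F_{22}-F}{2a_{1}a_{2}^{3}}-\frac{2F}{a_{1}a_{2}^{3}}=-\frac{1}{6a_{1}a_{2}^{3}}(F_{2222}+10F_{22}+9F).\]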
The following proposition describes locally differential equations satisfying \(Q^{1}=0\).
**Proposition 4.9**: _Differential equations on an open subset with coordinates \((x,y)\) given by \(\theta=\cos\alpha dy-\sin\alpha dx,\,\theta^{1}=\sin\alpha dy+\cos\alpha dx\) and \(\theta^{2}=d\alpha-F(x,y,\alpha)\theta^{1}\) satisfy_
\[Q^{1}=0\]
_if and only if_
\[F(x,y,\alpha)=A(x,y)\cos\alpha+B(x,y)\sin\alpha+C(x,y)\cos 3\alpha+D(x,y)\sin 3\alpha\]
_where \(A,B,C\) and \(D\) are functions on \(x\) and \(y\)._
_Proof._ Observe that \(Q^{1}=0\) is equivalent to \(F_{2222}+10F_{22}+9F=F_{\alpha\alpha\alpha\alpha}+10F_{\alpha\alpha}+9F=0\). The only solutions to this linear equation are of the form above. \(\Box\)
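The general solution of this fourth-order linear equation can also be checked symbolically; the following SymPy snippet (an illustrative verification, not part of the original proof) recovers the four trigonometric modes:

```python
import sympy as sp

alpha = sp.symbols('alpha')
F = sp.Function('F')

# characteristic equation r^4 + 10 r^2 + 9 = 0 has roots +-i and +-3i
ode = sp.Eq(F(alpha).diff(alpha, 4) + 10 * F(alpha).diff(alpha, 2) + 9 * F(alpha), 0)
print(sp.dsolve(ode, F(alpha)))
# expected: F(alpha) = C1*sin(alpha) + C2*cos(alpha) + C3*sin(3*alpha) + C4*cos(3*alpha)
```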
Using the coordinates \((x,y,p)\) as above, where the differential equation is described as \(dp-G(x,y,p)dx=0\), the condition \(Q^{1}=0\) implies that \(G(x,y,p)\) is at most a third order polynomial in \(p\) with coefficients that are functions of \(x\) and \(y\) (see [A]).
#### 4.1.3 The global invariant
We are ready now to use Proposition 4.5 to determine the global invariant:
**Proposition 4.10**: _Let \(M\) be an enriched path structure defined by an ordinary differential equation of second order on the torus with strict structure defined by the forms \(\theta,\theta^{1}\) and \(\theta^{2}\) as above. Let \(Y_{2}\to Y\) be the canonical embedding of the enriched geometry into the induced path geometry whose connection is \(\pi\). Then_
\[8\pi^{2}s^{*}(TC_{2}(\pi))=\frac{1}{12}(-12F_{\alpha}^{2}+2(F_{\alpha\alpha \alpha x}\cos\alpha+F_{\alpha\alpha\alpha y}\sin\alpha+F_{\alpha\alpha\alpha \alpha}F)+14(F_{\alpha x}\cos\alpha+F_{\alpha y}\sin\alpha\]
\[+F_{\alpha\alpha}F)-3(-F_{\alpha\alpha x}\sin\alpha+F_{\alpha\alpha y}\cos \alpha)+3F_{\alpha}-24(-F_{x}\sin\alpha+F_{y}\cos\alpha)+18F^{2}+6FF_{\alpha \alpha})\theta\wedge\theta^{1}\wedge\theta^{2}.\]
_and_
\[8\pi^{2}\mu(Y)=8\pi^{2}\int_{M}s^{*}(TC_{2}(\pi))=\frac{1}{12}\int_{M}(-32F_{ \alpha}^{2}+2F_{\alpha\alpha}^{2}+18F^{2})\theta\wedge\theta^{1}\wedge\theta ^{2}.\]
_Proof_. The terms in Proposition 4.5 were all computed before. A substitution of these terms in the formula gives the first formula. The second formula is obtained by integration by parts.
\(\Box\)
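The integration by parts alluded to in the proof uses, on the closed manifold \(M\), the identities

\[\int_{M}F_{\alpha\alpha\alpha\alpha}F\,\theta\wedge\theta^{1}\wedge\theta^{2}=\int_{M}F_{\alpha\alpha}^{2}\,\theta\wedge\theta^{1}\wedge\theta^{2},\qquad\int_{M}F_{\alpha\alpha}F\,\theta\wedge\theta^{1}\wedge\theta^{2}=-\int_{M}F_{\alpha}^{2}\,\theta\wedge\theta^{1}\wedge\theta^{2},\]

which hold because \(\theta\wedge\theta^{1}\wedge\theta^{2}=-dx\wedge dy\wedge d\alpha\) has constant coefficient. Every term of the first formula containing an \(x\) or \(y\) derivative, as well as the term \(3F_{\alpha}\), integrates to zero, and \(2F_{\alpha\alpha\alpha\alpha}F+(14+6)FF_{\alpha\alpha}-12F_{\alpha}^{2}+18F^{2}\) integrates to \(2F_{\alpha\alpha}^{2}-32F_{\alpha}^{2}+18F^{2}\).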
**Corollary 4.11**: _Let \(M\) be equipped with a path structure defined by an ordinary differential equation of second order on the torus with strict structure defined by the forms \(\theta,\theta^{1}\) and \(\theta^{2}\). Let \(Y\) be the canonical Cartan bundle with its associated Cartan connection._
1. _If_ \(\theta^{2}=d\alpha-F(x,y)\theta^{1}\) _(the function_ \(F\) _does not depend on_ \(\alpha\)_). Then_ \(\mu(Y)=0\) _if and only if_ \(F=0\)_._
2. \(Q^{1}(Y)=0\) _and_ \(\mu(Y)=0\) _if and only if_ \(F=0\)_._
_Proof_. Clearly, if \(F=0\) then \(\mu(Y)=0\) and \(Q^{1}=0\). If \(F\) does not depend on \(\alpha\) then the invariant becomes
\[\frac{4}{3}\int_{M}F^{2}\theta\wedge\theta^{1}\wedge\theta^{2},\]
which is zero only if \(F=0\). Suppose now that \(Q^{1}(Y)=0\) and \(\mu(Y)=0\). Observe that the integral formula for the invariant, by an integration by parts and a slight rearrangement, may be written as
\[\frac{1}{12}\int_{M}(-12F_{\alpha}^{2}+2F_{\alpha\alpha\alpha\alpha}F+20F_{ \alpha\alpha}F+18F^{2})\theta\wedge\theta^{1}\wedge\theta^{2}.\]
Using the expression of \(Q^{1}\) given in 4.8 and the hypothesis \(Q^{1}=0\), we obtain \(F_{2222}+10F_{22}+9F=0\), and therefore
\[2F_{\alpha\alpha\alpha\alpha}F+20F_{\alpha\alpha}F+18F^{2}=0.\]
Therefore
\[8\pi^{2}\mu(Y)=-\int_{M}F_{\alpha}^{2}\theta\wedge\theta^{1}\wedge\theta^{2}.\]
We observe therefore that if \(\mu(Y)=0\) then \(F_{\alpha}\) should be null. But if \(F\) does not depend on \(\alpha\) it should be null by the first part. \(\Box\)
## 5 Path structures on a torus
We recall example **II** which is the torus \(T^{3}\) with coordinates \((x,y,t)\) ( \(\mod 1\)) and the global contact form, for a fixed \(n\in\mathbb{Z}^{*}\),
\[\theta=\cos(2\pi nt)dx-\sin(2\pi nt)dy.\]
It was proven independently by E. Giroux and Y. Kanda that the contact structures defined by these contact forms classify all tight structures on \(T^{3}\) (see [10]). We will show here that for each of these contact structures one can define a flat path structure.
There are two canonical global vector fields on the distribution given by \(X_{1}=\frac{\partial}{\partial t}\) and \(X_{2}=\sin(2\pi nt)\frac{\partial}{\partial x}+\cos(2\pi nt)\frac{\partial}{ \partial y}\).
We define
\[\theta^{1}=-2\pi ndt,\ \ \theta^{2}=\sin(2\pi nt)dx+\cos(2\pi nt)dy,\]
so that \(d\theta=\theta^{1}\wedge\theta^{2}\) and we define the strict path structure defined by these forms. We compute
\[d\theta^{1}=0,\ \ d\theta^{2}=-\theta^{1}\wedge\theta.\]
Comparing now with the enriched path connection we obtain
\[\varphi=\frac{1}{2}(\frac{da_{1}}{a_{1}}+\frac{da_{2}}{a_{2}}),\ \ 3w=\frac{1}{2}(\frac{da_{1}}{a_{1}}-\frac{da_{2}}{a_{2}}),\ \ \tau_{2}^{1}=0,\ \ \tau_{1}^{2}=-\frac{1}{a_{1}^{2}}\]
and therefore \(d\varphi=dw=d\tau_{2}^{1}=d\tau_{1}^{2}+2\tau_{1}^{2}(\varphi+3w)=0\). It follows that \(A=B=C=D=S=\tau_{20}^{1}=\tau_{10}^{2}=0\), and it follows from formulas 38 and 39 that \(Q^{1}=Q^{2}=0\). We proved:
**Lemma 5.1**: _The path structures defined by the forms \(\theta^{1},\theta^{2},\theta\) on \(T^{3}\) are flat._
We now define a new strict path structure by fixing the contact form \(\theta\) and changing \(\theta^{1}\) and \(\theta^{2}\) by a constant matrix:
\[\left(\begin{array}{cc}{\theta^{1}}^{\prime}\\ {\theta^{2}}^{\prime}\end{array}\right)=\left(\begin{array}{cc}a&b\\ c&f\end{array}\right)\left(\begin{array}{c}\theta^{1}\\ \theta^{2}\end{array}\right)\ \ \ \ {\rm or}\ \ \left(\begin{array}{c}\theta^{1}\\ \theta^{2}\end{array}\right)=\left(\begin{array}{cc}f&-b\\ -c&a\end{array}\right)\left(\begin{array}{c}{\theta^{1}}^{\prime}\\ {\theta^{2}}^{\prime}\end{array}\right),\]
where
\[\det\left(\begin{array}{cc}a&b\\ c&f\end{array}\right)=1.\]
We compute \({d{\theta^{1}}^{\prime}}=ad\theta^{1}+bd\theta^{2}=-b\theta^{1}\wedge\theta= -bf{\theta^{1}}^{\prime}\wedge\theta+b^{2}{\theta^{2}}^{\prime}\wedge\theta\) and \({d{\theta^{2}}^{\prime}}=cd\theta^{1}+fd\theta^{2}=-f\theta^{1}\wedge\theta=-f^ {2}{\theta^{1}}^{\prime}\wedge\theta+bf{\theta^{2}}^{\prime}\wedge\theta\). In order to compute the enriched connection we need to find \(\varphi^{\prime},w^{\prime},{\tau^{1}}^{\prime},{\tau^{2}}^{\prime}\) satisfying
\[{d{\omega^{1}}^{\prime}}=(\varphi^{\prime}+3w^{\prime})\wedge{\omega^{1}}^{ \prime}+\omega^{\prime}\wedge{\tau_{2}^{1}}^{\prime}\omega^{2}{\prime},\ \ \ {d{\omega^{2}}^{\prime}}=(\varphi^{\prime}-3w^{\prime})\wedge{\omega^{2}}^{ \prime}-\omega^{\prime}\wedge{\tau_{1}^{2}}^{\prime}{\omega^{1}}^{\prime}.\]
Comparing with the structure equations and observing \({\omega^{i}}^{\prime}=a_{i}{\theta^{i}}^{\prime}\) we obtain
\[\varphi^{\prime}=\frac{1}{2}(\frac{da_{1}}{a_{1}}+\frac{da_{2}}{a_{2}}),\ \ 3w^{ \prime}=\frac{1}{2}(\frac{da_{1}}{a_{1}}-\frac{da_{2}}{a_{2}})+\frac{bf}{a_{1}a _{2}}\omega,\]
\[{\tau_{2}^{1}}^{\prime}=-\frac{b^{2}}{a_{2}^{2}}\ \ \mbox{and}\ \ {\tau_{1}^{2}}^{\prime}=-\frac{f^{2}}{a_{1}^{2}}.\]
Then \(d\varphi=0,\ \ 3dw=\frac{bf}{a_{1}a_{2}}{\omega^{1}}^{\prime}\wedge{\omega^{2}}^{\prime}\), and it follows \(A=B=C=D=0\) and \(3S=\frac{bf}{a_{1}a_{2}}\). Also
\[d{\tau_{2}^{1}}^{\prime}=-2{\tau_{2}^{1}}^{\prime}(\varphi-3w)+\frac{2b^{3}f}{ a_{2}^{3}a_{1}}\omega,\ \ {d{\tau_{1}^{2}}^{\prime}}=-2{\tau_{1}^{2}}^{\prime}(\varphi+3w)-\frac{2bf^{3}} {a_{1}^{3}a_{2}}\omega,\]
and \({\tau_{20}^{1}}^{\prime}=\frac{2b^{3}f}{a_{2}^{3}a_{1}},\ \ {\tau_{10}^{2}}^{\prime}=-\frac{2bf^{3}}{a_{1}^{3}a_{2}}\). At last \(dS=-2S\varphi\) and we get \(S_{0}=S_{1}=S_{2}=0\). Then \(P=n=0\), and we obtain from formulas 38 and 39 that
\[Q^{1}=\frac{3}{2}\frac{b^{3}f}{a_{1}a_{2}^{3}},\ \ Q^{2}=-\frac{3}{2}\frac{bf^{3}} {a_{1}^{3}a_{2}}.\]
We proved
**Lemma 5.2**: _The path structures defined by the forms \({\theta^{\prime}}^{1},{\theta^{\prime}}^{2},\theta\) on \(T^{3}\) have curvatures \(Q^{1}=\frac{3}{2}b^{3}f,\ \ \ Q^{2}=-\frac{3}{2}bf^{3}\) (computed through a section on the torus)._
Note that the path structure is flat if and only if one of the torsions \({\tau^{1}}^{\prime}\) or \({\tau^{2}}^{\prime}\) is zero, and this happens if the direction defined by \(\frac{\partial}{\partial t}\) is one of the line bundles contained in the contact bundle of the path structure. The couple \((b,f)\) is determined up to a sign by the curvatures \(Q^{1}\) and \(Q^{2}\).
The global invariant is given in the next Proposition.
**Proposition 5.3**: _Let \({T^{3}}_{n}(a,b,c,d,f)\) be the path structure on the torus defined as above. Then the global invariant is_
\[\mu({T^{3}}_{n}(a,b,c,d,f))=\frac{3n}{8\pi}(bf)^{2}.\]
_Proof._ This is a direct computation using the formula for the global invariant (see formula 44):
\[\int_{T^{3}}s^{*}TC_{2}(\pi)=\int_{T^{3}}\frac{1}{8\pi^{2}}(2{\tau_{2}^{1}}^{\prime}{\tau_{1}^{2}}^{\prime}\theta\wedge\theta^{1}\wedge\theta^{2}-\frac{9}{2}Sw\wedge\theta^{1}\wedge\theta^{2})=\int_{T^{3}}\frac{1}{8\pi^{2}}\frac{3}{2}b^{2}f^{2}\theta\wedge\theta^{1}\wedge\theta^{2}.\]
Therefore
\[\mu({T^{3}}_{n}(a,b,c,d,f))=\int_{T^{3}}\frac{3}{16\pi^{2}}(bf)^{2}\theta d \theta=\frac{3n}{8\pi}(bf)^{2}.\]
\(\Box\)
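The last step uses that, for these forms,

\[\theta\wedge\theta^{1}\wedge\theta^{2}=(\cos(2\pi nt)dx-\sin(2\pi nt)dy)\wedge(-2\pi n\,dt)\wedge(\sin(2\pi nt)dx+\cos(2\pi nt)dy)=2\pi n\,dx\wedge dy\wedge dt,\]

so that \(\int_{T^{3}}\theta\wedge\theta^{1}\wedge\theta^{2}=2\pi n\), the coordinates being taken mod 1.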
Note that the global invariant is null if and only if the path structure is flat.
## 6 Invariant path structures on \(\mathbf{SU}(2)\)
Tight contact structures on \(S^{3}\) are all contactomorphic (see [E]). In this section we make explicit homogeneous strict path structures on \(\mathbf{SU}(2)\) which are carried by a fixed left invariant tight contact structure.
Let \(\alpha,\beta,\gamma\) be a basis of left invariant 1-forms defined on \(\mathbf{SU}(2)\) with
\[d\alpha=-\beta\wedge\gamma,\quad d\beta=-\gamma\wedge\alpha,\quad d\gamma=- \alpha\wedge\beta\]
A strict path structure on \(\mathbf{SU}(2)\) is given by fixing the contact form \(\theta=\gamma\) and the line fields \(E^{1}=\ker\alpha\cap\ker\theta\) and \(E^{2}=\ker\beta\cap\ker\theta\).
We define strict path structures by choosing a map from \(\mathbf{SU}(2)\) to \(\mathbf{SL}(2,\mathbb{R})\):
\[\theta=\gamma,\quad Z^{1}=r_{1}\beta+r_{2}\alpha,\quad Z^{2}=s_{1}\beta+s_{2}\alpha,\]
with \(r_{1}s_{2}-r_{2}s_{1}=1\). Then
\[d\theta=Z^{1}\wedge Z^{2}.\]
In the case the map \(\mathbf{SU}(2)\rightarrow\mathbf{SL}(2,\mathbb{R})\) is constant, from \(\beta=s_{2}Z^{1}-r_{2}Z^{2}\) and \(\alpha=-s_{1}Z^{1}+r_{1}Z^{2}\), we obtain
\[dZ^{1}=r_{1}d\beta+r_{2}d\alpha=\theta\wedge\left(xZ^{1}+yZ^{2}\right)\]
and analogously,
\[dZ^{2}=\theta\wedge\left(zZ^{1}-xZ^{2}\right),\]
where
\[x=r_{1}s_{1}+r_{2}s_{2},\quad y=-(r_{1}^{2}+r_{2}^{2}),\quad z=s_{1}^{2}+s_{2} ^{2}.\]
Observe that \(x^{2}+yz=-1\). Then for an enriched path structure with coframes obtained from the tautological forms \(\omega=a_{1}a_{2}\theta\), \(\omega^{1}=a_{1}Z^{1}\) and \(\omega^{2}=a_{2}Z^{2}\) we obtain
\[d\omega^{1}=(\frac{da_{1}}{a_{1}}+x\theta)\wedge\omega^{1}+a_{1}y\theta\wedge Z ^{2}.\]
\[d\omega^{2}=(\frac{da_{2}}{a_{2}}-x\theta)\wedge\omega^{2}+a_{2}z\theta\wedge Z ^{1}\]
From Proposition 2.3 we have
\[\varphi=\frac{1}{2}(\frac{da_{1}}{a_{1}}+\frac{da_{2}}{a_{2}}),\ \ 3w=\frac{1}{2}(\frac{da_{1}}{a_{1}}-\frac{da_{2}}{a_{2}})+\frac{x}{a_{1}a_{2}}\omega,\]
\[\tau_{2}^{1}=\frac{y}{a_{2}^{2}},\quad\tau_{1}^{2}=-\frac{z}{a_{1}^{2}}.\]
and therefore
\[d\varphi=0,\ \ 3dw=d(x\theta)=\frac{x}{a_{1}a_{2}}\omega^{1}\wedge\omega^{2}\]
so that \(S=\frac{x}{3a_{1}a_{2}},A=B=C=D=0\).
From
\[d\tau_{2}^{1}=-2\frac{da_{2}}{a_{2}}\frac{y}{a_{2}^{2}}=-2\tau_{2}^{1}(\varphi-3w )-2\frac{xy}{a_{1}a_{2}^{3}}\omega\]
\[d\tau_{1}^{2}=2\frac{da_{1}}{a_{1}}\frac{z}{a_{1}^{2}}=-2\tau_{1}^{2}(\varphi+3w )-2\frac{xz}{a_{2}a_{1}^{3}}\omega\]
we obtain
\[\tau_{20}^{1}=-2\frac{xy}{a_{1}a_{2}^{3}},\ \ \tau_{10}^{2}=-2\frac{xz}{a_{2}a_{1 }^{3}}\]
From
\[dS=-\frac{x}{3a_{1}a_{2}}(\frac{da_{1}}{a_{1}}+\frac{da_{2}}{a_{2}})=-2\varphi S\]
we obtain \(S_{0}=S_{1}=S_{2}=0\), and \(P=n=0\).
It follows from formulas 38 and 39 that \(Q^{1}=\tau_{20}^{1}+\frac{3}{2}S\tau_{2}^{1}\) and \(Q^{2}=\tau_{10}^{2}-\frac{3}{2}S\tau_{1}^{2}\), therefore
\[Q^{1}=-\frac{3}{2}\frac{xy}{a_{1}a_{2}^{3}}\]
and
\[Q^{2}=-\frac{3}{2}\frac{xz}{a_{1}^{3}a_{2}}.\]
Observe that \(y\) and \(z\) never vanish. We conclude that the invariant strict structure on \({\bf SU}(2)\) is a flat path structure if and only if \(x=0\). This can be interpreted, because \(x=r_{1}s_{1}+r_{2}s_{2}\), as the strict structures such that the directions \(E^{1}\) and \(E^{2}\) are perpendicular for the canonical metric defined by the forms \(\alpha\) and \(\beta\).
**Proposition 6.1**: _Define strict path structures on \({\bf SU}(2)\) by choosing a constant map from \({\bf SU}(2)\) to \({\bf SL}(2,\mathbb{R})\):_
\[\theta=\gamma,\quad Z^{1}=r_{1}\beta+r_{2}\alpha,\quad Z^{2}=s_{1}\beta+s_{2}\alpha,\]
_with \(r_{1}s_{2}-r_{2}s_{1}=1\). Let \(x=r_{1}s_{1}+r_{2}s_{2}\). Then the global invariant of the induced path structure is_
\[\mu({\bf SU}(2)(r_{1},r_{2},s_{1},s_{2}))=-\frac{1}{2}-\frac{3}{8}x^{2}.\]
_Proof_. We compute, using formula 44, the global invariant for the family of structures defined on \({\bf SU}(2)\). We have from above that \(x=r_{1}s_{1}+r_{2}s_{2},y=-(r_{1}^{2}+r_{2}^{2}),z=s_{1}^{2}+s_{2}^{2}\) and that \(x^{2}+yz=-1\). Then it follows
\[\int_{{\bf SU}(2)}s^{*}TC_{2}(\pi)=\int_{{\bf SU}(2)}\frac{1}{8\pi^{2}}(2\tau_ {2}^{1}\tau_{1}^{2}\theta-\frac{9}{2}Sw)\wedge\theta^{1}\wedge\theta^{2}=- \int_{{\bf SU}(2)}\frac{1}{8\pi^{2}}(2yz+\frac{1}{2}x^{2})\gamma\wedge\beta \wedge\alpha\]
\[=\int_{{\bf SU}(2)}\frac{1}{8\pi^{2}}(-2-\frac{3}{2}x^{2})\gamma\wedge\beta \wedge\alpha.\]
We use then that \(\int_{{\bf SU}(2)}\gamma\wedge\beta\wedge\alpha=2\pi^{2}\). \(\Box\)
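The identity \(x^{2}+yz=-1\) used above is just the determinant condition \(r_{1}s_{2}-r_{2}s_{1}=1\) in disguise; a quick symbolic check (illustrative only):

```python
import sympy as sp

r1, r2, s1, s2 = sp.symbols('r1 r2 s1 s2', real=True)
x = r1 * s1 + r2 * s2
y = -(r1**2 + r2**2)
z = s1**2 + s2**2

# x^2 + y*z = -(r1*s2 - r2*s1)^2, which equals -1 on SL(2, R)
print(sp.expand(x**2 + y * z + (r1 * s2 - r2 * s1)**2))  # prints 0
```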
Observe that the invariant is never null for this family even in the case of a flat path structure (which happens when \(x=0\)). Also the critical point of the invariant along this family is a maximum at \(x=0\), at a flat structure, and it is equal to \(-\frac{1}{2}\). |
2303.17865 | $Ξ_c-Ξ_c^{\prime}$ mixing From Lattice QCD | In heavy quark limit, the lowest-lying charmed baryons with two light quarks
can form an SU(3) triplet and sextet. The $\Xi_c$ in the SU(3) triplet and
$\Xi_c'$ in the sextet have the same $J^{PC}$ quantum number and can mix due to
the finite charm quark mass and the fact the strange quark is heavier than the
up/down quark. We explore the $\Xi_c$-$\Xi_c'$ mixing by calculating the
two-point correlation functions of the $\Xi_c$ and $\Xi_c'$ baryons from
lattice QCD. Based on the lattice data, we adopt two independent methods to
determine the mixing angle between $\Xi_c$ and $\Xi_c'$. After making the
chiral and continuum extrapolation, it is found that the mixing angle $\theta$
is $1.2^{\circ}\pm0.1^{\circ}$, which seems insufficient to account for the
large SU(3) symmetry breaking effects found in weak decays of charmed baryons. | Hang Liu, Liuming Liu, Peng Sun, Wei Sun, Jin-Xin Tan, Wei Wang, Yi-Bo Yang, Qi-An Zhang | 2023-03-31T07:59:00Z | http://arxiv.org/abs/2303.17865v2 | # \(\Xi_{c}-\Xi^{\prime}_{c}\) mixing From Lattice QCD
###### Abstract
In heavy quark limit, the lowest-lying charmed baryons with two light quarks can form an SU(3) triplet and sextet. The \(\Xi_{c}\) in the SU(3) triplet and \(\Xi^{\prime}_{c}\) in the sextet have the same \(J^{PC}\) quantum number and can mix due to the finite charm quark mass and the fact that the strange quark is heavier than the up/down quark. We explore the \(\Xi_{c}\)-\(\Xi^{\prime}_{c}\) mixing by calculating the two-point correlation functions of the \(\Xi_{c}\) and \(\Xi^{\prime}_{c}\) baryons from lattice QCD. Based on the lattice data, we adopt two independent methods to determine the mixing angle between \(\Xi_{c}\) and \(\Xi^{\prime}_{c}\). After making the chiral and continuum extrapolation, it is found that the mixing angle \(\theta\) is \(1.2^{\circ}\pm 0.1^{\circ}\), which seems insufficient to account for the large SU(3) symmetry breaking effects found in weak decays of charmed baryons.
+
Footnote β : Corresponding author: [email protected]
## I Introduction
Heavy baryons with a bottom or charm quark provide an ideal place to study the underlying theory for strong interactions. In heavy quark limit, charmed baryons can be elegantly classified according to heavy quark spin symmetry [1; 2]. Ground states of baryons with one heavy quark can be categorized into two sets: if the light quark system is spinless, three baryons form an SU(3) triplet; if the spin of the light quark system is one, then six charmed baryons form a sextet. However in reality, since the charm quark mass is finite and the strange quark is heavier than the up/down quark, charm baryons in the triplet and sextet are likely to mix with each other. An explicit realization is the \(\Xi_{c}-\Xi^{\prime}_{c}\) mixing, which is a direct consequence of the effective heavy quark and flavor SU(3) symmetry breaking.
Weak decays of charmed baryons also contribute to the understanding of the electroweak sector of the standard model. In particular semileptonic decays of charmed baryons are valuable to extract the CKM matrix element \(|V_{cs}|\) (for some recent experimental measurements and lattice QCD calculations, please see Refs. [3; 4; 5; 6; 7] and many references therein). It is interesting to notice that based on the experimental measurements the flavor SU(3) symmetry, which is a very useful tool widely applied in the analysis of weak decays of heavy mesons, is found to be sizably broken [8]. Various mechanisms to explain this observation were explored in Ref. [8] and a competitive explanation is through the \(\Xi_{c}-\Xi^{\prime}_{c}\) mixing [9]. Very interestingly, several recent works are devoted to determine the mixing angle from various methods, but the results are controversial. Based on experimental data on weak decays of \(\Xi_{c}\) and \(\Xi_{cc}\), it is found that a large mixing angle is called for: \(\theta=24.66^{\circ}\pm 0.90^{\circ}\) in Ref.[9; 10] and \(16.27^{\circ}\pm 2.30^{\circ}\) in Ref.[11]. A direct calculation in QCD sum rule gives \(\theta=5.5^{\circ}\pm 1.8^{\circ}\)[12], while in heavy-quark effective theory, it is found \(\theta=8.12^{\circ}\pm 0.80^{\circ}\)[13]. Besides, an earlier lattice QCD exploration indicates a small mixing angle [14].
This paper presents an up-to-date exploration of the \(\Xi_{c}-\Xi^{\prime}_{c}\) mixing from lattice QCD. Using the recently generated lattice configurations, we calculate the matrix elements \(\langle 0|O_{\Xi_{c}^{(\bar{3},6)}}(t)\bar{O}_{\Xi_{c}^{(\bar{3},6)}}(0)|0\rangle\), where \(O_{\Xi_{c}^{(\bar{3})}}\) and \(O_{\Xi_{c}^{(6)}}\) denote the interpolators for the anti-triplet and sextet baryon, respectively. With the results for the correlation function matrix, we use the direct fitting method to fit the energy levels and the mixing angle. As a comparison, the lattice spectroscopy method will also be used. After extracting the mixing angles with different lattice spacings and pion masses, we extrapolate the result to the chiral and continuum limit. We finally find that the mixing angle is at the order of \(1^{\circ}\), which is likely to indicate the necessity to include other SU(3) symmetry-breaking effects in \(\Xi_{c}\) decays.
The rest of this paper is organized as follows. The theoretical framework for the \(\Xi_{c}-\Xi^{\prime}_{c}\) mixing is collected in Sec. II. We present our lattice simulations of the two-point correlation functions in Sec. III and a determination of the mixing angle from a direct fit in Sec. IV. Sec. V gives an analysis of the mixing angle through solving the generalized eigenvalue problem. The conclusion of this work is given in Sec. VI.
## II \(\Xi_{c}-\Xi_{c}^{\prime}\) mixing in heavy quark effective theory
In the quark model, a baryon is made of three quarks. In heavy quark limit, heavy baryons with one charm quark can be well classified according to the angular momentum \(J\) of the light quark system. If the spin wave function of the light quark system is antisymmetric, i.e. \(J_{qq}=0\), the Fermi statistics and antisymmetric feature in color space require that the flavor wave function is also antisymmetric in \(SU(3)_{F}\). This corresponds to the decomposition \((3\times 3)_{\rm antisymmetric}=\bar{3}\). If \(J_{qq}=1\), the \(SU(3)_{F}\) flavor wave function should be symmetric and hence transforms as \((3\times 3)_{\rm symmetric}=6\).
In heavy quark effective theory (HQET) [1; 2], the generic interpolating operator of a heavy baryon takes the form \(O=\epsilon^{abc}\left(q_{1}^{Ta}C\Gamma q_{2}^{b}\right)\Gamma^{\prime}\tilde{Q}^{c}\), where \(q_{1,2}\) denotes a light quark, and \(\tilde{Q}\) is the heavy quark field in HQET satisfying \(\gamma^{0}\tilde{Q}=\tilde{Q}\). In this interpolator we have explicitly specified the color indices \(a,b,c\). The transposition \(T\) acts on a Dirac spinor, and \(C=\gamma^{0}\gamma^{2}\) is the charge conjugation matrix. Summing over the color indices with the totally antisymmetric tensor \(\epsilon^{abc}\) results in a gauge invariant operator. The Dirac matrices \(\Gamma\) and \(\Gamma^{\prime}\) are related to the internal spin structures of the heavy baryon. For the ground state, the angular momentum of the light quark system is either 0 or 1: \(J=0\) corresponds to the current \(q_{1}^{T}C\gamma_{5}q_{2}\), and \(J=1\) corresponds to \(q_{1}^{T}C\vec{\gamma}q_{2}\). Therefore, the baryonic current \(O_{1}=\left(q_{1}^{T}C\gamma_{5}q_{2}\right)\tilde{Q}\) corresponds to the spin-1/2 baryon, and the current \(\vec{O}_{2}=\left(q_{1}^{T}C\vec{\gamma}q_{2}\right)\tilde{Q}\) contains both spin-1/2 and spin-3/2 components. Using \(\vec{O}_{2}\), one can construct the spin-3/2 component \(\vec{O}_{2}^{3/2}=\vec{O}_{2}+\frac{1}{3}\vec{\gamma}\vec{\gamma}\cdot\vec{O}_{2}\) which satisfies the condition \(\vec{\gamma}\cdot\vec{O}_{2}^{3/2}=0\), and the spin-1/2 component \(O_{2}^{1/2}=-\frac{1}{3}\vec{\gamma}\vec{\gamma}\cdot\vec{O}_{2}=\left(q_{1}^{T}C\vec{\gamma}q_{2}\right)\cdot\vec{\gamma}\gamma_{5}\tilde{Q}\).
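Indeed, with the standard convention \(\sum_{i}\gamma^{i}\gamma^{i}=-3\) for the spatial Dirac matrices, one checks

\[\vec{\gamma}\cdot\vec{O}_{2}^{3/2}=\gamma^{i}O_{2}^{i}+\frac{1}{3}\gamma^{i}\gamma^{i}\left(\vec{\gamma}\cdot\vec{O}_{2}\right)=\vec{\gamma}\cdot\vec{O}_{2}-\vec{\gamma}\cdot\vec{O}_{2}=0.\]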
As a result, one can obtain the baryonic currents of the \(SU(3)_{F}\) eigenstates as [15]
\[O^{\bar{3}}=\epsilon^{abc}\left(\ell^{Ta}C\gamma_{5}s^{b}\right)P _{+}\tilde{c}^{c}, \tag{1}\] \[O^{6}=\epsilon^{abc}\left(\ell^{Ta}C\vec{\gamma}s^{b}\right) \cdot\vec{\gamma}\gamma_{5}P_{+}\tilde{c}^{c}, \tag{2}\]
with \(\ell=(u,d)\), and \(P_{+}=(1+\gamma^{0})/2\) is the positive parity projector.
If the finite charm quark mass and the differences between the strange quark and up/down quark are taken into account, the two \(\Xi_{c}\) states can mix and their mixing effect could be described by a \(2\times 2\) mixing matrix:
\[\left(\begin{array}{c}|\Xi_{c}\rangle\\ |\Xi_{c}^{\prime}\rangle\end{array}\right)=\left(\begin{array}{cc}\cos\theta &\sin\theta\\ -\sin\theta&\cos\theta\end{array}\right)\left(\begin{array}{c}|\Xi_{c}^{\bar{3 }}\rangle\\ |\Xi_{c}^{c}\rangle\end{array}\right), \tag{3}\]
where \(\theta\) denotes the mixing angle. The mass eigenstates are orthogonal and satisfy
\[H_{\rm QCD}\left|\Xi_{c}\right\rangle=m_{\Xi_{c}}\left|\Xi_{c}\right\rangle, \quad H_{\rm QCD}\left|\Xi_{c}^{\prime}\right\rangle=m_{\Xi_{c}^{\prime}} \left|\Xi_{c}^{\prime}\right\rangle, \tag{4}\]
where \(m_{\Xi_{c}}/m_{\Xi_{c}^{\prime}}\) denote the physical baryon masses.
## III Two-point correlation function from lattice QCD
The simulations in this work will be performed on the gauge configurations generated by the Chinese Lattice QCD (CLQCD) collaboration with \(N_{f}=2+1\) flavor stout smeared clover fermions and Symanzik gauge action, at three lattice spacings and three pion masses including the physical one. The detailed parameters are listed in Table 1. Some previous applications of these configurations can be found in Refs. [16; 17; 18; 7].
In the numerical simulation, to improve the signal-to-noise ratio of the simulation, we generate the \(f\)-flavored wall source to point sink propagators
\[S_{w-p}^{f}(\vec{x},t,t_{0};\vec{p})=\sum_{\vec{x}_{s}}e^{-i\vec{p}\cdot(\vec {x}-\vec{x}_{s})}S^{f}(\vec{x},t;\vec{x}_{s},t_{0}) \tag{5}\]
on the Coulomb gauge fixed configurations. \(S^{f}\) denotes the propagator from wall source at \((\sum_{\vec{x}_{s}}\vec{x}_{s},t_{0})\) to the point sink \((\vec{x},t)\). We consider the \(2\times 2\) correlation function matrix
\[\mathcal{C}(t,t_{0})=\sum_{\vec{x}}\left(\begin{array}{cc}\left\langle O_{p}^{\bar{3}}(\vec{x},t)\bar{O}_{w}^{\bar{3}}(\vec{0},t_{0})\right\rangle&\left\langle O_{p}^{\bar{3}}(\vec{x},t)\bar{O}_{w}^{6}(\vec{0},t_{0})\right\rangle\\ \left\langle O_{p}^{6}(\vec{x},t)\bar{O}_{w}^{\bar{3}}(\vec{0},t_{0})\right\rangle&\left\langle O_{p}^{6}(\vec{x},t)\bar{O}_{w}^{6}(\vec{0},t_{0})\right\rangle\end{array}\right) \tag{6}\]
that contains the baryonic currents. The subscripts "\(w\)" and "\(p\)" denote the wall source and point sink. The related two-point functions can be constructed from the light, strange, and charm quark propagators \(S_{w-p}^{\{l,s,c\}}\) with momentum \(\vec{p}=\vec{0}\):
\[C_{11}(t,t_{0})= -\sum_{\vec{x}}\epsilon^{abc}\epsilon^{a^{\prime}b^{\prime}c^{ \prime}}\Big{(}S_{w-p}^{l}(\vec{x},t,t_{0})\Big{)}_{\alpha\alpha^{\prime}}^{aa^{ \prime}}\Big{(}C\gamma_{5}S_{w-p}^{s}(\vec{x},t,t_{0})\gamma_{5}C\Big{)}_{ \alpha\alpha^{\prime}}^{bb^{\prime}}\Big{(}P_{+}S_{w-p}^{c}(\vec{x},t,t_{0})P_{+ }T\Big{)}^{cc^{\prime}}, \tag{7}\]
\[C_{12}(t,t_{0})= \sum_{\vec{x}}\epsilon^{abc}\epsilon^{a^{\prime}b^{\prime}c^{\prime}}\sum_{i=1,2,3}\Big{(}S^{l}_{w-p}(\vec{x},t,t_{0})\Big{)}^{aa^{\prime}}_{\alpha\alpha^{\prime}}\Big{(}C\gamma_{5}S^{s}_{w-p}(\vec{x},t,t_{0})\gamma^{i}C\Big{)}^{bb^{\prime}}_{\alpha\alpha^{\prime}}\Big{(}P_{+}S^{c}_{w-p}(\vec{x},t,t_{0})\gamma^{i}\gamma_{5}P_{+}T\Big{)}^{cc^{\prime}}, \tag{8}\] \[C_{21}(t,t_{0})= -\sum_{\vec{x}}\epsilon^{abc}\epsilon^{a^{\prime}b^{\prime}c^{\prime}}\sum_{i=1,2,3}\Big{(}S^{l}_{w-p}(\vec{x},t,t_{0})\Big{)}^{aa^{\prime}}_{\alpha\alpha^{\prime}}\Big{(}C\gamma^{i}S^{s}_{w-p}(\vec{x},t,t_{0})\gamma_{5}C\Big{)}^{bb^{\prime}}_{\alpha\alpha^{\prime}}\Big{(}P_{+}\gamma^{i}\gamma_{5}S^{c}_{w-p}(\vec{x},t,t_{0})P_{+}T\Big{)}^{cc^{\prime}}, \tag{9}\] \[C_{22}(t,t_{0})= \sum_{\vec{x}}\epsilon^{abc}\epsilon^{a^{\prime}b^{\prime}c^{\prime}}\sum_{i,j=1,2,3}\Big{(}S^{l}_{w-p}(\vec{x},t,t_{0})\Big{)}^{aa^{\prime}}_{\alpha\alpha^{\prime}}\Big{(}C\gamma^{i}S^{s}_{w-p}(\vec{x},t,t_{0})\gamma^{j}C\Big{)}^{bb^{\prime}}_{\alpha\alpha^{\prime}}\Big{(}P_{+}\gamma^{i}\gamma_{5}S^{c}_{w-p}(\vec{x},t,t_{0})\gamma^{j}\gamma_{5}P_{+}T\Big{)}^{cc^{\prime}}, \tag{10}\]
where the subscripts \(\alpha,\alpha^{\prime}\) denote the Dirac indices and the superscripts \(a,b,c,a^{\prime},b^{\prime},c^{\prime}\) denote the color indices. In the above a trace over the Dirac indices of the charm quark factor has been taken, and we have adopted an unpolarized projector \(T=(1+\gamma^{0})/2\). One can generate the wall source propagators at several time slices \(t_{0}\) and then average the two-point functions constructed from these propagators, so that the statistical precision of the numerical results on different ensembles reaches the same level. The explicit statistics on each ensemble, including the number of gauge configurations and the measurements from different time slices on one configuration, are collected in Table 1.
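For readers who wish to check the index bookkeeping, the following is a minimal numpy sketch of the single-site contraction in Eq. (7); the Dirac-basis gamma-matrix convention and the random dummy propagators are assumptions of this illustration, not the conventions of the actual production code.

```python
import numpy as np

# Gamma matrices in the Dirac basis (an assumed convention for this sketch)
I2 = np.eye(2)
s2 = np.array([[0, -1j], [1j, 0]])
g0 = np.block([[I2, 0 * I2], [0 * I2, -I2]]).astype(complex)
g2 = np.block([[0 * I2, s2], [-s2, 0 * I2]]).astype(complex)
g5 = np.block([[0 * I2, I2], [I2, 0 * I2]]).astype(complex)
C = g0 @ g2                     # charge conjugation matrix, as in the text
Pp = (np.eye(4) + g0) / 2       # positive-parity projector P_+
T = (np.eye(4) + g0) / 2        # unpolarized projector T

# Levi-Civita tensor for the color indices
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

def c11_contraction(Sl, Ss, Sc):
    """Single-site contraction of Eq. (7); each propagator carries
    indices (Dirac, Dirac', color, color')."""
    Xs = np.einsum('ij,jkbe,kl->ilbe', C @ g5, Ss, g5 @ C)   # (C g5 S^s g5 C)
    Xc = np.einsum('ij,jkcf,kl,li->cf', Pp, Sc, Pp, T)       # Tr[P+ S^c P+ T]
    return -np.einsum('abc,def,xyad,xybe,cf->', eps, eps, Sl, Xs, Xc)

# Dummy random propagators just to exercise the index bookkeeping
rng = np.random.default_rng(0)
S = lambda: rng.standard_normal((4, 4, 3, 3)) + 1j * rng.standard_normal((4, 4, 3, 3))
print(c11_contraction(S(), S(), S()))
```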
One can extract the mixing parameters from a joint analysis of both the diagonal and off-diagonal elements of \(\mathcal{C}\). In the correlation functions, one can insert a complete set of mass eigenstates \(|n\rangle\) and convert them to local matrix elements. The mismatch between the flavor-basis baryonic currents and the mass eigenstates is then related to the mixing angle through the mixing pattern in Eq. (3). For example, the correlator \(C_{11}\) is given as:
\[C_{11}(t,t_{0})= \sum_{\vec{x}}\langle O_{p}^{\bar{3}}(\vec{x},t)\bar{O}_{w}^{\bar{3}}(\vec{0},t_{0})\rangle\] \[= \sum_{n=\Xi_{c},\Xi_{c}^{\prime},\cdots}\frac{e^{-m_{n}(t-t_{0})}}{(La)^{3}(2m_{n})}\langle 0|O_{p}^{\bar{3}}(\vec{0},0)|n\rangle\langle n|\bar{O}_{w}^{\bar{3}}(\vec{0},0)|0\rangle\] \[= \frac{1}{(La)^{3}}\left[\frac{e^{-m_{\Xi_{c}}(t-t_{0})}}{2m_{\Xi_{c}}}\langle 0|O_{p}^{\bar{3}}(\vec{0},0)|\Xi_{c}\rangle\langle\Xi_{c}|\bar{O}_{w}^{\bar{3}}(\vec{0},0)|0\rangle+\frac{e^{-m_{\Xi_{c}^{\prime}}(t-t_{0})}}{2m_{\Xi_{c}^{\prime}}}\langle 0|O_{p}^{\bar{3}}(\vec{0},0)|\Xi_{c}^{\prime}\rangle\langle\Xi_{c}^{\prime}|\bar{O}_{w}^{\bar{3}}(\vec{0},0)|0\rangle+\cdots\right]. \tag{11}\]
From this expression one can in general determine the mass of the ground state at large time separations \(t\). The higher excited-state contributions are greatly suppressed in this regime, and the correlation function becomes
\[C_{11}(t,t_{0})= \frac{1}{\left(La\right)^{3}}\left[\frac{e^{-m_{\Xi_{c}}(t-t_{0})}}{2m_{\Xi_{c}}}\langle 0|O_{p}^{\bar{3}}(\vec{0},0)|\Xi_{c}\rangle\langle\Xi_{c}|\bar{O}_{w}^{\bar{3}}(\vec{0},0)|0\rangle+\frac{e^{-m_{\Xi_{c}^{\prime}}(t-t_{0})}}{2m_{\Xi_{c}^{\prime}}}\langle 0|O_{p}^{\bar{3}}(\vec{0},0)|\Xi_{c}^{\prime}\rangle\langle\Xi_{c}^{\prime}|\bar{O}_{w}^{\bar{3}}(\vec{0},0)|0\rangle\right]\] \[= \frac{1}{\left(La\right)^{3}}\left[\frac{e^{-m_{\Xi_{c}}(t-t_{0})}}{2m_{\Xi_{c}}}\left\langle 0\right|O_{p}^{\bar{3}}(\vec{0},0)\left(\left|\Xi_{c}^{\bar{3}}\right\rangle\cos\theta+\left|\Xi_{c}^{6}\right\rangle\sin\theta\right)\left(\left\langle\Xi_{c}^{\bar{3}}\right|\cos\theta+\left\langle\Xi_{c}^{6}\right|\sin\theta\right)\bar{O}_{w}^{\bar{3}}(\vec{0},0)\left|0\right\rangle+\cdots\right]\] \[\equiv A_{p}A_{w}^{\dagger}\left[\frac{\cos^{2}\theta}{2m_{\Xi_{c}}}e^{-m_{\Xi_{c}}(t-t_{0})}+\frac{\sin^{2}\theta}{2m_{\Xi_{c}^{\prime}}}e^{-m_{\Xi_{c}^{\prime}}(t-t_{0})}\right], \tag{12}\]

where the \(t\)-independent parameter \(A_{p/w}=(La)^{-3/2}\langle 0|O_{p/w}^{\bar{3}}(\vec{0},0)|\Xi_{c}^{\bar{3}}\rangle\). In the second step, we have used the mixing matrix in Eq. (3). Similarly, the other matrix elements can be expressed as

\[C_{12}(t,t_{0})= \sum_{\vec{x}}\langle O_{p}^{\bar{3}}(\vec{x},t)\bar{O}_{w}^{6}(\vec{0},t_{0})\rangle=A_{p}B_{w}^{\dagger}\left[\frac{\cos\theta\sin\theta}{2m_{\Xi_{c}}}e^{-m_{\Xi_{c}}(t-t_{0})}-\frac{\cos\theta\sin\theta}{2m_{\Xi_{c}^{\prime}}}e^{-m_{\Xi_{c}^{\prime}}(t-t_{0})}\right], \tag{13}\] \[C_{21}(t,t_{0})= \sum_{\vec{x}}\langle O_{p}^{6}(\vec{x},t)\bar{O}_{w}^{\bar{3}}(\vec{0},t_{0})\rangle=B_{p}A_{w}^{\dagger}\left[\frac{\cos\theta\sin\theta}{2m_{\Xi_{c}}}e^{-m_{\Xi_{c}}(t-t_{0})}-\frac{\cos\theta\sin\theta}{2m_{\Xi_{c}^{\prime}}}e^{-m_{\Xi_{c}^{\prime}}(t-t_{0})}\right], \tag{14}\] \[C_{22}(t,t_{0})= \sum_{\vec{x}}\langle O_{p}^{6}(\vec{x},t)\bar{O}_{w}^{6}(\vec{0},t_{0})\rangle=B_{p}B_{w}^{\dagger}\left[\frac{\sin^{2}\theta}{2m_{\Xi_{c}}}e^{-m_{\Xi_{c}}(t-t_{0})}+\frac{\cos^{2}\theta}{2m_{\Xi_{c}^{\prime}}}e^{-m_{\Xi_{c}^{\prime}}(t-t_{0})}\right], \tag{15}\]

with \(B_{p/w}=(La)^{-3/2}\langle 0|O_{p/w}^{6}(\vec{0},0)|\Xi_{c}^{6}\rangle\). It should be noticed that the lattice artifacts generated by the wall source and point sink have been collected into the local matrix elements \(A_{p/w}\) and \(B_{p/w}\), and would not contaminate the energy eigenstates and their mixing. As illustrated in Fig. 1, the ratios \(C_{12}/C_{21}\) are consistent with 1 for all the ensembles used in this calculation, regardless of the pion masses and lattice spacings. This observation indicates that the color-spatial factor introduced by the wall source contributes an overall factor, which is independent of the mixing in the flavor space.

\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline Ensemble & \(\beta\) & \(L^{3}\times T\) & \(a\) (fm) & \(m_{l}^{\rm b}\) & \(m_{s}^{\rm b}\) & \(m_{c}^{\rm b}\) & \(m_{\pi}\) (MeV) & \(N_{\rm meas}\) \\ \hline C11P14L & & \(48^{3}\times 96\) & & \(-0.2825\) & \(-0.2310\) & \(0.4800\) & \(135\) & \(203\times 48\) \\ C11P22M & \(6.20\) & \(32^{3}\times 64\) & \(0.108\) & \(-0.2790\) & \(-0.2310\) & \(0.4800\) & \(222\) & \(451\times 20\) \\ C11P29S & & \(24^{3}\times 72\) & & \(-0.2770\) & \(-0.2315\) & \(0.4780\) & \(284\) & \(432\times 26\) \\ \hline C08P30S & \(6.41\) & \(32^{3}\times 96\) & \(0.080\) & \(-0.2295\) & \(-0.2010\) & \(0.2326\) & \(297\) & \(653\times 26\) \\ \hline C06P30S & \(6.72\) & \(48^{3}\times 144\) & \(0.055\) & \(-0.1850\) & \(-0.1687\) & \(0.0770\) & \(312\) & \(136\times 80\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Parameters of the ensembles used in this work, including the gauge coupling \(\beta=10/g^{2}\), the 4-dimensional volume \(L^{3}\times T\), lattice spacing \(a\), bare quark masses \(m_{l,s,c}^{\rm b}\), pion mass \(m_{\pi}\) and total measurements \(N_{\rm meas}\). The total measurements equal the number of gauge configurations times the measurements from different time slices on one configuration.
## IV Mixing angle from joint fit
To extract the masses and the mixing angle from the correlation functions, one can apply a joint fit based on the parametrization of the correlation function matrix in Eqs. (12)-(15), with parameters \(A\), \(B\), \(m_{\Xi_{c}}\), \(m_{\Xi_{c}^{\prime}}\) and \(\theta\), at \(t_{0}=0\).
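A minimal sketch of such a joint fit is shown below. It assumes real-valued overlap factors, uncorrelated errors (the actual analysis uses a correlated \(\chi^{2}\)), and fixes the wall amplitude \(A_{w}=1\) to remove the overall scaling degeneracy among the four overlap factors; the synthetic data and starting values are placeholders.

```python
import numpy as np
from scipy.optimize import least_squares

def corr_matrix(t, Ap, Bp, Bw, m1, m2, theta):
    """Eqs. (12)-(15) at t0 = 0, with A_w fixed to 1 (see lead-in)."""
    c, s = np.cos(theta), np.sin(theta)
    e1 = np.exp(-m1 * t) / (2.0 * m1)
    e2 = np.exp(-m2 * t) / (2.0 * m2)
    return np.stack([
        Ap * (c * c * e1 + s * s * e2),        # C11, Eq. (12)
        Ap * Bw * c * s * (e1 - e2),           # C12, Eq. (13)
        Bp * c * s * (e1 - e2),                # C21, Eq. (14)
        Bp * Bw * (s * s * e1 + c * c * e2),   # C22, Eq. (15)
    ])

def residuals(p, t, data, err):
    return ((corr_matrix(t, *p) - data) / err).ravel()

# Synthetic placeholder data; the real input is the measured 2x2 matrix
t = np.arange(11, 27).astype(float)          # fit window in lattice units
truth = (1.0, 0.8, 0.9, 1.35, 1.40, 0.021)   # Ap, Bp, Bw, m1, m2, theta (rad)
data = corr_matrix(t, *truth)
err = 0.01 * np.abs(data)
fit = least_squares(residuals, (1, 1, 1, 1.3, 1.5, 0.1), args=(t, data, err))
print("theta = %.3f deg" % np.degrees(fit.x[-1]))
```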
Figure 1: Ratios of \(C_{12}\) and \(C_{21}\) as functions of \(t\) on different ensembles. Their values being consistent with 1 indicates that the lattice artifacts generated by the wall source are independent of the mixing in flavor space.
Figure 2: Chiral and continuum extrapolations of \(\theta\) based on the fit results of Eq.(16) as a function of \(m_{\pi}\). The extrapolated results from chiral fits at fixed lattice spacings are shown as the red, green, and blue dashed lines with error bands. The black line corresponds to the chiral and continuum extrapolation. The black point denotes the extrapolated values at the physical pion mass and continuum limit. Uncertainties in all data points are statistical only.
The choice of the fit \(t\)-range relies on both fit quality and physical considerations: the starting time slice \(t_{\rm min}\), after which the data points are included in the fit, must be chosen such that contributions from higher excited states are sufficiently suppressed while keeping the statistical uncertainties as small as possible. The \(t_{\rm max}\) are determined by avoiding the contamination from backward-propagating states under the periodic boundary condition and by avoiding too many degrees of freedom in the \(\chi^{2}\) fit, which would lead to a poor estimate. The optimal choices of the fit range differ between the ensembles, so we determine them independently to obtain a reasonable description in the joint matrix fit.
Table 2 includes the fitted values of \(m_{\Xi_{c}}\), \(m_{\Xi_{c}^{\prime}}\), \(\theta\) and their statistical uncertainties, together with the correlated, reduced \(\chi^{2}\)-values (\(\chi^{2}\)/d.o.f) and fit ranges \(t_{\rm min}-t_{\rm max}\) on each ensemble. Among all fits, we evaluate the fit quality by requiring \(\chi^{2}\)/d.o.f \(\lesssim 1\) and the smallest uncertainties for the parameters. As an example, Fig. 3 shows the correlated joint fit results on C11P14L, which are fully consistent with the data of the correlation functions.
After extracting the mixing angles from the ensembles at different pion masses and lattice spacings, we can extrapolate these results to the physical values of the quark masses and the continuum limit. We perform the chiral and continuum extrapolation through the following ansatz:
\[\theta(m_{\pi},a) =\theta_{\rm phy}+c_{1}\left(m_{\pi}^{2}-m_{\pi,\rm phy}^{2} \right)+c_{2}a^{2},\] \[m_{n}(m_{\pi},a) =m_{n,\rm phy}+c_{1}\left(m_{\pi}^{2}-m_{\pi,\rm phy}^{2}\right)+c _{2}a^{2}. \tag{16}\]
These extrapolations are performed at next-to-leading order in the chiral expansion. We also include a quadratic dependence on the lattice spacing. In order to estimate the systematic uncertainties associated with this truncation, we add higher-order analytic terms to the fit functions, such as a quartic term \(m_{\pi}^{4}\) in the chiral expansion or an \(a^{3}\) term in the continuum extrapolation. The latter, which may arise from heavy-quark discretization errors, gives a sizable systematic error. In practice, we introduce these terms into the fit functions of Eq. (16) separately and calculate the extrapolated masses and mixing angle from the new higher-order fits. The extrapolated results, with both statistical and systematic uncertainties from the higher-order analytic terms, are collected in Table 2.
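As an illustration, the chiral-continuum fit of Eq. (16) can be set up as below, using the per-ensemble values from Table 2; the physical pion mass constant and the use of an uncorrelated curve_fit are simplifying assumptions of this sketch.

```python
import numpy as np
from scipy.optimize import curve_fit

M_PI_PHYS = 0.135  # GeV, physical pion mass (an assumption of this sketch)

def ansatz(x, theta_phy, c1, c2):
    """Eq. (16): theta(m_pi, a) = theta_phy + c1 (m_pi^2 - m_pi,phy^2) + c2 a^2."""
    m_pi, a = x
    return theta_phy + c1 * (m_pi**2 - M_PI_PHYS**2) + c2 * a**2

# Per-ensemble inputs from Table 2 (m_pi in GeV, a in fm, theta in degrees)
m_pi = np.array([0.135, 0.222, 0.284, 0.297, 0.312])
a    = np.array([0.108, 0.108, 0.108, 0.080, 0.055])
th   = np.array([1.083, 0.988, 1.002, 1.080, 1.021])
dth  = np.array([0.030, 0.049, 0.050, 0.042, 0.067])

popt, pcov = curve_fit(ansatz, (m_pi, a), th, sigma=dth, absolute_sigma=True)
print(f"theta_phy = {popt[0]:.3f} +/- {np.sqrt(pcov[0, 0]):.3f} deg")
```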
With the extrapolation uncertainties taken into account, we find the masses for the \(\Xi_{c}\) and \(\Xi_{c}^{\prime}\) are both consistent with the experimental data, while the mixing angle is:
\[\theta=(1.200\pm 0.090\pm 0.020)^{\circ}. \tag{17}\]
## V Mixing angle from generalized eigenvalue problem
Besides the correlated fit, the mixing angle can be also extracted by solving the generalized eigenvalue problem (GEVP) [20; 21; 22; 23; 24]
\[\mathcal{C}(t)v_{n}(t)=\lambda_{n}(t)\mathcal{C}(t_{r})v_{n}(t), \tag{18}\]
where \(n\) labels the states \(\Xi_{c},\Xi_{c}^{\prime}\), and \(\lambda_{n}(t)\) is the eigenvalue, which obeys the boundary condition \(\lambda_{n}(t_{r})=1\).
\begin{table}
\begin{tabular}{|c|c c c c c|} \hline & \(m_{\Xi_{c}}\) (GeV) & \(m_{\Xi_{c}^{\prime}}\) (GeV) & \(\theta\) (\({}^{\circ}\)) & \(\chi^{2}\)/d.o.f & fit range (fm) \\ \hline C11P14L & 2.4256(19) & 2.5196(22) & 1.083(30) & 0.96 & \(1.19-2.81\) \\ C11P22M & 2.4380(27) & 2.5351(30) & 0.988(49) & 1.0 & \(1.19-2.92\) \\ C11P29S & 2.4587(27) & 2.5536(29) & 1.002(50) & 1.1 & \(1.19-3.24\) \\ C08P30S & 2.4753(21) & 2.5809(26) & 1.080(42) & 0.95 & \(1.20-2.40\) \\ C06P30S & 2.4695(37) & 2.5815(48) & 1.021(67) & 1.2 & \(1.32-2.40\) \\ \hline Extrapolated & \(2.4380(68)_{\rm stat}(403)_{\rm syst}\) & \(2.5562(74)_{\rm stat}(422)_{\rm syst}\) & \(1.20(9)_{\rm stat}(2)_{\rm syst}\) & & \\ Exp. data [19] & \(2.46794_{-0.00020}^{+0.00017}\) & \(2.5784\pm 0.0005\) & -- & & \\ \hline \end{tabular}
\end{table}
Table 2: Results of masses and mixing angles from correlated matrix fits on different ensembles, together with \(\chi^{2}\)/d.o.f and fit ranges \(t_{\rm min}-t_{\rm max}\) for each fit. The extrapolated results follow, with the experimental data also listed for comparison.
Figure 3: Joint fit of the \(2\times 2\) correlation function matrix using Eqs. (12)-(15). The data shown here are from C11P14L; the colored bands indicate the time ranges and fit results of each matrix element.
There is an orthogonality condition for the eigenvectors of different states \((n,n^{\prime})\): \(v_{n^{\prime}}^{\dagger}\mathcal{C}(t_{r})v_{n}=\delta_{nn^{\prime}}\). This orthogonality condition allows us to extract the spectrum of the nearly degenerate states \(\Xi_{c}\) and \(\Xi_{c}^{\prime}\). The eigenvalues \(\lambda_{n}\) and eigenvectors \(v_{n}\) are usually solved independently on each time slice \(t\) from the source at \(t_{0}\); for nearby masses, however, the ordering of the eigenvalues might fluctuate from time slice to time slice. We therefore choose a reference time slice \(t_{r}\), on which the reference eigenvectors are defined as \(v_{n,\text{ref}}\), and compare the eigenvectors on other time slices by finding the maximum value of \(v_{n^{\prime},\text{ref}}^{\dagger}\mathcal{C}(t_{r})v_{n}\), which associates a state \(n\) with a reference state \(n^{\prime}\).
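A sketch of this procedure is given below, assuming a Hermitian, positive-definite \(\mathcal{C}(t_{r})\) so that scipy's generalized eigensolver applies; the choice of the neighboring time slice for the reference eigenvectors is an assumption of this illustration.

```python
import numpy as np
from scipy.linalg import eigh

def solve_gevp(C, tr):
    """Solve C(t) v = lambda(t) C(t_r) v for each t, matching states
    across time slices by maximal overlap with reference eigenvectors.
    C: array of shape (T, 2, 2), assumed Hermitian with C(t_r) positive
    definite; eigh normalizes the eigenvectors so v^dag C(t_r) v = 1."""
    Cref = C[tr]
    _, v_ref = eigh(C[tr + 1], Cref)            # reference eigenvectors
    lams, vecs = [], []
    for Ct in C:
        lam, v = eigh(Ct, Cref)
        # associate state n with reference n' via max |v_ref^dag C(t_r) v|
        overlap = np.abs(v_ref.conj().T @ Cref @ v)
        order = np.argmax(overlap, axis=1)
        lams.append(lam[order])
        vecs.append(v[:, order])
    return np.array(lams), np.array(vecs)
```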
As discussed above, the two-point correlation function can be decomposed into a time-independent factor and \(e^{-m_{n}t}\) at large times. In the time-independent factor, the matrix element \(\langle 0|O^{i}|n\rangle\) is related to the eigenvectors, which contain the mixing between the flavor and energy eigenstates, while the effective masses of the energy eigenstates can be extracted from the exponential behavior of the eigenvalues \(\lambda_{n}\). The fit function for the masses can be expressed as
\[\lambda_{n}(t)=c_{0}e^{-m_{n}(t-t_{r})}\left(1+c_{1}e^{-\Delta E(t-t_{r})} \right), \tag{19}\]
where the fit parameter \(c_{0}\) collects the contributions from the time-independent factors, \(m_{n}\) denotes the effective mass of \(n=(\Xi_{c},\Xi_{c}^{\prime})\), and \(c_{1}\) and \(\Delta E\) describe the higher excited-state contributions, which can be neglected at large times.
The extraction of the effective masses is illustrated in the upper panel of Fig. 4. The effective masses \(m_{n}=\log\left(\lambda_{n}(t)/\lambda_{n}(t+1)\right)\) are calculated from the eigenvalues of the GEVP at reference time slice \(t_{r}=5a\). We also investigate the \(t_{r}\)-dependence of the GEVP method. As shown in Fig. 5, we choose several reference time slices to extract and fit the effective masses and find that the choice of \(t_{r}\) hardly affects the results for the effective masses. The fit results for the masses are collected in Table 3.
In addition, the final results also depend on the fit range. Instead of conservatively taking the full difference between the variations of fit ranges as a systematic error, it has been advocated that the technique of Bayesian model averaging [25] can weight the selections of different fit ranges. This approach allows for a fully rigorous estimation of probability distributions for parameters of interest by combining results from several fits. We adopt this method to investigate the dependence on the fit range, in particular on \(t_{\text{min}}\). As shown in the lower panel of Fig. 4, the weight factors Pr and standard \(p\)-values describe the fit quality for ranges starting from \(t_{\text{min}}\), and the probability-weighted average of fit results from different \(t_{\text{min}}\) contributes to the final result, which corresponds to the colored band in the upper panel. More details are shown in the Appendix.
After diagonalizing the correlation function matrix \(\mathcal{C}\), the GEVP eigenvalues \(\lambda_{n}\) correspond to the energy eigenstates, so the mixing effects are collected in the GEVP eigenvectors \(v_{n}\). From the parametrization in Eqs. (12)-(15), the normalized eigenvectors can be expressed as
\[v_{1} =\left(1+\frac{A_{p}^{2}\cot^{2}\theta}{B_{p}^{2}}\right)^{-1/2}\left(\begin{array}{c}\frac{A_{p}}{B_{p}}\cot\theta\\ 1\end{array}\right), \tag{20}\] \[v_{2} =\left(1+\frac{A_{p}^{2}\tan^{2}\theta}{B_{p}^{2}}\right)^{-1/2}\left(\begin{array}{c}-\frac{A_{p}}{B_{p}}\tan\theta\\ 1\end{array}\right). \tag{21}\]
Therefore the mixing angle \(\theta\) can be extracted from \(v_{1,2}\).
The fit results on different ensembles are shown in Table 3. As discussed above, we can also extrapolate
Figure 5: Reference time \(t_{r}\)-dependence of \(\lambda_{\Xi_{c}}\) on C11P14L, with \(t_{r}=\{5,6,7,8\}a\). It indicates that the selection of \(t_{r}\) hardly affects the extraction of effective masses.
Figure 4: Upper panel: effective masses from eigenvalues \(\lambda_{n}(t)\) (\(n=\Xi_{c},\Xi_{c}^{\prime}\)), and model averaging results on C11P14L. The eigenvalues are obtained by solving the GEVP of Eq. (18) for the correlation function matrix at \(t_{r}=5a\). Lower panel: the model weight factors (solid lines) and standard \(p\)-values (dashed lines), which reflect the fit quality.
the mixing angles and masses from different ensembles to their physical values, as illustrated in Fig. 6. The extrapolated results as well as the PDG-averaged ones are also listed in Table 3. One can see that the extrapolated results agree with those from the correlated matrix fit. The estimation of systematic uncertainties is similar to the discussion above.
With the extrapolation uncertainties taken into account, we find the mixing angle is:
\[\theta=(1.22\pm 0.13\pm 0.01)^{\circ}, \tag{22}\]
which is consistent with Eq. (17) within the uncertainty. This value is much smaller than the model result extracted from experimental data on weak decays [9; 10; 11; 26], and thus insufficient to explain the large SU(3) symmetry breaking effects found in charmed baryon decays. Other mechanisms such as QED corrections should be included.
## VI Charm quark mass dependence
In HQET, the mixing between \(\Xi_{c}\) and \(\Xi^{\prime}_{c}\) would vanish in the heavy-quark limit. In this limit, the angular momentum \(J\) of the light quark pair becomes a conserved quantum number, hence the flavor eigenstates, in which the angular momentum of the light degrees of freedom takes a definite value, coincide with the energy eigenstates. In reality, the mixing occurs through finite quark mass corrections and thus is proportional to \(1/m_{c}\) [27; 28].
In this section, we also investigate the charm quark mass dependence of the mixing angle. In practice, we use the ensemble C11P29S to generate the correlation function matrices with several bare charm quark masses around the physical one. By applying the correlated matrix fits for each case, we obtain the results in Table 4. As one can see, the mixing angle decreases with increasing charm quark mass, which is consistent with our expectation.
In order to investigate the heavy quark mass dependence of the mixing angle, and furthermore to predict the behavior in the heavy quark limit, we employ a rough
Figure 6: Chiral and continuum extrapolations of \(\theta\) extracted from the GEVP method. The extrapolated results from chiral fits at fixed lattice spacings are shown as the red, green, and blue dashed lines with error bands. The black line corresponds to the chiral and continuum extrapolation. The black point denotes the extrapolated value at the physical pion mass and continuum limit. Errors in all data points are statistical only.
\begin{table}
\begin{tabular}{|c|c c|c|} \hline & \(m_{\Xi_{c}}\) (GeV) & \(m_{\Xi^{\prime}_{c}}\) (GeV) & \(\theta\) (\({}^{\circ}\)) \\ \hline C11P14L & 2.4274(33) & 2.5201(20) & 1.095(88) \\ C11P22M & 2.4322(39) & 2.5321(44) & 1.075(76) \\ C11P29S & 2.4622(59) & 2.5603(79) & 1.094(60) \\ C08P30S & 2.4763(11) & 2.5868(13) & 1.225(52) \\ C06P30S & 2.4682(38) & 2.5905(39) & 1.212(55) \\ \hline Extrapolated & 2.428(11)\({}_{\rm fit}\)(44)\({}_{\rm ext.}\) & \(2.547(12)\)\({}_{\rm fit}\)(36)\({}_{\rm ext.}\) & \(1.22(13)\)\({}_{\rm fit}\)(1)\({}_{\rm ext.}\) \\ Exp. data [19] & \(2.46794^{+0.00017}_{-0.00020}\) & \(2.5784\pm 0.0005\) & -- \\ \hline \end{tabular}
\end{table}
Table 3: Results of masses and mixing angles from fitting the GEVP eigenvalues and eigenvectors with the model averaging approach [25]. The errors from the fit contain both statistical errors and systematic ones associated with the choice of fit range; the second errors denote systematic ones from the extrapolation. The extrapolated results and the PDG-averaged ones are also listed for comparison.
Figure 7: Mixing angle \(\theta\) as a function of \(1/m_{\Xi_{c}}\). The red data points denote the results based on unphysical charm quark masses, and the black data point denotes the physical one. Uncertainties in all data points are statistical only.
fit ansatz for the mixing angle \(\theta\) as a function of \(m_{\Xi_{c}}\)
\[\theta=\frac{B_{1}}{m_{\Xi_{c}}}+\frac{B_{2}}{m_{\Xi_{c}}^{2}}. \tag{23}\]
The results obtained from the fit are \(B_{1}=-2.78(52)\) GeV and \(B_{2}=12.9(1.3)\) GeV\({}^{2}\) with \(\chi^{2}/\)d.o.f = 0.11. It should be noticed that the values of \(B_{1,2}\) may suffer from sizable discretization effects, given the single ensemble used for this investigation.
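Using the values in Table 4, the fit of Eq. (23) can be reproduced along the following lines; treating the data points as uncorrelated is an assumption of this sketch.

```python
import numpy as np
from scipy.optimize import curve_fit

def theta_of_mass(m, B1, B2):
    """Eq. (23): theta = B1/m + B2/m^2, with m = m_{Xi_c} in GeV."""
    return B1 / m + B2 / m**2

# m_{Xi_c} (GeV) and theta (deg) at varied bare charm masses, from Table 4
m = np.array([2.0987, 2.2380, 2.3594, 2.4069, 2.4587,
              2.4793, 2.5878, 2.6898, 2.7859])
th = np.array([1.639, 1.349, 1.116, 1.049, 1.002,
               0.969, 0.847, 0.751, 0.674])
dth = np.array([0.075, 0.073, 0.049, 0.046, 0.050,
                0.053, 0.047, 0.042, 0.039])

popt, pcov = curve_fit(theta_of_mass, m, th, sigma=dth, absolute_sigma=True)
print("B1 = %.2f GeV, B2 = %.1f GeV^2" % tuple(popt))
```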
## VII Summary
In summary, we have explored the \(\Xi_{c}-\Xi_{c}^{\prime}\) mixing by computing the two-point correlation functions of the operators interpolating the \(\Xi_{c}\) and \(\Xi_{c}^{\prime}\) baryons in lattice QCD. Based on the lattice data, we have adopted two independent methods to determine the mixing angle between \(\Xi_{c}\) and \(\Xi_{c}^{\prime}\). A direct analysis of the correlation functions was conducted and a small mixing angle was found. This observation is also confirmed by the analysis based on solving the generalized eigenvalue problem.
After making the chiral and continuum extrapolation, we found that the mixing angle \(\theta\) is \(1.2^{\circ}\pm 0.1^{\circ}\), which is significantly smaller than the ones from other methods. This indicates that the \(\Xi_{c}-\Xi_{c}^{\prime}\) mixing is insufficient to account for the large SU(3) symmetry-breaking effects found in weak decays of charmed baryons, and further mechanisms are required. A combined investigation of \(\Lambda_{c}\) and \(\Xi_{c}\) decay form factors from lattice QCD that includes the mixing effects is underway.
## Acknowledgement
We are grateful to Fengkun Guo, Xiao-Gang He, Jiajun Wu, and Qiang Zhao for useful discussions. We thank the anonymous referee for drawing our attention to the model averaging approach. This work is supported in part by the Natural Science Foundation of China under grant Nos. U2032102, 12061131006, 12125503, 12293060, 12293061, 12293062. The LQCD calculations were performed using the Chroma software suite [29] and QUDA [30; 31; 32] through the HIP programming model [33]. The computations in this paper were run on the Siyuan-1 cluster supported by the Center for High Performance Computing at Shanghai Jiao Tong University, and on the Advanced Computing East China Sub-center.
## Appendix A Model averaging method
Statistical modeling plays a crucial role in obtaining meaningful results from lattice field theory calculations. While these models are usually rooted in physics, multiple model variations may exist for the same lattice data. To account for the uncertainties associated with model selection, we employ the model averaging approach [25] to improve the stability of our results. This approach involves taking a probability-weighted average over all the model variations, providing a more comprehensive and less conservative treatment of the associated systematic errors.
In practice, we consider a set of models \(\{M\}\) which fit all data in the range \([\{\text{list of }t_{\text{min}}\},t_{\text{max}}]\), and combine the statistical results and relevant parameters obtained from fitting the models \(\{M\}\) to estimate the weight factor
\[\text{Pr}(M|D)\approx\exp\Bigl{[}-\frac{1}{2}\bigl{(}\chi_{\text{aug}}^{2}(\mathbf{a}^{*})+2k+2N_{\text{cut}}\bigr{)}\Bigr{]} \tag{24}\]
where \(\chi_{\text{aug}}^{2}(\mathbf{a}^{*})\) represents the standard best-fit augmented chi-squared, \(k\) corresponds to the number of fit parameters, and \(N_{\text{cut}}\) represents the number of removed data points. The normalized \(\text{Pr}(M|D)\) are used to estimate the fit qualities of models \(\{M\}\). The model-averaged values \(\langle a\rangle\) from the fit parameters \(\langle a\rangle_{M}\) can then be determined using the following procedure:
\[\langle a\rangle=\sum_{M}\langle a\rangle_{M}\text{Pr}(M|D), \tag{25}\]
and its error can be estimated by
\[\sigma=\sqrt{\sum_{M}\left[\sigma_{M}^{2}+\left(\langle a\rangle_{M}-\langle a \rangle\right)^{2}\right]\text{Pr}(M|D)}, \tag{26}\]
where the \(\left(\langle a\rangle_{M}-\langle a\rangle\right)^{2}\) term can be interpreted as the variability across the different models.
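Eqs. (24)-(26) translate directly into a short routine; the log-domain stabilization of the exponential weights is an implementation detail added here, not part of the formulas above.

```python
import numpy as np

def model_average(values, errors, chi2_aug, k, n_cut):
    """Probability-weighted average over fit models, Eqs. (24)-(26).
    values/errors: per-model estimates <a>_M and sigma_M;
    chi2_aug/k/n_cut: per-model augmented chi^2, parameter count,
    and number of removed data points."""
    logw = -0.5 * (np.asarray(chi2_aug) + 2 * np.asarray(k) + 2 * np.asarray(n_cut))
    w = np.exp(logw - logw.max())          # stabilized exponentials
    w /= w.sum()                           # normalized Pr(M|D)
    values, errors = np.asarray(values), np.asarray(errors)
    mean = np.sum(w * values)                                   # Eq. (25)
    sigma = np.sqrt(np.sum(w * (errors**2 + (values - mean)**2)))  # Eq. (26)
    return mean, sigma, w
```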
In our analysis in Sec. V, we set \(t_{\text{min}}=[6,17]a\) and \(t_{\text{max}}=20a\) for the C11 ensembles, \(t_{\text{min}}=[4,20]a\) and \(t_{\text{max}}=24a\) for C08P30S, and \(t_{\text{min}}=[9,24]a\) and \(t_{\text{max}}=28a\) for C06P30S. Illustrations of the fit results and the fit quality, annotated with the normalized weight factor Pr and the \(p\)-value, are collected in Fig. 4 as well as Fig. 8.
\begin{table}
\begin{tabular}{|c|c c c c c c c c c|} \hline \(m_{c}^{\text{b}}\) & 0.2 & 0.3 & 0.4 & 0.44 & \(\mathbf{0.478}\) & 0.5 & 0.6 & 0.7 & 0.8 \\ \hline \(m_{\Xi_{c}}\) (GeV) & 2.0987(25) & 2.2380(28) & 2.3594(26) & 2.4069(26) & \(\mathbf{2.4587(27)}\) & 2.4793(29) & 2.5878(30) & 2.6898(30) & 2.7859(31) \\ \(m_{\Xi_{c}^{\prime}}\) (GeV) & 2.1834(24) & 2.3249(29) & 2.4514(24) & 2.4999(24) & \(\mathbf{2.5536(29)}\) & 2.5718(29) & 2.6823(29) & 2.7859(30) & 2.8835(30) \\ \(\theta\) (\({}^{\circ}\)) & 1.639(75) & 1.349(73) & 1.116(49) & 1.049(46) & \(\mathbf{1.002(50)}\) & 0.969(53) & 0.847(47) & 0.751(42) & 0.674(39) \\ \hline \end{tabular}
\end{table}
Table 4: Results of \(m_{\Xi_{c}}\), \(m_{\Xi_{c}^{\prime}}\) and \(\theta\) at different bare charm quark masses. The bold ones correspond to a physical charm quark. These results are extracted from the correlated fit of the correlation function matrix on C11P29S. The total measurements of non-physical mass cases are \(N_{\text{meas}}=432\times 14\). Uncertainties in all results are statistical only. |
2309.12946 | Unraveling Medium-Range Order and Melting Mechanism of ZIF-4 under High
Temperature | Glass formation in Zeolitic Imidazolate Frameworks (ZIFs) has garnered
significant attention in the field of Metal-Organic Frameworks (MOFs) in recent
years. Numerous works have been conducted to investigate the microscopic
mechanisms involved in the melting-quenching process of ZIFs. Understanding the
density variations that occur during the melting process of ZIFs is crucial for
comprehending the origins of glass formation. However, conducting large-scale
simulations has been challenging due to limitations in computational resources.
In this work, we utilized deep learning methods to accurately construct a
potential function that describes the atomic-scale melting behavior of Zeolitic
Imidazolate Framework-4 (ZIF-4). The results revealed the spatial heterogeneity
associated with the formation of low-density phases during the melting process
of ZIF-4. This work discusses the advantages and limitations of applying deep
learning simulation methods to complex structures like ZIFs, providing valuable
insights for the development of machine learning approaches in designing
Metal-Organic Framework glasses. | Zuhao Shi, Bin Liu, Yuanzheng Yue, Arramel Arramel, Neng Li | 2023-09-22T15:46:39Z | http://arxiv.org/abs/2309.12946v1 | # Unraveling Medium-Range Order and Melting Mechanism of ZIF-4 under High Temperature
###### Abstract
**ABSTRACT:** Glass formation in Zeolitic Imidazolate Frameworks (ZIFs) has garnered significant attention in the field of Metal-Organic Frameworks (MOFs) in recent years. Numerous works have been conducted to investigate the microscopic mechanisms involved in the melting-quenching process of ZIFs. Understanding the density variations that occur during the melting process of ZIFs is crucial for comprehending the origins of glass formation. However, conducting large-scale simulations has been challenging due to limitations in computational resources. In this work, we utilized deep learning methods to accurately construct a potential function that describes the atomic-scale melting behavior of Zeolitic Imidazolate Framework-4 (ZIF-4). The results revealed the spatial heterogeneity associated with the formation of low-density phases during the melting process of ZIF-4. This work discusses the advantages and limitations
of applying deep learning simulation methods to complex structures like ZIFs, providing valuable insights for the development of machine learning approaches in designing Metal-Organic Framework glasses.
_Keywords: ZIF-4 Glass; Medium-Range Order; Glass Formation Ability; Phase Transition; Deep Learning Accelerated Molecular Dynamics_
## Introduction
Metal-organic frameworks (MOFs) have attracted significant attention in recent years,[1] particularly regarding their response to changes in temperature and pressure. The emergence of MOF glasses has considerably expanded the research boundaries within the field of MOFs.[2] Both MOF crystals and glasses demonstrate great potential in gas adsorption,[3; 4] photoelectric applications[5; 6] and other fields.[7; 8; 9] As MOF glass research is at an early stage, there is an urgent need to unravel the structure-property correlation (SPC) of MOF glasses, which lack ordered structure at various length scales.[10]
Understanding how the atomic arrangement within MOF glasses relates to their unique properties is crucial for advancing their development and unlocking their full potential. Besides experimental methods, atomistic simulation is a powerful tool to unveil the SPC of MOF glasses. Several simulation studies employing both first-principles methods and molecular dynamics have yielded insights into the microstructure of MOF glasses.[11; 12; 13; 14] Such studies have played a crucial role in advancing our understanding of MOF glasses and guiding further experimental efforts.
In the amorphization of MOFs, first-principles molecular dynamics (FPMD) has revealed the exchange behavior of organic ligands in MOF glasses.[11; 15] The steric hindrance of the ligands has a strong influence on the glass-forming ability.[16] Reactive force-field methods have also contributed insight into the medium-range structure evolution.[17; 18; 19; 20] The optical properties of MOF glasses have been linked to their electronic structures by utilizing large-scale amorphous MOF models.[21; 22]
Highly accurate first-principles simulations are computationally expensive for large-scale systems, so the accessible spatial and time scales are often limited. Classical molecular dynamics simulations, in contrast, depend strongly on the reliability of the potential function for flexible Zeolitic Imidazolate Frameworks (ZIFs), especially for the melting process of several ZIFs.
The ReaxFF method is generally recognized for striking a good balance between accuracy and efficiency in simulations[17]. This method combines elements of classical force fields and quantum mechanical calculations, which allows for the treatment of chemical reactions and bond breaking/formation. It can make a great contribution to revealing the SPC in glassy ZIFs[23, 24, 25]. The applicability of a generic model, developed using a large number of static structures of ZIFs, needs to be verified for specific phase-transition processes of ZIFs under certain temperature and pressure conditions[26].
Recently, machine learning methods have made substantial progress in the development of simulations[27, 28]. Among the various machine learning methods, the combination of deep learning and molecular dynamics has accelerated the exploration of new structures[29]. For complex chemical structures, such as high-entropy alloys and large protein molecules, deep-learning potential molecular dynamics (DPMD) has revealed their SPC on larger spatial and time scales[30, 31, 32, 33]. Recent studies have shown that DPMD, when trained with a well-curated dataset, can yield accurate predictions for dynamic properties such as the viscosity of alloys[34]. DPMD combines elements of machine learning and molecular dynamics to capture the complex dynamics of materials.
In this work, a deep-learning potential function was utilized to simulate the melting process of three representative ZIFs. Thanks to the latest version of the open-source DeepMD-kit[35], the deep potential molecular dynamics (DPMD) simulations provided high accuracy comparable to density functional theory (DFT) results. Moreover, DPMD simulations achieved a significant breakthrough in terms of spatial and temporal scales, with an increase of three orders of magnitude. The highly efficient DPMD simulations revealed density variations associated with phase changes during the melting of ZIFs. Additionally, we explored the correlation between the heating velocity and the phase-transition point through the simulations. Finally, we discussed the challenges that DPMD currently faces in the study of amorphous ZIFs.
Figure 1: **Schematic of deep learning accelerated molecular dynamics.** The structural and energetic relationships obtained from first-principles calculations will be employed in the construction of deep learning potentials. The atomic structures will be decomposed into local environments, and the mapping relationships between the structure, energy, forces, and virial quantities will be established using deep learning techniques. The potential functions constructed through deep learning will be utilized for simulations at larger time and spatial scales in LAMMPS.
## Results and Discussions
In this work, the DeepMD-kit was used to construct the neural network (NN) potential, since this code has been continuously improved in recent years [35]. The entire workflow of this study is demonstrated in Fig. 1, where the FPMD that generated the training dataset is combined with the MD simulation using the deep-potential function. The detailed description of parameters in this workflow is depicted in Methods. A major advantage of DPMD is its ability to perform efficient molecular dynamics simulations with DFT accuracy. Using ZIF-4 as an example, Fig. 2 depicts the comparison of model size and the simulation performance of training data (FPMD) and DPMD. Compared to ab initio methods, DPMD simulations offer a significant advantage in terms of both spatial scale and time scales of the simulation. Models with approximately 8000 atoms can be employed to perform dynamical simulations at speeds of up to 0.5 ns/day by using deep learning potentials.
In terms of the size of the simulated system, the structure used in the DPMD of this work has lattice parameters exceeding 5 nm, meaning that the simulation involves a system of considerable size. Compared to the models used for ab initio simulations, whose lattice parameters are only about 1.5 nm, DPMD is certainly more suitable for a statistical description of the local structure over the medium range (5 \(\sim\) 10 Å). It can be stated that DPMD is 5000 times more efficient in operation than FPMD for a system of the same size. Considering both computational speed and the size of the simulated system, DPMD is thus more suitable than the ab initio method for studying the evolution of structures involving metastable states.
Figure 2: **Schematic of simulation structure.** (Left) the ZIF-4 structural model used in the training set (FPMD). (Right) the supercell structure used in DPMD. Considering the model size and computational speed, the DPMD simulation is 10\({}^{3}\) times more efficient than the FPMD.
It is known that most DFT calculations are performed for ground states or for structural optimization at room temperature, as in the QMOF database [36]. In our initial simulations, referred to as DPMD-Test, a restricted training set consisting of 20000 samples was utilized. These samples were extracted from trajectory files of ZIF-4 structures at a temperature of 300 K. We aim to assess the generalizability of the deep learning potential function, trained on structures sampled at a single temperature, by simulating structural deformations over a broader temperature range.
The optimized ZIF-4 structures were heated from 300 K to 2100 K at a heating velocity of 20 K/ps. As a result, the equilibrium structure of DPMD-Test was determined at 1200 K and compared to the FPMD simulation results. Fig. S1 shows the radial distribution function of the equilibrium structure in DPMD-Test at 1200 K. The total radial distribution function displays a peak in the range of 0-10 Å. By examining Fig. S2, this peak can be associated with the Zn-Zn atomic pairs. Upon examining the radial distribution functions of the different atomic pairs in Fig. S2, it is found that DPMD-Test and the VASP calculations exhibit consistent behavior for the atomic pairs with chemical interactions, such as the metal-ligand bonding and the intramolecular bonding within ligands. In contrast, DPMD-Test performed poorly in describing the medium-range structures. The results of DPMD-Test indicate that utilizing structures from a single temperature as the training seeds leads to poor performance in describing the medium-range structures. To solve this problem, we constructed a larger training set by incorporating structural information at different temperatures, and thereby
improved the performance and accuracy of the model.
To increase the applicability of the deep learning potential function at different temperatures, the training sets were extended to include all sets mentioned in the Methods part. In this attempt, more than 200,000 configurations from the ZIF-4 and ZIF-62 FPMD trajectories are considered. We refer to this
Figure 3: **Static characterization of generative structures under deep learning training potential functions.** (a) The total radial distribution function of DPMD-20K and FPMD simulations and (b) Zn-N pair distribution function of DPMD-20K. (c) The PMF of Zn-N and (d) linear regression of free energy change (\(\Delta\)F) at varied temperature in DPMD-20K.
new potential function as ATP-D3 (All Temperature Potential). Its prediction performance is shown in Fig. S3. The root-mean-square errors of energy, force, and virial for the three ZIFs are listed in Table 1. Compared with DPMD training results reported for other complex systems, such as high-entropy alloys or heterogeneous catalytic interfaces [37], we believe that the accuracy of the model is sufficient for simulating the structural evolution during the subsequent melting process.
In addition to the ZIF-4 and ZIF-62 structures included in the training set, we tested the structure of ZIF-8 at 300 K to examine the generalization of the potential function. The results demonstrate that the deviation of the energy prediction for ZIF-8 at 300 K is over 1%, which is approximately ten times higher than for ZIF-4 and ZIF-62. Due to the significant differences in topology and functional-group species across ZIF-4, ZIF-62, and ZIF-8, the structure-energy relationships constructed from the ZIF-4 structural descriptors cannot accurately describe the energy of ZIF-8. This suggests that, for a deep-learning potential function to be applicable to
\begin{table}
\begin{tabular}{c c c c} \hline \hline Datasets & Energy (meV/atom) & Force (eV/Å) & Virial (meV/atom) \\ \hline ZIF-4 (300 K) & 3.61 (0.10\%) & 1.14 & 20.5 \\ ZIF-4 (2500 K) & 3.37 (0.09\%) & 1.25 & 13.8 \\ ZIF-62 (300 K) & 4.41 (0.11\%) & 1.20 & 25.2 \\ ZIF-62 (2500 K) & 6.30 (0.15\%) & 1.30 & 15.7 \\ ZIF-8 (300 K) & 37.1 (1.12\%) & 2.02 & 16.1 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The errors of DPMD for the various datasets of the three ZIFs (ZIF-4, ZIF-62 and ZIF-8). The values in parentheses denote the relative error.
various ZIFs, as a generic force field like ReaxFF is,[17] a broader phase-space input containing different ligand and topology information is required. To evaluate the performance of the ATP, we performed a series of simulations at different heating velocities. We first performed the ZIF-4 heating process at 20 K/ps, since this heating velocity is very close to the settings in typical FPMD (15-20 K/ps).[11, 15, 16] We then conducted a statistical comparison of the distribution of the static structures. Fig. 3(a) shows the comparison of the radial distribution functions of the DPMD-20K and FPMD structures at 1200 K. Compared to DPMD-Test shown in Fig. S1, it can be observed that the structures of DPMD-20K and FPMD are almost identical in their radial distribution functions.
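For reference, a brute-force pair-distribution sketch for an orthorhombic periodic box is shown below; it is O(N²) in memory and is meant for single frames of moderate size, not as the production analysis code.

```python
import numpy as np

def radial_distribution(pos, box, r_max=10.0, nbins=200):
    """Total pair radial distribution function g(r) for an orthorhombic
    periodic box; pos in Angstrom, shape (N, 3); box = (Lx, Ly, Lz).
    Brute-force O(N^2): use a neighbor list for very large frames."""
    box = np.asarray(box, dtype=float)
    n = len(pos)
    d = pos[:, None, :] - pos[None, :, :]
    d -= box * np.round(d / box)                   # minimum-image convention
    r = np.linalg.norm(d, axis=-1)[np.triu_indices(n, k=1)]
    hist, edges = np.histogram(r, bins=nbins, range=(0.0, r_max))
    rc = 0.5 * (edges[1:] + edges[:-1])
    shell = 4 * np.pi * rc**2 * np.diff(edges)     # spherical shell volumes
    rho = n / np.prod(box)                         # number density
    return rc, hist / (shell * rho * n / 2)        # normalized g(r)
```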
The evolution of the structure with temperature was then examined. Fig. 3(b) shows the Zn-N pair distribution function (PRDF) of DPMD-20K at different temperatures. The
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Ref & \(\Delta\)U (kJ/mol) & \(\Delta\)S (J/mol/K) & \(\Delta\)F (840 K) (kJ/mol) & \(\Delta\)F (1800 K) (kJ/mol) \\ \hline Gaillac _et al._ (Ref 12) & 127 & 37 & 95.9 & 60.4 \\ Shi _et al._ (Ref 17) & 164.3 & 44.1 & 127.3 & 84.9 \\ DPMD-20K (This work) & 163.1 & 60.7 & 112.1 & 53.8 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The comparison in thermodynamic values between DPMD and FPMD. \(\Delta U\), \(\Delta S\) and \(\Delta F\) represent the changes of internal energy, entropy and free energy, respectively, following a van 't Hoff law \(\Delta F=\Delta U-T\Delta S\).
intersection between the first peak and the second peak indicates that partial melting can occur when the temperature rises above 1100 K. We calculate the potential of mean force (PMF) of the Zn-N pairs from the PRDF. Figs. 3(c) and 3(d) exhibit the temperature dependence of the activation free energy, while Table 2 shows the comparison between DPMD-20K and previous DFT calculations. The deviation between the two methods is smaller for the free energy, while DPMD yields a larger entropy change. According to the PMF fitting formula, the linear fit corresponding to DPMD-20K has a steeper slope than the previous DFT results, indicating that the free energy of DPMD-20K decreases faster with temperature than its counterparts.
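A sketch of this analysis chain is given below; the integration windows for the first well and the barrier are illustrative choices, not the ones used in this work.

```python
import numpy as np

R = 0.0083145  # gas constant in kJ/(mol K)

def pmf(g, T):
    """Potential of mean force W(r) = -k_B T ln g(r) from a PRDF."""
    return -R * T * np.log(np.clip(g, 1e-12, None))  # clip empty bins

def activation_free_energy(r, g, T, well=(1.8, 2.4), barrier=(2.4, 3.5)):
    """Delta F = W(barrier max) - W(first-well min); the r-windows
    (in Angstrom) are placeholders for the Zn-N pair."""
    w = pmf(g, T)
    w_min = w[(r >= well[0]) & (r < well[1])].min()
    w_max = w[(r >= barrier[0]) & (r < barrier[1])].max()
    return w_max - w_min

# van 't Hoff analysis: Delta F(T) = Delta U - T * Delta S (linear in T)
# temps = np.array([...]); dF = np.array([activation_free_energy(...), ...])
# slope, dU = np.polyfit(temps, dF, 1)   # intercept = Delta U
# dS = -slope                            # entropy change
```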
Due to the high computational efficiency of DPMD, we were able to simulate the heating process of the three ZIFs at slower heating velocities than the typical FPMD process. We chose ZIF-4 to explore the influence of heating velocity on the structures. To ensure that the ZIF-4 structures reach equilibrium at each temperature, we used a segmented equilibration approach for heating ZIF-4 at 20 K/ps. In addition, the heating of ZIF-4 at 4 K/ps and 1 K/ps used a one-step heating method to observe the continuous change of the ZIF-4 structures.
The energetic variation as a function of temperature at different heating velocities is shown in Fig. 4(a). Fig. 4(b) displays the variation of the density of ZIF-4 with temperature. It turns out that the density of ZIF-4 is in dynamic equilibrium over the wide temperature range from 300 K to around 1300 K. The density of ZIF-4 within this temperature range is about 0.95 g/cm\({}^{3}\), which is much lower than that
of a real anhydrous ZIF-4 structure. We postulate that the DFT data used for the training do not fully describe the density variation of ZIF-4. In the training data, the volume was fixed in the NVT ensemble, and hence the change of the long-range structure was neglected. This has little impact on the volume at the scale of the FPMD; however, when this structure-energy relationship is reproduced at larger scales, it biases the density. As the temperature rises above 1300 K, the density of ZIF-4 undergoes a non-negligible reduction. As seen in the structure diagrams in Fig. S4(a) and S4(b), the ZIF-4 structure does not undergo a significant transformation at this point, and the decrease in density is attributed to the thermal expansion of the lattice. With further increases in temperature, the ZIF-4 structure shows a significant low-density phase in the range of 1400-1550 K. The formation of low-density ZIF-4 has been reported in experiments, where the low-density amorphous phase (LDA) appears before the high-density amorphous phase (HDA) at temperatures around 600 K.[38, 39, 40] However, in our simulations, the temperature at which the low-density phase emerges is significantly higher than the experimental value. One possible reason for this discrepancy is that the simulated system size is much smaller, and the heating rate much higher, than in the experimental samples, resulting in inadequate relaxation of the ZIF-4 crystal structure.
Figure 5: **Spatial heterogeneity in the distribution of densities during ZIF-4 heating.** (a) The variation of the density distribution of the ZIF-4 structure with temperature. The color bar indicates the density from low (blue) to high (red); (b)-(d) Evolution of the medium-range structure in the melting process of ZIF-4 in DPMD-1K at varied temperatures. Only Zn atoms (green balls) are preserved and displayed (along the z-axis direction).
The manifestation of the radial distribution function for the different phases of the structure is shown in Fig. 4(c), where the intersection between the second and third peaks in the second and third stages reflects the partial melting of ZIF-4. The breaking of metal-organic ligand bonds (Zn-N pairs) is the cause of partial melting, as demonstrated in Fig. 4(d). With further increasing temperature, ZIF-4 becomes denser. From the structure above 1550 K in Fig. S4(c), we can identify the high-density crystalline phase ZIF-zni [41, 10]. As the temperature increases beyond 1700 K, ZIF-4 undergoes melting and subsequent decomposition, producing structures without physical meaning, such as clusters of H atoms. The formation of the LD phase of ZIF-4 is only observed when the heating velocity is low enough (1 K/ps). For typical software based on the DFT method, significant computational resources would be required to perform molecular dynamics at the mentioned scale.
To further investigate the formation process of the low-density phase, we visualize the variation of the density distribution of the ZIF-4 structures with temperature in DPMD-1K (Fig. 5). The local density is defined by the number of atoms in a 5 Å \(\times\) 5 Å \(\times\) 5 Å box, with green and red colors indicating low and high local atomic densities, respectively. The statistical distribution of the medium-range structure of ZIF-4 is displayed in Fig. 5(b)-(d) in different temperature ranges, corresponding to the heating-up stage of 300-1300 K, the pre-melting stage of 1300-1400 K and the low-density melt stage of 1400-1550 K. We can observe a uniform high-density local phase distribution in the ZIF-4 structure during the heating-up stage. This is due to the accelerated local atomic motion caused by the increasing temperature. As the temperature
increases, the lattice of ZIF-4 undergoes considerable expansion during the pre-melting stage, leading to a decrease in high-density regions within the structure. As the temperature surpasses 1400 K, an increased number of low-density regions emerge within the structure of ZIF-4. However, when the temperature is higher than 1550 K, the high-density regions reappear.
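The local density map described above can be obtained with a short histogram routine; wrapping the positions into the box and the uniform 5 Å binning are the only assumptions of this sketch.

```python
import numpy as np

def local_density_map(pos, box, cell=5.0):
    """Coarse-grained local density: count atoms in cubic bins of edge
    `cell` (5 Angstrom in the text); pos shape (N, 3), box = (Lx, Ly, Lz).
    If a box length is not a multiple of `cell`, the bins are stretched
    slightly, so the densities are approximate near the edges."""
    box = np.asarray(box, dtype=float)
    nbins = np.maximum((box // cell).astype(int), 1)
    edges = [np.linspace(0.0, L, n + 1) for L, n in zip(box, nbins)]
    counts, _ = np.histogramdd(np.mod(pos, box), bins=edges)
    return counts / cell**3        # atoms per cubic Angstrom in each bin
```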
It is important to note that the distribution of local phases of different density within the ZIF-4 structure is spatially heterogeneous. To further discuss the origin of this spatial heterogeneity, we plot the root-mean-square displacement of different atoms as a function of temperature (Fig. S5). Several constituents of the organic ligand functional groups, represented by N, C and H atoms, diffuse faster at lower temperatures than the metal-node Zn atoms. This verifies the short-range exchange effect during the melting of ZIF-4 revealed by previous simulation work.[11, 15, 16] As the temperature increases, the organic functional groups progressively detach from their original metal nodes. In the short-range structure, this results in the creation of undercoordinated [ZnN\({}_{x}\)] (x\(<\)4) units. The spatial constraints are reduced due to the decreased average Zn coordination in the medium-range structure, resulting in a higher degree of freedom for the Zn nodes. As depicted in Fig. 5(b)-(d), Zn atoms with weaker constraints are rearranged in the medium-range structure. This results in the formation of a low-density melt phase.
The Lindemann ratio (\(\Delta\)), calculated using the full width at half maximum (FWHM) of the radial distribution function, has been demonstrated to be a reliable structural order parameter for describing phase transitions. The FWHM is depicted in Fig.
6 (a). We further explore the short- and medium-range structure transition of ZIF-4 by examining the FWHM of the Zn-N and Zn-Zn pairs. The FWHM shows an increase as the temperature rises. This observation is supported by the variations in the pair distribution function shown in Fig. 3(b), reflecting a softening of the network structure.
The FWHM of the Zn-N and Zn-Zn pairs is consistent at 300-700 K, indicating that ZIF-4 is in a stable crystalline state in the early part of the heating-up stage. With an increase in temperature, the FWHM of the Zn-N pair increases faster than that of the Zn-Zn pair. This phenomenon demonstrates that the metal-ligand bonds in ZIF-4 are more sensitive to temperature, i.e., ligand detachment from the metal node occurs. It is noteworthy that the FWHM of the Zn-N pair does not change within 1200-1400 K, while the FWHM of the Zn-Zn pair decreases. Since the FWHMs for the different radial distribution functions were obtained by averaging over 1000 sampled structures, the resulting statistical error can be considered small. One possible reason is that a medium-range structural transition occurs in ZIF-4 during the pre-melting phase, as demonstrated in Fig. 5(c) and (d). During this transition, the short-range structure of ZIF-4 reaches equilibrium through ligand exchange interactions. This involves the dynamic equilibrium of breaking and re-forming of bonds between the organic ligands and different metal nodes within the structure.
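A simple way to extract this order parameter is sketched below; it assumes the first PRDF peak is the global maximum, which holds for the Zn-N pair.

```python
import numpy as np

def first_peak_fwhm(r, g):
    """FWHM of the first PRDF peak via linear interpolation of the
    half-maximum crossings; used for the Lindemann-type order parameter."""
    i0 = int(np.argmax(g))
    half = g[i0] / 2.0
    # walk left/right from the maximum to the half-height crossings
    il = i0
    while il > 0 and g[il] > half:
        il -= 1
    ir = i0
    while ir < len(g) - 1 and g[ir] > half:
        ir += 1
    # interpolate r at the half height on each side (xp must be ascending)
    rl = np.interp(half, [g[il], g[il + 1]], [r[il], r[il + 1]])
    rr = np.interp(half, [g[ir], g[ir - 1]], [r[ir], r[ir - 1]])
    return rr - rl
```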
The temperature point at which the density decreases significantly is denoted as the transition temperature (T\({}_{\rm t}\)). As shown in Fig. 6(b), T\({}_{\rm t}\) = 1432 K in DPMD-1K. As the heating velocity decreases, T\({}_{\rm t}\) decreases to 1418 K for ZIF-4 at 0.5 K/ps and further to 1352 K at 0.2 K/ps. The decrease in T\({}_{\rm t}\) with decreasing heating velocity is ascribed to the extended relaxation time of ZIF-4 at a given temperature. The experimental heating velocity for melting ZIF-4 is usually 10-20 K/min[3, 41, 42, 43], which makes it difficult to reproduce the experimental conditions in simulations. However, the relationship between T\({}_{\rm t}\) and heating velocity demonstrated by DPMD offers the possibility of extrapolating toward the experimental conditions.
The generalizability of deep learning potential functions trained on specific classes of ZIFs poses a significant challenge due to variations in topology and constituent elements among different ZIFs. The extensive preparation of a training set highlights the need for further improvements in the generalizability of machine learning potential functions. To address this challenge, we propose incorporating a priori knowledge from
Figure 6: **Order parameter variation during the ZIF-4 heating process.** (a) FWHM of Zn-N and Zn-Zn pairs at varied temperatures; (b) Density variation of ZIF-4 at 1 K/ps, 0.5 K/ps and 0.2 K/ps heating velocities.
experimental data into the hyperparameters used in constructing the potential function. This approach emphasizes the importance of going beyond sole reliance on raw simulation data.
Although this research is still in its early stages, we would like to highlight two noteworthy points that researchers should consider when building upon this work. Firstly, the DeePMD-kit demonstrates remarkable efficiency in parallel computing on large-scale computers, making it a powerful tool for high-performance computing. Furthermore, it is worth mentioning that part of the deep potential function training described in this study was successfully conducted on a modest home computer. This indicates that powerful computing resources are not necessarily limited to supercomputers. Researchers can take advantage of the capability to train with historical data and subsequently employ DPMD simulations on platforms like LAMMPS. This development alleviates concerns about outdated research equipment and empowers researchers engaged in cross-scale simulations to focus on their research content. However, it is important to note that the current DPMD method represents a transitional approach and may appear as a black box to certain users. While DPMD enables researchers to extend their research systems in terms of time and space scales, its generalization to similar structures is still insufficient compared to traditional many-body potential functions. Researchers with a background in first principles or quantum chemical simulations can benefit from the DPMD method to expand their investigations and effectively utilize their historical data.
In this work, we explored the application of deep learning accelerated molecular dynamics to the amorphization of ZIF-4. From the exploration of the simulation conditions, the following main conclusions can be drawn:
(1) The potential function trained using the outputs of FPMD offers high prediction accuracy over a wide temperature range. This makes it suitable for conducting simulations of the same structure on larger time and spatial scales.
(2) DPMD simulations at different heating rates reveal the phase transition in the pre-melting phase, which is not observed in FPMD.
(3) The different transition temperatures, at which the liquid phase appears at different heating rates, are very useful for achieving accurate simulations of experimental phenomena.
(4) The low-density state of ZIF-4 appears before melting. The ligand exchange behavior is believed to be the source of the low-density state.
While the DPMD approach provides insights into the melting behavior of ZIF-4 on a larger scale, it is essential to acknowledge certain limitations associated with this methodology. The first concerns the training data. As the raw FPMD was performed in the NVT ensemble, it deviates from the true density of amorphous ZIF. As a consequence, there may be a discrepancy between the density predicted by DPMD simulations and the experimental density; the two values might not align well or be accurately fitted to each other. Another limitation concerns the training efficiency of deep learning.
**Models and Approaches**
### ZIFs models
The FPMD outputs of ZIF-4 and ZIF-62 in our previous study were chosen as the training data [16]. The initial ZIF-4 and ZIF-62 models used in FPMD were obtained from the CSD database [44]. In the FPMD, all solvent molecules were removed from the cell to obtain a dry cell. A detailed description of the ZIF structures can be found in our previous work. Here, these models were employed to generate sufficient configurations for training the deep potential.
In the DPMD process, a 3\(\times\)3\(\times\)3 supercell of ZIF-4 was built based on the relaxed unit cell. The supercell of ZIF-4 contains 7344 atoms with no symmetry constraints, and the cell parameters are a = 46.19 Å, b = 45.92 Å, c = 55.28 Å, and \(\alpha=\beta=\gamma=90^{\circ}\).
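With ASE, such a supercell can be generated in a few lines; the input file name is a placeholder, and solvent removal is assumed to have been done beforehand.

```python
from ase.io import read

# Build the 3x3x3 ZIF-4 supercell from a relaxed unit cell
unit = read("ZIF-4_relaxed.cif")          # placeholder file name
supercell = unit.repeat((3, 3, 3))
print(len(supercell), supercell.cell.lengths())   # expect 7344 atoms
```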
### First Principles Molecular Dynamics (FPMD)
To ensure the adequacy of the training data, some of the first-principles data used in this study were taken from a previous work [16]. All geometric structures in this study were simulated using the Vienna Ab initio Simulation Package (VASP) with the projector augmented-wave (PAW) method [45, 46], and the exchange-correlation energy was evaluated with the GGA-PBE functional together with the DFT-D3 method [47, 48]. We chose a single k-point at the zone center, which is sufficient for such a large structural model with nearly three hundred atoms. The energy cut-off is set to 400 eV, and the criteria for electronic and force convergence are set to 10\({}^{-5}\) eV and 2 \(\times\) 10\({}^{-3}\) eV Å\({}^{-1}\), respectively. All FPMD simulations were performed in the canonical ensemble (NVT) with the temperatures set at 300 K, 600 K, 900 K, 1200 K, 2000 K
and 2500 K. Each simulation run for a minimum of 15 ps with the total simulation time at each temperature to ensure the equilibrium of the system. The temperature was regulated using the Nose-Hoover thermostat and a timestep of 0.5 fs was employed[49].
To expand the phase space of flexible ZIF structures, additional molecular dynamics simulations and geometry optimizations of ZIF-4 were conducted. We additionally performed isobaric-isothermal (NPT) simulations at 300 K and 2000 K under the Langevin thermostat at zero pressure. Simultaneously, a series of structural optimizations was performed by varying the lattice constants from a -5% to a +5% change. Through the NPT simulations and structural optimizations, we generated a total of 50,620 structural trajectory files with flexible lattices at 0 K.
### Scheme of deep-learning potential
The training sets included nine sets corresponding to structures at varied temperatures (300 K, 600 K, 900 K, 1200 K, 2000 K and 2500 K in the NVT ensemble, 300 K and 2000 K in the NPT ensemble, and the geometry-optimization data) and consisted of more than 200,000 configurations from the trajectories. The parameters of the heating procedures used in the subsequent DPMD simulations are listed in Table 3.

| Sets | Starting Temperature (K) | Ending Temperature (K) | Heating Velocity (K/ps) | Steps (\(\times 10^{3}\)) |
| --- | --- | --- | --- | --- |
| DPMD-Test | 300 | 2100 | 20.0 | 220 |
| DPMD-20K | 300 | 2100 | 20.0 | 220 |
| DPMD-4K | 300 | 2100 | 4.0 | 940 |
| DPMD-1K | 300 | 2100 | 1.0 | 3640 |
| DPMD-0.5K | 1200 | 1900 | 0.5 | 2820 |
| DPMD-0.1K | 1300 | 1600 | 0.2 | 6030 |

Table 3: The parameters of the varied heating procedures.

After collecting the training data, in OUTCAR or vasprun.xml format, we transformed the atomic position information into local descriptors, thus reducing the dimensionality of the geometric information variables. For the local structure descriptor we used the two-atom embedding descriptor (se_e2_a), which is constructed from all the local information for each atom, including the angular and radial information of neighboring atoms [50]. The expected maximum number of neighbors of each element was set to C: 96, H: 96, N: 64, Zn: 16, determined from the number of each element in the training-set structures so that energy is conserved and the accuracy of the model is higher. The next step is to construct the structure-energy/force/virial mapping by reading the energy and force information from the VASP output files.
In this work, the deep-learning network architecture was configured with an embedding network of three layers with (25, 50, 100) neurons and a random seed in each layer, together with an ANN fitting network of three layers with (240, 240, 240) neurons. The start and limit pre-factors for energy and force were set as follows: the energy start pre-factor is 0.02 and the energy limit pre-factor is 1; the force start pre-factor is 1000 and the force limit pre-factor is 1. The cutoff radius and the smoothing cutoff of each atom are set to 6.0 Å and 0.5 Å, respectively. More than one million (\(>10^{6}\)) training steps are used to train the deep potential function. For each set, we randomly assign 70% of the configurations to the training set and 30% to the validation set. Finally, the DeePMD-kit code outputs a deep-learning potential file that can be used as an interatomic potential [51].
**Deep Potential Molecular Dynamics (DPMD) in LAMMPS scheme**
All simulations were performed in LAMMPS in the isobaric-isothermal ensemble (NPT) with the Nose-Hoover thermostat, heating the system from 300 K to 2000 K at different heating velocities. The pressure is set to zero and isotropic in all directions. The timestep is set to 0.5 fs. The parameters of the various simulation procedures, including their respective heating velocities, are given in Table 3.
**Dynamic Structural Analysis**
The radial distribution function was obtained by statistical averaging over 50 snapshots in the equilibrium phase at the corresponding temperature. The thermodynamic quantities, including density and enthalpy, were output from LAMMPS every 200 steps (0.1 ps). The coordination number of N atoms surrounding the Zn atoms in ZIFs was computed using a cutoff radius of 2.5 Å. This value was chosen based on the Zn-N partial radial distribution function at room temperature.
**AUTHOR INFORMATION**
**Corresponding Author**
*Correspondence: [email protected]
**Present Addresses**
\(\dagger\)State Key Laboratory of Silicate Materials for Architectures, Wuhan University of Technology, Wuhan 430070, China
## Author Contributions
N. Li conceived the project and designed all the calculations. Z. Shi conducted the calculations under the supervision of N. Li. Z. Shi, B. Liu, Arramel, Y. Yue, and N. Li analyzed the data and discussed the results. Z. Shi and N. Li wrote the manuscript. All authors reviewed and contributed to the final manuscript.
## Acknowledgements
This work was supported by the National Natural Science Foundation of China (No.11604249); the Fok Ying-Tong Education Foundation for Young Teachers in the Higher Education Institutions of China (No. 161008); and Fundamental Research Funds for the Central Universities (No. WUT35401053-2022).
|
2303.17871 | Zero-point entropies of spin-jam and spin-glass states in a frustrated
magnet | Thermodynamics studies of a prototypical quasi-two-dimensional frustrated
magnet Ba$_2$Sn$_2$ZnCr$_{7p}$Ga$_{10-7p}$O$_{22}$ where the magnetic Cr$^{3+}$
ions are arranged in a triangular network of bipyramids show that the magnetic
zero-point entropy for $p=0.98$ is 55(1)\% of the entropy expected when the
Cr$^{3+}$ moments are fully disordered. Furthermore, when combined with a
previous neutron scattering study and the perimeter scaling entropy of a spin
jam, the analysis reveals that with decreasing $p$, i.e., doping of the
nonmagnetic Ga$^{3+}$ ions, the variation in the magnetic zero-point entropy
can be well explained by the combined effects of the zero-point entropy of the
spin jam state and that of weakly coupled orphan spins, shedding light on the
coexistence of the two types of spin states in quantum magnetism. | Chairote Piyakulworawat, Asiri Thennakoon, Junjie Yang, Hideki Yoshizawa, Daichi Ueta, Taku J Sato, Kuan Sheng, Wei-Tin Chen, Woei-Wu Pai, Kittiwit Matan, Seung-Hun Lee | 2023-03-31T08:08:47Z | http://arxiv.org/abs/2303.17871v3 | # Zero-point entropies of spin-jam and spin-glass states in a frustrated magnet
###### Abstract
Thermodynamics of glassy states in a quasi-two-dimensional frustrated magnet Ba\({}_{2}\)Sn\({}_{2}\)ZnCr\({}_{7p}\)Ga\({}_{10-7p}\)O\({}_{22}\), where \(p\) is the spin density, are investigated experimentally. The system features a triangular network of bipyramids of spins with the quantum spin number \(s=3/2\). The DC magnetic susceptibility measurements on a series of samples with \(0.44\leq p\leq 0.98\) show a freezing transition with the transition temperature \(T_{f}\leq 1.2\) K. \(T_{f}\) is found to decrease with decreasing \(p\). The low-lying excitations in the glassy state of the system are examined via the temperature dependence of the magnetic heat capacity and are shown to consist of two components: the hydrodynamic Halperin-Saslow modes characteristic of a spin jam and the two-level systems of a spin glass. A continuous crossover between the two glassy states is observed via the varying weights of the two components as the spin density is varied. The \(p\) dependence of the spin jam's zero-point entropy determined from the exotic perimeter-scaling behavior combined with the observed zero-point entropy of the samples provides the \(p\) dependence of the spin glass's zero-point entropy. The obtained result shows that the correlations between orphan spins begin below \(p\sim 0.8\), the limit that was also found using a neutron scattering technique in a previous report on the isostructural compound SrCr\({}_{9p}\)Ga\({}_{12-9p}\)O\({}_{19}\). The domain size of the spin-jam state estimated from the value of the zero-point entropy for the cleanest sample is approximately 4\(\times\)4 bipyramids, about 2.5 times the measured spin correlation length.
## I Introduction
Zero-point entropy, i.e., the entropy at absolute zero temperature, of a macroscopic system has been a strenuously debated topic ever since the introduction of the third law of thermodynamics. One of the most intriguing examples of crystalline solids that possess zero-point entropy is water ice: cubic and hexagonal ice. In both cases, although oxygen ions are arranged in an ordered pattern [1; 2; 3; 4], there are frustrations in the four O-H bonds surrounding each oxygen in a tetrahedral environment. Bernal and Fowler formulated the so-called ice rule which dictates that two hydrogens sit closer to the oxygen than the other two [5]. Therefore, the rule results in six possible configurations of the O-H bonds for each tetrahedron. As a result of the geometry of the corner-sharing tetrahedra, Pauling found that the number of possible ground-state configurations grows exponentially with the increasing number of tetrahedra, which results in the zero-point entropy of water ice being \(S_{0}=\frac{R}{2}\log(\frac{3}{2})=1.68\) Jmol\({}^{-1}\)K\({}^{-1}\), where \(R\) is the gas constant [6]. This value can well account for the difference between the entropy of ice determined at 298 K spectroscopically and by calorimetric techniques [7].
On the other end of the spectrum are ordinary structural glasses that are amorphous solids. The presence of zero-point entropy in glasses has continually undergone both theoretical and experimental investigations. In the theoretical investigation, glasses are mainly described by two distinct models: conventional and kinetic. In the former, as the liquid is rapidly cooled, the configurational entropy of the liquid state is frozen-in at the glass transition, which is then manifested as its finite zero-point entropy, while in the latter the configurational entropy simply vanishes at the glass transition [8; 9; 10; 11; 12; 13; 14; 15]. Despite the two theoretical conflicts, experimental attempts in which the temperature-dependent specific heat of glass-forming liquids is examined seem to support the conventional model [16; 17; 18; 19; 20; 21].
The magnetic analog of the situation in crystalline solids is of particular interest. The so-called spin-glass state that is analogous to the structural glass state can exist in dilute magnetic alloys in which nonmagnetic metals are doped with magnetic ions at very low con
centrations, and the magnetic impurities interact with each other through the Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction whose strength varies with the distance between the impurities. Below the spin glass transition temperature, the magnetic moments freeze in random directions without long-range ordering because of the RKKY interactions being random due to the random distances between the impurities, resulting in a finite zero-point entropy. The zero-point entropy in spin glasses has been estimated theoretically by Edwards and Tanaka, who predicted the value for long-range-interacting Ising and \(XY\) spin glasses to be 1.66 and 4.30 Jmol\({}^{-1}\)K\({}^{-1}\), respectively [22; 23]. Experimentally, the zero-point entropy of a dilute dipolar-coupled Ising spin glass LiHo\({}_{p}\)Y\({}_{1-p}\)F\({}_{4}\) with \(p=0.167\) was measured and found to be close to 1.66 Jmol\({}^{-1}\)K\({}^{-1}\) which is consistent with the theoretical prediction [24].
An interesting question that arises is what will happen to the zero-point entropy if, unlike in the dilute magnetic alloys, the magnetic ions are densely populated and strongly interact with each other. The so-called geometrically frustrated magnets are the case in point. For example, pyrochlore rare-earth oxides \(A_{2}B_{2}\)O\({}_{7}\) which exhibit the so-called spin-ice state at low temperatures have similar degenerate ground-state configurations to water ice in which two spins must point inwards while the other two point out of the tetrahedron [25]. Surprisingly, the zero-point entropies of the Ho\({}_{2}\)Ti\({}_{2}\)O\({}_{7}\) and Dy\({}_{2}\)Ti\({}_{2}\)O\({}_{7}\) spin ices have been reported to exhibit a value close to that of water ice [26; 27; 28; 29]. Other frustrated lattices can have local zero-energy modes at the mean-field level, e.g., the weathervane modes in the two-dimensional kagome antiferromagnets [30; 31; 32; 33] and the antiferromagnetic hexagon modes in the three-dimensional spinel ZnCr\({}_{2}\)O\({}_{4}\)[34], which can induce macroscopic ground-state degeneracy and thus a finite zero-point entropy.
These densely populated geometrically frustrated magnets can exhibit a magnetic glassy state at low temperatures that is called a spin jam [35]. While the canonical spin-glass state arises due to the random RKKY interactions, the spin-jam state can arise from quantum fluctuations [35]. The essential distinction between the two glassy states is in their energy landscape topology. Quantum fluctuations render the energy landscape of a spin jam to be non-hierarchical and have a flat but rugged shape. On the contrary, for a spin glass, the energy landscape is hierarchical and has a rugged funnel shape [36]. To date, the crossover between these glassy states has been observed in the dynamic susceptibility measurements and the memory effects when the spin density is varied in the systems [37; 38]. Furthermore, the spin-jam theory for a highly frustrated triangular network of bipyramids shows that due to the lack of local zero-energy modes in the system, the entropy scales with the perimeter of the magnetic domain [35]. The number of spin configurations has been found to increase with the number of bipyramids located on the perimeter of the domain. In this paper, we provide experimental evidence of the perimeter scaling of zero-point entropy of the spin-jam state. We also show how the spin-jam state crosses over to the spin-glass state as the spin density \(p\) varies in terms of the low-lying excitations and the zero-point entropy using DC magnetic susceptibility and heat capacity measurements.
## II Model system
SrCr\({}_{9p}\)Ga\({}_{12-9p}\)O\({}_{19}\) (SCGO(\(p\))) has been a good model system for the triangular network of bipyramids or pyrochlore slab (see Fig. 1(A)) [37; 39; 40; 41; 42; 43; 44]. The system, however, has triangular layers made of spin dimers (orange spheres in Fig. 1(A)) that separate the pyrochlore slabs [41]. The existence of the extra magnetic layers of dimers complicates an experimental exploration of the physics of the pure pyrochlore slab as a function of the spin density \(p\). A QS ferrite-derived compound Ba\({}_{2}\)Sn\({}_{2}\)ZnCr\({}_{7p}\)Ga\({}_{10-7p}\)O\({}_{22}\) (BSZCGO(\(p\))) [45; 46; 47; 48] can provide an excellent alternative model system. The crystal structure of BSZCGO is characterized by the hexagonal system with the space group \(P\bar{3}m1\) and lattice parameters \(a=b=5.8568(1)\) Å and \(c=14.2537(3)\) Å for the sample with \(p=0.97\)[49]. The magnetic \(s=\frac{3}{2}\) Cr\({}^{3+}\) ions form the pyrochlore slabs, and the successive slabs are separated by about 10 Å, which makes the pyrochlore slabs well isolated and quasi-two-dimensional (see Fig. 1(B)). Most importantly, BSZCGO does not have the magnetic dimer layers as SCGO does. There are, however, two types of intrinsic disorder present in BSZCGO. Firstly, nonmagnetic Ga\({}^{3+}\) ions inevitably share \(6i\) and \(1a\) sites with Cr\({}^{3+}\) ions, leading to the highest possible value of the spin density \(p\) being 0.97 [50]. Furthermore, Ga\({}^{3+}\) ions also share the \(2d\) site with Zn\({}^{2+}\) ions in a 1:1 ratio, which causes structural strains and in turn renders bond disorders between Cr\({}^{3+}\) ions [46; 51]. Despite these disorders, BSZCGO is so far the best model system to explore the physics of frustration in the triangular network of bipyramids since the physics is supposed to be robust against such disorders [35; 52]. BSZCGO exhibits a freezing transition with \(T_{\rm f}\) around 1.5 K for \(p=0.97\)[45; 53]. The magnetic heat capacity \(C_{\rm mag}\) has been observed to show a \(T^{2}\) dependence below \(T_{\rm f}\)[45], as SCGO does, which tells us that BSZCGO is indeed a good model system to study the nature of the glassy state of the pyrochlore slabs.
## III Results and discussion
Fig. 2(A) shows the DC magnetic susceptibility data of five samples that exhibit the freezing transitions at \(T_{\rm f}\) indicated by the bifurcation of field-cooled (FC) and zero-field-cooled (ZFC) data. \(T_{\rm f}\) is found to increase with decreasing vacancy density which is consistent with the spin-jam theory (see Fig. 4(A)) [35; 37]. Note that in the case of canonical spin glasses, the impurity dependence of
\(T_{\rm f}\) behaves differently: \(T_{\rm f}\) increases with increasing magnetic impurity density [54; 55; 56]. The nature of the glassy states can be studied more carefully via the behavior of the \(T\)-dependent magnetic heat capacity \(C_{\rm mag}\). Fig. 2(B) shows \(C_{\rm mag}/T\) as a function of \(T\) in the low-temperature region of the six samples with \(p\geq 0.67\). For \(p\geq 0.93\), \(C_{\rm mag}\) exhibits a clear quadratic \(T^{2}\)-dependence. On the other hand, for \(p\leq 0.86\), \(C_{\rm mag}\) begins to deviate from the quadratic behavior. To quantitatively analyze the data, we assume that the thermodynamics of the spin fluctuations can be characterized by two modes, one is the hydrodynamic Halperin-Saslow (HS) mode [57] that is a characteristic of the spin-jam state and yields \(C_{\rm HS}\propto AT^{2}\) for a two-dimensional system [35], and the other is the localized two-level (TL) system that can be due to spin-glass clusters generated by the non-magnetic doping and yields \(C_{\rm TL}\propto T\)[58; 37]. Here the coefficient \(A\) of the \(T^{2}\) term in \(C_{\rm HS}\) is inversely proportional to the spin wave velocity squared \(v^{2}\) for a two-dimensional system:
\[A=\frac{9\zeta(3)k_{\rm B}^{2}V_{\rm c}R}{\pi\hbar v^{2}d}, \tag{1}\]
where \(\zeta\) is the Riemann zeta function, \(k_{\rm B}\) is Boltzmann's constant, \(V_{\rm c}\) is the unit cell volume, and \(d\) is the spacing of successive bilayers [57; 59].
Since the population ratio of the spin jam to the spin glass clusters can vary with the spin density \(p\), we have fitted the magnetic heat capacity of each sample to the following phenomenological formula, \(C_{\rm mag}=fC_{\rm HS}+(1-f)C_{\rm TL}\), where \(f\) and \(1-f\) are the fraction of the spin-jam state and that of the spin-glass state, respectively. The fitting range of \(T\) is from the base temperature, namely, 0.5 K, to the temperature above which \(C_{\rm mag}/T\) deviates from linearity. As shown by the solid lines in Fig. 2(B), the phenomenological formula fits the data well for \(p\geq 0.67\), while the data for \(p<0.67\) could not be fitted due to the lack of enough data points. The fitted parameters are summarized in Table 1. The fraction of the spin-jam clusters \(f\) is the same for \(p\geq 0.93\), which is found to be \(0.92\pm 0.01\). As the spin density \(p\) decreases below 0.93, \(f\) gradually decreases roughly linearly. In other words, the nature of the glassy state of BSZCGO changes gradually as \(p\) decreases, and it continually crosses over from a dominantly spin-jam state to a mixed state with a considerable spin-glass component.
As another quantitative verification of the HS modes being dominant for large values of \(p\), an energy scale associated with this mode can be estimated from the coefficient \(A\) of the quadratic term of \(C_{\rm HS}\). The spin stiffness \(\rho_{\rm s}\) and the spin wave velocity \(v\) are related by \(v=\gamma\sqrt{\rho_{\rm s}/\chi}\), where \(\chi\) is the magnetic susceptibility and \(\gamma\) is the gyromagnetic ratio, and \(A\) is related to \(v\) via Eq. 1. From the spin stiffness \(\rho_{\rm s}\), the HS energy \(E_{\rm HS}\) is expressed as
\[\frac{E_{\rm HS}}{k_{\rm B}}=\frac{9\zeta(3)}{\pi}\frac{k_{\rm B}^{2}}{g^{2}\mu _{\rm B}^{2}}\frac{\chi}{A}, \tag{2}\]
where \(g\) is the Lande factor and \(\mu_{\rm B}\) the Bohr magneton [59]. The magnetic susceptibility \(\chi\) is obtained from the measured susceptibility below \(T_{\rm f}\).
Figure 2: The \(T\) dependence of DC magnetic susceptibility and magnetic heat capacity. (A) The DC magnetic susceptibility in the temperature range covering the freezing transition of samples with \(p\geq 0.67\). Open symbols represent ZFC data. Arrows mark \(T_{\rm f}\) for each sample. The inset shows \(f\) as a function of \(p\). The dashed line in the inset is a guiding line. (B) \(C_{\rm mag}/T\) data at low temperatures. Solid lines are best fits with fitting parameters summarized in Table 1.
Figure 1: Magnetic lattices of SCGO and BSZCGO. A triangular network of bipyramids consists of two kagome layers (blue spheres) sandwiching an intermediate triangular layer (red spheres). Bonds shown in different colors have different lengths. (A) In SCGO, a triangular network of dimers (orange spheres) separates the successive pyrochlore slabs. (B) In BSZCGO, successive pyrochlore slabs are well separated, and there are no Cr\({}^{3+}\) ions in between. Axes represent the crystallographic axes of the lattices.
As shown in Table 1, this formula yields an \(E_{\rm HS}\) that is comparable to the freezing temperatures for the two samples with the highest spin densities \(p\), which supports our interpretation of the dominant glassy state for \(p\geq 0.93\) being the spin jam. For \(p<0.93\), the spin-glass population starts to grow, and its susceptibility contributes significantly to the measured value. Hence, the \(\chi\) due to the spin jam, which is used in Eq. 2, is overestimated from the measured susceptibility, causing an overestimation of \(E_{\rm HS}\) for \(p<0.93\), as shown in Table 1. However, the calculated \(E_{\rm HS}\) for \(p<0.93\) can still serve as an upper bound.
The evolution of the glassy states as a function of spin density \(p\) may also be investigated in terms of entropy. In general, upon cooling, a magnetic system gradually releases its magnetic entropy, and an ordinary magnet would release all of its magnetic entropy when the system exhibits long-range order below the ordering temperature. On the other hand, as mentioned earlier, disordered magnets would not release all their magnetic entropy owing to the presence of strong frustrations. Also, it should be emphasized that the spin-jam and spin-glass states may have different characteristic entropies.
Entropy can be estimated from the heat capacity data as
\[S(T_{\rm base},T)=S_{0}+\Delta S(T_{\rm base},T)=S_{0}+\int_{T_{\rm base}}^{ T}\frac{C_{\rm mag}}{T}dT, \tag{3}\]
where \(S_{0}\) is the zero-point entropy and \(T_{\rm base}\) is the base temperature accessed during the measurements which is \(T_{\rm base}\) = 0.5 K. Thus, by investigating how \(\Delta S\) evolves with increasing \(p\) we can study how the zero-point entropy \(S_{0}\), i.e., the entropy of the glassy state evolves. From the \(C_{\rm mag}/T\) data shown in Fig. 3(A), we numerically calculated and plotted \(\Delta S(T)\) in Fig. 3(B). Remember that the magnetic Cr\({}^{3+}\) ion has spin \(s=\frac{3}{2}\), and the expected maximum magnetic entropy is \(R\log(2s+1)\) = 11.53 Jmol\({}_{\rm Cr}^{-1}\)K\({}^{-1}\) which is represented by the horizontal red dashed line in Fig. 3(B). Here, mol\({}_{\rm Cr}\) is the unit of the number of moles of Cr\({}^{3+}\) ions.
As shown in Fig. 3(B), however, for \(p>p_{\rm c}=0.5\), where \(p_{\rm c}\) is the percolation threshold for the magnetic lattice [60], the entropy released, \(\Delta S(0.5\) K, 50 K), between 0.5 K \(<T_{\rm f}\) and 50 K \(\gg T_{\rm f}\) is only about half of the maximum magnetic entropy \(S_{\rm max}\). For instance, \(\Delta S(0.5\) K, 50 K) \(\approx 0.45S_{\rm max}\) for BSZCGO(0.98). This tells us that the entropy that is not released down to 0.5 K is extensive; \(S_{0}(p=0.98)\approx 0.55S_{\rm max}\approx 6.34\) Jmol\({}_{\rm Cr}^{-1}\)K\({}^{-1}\). On the other hand, for \(p=0.44<p_{\rm c}\), \(S(T_{\rm base},T)\) at 50 K is close to \(S_{\rm max}\), \(\Delta S(0.5\) K, 50 K) \(\approx 0.8S_{\rm max}\). This tells us that the extensive zero-point entropy for \(p>p_{\rm c}\) is due to the collective frustrated interactions in the quasi-two-dimensional triangular network of bipyramids. A similar observation was reported for SCGO(0.89), in which at 100 K the magnetic entropy is recovered by only 52% [61]. For comparison, for a spin ice, which is a system of Ising spins on the three-dimensional pyrochlore lattice, the zero-point entropy is \((0.67\pm 0.04)R\log 2\)[26]. The extensive zero-point entropy of BSZCGO and SCGO confirms that the system of Heisenberg spins with dominant antiferromagnetic nearest-neighbor interactions on the quasi-two-dimensional triangular network of bipyramids is very strongly frustrated and, as a result, cannot release its entropy even when \(T\ll T_{\rm f}\).

| \(p\) | \(f\) | \(T_{\rm f}\) (K) | \(A\) (Jmol\({}_{\rm Cr}^{-1}\)K\({}^{-3}\)) | \(E_{\rm HS}/k_{\rm B}\) (K) |
| --- | --- | --- | --- | --- |
| 0.98(1) | 0.92(1) | 1.22(5) | 0.130(2) | 0.9(1) |
| 0.93(2) | 0.92(1) | 1.24(5) | 0.120(5) | 1.0(1) |
| 0.86(2) | 0.90(1) | 0.93(5) | 0.10(1) | 1.3(1) |
| 0.83(2) | 0.79(2) | 0.83(5) | 0.08(1) | 1.8(2) |
| 0.71(1) | 0.75(2) | - | - | - |
| 0.67(1) | 0.65(4) | 0.73(5) | - | - |

Table 1: Fitting parameters of the Halperin-Saslow modes in BSZCGO, where \(p\) is the spin density, \(f\) is the spin-jam population fraction, \(T_{\rm f}\) is the freezing temperature, \(A\) is the coefficient of the quadratic term of \(C_{\rm HS}\), and \(E_{\rm HS}/k_{\rm B}\) is the energy scale of the HS modes. Numbers in parentheses represent standard errors. The values of \(A\) for the last two samples have errors larger than the values themselves. \(T_{\rm f}\) for \(p=0.71\) is unavailable.
A close examination of \(\Delta S(0.5\) K, 50 K) as a function of \(p\) reveals an interesting dependence on \(p\). As shown in Fig. 4(B), as \(p\) decreases from 0.98 to 0.79, \(\Delta S(0.5\) K, 50 K) decreases by \(\sim\)25%. This means that the zero-point entropy \(S_{0}\) increases as \(p\) decreases from 0.98 to 0.79. Upon further decreasing \(p\) below 0.71, \(\Delta S(0.5\) K, 50 K) increases again, i.e., \(S_{0}\) decreases again. In order to understand the dip in \(\Delta S(0.5\) K, 50 K) as a function of \(p\), we first note that our analysis of the \(T\)-dependent \(C_{\rm mag}\) data indicates that both spin-glass and spin-jam clusters coexist and their fraction changes with the spin density \(p\). This implies that, since the spin-jam and the spin-glass states are expected to have different zero-point entropies, the measured total zero-point entropy \(S_{0}^{\rm tot}\) should include both contributions, the zero-point entropy of spin jam \(S_{0}^{\rm SJ}\) and that of spin glass \(S_{0}^{\rm SG}\), i.e.,
\[S_{0}^{\rm tot}(p)=f(p)S_{0}^{\rm SJ}(p)+[1-f(p)]S_{0}^{\rm SG}(p). \tag{4}\]
This equation together with Eq. 3, however, cannot give us a set of unique solutions. This is because \(S_{0}^{\rm SJ}(p)\) and \(S_{0}^{\rm SG}(p)\) can, in general, vary with \(p\). And the analysis of \(\Delta S(0.5\) K, 50 K) vs. \(p\) shown in Fig. 4(B) to estimate \(S_{0}^{\rm SJ}(p)\) and \(S_{0}^{\rm SG}(p)\) becomes an underconstrained problem; when the experimental values of \(S_{0}^{\rm tot}(p)\) for the two adjacent \(p\)'s are considered to extract \(S_{0}^{\rm SJ}(p)\) and \(S_{0}^{\rm SG}(p)\) for those \(p\)'s, there are four unknown parameters with two experimental values.
We performed the entropy analysis by imposing additional physical constraints inferred from previous experimental and theoretical studies of the same magnetic lattice. Firstly, it was theoretically shown that for the spin-jam state of the triangular network of bipyramids, \(\log\Omega\), where \(\Omega\) is the ground-state degeneracy, scales with the number of bipyramids located on the perimeter of the spin-jam domain [35]. \(S_{0}^{\rm SJ}\) is, thus, proportional to the ratio of the number of bipyramids on the perimeter to those inside the domain, leading to a larger \(S_{0}^{\rm SJ}\) for a smaller spin-jam domain. Secondly, it has been found experimentally for SCGO that the correlation length \(\xi(p)\) remains constant for \(1.0>p>0.8\), i.e., it is robust against small nonmagnetic doping, while it linearly and gradually decreases with further decreasing \(p\) below \(p=0.8\)[37]. Since \(\xi(p)\) is directly proportional to the spin-jam domain size, we assume that \(S_{0}^{\rm SJ}(p)\) is constant for \(1.0>p>0.8\), and it gradually changes in the same way as \(\xi(p)\) does with decreasing \(p\) for \(p<0.8\).
Now let us recall that our \(C_{\rm mag}\) data yields the total zero-point entropy for \(p=0.98\) to be \(S_{0}^{\rm tot}=6.34\)\({\rm Jmol}_{\rm Cr}^{-1}{\rm K}^{-1}\). Since for \(p=0.98\), the magnetic glassy state is predominantly a spin jam, \(f(p=0.98)=0.92\) (see Table 1), we expect that the zero-point entropy of the spin-jam state for \(p=0.98\) is very close to \(S_{0}^{\rm tot}(0.98)\), i.e., \(S_{0}^{\rm SJ}(0.98)\sim S_{0}^{\rm tot}(0.98)\). The zero-point entropy of the spin-glass state for \(p=0.98\) is, on the other hand, likely to be very close to \(S_{\rm max}=R\log(2s+1)=11.53\)\({\rm Jmol}_{\rm Cr}^{-1}{\rm K}^{-1}\) because, when the vacancy density is low, the spin glass is made of almost uncorrelated orphan spins that fluctuate nearly freely, resulting in \(S_{0}^{\rm SG}(0.98)\approx S_{\rm max}\). For our analysis, we first assumed that \(S_{0}^{\rm SG}(0.98)=S_{\rm max}\) and using Eq. 4 obtained \(S_{0}^{\rm SJ}(0.98)=5.92\)\({\rm Jmol}_{\rm Cr}^{-1}{\rm K}^{-1}\) which is close to \(S_{0}^{\rm tot}(0.98)\) as expected. Then \(S_{0}^{\rm SJ}(p)\) for other \(p\) values were estimated according to the \(p\) dependence of \(1/\xi(p)\). After that, the zero-point entropy of the spin-glass state, \(S_{0}^{\rm SG}(p)\) for each \(p\) was calculated using Eq. 4. Fig. 4(C) shows the resulting \(S_{0}^{\rm SJ}(p)\) and \(S_{0}^{\rm SG}(p)\) for all \(p>p_{\rm c}\). It is interesting to notice that \(S_{0}^{\rm SJ}(p)\) and \(S_{0}^{\rm SG}(p)\) exhibit strikingly different behaviors with \(p\). For \(1.0>p>0.8\), the spin-jam zero-point entropy \(S_{0}^{\rm SJ}(p)\) is much lower than \(S_{\rm max}=11.53\)\({\rm Jmol}_{\rm Cr}^{-1}{\rm K}^{-1}\), which is expected from the spin-jam theory that yields the perimeter-scaling entropy for the triangular network of the bipyramids. For \(p<0.8\), as \(p\) decreases, on the other hand, \(S_{0}^{\rm SJ}(p)\) gradually increases up to \(0.81S_{\rm max}\) for \(p=0.51\), which is consistent with the spin-jam theory for the decreasing spin-jam domain size and thus the increasing total perimeter with decreasing \(p\)[35]. In contrast, as \(p\) decreases after \(p\sim 0.8\), \(S_{0}^{\rm SG}(p)\) rapidly decreases down to \(0.25S_{\rm max}\) for \(p=0.51\). The rapid decrease of \(S_{0}^{\rm SG}(p)\) suggests that as the vacancy density in the magnetic lattice increases, the orphan spins begin to correlate with each other resulting in a smaller degeneracy [24]. The value of \(p\sim 0.8\) below which the orphan spins correlate with each other is also consistent with the neutron experimental results of SCGO [37].
To confirm the validity of our analysis, we estimate the typical domain size of the spin-jam clusters from the obtained \(S_{0}^{\rm SJ}(0.98)\) using the spin-jam theory [35] in which \(S_{0}^{\rm SJ}\sim 16Rm/N_{\rm p}\) where \(N_{\rm p}\) is the number of bipyramids on the perimeter and the proportionality constant \(m\sim 0.6\). For \(S_{0}^{\rm SJ}(0.98)\approx 5.9\) Jmol\({}_{\rm Cr}^{-1}\)K\({}^{-1}\), the estimated number of bipyramids on the domain perimeter is about 13, i.e., the domain size is 4\(\times\)4 bipyramids. Since a unit cell contains one bipyramid, the domain size can be roughly approximated by \(3a\approx 17.5\) Å for \(p=0.98\). The domain size is compared with the correlation length \(\xi\) measured from neutron scattering experiments, which has been found for SCGO(0.8) to be about \(\xi\approx 7\pm 2\) Å [62]. Comparing the estimated domain size with \(\xi\), we see that it is about 2.5 times the correlation length \(\xi\) for \(p=0.98\), reflecting a good consistency since \(\xi\) is the distance over which the spin correlation decreases by a factor of \(e^{-1}\).
## IV Conclusion
Our thermodynamic study has shown that the low-temperature glassy state of BSZCGO is a mixture of the spin-jam and spin-glass states, characterized by the Halperin-Saslow modes and the localized two-level systems, respectively. The population ratio of the spin-glass state to the spin-jam state increases as \(p\) decreases,
i.e., the vacancy density increases, down to the percolation threshold, \(p_{\rm c}=0.5\). Furthermore, our quantitative analysis of the magnetic heat capacity \(C_{\rm mag}\) allowed us to estimate the zero-point entropies of the spin-jam and spin-glass states, \(S_{0}^{\rm SJ}(p)\) and \(S_{0}^{\rm SG}(p)\), respectively, as functions of \(p\). We found that, as \(p\) decreases, \(S_{0}^{\rm SJ}(p)\) gradually increases from \(S_{0}^{\rm SJ}(0.98)\approx 5.9\) Jmol\({}_{\rm Cr}^{-1}\)K\({}^{-1}\) to \(S_{0}^{\rm SJ}(0.51)\approx 9.4\) Jmol\({}_{\rm Cr}^{-1}\)K\({}^{-1}\), which is the expected behavior for a spin jam [35]. In contrast, \(S_{0}^{\rm SG}(p)\) rapidly decreases below \(p\sim 0.8\) from \(S_{0}^{\rm SG}(0.98)\approx S_{\rm max}\) to \(S_{0}^{\rm SG}(0.51)\approx 2.8\) Jmol\({}_{\rm Cr}^{-1}\)K\({}^{-1}\). The rapid decrease of \(S_{0}^{\rm SG}(p)\) upon nonmagnetic doping suggests that as the spin vacancy in the magnetic lattice increases, the orphan spins begin to correlate with each other [24].
## V Materials and Methods
Ten powder samples of BSZCGO with \(0.44\leq p\leq 0.97\) and a nonmagnetic sample with \(p=0\) (Ba\({}_{2}\)Sn\({}_{2}\)ZnGa\({}_{10}\)O\({}_{22}\)) were prepared by standard solid-state reactions. A stoichiometric mixture of BaCO\({}_{3}\), SnO\({}_{2}\), ZnO, Ga\({}_{2}\)O\({}_{3}\), and Cr\({}_{2}\)O\({}_{3}\) was intimately ground and pelleted. The pellet was put in an alumina crucible and sintered in air at 1400 \({}^{\circ}\)C for 48 hours with an intermediate grinding.
X-ray diffraction was performed at room temperature for each sample in order to verify the crystal structure and to determine Cr\({}^{3+}\) concentration within the sample. The crystal structure and Cr\({}^{3+}\) occupations at \(1a\) and \(6i\) sites in the unit cell were obtained by means of the Rietveld refinements on the diffraction results using GSAS-II software [63].
The temperature dependence of the DC magnetic susceptibility was measured using a commercial SQUID magnetometer from 0.5 K up to 20 K with an applied magnetic field of 0.01 T. The measurements were done with both field-cooled and zero-field-cooled methods. Susceptibility data of samples with \(p<0.67\) were not taken as these samples had transition temperatures lower than 0.5 K.
The temperature dependence of the molar heat capacity was measured with a commercial physical property measurement system utilizing the heat-pulse method. Each sample was pelleted and attached to a substrate with Apiezon grease. The molar heat capacity from 0.5 K to 10 K was measured with the \({}^{3}\)He option and from 10 K to 50 K with the \({}^{4}\)He option in zero applied magnetic field. The magnetic heat capacity was obtained by subtracting the molar heat capacity of the nonmagnetic sample from that of the magnetic samples.
###### Acknowledgements.
J.Y. and S.-H.L. thank Dr. Matthias Thede and Dr. Andrey Zheludev for their help during some of our DC susceptibility measurements performed at Eidgenossische Technische Hochschule (ETH) Zurich. C.P. thanks Prof. Satoshi Kamoeba for access to his X-ray diffractometer at Tohoku University. C.P. was supported by the DPST scholarship from the Institute for the Promotion of Teaching Science and Technology. Work at Mahidol University was supported in part by the National Research Council of Thailand Grant N41A640158 and the Thailand Center of Excellence in Physics. A.T. and
Figure 4: The \(p\) dependence of (A) \(T_{\rm f}\) and (B) \(\Delta S(0.5\) K, 50 K). The horizontal dashed line is \(S_{\rm max}=R\log 4\). The vertical dashed line indicates the percolation threshold \(p_{\rm c}\). The solid lines are guides to the eyes. The gradient color represents the crossover of the system from the spin-jam to the spin-glass state. (C) The \(p\) dependence of the zero-point entropy of the spin jam \(S_{0}{}^{\rm SJ}\) and the spin glass \(S_{0}{}^{\rm SG}\). \(S_{0}{}^{\rm tot}\) is obtained from \(S_{0}{}^{\rm tot}=S_{\rm max}-\Delta S\) where \(S_{\rm max}=R\log 4\) = 11.53 Jmol\({}_{\rm Cr}\)\({}^{-1}\)K\({}^{-1}\).
S.-H.L. were supported by the US Department of Energy, Office of Science, Office of Basic Energy Sciences Award DE-SC0016144. W.-T.C. thanks the supports by NSTC-Taiwan with project number 108-2112-M-002-025-MY3, TCECM project number 110-2124-M-002-019, and Academia Sinica iMATE grant number AS-iMATE-111-12. T.J.S. was supported by Grants-in-Aids for Scientific Research (JP22H00101, 19KK0069, 19H01834, 19K21839, 19H05824) from MEXT of Japan.
|
2309.03687 | Nonequilibrium Schwinger-Keldysh formalism for density matrix states:
analytic properties and implications in cosmology | Motivated by cosmological Hartle-Hawking and microcanonical density matrix
prescriptions for the quantum state of the Universe we develop
Schwinger-Keldysh in-in formalism for generic nonequilibrium dynamical systems
with the initial density matrix. We build the generating functional of in-in
Green's functions and expectation values for a generic density matrix of the
Gaussian type and show that the requirement of particle interpretation selects
a distinguished set of positive/negative frequency basis functions of the wave
operator of the theory, which is determined by the density matrix parameters.
Then we consider a special case of the density matrix determined by the
Euclidean path integral of the theory, which in the cosmological context can be
considered as a generalization of the no-boundary pure state to the case of the
microcanonical ensemble, and show that in view of a special reflection symmetry
its Wightman Green's functions satisfy Kubo-Martin-Schwinger periodicity
conditions which hold despite the nonequilibrium nature of the physical setup.
Rich analyticity structure in the complex plane of the time variable reveals
the combined Euclidean-Lorentzian evolution of the theory, which depending on
the properties of the initial density matrix can be interpreted as a decay of a
classically forbidden quantum state. | Andrei O. Barvinsky, Nikita Kolganov | 2023-09-07T13:01:55Z | http://arxiv.org/abs/2309.03687v1 | # Nonequilibrium Schwinger-Keldysh formalism for density matrix states:
###### Abstract
Motivated by cosmological Hartle-Hawking and microcanonical density matrix prescriptions for the quantum state of the Universe we develop Schwinger-Keldysh in-in formalism for generic nonequilibrium dynamical systems with the initial density matrix. We build the generating functional of in-in Green's functions and expectation values for a generic density matrix of the Gaussian type and show that the requirement of particle interpretation selects a distinguished set of positive/negative frequency basis functions of the wave operator of the theory, which is determined by the density matrix parameters. Then we consider a special case of the density matrix determined by the Euclidean path integral of the theory, which in the cosmological context can be considered as a generalization of the no-boundary pure state to the case of the microcanonical ensemble, and show that in view of a special reflection symmetry its Wightman Green's functions satisfy Kubo-Martin-Schwinger periodicity conditions which hold despite the nonequilibrium nature of the physical setup. Rich analyticity structure in the complex plane of the time variable reveals the combined Euclidean-Lorentzian evolution of the theory, which depending on the properties of the initial density matrix can be interpreted as a decay of a classically forbidden quantum state.
_To the memory of Jim Hartle_
###### Contents
* 1 Introduction
* 2 Summary of main results
* 2.1 Schwinger-Keldysh technique for models with density matrix state
* 2.2 Euclidean density matrix
* 2.3 Reflection symmetry and analyticity properties
* 3 Preliminaries
* 3.1 Condensed notations
* 3.2 Canonical formalism
* 3.3 The solution of Dirichlet and Neumann boundary value problems
* 3.4 The relation between Dirichlet and Neumann Green's functions
* 3.5 Canonical quantization
* 3.6 Bogoliubov transformations
* 3.7 Fock space and the coherent states
* 4 Generating functional in the path integral formalism
* 4.1 Gaussian states
* 4.2 In-in boundary value problem
* 4.3 Neumann type basis functions and Green's function representation
* 4.4 Keldysh rotation
* 4.5 Special choice of basis functions and particle interpretation
* 4.6 Euclidean density matrix state
* 4.7 Analytic continuation and KMS condition
* 5 Simple applications
* 5.1 Harmonic oscillator
* 5.2 General one-dimensional system
* 5.3 The case of a pure state: vacuum no-boundary wavefunction
* 6 Discussion and conclusions
* A Inversion of matrices
* B Derivation of Eq. (4.32)
* C Derivation of Eq. (4.73)
* D Properties of Gaussian density matrices
## 1 Introduction
The purpose of this paper is to construct the Schwinger-Keldysh in-in formalism [1; 2] for expectation values and correlation functions in a rather generic non-equilibrium system with the initial state in the form of
a special density matrix. This density matrix is itself assumed to be determined by the dynamical content of the system. The motivation for this construction comes from the scope of ideas of quantum cosmology suggesting that the initial state of the Universe should be prescribed not from some ad hoc and freely variable initial conditions like in a generic Cauchy problem, but rather intrinsically fixed by the field theory model of the Universe. The pioneering implementation of these ideas was the prescription of the Hartle-Hawking no-boundary cosmological wavefunction [3; 4], the _no-boundary_ connotation indicating the absence of the notion of an initial Cauchy (boundary) surface of spacetime. Such a prescription replaces the existence of this surface by the requirement of regularity of all fields at all spacetime points, treated in the past as regular internal points of the spacetime manifold.
Applied to a wide class of spatially closed cosmological models this prescription qualitatively leads to the picture of expanding Friedmann Universe with the Lorentzian signature spacetime nucleating from the domain of a Euclidean space with the topology of a 4-dimensional hemisphere, the Euclidean and Lorentzian metrics being smoothly matched by analytical continuation in the complex plane of time coordinate. This picture allows one to avoid initial singularity in the cosmological evolution and, in particular, serves as initial conditions for inflationary scenarios. This is because it implies a pure vacuum state of quantum matter perturbations on top of a quasi-exponentially expanding metric background, both the background and this vacuum state being generated by tunneling from the classically forbidden (underbarrier) state of the Universe, described by the Euclidean spacetime with the imaginary time. Correlation functions of quantum cosmological perturbations in this vacuum state have a good fit to nearly flat red-tilted primordial spectrum of the cosmic microwave background radiation (CMBR) [5; 6] and other features of the observable large scale structure of the Universe [7].
A limitation of this no-boundary concept is that it covers only the realm of pure quantum states. Moreover, it prescribes a particular quantum state which in the lowest order of the perturbation theory yields a special vacuum state. In fact, the idea of Hartle-Hawking no-boundary initial conditions came from the understanding that the vacuum state wavefunction \(\varPsi[\varphi(\mathbf{x})]\) of a generic free field model in flat spacetime can be built as the path integral over field histories \(\phi(\tau,\mathbf{x})\) on a half-space interpolating between a given 3-dimensional configuration \(\varphi(\mathbf{x})\) on the boundary plane \(\tau=0\) and the vanishing value of these fields at the Euclidean time \(\tau\to-\infty\). Beyond perturbation theory, in models with a spectrum of the Hamiltonian bounded from below this procedure yields the lowest energy eigenstate. Thus, the Hartle-Hawking no-boundary wavefunction is the generalization of this distinguished state to the special case of curved spatially closed spacetime, which can be formulated even though the notion of a nontrivially conserved energy does not exist in such a situation.
A natural question arises how to generalize this picture to the physical setup with the density matrix replacing this distinguished pure state. The attempt to do this encounters the problem of constructing the set of physical states \(|\psi\rangle\) along with the set of their weights \(w_{\psi}\) participating in the construction of the density matrix \(\hat{\rho}=\sum_{\psi}w_{\psi}|\psi\rangle\langle\psi|\). This problem looks unmanageable without additional assumptions, but the simplest possible assumption -- universal microcanonical equipartition of all physical states -- allows one to write down the density matrix in a closed form provided one has a complete set of equations which determine a full set of \(|\psi\rangle\). These are the Wheeler-DeWitt equations \(\hat{H}_{\mu}|\psi\rangle=0\) which are quantum Dirac constraints in gravity theory selecting the physical states [8], \(\mu\) being the label enumerating the full set of Hamiltonian and diffeomorphism constraints, which includes also a continuous range of spatial coordinates. The density matrix becomes a formal operator projector on the subspace of these states, which can be written down as an operator delta functions
\[\hat{\rho}=\frac{1}{Z}\prod_{\mu}\delta(\hat{H}_{\mu}), \tag{1.1}\]
the factor \(Z\) being a partition function which provides the normalization \(\operatorname{tr}\hat{\rho}=1\)[9]. Important feature of this formal projector is that a detailed construction of the delta function of _noncommuting_ operators \(\hat{H}_{\mu}\) (which form an open algebra of first class constraints) leads to the representation of this projector in terms of the Batalin-Fradkin-Vilkovisky or Faddeev-Popov path integral of quantum gravity [9; 10] and makes it tractable within perturbation theory.
In contrast to the Hartle-Hawking prescription formulated exclusively in Euclidean spacetime this density matrix expression is built within unitary Lorentzian quantum gravity formalism [11]. Euclidean quantum gravity, however, arises in this picture at the semiclassical level as a mathematical tool of perturbative loop expansion. The partition function \(Z\) of the density matrix (its normalization coefficient) should be determined by the above path integral over closed periodic histories, and the dominant semiclassical contribution comes from the saddle points -- periodic solutions of classical equations of motion. The practice of applications to concrete cosmological models shows, however, that such solutions do not exist in spacetime with the Lorentzian signature, but can be constructed in Euclidean spacetime. The deformation of the integration contour in the complex plane of both dynamical variables and their time argument suggests that these Euclidean configurations can be taken as a ground for a dominant contribution of semiclassical expansion. This gives rise to the following definition of the Euclidean path integral density matrix.
Let the classical background have at least two turning points and describe the periodic (classically forbidden or underbarrier) motion between them in imaginary Lorentzian time (or real Euclidean time \(\tau\)). Then the
two-point kernel \(\rho_{E}(\varphi_{+},\varphi_{-})=\left\langle\varphi_{+}\right|\hat{\rho}_{E} \left|\varphi_{-}\right\rangle\) of the density matrix in question is defined by
\[\rho_{E}(\varphi_{+},\varphi_{-})=\frac{1}{Z}\int D\phi\,e^{-S_{E}[\phi]}\Big{|} _{\phi(\tau_{\pm})=\varphi_{\pm}}, \tag{2}\]
where \(S_{E}[\phi]\) is the Euclidean action of the field perturbations \(\phi(\tau)\) on top of the given background, defined on the period of the Euclidean time, \(\tau_{-}\leq\tau\leq\tau_{+}\), the functional integration runs over field histories interpolating between their values \(\varphi_{\pm}\) -- the arguments of the density matrix kernel. \(Z\) is the partition function given by the path integral over the periodic histories with the period \(\beta=\tau_{+}-\tau_{-}\),
\[Z=\int D\phi\,e^{-S_{E}[\phi]}\Big{|}_{\phi(\tau_{+})=\phi(\tau_{-})}, \tag{3}\]
providing the normalization \(\operatorname{tr}\hat{\rho}_{E}=1\). Hermiticity of this density matrix, which in view of its reality reduces to its symmetry \(\rho_{E}(\varphi_{+},\varphi_{-})=\rho_{E}(\varphi_{-},\varphi_{+})\), implies that the background solution is a bounce that has a reflection symmetry with respect to the middle turning point at \(\frac{\tau_{+}+\tau_{-}}{2}\), and the turning points \(\tau_{\pm}\) are in fact identified.
Up to a normalization the expression (2) is the evolution operator of the Schroedinger equation in imaginary time, \(t=-i\tau\), with the quantum Hamiltonian \(\hat{H}_{S}(\tau)\) calculated on top of the _non-stationary_ background. The Hamiltonian operator here is written down in the Schroedinger picture (which is indicated by the subscript \(S\)) and explicitly depends on the Euclidean time because of this non-stationarity, so that the evolution operator is the Dyson chronological \(\tau\)-ordered exponent
\[\rho_{E}(\varphi_{+},\varphi_{-})=\text{const}\times\langle\varphi_{+}| \text{T}e^{-\int_{-}^{\tau_{+}}d\tau\,\hat{H}_{S}(\tau)}|\varphi_{-}\rangle. \tag{4}\]
Because of the properties of the turning points (zero derivatives of the background field) the Euclidean background can be smoothly matched at \(\tau_{\pm}\) with the classically allowed and _real_ background solution of equations of motion parameterized by real Lorentzian time \(t\). The evolution of quantum perturbations on this Lorentzian branch of the background is then driven by the unitary version of the \(t\)-ordered exponent (4)
\[\hat{U}(t_{+},t_{-})=\text{T}e^{-i\int_{t_{-}}^{t_{+}}dt\,\hat{H}_{S}(t)} \tag{5}\]
with the Hermitian time-dependent Hamiltonian which is evaluated on this Lorentzian background. In the cosmological context, when the spatial sections of spacetime of \(S^{3}\)-topology are represented by circles of a variable scale factor, the graphical image of the combined Euclidean-Lorentzian evolution operator \(\hat{U}(T,0)\hat{\rho}_{E}\hat{U}^{\dagger}(T,0)\) is depicted on Fig. 1. It shows the Euclidean spacetime instanton with the topology \(R^{1}\times S^{3}\), \(R^{1}=[\tau_{-},\tau_{+}]\), bounded at the turning points \(\tau_{\pm}\) by two minimal surfaces \(\Sigma_{\pm}\) with a vanishing extrinsic curvature. This instanton represents the density matrix \(\hat{\rho}_{E}\) and connects the Lorenzian spacetime branches. These branches correspond to the unitary and anti-unitary evolution from \(\Sigma_{\pm}\) in some finite interval of the Lorentzian time \(0\leq t\leq T\).1
Footnote 1: Of course, the second Lorentzian branch could have been attached to the middle turning point \(\frac{\tau_{+}+\tau_{-}}{2}\) of the total period, but this reflection asymmetric setup would correspond to the calculation of the in-out amplitude of underbarrier tunneling through the Euclidean domain, which is not the goal of this paper.
The pictorial representation of the cosmological partition function \(Z\) in view of cancellation of unitary evolution factors, \(\operatorname{tr}\bigl{(}\hat{U}(T,0)\hat{\rho}_{E}\hat{U}^{\dagger}(T,0) \bigr{)}=\operatorname{tr}\hat{\rho}_{E}=1\), contains only the Euclidean part of Fig. 1. It is represented by the closed cosmological instanton with the identified surfaces \(\Sigma_{+}=\Sigma_{-}\) and their 3-dimensional field configurations \(\varphi_{+}=\varphi_{-}\) (following from the identification of the arguments in \(\operatorname{tr}\hat{\rho}_{E}=\int d\varphi\,\rho_{E}(\varphi,\varphi)\)). The origin of this instanton having a donut topology \(S^{1}\times S^{3}\) is shown on Fig. 2.
The Euclidean space bridge incorporates the density matrix correlations between the fields on opposite Lorentzian branches, which only vanish for the density matrix of the pure state factorizable in the product of the wavefunction \(\Psi(\varphi_{+})\) and its complex conjugated counterpart \(\Psi^{*}(\varphi_{-})\), \(\rho_{E}(\varphi_{+},\varphi_{-})=\Psi(\varphi_{+})\,\Psi^{*}(\varphi_{-})\). In the cosmological context this situation is depicted on Fig. 3 with two disconnected Euclidean-Lorentzian manifolds corresponding to these factors. Each of them corresponds to the Hartle-Hawking state, and the partition function is based on the instanton with \(S^{4}\)-topology of Fig. 4. The latter originates by gluing together two 4-dimensional hemispheres (discs \(D_{\pm}^{4}\)) along their common equatorial boundary.
Figure 2: Origin of the partition function instanton from the density matrix instanton by the procedure of gluing the boundaries \(\Sigma_{+}\) and \(\Sigma_{-}\) -- tracing the density matrix.
Figure 1: Picture of instanton representing the density matrix. Gray lines depict the Lorentzian Universe nucleating from the instanton at the minimal surfaces \(\Sigma_{-}\) and \(\Sigma_{+}\).
So the goal of this paper is to construct the generating functional of expectation values and correlation functions of Heisenberg operators defined with respect to such a density matrix. Motivated by applications of quantum cosmology, this is essentially non-equilibrium physical setup, because the cosmological inflationary background is very non-stationary. Because of this it raises many questions which for the impure density matrix case go essentially beyond what is known about the Hartle-Hawking state. In particular, despite non-equilibrium nature this pure state selects a distinguished set of positive/negative frequency basis functions of the so-called Euclidean vacuum which for the de Sitter metric background turns out to be a special case of the de Sitter invariant vacuum [12; 13; 14]. But for a density matrix case this distinguished choice is unknown and, moreover, its reasonable particle interpretation is not granted at all to be possible.
The notion of the Euclidean quantum gravity density matrix was pioneered in [15]. Then, within the concept of the above type, it was built in a concrete inflationary model driven by the trace anomaly of Weyl invariant fields [16]. Interpreted as a microcanonical density matrix of spatially closed cosmology [9]2 it was later shown to be a very promising candidate for the initial quantum state of the Universe. In particular, it includes the existence of the quasi-thermal stage preceding the inflation [16], provides the origin of the Higgs-type or \(R^{2}\)-type inflationary scenario [17] with subplanckian Hubble scale [18] and suppresses the contribution of Hartle-Hawking instantons to zero. Thus, this model allows one to circumvent the main difficulty of the Hartle-Hawking prescription -- insufficient amount of inflation in the Hartle-Hawking ensemble of universes dominated by vanishingly small values of the effective cosmological constant. Elimination of this infrared catastrophe is, on the one hand, the quantum effect of the trace anomaly which flips the sign of the Euclidean effective action and sends it to \(+\infty\)[16; 19]. On the other hand, this is the hill-top nature of inflation starting from the maximum of inflaton potential rather than from its minimum [20]. Finally, this model suggests that quantum origin of the Universe is the subplanckian phenomenon subject to semiclassical \(1/N\)-perturbation theory in the number of numerous higher-spin conformal fields [21]. Thus, it sounds reliable even in the absence of currently unavailable non-perturbative methods of quantum gravity.
Footnote 2: This interpretation follows from the analogy with the microcanonical ensemble whose density matrix is a projector on the subspace of fixed conserved energy. As mentioned above, in the absence of the notion of conserved energy the role of this projection in closed cosmology is played by the delta function of Hamiltonian and momentum constraints, i.e. the projector on their conserved zero value.
All these conclusions have been recently reviewed in [22] including certain preliminary results on the primordial CMBR spectra, which might even bear potentially observable thermal imprint of the pre-inflation stage of this model [23]. However, detailed calculation of this spectrum and of higher order correlation functions requires the construction of the in-in Schwinger-Keldysh formalism extended to the setup with the initial density matrix of the above type.
Schwinger-Keldysh formalism [1; 2] was intensively applied in quantum gravity and cosmology, and the number of publications on this subject is overwhelmingly high, so that we briefly mention only a minor part of them. Together with early applications [24; 25; 26] and the pioneering calculation of non-gaussianities in cosmological perturbation spectra [27] these works include the calculation of cosmological correlation functions [28; 29], the results on cosmological singularity avoidance due to nonlocal effects [30], equivalence of the Euclidean and in-in formalisms in de Sitter QFT [31; 32] and even the analysis of initial conditions within Schwinger-Keldysh formalism [33]. Among recent results one should mention the development of a special effective field theory method based on analyticity and unitarity features of in-in formalism [34], its applications to four-point correlators in inflationary cosmology [35] and numerous conformal field theory and holography ramifications of Schwinger-Keldysh technique (see, for example [34; 36] and references therein). However, the success of these works essentially relies on working with the model of a spatially flat Universe -- extension to
Figure 4: Origin of the partition function instanton from the density matrix instanton by the procedure of gluing the boundaries \(\varSigma_{+}\) and \(\varSigma_{-}\), i.e. tracing the density matrix.
Figure 3: Density matrix of the pure Hartle-Hawking state represented by the union of two no-boundary instantons.
spatially closed cosmology with \(S^{3}\)-sections is likely to invalidate many of these exact analytical results. At the same time, despite a general belief that inflation completely washes out the details of the initial quantum state, learning its imprint on the Universe requires going beyond the \(K=0\) FRW model. Moreover, recent analysis of the large scale Planck 2018 data, associated with the Hubble tension problem in modern precision cosmology [37], testifies at more than 99% confidence level in favor of the closed Universe, preferring a positive curvature with \(K=+1\)[38; 39]. Remarkably, the model of microcanonical initial conditions in early quantum cosmology of [9; 16] exists only for \(K=+1\). Therefore, robust observational evidence in favour of a positive spatial curvature serves as an additional motivation for this model and justifies our goals.
Having said enough about the cosmological motivation for the density matrix modification of the in-in formalism, let us emphasize that the usefulness of this modification extends to a much wider area. Note that the expression (4) for the case of a static background is nothing but the well-known density matrix of the equilibrium canonical ensemble at the inverse temperature \(\beta=\tau_{+}-\tau_{-}\),
\[\hat{\rho}=\frac{1}{Z}e^{-\beta\widehat{H}}. \tag{6}\]
Its evolution in time gives rise to the Matsubara technique of thermal Green's functions [40] and thermofield dynamics [41], which satisfies nontrivial analyticity properties in the complex plane of time, including periodicity in the direction of the imaginary axis -- the Kubo-Martin-Schwinger (KMS) condition [42; 43]. Many of these properties depend on the condition of equilibrium and are associated with the conservation of energy. What we suggest here is the generalization of this technique to the non-equilibrium situation with the Hamiltonian explicitly depending on time, which would be important in many areas of quantum field theory, high energy and condensed matter physics. To cover as wide a scope of models and problems as possible we will try to be maximally generic and use condensed notations applicable in generic dynamical systems.
In this paper we will basically consider the elements of the diagrammatic technique for the density matrix in-in formalism. Therefore we restrict ourselves to systems having a quadratic action on top of a non-stationary background subject to the reflection symmetry discussed above. The one-loop preexponential factors of this formalism will be considered elsewhere.
The paper is organized as follows. Section 2 contains the summary of notations and main results. It includes the formulation of in-in generating functional in the generic non-equilibrium system with a Gaussian type initial density matrix, the selection of distinguished set of positive/negative frequency basis functions of the wave operator, determined by the density matrix parameters, and application of this formalism to a special density matrix based on the Euclidean path integral, this case demonstrating special reflection symmetry, analyticity and KMS periodicity properties. Section 3 presents preliminary material of canonical quantization and the technique of boundary value problems and relevant Green's functions in a generic dynamical system. Section 4 contains detailed derivation of all the results. Section 5 is devoted to the demonstration of the formalism on concrete examples, while Section 6 contains a concluding discussion along with the prospects of future research. Several appendices give technical details of derivations and present certain nontrivial properties of Green's functions and Gaussian type density matrices.
## II Summary of main results
### Schwinger-Keldysh technique for models with density matrix state
We consider a generic system with the action \(S[\phi]\) quadratic in the dynamical variables \(\phi=\phi^{I}(t)\), the index \(I\) including both the discrete tensor labels and, in the field-theoretical context, also the spatial coordinates,
\[S[\phi]=\frac{1}{2}\int dt\Big{(}\dot{\phi}^{T}A\dot{\phi}+\dot{ \phi}^{T}B\phi+\phi^{T}B^{T}\dot{\phi}+\phi^{T}C\phi\Big{)}. \tag{7}\]
Here \(A=A^{T}\equiv A_{IJ}\), \(B\equiv B_{IJ}\) and \(C=C^{T}\equiv C_{IJ}\) are the matrices acting in the vector space of \(\phi^{J}\), the superscript \({}^{T}\) denoting the transposition, \(\phi\) being a column and \(\phi^{T}\) -- a row (the use of these canonical condensed notations including also spatial integration over contracted indices \(I\) will be discussed in much detail in Section 3). What is most important throughout the paper, all these matrices are generic functions of time \(A=A(t)\), \(B=B(t)\), \(C=C(t)\), reflecting non-equilibrium and non-stationary physical setup. This action will be considered as a quadratic part of the full nonlinear action in field perturbations \(\phi\) on a certain background whose possible symmetries will be inherited by these coefficients as certain restrictions on their time dependence. These restrictions will be very important for the results of the paper and will be discussed below, but otherwise this time dependence is supposed to be rather generic.
The prime object of our interest will be the Schwinger-Keldysh generating functional of the in-in expectation values and correlation functions of Heisenberg operators in the physical state described by the initial density matrix \(\hat{\rho}\). This is the functional of two sources
\[Z[J_{1},J_{2}]=\text{tr}\left[\hat{U}_{J_{1}}(T,0)\,\hat{\rho}\,\hat{U}^{\dagger }_{-J_{2}}(T,0)\right]. \tag{8}\]
Here the trace is taken over the Hilbert space of the canonically quantized field \(\hat{\phi}\) and \(\hat{U}_{J}(T,0)\) is the operator of unitary evolution from \(t=0\) to \(t=T\) with the time dependent Hamiltonian corresponding to the action (7) and modified by the source term \(-J^{T}(t)\phi(t)\equiv-J_{I}(t)\phi^{I}(t)\) with the source \(J^{T}(t)=J_{I}(t)\). In the
Schroedinger picture (labelled by \(S\)) it reads as the chronologically ordered operator T-exponent
\[\hat{U}_{J}(T,0)=\text{T}e^{-i\int\limits_{0}^{T}dt\,\left(\hat{H}_{S}(t)-J(t) \hat{\phi}_{S}\right)}. \tag{3}\]
We will consider the class of density matrices whose kernel in the coordinate representation \(\left\langle\varphi_{+}\right|\hat{\rho}\left|\varphi_{-}\right\rangle=\rho( \varphi_{+},\varphi_{-})\) has the following Gaussian form -- exponentiated quadratic and linear forms in \(\varphi_{\pm}\),
\[\rho(\mathbf{\varphi})=\text{const}\times\exp\left\{-\frac{1}{2}\mathbf{ \varphi}^{T}\mathbf{\Omega}\,\mathbf{\varphi}+\mathbf{j}^{T}\!\mathbf{\varphi}\right\}, \tag{4}\] \[\mathbf{\varphi}=\begin{bmatrix}\varphi_{+}\\ \varphi_{-}\end{bmatrix},\quad\mathbf{j}=\begin{bmatrix}j_{+}\\ j_{-}\end{bmatrix}, \tag{5}\]
where we assembled \(\varphi_{\pm}\) into the two-component column multiplets (denoted by boldfaced letters) \(\mathbf{\varphi}\), did the same with the coefficients \(\mathbf{j}\) of the linear form and introduced the \(2\times 2\) block-matrix \(\mathbf{\Omega}\) acting in the space of such two-component multiplets
\[\mathbf{\Omega}=\begin{bmatrix}R&S\\ S^{*}&R^{*}\end{bmatrix},\quad R=R^{T},\quad S=S^{\dagger}. \tag{6}\]
The blocks of this matrix \(R=R_{IJ}\), \(S=S_{IJ}\) and their complex and Hermitian conjugated versions, \(S^{\dagger}\equiv S^{T*}\), should satisfy these transposition and conjugation properties in order to provide Hermiticity of the density matrix. The same concerns the "sources" \(j_{\pm}\) in the definition of \(\mathbf{j}\), \(j_{+}=j_{-}^{*}\equiv j\). Transposition operation above applies also to two-component objects, so that \(\mathbf{\varphi}^{T}=[\varphi_{+}^{T}\ \varphi_{-}^{T}]\). Below we will denote \(2\times 2\) block matrices and relevant 2-block component columns and rows by boldfaced letters.
Such a choice of the density matrix is motivated by the fact that for a block-diagonal \(\mathbf{\Omega}\) it reduces to a pure quasi-vacuum state, its "source" \(\mathbf{j}\) allows one to induce nonzero mean value of the quantum field and by the differentiation with respect to \(\mathbf{j}\) one can generate a much wider class of density matrices with "interaction" terms in the exponential. Normalizability of the density matrix of course implies that the real part of \(\mathbf{\Omega}\) should be a positive definite matrix.
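To make this concrete, the following minimal numerical sketch (single mode, hypothetical scalar values of \(R\) and \(S\)) discretizes the kernel (4) on a grid and checks Hermiticity, unit trace, positivity and the purity criterion: the state is pure only for vanishing cross-correlation \(S=0\).

```python
import numpy as np

# Single-mode Gaussian density matrix kernel (4) with real scalar blocks:
# rho(p, m) ~ exp{-(1/2)(R p^2 + 2 S p m + R m^2)}, p = phi_+, m = phi_-.
# Hypothetical values: Re(Omega) positive definite requires R > |S|,
# positivity of the state requires sigma = S/R to be negative.
R, S = 2.0, -0.8
x = np.linspace(-6.0, 6.0, 801)
dx = x[1] - x[0]
P, M = np.meshgrid(x, x, indexing="ij")
rho = np.exp(-0.5 * (R * P**2 + 2 * S * P * M + R * M**2))
rho /= np.trace(rho) * dx                     # normalize: tr(rho) = 1

print("hermitian:", np.allclose(rho, rho.T))  # real symmetric kernel
evals = np.linalg.eigvalsh(rho * dx)          # spectrum of the integral operator
print("min eigenvalue:", evals.min())         # >= 0 for a physical state
print("purity tr(rho^2):", np.sum(evals**2))  # < 1 here; = 1 only for S = 0
```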
The path integral representation for the coordinate kernels of the unitary evolution operator (2) allows one to rewrite the generating functional \(Z[J_{1},J_{2}]\) as the double path integral. For this purpose it is useful to introduce the two-component notations for the histories \(\phi_{1}(t)\) and \(\phi_{2}(t)\) as well as for their sources,
\[\phi_{1},\phi_{2}\mapsto\mathbf{\phi}=\begin{bmatrix}\phi_{1}\\ \phi_{2}\end{bmatrix},\quad J_{1},J_{2}\mapsto\mathbf{J}=\begin{bmatrix}J_{1}\\ J_{2}\end{bmatrix}, \tag{7}\]
In terms of these notations the generating functional reads
\[Z[\mathbf{J}]=\int D[\mathbf{\phi},\mathbf{\varphi}]\,\exp\Biggl{\{}i\mathbf{S} [\mathbf{\phi}]+i\int_{0}^{T}dt\,\mathbf{J}^{T}\mathbf{\phi}\] \[\qquad\qquad\qquad\qquad-\frac{1}{2}\mathbf{\varphi}^{T}\mathbf{\Omega}\, \mathbf{\varphi}+\mathbf{j}^{T}\!\mathbf{\varphi}\Biggr{\}}, \tag{8}\]
where the total action is obviously
\[\mathbf{S}[\mathbf{\phi}]=S[\phi_{1}]-S[\phi_{2}] \tag{9}\]
with the actions \(S[\phi_{1,2}]\) given by (1) in the integration range from \(t=0\) to \(t=T\) and the total integration measure over \(\mathbf{\phi}\) and \(\mathbf{\varphi}\)
\[D[\mathbf{\phi},\mathbf{\varphi}]=\int d\varphi_{+}\,d\varphi_{-}\!\!\!\!\int\limits_{\begin{subarray}{c}\phi_{1}(0)=\varphi_{+}\\ \phi_{2}(0)=\varphi_{-}\\ \phi_{1}(T)=\phi_{2}(T)\end{subarray}}\!\!\!\!D\phi_{1}\,D\phi_{2}, \tag{10}\]

the trace operation in (8) equating the two integration histories at the final moment of time, \(\phi_{1}(T)=\phi_{2}(T)\). Gaussian integration over \(\mathbf{\phi}\) and \(\mathbf{\varphi}\) then expresses \(Z[\mathbf{J}]\) through the block-matrix in-in Green's function \(\mathbf{G}(t,t^{\prime})\) associated
with the operator \(F\) -- the Hessian of the action (1), \(F\delta(t-t^{\prime})=\delta^{2}S[\phi]/\delta\phi(t)\,\delta\phi(t^{\prime})\),
\[F=-\frac{d}{dt}A\frac{d}{dt}-\frac{d}{dt}B+B^{T}\frac{d}{dt}+C. \tag{14}\]
The block-matrix Green's function \(\mathbf{G}(t,t^{\prime})\), as is usually done in boundary value problems, can be built in terms of the full set of basis functions \(\mathbf{v}_{\pm}\) of this operator, satisfying the boundary conditions of the variational problem for the action in (8). This will explicitly be done in Section 4, but in view of the complexity of these boundary conditions, intertwining the \(\phi_{1,2}\)-branches of the field space, these basis functions do not have a clear particle interpretation, that is, a separation into positive and negative frequency parts. This difficulty is caused by the convolution of problems associated, on the one hand, with the non-equilibrium nature of a generic background (rather generic dependence of the operator coefficients \(A(t)\), \(B(t)\) and \(C(t)\) on time) and, on the other hand, with the in-in physical setup involving a nontrivial density matrix.
Despite these difficulties, there exists a distinguished set of basis functions for the wave operator which have a clear particle interpretation, and this is one of the main results of the paper. This set is related by Bogoliubov transformations to \(\mathbf{v}_{\pm}(t)\) and is uniquely prescribed by the full set of complex conjugated positive and negative frequency basis functions of the operator (14), \(v(t)\) and \(v^{*}(t)\), which satisfy the initial value problem at \(t=0\),
\[\begin{split}& Fv(t)=0,\\ &\left.(iW-\omega)v(t)\right|_{t=0}=0,\ \left.(iW+\omega^{*})v^{*}(t) \right|_{t=0}=0,\end{split} \tag{15}\]
where \(W\) is what we call the _Wronskian_ operator
\[W=A\frac{d}{dt}+B, \tag{16}\]
which participates in the Wronskian relation for the operator \(F\), valid for any two complex fields \(\phi_{1,2}(t)\),
\[\phi_{2}^{T}F\phi_{1}-(F\phi_{2})^{T}\phi_{1}=-\frac{d}{dt}\left[\phi_{2}^{T} W\phi_{1}-(W\phi_{2})^{T}\phi_{1}\right] \tag{17}\]
and, moreover, serves as the definition of the conserved (not positive-definite) inner product in the space of solutions of the homogeneous wave equation, \(F\phi_{1,2}=0\),
\[(\phi_{1},\phi_{2})=i\phi_{1}^{\dagger}(W\phi_{2})-i(W\phi_{1})^{\dagger} \phi_{2}. \tag{18}\]
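For orientation, in the simplest single-mode case with \(A=m\) and \(B=0\) (taken here purely for illustration) the function \(v(t)=e^{-i\omega t}/\sqrt{2m\omega}\) is unit-normalized in this inner product,

\[(v,v)=i\,v^{\dagger}m\dot{v}-i\,m\dot{v}^{\dagger}v=\frac{1}{2}+\frac{1}{2}=1.\]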
We will call the boundary conditions (15), and the Green's functions associated with them, the Neumann ones3.
Footnote 3: Strictly speaking, these are the analogue of Robin boundary conditions, because they contain together with the derivative transversal to the boundary also the tangential terms composed of the coefficient \(B\) and \(\omega\).
An important point of the definition (15) is that the frequency matrix \(\omega\) (remember that in the generic setup this is a matrix \(\omega_{IJ}\) acting in the vector space of \(\phi^{J}\)) is not directly contained in the blocks of the matrix (6), but follows from the requirement of the particle interpretation of the basis functions \(v(t)\). This requirement can be formulated as follows. One defines the creation-annihilation operators \(\hat{a}^{\dagger}\) and \(\hat{a}\) in terms of which the Heisenberg operator \(\hat{\phi}(t)\) is decomposed as the sum of positive- and negative-frequency basis functions \(v(t)\) and \(v^{*}(t)\), \(\hat{\phi}(t)=v(t)\,\hat{a}+v^{*}(t)\,\hat{a}^{\dagger}\). Then there exist non-anomalous and anomalous particle averages with respect to the density matrix,
\[\nu=\operatorname{tr}\bigl{[}\hat{\rho}\,\hat{a}^{\dagger}\hat{a}\bigr{]}, \quad\kappa=\operatorname{tr}\bigl{[}\hat{\rho}\,\hat{a}\,\hat{a}\bigr{]}, \tag{19}\]
and the requirement of vanishing anomalous average \(\kappa=0\) allows one to assign the average \(\nu\) the interpretation of the set of occupation numbers associated with \(\hat{\rho}\). This requirement serves as the equation for the frequency matrix \(\omega\) which, as it is shown in Section 4, can be explicitly solved for a special case of the real matrix \(\mathbf{\varOmega}\). This solution reads
\[\omega=R^{1/2}\sqrt{I-\sigma^{2}}R^{1/2},\quad\sigma\equiv R^{-1/2}SR^{-1/2} \tag{20}\]
and gives the expression for the occupation number matrix in terms of the single symmetric matrix \(\sigma\) after the orthogonal rotation by the orthogonal matrix \(\varkappa\),
\[\nu=\frac{1}{2}\varkappa\left(\sqrt{\frac{I-\sigma}{I+\sigma}}-1 \right)\varkappa^{T}, \tag{21}\] \[\varkappa\equiv\bigl{[}\omega^{1/2}R^{-1}\omega^{1/2}\bigr{]}^{1/ 2}\omega^{-1/2}R^{1/2}=\bigl{(}\varkappa^{T}\bigr{)}^{-1}. \tag{22}\]
As shown in Appendix D, the existence of this particle interpretation with a positive definite matrix \(\nu\) fully matches the conditions of normalizability, boundedness and positivity of the density matrix, incorporating positive definiteness of the matrices \(I\pm\sigma\) and negative definiteness of \(\sigma\).
With the normalization of these distinguished basis functions \(v\) to unity
\[(v_{A},v_{B})=-(v_{A}^{*},v_{B}^{*})=\delta_{AB}, \tag{23}\]
where \(A\) is the index enumerating the full set of basis functions, the blocks of the in-in Green's function (12) take the form
\[iG_{\mathrm{T}}(t,t^{\prime}) =v(t)\,v^{\dagger}(t^{\prime})\,\theta(t-t^{\prime})+v^{*}(t)\,v ^{T}(t^{\prime})\,\theta(t^{\prime}-t)\] \[+\,v(t)\,\nu\,v^{\dagger}(t^{\prime})+v^{*}(t)\,\nu\,v^{T}(t^{ \prime}), \tag{24}\] \[iG_{>}(t,t^{\prime}) =v(t)\,\bigl{(}\nu+I\bigr{)}\,v^{\dagger}(t^{\prime})+v^{*}(t)\, \nu\,v^{T}(t^{\prime}). \tag{25}\]
Here the terms of the type \(v(t)\,v^{\dagger}(t^{\prime})\) should be understood as the matrix products \(\sum_{A}v_{A}^{I}(t)\,v_{A}^{*J}(t^{\prime})\) (one should bear in mind that the basis function \(v(t)=v_{A}^{J}(t)\) represents the square (but asymmetric) matrix whose upper indices label the field \(\phi^{I}\) components, whereas the subscript indices \(A\) enumerate the basis functions
in their full linearly independent set). Correspondingly, \(v(t)\,\nu\,v^{\dagger}(t^{\prime})=\sum_{A,B}v^{I}_{A}(t)\,\nu^{AB}\,v^{*J}_{B}(t^{\prime})\), etc.
This form of the Green's functions is very familiar from thermofield dynamics for simple equilibrium condensed matter systems, when all the matrices of the above type become diagonal in the momentum space of field modes labeled by \(A=\mathbf{p}\), \(\sum_{A}=\int d^{3}\mathbf{p}/(2\pi)^{3/2}\), and \(\nu_{AB}=\nu_{\mathbf{p},\mathbf{p}^{\prime}}=(\exp(\beta\omega_{\mathbf{p}})-1)^{-1}\delta(\mathbf{p}-\mathbf{p}^{\prime})\) represents the expected occupation number for Bose-Einstein statistics at inverse temperature \(\beta\) (detailed consideration of this example is presented in Section 5). Remarkably, the occupation number picture generalizes to nonequilibrium systems of a very general type -- the function of the single symmetric matrix in the parentheses of Eq. (21) can be diagonalized by an extra orthogonal rotation (additional to that of \(\varkappa\)), and its eigenvalues would serve as occupation numbers in the generic nonequilibrium state with the initial density matrix.
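As a cross-check of the relations (20)-(22), the following minimal numerical sketch (hypothetical \(2\times 2\) real blocks \(R\) and \(S\)) computes \(\sigma\), \(\omega\), \(\varkappa\) and \(\nu\) and verifies the orthogonality \(\varkappa\varkappa^{T}=I\) together with positivity of the occupation numbers:

```python
import numpy as np
from scipy.linalg import sqrtm

def msqrt(M):
    # principal matrix square root; real part taken for SPD input
    return np.real(sqrtm(M))

# Hypothetical Gaussian density-matrix blocks: R symmetric positive definite,
# S symmetric, sigma negative definite, I - sigma^2 positive definite.
R = np.array([[2.0, 0.3], [0.3, 1.5]])
S = np.array([[-0.4, 0.1], [0.1, -0.5]])
I = np.eye(2)

Rh = msqrt(R)
Rhi = np.linalg.inv(Rh)
sigma = Rhi @ S @ Rhi                        # sigma = R^{-1/2} S R^{-1/2}
omega = Rh @ msqrt(I - sigma @ sigma) @ Rh   # frequency matrix, Eq. (20)

wh = msqrt(omega)
whi = np.linalg.inv(wh)
kappa = msqrt(wh @ np.linalg.inv(R) @ wh) @ whi @ Rh   # varkappa, Eq. (22)
nu = 0.5 * kappa @ (msqrt((I - sigma) @ np.linalg.inv(I + sigma)) - I) @ kappa.T

print("kappa orthogonal:", np.allclose(kappa @ kappa.T, I))
print("nu symmetric:", np.allclose(nu, nu.T),
      "nu >= 0:", np.all(np.linalg.eigvalsh(nu) >= 0))   # Eq. (21)
```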
### Euclidean density matrix
As discussed in Introduction, in quantum cosmology context the density matrix itself can be given in terms of the Euclidean path integral and thus dynamically determined by individual properties of the system including its action functional. So we consider the path integral expression for the Euclidean density matrix \(\langle\varphi_{+}|\,\hat{\rho}_{E}[J_{E}]\,|\varphi_{-}\rangle\equiv\rho_{E} (\varphi_{+},\varphi_{-};J_{E}]\),
\[\rho_{E}(\varphi_{+},\varphi_{-};J_{E}]=\frac{1}{Z}\!\!\!\!\int\limits_{\begin{subarray}{c}\phi(\tau_{\pm})=\varphi_{\pm}\end{subarray}}\!\!\!\!D\phi\,\exp\left\{-S_{E}[\phi]+\int_{\tau_{-}}^{\tau_{+}}d\tau\,J_{E}^{T}\phi\right\}, \tag{2.26}\]

where \(S_{E}[\phi]\) is the Euclidean action on the time segment \(\tau_{-}\leq\tau\leq\tau_{+}\) and \(Z\) is the partition function normalizing the density matrix to a unit trace. Substituting this density matrix into the generating functional (8) and performing the Gaussian path integration,
one directly finds the total Schwinger-Keldysh generating functional with the full set of sources probing the two Lorentzian branches and the Euclidean branch of the in-in formalism
\[Z[\mathbf{J}, J_{E}]=\text{const}\times\exp\Biggl{\{}-\frac{i}{2}\int_{0}^{T}dt\,dt^{ \prime}\,\mathbf{J}^{T}(t)\mathbf{G}(t,t^{\prime})\mathbf{J}(t^{\prime})\] \[-\int_{0}^{T}dt\,\mathbf{J}^{T}(t)\,\mathbf{G}(t,0)\,\mathbf{j}_{E}+\frac{i}{2 }\,\mathbf{j}_{E}^{T}\,\mathbf{G}(0,0)\,\mathbf{j}_{E}\] \[+\frac{1}{2}\int_{0}^{\beta}d\tau\,d\tau^{\prime}\,J_{E}(\tau)\,G _{D}(\tau,\tau^{\prime})\,J_{E}(\tau^{\prime})\Biggr{\}}. \tag{2.34}\]
### Reflection symmetry and analyticity properties
The obtained expression for \(Z[\mathbf{J},J_{E}]\) features a nontrivial mixup of the Neumann and Dirichlet Green's functions of different Lorentzian \(F\) and Euclidean \(F_{E}\) wave operators, but it becomes essentially unified if we assume that the Lorentzian and Euclidean actions are related by the analytic continuation of the form
\[\left.iS[\phi(t)]\right|_{t=-i\tau}=-S_{E}[\phi_{E}(\tau)], \tag{2.35}\]
where the Lorentzian and Euclidean histories are also related by the same continuation rule \(\phi(t)|_{t=-i\tau}=\phi_{E}(\tau)\). This, in particular, implies that the coefficients of the operators \(F_{E}\) and \(F\) are related by
\[\begin{split}& A_{E}(\tau)=A(-i\tau),\quad B_{E}(\tau)=-iB(-i \tau),\\ & C_{E}(\tau)=-C(-i\tau),\end{split} \tag{2.36}\]
so that \(F|_{t=-i\tau}=-F_{E}\). The origin of these relations, especially in connection with reality condition for the coefficients of both Lorentzian and Euclidean operators at their respective _real_\(t\) and \(\tau\) arguments can be traced back to the properties of the full nonlinear action which gives rise to its quadratic part on top of a special background solution of full equations of motion. It is assumed that the Euclidean background solution has a turning point at \(\tau=0\) where all real field variables have zero \(\tau\)-derivatives and can be smoothly continued to the imaginary axis of \(\tau=it\) where they become again real functions of real \(t\). This leads to the above continuation rule with real \(A(t),B(t),C(t)\) and \(A_{E}(\tau),B_{E}(\tau),C_{E}(\tau)\) at real \(t\) and \(\tau\).
With this analytic continuation rule, the expression (2.34) for \(Z[\mathbf{J},J_{E}]\) can indeed be uniformly rewritten in terms of the Lorentzian, Euclidean and mixed Lorentzian-Euclidean Green's functions, all of them subject to one and the same set of Neumann type boundary conditions which select in the Lorentzian branch of the in-in formalism a distinguished set of positive and negative frequency basis functions. This expression reads
\[Z[\mathbb{J}]=\text{const}\times\exp\Biggl{\{}\frac{1}{2}\int_{\mathbb{C}}dz \,dz^{\prime}\,\mathbb{J}^{T}(z)\,\mathbb{G}(z,z^{\prime})\,\,\mathbb{J}(z^{ \prime})\Biggr{\}}, \tag{2.37}\]
where the \(z\)-integration runs respectively over \(t\) or \(\tau\) in the domain \(\mathbb{C}=[0\leq t\leq T]\cup[0\leq\tau\leq\beta]\) depending on which of these Lorentzian or Euclidean time variables is in the argument of the following block matrix Green's function \(\mathbb{G}(z,z^{\prime})\) and the corresponding source \(\mathbb{J}(z)\),
\[\mathbb{G}(z,z^{\prime})\!=\!\!\left[\begin{array}{cc}-i\mathbf{G}(t,t^{\prime} )&\mathbf{G}_{LE}^{<}(t,\tau^{\prime})\\ \mathbf{G}_{LE}^{>}(\tau,t^{\prime})&G_{E}(\tau,\tau^{\prime})\end{array}\right],\,\mathbb{J}(z)\!=\!\left[\begin{array}{c}\mathbf{J}(t)\\ J_{E}(\tau)\end{array}\right]\!. \tag{2.38}\]
Here the Euclidean and Lorentzian-Euclidean blocks of the total Green's function
\[G_{E}(\tau,\tau^{\prime}) =G_{E}^{>}(\tau,\tau^{\prime})\,\theta(\tau-\tau^{\prime})\] \[\qquad\qquad+G_{E}^{<}(\tau,\tau^{\prime})\,\theta(\tau^{\prime} -\tau), \tag{2.39}\] \[\mathbf{G}_{LE}^{<}(t,\tau) =\left[\begin{array}{c}G_{LE}^{1}(t,\tau)\\ G_{LE}^{2}(t,\tau)\end{array}\right]=\left[\begin{array}{c}I\\ I\end{array}\right]G_{LE}^{<}(t,\tau),\] (2.40) \[\mathbf{G}_{LE}^{>}(\tau,t) =\left[\mathbf{G}_{LE}^{<}(t,\tau)\right]^{T}\!, \tag{2.41}\]
express in terms of the relevant Euclidean and Lorentzian-Euclidean Wightman functions
\[G_{E}^{>}(\tau,\tau^{\prime}) =u_{+}(\tau)(\nu+I)u_{-}^{T}(\tau^{\prime})\!+\!u_{-}(\tau)\,\nu\, u_{+}^{T}(\tau^{\prime}), \tag{2.42}\] \[G_{E}^{<}(\tau,\tau^{\prime}) =\left[\begin{array}{c}G_{E}^{>}(\tau^{\prime},\tau)\end{array} \right]^{T}\!,\] (2.43) \[G_{LE}^{<}(t,\tau) =v(t)\,(\nu+I)\,u_{-}^{T}(\tau)+v^{*}(t)\,\nu\,u_{+}^{T}(\tau). \tag{2.44}\]
In their turn these Green's functions, as one can see, are built according to one and the same universal pattern out of the full set of Lorentzian \(v(t)\) and \(v^{*}(t)\) and Euclidean \(u_{\pm}(\tau)\) basis functions. All these functions are subject to Neumann boundary conditions (2.15) and
\[(W_{E}+\omega)u_{+}|_{\tau=\beta}=0,\qquad(W_{E}-\omega)u_{-}|_{\tau=0}=0. \tag{2.45}\]
For \(\omega\) fixed by the above condition of particle interpretation, leading to the expressions (2.20)-(2.22), the Euclidean basis functions \(u_{\pm}\) have a remarkable property. They satisfy at opposite ends of the Euclidean segment \(\tau_{\mp}\) the same boundary conditions4
Footnote 4: In fact, the requirement of \(\kappa=0\) in (2.19) turns out to be the necessary and sufficient condition for this property of Euclidean basis functions, that is the coincidence of boundary condition for \(u_{\pm}(\tau)\) at both ends of the time segment leads to \(\kappa=0\).
\[(W_{E}+\omega)u_{+}|_{\tau=0}=0,\quad(W_{E}-\omega)u_{-}|_{\tau=\beta}=0. \tag{2.46}\]
If one smoothly continues the operator \(F_{E}\) beyond the segment \(\tau_{-}\leq\tau\leq\tau_{+}\), then it becomes periodic with the
period \(\beta\) (which is possible because \(\tau_{\pm}\) are assumed to be the turning points of the background solution on top of which the Hessian of the nonlinear action of the theory is built). This means that the basis functions \(u_{\pm}\) of this operator become quasi-periodic -- \(u_{\pm}(\tau+\beta)\) expresses as a linear combination of the same basis functions \(u_{\pm}(\tau)\) (no mixing between \(u_{-}\) and \(u_{+}\) occurs in their monodromy matrix). As shown in Section 4, with the normalization \(u_{\pm}(0)=1/\sqrt{2\omega}\) this quasi-periodicity property reads in terms of the occupation number matrix (2.21)
\[\begin{split} u_{-}(\tau+\beta)&=u_{-}(\tau)\frac {\nu+I}{\nu},\\ u_{+}(\tau+\beta)&=u_{+}(\tau)\frac{\nu}{\nu+I}. \end{split} \tag{2.47}\]
Together with the reflection symmetry relative to the middle point of the Euclidean time segment (2.28) the periodicity of the operator \(F_{E}\) implies its reflection symmetry with respect to the point \(\tau=0\)
\[\begin{split} A_{E}(\tau)&=A_{E}(-\tau),\quad B_{ E}(\tau)=-B_{E}(-\tau),\\ C_{E}(\tau)&=C_{E}(-\tau).\end{split} \tag{2.48}\]
Therefore, similarly to quasi-periodicity the basis functions \(u_{\pm}(\pm\tau)\) are also related by the analogue of the anti-diagonal monodromy matrix \(L\), \(u_{+}(\tau)=u_{-}(-\tau)\,L\), which is trivial in view of the normalization \(u_{\pm}(0)=1/\sqrt{2\omega}\),
\[u_{+}(\tau)=u_{-}(-\tau). \tag{2.49}\]
The above relations introduce the analytic structure which allows one to express all basis and Green's functions on the Euclidean-Lorentzian domain \(\mathbb{C}\) in terms of one analytic function \(V(z)\) of the complexified time variable \(z=t-i\tau\). This follows from the fact, mentioned above, that the Lorentzian wave operator \(F\) can be regarded as the analytic continuation of the Euclidean operator \(F_{E}\) into the complex plane of time at the point \(z=0\), \(F\equiv F(t,d/dt)=-F_{E}|_{\tau=it}\). As a consequence its basis function \(v(t)\) in view of its boundary conditions and boundary conditions (2.46) for the Euclidean function \(u_{+}(\tau)\) also turns out to be the analytic continuation of the latter,
\[v(t)=u_{+}(it). \tag{2.50}\]
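As a simple illustration of (2.45)-(2.50), consider a single mode of constant frequency \(\omega\) (the thermal case treated in detail in Section 5). The normalization \(u_{\pm}(0)=1/\sqrt{2\omega}\) and the quasi-periodicity (2.47) are then solved by

\[u_{\pm}(\tau)=\frac{e^{\mp\omega\tau}}{\sqrt{2\omega}},\qquad\nu=\frac{1}{e^{\beta\omega}-1},\]

so that \(u_{+}(\tau)=u_{-}(-\tau)\) in accordance with (2.49), and the continuation \(v(t)=u_{+}(it)=e^{-i\omega t}/\sqrt{2\omega}\) is indeed a positive frequency basis function.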
Therefore, the operators \(F\) and \(-F_{E}\) as well as the full set of their basis functions \(v(t)\) and \(u_{\pm}(\tau)\) can be represented respectively as the boundary values at the real and imaginary axes of the complex \(z\)-plane of the complex operator \(F_{\mathbb{C}}\) and the solution \(V(z)\) of its homogeneous wave equation,
\[F_{\mathbb{C}}V(z)\equiv\biggl{[}-\frac{d}{dz}A(z)\frac{d}{dz}- \frac{d}{dz}B(z)+B^{T}(z)\frac{d}{dz}\\ +C(z)\biggr{]}V(z)=0,\quad z=t-i\tau, \tag{2.51}\] \[\bigl{(}iW_{\mathbb{C}}-\omega\bigr{)}V(z)\bigr{|}_{z=0}=0,\,\,W_ {\mathbb{C}}\equiv A(z)\frac{d}{dz}+B(z). \tag{2.52}\]
The function \(V(z)\) gives rise to basis functions as
\[v(t)=V(z)\bigr{|}_{z=t},\quad u_{\pm}(\tau)=V(z)\bigr{|}_{z=\mp i\tau}, \tag{2.53}\]
and thus can be used in (2.38) for the construction of all Green's functions of the Schwinger-Keldysh in-in formalism. Conversely, \(V(z)\) can be obtained by analytic continuation of the single Euclidean basis function \(u_{+}(\tau)\) from the imaginary axis \(z=-i\tau\),
\[V(z)=u_{+}(iz)=u_{+}(\tau+it), \tag{2.54}\]
and in view of reality of \(u_{+}(\tau)\) for real \(\tau\) it has the property \([V(z)]^{*}=V(-z^{*})\).
An important corollary of these analyticity properties is that, in view of the monodromy relations (2.47) for the Euclidean basis functions, the Lorentzian basis functions become quasi-periodic in imaginary time
\[v(t-i\beta)\!=\!v(t)\frac{\nu+I}{\nu},\,\,\,v^{*}(t-i\beta)\!=\!v^{*}(t)\frac{ \nu}{\nu+I}. \tag{2.55}\]
Due to inverse matrix factors of positive and negative basis functions here the Lorentzian Wightman functions \(G_{>}(t,t^{\prime})\) (given by the expression (2.25)) and \(G_{<}(t,t^{\prime})=G_{>}^{T}(t^{\prime},t)\) satisfy the relation
\[G_{>}(t-i\beta,t^{\prime})=G_{<}(t,t^{\prime}), \tag{2.56}\]
which is nothing but Kubo-Martin-Schwinger condition [42; 43]. It is important that this condition is satisfied in the generic non-equilibrium system with the special Euclidean density matrix (2.26) even despite the fact that no notion of conserved energy can be formulated in such a physical setup.
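For the same single constant-frequency mode this periodicity can be verified directly. The sketch below (hypothetical parameter values) builds the Wightman function (2.25) from \(v(z)=e^{-i\omega z}/\sqrt{2\omega}\) and its analytically continued negative frequency partner and checks (2.56) numerically:

```python
import numpy as np

# Single-mode check of the KMS condition (2.56), assuming constant
# frequency w and thermal occupation nu = 1/(e^{beta*w} - 1).
w, beta = 1.3, 0.7
nu = 1.0 / np.expm1(beta * w)

def v(z):      # positive-frequency basis function, analytic in z = t - i*tau
    return np.exp(-1j * w * z) / np.sqrt(2 * w)

def vbar(z):   # negative-frequency partner v^*(t), continued analytically
    return np.exp(1j * w * z) / np.sqrt(2 * w)

def iG_gt(z, zp):   # i G_>(z,z') = v(z)(nu+1)v^*(z') + v^*(z) nu v(z')
    return v(z) * (nu + 1) * vbar(zp) + vbar(z) * nu * v(zp)

def iG_lt(z, zp):   # G_<(t,t') = G_>^T(t',t); a scalar for a single mode
    return iG_gt(zp, z)

t, tp = 0.42, -1.1
print(np.allclose(iG_gt(t - 1j * beta, tp), iG_lt(t, tp)))   # True: KMS holds
```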
The tubular Riemann surface of complex time \(z=t-i\tau\) whose main sheet is compactified in \(\tau\) to the circle of circumference \(\beta\) is shown on Fig. 5. The boundaries of the main sheet of this surface form two shores of the cut depicted by dashed line, along which two branches of Lorentzian evolution are running. This rich analytic structure of Euclidean-Lorentzian evolution suggests that the equivalence of the Euclidean and Lorentzian formalisms proven beyond tree level for interacting QFT on top of the de Sitter spacetime [31; 32] might be extended to a generic reflection-symmetry background underlying our definition of the Euclidean density matrix.
Figure 5: Euclidean-Lorentzian contour \(\mathbb{C}\) on the Riemann surface of complex time \(z=t-i\tau\). Wightman functions are periodic in imaginary (Euclidean) time direction with a period \(\beta\), whereas the basis function \(v(z)\) suffers a jump at the cut denoted by the horizontal dashed line, the two Lorentzian time branches running along the shores of this cut.
## III Preliminaries
To derive the aforementioned results we dwell here in more detail on the notations introduced above and develop the canonical formalism and quantization of the underlying theory. In particular, we pose rather generic initial value and boundary value problems for the equations of motion and discuss the properties of the related Green's functions.
### Condensed notations
The elements of the field space will be denoted as \(\phi^{I}(t)\), where the index \(I\) is, in fact, a multi-index containing both the dependence on spatial coordinates denoted by \(\mathbf{x}\) and discrete spin-tensor labels \(i\), \(I=(\mathbf{x},i)\). Thus, we can equivalently write the fields in the form emphasizing their dependence on the spatial coordinates, \(\phi^{I}(t)=\phi^{i}(t,\mathbf{x})\).
Assuming that equations of motion are of the second order in time derivatives one has the most general quadratic action of the theory of the form (1) where we explicitly specify the initial and final moments of time range \(t_{\pm}\),
\[S[\phi]=\frac{1}{2}\int_{t_{-}}^{t_{+}}dt\,\Big{(}\dot{\phi}^{T}A \dot{\phi}+\dot{\phi}^{T}B\phi+\phi^{T}B^{T}\dot{\phi}+\phi^{T}C\phi\Big{)}. \tag{15}\]
Here dots denote the derivatives with respect to time \(t\), and \(A\), \(B\) and \(C\) are the time-dependent real bilinear forms in the space of fields. Moreover, \(A\) and \(C\) are assumed to be symmetric. The explicit action of these bilinear forms on the fields, e.g. for \(A\) reads
\[(A\phi)_{I}(t)=A_{IJ}(t)\phi^{J}(t)\equiv\sum_{j}\int d\mathbf{x}^{\prime}\,A _{ij}(t,\mathbf{x},\mathbf{x}^{\prime})\,\phi^{j}(t,\mathbf{x}^{\prime}), \tag{16}\]
where \(A_{ij}(t,\mathbf{x},\mathbf{x}^{\prime})\) is the kernel of the operator. Thus, the first term in (15) has the following explicit structure
\[\dot{\phi}^{T}A\dot{\phi} =\dot{\phi}^{I}(A\dot{\phi})_{I}\] \[=\sum_{ij}\int d\mathbf{x}\,d\mathbf{x}^{\prime}\,\dot{\phi}^{i} (t,\mathbf{x})A_{ij}(t,\mathbf{x},\mathbf{x}^{\prime})\dot{\phi}^{j}(t, \mathbf{x}^{\prime}). \tag{17}\]
The superscript \(T\) applied to the bilinear form denotes the functional matrix transposition operation which implies the transposition of discrete and spatial labels of the corresponding kernel, but does not touch the time variable
\[(B^{T})_{ij}(t,\mathbf{x},\mathbf{x}^{\prime})=B_{ji}(t,\mathbf{x}^{\prime}, \mathbf{x}). \tag{18}\]
Consequently, the second and the third terms in (15) are the same. However, we will keep them separate for symmetry reasons.
In local non-gauge theories the kernels of the above coefficients are represented by delta functions of spatial coordinates and their finite order derivatives. For local gauge theories treated within reduction to the physical sector in certain gauges these coefficients can become nonlocal in space, but locality in time derivatives within canonical quantization should be strictly observed.
The equations of motion, obtained by varying the action (15) with respect to \(\phi\) have the form
\[F\phi(t)=0,\quad F\equiv-\frac{d}{dt}A\frac{d}{dt}-\frac{d}{dt}B+B^{T}\frac{d} {dt}+C, \tag{19}\]
where the wave operator \(F\), or the Hessian of the action (15), has already been defined above by Eq. (14). Another form of this operator, obtained by integration by parts and involving both left and right time derivatives, the direction of their action being indicated by arrows
\[\overset{\leftrightarrow}{F}\equiv\frac{\overset{\leftarrow}{d}}{dt}A\frac{ \overset{\rightarrow}{d}}{dt}+\frac{\overset{\leftarrow}{d}}{dt}B+B^{T}\frac{ \overset{\rightarrow}{d}}{dt}+C, \tag{20}\]
allows one to rewrite the quadratic action (15) in even more condensed form
\[S[\phi] =\frac{1}{2}\int_{t_{-}}^{t_{+}}dt\,\phi^{T}\overset{ \leftrightarrow}{F}\phi\] \[=\frac{1}{2}\int_{t_{-}}^{t_{+}}dt\,\phi^{T}(F\phi)+\frac{1}{2} \phi^{T}(W\phi)\Big{|}_{t_{-}}^{t_{+}}. \tag{21}\]
Here the Wronskian operator \(W\) is defined by (16) and the origin of the boundary term at \(t_{\pm}\) is the result of integration by parts, which is also associated with the Wronskian relation (17).
### Canonical formalism
The Hamiltonian formalism of the theory with the action (15), which is the first step to the canonical quantization begins with the determination of the momentum \(\pi\) canonically conjugated to the field \(\phi\)
\[\pi=\frac{\partial L}{\partial\dot{\phi}}=A\dot{\phi}+B\phi=W\phi,\quad W=A \frac{d}{dt}+B, \tag{22}\]
where \(L\) is the Lagrangian of the action (15). The corresponding Hamiltonian has the form
\[H=\pi^{T}\dot{\phi}-L=\frac{1}{2}(\pi-B\phi)^{T}A^{-1}(\pi-B\phi)-\frac{1}{2} \phi^{T}C\phi. \tag{23}\]
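For example, for a single oscillator mode with \(A=m\), \(B=0\), \(C=-m\omega^{2}\) (an illustrative special case) this reduces to the familiar

\[H=\frac{\pi^{2}}{2m}+\frac{m\omega^{2}\phi^{2}}{2}.\]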
Together with the Poisson bracket \(\{\phi^{I},\pi_{J}\}=\delta^{I}_{J}\) it defines the dynamics of the system. The Hamiltonian equations of motion read
\[\dot{\phi} =\{\phi,H\}=A^{-1}(\pi-B\phi) \tag{24a}\] \[\dot{\pi} =\{\pi,H\}=B^{T}A^{-1}(\pi-B\phi)+C\phi. \tag{24b}\]
Transition to the Lagrangian formalism by expressing \(\pi\) in terms of \(\phi\) and \(\dot{\phi}\) obviously leads to equations of motion (19) following from the variation of the action (15).
Let us denote the basis of the independent solutions to (3.5) as \(v_{\pm A}^{\,I}(t)\), where the multi-index \(A\) enumerates the number of the particular solution and has the same range as the index \(I\). The general solution in terms of basis functions reads
\[\phi^{I}(t)=v_{+}{}^{\,I}_{A}(t)\,\alpha^{+A}+v_{-}{}^{\,I}_{A}(t)\,\alpha^{-A} \tag{3.11}\]
and can be rewritten in shortened notations as
\[\phi(t)=v_{+}(t)\,\alpha^{+}+v_{-}(t)\,\alpha^{-}. \tag{3.12}\]
Here \(\alpha^{\pm A}\) constitute a set of constants, specifying particular initial conditions. Using (3.8), we find the corresponding solution for the momentum
\[\pi(t)=Wv_{+}(t)\,\alpha^{+}+Wv_{-}(t)\,\alpha^{-}, \tag{3.13}\]
so, the evolution of phase space variables can be rewritten in the joint form as
\[\begin{bmatrix}\phi(t)\\ \pi(t)\end{bmatrix}=\mathcal{M}(t)\begin{bmatrix}\alpha^{+}\\ \alpha^{-}\end{bmatrix},\;\mathcal{M}(t)=\begin{bmatrix}v_{+}(t)&v_{-}(t)\\ Wv_{+}(t)&Wv_{-}(t)\end{bmatrix}. \tag{3.14}\]
Now, we can equip the space of initial conditions, consisting of \(\alpha^{\pm}\), with the Poisson bracket structure inherited from the Poisson brackets of \(\phi\) and \(\pi\). Substituting (3.14) into the left hand side of
\[\left\{\begin{pmatrix}\phi^{I}\\ \pi_{J}\end{pmatrix},\begin{pmatrix}\phi^{I^{\prime}}&\pi_{J^{\prime}}\end{pmatrix} \right\}=\left[\begin{array}{cc}0&\delta^{I}_{J^{\prime}}\\ -\delta^{I^{\prime}}_{J}&0\end{array}\right], \tag{3.15}\]
we have in condensed notations
\[\mathcal{M}(t)\left[\begin{array}{cc}\{\alpha^{+},\alpha^{+}\}&\{\alpha^{+}, \alpha^{-}\}\\ \{\alpha^{-},\alpha^{+}\}&\{\alpha^{-},\alpha^{-}\}\end{array}\right] \mathcal{M}^{T}(t)=\left[\begin{array}{cc}0&I\\ -I&0\end{array}\right], \tag{3.16}\]
where \(I\) denotes the identity matrix. The identity above fixes the pairwise Poisson brackets of \(\alpha^{\pm}\). Let us denote the right hand side of this equality, playing the role of the Poisson bivector in the Darboux coordinates, as
\[\mathcal{P}\equiv\begin{bmatrix}0&I\\ -I&0\end{bmatrix}. \tag{3.17}\]
Introducing also the matrix \(\mathcal{D}\) as inverse to the matrix of the pairwise Poisson brackets
\[\mathcal{D}=\left[\begin{array}{cc}\Delta_{++}&\Delta_{+-}\\ \Delta_{-+}&\Delta_{--}\end{array}\right]\equiv-\left[\begin{array}{cc}\{ \alpha^{+},\alpha^{+}\}&\{\alpha^{+},\alpha^{-}\}\\ \{\alpha^{-},\alpha^{+}\}&\{\alpha^{-},\alpha^{-}\}\end{array}\right]^{-1} \tag{3.18}\]
where the matrices \(\Delta\) denote the corresponding block-elements of \(\mathcal{D}\), we can invert the equality (3.16) as
\[\mathcal{M}^{T}(t)\,\mathcal{P}\,\mathcal{M}(t)=\mathcal{D}. \tag{3.19}\]
Thus, one can express the inverse of \(\mathcal{M}(t)\) in terms of its transpose, namely
\[\mathcal{M}^{-1}(t)=\mathcal{D}^{-1}\,\mathcal{M}^{T}(t)\,\mathcal{P}. \tag{3.20}\]
Before proceeding further, let us show explicitly that the right hand side of (3.19) is indeed independent of time \(t\). To demonstrate this, we contract the l.h.s. of the equation (3.5) with the field \(\phi=\phi_{1}\) by another field \(\phi_{2}\), and subtract the same quantity but with \(F\) acting on \(\phi_{2}\) (here \(\phi_{1,2}\) do not necessarily solve the e.o.m.). The result can be written as
\[\phi_{2}^{T}F\phi_{1}-(F\phi_{2})^{T}\phi_{1}=-\frac{d}{dt}\left[\phi_{2}^{T}W \phi_{1}-(W\phi_{2})^{T}\phi_{1}\right]. \tag{3.21}\]
Thus, for \(\phi_{1,2}\) -- solutions of (3.5) -- the l.h.s. vanishes, so we have
\[\phi_{2}^{T}W\phi_{1}-(W\phi_{2})^{T}\phi_{1}=\text{const}. \tag{3.22}\]
It is easy to see that each element of (3.18) has the form (3.22), where the role of the solutions \(\phi_{1}\), \(\phi_{2}\) is played by the basis functions \(v_{+}\), \(v_{-}\). Applying the matrix transposition to both sides of (3.16), we find that the matrix \(\mathcal{D}\) is skew-symmetric, since \(\mathcal{P}^{T}=-\mathcal{P}\). In terms of the block elements of \(\mathcal{D}\) this means that
\[\Delta_{+-}^{T}=-\Delta_{-+},\;\Delta_{++}^{T}=-\Delta_{++},\;\Delta_{--}^{T}=- \Delta_{--}. \tag{3.23}\]
Moreover, using the fact that the coefficient matrices \(A\), \(B\), and \(C\) in (3.1) are real, we conclude that the basis functions \(v_{+}\), \(v_{-}\) can also be chosen to be real. Thus, the matrix \(\mathcal{D}\) is real skew-symmetric, so there is a time-independent linear transformation \(\mathcal{S}\) bringing it to the canonical form, i.e. \(\mathcal{S}^{T}\mathcal{D}\mathcal{S}=\mathcal{P}\). Without loss of generality one can set \(\mathcal{D}=\mathcal{P}\) by default5. However, for the reasons which will become clear soon (see equation (4.21) below), we will assume that \(\mathcal{D}\) has the following more general form
Footnote 5: This choice allows one to give an additional interpretation of the equation (3.19), which becomes \(\mathcal{M}^{T}(t)\mathcal{P}\mathcal{M}(t)=\mathcal{P}\). Namely, the matrix \(\mathcal{M}(t)\) performs a time-dependent symplectomorphism of the Poisson bivector \(\mathcal{P}\).
\[\mathcal{D}=\left[\begin{array}{cc}0&\Delta_{+-}\\ \Delta_{-+}&0\end{array}\right], \tag{3.24}\]
where
\[\Delta_{+-}=-\Delta_{-+}^{T}=v_{+}^{T}Wv_{-}-(Wv_{+})^{T}v_{-}. \tag{3.25}\]
In terms of the basis functions, the vanishing of the diagonal blocks of \(\mathcal{D}\) implies that \(v_{+}\), \(v_{-}\) are chosen such that
\[\Delta_{++} =v_{+}^{T}Wv_{+}-\left(Wv_{+}\right)^{T}v_{+}=0, \tag{3.26}\] \[\Delta_{--} =v_{-}^{T}Wv_{-}-\left(Wv_{-}\right)^{T}v_{-}=0.\]
This can always be done by an appropriate transformation of the basis functions, possibly mixing \(v_{+}\) and \(v_{-}\). Consequently, the pairwise Poisson brackets of \(\alpha^{+}\) and \(\alpha^{-}\) take the form
\[\{\alpha^{+},\alpha^{-}\} =-\{\alpha^{-},\alpha^{+}\}^{T}=-\Delta_{-+}^{-1}, \tag{3.27a}\] \[\{\alpha^{+},\alpha^{+}\} =\{\alpha^{-},\alpha^{-}\}=0. \tag{3.27b}\]
As noted above, one can go further, and set \(\Delta_{+-}=-\Delta_{-+}^{T}=I\).
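A quick way to see these statements at work is the single-oscillator numerical sketch below (a hypothetical example with \(A=m\), \(B=0\), \(C=-m\omega^{2}\)): for the real basis functions \(v_{+}=\cos\omega t\) and \(v_{-}=\sin\omega t/m\omega\) the matrix \(\mathcal{D}=\mathcal{M}^{T}(t)\,\mathcal{P}\,\mathcal{M}(t)\) is time independent and already canonical, \(\mathcal{D}=\mathcal{P}\):

```python
import numpy as np

# Single-oscillator check of Eq. (3.19): F = -m d^2/dt^2 - m w^2,
# v_+ = cos(w t), v_- = sin(w t)/(m w), W = m d/dt. Hypothetical m, w.
m, w = 1.5, 2.0
P = np.array([[0.0, 1.0], [-1.0, 0.0]])

def M(t):   # M(t) = [[v_+, v_-], [W v_+, W v_-]], Eq. (3.14)
    return np.array([[np.cos(w * t),          np.sin(w * t) / (m * w)],
                     [-m * w * np.sin(w * t), np.cos(w * t)          ]])

for t in (0.0, 0.3, 1.7):
    D = M(t).T @ P @ M(t)
    print(t, np.allclose(D, P))   # True for all t: conserved Wronskian
```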
Now, let us modify the Hamiltonian by introducing time-dependent sources \(J_{\phi}\), \(J_{\pi}\) for the field and its conjugate momentum
\[H\;\mapsto\;H+J_{\phi}^{T}\phi+J_{\pi}^{T}\pi. \tag{3.28}\]
The modified equations of motion can be written as
\[\frac{d}{dt}\begin{bmatrix}\phi_{J}(t)\\ \pi_{J}(t)\end{bmatrix}=\mathcal{A}(t)\begin{bmatrix}\phi_{J}(t)\\ \pi_{J}(t)\end{bmatrix}+\mathcal{P}\begin{bmatrix}J_{\phi}(t)\\ J_{\pi}(t)\end{bmatrix}, \tag{3.29}\] \[\mathcal{A}(t)\equiv\begin{bmatrix}-A^{-1}B&A^{-1}\\ -B^{T}A^{-1}B+C&B^{T}A^{-1}\end{bmatrix},\]
where the subscript \(J\) of \(\phi\), \(\pi\) emphasizes the presence of the sources in equations of motion. We will find a solution to modified equations of motion using the constant variation method. Namely, we start with the solution (3.14) to equations of motion with vanishing sources, but make the integration constants \(\alpha^{+}\), \(\alpha^{-}\) in its definition time-dependent
\[\begin{bmatrix}\phi_{J}(t)\\ \pi_{J}(t)\end{bmatrix}=\mathcal{M}(t)\begin{bmatrix}\alpha^{+}(t)\\ \alpha^{-}(t)\end{bmatrix}, \tag{3.30}\]
Then, we substitute the result into the modified e.o.m. and obtain
\[\mathcal{M}(t)\frac{d}{dt}\begin{bmatrix}\alpha^{+}(t)\\ \alpha^{-}(t)\end{bmatrix}=\mathcal{P}\begin{bmatrix}J_{\phi}(t)\\ J_{\pi}(t)\end{bmatrix}, \tag{3.31}\]
where we exploit the fact that \(\mathcal{M}(t)\) satisfies the system (3.10). Using the equality (3.20) for the inverse of the matrix \(\mathcal{M}(t)\) and integrating the equations for \(\alpha^{+}(t)\) and \(\alpha^{-}(t)\), we obtain
\[\begin{bmatrix}\alpha^{+}(t)\\ \alpha^{-}(t)\end{bmatrix}=\begin{bmatrix}\alpha_{0}^{+}\\ \alpha_{0}^{-}\end{bmatrix}-\int_{t_{-}}^{t}dt^{\prime}\;\mathcal{D}^{-1} \mathcal{M}^{T}(t^{\prime})\begin{bmatrix}J_{\phi}(t^{\prime})\\ J_{\pi}(t^{\prime})\end{bmatrix}, \tag{3.32}\]
where \(\alpha_{0}^{+}\) and \(\alpha_{0}^{-}\) are integration constants. Substitution back to (3.30) gives the solution to the equations (3.29)
\[\begin{bmatrix}\phi_{J}(t)\\ \pi_{J}(t)\end{bmatrix}=\begin{bmatrix}\phi_{0}(t)\\ \pi_{0}(t)\end{bmatrix}-\int_{t_{-}}^{t}dt^{\prime}\;\mathcal{M}(t)\,\mathcal{ D}^{-1}\mathcal{M}^{T}(t^{\prime})\begin{bmatrix}J_{\phi}(t^{\prime})\\ J_{\pi}(t^{\prime})\end{bmatrix}, \tag{3.33}\]
where the initial conditions \(\phi_{0}(t)\), \(\pi_{0}(t)\) are related to constants of integration by
\[\begin{bmatrix}\phi_{0}(t)\\ \pi_{0}(t)\end{bmatrix}=\mathcal{M}(t)\begin{bmatrix}\alpha_{0}^{+}\\ \alpha_{0}^{-}\end{bmatrix}, \tag{3.34}\]
and represent the solution to the homogeneous equation, i.e. for vanishing sources \(J_{\phi}\) and \(J_{\pi}\).
Now, let us focus on the case of vanishing momentum source and also redefine the field source for convenience
\[J_{\pi}(t)=0,\qquad J(t)\equiv-J_{\phi}(t). \tag{3.35}\]
The corresponding e.o.m. in the Lagrangian form reads
\[F\phi_{J}(t)+J(t)=0. \tag{3.36}\]
From (3.33), one obtains the explicit form of the solution for \(\phi(t)\), which is
\[\phi_{J}(t)=\phi_{0}(t)-\int_{t_{-}}^{t_{+}}dt^{\prime}\,G_{R}(t,t^{\prime})J(t ^{\prime}), \tag{3.37}\]
where \(G_{R}(t,t^{\prime})\) is called the retarded Green's function and expressed through the top-left block of the matrix \(\mathcal{M}(t)\,\mathcal{D}^{-1}\mathcal{M}^{T}(t^{\prime})\), specifically
\[G_{R}(t,t^{\prime})\!=\!-\!\left(v_{+}(t)\Delta_{-+}^{-1}v_{-}^{T}(t^{\prime}) \!+\!v_{-}(t)\Delta_{+-}^{-1}v_{+}^{T}(t^{\prime})\right)\theta(t\!-\!t^{ \prime}). \tag{3.38}\]
The fact that \(\Delta_{++}=\Delta_{--}=0\) is crucial in obtaining this simple expression for \(G_{R}\). From (3.37) we find that \(G_{R}\) satisfies the equation
\[FG_{R}(t,t^{\prime})=I\,\delta(t-t^{\prime}) \tag{3.39}\]
and is uniquely determined by the condition
\[G_{R}(t,t^{\prime})=0,\qquad t<t^{\prime}. \tag{3.40}\]
The latter fact follows, in particular, from the observation that any two Green's functions of the same differential operator differ by a solution of the homogeneous equation. Once some Green's function satisfying the condition (3.40) is found, a shift by a nonzero solution of the homogeneous equation would violate this condition, so (3.40) fixes \(G_{R}\) uniquely. Alternatively, \(G_{R}\) can be defined via the initial value problem
\[G_{R}(t,t^{\prime})\big{|}_{t^{\prime}=t+0}=0,\;WG_{R}(t,t^{\prime})\big{|}_{t^ {\prime}=t+0}=-I. \tag{3.41}\]
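As an illustration, for the same hypothetical single oscillator (\(A=m\), \(B=0\), \(C=-m\omega^{2}\)) with \(v_{+}=\cos\omega t/\sqrt{m\omega}\), \(v_{-}=\sin\omega t/\sqrt{m\omega}\) and \(\Delta_{+-}=-\Delta_{-+}=1\), the representation (3.38) collapses to the textbook retarded Green's function \(-\sin\omega(t-t^{\prime})\,\theta(t-t^{\prime})/m\omega\):

```python
import numpy as np

# Single-oscillator check of Eq. (3.38); hypothetical parameter values.
m, w = 1.5, 2.0

def vp(t): return np.cos(w * t) / np.sqrt(m * w)   # v_+
def vm(t): return np.sin(w * t) / np.sqrt(m * w)   # v_-

def G_R(t, tp):
    # Eq. (3.38) with Delta_{-+}^{-1} = -1 and Delta_{+-}^{-1} = +1
    return -(vp(t) * (-1.0) * vm(tp) + vm(t) * (+1.0) * vp(tp)) * (t > tp)

t, tp = 2.1, 0.4
print(np.isclose(G_R(t, tp), -np.sin(w * (t - tp)) / (m * w)))   # True
```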
The fact that solution (3.37) is expressed through the retarded Green's function means that \(\phi(t)\) is subject to the following initial (rather than boundary) value problem
\[\phi_{J}(t_{-})=\phi_{0}(t_{-}),\;W\phi_{J}(t_{-})=W\phi_{0}(t_{-})\equiv\pi_{ 0}(t_{-}). \tag{3.42}\]
### The solution of Dirichlet and Neumann boundary value problems
The Green's functions, solving the boundary problems, can be obtained from the retarded Green's function by shifting it by the solution of the homogeneous equation (3.5). In particular, one constructs the so-called symmetric Green's function as
\[G_{S}(t,t^{\prime}) =G_{R}(t,t^{\prime})+v_{-}(t)\Delta_{+-}^{-1}v_{+}^{T}(t^{\prime})\] \[=-v_{+}(t)\Delta_{-+}^{-1}v_{-}^{T}(t^{\prime})\,\theta(t-t^{ \prime})\] \[\quad+v_{-}(t)\Delta_{+-}^{-1}v_{+}^{T}(t^{\prime})\,\theta(t^{ \prime}-t). \tag{3.43}\]
It is symmetric under the simultaneous transposition and exchange of the time arguments, i.e. \(G_{S}^{T}(t,t^{\prime})=G_{S}(t^{\prime},t)\). Unlike the retarded Green's function it is defined non-uniquely, and concrete boundary conditions should be specified. These are in one-to-one correspondence with the boundary conditions satisfied by the basis functions \(v_{+}\) and \(v_{-}\) at the upper and lower time limits \(t=t_{+}\) and \(t=t_{-}\), respectively.
In particular, to solve the inhomogeneous equation (3.36) supplemented with the vanishing Dirichlet boundary conditions
\[\phi_{J}(t_{\pm})=0, \tag{3.44}\]
one can use the Dirichlet Green's function subject to the same boundary conditions
\[G_{D}(t_{\pm},t^{\prime})=0\qquad\leftrightarrow\qquad v_{\pm}(t_{\pm})=0, \tag{3.45}\]
so that the solution reads
\[\phi_{J}(t)=-\int_{t_{-}}^{t_{+}}dt^{\prime}\,G_{D}(t,t^{\prime})\,J(t^{ \prime}). \tag{3.46}\]
Similarly, in solving Neumann boundary problem
\[(iW\mp\omega_{\pm})\phi_{J}(t_{\pm})=0, \tag{3.47}\]
one defines the corresponding Neumann Green's function demanding
\[(iW\mp\omega_{\pm})G_{N}(t_{\pm},t^{\prime})=0\leftrightarrow(iW\mp\omega_{\pm })v_{\pm}(t_{\pm})=0, \tag{3.48}\]
and obtains the solution as
\[\phi_{J}(t)=-\int_{t_{-}}^{t_{+}}dt^{\prime}\,G_{N}(t,t^{\prime})\,J(t^{ \prime}). \tag{3.49}\]
Notably, the Dirichlet and Neumann Green's functions, which are subject to homogeneous boundary conditions, allow one to solve the modified boundary value problems with inhomogeneous boundary conditions. The solutions can be obtained as follows. First, we exploit the equality (3.21) and perform in it the substitutions \(\phi_{2}\mapsto\phi_{J}(t^{\prime})\), \(\phi_{1}\mapsto G(t^{\prime},t)\), where \(\phi_{J}(t^{\prime})\) solves (3.36) and \(G(t^{\prime},t)\) is some Green's function, solving \(FG(t^{\prime},t)=I\,\delta(t-t^{\prime})\). Next, integrating both sides of the equality over \(t^{\prime}\) from \(t_{-}\) to \(t_{+}\), we obtain
\[\phi_{J}(t)= \,-\int_{t_{-}}^{t_{+}}dt^{\prime}\,G(t,t^{\prime})\,J(t^{\prime})\] \[\,-(WG(t_{+},t))^{T}\phi_{J}(t_{+})+(WG(t_{-},t))^{T}\phi_{J}(t_{-})\] \[\,+G^{T}(t_{+},t)\,W\phi_{J}(t_{+})-G^{T}(t_{-},t)\,W\phi_{J}(t_{-}) \tag{3.50}\]
Now, suppose we are to solve (3.36) supplemented by inhomogeneous boundary conditions (in contrast to homogeneous ones (3.44))
\[\phi_{J}(t_{\pm})=\varphi_{\pm}, \tag{3.51}\]
for some constants \(\varphi_{+}\), \(\varphi_{-}\). Substituting these conditions to (3.50) together with Dirichlet Green's function \(G\mapsto G_{D}\), satisfying (3.45), we observe that the third line vanishes, so we get
\[\phi_{J}(t)=-\mathbf{w}^{T}(t)\left[\begin{array}{c}\varphi_{+}\\ \varphi_{-}\end{array}\right]-\int_{t_{-}}^{t_{+}}dt^{\prime}\,G_{D}(t,t^{ \prime})\,J(t^{\prime}), \tag{3.52}\]
where we introduce the notation for the two-component row as the transposition of the newly introduced column
\[\mathbf{w}^{T}(t) \equiv\Big{[}\,G_{D}(t,t_{+})\overset{\leftarrow}{W}\quad-G_{D}( t,t_{-})\overset{\leftarrow}{W}\,\Big{]}\] \[=[\mathbf{w}(t)]^{T}, \tag{3.53}\] \[\mathbf{w}(t) \equiv\begin{bmatrix}\overset{\rightarrow}{W}G_{D}(t_{+},t)\\ -\overset{\rightarrow}{W}G_{D}(t_{-},t)\end{bmatrix}, \tag{3.54}\]
and \(\overset{\leftarrow}{W}\) denotes the Wronskian operator (2.16) acting from the right on the second argument of \(G_{D}(t,t^{\prime})\) at the total boundary of the time domain at \(t_{\pm}\) (the sign taking into account the outward pointing time derivative in \(W\)) -- the notation used above in (2.31). The transposition law here, of course, takes into account the symmetry of Dirichlet Green's function,
\[[G_{D}(t,t_{+})\overset{\leftarrow}{W}]^{T} =\big{(}A(t_{+})\frac{d}{dt_{+}}+B(t_{+})\big{)}G_{D}^{T}(t,t_{+})\] \[=\overset{\rightarrow}{W}G_{D}(t_{+},t). \tag{3.55}\]
The quantity \(\mathbf{w}(t)\) introduced above has the following important property. Namely, evaluating both sides of (3.52) at the boundary points \(t=t_{\pm}\), and using (3.45) we observe that
\[\mathbf{w}^{T}(t_{+})=\Big{[}\,-I\quad 0\,\Big{]}\,,\qquad\mathbf{w}^{T}(t_{-})=\Big{[} \,0\quad-I\,\Big{]}\,. \tag{3.56}\]
Similarly, one can consider inhomogeneous Neumann boundary conditions
\[\big{(}\pm iW-\omega_{\pm}\big{)}\phi_{J}(t_{\pm})=j_{\pm}, \tag{3.57}\]
with some boundary sources \(j_{+}\) and \(j_{-}\). Substitution of this condition and Neumann Green's function \(G\mapsto G_{N}\), satisfying (3.48), to (3.50) gives the solution to (3.36) with the boundary conditions above
\[\phi_{J}(t)=-i\,\mathbf{g}_{N}^{T}(t)\left[\begin{array}{c}j_{+}\\ j_{-}\end{array}\right]-\int_{t_{-}}^{t_{+}}dt^{\prime}\,G_{N}(t,t^{\prime})J(t ^{\prime}). \tag{3.58}\]
Here \(\mathbf{g}_{N}^{T}(t)\) is the notation analogous to (3.53) -- the row built in terms of the Neumann Green's function kernels with the second argument located at the total 2-point boundary of the time domain (points \(t_{-}\) and \(t_{+}\)),
\[\mathbf{g}_{N}^{T}(t)\equiv\left[\begin{array}{cc}G_{N}(t,t_{+})&G_{N}(t,t_{-}) \end{array}\right]. \tag{3.59}\]
### The relation between Dirichlet and Neumann Green's functions
There is an important explicit connection between the Dirichlet and Neumann Green's functions, which can be derived in the following way. The idea is to consider the problem with homogeneous Neumann boundary conditions (3.47) as the Dirichlet problem with some nontrivial boundary values \(\varphi_{\pm}\). Substituting the solution (3.52) of this problem into (3.47), one obtains a linear equation on \(\varphi_{\pm}\), which can be solved as
\[\left[\begin{array}{c}\varphi_{+}\\ \varphi_{-}\end{array}\right]=(i\mathbf{\omega}+\mathbf{\Omega})^{-1}\int_{t_{-}}^{t_{+ }}dt\,\mathbf{w}(t)\,J(t), \tag{3.60}\]
where the matrices \(\mathbf{\omega}\) and \(\mathbf{\Omega}\) read
\[\mathbf{\omega}\equiv\left[\begin{array}{cc}\omega_{+}&0\\ 0&\omega_{-}\end{array}\right], \tag{3.61}\] \[\mathbf{\Omega}\equiv\left[\begin{array}{cc}-\overset{\rightarrow}{W}G_{D}(t_{+},t_{+})\overset{\leftarrow}{W}&\overset{\rightarrow}{W}G_{D}(t_{+},t_{-})\overset{\leftarrow}{W}\\ \overset{\rightarrow}{W}G_{D}(t_{-},t_{+})\overset{\leftarrow}{W}&-\overset{\rightarrow}{W}G_{D}(t_{-},t_{-})\overset{\leftarrow}{W}\end{array}\right]. \tag{3.62}\]
Substituting these \(\varphi_{\pm}\) back into (3.52) gives
\[\phi_{J}(t) =-\int_{t_{-}}^{t_{+}}dt^{\prime}\,\Big{[}G_{D}(t,t^{\prime})\] \[+\mathbf{w}^{T}(t)\,(i\mathbf{\omega}+\mathbf{\Omega})^{-1}\mathbf{w}(t^{\prime}) \Big{]}J(t^{\prime}), \tag{3.63}\]
which implies, after comparing with (3.49), the following expression for the Neumann Green's function
\[G_{N}(t,t^{\prime})=G_{D}(t,t^{\prime})+\mathbf{w}^{T}(t)\,(i\mathbf{\omega}+\mathbf{ \Omega})^{-1}\mathbf{w}(t^{\prime}). \tag{3.64}\]
Here we use the notations (3.53)-(3.54) introduced above. Substituting \(t=t_{\pm}\) to the both sides of the equality and using (3.56), we get the equality
\[\mathbf{g}_{N}^{T}(t)=-\mathbf{w}^{T}(t)\,(i\mathbf{\omega}+\mathbf{\Omega})^{-1}, \tag{3.65}\]
that allows us to express the Dirichlet Green's function from (3.64) via the Neumann one as
\[G_{D}(t,t^{\prime})=G_{N}(t,t^{\prime})-\mathbf{g}_{N}^{T}(t)\,(i\mathbf{\omega}+\mathbf{ \Omega})\,\mathbf{g}_{N}(t^{\prime}), \tag{3.66}\]
where we use the notation (3.59) for the row \(\mathbf{g}_{N}(t)=[G_{N}(t,t_{+})\,\,\,G_{N}(t,t_{-})]\) and its transpose. Using (3.56) once again, we can write down the expression for the block matrix of boundary values of the Neumann function \(\mathbf{g}_{N}\) at both ends of the time segment (double bar denoting the restriction of both arguments to \(t_{\pm}\))
\[\mathbf{G}_{N}\|=\left[\begin{array}{cc}G_{N}(t_{+},t_{+})&G_{N}(t_{+},t_{-})\\ G_{N}(t_{-},t_{+})&G_{N}(t_{-},t_{-})\end{array}\right]\!=\!(i\mathbf{\omega}+\mathbf{ \Omega})^{-1}. \tag{3.67}\]
### Canonical quantization
Before proceeding to the canonical quantization of the theory (3.1), whose Hamiltonian formalism was constructed in the previous subsection, let us make a more specific choice of basis functions, which is more convenient for the quantization purposes. We first choose the basis functions \(v_{\pm}(t)\) real, and such that the matrix \(\mathcal{D}\) defined by (3.24) has a canonical form, \(\mathcal{D}=\mathcal{P}\). Together with the reality of \(\phi(t)\) this implies also the reality of the corresponding integration constants \(\alpha^{\pm}\). Next, we combine these basis functions and integration constants into the following complex conjugated pairs
\[\left[\begin{array}{cc}v_{+}&v_{-}\end{array}\right]\mapsto\left[\begin{array} []{cc}v&v^{*}\end{array}\right]=\frac{1}{\sqrt{2}}\left[\begin{array}{cc}v_ {+}&v_{-}\end{array}\right]\left[\begin{array}{cc}I&I\\ -iI&iI\end{array}\right], \tag{3.68}\]
\[\left[\begin{array}{c}\alpha^{+}\\ \alpha^{-}\end{array}\right]\mapsto\left[\begin{array}{c}\alpha\\ \alpha^{*}\end{array}\right]=\frac{1}{\sqrt{2}}\left[\begin{array}{cc}I&iI \\ I&-iI\end{array}\right]\left[\begin{array}{c}\alpha^{+}\\ \alpha^{-}\end{array}\right]. \tag{3.69}\]
After this change of basis, the matrix \(\mathcal{D}\) becomes
\[\mathcal{D}\mapsto i\mathcal{P}=\left[\begin{array}{cc}0&iI\\ -iI&0\end{array}\right]. \tag{3.70}\]
According to (3.27a), this implies the following pairwise Poisson brackets of \(\alpha\), \(\alpha^{*}\)
\[\{\alpha,\alpha^{*}\}=-\{\alpha^{*},\alpha\}=-iI,\,\,\{\alpha,\alpha\}=\{ \alpha^{*},\alpha^{*}\}=0. \tag{3.71}\]
In terms of the new basis functions, the evolution law (3.14) of the field and the canonical momentum becomes
\[\left[\begin{array}{c}\phi(t)\\ \pi(t)\end{array}\right]=\left[\begin{array}{cc}v(t)&v^{*}(t)\\ Wv(t)&Wv^{*}(t)\end{array}\right]\left[\begin{array}{c}\alpha\\ \alpha^{*}\end{array}\right]. \tag{3.72}\]
The equation (3.20) takes the form
\[\mathcal{M}^{-1}(t)=i\,\mathcal{P}\,\mathcal{M}^{T}(t)\,\mathcal{P},\quad \mathcal{M}(t)=\left[\begin{array}{cc}v(t)&v^{*}(t)\\ Wv(t)&Wv^{*}(t)\end{array}\right] \tag{3.73}\]
and allows one to invert the equality (3.72) as
\[\left[\begin{array}{c}\alpha\\ \alpha^{*}\end{array}\right]=i\,\mathcal{P}\,\mathcal{M}^{T}(t)\,\mathcal{P} \left[\begin{array}{c}\phi(t)\\ \pi(t)\end{array}\right]. \tag{3.74}\]
Evaluating at \(t=t_{-}\) and substituting back into (3.72), one obtains the evolving phase space variables in terms of the basis functions \(v(t)\), \(v^{*}(t)\) and the initial data,
\[\left[\begin{array}{c}\phi(t)\\ \pi(t)\end{array}\right]=i\,\mathcal{M}(t)\,\mathcal{P}\,\mathcal{M}^{T}(t_{-} )\,\mathcal{P}\left[\begin{array}{c}\phi(t_{-})\\ \pi(t_{-})\end{array}\right]. \tag{3.75}\]
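As an illustration of (3.75), consider a single harmonic oscillator mode. The following minimal numerical sketch (ours, not part of the original derivation; the variable names, sample values, and use of NumPy are our own choices) checks that the evolution matrix \(i\,\mathcal{M}(t)\,\mathcal{P}\,\mathcal{M}^{T}(t_{-})\,\mathcal{P}\) is real and coincides with the familiar classical phase-space rotation.

```python
# A minimal numerical sketch (ours): for one oscillator mode with frequency w0,
# basis function v(t) = exp(-i w0 t)/sqrt(2 w0) and W = d/dt, the evolution
# matrix of Eq. (3.75) is real and equals the classical phase-space rotation.
import numpy as np

w0, t = 1.7, 0.83
v = lambda s: np.exp(-1j * w0 * s) / np.sqrt(2 * w0)
M = lambda s: np.array([[v(s), np.conj(v(s))],
                        [-1j * w0 * v(s), 1j * w0 * np.conj(v(s))]])  # M(t) of (3.73)
P = np.array([[0.0, 1.0], [-1.0, 0.0]])   # canonical matrix P from (3.70)

U = 1j * M(t) @ P @ M(0).T @ P            # evolution matrix of Eq. (3.75), t_- = 0
U_cl = np.array([[np.cos(w0 * t), np.sin(w0 * t) / w0],
                 [-w0 * np.sin(w0 * t), np.cos(w0 * t)]])

assert np.allclose(U.imag, 0) and np.allclose(U.real, U_cl)
print("Eq. (3.75) reproduces the classical evolution")
```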
Now we are ready to perform the canonical quantization of the system under consideration, whose Hamiltonian form was obtained in the previous subsection. We will quantize it in the Heisenberg picture. Thus, we map the solutions of the Hamiltonian equations to the corresponding Heisenberg operators, i.e. \(\phi(t),\pi(t)\mapsto\hat{\phi}(t),\hat{\pi}(t)\), whereas the Poisson bracket is replaced by the commutator times the factor \(i\), so that we obtain the equal-time commutation relation \([\hat{\phi}(t),\hat{\pi}(t)]=i\hat{I}\), where \(\hat{I}\) is the identity operator in the Hilbert space. The Hamiltonian equations (3.10) are then mapped to the corresponding Heisenberg equations defining the evolution of the operators
\[\frac{d}{dt}\hat{\phi}(t) =-i[\hat{\phi}(t),\hat{H}(t)]=A^{-1}(\hat{\pi}(t)-B\hat{\phi}(t)) \tag{3.76a}\] \[\frac{d}{dt}\hat{\pi}(t) =-i[\hat{\pi}(t),\hat{H}(t)]\] \[=B^{T}A^{-1}\big{(}\hat{\pi}(t)-B\hat{\phi}(t)\big{)}+C\hat{\phi} (t). \tag{3.76b}\]
Here \(\hat{H}(t)\) is the classical Hamiltonian (3.9) where the field and the momentum are replaced by the corresponding Heisenberg operators.
Linearity of the system obviously implies that the classical Hamiltonian equations and the Heisenberg equations formally coincide, and their solutions are in one-to-one correspondence. In particular, the relation (3.8) between the field \(\phi\) and its conjugate momentum \(\pi\) is literally the same at the classical and quantum levels, \(\hat{\pi}(t)=W\hat{\phi}(t)\). The formal coincidence and linearity of the Hamiltonian and Heisenberg equations allow one to obtain the solution of the latter from the classical solution (3.75)
\[\left[\begin{array}{c}\hat{\phi}(t)\\ \hat{\pi}(t)\end{array}\right]=i\,\mathcal{M}(t)\,\mathcal{P}\,\mathcal{M}^{T} (t_{-})\,\mathcal{P}\left[\begin{array}{c}\hat{\phi}(t_{-})\\ \hat{\pi}(t_{-})\end{array}\right]. \tag{3.77}\]
Similarly, our quantization procedure implies that the integration constants \(\alpha\), \(\alpha^{*}\) are in one-to-one correspondence to the creation/annihilation operators \(\hat{a}\), \(\hat{a}^{\dagger}\). According to (3.72) the operators \(\hat{\phi}(t)\), \(\hat{\pi}(t)\) are decomposed in the creation/annihilation operators as
\[\left[\begin{array}{c}\hat{\phi}(t)\\ \hat{\pi}(t)\end{array}\right]=\left[\begin{array}{cc}v(t)&v^{*}(t)\\ Wv(t)&Wv^{*}(t)\end{array}\right]\left[\begin{array}{c}\hat{a}\\ \hat{a}^{\dagger}\end{array}\right], \tag{3.78}\]
that can be inverted similar to (3.74) as
\[\left[\begin{array}{c}\hat{a}\\ \hat{a}^{\dagger}\end{array}\right]=i\,\mathcal{P}\,\mathcal{M}^{T}(t)\, \mathcal{P}\left[\begin{array}{c}\hat{\phi}(t)\\ \hat{\pi}(t)\end{array}\right]. \tag{3.79}\]
The fact that \(\hat{a}\) and \(\hat{a}^{\dagger}\) are indeed Hermitian conjugates of each other immediately follows from the Hermiticity of \(\hat{\phi}(t)\). Indeed, comparing \(\hat{\phi}(t)\) to its conjugate
\[\hat{\phi}(t) =v(t)\,\hat{a}+v^{*}(t)\,\hat{a}^{\dagger}, \tag{3.80}\] \[\hat{\phi}^{\dagger}(t) =\left(v(t)\,\hat{a}+v^{*}(t)\,\hat{a}^{\dagger}\right)^{\dagger }=v^{*}(t)\,\hat{a}^{\dagger}+v(t)\,\hat{a},\]
we find the coincidence, for which the choice (3.68) of two complex conjugated basis functions is crucial. The commutation relations of the creation/annihilation operators are inherited from the Poisson brackets (3.71), namely
\[[\hat{a}^{A},\hat{a}^{\dagger B}]=-[\hat{a}^{\dagger B},\hat{a}^{A}]=\delta^{AB}\hat{I}, \tag{3.81}\] \[[\hat{a}^{A},\hat{a}^{B}]=[\hat{a}^{\dagger A},\hat{a}^{\dagger B}]=0.\]
Though the explicit solution (3.77) of the Heisenberg equations has been obtained, we still have no closed-form expression for the evolution operator. The latter solves the Schroedinger equation and is represented by the chronologically ordered exponent,
\[i\frac{d}{dt}\hat{U}(t,t^{\prime})=\hat{H}_{S}(t)\hat{U}(t,t^{\prime}),\;\hat{ U}(t_{+},t_{-})=\mathrm{T}e^{-i\int_{t_{-}}^{t_{+}}dt\,\hat{H}_{S}(t)}, \tag{3.82}\]
where \(\hat{H}_{S}(t)\) is the Hamiltonian in the Schroedinger picture, so that its time dependence is only due to time-dependent coefficients \(A\), \(B\), and \(C\). The operators \(\hat{\phi}\), \(\hat{\pi}\) in the Schroedinger picture are identified with the Heisenberg ones, evaluated at the initial time
\[\hat{\phi}=\hat{\phi}(t_{-}),\qquad\hat{\pi}=\hat{\pi}(t_{-}). \tag{3.83}\]
In the presence of the source, \(H\mapsto H-J^{T}\phi\), the solution (3.78) to the Heisenberg equation generalizes to
\[\hat{\phi}_{J}(t)=\hat{\phi}(t)-\int_{t_{-}}^{t_{+}}dt^{\prime}\,G_{R}(t,t^{ \prime})J(t^{\prime}), \tag{3.84}\]
that can be easily derived from (3.37). Here \(\hat{\phi}(t)\) is the solution (3.78) to the sourceless Heisenberg equation. The Schroedinger equation for the evolution operator in the presence of the source
\[i\frac{d}{dt}\hat{U}_{J}(t,t^{\prime})=\big{(}\hat{H}_{S}(t)-J^{T}(t)\hat{\phi} \big{)}\,\hat{U}_{J}(t,t^{\prime}),\;\hat{U}_{J}(t,t)=\hat{I} \tag{3.85}\]
can be solved by the chronological Dyson T-exponent (cf. Eq. (2.3)),
\[\hat{U}_{J}(t_{+},t_{-})=\mathrm{T}\,e^{-i\int_{t_{-}}^{t_{+}}dt\,\big{(}\hat{H} _{S}(t)-J(t)\hat{\phi}_{S}\big{)}}. \tag{3.86}\]
Another representation for the evolution operator follows from the functional integral formalism. If one introduces the coordinate representation associated with the Schroedinger operators (3.83),
\[\hat{\phi}|\varphi\rangle=\varphi\,|\varphi\rangle,\;\hat{\pi}|\varphi\rangle=i \frac{\partial}{\partial\varphi}|\varphi\rangle,\;\hat{I}=\int d\varphi\,| \varphi\rangle\langle\varphi|, \tag{3.87}\]
then the matrix elements of \(\hat{U}_{J}\) in the coordinate representation express in terms of the following functional integral
\[\langle\varphi_{+}|\hat{U}_{J}(t_{+},t_{-})\,|\varphi_{-}\rangle=\int\mathcal{D}\phi\,\exp\Bigg\{iS[\phi]+i\int_{t_{-}}^{t_{+}}dt\,J^{T}\phi\Bigg\}\Bigg|_{\phi(t_{\pm})=\varphi_{\pm}}, \tag{3.88}\]

where the functional integration runs over field configurations with the indicated boundary values.
time), whereas the first equation allows one to fix the initial value of the basis functions as
\[v(t_{-})=\frac{1}{\sqrt{2\omega_{\rm re}}},\quad\tilde{v}(t_{-})=\frac{1}{\sqrt{2 \tilde{\omega}_{\rm re}}}, \tag{3.102}\]
where \(\omega_{\rm re}\) and \(\tilde{\omega}_{\rm re}\) are the real parts of \(\omega\) and \(\tilde{\omega}\), respectively. Using (3.99) with the inner product defined in (3.97) one finds the following expressions for Bogoliubov coefficients relating two sets of Neumann basis functions with different frequency matrices,
\[U=\frac{1}{\sqrt{2\tilde{\omega}_{\rm re}}}(\omega+\tilde{\omega}^{\dagger})\frac{1}{\sqrt{2\omega_{\rm re}}}, \tag{3.103a}\] \[V=\frac{1}{\sqrt{2\tilde{\omega}_{\rm re}}}(\tilde{\omega}^{\dagger}-\omega^{\dagger})\frac{1}{\sqrt{2\omega_{\rm re}}}. \tag{3.103b}\]
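As a consistency check, the coefficients (3.103) satisfy the usual Bogoliubov identities. The small numerical sketch below is ours (not from the paper); for simplicity it takes both frequency matrices real and symmetric, so that daggers reduce to transposes and \(\omega_{\rm re}=\omega\), and it relies on NumPy/SciPy.

```python
# A minimal numerical sketch (ours): for random real symmetric positive-definite
# frequency matrices omega and omega_t, the Bogoliubov coefficients (3.103)
# satisfy U U^T - V V^T = I and U V^T = (U V^T)^T (real case, dagger = transpose).
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)
n = 4
def rand_spd():
    a = rng.standard_normal((n, n))
    return a @ a.T + n * np.eye(n)        # symmetric positive-definite

omega, omega_t = rand_spd(), rand_spd()
s = np.linalg.inv(np.real(sqrtm(2 * omega)))      # 1/sqrt(2 omega_re)
st = np.linalg.inv(np.real(sqrtm(2 * omega_t)))   # 1/sqrt(2 omega~_re)

U = st @ (omega + omega_t) @ s                    # Eq. (3.103a)
V = st @ (omega_t - omega) @ s                    # Eq. (3.103b)

assert np.allclose(U @ U.T - V @ V.T, np.eye(n))
assert np.allclose(U @ V.T, (U @ V.T).T)
print("Bogoliubov identities hold")
```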
### Fock space and the coherent states
Once the basis functions \(v(t)\), \(v^{*}(t)\) are chosen, we can define the Fock space, associated to the corresponding creation/annihilation operators. Namely, introducing the vacuum state \(|0\rangle\) as
\[\hat{a}|0\rangle=0, \tag{3.104}\]
one defines the Fock space as a linear space spanned by
\[|A_{1},A_{2},\ldots A_{n}\rangle\equiv\hat{a}^{\dagger A_{1}}\hat{a}^{\dagger A _{2}}\ldots\hat{a}^{\dagger A_{n}}|0\rangle. \tag{3.105}\]
Next, let us obtain the coordinate representation of the Fock states. For this purpose, we rewrite (3.79) explicitly for \(t=t_{-}\) as
\[\left[\begin{array}{c}\hat{a}\\ \hat{a}^{\dagger}\end{array}\right]=\frac{1}{\sqrt{2\omega_{\rm re}}}\left[ \begin{array}{cc}\omega^{*}&iI\\ \omega&-iI\end{array}\right]\left[\begin{array}{c}\hat{\phi}\\ \hat{\pi}\end{array}\right], \tag{3.106}\]
where \(\omega\) is given in terms of the positive frequency basis function
\[\omega=\left(iWv\right)v^{-1}\big{|}_{t=t_{-}}, \tag{3.107}\]
and rewrite the definition (3.104) of the vacuum state in the coordinate representation (3.87) as
\[\frac{1}{\sqrt{2\omega_{\rm re}}}\left(\frac{\partial}{\partial\varphi}+ \omega^{*}\varphi\right)\langle\varphi|0\rangle=0. \tag{3.108}\]
Therefore, up to a \(\pi\)-dependent normalization factor, the wavefunction of the vacuum reads
\[\langle\varphi|0\rangle=\left(\det\omega_{\rm re}\right)^{\frac{1}{4}}\,\exp \biggl{\{}-\frac{1}{2}\varphi^{T}\omega^{*}\varphi\biggr{\}}. \tag{3.109}\]
Coordinate representation of the excited states can be found by using the definition (3.105) and the expression for \(\hat{a}^{\dagger}\) in the coordinate representation.
Similarly, one can define the coherent states \(|\alpha\rangle\) as eigenstates of the annihilation operator
\[\hat{a}|\alpha\rangle=\alpha|\alpha\rangle. \tag{3.110}\]
Projecting the definition on the coordinate representation basis vector \(|\varphi\rangle\), one obtains the equation
\[\left(\frac{\partial}{\partial\varphi}+\omega^{*}\varphi\right)\langle\varphi|\alpha\rangle=\sqrt{2\omega_{\rm re}}\,\alpha\,\langle\varphi|\alpha\rangle, \tag{3.111}\]
whose integration gives the (unnormalized) solution
\[\langle\varphi|\alpha\rangle=\exp\biggl{\{}-\frac{1}{2}\varphi^{T}\omega^{*} \varphi+\alpha^{T}\sqrt{2\omega_{\rm re}}\,\varphi-\frac{1}{2}\alpha^{T}\alpha \biggr{\}}. \tag{3.112}\]
For this normalization we have the following expression for the Fock states in terms of the coherent states
\[|A_{1},A_{2},\ldots A_{n}\rangle=\left.\frac{\partial^{n}}{\partial\alpha^{A_ {1}}\,\partial\alpha^{A_{2}}\ldots\partial\alpha^{A_{n}}}|\alpha\rangle\right| _{\alpha=0}. \tag{3.113}\]
Coherent states allow one to write a partition of unity as
\[\hat{I}=\int d\alpha^{*}\,d\alpha\,e^{-\alpha^{\dagger}\alpha}|\alpha\rangle \langle\alpha|. \tag{3.114}\]
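For one degree of freedom with a real frequency \(\omega\), the coherent-state equation (3.111) and its solution (3.112) can be verified symbolically. The following sketch is ours (symbol names are our own; it assumes SymPy):

```python
# A symbolic check (ours): for one degree of freedom with real omega the
# wavefunction (3.112) obeys the coherent-state equation (3.111).
import sympy as sp

phi, alpha = sp.symbols('varphi alpha')
w = sp.symbols('omega', positive=True)            # real omega: omega* = omega_re = omega

psi = sp.exp(-w * phi**2 / 2 + alpha * sp.sqrt(2 * w) * phi - alpha**2 / 2)
lhs = sp.diff(psi, phi) + w * phi * psi           # (d/dvarphi + omega* varphi) <varphi|alpha>
rhs = sp.sqrt(2 * w) * alpha * psi                # sqrt(2 omega_re) alpha <varphi|alpha>

assert sp.simplify(lhs - rhs) == 0
print("Eq. (3.111) is satisfied by (3.112)")
```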
## IV Generating functional in the path integral formalism
We begin our derivation of the in-in generating functional of Green's functions for the theory defined in the previous section with the physical motivation and the definition of an arbitrary Gaussian initial state. After that, we derive the corresponding two-component Green's functions. As we will observe, there is an ambiguity in the definition of these Green's functions, parameterized by a matrix defining initial conditions for the modes employed in the mode expansion of the field operators. There is no a priori preferred choice fixing this ambiguity. However, motivated by the simple harmonic oscillator in a thermal state, we make a choice of the modes such that the resulting Green's function has a form and properties very close to those of the Green's functions for an equilibrium system in a thermal state. Further, we introduce the notion of a quasi-thermal state, a very particular case of the Gaussian state, in which the properties of the Green's functions become even closer to the thermal ones, in particular satisfying the Kubo-Martin-Schwinger (KMS) condition.
### Gaussian states
Our goal is to find the explicit and useful form of the generating functional
\[Z[J_{1},J_{2}]={\rm tr}\left[\hat{U}_{J_{1}}(T,0)\,\hat{\rho}\,\,\hat{U}_{-J_{2 }}^{\dagger}(T,0)\right] \tag{4.1}\]
of in-in correlation functions
\[\mathrm{tr}\left[\hat{\rho}\,\widetilde{\mathrm{T}}\big{(}\hat{\phi}(t^{\prime}_{1})\ldots\hat{\phi}(t^{\prime}_{m})\big{)}\,\mathrm{T}\big{(}\hat{\phi}(t_{1})\ldots\hat{\phi}(t_{n})\big{)}\right]\] \[=\frac{i^{m-n}}{Z}\,\frac{\delta^{n+m}Z[J_{1},J_{2}]}{\delta J_{1}(t_{1})\ldots\delta J_{1}(t_{n})\,\delta J_{2}(t^{\prime}_{1})\ldots\delta J_{2}(t^{\prime}_{m})}\bigg{|}_{J_{1}=J_{2}=0}, \tag{4.2}\]
where \(\hat{U}_{J}\) are the evolution operators subject to equation (3.85) with different sources \(J_{1}\) and \(-J_{2}\), whereas \(\mathrm{T}\) and \(\mathrm{\widetilde{T}}\) denote chronological and anti-chronological ordering, respectively. The relation between (4.1) and the correlation functions (4.2) obviously follows from (3.86). The basic elements are the two-point correlation functions, namely
\[iG_{\mathrm{T}}(t,t^{\prime})\equiv\mathrm{tr}\left[\hat{\rho}\,\mathrm{T}\big{(}\hat{\phi}(t)\hat{\phi}(t^{\prime})\big{)}\right], \tag{4.3a}\] \[iG_{\widetilde{\mathrm{T}}}(t,t^{\prime})\equiv\mathrm{tr}\left[\hat{\rho}\,\widetilde{\mathrm{T}}\big{(}\hat{\phi}(t)\hat{\phi}(t^{\prime})\big{)}\right], \tag{4.3b}\] \[iG_{<}(t,t^{\prime})\equiv\mathrm{tr}\left[\hat{\rho}\,\hat{\phi}(t)\hat{\phi}(t^{\prime})\right], \tag{4.3c}\]
where \(G_{\mathrm{T}}\), \(G_{\mathrm{\widetilde{T}}}\), and \(G_{<}\) are Feynman, anti-Feynman and Wightman Green's functions, respectively.
The density matrix \(\hat{\rho}\) is assumed to be a Hermitian positive-definite operator of unit trace. Inserting the partition of unity in the coordinate representation into the definition (4.1) of the generating functional three times, and using the path integral representation (3.88) of the evolution operator, one obtains the following expression for the generating functional
\[Z[J_{1},J_{2}]=\int d\varphi_{+}\,d\varphi_{-}\,\rho(\varphi_{+}, \varphi_{-})\] \[\quad\times\int\mathcal{D}\phi_{1}\,\mathcal{D}\phi_{2}\,\exp \Biggl{\{}iS[\phi_{1}]-iS[\phi_{2}]\] \[\quad+i\int_{0}^{T}dt\,\,\left(J_{1}^{T}\phi_{1}+J_{2}^{T}\phi_{ 2}\right)\Biggr{\}}\,\Bigg{|}_{\begin{subarray}{c}\phi_{1}(T)=\phi_{2}(T),\\ \phi_{1}(0)=\varphi_{+},\,\phi_{2}(0)=\varphi_{-}\end{subarray}}, \tag{4.4}\]
where the integration over \(\phi_{1,2}(t)\) runs with the indicated boundary conditions and we introduce the notation for the coordinate representation of the density matrix \(\rho(\varphi_{+},\varphi_{-})=\langle\varphi_{+}|\,\hat{\rho}\,|\varphi_{-}\rangle\).
Now, we restrict ourselves to Gaussian density matrices, i.e. those whose coordinate representation has the form of a Gaussian exponential
\[\rho(\boldsymbol{\varphi})=\frac{1}{Z}\exp\left\{-\frac{1}{2}\boldsymbol{ \varphi}^{T}\boldsymbol{\Omega}\,\boldsymbol{\varphi}+\boldsymbol{j}^{T} \boldsymbol{\varphi}\right\},\,\,\boldsymbol{\varphi}=\begin{bmatrix}\varphi_ {+}\\ \varphi_{-}\end{bmatrix}, \tag{4.5}\]
where the matrix \(\boldsymbol{\Omega}\), and the vector \(\boldsymbol{j}\) play the role of the parameters of \(\hat{\rho}\), and normalization constant \(1/Z\) is independent of \(\boldsymbol{\varphi}\). The Hermitian property of the density matrix, \(\langle\varphi_{+}|\,\hat{\rho}\,|\varphi_{-}\rangle=\langle\varphi_{-}|\, \hat{\rho}\,|\varphi_{+}\rangle^{*}\), which in the coordinate representation reads
\[\rho(\varphi_{+},\varphi_{-})=\rho^{*}(\varphi_{-},\varphi_{+}), \tag{4.6}\]
implies the following conditions on \(\boldsymbol{\Omega}\) and \(\boldsymbol{j}\)
\[\boldsymbol{X}\,\boldsymbol{\Omega}\,\boldsymbol{X}=\boldsymbol{\Omega}^{*}, \quad\boldsymbol{X}\,\boldsymbol{j}=\boldsymbol{j}^{*},\quad\boldsymbol{X} \equiv\left[\begin{array}{cc}0&I\\ I&0\end{array}\right], \tag{4.7}\]
or, in a more explicit block-matrix form
\[\boldsymbol{j}=\left[\begin{array}{c}j\\ j^{*}\end{array}\right],\,\,\boldsymbol{\Omega}=\left[\begin{array}{cc}R&S \\ S^{*}&R^{*}\end{array}\right],\,R=R^{T},\,\,S=S^{\dagger}. \tag{4.8}\]
Normalizability of \(\hat{\rho}\) implies that the real part of the sum \(R+S\) is positive-definite. The case in which the matrix \(S\) is non-vanishing corresponds to mixed states, i.e. such that \(\hat{\rho}^{2}\neq\hat{\rho}\). The role of the linear term in the exponential in (4.5) is two-fold. Firstly, \(\boldsymbol{j}\) defines a non-vanishing mean value of the field operator. Secondly, it can also be used to introduce non-linearities into the density matrix, namely by differentiating it with respect to \(\boldsymbol{j}\). A typical example of a (pure) Gaussian state is the vacuum state (3.104), i.e. \(\hat{\rho}=|0\rangle\langle 0|\), associated with some choice of the annihilation operator, for which \(R=\omega^{*}\), \(S=0\), and \(\boldsymbol{j}=0\). Another example of a pure Gaussian state is the coherent state (3.110), whose density matrix reads \(\hat{\rho}=|\alpha\rangle\langle\alpha|\), with \(R=\omega^{*}\), \(S=0\) again, but \(j=\sqrt{2\omega_{\mathrm{re}}}\alpha^{*}\).
### In-in boundary value problem
Substituting the general Gaussian density matrix to (4.4), one obtains
\[Z[J_{1},J_{2}]=\int d\mathbf{\varphi}\int\limits_{\mathbf{\phi}(0)=\mathbf{\varphi}}\mathcal{D}\mathbf{\phi}\;\exp\Bigg\{i\mathbf{S}[\mathbf{\phi}]+i\int_{0}^{T}dt\,\mathbf{J}^{T}\mathbf{\phi}-\frac{1}{2}\mathbf{\varphi}^{T}\mathbf{\Omega}\,\mathbf{\varphi}+\mathbf{j}^{T}\mathbf{\varphi}\Bigg\}\Bigg|_{\phi_{1}(T)=\phi_{2}(T)}, \tag{4.9}\]

where the two fields and the two sources are combined into the columns

\[\mathbf{\phi}(t)=\begin{bmatrix}\phi_{1}(t)\\ \phi_{2}(t)\end{bmatrix},\qquad\mathbf{J}(t)=\begin{bmatrix}J_{1}(t)\\ J_{2}(t)\end{bmatrix}, \tag{4.10}\]

and the total action \(\mathbf{S}[\mathbf{\phi}]=S[\phi_{1}]-S[\phi_{2}]\) can
be rewritten in the joint form
\[\mathbf{S}[\mathbf{\phi}]=\frac{1}{2}\int_{0}^{T}dt\,\mathbf{\phi}^{T}\overset{\leftrightarrow}{\mathbf{F}}\,\mathbf{\phi}=\frac{1}{2}\int_{0}^{T}dt\,\mathbf{\phi}^{T}\mathbf{F}\mathbf{\phi}+\frac{1}{2}\mathbf{\phi}^{T}\mathbf{W}\mathbf{\phi}\bigg{|}_{0}^{T}. \tag{4.11}\]
This allows us to treat the underlying equations of motion, Green's functions, etc. in exactly the same way as the original theory with the action (3.1), except that now the field content is doubled. In terms of the new notations the expression for the generating functional is given by Eqs.(2.8)-(2.10) of Section 2.
The saddle point equation obtained by varying the exponential of this double-field action (2.8) with respect to all fields including the boundary values at \(t=0\) and \(t=T\) reads
\[\delta\left\{i\mathbf{S}[\mathbf{\phi}]+i\int_{0}^{T}dt\,\mathbf{J}^{T}\mathbf{ \phi}-\frac{1}{2}\mathbf{\varphi}^{T}\mathbf{\Omega}\,\mathbf{\varphi}+\mathbf{j}^{T}\mathbf{ \varphi}\right\}\] \[\qquad\qquad=\int_{0}^{T}dt\,\delta\mathbf{\phi}^{T}(\mathbf{F}\mathbf{\phi} +\mathbf{J})+i\,\delta\mathbf{\phi}^{T}\,\mathbf{W}\mathbf{\phi}\big{|}_{t=T}\] \[\qquad\qquad-\delta\mathbf{\varphi}^{T}\Big{[}(i\mathbf{W}+\mathbf{\Omega}) \mathbf{\phi}\big{|}_{t=0}-\mathbf{j}\,\Big{]}=0. \tag{4.12}\]
Independent variation of the fields \(\delta\mathbf{\phi}(t)\) in the interior of the time interval gives equations of motion
\[\mathbf{F}\mathbf{\phi}(t)+\mathbf{J}(t)=0, \tag{4.13}\]
whereas the variation of the boundary values \(\delta\mathbf{\phi}(T)\) and \(\delta\mathbf{\phi}(0)=\delta\mathbf{\varphi}\) supplies these equations with boundary conditions, which read as the following matrix relations
\[(i\mathbf{W}+\mathbf{\Omega})\,\mathbf{\phi}\big{|}_{t=0}=\mathbf{j}, \tag{4.14}\] \[\Big{[}\,I\quad I\Big{]}\,\mathbf{W}\mathbf{\phi}\big{|}_{t=T}=0,\,\Big{[} \,I\,\,\,-I\,\Big{]}\,\mathbf{\phi}\big{|}_{t=T}=0, \tag{4.15}\]
where we took into account that, in view of \(\phi_{1}(T)=\phi_{2}(T)\), the variation \(\delta\mathbf{\phi}^{T}(T)=\delta\phi_{1}^{T}(T)\,\Big{[}\,I\,\,\,I\Big{]}\), so that the boundary conditions at \(t=T\) reduce to the equality of the fields and of their time derivatives for \(\phi_{1}\) and \(\phi_{2}\).
To solve the boundary value problem above, we first find the Green's function subject to the homogeneous version of the above boundary conditions, i.e. those of vanishing \(\mathbf{j}\)
\[\mathbf{FG}(t,t^{\prime})=\mathbf{I}\,\delta(t-t^{\prime}), \tag{4.16}\] \[(i\mathbf{W}+\mathbf{\Omega})\mathbf{G}(t,t^{\prime})\big{|}_{t=0}=0,\] (4.17) \[\Big{[}\,I\quad I\Big{]}\,\mathbf{WG}(t,t^{\prime})\big{|}_{t=T}=0,\] (4.18) \[\Big{[}\,I\,\,\,-I\,\Big{]}\,\mathbf{G}(t,t^{\prime})\big{|}_{t=T}=0.\]
We can construct the Green's function \(\mathbf{G}\), solving the problem above, out of the basis functions \(\mathbf{v}_{\pm}\). These basis functions should solve the homogeneous equation and satisfy the same boundary conditions as those of the Green's function,
\[\mathbf{Fv}_{\pm}(t)=0,\quad(i\mathbf{W}+\mathbf{\Omega})\,\mathbf{v}_{-}(t)\big{|} _{t=0}=0, \tag{4.19}\] \[\Big{[}\,I\quad I\Big{]}\,\mathbf{W}\mathbf{v}_{+}(t)\big{|}_{t=T}=0,\] (4.20) \[\Big{[}\,I\,\,\,-I\Big{]}\,\mathbf{v}_{+}(t)\big{|}_{t=T}=0.\]
Applying the generic Green's function expression (3.43) to the case of the doubled field content, we obtain the Green's function \(\mathbf{G}\) in terms of these basis functions
\[\mathbf{G}(t,t^{\prime}) =-\mathbf{v}_{+}(t)\,\mathbf{\Delta}_{-+}^{-1}\,\mathbf{v}_{-}^{T}(t^{\prime}) \,\theta(t-t^{\prime})\] \[\quad+\mathbf{v}_{-}(t)\,\mathbf{\Delta}_{+-}^{-1}\,\mathbf{v}_{+}^{T}(t^{ \prime})\,\theta(t^{\prime}-t) \tag{4.21}\] \[\mathbf{\Delta}_{-+} =\mathbf{v}_{-}^{T}\,\mathbf{W}\mathbf{v}_{+}-(\mathbf{W}\mathbf{v}_{-})^{T}\,\mathbf{v}_ {+}=-\mathbf{\Delta}_{+-}^{T}. \tag{4.22}\]
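To make the two-solution construction (4.21)-(4.22) concrete, one can specialize it to a single oscillator mode, where \(F=-d^{2}/dt^{2}-\omega_{0}^{2}\), \(W=d/dt\) and the basis functions \(\propto e^{\mp i\omega_{0}t}\) give \(G(t,t^{\prime})=e^{-i\omega_{0}|t-t^{\prime}|}/(2i\omega_{0})\). The short numerical sketch below is ours (grid sizes and tolerances are arbitrary choices) and verifies \(FG(\cdot,t^{\prime})=\delta(t-t^{\prime})\) by finite differences:

```python
# A numerical sketch (ours): the generic two-solution construction (4.21)-(4.22)
# for one mode, F = -d^2/dt^2 - w0^2, W = d/dt, basis functions ~ exp(-+ i w0 t),
# gives G(t,t') = exp(-i w0 |t-t'|)/(2 i w0); we verify F G(.,t') ~ delta(t-t').
import numpy as np

w0 = 1.3
t = np.linspace(-5.0, 5.0, 2001)
dt = t[1] - t[0]
tp = 0.0                                          # second argument t'

G = np.exp(-1j * w0 * np.abs(t - tp)) / (2j * w0)
FG = -(np.roll(G, -1) - 2 * G + np.roll(G, 1)) / dt**2 - w0**2 * G
FG, tt = FG[1:-1], t[1:-1]                        # drop wrap-around endpoints

assert abs(FG.sum() * dt - 1.0) < 1e-2            # integrates to one ...
assert np.abs(FG[np.abs(tt - tp) > 5 * dt]).max() < 1e-4   # ... and vanishes off t'
print("F G = delta verified for the oscillator mode")
```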
### Neumann type basis functions and Green's function representation
However, we do not have the explicit form of the basis functions \(\mathbf{v}_{\pm}\). We will construct \(\mathbf{v}_{\pm}\) with the help of another set of basis functions \(\mathbf{v}\), \(\mathbf{v}^{*}\), subject to much simpler boundary conditions
\[\mathbf{Fv}(t)=0,\quad(i\mathbf{W}-\mathbf{\omega})\mathbf{v}(t)\big{|}_{t=0}=0, \tag{4.23}\] \[\mathbf{\omega}=\left[\begin{array}{cc}\omega&0\\ 0&\omega^{*}\end{array}\right]. \tag{4.24}\]
Since \(\mathbf{W}\) and \(\mathbf{\omega}\) are block-diagonal, the basis functions \(\mathbf{v}\), \(\mathbf{v}^{*}\) can be chosen block-diagonal too, namely
\[\mathbf{v}=\left[\begin{array}{cc}v&0\\ 0&v^{*}\end{array}\right],\qquad\mathbf{v}^{*}=\left[\begin{array}{cc}v^{*}&0\\ 0&v\end{array}\right]. \tag{4.25}\]
With a real operator \(F\) the blocks of these matrices solve the equations \(Fv(t)=0\) and \(Fv^{*}(t)=0\), subject to the complex conjugated boundary conditions
\[(iW-\omega)v(t)\big{|}_{t=0}=0,\quad(iW+\omega^{*})v^{*}(t)\big{|}_{t=0}=0. \tag{4.26}\]
Thus, \(v\) and \(v^{*}\) are simply the basis functions for the single field \(\phi_{+}\) or \(\phi_{-}\) subject to the Neumann boundary conditions introduced above. We assume that \(\omega\) is a symmetric matrix with a positive-definite real part.
The answer for the basis function \(\mathbf{v}_{+}\) in terms of \(v\) and \(v^{*}\) can be easily constructed as
\[\mathbf{v}_{+}=\mathbf{v}+\mathbf{v}^{*}\mathbf{X}=\left[\begin{array}{cc}v&v^{*}\\ v&v^{*}\end{array}\right],\,\mathbf{X}=\left[\begin{array}{cc}0&I\\ I&0\end{array}\right], \tag{4.27}\]
while the calculation of \(\mathbf{v}_{-}\) requires more effort. We will obtain the answer for \(\mathbf{v}_{-}\) with the use of the Bogoliubov coefficients (3.103) relating two different sets of Neumann basis functions, by treating \(\mathbf{v}_{-}\) as the negative frequency basis function complex conjugated to its positive frequency counterpart \(\mathbf{v}^{*}_{-}\), which satisfies at \(t=0\) the boundary condition \((i\mathbf{W}-\mathbf{\Omega}^{*})\,\mathbf{v}^{*}_{-}|_{t=0}=0\). Thus, in accordance with (3.103), the answer for \(\mathbf{v}_{-}\) reads
\[\mathbf{v}_{-}=\mathbf{v}^{*}\mathbf{U}^{T}-\mathbf{v}\mathbf{V}^{T}, \tag{4.28}\]
where \(\mathbf{U}\), \(\mathbf{V}\) are the corresponding Bogoliubov coefficients
\[\mathbf{U} =\frac{1}{\sqrt{2\mathbf{\Omega}_{\rm re}}}(\mathbf{\Omega}+\mathbf{\omega}) \frac{1}{\sqrt{2\mathbf{\omega}_{\rm re}}}, \tag{4.29a}\] \[\mathbf{V} =\frac{1}{\sqrt{2\mathbf{\Omega}_{\rm re}}}(\mathbf{\Omega}-\mathbf{\omega}^{ *})\frac{1}{\sqrt{2\mathbf{\omega}_{\rm re}}}. \tag{4.29b}\]
Here we assume the normalization \(\mathbf{v}(0)=1/\sqrt{2\mathbf{\omega}_{\rm re}}\), \(\mathbf{v}_{-}(0)=1/\sqrt{2\mathbf{\Omega}_{\rm re}}\), and denote the real parts of \(\mathbf{\omega}\) and \(\mathbf{\Omega}\) respectively as \(\mathbf{\omega}_{\rm re}\) and \(\mathbf{\Omega}_{\rm re}\).
Finally, let us consider the details of the Green's function \(\mathbf{G}(t,t^{\prime})\) defined by (4.21) for a particular form of \(\mathbf{v}_{\pm}\) we have just built. Its matrix \(\mathbf{\Delta}_{-+}\) given by (4.22) reads
\[i\mathbf{\Delta}_{-+}=\frac{1}{\sqrt{2\mathbf{\Omega}_{\rm re}}}\Big{[}(\mathbf{I}-\mathbf{X} )\,\mathbf{\omega}+\mathbf{\Omega}\,(\mathbf{I}+\mathbf{X})\Big{]}\frac{1}{\sqrt{2\mathbf{\omega} _{\rm re}}}. \tag{4.30}\]
Next, let us consider separately the first term in (4.21). After the calculation presented in Appendix B one obtains for it the following form
\[\mathbf{v}_{+}(t)\,(i\mathbf{\Delta}_{-+})^{-1}\,\mathbf{v}_{-}^{T}(t^{\prime})=\mathbf{v}_{+} (t)\,\mathbf{v}^{\dagger}(t^{\prime})+\mathbf{v}_{+}(t)\,\mathbf{\nu}\,\mathbf{v}_{+}^{T}(t^{ \prime}), \tag{4.31}\]
where we introduce the following symmetric matrix
\[\mathbf{\nu}=\Big{[}\mathbf{I}+\mathbf{X}-\sqrt{2\mathbf{\omega}_{\rm re}}\,\mathbf{X}\,(\mathbf{\omega }+\mathbf{\Omega})^{-1}\mathbf{X}\sqrt{2\mathbf{\omega}_{\rm re}}\Big{]}^{-1}-\mathbf{X}. \tag{4.32}\]
Recalling that the second term of the expression (4.21) can be obtained from the first one by the simultaneous transposition and exchange of time arguments, and observing that the second term in (4.31) is symmetric under this transformation, we find that two theta functions sum up to identity, so that the final expression for the Green's function reads
\[i\mathbf{G}(t,t^{\prime})=i\mathbf{G}_{0}(t,t^{\prime})+\mathbf{v}_{+}(t)\,\mathbf{\nu}\,\mathbf{ v}_{+}^{T}(t^{\prime}), \tag{4.33}\]
where \(\mathbf{G}_{0}\) is defined as
\[i\mathbf{G}_{0}(t,t^{\prime}) =\mathbf{v}_{+}(t)\,\mathbf{v}^{\dagger}(t^{\prime})\,\theta(t-t^{\prime} )+\mathbf{v}^{*}(t)\,\mathbf{v}_{+}^{T}(t^{\prime})\,\theta(t^{\prime}-t)\] \[=\mathbf{v}(t)\,\mathbf{v}^{\dagger}(t^{\prime})\,\theta(t-t^{\prime})+ \mathbf{v}^{*}(t)\,\mathbf{v}^{T}(t^{\prime})\,\theta(t^{\prime}-t)\] \[\quad+\mathbf{v}^{*}(t)\,\mathbf{X}\,\mathbf{v}^{\dagger}(t^{\prime}), \tag{4.34}\]
and is interpreted as the Green's function corresponding to the vacuum state, with the density matrix \(\hat{\rho}_{0}=|0\rangle\langle 0|\) associated with the basis functions \(v(t)\), \(v^{*}(t)\). Indeed, from (3.109) one observes that the matrix \(\mathbf{\Omega}\), defining the vacuum density matrix \(\hat{\rho}_{0}\), coincides with \(\mathbf{\omega}^{*}\), i.e. \(\mathbf{\Omega}=\mathbf{\omega}^{*}\). In this case \(\mathbf{\nu}\) vanishes due to its definition (4.32), so from (4.33) we find that \(\mathbf{G}=\mathbf{G}_{0}\).
Substituting the generating functional obtained to (4.2), we observe that for vanishing \(\mathbf{j}\) the block-matrix components of \(\mathbf{G}\) are composed of the Feynman, anti-Feynman and Wightman Green's functions (4.3), namely
\[\mathbf{G}(t,t^{\prime})=\begin{bmatrix}G_{\mathrm{T}}(t,t^{\prime})&G_{<}(t,t^{\prime})\\ G_{>}(t,t^{\prime})&G_{\widetilde{\mathrm{T}}}(t,t^{\prime})\end{bmatrix}, \tag{4.35}\]
where \(G_{>}(t,t^{\prime})\equiv G_{<}^{T}(t^{\prime},t)\), and the explicit form of the block components can be read off from (4.33).
Now we have to find the solution \(\mathbf{\phi}(t)\) of the boundary value problem (4.13)-(4.15) in order to substitute it into the exponential of (2.8). The only inhomogeneous boundary conditions in this problem are the Neumann conditions (4.14), so that the solution is given by the double-field version of (3.58) with the substitutions \(j_{+}\mapsto 0\) (remember that there is no \(j_{+}\) at the point \(t=T\)) and \(j_{-}\mapsto-\mathbf{j}\). Thus it reads
\[\mathbf{\phi}(t)=i\mathbf{G}(t,0)\,\mathbf{j}-\int_{0}^{T}dt^{\prime}\,\mathbf{G}(t,t^{\prime} )\mathbf{J}(t^{\prime}). \tag{4.36}\]
Substituting it to the exponential of (2.8) then gives Eq. (2.11) advocated in Section 2.
\[Z[\mathbf{J}]=\text{const}\times\exp\!\left\{-\frac{i}{2}\int dt\,dt^{\prime}\,\mathbf{J}^{T}(t)\mathbf{G}(t,t^{\prime})\mathbf{J}(t^{\prime})\right.\] \[\quad-\int dt\,\mathbf{J}^{T}(t)\mathbf{G}(t,0)\,\mathbf{j}+\frac{i}{2}\mathbf{j}^{T}\mathbf{G}(0,0)\,\mathbf{j}\Bigg{\}}, \tag{4.37}\]
where all time integrations run from \(t=0\) to \(t=T\). Here the restriction of \(\mathbf{G}(t,t^{\prime})\) to \(\mathbf{G}(t,0)\) does not lead to essential simplification whereas \(\mathbf{G}(0,0)\) has, as shown in Appendix C, the following explicit and simple form in terms of the parameters of the density matrix
\[i\mathbf{G}(0,0)=\frac{\mathbf{I}+\mathbf{X}}{2\mathbf{\Omega}_{\rm re}}, \tag{4.38}\]
where the "ratio" of matrices \(\mathbf{I}+\mathbf{X}\) and \(\mathbf{\Omega}_{\rm re}\) is unambiguous because these matrices are commuting in view of the special form of \(\mathbf{\Omega}\) subject to the relation \(\mathbf{X}\mathbf{\Omega}\mathbf{X}=\mathbf{\Omega}^{*}\).
### Keldysh rotation
For further convenience it is useful to perform a change of basis in the doubled field space \(\phi_{+}\), \(\phi_{-}\) and introduce the so-called classical and quantum fields \(\phi_{c}\) and \(\phi_{q}\)[44; 45],
\[\mathbf{\phi}_{K}(t)=\begin{bmatrix}\phi_{c}(t)\\ \phi_{q}(t)\end{bmatrix}=\mathbf{C}\mathbf{\phi}(t),\,\mathbf{C}\equiv\begin{bmatrix}\frac {1}{2}I&\frac{1}{2}I\\ I&-I\end{bmatrix}. \tag{4.39}\]
This transformation is called Keldysh rotation. In the new basis, the Green's function \(\mathbf{G}\) takes the form
\[\mathbf{G}_{K}(t,t^{\prime})\!=\!\mathbf{C}\mathbf{G}(t,t^{\prime})\,\mathbf{C}^{T}\!\!=\!\left[ \begin{array}{cc}G_{K}(t,t^{\prime})&G_{R}(t,t^{\prime})\\ G_{A}(t,t^{\prime})&0\end{array}\right]\!. \tag{4.40}\]
Here \(G_{R}\) and \(G_{A}\) are the retarded and advanced Green's functions, respectively, having the following operator form
\[G_{R}(t,t^{\prime}) =-i\,\mathrm{tr}\!\left(\hat{\rho}\left[\hat{\phi}(t),\hat{\phi}( t^{\prime})\right]\right)\theta(t-t^{\prime})\] \[=-i\big{[}v(t)v^{\dagger}(t^{\prime})-v^{*}(t)v^{T}(t^{\prime}) \big{]}\,\theta(t-t^{\prime}), \tag{4.41}\] \[G_{A}(t,t^{\prime}) =G_{R}^{T}(t^{\prime},t). \tag{4.42}\]
They are consistent with the classical definition (3.38), in particular because the commutator average is independent of the state \(\hat{\rho}\). The block \(G_{K}\) is called the Keldysh Green's function and contains the information about the state. In view of the operator averages (4.3) it is expressed as the mean value of the anti-commutator of the fields and, due to (4.40), explicitly reads in terms of the basis functions as
\[iG_{K}(t,t^{\prime})=\tfrac{1}{2}\,\mathrm{tr}\!\left(\hat{\rho} \left\{\hat{\phi}(t),\hat{\phi}(t^{\prime})\right\}\right)\] \[\qquad=\left[\begin{array}{cc}v(t)&v^{*}(t)\end{array}\right] \left(\mathbf{\nu}+\tfrac{1}{2}\mathbf{X}\right)\left[\begin{array}{c}v^{T}(t^{ \prime})\\ v^{\dagger}(t^{\prime})\end{array}\right]. \tag{4.43}\]
### Special choice of basis functions and particle interpretation
Thus far, the matrix \(\omega\), which defines the Neumann boundary conditions for the basis functions \(v\), \(v^{*}\), has not been fixed except for the requirements of symmetry under transposition and positive definiteness of its real part. In this section we make a convenient choice of \(\omega\) which leads to expressions for the Green's functions admitting a particle interpretation with a well-defined notion of average occupation number.
For this purpose, it is useful to rewrite the Keldysh Green's function in terms of non-anomalous and anomalous particle averages
\[\nu=\mathrm{tr}\!\left[\hat{\rho}\,\hat{a}^{\dagger}\hat{a}\right],\qquad \kappa=\mathrm{tr}\!\left[\hat{\rho}\,\hat{a}\,\hat{a}\right]\!, \tag{4.44}\]
so that from (4.43) \(G_{K}\) becomes
\[i\,G_{K}(t,t^{\prime}) =\left[\begin{array}{cc}v(t)&v^{*}(t)\end{array}\right]\] \[\times\left[\begin{array}{cc}\kappa&\nu^{*}+\tfrac{1}{2}I\\ \nu+\tfrac{1}{2}I&\kappa^{*}\end{array}\right]\left[\begin{array}{c}v^{T}(t^ {\prime})\\ v^{\dagger}(t^{\prime})\end{array}\right]. \tag{4.45}\]
Note that the matrix \(\kappa\) is symmetric, whereas \(\nu\) is Hermitian. Comparing with (4.43) we find the connection between particle averages and the matrix \(\mathbf{\nu}\)
\[\mathbf{\nu}=\left[\begin{array}{cc}\kappa&\nu^{*}\\ \nu&\kappa^{*}\end{array}\right]. \tag{4.46}\]
Thus, we see that the block-diagonal components of \(\mathbf{\nu}\) are responsible for the anomalous averages. To ascribe a particle interpretation to the creation/annihilation operators, we will try to choose the matrix \(\omega\), defining the corresponding basis functions \(v(t)\) and \(v^{*}(t)\), so that the diagonal blocks of \(\mathbf{\nu}\), defining the anomalous averages \(\kappa\), vanish. Moreover, this choice will simplify the expressions for the Green's functions, since they contain terms involving \(\kappa\). For example, with a nonzero \(\kappa\) the Wightman function reads
\[G_{>}(t,t^{\prime}) =v(t)\,(\nu^{*}+I)\,v^{\dagger}(t^{\prime})+v^{*}(t)\,\nu\,v^{T}( t^{\prime})\] \[\qquad+v(t)\,\kappa\,v^{T}(t^{\prime})+v^{*}(t)\,\kappa^{*}\,v^{ \dagger}(t^{\prime}). \tag{4.47}\]
To make the matrix \(\mathbf{\nu}\) block off-diagonal consider the expression (4.32) and note that the only block diagonal contribution is contained in the identity matrix \(\mathbf{I}\) and, possibly, in the term involving \((\mathbf{\omega}+\mathbf{\Omega})^{-1}\). Thus, we want to choose \(\omega\) such that block-diagonal contribution of the latter exactly cancels those of \(\mathbf{I}\). Using the block matrix inversion formula6
Footnote 6: The useful form of the block matrix inversion formula is
\[\left[\begin{array}{cc}A&B\\ C&D\end{array}\right]^{-1} =\left[\begin{array}{cc}(A-BD^{-1}C)^{-1}&0\\ 0&(D-CA^{-1}B)^{-1}\end{array}\right]\] \[\qquad\times\left[\begin{array}{cc}I&-BD^{-1}\\ -CA^{-1}&I\end{array}\right]\]
with \(A=R+\omega\), \(B=S\), \(C=S^{*}\), \(D=R^{*}+\omega^{*}\), we obtain the condition for the vanishing of the block-diagonal part of \(\mathbf{\nu}\),
\[R+\omega-S(R^{*}+\omega^{*})^{-1}S^{*}-2\omega_{\mathrm{re}}=0. \tag{4.48}\]
We will focus on the case in which \(R\) and \(S\) are real. The formalism described below can be easily extended to the complex \(R\), but it seems that there is no straightforward extension to general complex (Hermitian) \(S\). Introducing the dimensionless quantities
\[r=\omega^{-1/2}\,R\,\omega^{-1/2},\qquad s=\omega^{-1/2}\,S\,\omega^{-1/2}, \tag{4.49}\]
the equation (4.48) can be rewritten as \(r+I-s(r+I)^{-1}s=2I\) and further simplified by introducing the new variable \(\tilde{s}=(r+I)^{-1/2}s\,(r+I)^{-1/2}\) and solving for \(\tilde{s}\), so that it takes the following form
\[r^{2}=s^{2}+I. \tag{4.50}\]
This is an implicit equation for \(\omega\), due to the above definition of \(r\) and \(s\). Its explicit form reads
\[R\omega^{-1}R=S\omega^{-1}S+\omega, \tag{4.51}\]
which can be solved in the form advocated in Section 2
\[\omega=R^{1/2}\sqrt{I-\sigma^{2}}R^{1/2},\quad\sigma\equiv R^{-1/2}SR^{-1/2}. \tag{4.52}\]
Note that the assumption of positive definiteness of \(\omega\) implies that \(I-\sigma^{2}=(I-\sigma)(I+\sigma)\) is positive definite. Recalling that \(R+S=R^{1/2}(I+\sigma)R^{1/2}\) should be positive definite for normalizability of the density matrix, it is easy to see that \(I-\sigma=R^{-1/2}(R-S)R^{-1/2}\), or equivalently \(R-S\) should be positive definite too. Then, the substitution of the obtained expression for \(\omega\) to (4.32) gives the desired block-diagonal matrix form of \(\mathbf{\nu}\) advocated in Section 2
\[\mathbf{\nu} =\left[\begin{array}{cc}0&\nu\\ \nu&0\end{array}\right],\quad\nu\equiv\frac{1}{2}\varkappa\left(\sqrt{\frac{I -\sigma}{I+\sigma}}-I\right)\varkappa^{T}, \tag{4.53}\] \[\varkappa \equiv\left[\omega^{1/2}R^{-1}\omega^{1/2}\right]^{1/2}\omega^{- 1/2}R^{1/2}=\left(\varkappa^{T}\right)^{-1} \tag{4.54}\]
where the matrix \(\varkappa\) introduced above is orthogonal. Therefore, as a consequence of positive definiteness of \(I+\sigma\) and \(I-\sigma\), the matrix \(\nu\) is necessarily real. As shown in Appendix D, for the density matrix to be positive definite, the matrix \(\sigma\) should be negative definite, so \(\nu\) is positive definite.
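The chain (4.48)-(4.53) can be tested numerically. The sketch below is ours (random sample matrices; all names and the use of NumPy/SciPy are our own assumptions): it draws random real symmetric \(R\), \(S\) with \(R\pm S\) positive definite, builds \(\omega\) via (4.52), and checks that (4.51) holds and that the anomalous (block-diagonal) part of \(\mathbf{\nu}\) in (4.32) vanishes.

```python
# A numerical sketch (ours): draw random real symmetric R, S with R +- S > 0,
# build omega via Eq. (4.52) and verify Eq. (4.51) and the vanishing of the
# anomalous (block-diagonal) part of nu defined in Eq. (4.32).
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(1)
n = 3
a = rng.standard_normal((n, n)); R = a @ a.T + n * np.eye(n)
b = rng.standard_normal((n, n)); sig = -(b @ b.T)
sig *= 0.5 / np.abs(np.linalg.eigvalsh(sig)).max()     # negative definite, |sigma| < 1
Rh = np.real(sqrtm(R))
S = Rh @ sig @ Rh                                      # so sigma = R^{-1/2} S R^{-1/2}

omega = Rh @ np.real(sqrtm(np.eye(n) - sig @ sig)) @ Rh   # Eq. (4.52)
wi = np.linalg.inv(omega)
assert np.allclose(R @ wi @ R, S @ wi @ S + omega)        # Eq. (4.51)

I2 = np.eye(2 * n)
X = np.kron(np.array([[0.0, 1.0], [1.0, 0.0]]), np.eye(n))
Om = np.block([[R, S], [S, R]])                           # Eq. (4.8), real R, S
W = np.kron(np.eye(2), omega)                             # bold omega, real case
w2 = np.kron(np.eye(2), np.real(sqrtm(2 * omega)))        # sqrt(2 omega_re), doubled
nu = np.linalg.inv(I2 + X - w2 @ X @ np.linalg.inv(W + Om) @ X @ w2) - X   # Eq. (4.32)

assert np.allclose(nu[:n, :n], 0) and np.allclose(nu[n:, n:], 0)   # kappa = 0
print("anomalous averages vanish for the choice (4.52)")
```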
Substituting the block form (4.53) of \(\mathbf{\nu}\) into (4.33), one immediately obtains simple expressions for the Green's functions. In particular, for the Wightman and Feynman functions one has
\[iG_{\mathrm{T}}(t,t^{\prime})= v(t)\,v^{\dagger}(t^{\prime})\,\theta(t-t^{\prime})+v^{*}(t)\,v^{T}(t^{ \prime})\,\theta(t^{\prime}-t)\] \[+\,v(t)\,\nu\,v^{\dagger}(t^{\prime})+v^{*}(t)\,\nu\,v^{T}(t^{ \prime}), \tag{4.55}\] \[iG_{>}(t,t^{\prime})= v(t)\left(\nu+I\right)v^{\dagger}(t^{\prime})+v^{*}(t)\,\nu\,v^{T}(t^{ \prime}), \tag{4.56}\]
while the others can be expressed through them in a straightforward way.
It will be useful to express \(\mathbf{\Omega}\) in terms of \(\nu\). Disentangling \(\mathbf{\Omega}\) from (4.32), and then using the explicit form (4.53) of \(\mathbf{\nu}\), corresponding to the special choice of Neumann basis functions with (4.52), we obtain the following expression
\[\mathbf{\Omega}=\omega^{1/2}\left[\begin{array}{cc}\frac{2\nu^{2}+2\nu+I}{2\nu+I }&-\frac{2\nu(\nu+I)}{2\nu+I}\\ -\frac{2\nu(\nu+I)}{2\nu+I}&\frac{2\nu^{2}+2\nu+I}{2\nu+I}\end{array}\right] \omega^{1/2}. \tag{4.57}\]
### Euclidean density matrix state
Now, let us focus on the particular Gaussian state, which is obtained from the Euclidean path integral, namely
\[\rho_{E}(\varphi_{+},\varphi_{-};J_{E}] =\frac{1}{Z}\int\limits_{\phi(\tau_{\pm})=\varphi_{\pm}}D\phi\, \exp\biggl{\{}-S_{E}[\phi]\] \[\qquad-\int_{0}^{\beta}d\tau\,J_{E}(\tau)\phi(\tau)\biggr{\}}, \tag{4.58}\]
Here \(S_{E}\) is the quadratic action of the Euclidean field theory within the time limits \(\tau_{\pm}\), which we choose to be \(\tau_{+}=\beta\) and \(\tau_{-}=0\),
\[S_{E}[\phi] =\frac{1}{2}\int_{\tau_{-}}^{\tau_{+}}d\tau\,\phi^{T}\overset{ \leftrightarrow}{F}_{E}\,\phi\] \[=\frac{1}{2}\int_{\tau_{-}}^{\tau_{+}}d\tau\,\phi^{T}F_{E}\,\phi +\frac{1}{2}\phi^{T}W_{E}\phi\left|{}_{\tau_{-}}^{\tau_{+}}, \tag{4.59}\] \[F_{E}\equiv-\frac{d}{d\tau}A_{E}\frac{d}{d\tau}-\frac{d}{d\tau}B _{E}+B_{E}^{T}\frac{d}{d\tau}+C_{E},\] (4.60) \[W_{E}\equiv A_{E}\frac{d}{d\tau}+B_{E}.\]
The partition function \(Z\) in the normalization factor is such that \(\operatorname{tr}\rho_{E}=1\) for vanishing source \(J_{E}=0\). Hermiticity of the density matrix implies the following (sufficient) condition on the coefficient matrices \(A_{E}\), \(B_{E}\), and \(C_{E}\) as the functions of \(\tau\)
\[A_{E}(\beta-\tau) =A_{E}^{*}(\tau),\quad B_{E}(\beta-\tau)=-B_{E}^{*}(\tau), \tag{4.61}\] \[C_{E}(\beta-\tau) =C_{E}^{*}(\tau),\]
which need not be real. Nevertheless, we restrict ourselves to the real case below. The source \(J_{E}\) is included in the path integral in order to be able to introduce non-linear terms into the Euclidean action, leading to non-Gaussianities of the resulting density matrix.
We take the path integral (4.58) over the Euclidean fields \(\phi\) using the saddle point method. The boundary conditions of the integral fix the endpoints \(\phi(\beta)=\varphi_{+}\), \(\phi(0)=\varphi_{-}\), so we have a boundary value problem with Dirichlet boundary conditions
\[F_{E}\phi+J_{E}=0, \tag{4.62a}\] \[\phi(\tau_{\pm})=\varphi_{\pm}, \tag{4.62b}\]
Using the Dirichlet Green's function \(G_{D}\) for vanishing boundary conditions
\[F_{E}G_{D}(\tau,\tau^{\prime})=\delta(\tau-\tau^{\prime}),\quad G_{D}(\tau_{\pm },\tau^{\prime})=0, \tag{4.63}\]
and substituting into (3.52), one expresses the solution of (4.62) as follows
\[\phi(\tau)=-\mathbf{w}_{E}^{T}(\tau)\,\mathbf{\varphi}-\int_{0}^{\beta}d\tau^{\prime}\,G _{D}(\tau,\tau^{\prime})J_{E}(\tau^{\prime}), \tag{4.64}\]
where we introduce the notations, similarly to those of the Lorentzian context (3.53)-(3.54), for the row \(\mathbf{w}_{E}^{T}(\tau)\) obtained by the transposition of the column \(\mathbf{w}_{E}(\tau)\) in
\[\mathbf{w}_{E}^{T}(\tau)=\bigl{[}\mathbf{w}_{E}(\tau)\bigr{]}^{T}=\begin{bmatrix}W_{E}G_{D}(\beta,\tau)\\ -W_{E}G_{D}(0,\tau)\end{bmatrix}^{T}\] \[=\Bigl{[}\,G_{D}(\tau,\beta)\overset{\leftarrow}{W}_{E}\,\,\,-G_{D}(\tau,0)\overset{\leftarrow}{W}_{E}\,\Bigr{]} \tag{4.65}\]
(the last equality uses the symmetry of the Dirichlet Green's function, \(G_{D}^{T}(\tau,\tau^{\prime})=G_{D}(\tau^{\prime},\tau)\)). Substitution
back to (4.58) gives
\[\rho_{E}(\varphi_{+}, \varphi_{-};J_{E}]=\text{const}\times\exp\biggl{\{}-\frac{1}{2} \boldsymbol{\varphi}^{T}\boldsymbol{\Omega}\,\boldsymbol{\varphi}+\boldsymbol{ j}^{T}\boldsymbol{\varphi}\] \[+\frac{1}{2}\int d\tau\,d\tau^{\prime}J_{E}(\tau)G_{D}(\tau,\tau ^{\prime})J_{E}(\tau^{\prime})\biggr{\}}, \tag{4.66}\]
where we disregard the source independent prefactor, all \(\tau\)-integrations run from \(0\) to \(\beta\), whereas the matrix \(\boldsymbol{\Omega}\) and the source \(\boldsymbol{j}\), introduced in (4.5) take the following particular form
\[\boldsymbol{\Omega}\equiv\begin{bmatrix}-\overset{\rightarrow}{W_{E}}G_{D}(\beta,\beta)\overset{\leftarrow}{W_{E}}&\overset{\rightarrow}{W_{E}}G_{D}(\beta,0)\overset{\leftarrow}{W_{E}}\\ \overset{\rightarrow}{W_{E}}G_{D}(0,\beta)\overset{\leftarrow}{W_{E}}&-\overset{\rightarrow}{W_{E}}G_{D}(0,0)\overset{\leftarrow}{W_{E}}\end{bmatrix}, \tag{4.67}\] \[\boldsymbol{j}^{T}=\int_{0}^{\beta}d\tau\,J_{E}(\tau)\,\boldsymbol{w}_{E}^{T}(\tau). \tag{4.68}\]
Now, one can substitute the density matrix (4.66), defined by the parameters (4.67), into the general expression (4.37) for the generating functional. This leads to
\[Z[\boldsymbol{J},J_{E}] =\text{const}\times\exp\biggl{\{}-\frac{i}{2}\int dt\,dt^{\prime} \,\boldsymbol{J}^{T}(t)\boldsymbol{G}(t,t^{\prime})\boldsymbol{J}(t^{\prime})\] \[-\int dt\,d\tau\,\boldsymbol{J}^{T}(t)\boldsymbol{G}(t,0) \overset{\rightarrow}{W_{E}}\boldsymbol{g}_{D}(\tau)\,J_{E}(\tau)\] \[+\frac{1}{2}\int d\tau\,d\tau^{\prime}\,J_{E}(\tau)\,G_{E}(\tau, \tau^{\prime})\,J_{E}(\tau^{\prime})\,\Biggr{\}}. \tag{4.69}\]
Note that the kernel of the third integral here is the periodic Euclidean Green's function
\[G_{E}(\tau,\tau^{\prime})=G_{D}(\tau,\tau^{\prime})+i\,\boldsymbol{w}_{E}^{T} (\tau)\,\boldsymbol{G}(0,0)\,\boldsymbol{w}_{E}(\tau^{\prime}) \tag{4.70}\]
corresponding to the fact that with the Lorentzian sources switched off the functional \(Z[0,J_{E}]\) represents the Euclidean path integral over periodic fields \(\phi(\tau)\) on the time interval with the identified boundary points \(\tau_{\pm}\). The expression for this Green's function, which seemingly depends on Lorentzian objects via \(\mathbf{G}(0,0)\), is in fact independent of them. This property is based on the relation (4.38) and is derived in Appendix C.
### Analytic continuation and KMS condition
The further transformation of the generating functional, which reveals its new analyticity properties, relies on two assumptions. The first assumption is that the Euclidean action (4.59) is obtained by analytic continuation of the Lorentzian one (3.1), namely
\[iS[\phi]\big{|}_{t=-i\tau}=-S_{E}[\phi] \tag{4.71}\]
This implies the following form of the Euclidean action coefficient functions
\[A_{E}(\tau) =A(-i\tau),\quad B_{E}(\tau)=-iB(-i\tau),\] \[C_{E}(\tau) =-C(-i\tau). \tag{4.72}\]
Though this requirement sounds rather restrictive, it can be based on the assumptions, discussed in the Introduction, about the properties of the Euclidean background underlying the quadratic action and sandwiched between the two (identified) turning points, at which the analytic match between the Euclidean and Lorentzian branches can be made. The other assumption, which we use in what follows, is the possibility of making the special choice of the Neumann basis functions derived above.
The first step is to rewrite the second and the third terms in the exponential of the generating functional (4.69) in terms of the Euclidean Neumann Green's function \(G_{N}(\tau,\tau^{\prime})\) instead of the Dirichlet one, i.e. \((W_{E}+\omega)G_{N}(\beta,\tau^{\prime})=(W_{E}-\omega^{*})G_{N}(0,\tau^{ \prime})=0\) where \(\omega\) is the same as in (4.23)-(4.24). This is done using the relations (3.65)-(3.66) (after the replacement \(\boldsymbol{\omega}\mapsto-i\boldsymbol{\omega}\) associated with the transition to the Euclidean version of Dirichlet and Neumann Green's functions) and the derivation in Appendix C. The result reads as the expression (4.69) with the kernel of the Lorentzian-Euclidean term \(-\boldsymbol{G}(t,0)\,\boldsymbol{w}_{E}(\tau)\) replaced by \(\boldsymbol{G}(t,0)\,(\boldsymbol{\omega}+\boldsymbol{\Omega})\,\boldsymbol{g }_{N}(\tau)\) and the new form of the periodic Green's function \(G_{E}(\tau,\tau^{\prime})\) in the Euclidean-Euclidean block
\[G_{E}(\tau,\tau^{\prime}) =G_{N}(\tau,\tau^{\prime})\] \[+\boldsymbol{g}_{N}^{T}(\tau)\sqrt{2\boldsymbol{\omega}_{\text{ re}}}(\boldsymbol{\nu}^{*}+\boldsymbol{X})\sqrt{2\boldsymbol{\omega}_{\text{ re}}}\boldsymbol{g}_{N}(\tau^{\prime}), \tag{4.73}\]
where \(\boldsymbol{g}_{N}(\tau)\) is the Euclidean version of the definition (3.59) for the Neumann Green's function.
To proceed further we have to derive several important properties of the Euclidean Neumann Green's function, which is the part of (4.73) specific to the choice (4.52) of \(\omega\). In terms of the Euclidean basis functions it reads
\[G_{N}(\tau,\tau^{\prime})=-u_{+}(\tau)(\Delta_{-+}^{N})^{-1}u_{-}^{T}(\tau^{\prime})\,\theta(\tau-\tau^{\prime})\] \[+u_{-}(\tau)(\Delta_{+-}^{N})^{-1}u_{+}^{T}(\tau^{\prime})\,\theta(\tau^{\prime}-\tau). \tag{4.74}\]
Here \(u_{+}\), \(u_{-}\) are the basis functions obeying Neumann boundary conditions
\[(W_{E}+\omega)u_{+}|_{\tau=\beta}=0,\quad(W_{E}-\omega)u_{-}|_{\tau=0}=0 \tag{4.75}\]
and, as usual,
\[\Delta_{+-}^{N}\!=\!u_{+}^{T}W_{E}u_{-}\!-\!(W_{E}u_{+})^{T}u_{-},\ \Delta_{-+}^{N}\!=\!-(\Delta_{+-}^{N})^{T}. \tag{4.76}\]
Note that the boundary conditions on \(u_{\pm}\) above are exactly the analytic continuation \(t\mapsto-i\tau\) of the boundary conditions (4.26) on \(v\), \(v^{*}\).
Now, consider in detail the matrix of boundary values of the Euclidean Neumann Green's function at \(\tau_{+}=\beta\) and \(\tau_{-}=0\)
\[\boldsymbol{G}_{N}\|\!=\!\left[\begin{array}{ll}u_{-}(\beta)(\Delta_{+-}^{N} )^{-1}u_{+}^{T}(\beta)&-u_{+}(\beta)(\Delta_{-+}^{N})^{-1}u_{-}^{T}(0)\\ u_{-}(0)(\Delta_{+-}^{N})^{-1}u_{+}^{T}(\beta)&-u_{+}(0)(\Delta_{-+}^{N})^{-1}u_{- }^{T}(0)\end{array}\right] \tag{4.77}\]
(double vertical bar denotes here the restriction of the two Green's function arguments to two boundary surfaces
thus forming the \(2\times 2\) block matrix). Using the Euclidean version of the relation (3.67), we find the alternative form of this matrix
\[\mathbf{G}_{N}\|=(\mathbf{\omega}+\mathbf{\Omega})^{-1}=\frac{1}{\sqrt{2\omega}}\left[\begin{array} []{cc}I&\frac{\nu}{\nu+I}\\ \frac{\nu}{\nu+I}&I\end{array}\right]\frac{1}{\sqrt{2\omega}}, \tag{4.78}\]
where we use the explicit form (4.53) of \(\mathbf{\nu}\), corresponding to the particular choice of basis functions described in the previous subsection.
Equating these two expressions for \(\mathbf{G}_{N}\|\), with due regard to the structure of \(\Delta_{+-}^{N}\) in (4.76), we find the two sets of equalities. The first set follows from the diagonal blocks
\[(W_{E}+\omega)u_{+}|_{\tau=0}=0,\qquad(W_{E}-\omega)u_{-}|_{\tau=\beta}=0, \tag{4.79}\]
and means that the basis functions \(u_{+}\), \(u_{-}\) obey the same Neumann boundary conditions at both boundary values of the Euclidean time (cf. Eq. (4.75)). This also implies the following explicit form of the matrices \(\Delta_{+-}^{N}\), \(\Delta_{-+}^{N}\)
\[\Delta_{+-}^{N}=-(\Delta_{-+}^{N})^{T}=2u_{+}^{T}\,\omega\,u_{-}, \tag{4.80}\]
where the basis functions \(u_{+}\), \(u_{-}\) are evaluated either at \(\tau=0\) or \(\tau=\beta\). Similarly, from the off-diagonal blocks of (4.78), one gets the formulas, relating the boundary values of the basis functions
\[\begin{split} u_{-}(\beta)&=\frac{1}{\sqrt{2\omega }}\frac{\nu+I}{\nu}\sqrt{2\omega}\,u_{-}(0),\\ u_{+}(\beta)&=\frac{1}{\sqrt{2\omega}}\frac{\nu}{ \nu+I}\sqrt{2\omega}\,u_{+}(0).\end{split} \tag{4.81}\]
It is useful to continue the Euclidean equations of motion beyond the interval \(0<\tau<\beta\) with the period \(\beta\) (which is again possible because \(\tau=0\) and \(\tau=\beta\) are the turning points),
\[\begin{split} A_{E}(\tau+\beta)&=A_{E}(\tau), \quad B_{E}(\tau+\beta)=B_{E}(\tau),\\ C_{E}(\tau+\beta)&=C_{E}(\tau).\end{split} \tag{4.82}\]
Together with (4.61) it also implies
\[\begin{split} A_{E}(\tau)&=A_{E}(-\tau),\quad B_{E} (\tau)=-B_{E}(-\tau),\\ C_{E}(\tau)&=C_{E}(-\tau).\end{split} \tag{4.83}\]
Since the functions \(u_{\pm}(\tau)\) satisfy the same homogeneous boundary conditions at both \(\tau=0\) and \(\tau=\beta\) (cf. (4.75) and (4.79)), after translation by the period they can differ only by multiplication with some non-singular matrices \(L_{\pm}\), \(u_{\pm}(\tau+\beta)=u_{\pm}(\tau)L_{\pm}\). From (4.81) we obtain their explicit form
\[\begin{split} u_{-}(\tau+\beta)&=u_{-}(\tau)\,u_{- }^{-1}(0)\frac{1}{\sqrt{2\omega}}\,\frac{\nu+I}{\nu}\sqrt{2\omega}\,u_{-}(0), \\ u_{+}(\tau+\beta)&=u_{+}(\tau)\,u_{+}^{-1}(0) \frac{1}{\sqrt{2\omega}}\frac{\nu}{\nu+I}\sqrt{2\omega}\,u_{+}(0).\end{split} \tag{4.84}\]
With the normalization
\[u_{+}(0)=u_{-}(0)=\frac{1}{\sqrt{2\omega}}, \tag{4.85}\]
this monodromy simplifies to
\[\begin{split} u_{-}(\tau+\beta)&=u_{-}(\tau)\, \frac{\nu+I}{\nu},\\ u_{+}(\tau+\beta)&=u_{+}(\tau)\,\frac{\nu}{\nu+I}. \end{split} \tag{4.86}\]
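For a single thermal mode the monodromy (4.86) is elementary to check: with \(u_{\pm}(\tau)=e^{\mp\omega_{0}\tau}/\sqrt{2\omega_{0}}\) and the Bose-Einstein occupation \(\nu\) (cf. Eq. (5.13) below), translation by the period multiplies \(u_{-}\) by \((\nu+I)/\nu\) and \(u_{+}\) by \(\nu/(\nu+I)\). A tiny sketch of ours (sample values are arbitrary):

```python
# A one-mode check (ours) of the monodromy (4.86): u_+-(tau) = exp(-+ w0 tau)/sqrt(2 w0)
# together with the Bose-Einstein nu of the thermal state, cf. Eq. (5.13).
import numpy as np

w0, beta, tau = 1.1, 0.8, 0.3
nu = 1.0 / (np.exp(beta * w0) - 1.0)
u_p = lambda s: np.exp(-w0 * s) / np.sqrt(2 * w0)
u_m = lambda s: np.exp(+w0 * s) / np.sqrt(2 * w0)

assert np.isclose(u_m(tau + beta), u_m(tau) * (nu + 1) / nu)
assert np.isclose(u_p(tau + beta), u_p(tau) * nu / (nu + 1))
print("monodromy (4.86) holds for a single thermal mode")
```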
Similarly, in view of the reflection symmetry (4.83) of the operator \(F_{E}\) the functions \(u_{+}(\tau)\) and \(u_{-}(-\tau)\) can differ at most by some non-degenerate matrix \(L\), \(u_{+}(\tau)=u_{-}(-\tau)\,L\). For the normalization (4.85) this implies
\[u_{+}(\tau)=u_{-}(-\tau). \tag{4.87}\]
For the choice (4.85) we have \(\Delta_{+-}^{N}=-\Delta_{-+}^{N}=I\), so that the blocks of the Euclidean and Lorentzian-Euclidean Green's function in (4.73) read
\[\begin{split} G_{E}(\tau,\tau^{\prime})=&\,u_{+}( \tau)\,u_{-}^{T}(\tau^{\prime})\,\theta(\tau-\tau^{\prime})\\ &+u_{-}(\tau)\,u_{+}^{T}(\tau^{\prime})\,\theta(\tau^{\prime}- \tau)\\ &+u_{+}(\tau)\,\nu\,u_{-}^{T}(\tau^{\prime})+u_{-}(\tau)\,\nu\,u_{+ }^{T}(\tau^{\prime}),\\ &\begin{bmatrix}G^{1}_{LE}(t,\tau)\\ G^{2}_{LE}(t,\tau)\end{bmatrix}=\mathbf{G}(t,0)(\mathbf{\omega}+\mathbf{\Omega})\mathbf{g}_{N }(\tau)\\ &=\left[\begin{array}{c}I\\ I\end{array}\right]\Big{(}v(t)\,\nu\,u_{-}^{T}(\tau)+v^{*}(t)\,(\nu+I)\,u_{+ }^{T}(\tau)\Big{)}.\end{split} \tag{4.89}\]
This finally leads us to the expression for the generating functional (2.37) with the total block-matrix Green's function given by Eqs.(2.38)-(2.44), which was advocated in Section 2.
If one introduces the Euclidean Wightman Green's functions
\[\begin{split} G^{>}_{E}(\tau,\tau^{\prime})\!=&\,u_{+}(\tau)( \nu+I)u_{-}^{T}(\tau^{\prime})\!+\!u_{-}(\tau)\,\nu\,u_{+}^{T}(\tau^{\prime}), \\ G^{<}_{E}(\tau,\tau^{\prime})&=\big{[}G^{>}_{E}(\tau^{\prime},\tau)\big{]} ^{T},\end{split} \tag{4.90}\]
then \(G_{E}(\tau,\tau^{\prime})\) can be expressed as
\[G_{E}(\tau,\tau^{\prime})=G^{>}_{E}(\tau,\tau^{\prime})\,\theta(\tau-\tau^{ \prime})+G^{<}_{E}(\tau,\tau^{\prime})\,\theta(\tau^{\prime}-\tau), \tag{4.91}\]
and the Lorentzian Wightman Green's function (4.56) is the analytic continuation of \(G^{>}_{E}(\tau,\tau^{\prime})\).
Now, it is time to connect the Euclidean basis functions \(u_{\pm}\) and the Lorentzian ones \(v\), \(v^{*}\). Specifically, let us show that both sets of functions can be obtained from a single function \(V(z)\) of the complex time \(z=t-i\tau\), obeying complexified equations of motion (3.5)
\[\bigg{[}-\frac{d}{dz}A(z)\frac{d}{dz}-\frac{d}{dz}B(z)+B^{T}(z)\frac{d}{dz}+C (z)\bigg{]}V(z)=0. \tag{4.92}\]
This equation reduces to the Lorentzian e.o.m. for \(z=t\) and to the Euclidean ones for \(z=-i\tau\). Under the assumption that coefficient functions \(A(t)\), \(B(t)\), and \(C(t)\) are real, together with the reflection symmetry (4.83), one can find that \(V^{*}(z)\equiv(V(z^{*}))^{*}\) obeys the same equation. Moreover, the initial conditions (4.26) for \(v\), \(v^{*}\) are connected with those (4.75) for \(u_{\pm}\) via analytic continuation \(t\mapsto-i\tau\). This motivates us to impose the boundary condition on \(V\) as follows
\[\big{[}iW_{\mathbb{C}}-\omega\,\big{]}V(z)\big{|}_{z=0}=0,\qquad W_{\mathbb{C }}\equiv A(z)\frac{d}{dz}+B(z) \tag{4.93}\]
which reduces to those for \(v\) or \(u_{+}\) after the substitution \(z=t\) or \(z=-i\tau\), respectively. Supplementing the latter condition with the normalization
\[V(0)=\frac{1}{\sqrt{2\omega}}, \tag{4.94}\]
one finds
\[v(t)=V(t),\quad u_{+}(\tau)=V(-i\tau), \tag{4.95}\]
i.e. \(v\) and \(u_{+}\) are analytic continuations of each other. Similarly, taking the complex conjugate of (4.93) and using the same assumptions of reality and reflection symmetry of the coefficient functions, we find that \(V^{*}\) obeys the following boundary condition
\[\big{[}iW_{\mathbb{C}}+\omega\big{]}V^{*}(z)\big{|}_{z=0}=0, \tag{4.96}\]
so that \(v^{*}\) and \(u_{-}\) can be obtained from \(V^{*}\) as
\[v^{*}(t)=V^{*}(t),\quad u_{-}(\tau)=V^{*}(-i\tau). \tag{4.97}\]
Thus, assuming that the complexified basis function \(V(z)\), \(z=t-i\tau\), is analytic in the strip \(0\leq t\leq T\), \(0\leq\tau<\beta\), we have the following transformation law of the basis functions
\[v(t-i\beta)=v(t)\,\frac{\nu}{\nu+I},\;v^{*}(t-i\beta)=v^{*}(t)\,\frac{\nu+I}{ \nu}. \tag{4.98}\]
Substituting into (4.56), one obtains the following condition on the Wightman Green's function
\[G_{>}(t-i\beta,t^{\prime})=G_{<}(t,t^{\prime}), \tag{4.99}\]
which is nothing but the KMS condition advocated in Section 2.
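The KMS condition (4.99) is easy to verify numerically for a single thermal mode, for which \(v(t)=e^{-i\omega_{0}t}/\sqrt{2\omega_{0}}\) and \(\nu\) is the Bose-Einstein occupation. The following sketch is ours (sample values and names are arbitrary); it evaluates both sides of (4.99) at a complex time argument:

```python
# A numerical illustration (ours) of the KMS condition (4.99) for one thermal
# mode: G_>(t - i beta, t') equals G_<(t, t') with the Bose-Einstein nu.
import numpy as np

w0, beta = 1.4, 0.9
nu = 1.0 / (np.exp(beta * w0) - 1.0)
v = lambda z: np.exp(-1j * w0 * z) / np.sqrt(2 * w0)    # analytic in z = t - i tau
vs = lambda z: np.exp(+1j * w0 * z) / np.sqrt(2 * w0)   # continuation of v*(t)

iG_gtr = lambda z, zp: v(z) * (nu + 1) * vs(zp) + vs(z) * nu * v(zp)  # cf. (4.56)
iG_less = lambda z, zp: iG_gtr(zp, z)                   # scalar case of G_< = G_>^T

t, tp = 0.37, -1.2
assert np.isclose(iG_gtr(t - 1j * beta, tp), iG_less(t, tp))
print("KMS condition (4.99) verified")
```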
## V Simple applications
### Harmonic oscillator
In this section we consider the harmonic oscillator as the simplest instructive example, demonstrating the main concepts and quantities introduced above, together with the convenience of the special choice of the basis functions \(v\), \(v^{*}\). The corresponding action reads
\[S[\phi]=\frac{1}{2}\int dt\,\big{(}\dot{\phi}^{2}-\omega_{0}^{2}\phi^{2}\big{)}, \tag{5.1}\]
where \(\phi\) is a one-component field defining the coordinate of the oscillator, and \(\omega_{0}\) is its frequency.
We will consider the system in the state defined by the Euclidean path integral (4.58), where the Euclidean action is the analytic continuation (4.71) of the Lorentzian one
\[S_{E}[\phi_{E}]=\frac{1}{2}\int d\tau\,\big{(}\dot{\phi}_{E}^{2}+\omega_{0}^{ 2}\phi_{E}^{2}\big{)}. \tag{5.2}\]
Note that for \(J_{E}=0\) the density matrix (4.58) coincides with the thermal density matrix at inverse temperature \(\beta\). The corresponding differential operator defining the Euclidean equation of motion \(F_{E}\phi_{E}=0\) and the Wronskian read
\[F_{E}=-\frac{d^{2}}{d\tau^{2}}+\omega_{0}^{2},\qquad W_{E}=\frac{d}{d\tau}. \tag{5.3}\]
To exploit the answer (4.66), one should first calculate the Dirichlet Green's function, which can be constructed out of corresponding basis functions \(u_{\pm}^{D}(\tau)\) satisfying
\[F_{E}u_{\pm}^{D}(\tau)=0,\quad u_{+}^{D}(\beta)=u_{-}^{D}(0)=0. \tag{5.4}\]
These basis functions can be chosen as
\[u_{+}^{D}(\tau)=\sinh\omega_{0}(\tau-\beta),\quad u_{-}^{D}(\tau)=\sinh \omega_{0}\tau \tag{5.5}\]
so that the Dirichlet Green's function has the following form
\[G_{D}(\tau,\tau^{\prime})=\frac{1}{\Delta_{+-}^{D}}\big{[}u_{+} ^{D}(\tau)\,u_{-}^{D}(\tau^{\prime})\,\theta(\tau-\tau^{\prime})\] \[\qquad\qquad\qquad+u_{-}^{D}(\tau)\,u_{+}^{D}(\tau^{\prime})\, \theta(\tau^{\prime}-\tau)\,\big{]}, \tag{5.6}\] \[\Delta_{+-}^{D}=-\sinh\beta\omega_{0}. \tag{5.7}\]
Substituting the Green's function obtained into (4.67), one finds the explicit form of the density matrix constituents
\[\mathbf{\varOmega}=\frac{\omega_{0}}{\sinh\beta\omega_{0}}\left[ \begin{array}{cc}\cosh\beta\omega_{0}&-1\\ -1&\cosh\beta\omega_{0}\end{array}\right] \tag{5.8}\] \[\mathbf{j}=\frac{1}{\sinh\beta\omega_{0}}\int_{0}^{\beta}d\tau\, \left[\begin{array}{cc}-\sinh\omega_{0}\tau\\ \sinh\omega_{0}(\tau-\beta)\end{array}\right]J_{E}(\tau). \tag{5.9}\]
The basis functions satisfying (4.26) are linear combinations of \(e^{\pm i\omega_{0}t}\), which are the solutions of the e.o.m., and read (cf. (3.96) and (3.103))
\[v(t)=\frac{1}{2\sqrt{2\omega}}\Big{[}\frac{\omega_{0}+\omega}{\omega_{0}}e^{- i\omega_{0}t}+\frac{\omega_{0}-\omega}{\omega_{0}}e^{i\omega_{0}t}\,\Big{]}, \tag{5.10}\]
where we assume \(\omega\) to be real for simplicity.
The remaining component of the Lorentzian Green's function (4.33) is the matrix \(\mathbf{\nu}\), defined in (4.32). Substituting \(\mathbf{\Omega}\) defined in (5.8), one obtains
\[\mathbf{\nu} =\left[\begin{array}{cc}\kappa&\nu\\ \nu&\kappa\end{array}\right],\quad\nu=\frac{\omega^{2}+\omega_{0}^{2}}{4\omega \omega_{0}}\coth\frac{\beta\omega_{0}}{2}-\frac{1}{2}, \tag{5.11}\] \[\kappa =\frac{\omega^{2}-\omega_{0}^{2}}{4\omega\omega_{0}}\coth\frac{ \beta\omega_{0}}{2}\]
which makes the Green's function (4.33) rather cumbersome even for the harmonic oscillator (see (4.47) for the Wightman function). Obviously, for the choice \(\omega=\omega_{0}\) the diagonal component \(\kappa\) of \(\mathbf{\nu}\) vanishes, which leads to significant simplifications. Let us show that this choice follows from the construction of Section 4.5. Extracting \(R\) and \(S\) from (5.8) as
\[R =\frac{\omega_{0}\cosh\beta\omega_{0}}{\sinh\beta\omega_{0}}, \quad S=-\frac{\omega_{0}}{\sinh\beta\omega_{0}}, \tag{5.12}\] \[\sigma \equiv\frac{S}{R}=-\frac{1}{\cosh\beta\omega_{0}}.\]
Substitution into (2.20) gives \(\omega=\omega_{0}\), as expected. This immediately leads to vanishing \(\kappa\) and
\[\mathbf{\nu}=\left[\begin{array}{cc}0&\nu\\ \nu&0\end{array}\right],\qquad\nu=\nu_{0}\equiv\frac{1}{e^{\beta\omega_{0}}-1}, \tag{5.13}\]
where one recognizes the Bose-Einstein average occupation number in the expression obtained for \(\nu\). The basis function \(v(t)\) takes the form of the positive-frequency basis function
\[v_{0}(t)\equiv v(t)\big{|}_{\omega=\omega_{0}}=\frac{1}{\sqrt{2\omega_{0}}}e^{ -i\omega_{0}t}. \tag{5.14}\]
Thus, from (4.33) we obtain the well-known expression for the Wightman Green's function
\[G_{>}(t,t^{\prime})=(\nu_{0}+1)\,v_{0}(t)v_{0}^{*}(t^{\prime})+\nu_{0}\,v_{0}^{*}(t)v_{0}(t^{\prime})\\ =\frac{1}{2\omega_{0}}\big{(}(\nu_{0}+1)\,e^{-i\omega_{0}(t-t^{\prime})}+\nu_{0}\,e^{i\omega_{0}(t-t^{\prime})}\big{)}, \tag{5.15}\]
in terms of which the corresponding Feynman and anti-Feynman Green's functions can be expressed in a straightforward way. Note that (4.47) with (5.10)-(5.11) substituted gives exactly the same answer, but in a much more cumbersome form.
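All the formulas of this subsection are explicit and can be cross-checked with a few lines of code. The following minimal sketch (plain Python/NumPy; the parameter values are purely illustrative) verifies that (5.11) reduces to \(\kappa=0\) and the Bose-Einstein number (5.13) at \(\omega=\omega_{0}\), and that the Wightman function (5.15) obeys the KMS condition (4.99):

```python
import numpy as np

beta, w0 = 0.9, 1.3   # illustrative inverse temperature and oscillator frequency

# Eq. (5.11): nu and kappa for an arbitrary real frequency parameter w
def nu_kappa(w):
    coth = 1.0 / np.tanh(beta * w0 / 2.0)
    return ((w**2 + w0**2) / (4 * w * w0) * coth - 0.5,
            (w**2 - w0**2) / (4 * w * w0) * coth)

nu, kappa = nu_kappa(w0)                  # the distinguished choice w = w0
nu0 = 1.0 / (np.exp(beta * w0) - 1.0)     # Bose-Einstein number, Eq. (5.13)
print(kappa, nu - nu0)                    # both ~0

# Eq. (5.15) continued to complex time; v0*(t') is continued as conj(v0(conj(z)))
v0 = lambda z: np.exp(-1j * w0 * z) / np.sqrt(2 * w0)
def G_gt(t, tp):
    return ((nu0 + 1) * v0(t) * np.conj(v0(np.conj(tp)))
            + nu0 * np.conj(v0(np.conj(t))) * v0(tp))

t, tp = 0.37, -0.84
print(abs(G_gt(t - 1j * beta, tp) - G_gt(tp, t)))   # ~0: KMS condition (4.99)
```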
### General one-dimensional system
Now, let us consider a more general case in which the field \(\phi\) is one-component, i.e. defines the coordinate of some non-equilibrium mechanical system, and the assumptions of Section 4.7 are fulfilled. In this case the Euclidean basis functions defined in (4.75) are also one-component. Thus, from (2.47), we conclude that under a shift of the argument by the period the basis functions \(u_{\pm}(\tau)\) simply acquire a numerical factor. According to the Floquet theory of periodic differential equations (the Euclidean equation of motion following from (4.59) belongs to exactly such a class of equations) this means that the basis functions \(u_{\pm}(\tau)\) are close to the notion of Bloch functions (eigenfunctions of the translation-by-period operation). This fact motivates us to apply Floquet theory [46], which is especially powerful in the one-dimensional case.
In the one-dimensional case the Euclidean equation of motion reads
\[\left[-\frac{d}{d\tau}A_{E}\frac{d}{d\tau}-\dot{B}_{E}(\tau)+C_{E}(\tau) \right]\!\phi_{E}(\tau)=0. \tag{5.16}\]
where \(A_{E}(\tau)\), \(B_{E}(\tau)\) and \(C_{E}(\tau)\) become simply functions (\(1\times 1\) matrices). Assuming that the kinetic term is positive, i.e. \(A_{E}(\tau)>0\), one can define a new variable
\[y(\tau)=\sqrt{A_{E}(\tau)}\phi_{E}(\tau). \tag{5.17}\]
so that the e.o.m. acquires the canonical form
\[\bigg{[}\frac{d^{2}}{d\tau^{2}}+Q(\tau)\bigg{]}y(\tau)=0, \tag{5.18}\]
where
\[Q(\tau)=-\frac{1}{2}\frac{d^{2}}{d\tau^{2}}\log A_{E}(\tau)- \frac{1}{4}\left(\frac{d}{d\tau}\log A_{E}(\tau)\right)^{2}\\ +\frac{1}{A_{E}(\tau)}\left(\dot{B}_{E}(\tau)-C_{E}(\tau)\right) \tag{5.19}\]
and \(Q(\tau)\) is periodic and reflection symmetric
\[Q(\tau+\beta)=Q(\tau),\qquad Q(\tau)=Q(-\tau). \tag{5.20}\]
The equation (5.18) with periodic \(Q(\tau)\) is usually referred to as Hill's equation [47].
Floquet theory guarantees that if the equation (5.18) has no periodic or doubly-periodic solutions, then there exists a basis \(y_{\pm}(\tau)\) of solutions such that
\[y_{\pm}(\tau+\beta)=e^{\mp\beta\varepsilon}y_{\pm}(\tau) \tag{5.21}\]
where the parameter \(\varepsilon\) is either real or imaginary, and depends functionally on \(Q(\tau)\). Without loss of generality we set \(\varepsilon>0\) in the real case and \(\varepsilon=-iq\), \(0<q<\pi/\beta\), in the imaginary one. The basis functions have the following important properties, depending on whether \(\varepsilon\) is imaginary or real. Real \(\varepsilon\) leads to real \(y_{\pm}(\tau)\), whereas imaginary \(\varepsilon\) implies \((y_{\pm}(\tau))^{*}=y_{\mp}(\tau)\). The reflection symmetry \(Q(\tau)=Q(-\tau)\) leads to the additional property \(y_{\pm}(\tau)=y_{\mp}(-\tau)\), so that \((y_{\pm}(\tau))^{*}=y_{\pm}(-\tau)\) for imaginary \(\varepsilon\).
Now, we can return to the original equation (5.16). Using (5.17), one can obtain the basis of its solutions \(u_{\pm}(\tau)\) out of \(y_{\pm}(\tau)\) as
\[u_{\pm}(\tau)=\frac{1}{\sqrt{A_{E}(\tau)}}y_{\pm}(\tau). \tag{5.22}\]
This basis inherits the properties of \(y_{\pm}(\tau)\) under translation by period, reflection and complex conjugation. In particular
\[u_{\pm}(\tau+\beta)=e^{\mp\beta\varepsilon}\,u_{\pm}(\tau). \tag{5.23}\]
Comparing with (2.47) one concludes that the parameter \(\varepsilon\) is connected to \(\nu\) as
\[\nu=\frac{1}{e^{\beta\varepsilon}-1}. \tag{5.24}\]
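The connection (5.21)-(5.24) between the monodromy of Hill's equation and the occupation number can be made concrete numerically. The following minimal sketch (Python/SciPy) uses a hypothetical Mathieu-type potential \(Q(\tau)\) -- periodic and reflection symmetric, as required by (5.20), and chosen negative so that \(\varepsilon\) comes out real for these values; the actual \(Q(\tau)\) of (5.19) is of course fixed by the background functions:

```python
import numpy as np
from scipy.integrate import solve_ivp

# hypothetical Mathieu-type potential (illustrative values)
beta, a, b = 2.0, 1.5, 0.4
Q = lambda tau: -(a + b * np.cos(2 * np.pi * tau / beta))   # periodic, Q(tau) = Q(-tau)

def monodromy():
    """Map (y(0), y'(0)) -> (y(beta), y'(beta)) for Hill's equation (5.18)."""
    rhs = lambda tau, y: [y[1], -Q(tau) * y[0]]
    cols = [solve_ivp(rhs, (0.0, beta), y0, rtol=1e-10, atol=1e-12).y[:, -1]
            for y0 in ([1.0, 0.0], [0.0, 1.0])]
    return np.array(cols).T

M = monodromy()                 # det M = 1 by Wronskian conservation
tr = np.trace(M)                # tr M = 2 cosh(beta*eps); |tr| < 2 <=> eps = -iq
eps = np.arccosh(abs(tr) / 2.0) / beta if abs(tr) > 2.0 else float("nan")
nu = 1.0 / (np.exp(beta * eps) - 1.0)    # occupation number, Eq. (5.24)
print(np.linalg.det(M), tr, eps, nu)
```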
The basis functions \(u_{\pm}(\tau)\) have significantly different frequency properties depending on whether \(\varepsilon\) is real or imaginary. Thus, real \(\varepsilon\) implies
\[(W_{E}\pm\omega)u_{\pm}\big{|}_{\tau=0,\beta}=0 \tag{5.25}\]
where \(\omega\) is a real number, which coincides with that defined in (2.20), as will be described below. In contrast, imaginary \(\varepsilon\) leads to the property \((u_{\pm}(\tau))^{*}=u_{\pm}(-\tau)\), so that the fraction \(W_{E}u_{\pm}(0)/u_{\pm}(0)=W_{E}u_{\pm}(\beta)/u_{\pm}(\beta)\) is imaginary\({}^{7}\), and one can write
Footnote 7: In deriving this property we use that \(\dot{A}_{E}(0)=B_{E}(0)=0\), following from \(A_{E}(\tau)=A_{E}(-\tau)\) and \(B_{E}(\tau)=-B_{E}(-\tau)\).
\[(iW_{E}\pm\omega^{\prime})u_{\pm}\big{|}_{\tau=0,\beta}=0, \tag{5.26}\]
where the number \(\omega^{\prime}=i\omega\) is real.
Let us calculate the density matrix (4.58) and examine its properties. To use the answer (4.66), one should first construct the Dirichlet Green's function. The corresponding basis functions \(u_{\pm}^{D}(\tau)\) obeying \(u_{-}^{D}(0)=u_{+}^{D}(\beta)=0\) can be constructed as linear combinations of \(u_{\pm}(\tau)\). Namely, one defines \(u_{-}^{D}(\tau)\) as
\[u_{-}^{D}(\tau)=\frac{1}{2}\big{(}u_{+}(\tau)-u_{-}(\tau)\big{)}, \tag{5.27}\]
so that \(u_{-}^{D}(0)=0\) due to \(u_{-}(\tau)=u_{+}(-\tau)\). Due to the reflection symmetry of (5.16), one can obtain \(u_{+}^{D}(\tau)\) from \(u_{-}^{D}(\tau)\) as
\[u_{+}^{D}(\tau)\equiv u_{-}^{D}(\beta-\tau)=\frac{1}{2}\big{(}e^{-\beta \varepsilon}u_{-}(\tau)-e^{\beta\varepsilon}u_{+}(\tau)\big{)}. \tag{5.28}\]
The corresponding Wronskian of \(u_{+}^{D}\) and \(u_{-}^{D}\) reads
\[\Delta_{+-}^{D} =u_{+}^{D}\,(W_{E}u_{-}^{D})-(W_{E}u_{+}^{D})\,u_{-}^{D}\] \[=-\sinh\beta\varepsilon\,u_{+}(0)\,W_{E}u_{+}(0) \tag{5.29}\]
where we use the relations (5.27)-(5.28) between Dirichlet basis functions and \(u_{\pm}(\tau)\), and its derivatives at the boundary points
\[\begin{split} W_{E}u_{-}^{D}(0)&=-W_{E}u_{+}^{D}( \beta)=W_{E}u_{+}(0),\\ W_{E}u_{-}^{D}(\beta)&=-W_{E}u_{+}^{D}(0)=\cosh \beta\varepsilon\,W_{E}u_{+}(0).\end{split} \tag{5.30}\]
Substitution of the corresponding Dirichlet Green's function to (4.67) gives
\[\mathbf{\varOmega}=\frac{\omega}{\sinh\beta\varepsilon}\begin{bmatrix}\cosh \beta\varepsilon&-1\\ -1&\cosh\beta\varepsilon\end{bmatrix}, \tag{5.31}\]
where \(\omega\) is defined in (5.25). Note that for real \(\varepsilon\) this coincides with (4.57), with (5.24) substituted. For imaginary \(\varepsilon\) we express it as \(\varepsilon=-iq\), so that \(\mathbf{\varOmega}\) has the form
\[\mathbf{\varOmega}=\frac{\omega^{\prime}}{\sin\beta q}\begin{bmatrix}\cos\beta q &-1\\ -1&\cos\beta q\end{bmatrix}, \tag{5.32}\]
where \(\omega^{\prime}\) is defined in (5.26).
Following Appendix D, let us examine the properties of the underlying density matrix, defined by the obtained \(\mathbf{\varOmega}\). For real \(\varepsilon\) we have \(R=\omega\coth\beta\varepsilon\) and \(S=-\omega/\sinh\beta\varepsilon\), so that \(R\), \(R+S\) and \(R-S\) all have the same sign, and we conclude that the density matrix is bounded, normalizable and positive-definite for \(\omega>0\). If this is the case, \(\sigma\equiv S/R=-1/\cosh\beta\varepsilon\), so the definition (5.25) is consistent with (2.20), and the particle interpretation is allowed. In contrast, for imaginary \(\varepsilon\) we have \(R=\omega^{\prime}\cot\beta q\), \(S=-\omega^{\prime}/\sin\beta q\), so that \(R+S\) and \(R-S\) have different signs; hence, even if the density matrix is normalizable, the particle interpretation is not available.
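These sign patterns are immediate to confirm numerically; a minimal sketch (Python/NumPy, with illustrative values of \(\beta\), \(\varepsilon\), \(q\), \(\omega\) and \(\omega^{\prime}\)):

```python
import numpy as np

beta = 2.0

# real eps: R, R+S and R-S all share the sign of omega
eps, w = 0.8, 1.0
R, S = w / np.tanh(beta * eps), -w / np.sinh(beta * eps)
print(R > 0, R + S > 0, R - S > 0)   # True True True: normalizable, bounded, positive

# imaginary eps = -iq: R+S and R-S necessarily have opposite signs
q, wp = 0.6, 1.0                      # 0 < beta*q < pi
R, S = wp / np.tan(beta * q), -wp / np.sin(beta * q)
print((R + S) * (R - S) < 0)          # True: no particle interpretation
```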
### The case of a pure state: vacuum no-boundary wavefunction
As we have shown above, the Euclidean density matrix prescription in a rather nontrivial way suggests a distinguished choice of the particle interpretation. In the context of the pure Hartle-Hawking state this fact is well known and takes place in a much simpler way. Let us briefly discuss it here, along with a general demonstration of how the transition from a mixed state to a pure one proceeds via the change of the spacetime topology of the underlying Euclidean instanton from Fig. 1 to Fig. 3.
The no-boundary state defined by the path integral over the fields on the Euclidean "hemisphere" \(D_{+}^{4}\) of Fig. 3 (and its reflection dual on \(D_{-}^{4}\), considered as a factor in the factorizable pure density matrix of Fig. 3) is the vacuum wavefunction (3.109) with the real frequency (3.107), \(\omega=[iWv(t)][v(t)]^{-1}|_{t=0}\). The relevant positive-frequency basis function \(v(t)\), similarly to (2.50), can be regarded as the analytic continuation of a special Euclidean basis function \(u(\tau)\), \(v(t)=u(\tau_{+}+it)\). This basis function is selected by the requirement that it is regular everywhere inside \(D_{+}^{4}\), including its pole which we label by \(\tau=0\) [12; 48].
To show this one should repeat the calculation of Section 4.6 on \(D_{+}^{4}\) -- the support of the Euclidean action \(S_{E}(\varphi)\) evaluated at the regular solution of equations of motion \(F_{E}\phi(\tau)=0\) with the boundary value \(\varphi=\phi(\tau_{+})\) at the single boundary \(\Sigma_{+}=\partial D_{+}^{4}\). This regular solution
is given by the expression proportional to the regular basis function \(u(\tau)\) of \(F_{E}\) on \(D^{4}_{+}\),
\[\phi(\tau)=u(\tau)[u(\tau_{+})]^{-1}\varphi, \tag{111}\]
because the contribution of the complementary basis function dual to the regular \(u(\tau)\) should be excluded in view of its singularity at \(\tau=0\).8 After the substitution into the expression for the action (108) its on-shell value reduces to the contribution of the single surface term at \(\varSigma_{+}\), \(S_{E}(\varphi)=\frac{1}{2}\phi^{T}(W_{E}\phi)|_{\varSigma_{+}}\). As a result \(S_{E}(\varphi)=\frac{1}{2}\varphi^{T}\omega\varphi\), and the Hartle-Hawking wavefunction \(\varPsi_{HH}(\varphi)\propto e^{-S_{E}(\varphi)}\) becomes the vacuum state (109) with
Footnote 8: The point \(\tau=0\) is an internal regular point of a smooth manifold \(D^{4}_{+}\), so that this point with \(\tau\) treated as a radial coordinate turns out to be a regular singularity of the equation \(F_{E}\phi(\tau)=0\). Its two linearly independent solutions \(u_{\mp}(\tau)\) have the asymptotic behavior \(u_{\mp}\propto\tau^{\mu_{\mp}}\) with \(\mu_{-}>0>\mu_{+}\), so that only \(u(\tau)\equiv u_{-}(\tau)\) is the regular one, while the contribution of the singular \(u_{+}(\tau)\to\infty\), \(\tau\to 0\), should be discarded from the solution \(\phi(\tau)\) [48].
\[\omega=-[W_{E}u(\tau_{+})][u(\tau_{+})]^{-1}\] \[\qquad\qquad=[iWv(t)][v(t)]^{-1}\big{|}_{t=0}, \tag{112}\]
where the second equality follows from the analytic continuation rule \(v(t)=u(\tau_{+}+it)\). Thus, the Hartle-Hawking no-boundary wavefunction of the linearized field modes is the vacuum of particles uniquely defined by a particular choice of positive-frequency basis functions \(v(t)\) which in their turn are the analytic continuation of the _regular_ Euclidean basis functions \(u(\tau)\), \(v(t)=u(\tau_{+}+it)\).9 This is a well-known fact [48; 12] which in the case of de Sitter cosmology corresponds to the Euclidean de Sitter invariant vacuum [13; 14].
Footnote 9: The set \(u(\tau)\) is of course defined only up to a linear transformation with some constant matrix \(L\), \(u(\tau)\mapsto u(\tau)L\), \(v(t)\mapsto v(t)L\), but this Bogoliubov transformation does not mix frequencies and therefore does not change particle interpretation.
It is known that the vacuum in-in formalism in equilibrium models can be reached by taking the zero temperature limit \(\beta\to\infty\). It is not quite clear how this limit can be obtained in generic non-equilibrium situations, but it is likely that the transition from a mixed Euclidean density matrix to a pure state is always associated with ripping the Euclidean domain into two disjoint manifolds \(D^{4}_{+}\) and \(D^{4}_{-}\) depicted in Fig. 3. To show this, consider the generic situation of a mixed state with the Euclidean density matrix of Fig. 1. This density matrix has a Gaussian form (4)-(6) with the matrix \(\mathbf{\varOmega}\) given by Eq. (31) with the Dirichlet Green's function, which can be represented in terms of two sets of Dirichlet basis functions \(u^{D}_{\pm}(\tau)\), \(u^{D}_{\pm}(\tau_{\pm})=0\),
\[G_{D}(\tau,\tau^{\prime})=-u^{D}_{+}(\tau)\,(\varDelta^{D}_{- +})^{-1}[u^{D}_{-}(\tau^{\prime})]^{T}\,\theta(\tau-\tau^{\prime})\] \[\qquad\qquad\qquad+u^{D}_{-}(\tau)\,(\varDelta^{D}_{+-})^{-1}[u^ {D}_{+}(\tau^{\prime})]^{T}\theta(\tau^{\prime}-\tau). \tag{113}\]
Now consider the case of a pure state, when the density matrix factorizes into the product of two wavefunctions, or the situation of \(\varOmega_{+-}\equiv S=0\). This off-diagonal block of \(\mathbf{\varOmega}\) reads as
\[S =\overset{\rightarrow}{W}_{E}G_{D}(\tau_{+},\tau_{-})\overset{ \leftarrow}{W}_{E}\] \[=[W_{E}u^{D}_{+}(\tau_{+})]\,[u^{D}_{+}(\tau_{-})]^{-1}, \tag{114}\]
where we used the fact that
\[\Delta_{-+}=[u^{D}_{-}(\tau_{+})]^{T}W_{E}u^{D}_{+}(\tau_{+})=-[W_{E}u^{D}_{-} (\tau_{-})]^{T}u^{D}_{+}(\tau_{-})\]
in view of the boundary conditions on \(u^{D}_{\pm}(\tau)\). Therefore, the requirement \(S=0\) implies a singularity of \(u^{D}_{+}(\tau_{-})\), which is impossible, because the Green's function \(G_{D}(\tau,\tau^{\prime})\) can have a singularity only at the coincidence point of its arguments \(\tau=\tau^{\prime}\). This means that no Dirichlet Green's function on a smooth connected Euclidean manifold of topology \([\tau_{-},\tau_{+}]\times S^{3}\) can generate the density matrix of a pure state. The only remaining option is ripping the bridge between \(\varSigma_{+}\) and \(\varSigma_{-}\) into the union of two disjoint parts \(D^{4}_{\pm}\) by shrinking the middle time slice at \(\bar{\tau}\equiv\frac{\tau_{+}+\tau_{-}}{2}\) to a point.
In the context of the cosmological model driven by the set of Weyl invariant quantum fields [16; 9; 22] this option also matches the interpretation of the zero temperature limit \(\beta\to\infty\), because the inverse temperature of the gas of conformal particles in this model is given by the instanton period in units of the conformal time, \(\beta=2\int_{\bar{\tau}}^{\tau_{+}}d\tau/a(\tau)\to\infty\), which diverges because the cosmological scale factor (the size of the spatial \(S^{3}\)-section) \(a(\tau)\to 0\) at \(\tau\to\bar{\tau}\).
## 6 Discussion and Conclusions
The generality of the above formalism allows one to apply it to a wide scope of problems ranging from condensed matter physics to quantum gravity and cosmology. Our goal in future work will be its use in the calculation of the primordial CMB spectrum of cosmological perturbations in the model of microcanonical initial conditions for inflationary cosmology [16; 9; 20], which was briefly discussed as a motivation for this research. The quasi-thermal nature of this setup was associated in these papers with the fact that the model was based on local Weyl invariant (conformal) matter which, on the one hand, generates the Friedmann background providing the necessary reflection symmetry and, on the other hand, turns out to be effectively in equilibrium, because in the comoving frame it describes a static situation.
Our results show, however, that thermal properties, including the particle interpretation with the distinguished positive/negative frequency decomposition, are valid in a much more general case. Specifically, the corresponding frequency matrix \(\omega\) in the initial conditions problem for basis functions (15) is shown to be determined by the parameters of the Gaussian type density matrix (20), and
the occupation number matrix \(\nu\) reads as (21)-(22). In this setup, the Euclidean density matrix, which incorporates the reflection symmetry property guaranteed by (4.61), plays the role of a particular case. If in addition the Lorentzian action is related to the Euclidean action via the analytic continuation at the turning points of the bounce background (which, of course, respects its reflection symmetry), important analytic properties of correlation functions, including the KMS condition, begin to hold. These are the main results of the paper. They allow one to derive the full set of Lorentzian domain, Euclidean domain and mixed, Lorentzian-Euclidean, Green's functions of the in-in formalism and reveal its rich analytic structure. In particular, the results of Section 4.2 significantly extend those of [49], where the nonequilibrium evolution of Gaussian type density matrices was examined. The discussion of simple application examples in Section 5 shows the relation of the obtained formalism to the stability properties of dynamical systems in Floquet theory and the theory of Bloch functions. These properties, in their turn, are related to the eigenmode properties of the wave operator \(F_{E}\) subject to periodic boundary conditions on the bounce instanton within the Euclidean time \([\,\tau_{-},\tau_{+}]\)-range and deserve further studies.
The prospective nature of the rich analytic structure of the Euclidean-Lorentzian in-in formalism consists in the hope that the quantum equivalence of purely Euclidean calculations of loop effects with those of the Lorentzian calculations can be extended to generic bounce type backgrounds. This equivalence was proven in [31; 32] for the vacuum case of the flat chart of de Sitter spacetime vs its Euclidean version -- the \(S^{4}\) instanton. A similar but much simpler equivalence at the one-loop order was observed within the covariant curvature expansion in asymptotically flat spacetime for systems with the Poincare-invariant vacuum which is prescribed as the initial condition at asymptotic past infinity [50]. This equivalence is realized via a special type of analytic continuation from Euclidean to Lorentzian spacetime, which guarantees unitarity and causality of the relevant nonlocal form factors.
Further applications of the in-in formalism in quantum cosmology require its extension to models with local gauge and diffeomorphism invariance (see also [51] for a related problem in the context of quantum electrodynamics). What has been built thus far is the formalism in the physical sector of the theory for explicitly disentangled physical degrees of freedom. In cosmological models subject to time parametrization invariance, time is hidden among the full set of metric and matter field variables, and disentangling time is a part of the Hamiltonian reduction to the physical sector. This reduction shows that the cosmological background can be devoid of physical degrees of freedom (just like the Friedmann equation in FRW-metric models does not involve any physical degree of freedom in the metric sector of the system). This might play a major role in handling a zero mode of the wave operator \(F_{E}\), which necessarily arises on the bounce type background [52] and comprises, in the cosmological context, one of the aspects of the problem of time in quantum gravity [11]. This and the other problems of cosmological applications of the in-in formalism go beyond the scope of this paper and will be the subject of future research.
## Acknowledgements
The authors are grateful to A.A. Radovskaya, A.G. Semenov and D.A. Trunin for useful discussions. A.O.B. is also grateful for fruitful and enjoyable conversations with Philip Stamp and Richard Woodard, and especially grateful to Alexander Kamenshchik for long term collaboration on the problem of quantum initial conditions in cosmology. The work was supported by the Russian Science Foundation grant No 23-12-00051.
## Appendix A Inversion of matrices
Suppose we want to invert the following even-dimensional matrices of the form
\[\mathbf{M}_{1}=\mathbf{I}-\mathbf{P}_{\pm}\mathbf{A},\quad\mathbf{M}_{2}=\mathbf{I}-\mathbf{A}\mathbf{P}_{\pm},\] (A.1)
where
\[\mathbf{P}_{\pm}=\mathbf{I}\pm\mathbf{X},\qquad\mathbf{X}=\begin{bmatrix}0&I\\ I&0\end{bmatrix},\] (A.2)
and the matrix \(\mathbf{A}\) satisfies the following property
\[\mathbf{X}\,\mathbf{A}\,\mathbf{X}=\mathbf{A}^{*}.\] (A.3)
In terms of the block-matrix representation this simply means that \(\mathbf{A}\) has the following form
\[\mathbf{A}=\begin{bmatrix}B&C\\ C^{*}&B^{*}\end{bmatrix}.\] (A.4)
Next, we formally expand the inverses of (A.1) in a geometric series and obtain
\[(\mathbf{M}_{1})^{-1}=\sum_{n=0}^{\infty}(\mathbf{P}_{\pm}\mathbf{A})^{n},\quad(\mathbf{M}_{2 })^{-1}=\sum_{n=0}^{\infty}(\mathbf{A}\mathbf{P}_{\pm})^{n}\] (A.5)
Observing that \((\mathbf{P}_{\pm}\mathbf{A})^{n}=\mathbf{P}_{\pm}(\mathbf{A}+\mathbf{A}^{*})^{n-1}\mathbf{A}\) and \((\mathbf{A}\mathbf{P}_{\pm})^{n}=\mathbf{A}(\mathbf{A}+\mathbf{A}^{*})^{n-1}\mathbf{P}_{\pm}\), which follow from \(\mathbf{P}_{\pm}\mathbf{A}\mathbf{P}_{\pm}=(\mathbf{A}+\mathbf{A}^{*})\mathbf{P}_{\pm}\) and the property (A.3), we immediately find the needed inversion formulae
\[(\mathbf{M}_{1})^{-1} =\mathbf{I}+\sum_{n=0}^{\infty}\mathbf{P}_{\pm}(\mathbf{A}+\mathbf{A}^{*})^{n}\bm {A}\] \[=\mathbf{I}+\mathbf{P}_{\pm}(\mathbf{I}-\mathbf{A}-\mathbf{A}^{*})^{-1}\mathbf{A},\] (A.6) \[(\mathbf{M}_{2})^{-1} =\mathbf{I}+\mathbf{A}\sum_{n=0}^{\infty}(\mathbf{A}+\mathbf{A}^{*})^{n}\mathbf{P}_{\pm}\] \[=\mathbf{I}+\mathbf{A}(\mathbf{I}-\mathbf{A}-\mathbf{A}^{*})^{-1}\mathbf{P}_{\pm}.\] (A.7)
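The inversion formulae (A.6)-(A.7) are exact algebraic identities (they only require the invertibility of \(\mathbf{I}-\mathbf{A}-\mathbf{A}^{*}\)), which makes them easy to test numerically. A minimal sketch with a random matrix of the block form (A.4):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
I2 = np.eye(2 * n, dtype=complex)
X = np.block([[np.zeros((n, n)), np.eye(n)], [np.eye(n), np.zeros((n, n))]])

# random A of the block form (A.4); then X A X = A* holds by construction
B = 0.2 * (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
C = 0.2 * (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
A = np.block([[B, C], [C.conj(), B.conj()]])
assert np.allclose(X @ A @ X, A.conj())

K = np.linalg.inv(I2 - A - A.conj())
for P in (I2 + X, I2 - X):                  # P_+ and P_-
    M1_inv = I2 + P @ K @ A                 # Eq. (A.6)
    M2_inv = I2 + A @ K @ P                 # Eq. (A.7)
    print(np.allclose((I2 - P @ A) @ M1_inv, I2),
          np.allclose(M2_inv @ (I2 - A @ P), I2))
```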
## Appendix B Derivation of Eq. (4.32)
To obtain the expression (4.33) for the Green's function \(\mathbf{G}(t,t^{\prime})\) it is sufficient to derive the expression (4.31) for its part \(\mathbf{v}_{+}(t)(i\mathbf{\Delta}_{-+})^{-1}\mathbf{v}_{-}^{T}(t^{\prime})\). For this purpose we first write down the explicit form of \(\mathbf{v}_{-}^{T}\), substituting (4.29a) into (4.28) which gives
\[\mathbf{v}_{-}^{T}(t^{\prime})=(\mathbf{\omega}+\mathbf{\Omega})\frac{1}{\sqrt{2\mathbf{ \omega}_{\rm re}}}\mathbf{v}^{\dagger}(t^{\prime})+(\mathbf{\omega}^{*}-\mathbf{\Omega}) \frac{1}{\sqrt{2\mathbf{\omega}_{\rm re}}}\mathbf{v}^{T}(t^{\prime}).\] (B.1)
Next, by adding and subtracting the expression \([-\mathbf{X}\mathbf{\omega}+\mathbf{\Omega}\mathbf{X}]\left(2\mathbf{\omega}_{\rm re}\right)^{-1/2 }\mathbf{v}^{\dagger}(t^{\prime})\) we artificially disentangle the expression featuring in the square brackets of (4.30), so that we get
\[\mathbf{v}_{-}^{T}(t^{\prime})= \Big{[}(\mathbf{I}-\mathbf{X})\,\mathbf{\omega}+\mathbf{\Omega}\left(\mathbf{I}+\mathbf{ X}\right)\Big{]}\frac{1}{\sqrt{2\mathbf{\omega}_{\rm re}}}\mathbf{v}^{\dagger}(t^{ \prime})\] \[+(\mathbf{\omega}^{*}-\mathbf{\Omega})\frac{1}{\sqrt{2\mathbf{\omega}_{\rm re }}}\mathbf{v}_{+}^{T}(t^{\prime}),\] (B.2)
where \(\mathbf{v}^{T}\) was complemented to \(\mathbf{v}_{+}^{T}\) in accordance with Eq. (4.27). Note that \(\mathbf{\omega}_{\rm re}\) and \(\mathbf{X}\) commute with each other. Further, by noting that \(\mathbf{X}\mathbf{\omega}\mathbf{X}=\mathbf{\omega}^{*}\) let us rewrite the difference \(\mathbf{\omega}^{*}-\mathbf{\Omega}\) so that we again disentangle the same expression as in square brackets above
\[\mathbf{\omega}^{*}-\mathbf{\Omega}=(\mathbf{\omega}+\mathbf{\Omega})\mathbf{X}-\Big{[}(\mathbf{I}-\bm {X})\,\mathbf{\omega}+\mathbf{\Omega}\left(\mathbf{I}+\mathbf{X}\right)\Big{]}\mathbf{X}.\] (B.3)
As a result \(\mathbf{v}_{-}^{T}\) takes the form
\[\mathbf{v}_{-}^{T}(t^{\prime})= \Big{[}(\mathbf{I}-\mathbf{X})\,\mathbf{\omega}+\mathbf{\Omega}\left(\mathbf{I}+\mathbf{X }\right)\Big{]}\] \[\times\frac{1}{\sqrt{2\mathbf{\omega}_{\rm re}}}\big{(}\mathbf{v}^{ \dagger}(t^{\prime})-\mathbf{X}\mathbf{v}_{+}^{T}(t^{\prime})\big{)}\] \[+(\mathbf{\omega}+\mathbf{\Omega})\frac{1}{\sqrt{2\mathbf{\omega}_{\rm re}}} \mathbf{v}_{+}^{T}(t^{\prime}).\] (B.4)
Substitution to \(\mathbf{v}_{+}(t)(i\mathbf{\Delta}_{-+})^{-1}\mathbf{v}_{-}^{T}(t^{\prime})\) gives
\[\mathbf{v}_{+}(t)(i\mathbf{\Delta}_{-+})^{-1}\mathbf{v}_{-}^{T}(t^{\prime})= \mathbf{v}_{+}(t)\mathbf{v}^{\dagger}(t^{\prime})-\mathbf{v}_{+}(t)\mathbf{X}\mathbf{v}_{+}^{T}(t ^{\prime})\\ +\mathbf{v}_{+}(t)\sqrt{2\mathbf{\omega}_{\rm re}}\Big{[}(\mathbf{I}-\mathbf{X}) \,\mathbf{\omega}+\mathbf{\Omega}\left(\mathbf{I}+\mathbf{X}\right)\Big{]}^{-1}\\ \times(\mathbf{\omega}+\mathbf{\Omega})\mathbf{X}\frac{1}{\sqrt{2\mathbf{\omega }_{\rm re}}}\mathbf{v}_{+}^{T}(t^{\prime}),\] (B.5)
where the expression in the square brackets can be rearranged as follows by using the fact that the matrices \(\mathbf{\omega}_{\rm re}\) and \(\mathbf{X}\) commute and \(\mathbf{X}^{2}=\mathbf{I}\)
\[(\mathbf{I}-\mathbf{X})\,\mathbf{\omega}+\mathbf{\Omega}\left(\mathbf{I}+\mathbf{X} \right)=(\mathbf{\omega}+\mathbf{\Omega})\mathbf{X}\frac{1}{\sqrt{2\mathbf{\omega}_{\rm re}}} \\ \times\Big{[}\mathbf{I}+\mathbf{X}-\sqrt{2\mathbf{\omega}_{\rm re}}\mathbf{X}(\bm {\omega}+\mathbf{\Omega})^{-1}\mathbf{X}\sqrt{2\mathbf{\omega}_{\rm re}}\Big{]}\sqrt{2\mathbf{ \omega}_{\rm re}}.\] (B.6)
Substituting this expression to (B.5) we get the desired result (4.31) with the matrix \(\mathbf{\nu}\) given by (4.32).
## Appendix C Derivation of Eq. (4.73)
The aim of this appendix is twofold. First of all, we derive the simple form (4.38) of \(\mathbf{G}(0,0)\), showing that the Euclidean Green's function of Eq. (4.69)
\[G_{E}(\tau,\tau^{\prime})=G_{D}(\tau,\tau^{\prime})\\ +i\,\mathbf{g}_{D}\overset{\leftarrow}{W}_{E}(\tau)\,\mathbf{G}(0,0) \overset{\rightarrow}{W}_{E}\mathbf{g}_{D}(\tau^{\prime})\] (C.1)
is indeed independent of Lorentzian quantities. Next, we express the generating functional (4.69) in terms of the Neumann Green's function rather than the Dirichlet one by using the relations (3.65)-(3.66), and thus derive another form of the periodic Green's function (4.73).
Let us write down the explicit form of \(i\mathbf{G}(0,0)\) taken from equations (4.21) and (4.30)
\[i\mathbf{G}(0,0)=(\mathbf{I}+\mathbf{X})\Big{[}(\mathbf{I}-\mathbf{X})\,\mathbf{\omega}+\mathbf{\Omega} \left(\mathbf{I}+\mathbf{X}\right)\Big{]}^{-1}.\] (C.2)
Next, we identically add and subtract \(\mathbf{\Omega}^{*}\) inside the square brackets and extract the factor \(\mathbf{\Omega}+\mathbf{\Omega}^{*}=2\mathbf{\Omega}_{\rm re}\) out of the brackets. So, we obtain
\[i\mathbf{G}(0,0)=\frac{\mathbf{I}+\mathbf{X}}{2\mathbf{\Omega}_{\rm re}}\big{[} \mathbf{I}-(\mathbf{I}-\mathbf{X})\mathbf{A}\big{]}^{-1},\] (C.3) \[\mathbf{A}\equiv(\mathbf{\Omega}^{*}-\mathbf{\omega})(\mathbf{\Omega}+\mathbf{\Omega }^{*})^{-1},\]
where we used the fact that \(\mathbf{X}\mathbf{\Omega}^{*}\mathbf{X}=\mathbf{\Omega}\) and \(\mathbf{X}^{2}=\mathbf{I}\), and also noted that the fraction is unambiguous since \(\mathbf{I}+\mathbf{X}\) and \(\mathbf{\Omega}_{\rm re}\) commute with each other. Now, the expression in the square brackets can be inverted with the use of (A.6). The result of this inversion is the identity matrix \(\mathbf{I}\) plus a second term having \(\mathbf{I}-\mathbf{X}\) as a left multiplier. Observing that \((\mathbf{I}+\mathbf{X})(\mathbf{I}-\mathbf{X})=0\), we conclude that only \(\mathbf{I}\) survives, hence the result reads
\[i\mathbf{G}(0,0)=\frac{\mathbf{I}+\mathbf{X}}{2\mathbf{\Omega}_{\rm re}}.\] (C.4)
Thus, we see that the Euclidean Green's function (C.1) is independent of any Lorentzian quantities, \(\mathbf{\omega}\) in particular.
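The key identity (C.4) is again a finite algebraic statement and can be verified directly for random matrices with the required symmetries, \(\mathbf{X}\mathbf{\omega}\mathbf{X}=\mathbf{\omega}^{*}\) and \(\mathbf{X}\mathbf{\Omega}^{*}\mathbf{X}=\mathbf{\Omega}\). A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2
I2 = np.eye(2 * n, dtype=complex)
Z = np.zeros((n, n))
X = np.block([[Z, np.eye(n)], [np.eye(n), Z]])

w1 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
w1 = w1 + w1.T + 3.0 * np.eye(n)              # symmetric frequency block
w = np.block([[w1, Z], [Z, w1.conj()]])       # satisfies X w X = w*

R = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
R = R + R.T + 4.0 * np.eye(n)
S = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
S = 0.2 * (S + S.conj().T)
Om = np.block([[R, S], [S.conj(), R.conj()]])  # satisfies X Om* X = Om

lhs = (I2 + X) @ np.linalg.inv((I2 - X) @ w + Om @ (I2 + X))   # Eq. (C.2)
rhs = (I2 + X) @ np.linalg.inv(Om + Om.conj())                  # Eq. (C.4)
print(np.allclose(lhs, rhs))   # True: G(0,0) is independent of the Lorentzian w
```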
Now, let us rewrite the Euclidean Green's function (C.1) following from the generating functional (4.69) in a different form, namely express Dirichlet Green's function in terms of Neumann one, satisfying \((W_{E}+\omega)G_{N}(\beta,\tau^{\prime})=(W_{E}-\omega^{*})G_{N}(0,\tau^{ \prime})=0\), where \(\omega\) is the same as in (4.23)-(4.24). For this purpose, we use the relations (3.66) and (3.65) with the substitution \(\mathbf{\omega}\mapsto-i\mathbf{\omega}\) reflecting the Euclidean nature of the Neumann Green's function. Applying these relations together with (C.4) to the Euclidean Green's function (C.1), we obtain
\[G_{E}(\tau,\tau^{\prime})=G_{N}(\tau,\tau^{\prime})\\ +\mathbf{g}_{N}^{T}(\tau)\Big{[}(\mathbf{\omega}+\mathbf{\Omega})\frac{\mathbf{I}+ \mathbf{X}}{2\mathbf{\Omega}_{\rm re}}(\mathbf{\omega}+\mathbf{\Omega})\\ -(\mathbf{\omega}+\mathbf{\Omega})\Big{]}\mathbf{g}_{N}(\tau^{\prime}).\] (C.5)
It turns out that this expression can be significantly simplified and directly related to the matrix \(\mathbf{\nu}\) defined in (4.32). To show this, we rewrite \(\mathbf{\nu}\) in a different form by defining the matrix
\[\mathbf{B}^{-1}\equiv\sqrt{2\omega_{\mathrm{re}}}\,\mathbf{X}\,(\mathbf{\omega}+\mathbf{\Omega}) ^{-1}\mathbf{X}\sqrt{2\omega_{\mathrm{re}}}\] (C.6)
and extracting it out of the square brackets in (4.32). So we get
\[\mathbf{\nu}+\mathbf{X}=-\Big{[}\mathbf{I}-\mathbf{B}(\mathbf{I}+\mathbf{X})\Big{]}^{-1}\mathbf{B},\] (C.7)
where we moved \(\mathbf{X}\) to the left hand side for further convenience. Now, we explicitly invert the expression in the square brackets above using (A.7). After straightforward rearrangements, the result reads
\[\mathbf{\nu}+\mathbf{X}=\mathbf{X}\frac{1}{\sqrt{2\mathbf{\omega}_{\mathrm{re}}}} \bigg{[}(\mathbf{\omega}+\mathbf{\Omega})\frac{\mathbf{I}+\mathbf{X}}{2\mathbf{\Omega}_{\mathrm{ re}}}(\mathbf{\omega}+\mathbf{\Omega})\\ -(\mathbf{\omega}+\mathbf{\Omega})\bigg{]}\frac{1}{\sqrt{2\mathbf{\omega}_{ \mathrm{re}}}}\mathbf{X}.\] (C.8)
Thus, the comparison to (C.5) gives
\[G_{E}(\tau,\tau^{\prime})=G_{N}(\tau,\tau^{\prime})\\ +\mathbf{g}_{N}^{T}(\tau)\sqrt{2\omega_{\mathrm{re}}}(\mathbf{\nu}^{*}+ \mathbf{X})\sqrt{2\omega_{\mathrm{re}}}\mathbf{g}_{N}(\tau^{\prime}),\] (C.9)
where we use the fact that \(\mathbf{X}\mathbf{\nu}\mathbf{X}=\mathbf{\nu}^{*}\).
## Appendix D Properties of Gaussian density matrices
Suppose we have the Gaussian density matrix (4.5), which we rewrite here for convenience
\[\rho(\varphi_{+},\varphi_{-})=\frac{1}{Z}\exp\left\{-\frac{1}{2}\mathbf{\varphi}^ {T}\mathbf{\Omega}\,\mathbf{\varphi}+\mathbf{j}^{T}\mathbf{\varphi}\right\},\;\mathbf{\varphi}= \left[\begin{array}{c}\varphi_{+}\\ \varphi_{-}\end{array}\right]\] (D.1)
where
\[\mathbf{j}=\left[\begin{array}{c}j\\ j^{*}\end{array}\right],\;\mathbf{\Omega}=\left[\begin{array}{cc}R&S\\ S^{*}&R^{*}\end{array}\right],\;R=R^{T},\;S=S^{\dagger}.\] (D.2)
and examine the following properties of it, namely
1. normalizability, i.e. finiteness of \(\mathrm{tr}\,\hat{\rho}\),
2. boundedness, i.e. finiteness of \(\|\hat{\rho}|\psi\rangle\|\) for arbitrary normalizable state \(|\psi\rangle\),
3. positive definiteness, i.e. positivity of the eigenvalues of \(\hat{\rho}\).
Normalizability is equivalent to the existence of the integral
\[\mathrm{tr}\,\hat{\rho}\!=\!\int d\varphi\,\rho(\varphi,\varphi)=\big{[} \mathrm{det}(R\!+\!S\!+\!R^{*}\!+\!S^{*})\,\big{]}^{-1/2},\] (D.3)
which is equivalent to the positive definiteness of the real part of \(R+S\). Boundedness of \(\hat{\rho}\) is equivalent to the existence of \(\hat{\rho}^{2}\), whose coordinate form reads \(\langle\varphi_{1}|\,\hat{\rho}^{2}\,|\varphi_{2}\rangle\propto\big{[}\mathrm{det}(R+R^{*})\,\big{]}^{-1/2}\), so that we should demand the positive definiteness of the real part of \(R\).
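For a single degree of freedom with real \(R\), \(S\) and \(\mathbf{j}=0\), the trace formula (D.3) is a one-line Gaussian integral; a minimal numerical sketch (the constant \(\sqrt{2\pi}\) normalization is absorbed in (D.3)):

```python
import numpy as np
from scipy.integrate import quad

R, S = 1.7, -0.6                            # illustrative values with R + S > 0
tr_rho = quad(lambda p: np.exp(-(R + S) * p**2), -np.inf, np.inf)[0]
det_formula = (2 * (R + S)) ** -0.5          # [det(R+S+R*+S*)]^(-1/2), one dimension
print(tr_rho / det_formula)                  # sqrt(2*pi): constant normalization
```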
Positive definiteness requires additional attention, namely the analysis of the eigenvalues and eigenvectors of \(\hat{\rho}\). We will focus on the case in which \(R\) and \(S\) are real and \(\mathbf{j}=0\). All the results also hold for non-vanishing \(\mathbf{j}\), but their derivation is more cumbersome. We will also assume that normalizability and boundedness of the density matrix are enforced, i.e. both \(R\) and \(R+S\) are positive definite. Let us consider a matrix element \(\langle\varphi|\,\hat{\rho}\,|\alpha\rangle\), where \(|\alpha\rangle\) is the coherent state defined by Eq. (3.110). Inserting a partition of unity in the coordinate representation, we have
\[\langle\varphi|\,\hat{\rho}\,|\alpha\rangle=\int d\varphi^{\prime }\,\rho(\varphi,\varphi^{\prime})\,\langle\varphi^{\prime}|\alpha\rangle\\ =\frac{1}{Z}\exp\Big{[}-\frac{1}{2}\varphi^{T}\big{(}R-S(R+ \omega)^{-1}S\big{)}\varphi\\ -\alpha^{T}\sqrt{2\omega}(R+\omega)^{-1}S\varphi\\ -\frac{1}{2}\alpha^{T}\big{(}I-\sqrt{2\omega}(R+S)^{-1}\sqrt{2 \omega}\big{)}\alpha\Big{]}.\] (D.4)
Now, let us assume \(R-S\) is positive definite, i.e. the choice (4.52) of \(\omega\) can be made. After some calculations one can rewrite the matrix element above as
\[\langle\varphi|\,\hat{\rho}\,|\alpha\rangle=\frac{1}{Z}\exp\biggl{[} -\frac{1}{2}\varphi^{T}\omega\,\varphi+\alpha^{T}\frac{\nu}{\nu+I}\sqrt{2 \omega}\,\varphi\\ -\frac{1}{2}\alpha^{T}\left(\frac{\nu}{\nu+I}\right)^{2}\alpha \biggr{]},\] (D.5)
where \(\nu\) is defined in terms of \(R\) and \(S\) in (4.53). Comparing the right hand side to (3.110), one concludes that
\[\hat{\rho}\bigl{|}\alpha\bigr{\rangle}=\bigl{|}\frac{\nu}{\nu+I}\alpha\bigr{\rangle}.\] (D.6)
Taking derivatives of both sides of this equality with respect to \(\alpha\), substituting \(\alpha=0\) afterwards, and comparing to (3.113), one observes that the eigenvalues of \(\hat{\rho}\) are arbitrary products of the eigenvalues of \(\frac{\nu}{\nu+I}\). Thus, the latter matrix should be positive definite for \(\hat{\rho}\) to be positive definite too. Using the expression (4.53) for \(\nu\), one finds that positive definiteness is satisfied for a negative definite matrix \(\sigma\) and consequently for a negative definite \(S=R^{1/2}\sigma R^{1/2}\).
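The spectral statement above can be illustrated numerically for the one-dimensional thermal case of Section 5.1, where \(R=\omega_{0}\coth\beta\omega_{0}\), \(S=-\omega_{0}/\sinh\beta\omega_{0}\) and \(\nu/(\nu+1)=e^{-\beta\omega_{0}}\). Discretizing the kernel \(\rho(\varphi,\varphi^{\prime})\) as an integral operator (a sketch with illustrative grid parameters), successive eigenvalues should differ by the constant factor \(e^{-\beta\omega_{0}}\):

```python
import numpy as np

beta, w0 = 1.0, 1.0
R = w0 / np.tanh(beta * w0)
S = -w0 / np.sinh(beta * w0)

phi = np.linspace(-8.0, 8.0, 600)
dphi = phi[1] - phi[0]
P1, P2 = np.meshgrid(phi, phi, indexing="ij")
K = np.exp(-0.5 * R * (P1**2 + P2**2) - S * P1 * P2) * dphi   # discretized kernel

ev = np.linalg.eigvalsh(K)[::-1]      # descending spectrum of the operator
print(ev[1:6] / ev[:5])                # ~exp(-beta*w0): powers of nu/(nu+1)
```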
Summarizing, we found that the Gaussian density matrix (4.5) is normalizable if the real part of the sum \(R+S\) is positive definite. The density matrix is bounded if the real part of \(R\) is positive definite. If, in addition, the difference \(R-S\) is positive definite, which is motivated by the necessity of the particle interpretation presented in Section 4.5, one concludes that the density matrix is positive definite if \(S\) is negative definite.
2309.06099 | Domain wall statics and dynamics in nanowires with arbitrary
Dzyaloshinskii-Moriya tensors | The influence of different Dzyaloshinskii-Moriya interaction (DMI) tensor
components on the static and dynamic properties of domain walls (DWs) in
magnetic nanowires is investigated using one dimensional collective coordinates
models and micromagnetic simulations. It is shown how the different
contributions of the DMI can be compactly treated by separating the symmetric
traceless, antisymmetric and diagonal components of the DMI tensor. First, we
investigate the effect of all different DMI components on the static DW tilting
in the presence and absence of in plane (IP) fields. We discuss the
possibilities and limitations of this measurement approach for arbitrary DMI
tensors. Secondly, the interplay of different DMI tensor components and their
effect on the field driven dynamics of the DWs are studied and reveal a
non-trivial effect of the Walker breakdown field of the material. It is shown
how DMI tensors combining diagonal and off-diagonal elements can lead to a
non-linear enhancement of the Walker field, in contrast with the linear
enhancement obtainable in the usual cases (interface DMI or bulk DMI). | Adriano Di Pietro, Felipe GarcΓa SΓ‘nchez, Gianfranco Durin | 2023-09-12T10:05:53Z | http://arxiv.org/abs/2309.06099v1 | # Domain wall statics and dynamics in nanowires with arbitrary Dzyaloshinskii-Moriya tensors
###### Abstract
The influence of different Dzyaloshinskii-Moriya interaction (DMI) tensor components on the static and dynamic properties of domain walls (DWs) in magnetic nanowires is investigated using one-dimensional collective coordinate models and micromagnetic simulations. It is shown how the different contributions of the DMI can be compactly treated by separating the symmetric traceless, antisymmetric and diagonal components of the DMI tensor. First, we investigate the effect of all different DMI components on the static DW tilting in the presence and absence of in plane (IP) fields. We discuss the possibilities and limitations of this measurement approach for arbitrary DMI tensors. Secondly, the interplay of different DMI tensor components and their effect on the field driven dynamics of the DWs are studied and reveal a non-trivial effect on the Walker breakdown field of the material. It is shown how DMI tensors combining diagonal and off-diagonal elements can lead to a non-linear enhancement of the Walker field, in contrast with the linear enhancement obtainable in the usual cases (interface DMI or bulk DMI).
## I Introduction
Recent years have seen an increased interest in the study of magnetic domain wall (DW) dynamics in perpendicularly magnetized nanowires, as these are at the core of many emerging spintronic device concepts in memory storage [1, 2], sensing [3, 4] and logic [5, 6, 7]. To this day, many challenges still need to be addressed in order to make such technologies viable for the industry. Among the challenges to be faced is the phenomenon of the Walker breakdown field [8], which sets a strong upper limit on the velocity at which a DW can be efficiently moved through a nanowire. It is a well established fact that magnetic DWs can be moved through a magnetic nanowire either via applied magnetic fields or via the spin transfer torque induced by spin polarized currents [9, 10]. For small enough values of the driving force, the shape anisotropy of the material (which in thin film geometries favours Bloch walls) is able to counteract the torque on the magnetization that would cause precessional motion [11]. In the steady state regime, the DW is able to move rigidly and its peak velocity displays a linear dependence on the driving force. As the driving force increases, the competing torque becomes too strong and can no longer be compensated by the effective field inside the DW: once that threshold, called the Walker breakdown (WB) field, is reached, the domain wall enters the so-called precessional motion regime [11, 12], in which the peak velocity of the DW drastically decreases. Several strategies have been tried to counteract this phenomenon and increase the maximum attainable DW velocity [13, 14]. For instance, the choice of materials displaying chiral interactions such as the Dzyaloshinskii-Moriya interaction (DMI) [15, 16] in perpendicularly magnetized nanowires is known to greatly enhance the domain wall Walker breakdown [12] because of the effective field component providing an additional restoring torque for the moving DW. While the effects of interface DMI (iDMI) are well known and understood, the effects of different, more exotic types of DMI [17, 18, 19] found in lower symmetry magnetic crystals have, to our knowledge, not been studied in detail. The study of the possible effects induced by these additional DMI forms is becoming increasingly relevant as new deposition techniques are making the production of thin films with the required low symmetries a reality [20, 21, 22]. In the following we propose a micromagnetic study to analyze the DW statics and dynamics with additional terms accounting for arbitrary DMI tensors in magnetic nanowires. The paper is organized as follows: in Section II.1 we describe the energy contributions of our system and show how to compactly treat more exotic DMI tensors by decomposing them into antisymmetric, symmetric traceless and diagonal contributions. In Section II.2, we introduce the collective coordinate models (CCMs) and derive the DW energy density for arbitrary DMI tensors both in the \(q-\chi-\phi\) model [23] and the \(q-\phi\) model [24, 12]. In Sections III.1 and III.2 we show how the derived energy densities correctly predict the DW tilting both with and without applied in-plane (IP) fields. In Section III.3, we explore the applicability of the canting angle method to measure forms of the DM tensor going beyond the iDMI discussed in ref. [23]. Finally, in Section III.4 we derive the dynamical equations for the DW in the \(q-\phi\) model and show how the presence of certain combinations of DMI tensor components can lead to non-trivial changes in the DW Walker breakdown field.
The derived analytical results are compared throughout with micromagnetic simulations performed with the MuMax3 [25] software. We conclude by summarizing our results and providing an outlook for future investigations in Section IV.
## II Theoretical background
### Energy density in the presence of arbitrary DMI tensors
We consider a magnetic ultrathin film of volume \(\Omega_{V}\) grown on a substrate with a capping layer of a different material, so that the symmetry is broken along the normal to the plane. In addition to the usual energy terms, we add a contribution relative to an arbitrary DMI tensor, yielding a total energy density of the form [26; 27]
\[E=\int_{\Omega_{V}}\big{\{}\,A|\mathbf{\nabla}\mathbf{m}|^{2}-Q_{ij} \mathcal{M}^{ji}-\frac{1}{2}\mu_{0}M_{s}\mathbf{m}\cdot\mathbf{H}_{d}\] \[+K_{u}(1-(\mathbf{m}\cdot\hat{\mathbf{u}}_{z})^{2})-\mu_{0}M_{s}\mathbf{m} \cdot\mathbf{H}_{z}\,\big{\}}\,d^{3}\mathbf{r} \tag{1}\]
where \(\mathbf{m}(x,t)=\mathbf{M}(x,t)/M_{s}\) is the normalized magnetization vector, \(A\) is the symmetric exchange coefficient (in this case a constant), \(\mathbf{H}_{d}\) is the magneto-static field, \(\mathbf{H}_{z}\) is the Zeeman field and \(K_{u}\) is the uniaxial anisotropy constant with the easy axis directed along \(z\). Finally, \(Q_{ij}\) represents the DMI tensor and \(\mathcal{M}_{ji}=\sum_{k}\varepsilon_{ik}(m_{z}\partial_{j}m_{k}-m_{k}\partial_{j}m_{z})\) is the chirality of the magnetic configuration [26]. We remark that both the chirality \(\mathcal{M}_{ji}\) and the DMI tensor \(Q_{ij}\) treated here are already restricted to a two-dimensional system, i.e. \(\mathcal{M}_{ji},Q_{ij}\in\mathbb{R}^{2\times 2}\), and are reported in Fig. 1. In the following we briefly outline some of the consequences of the symmetry properties of the DMI tensor. First of all, we remark that the DMI tensor, much like any other rank-2 tensor, can be decomposed into a sum of symmetric traceless, antisymmetric and diagonal components as follows
\[\hat{\mathbf{Q}}_{ij}=\underbrace{\begin{pmatrix}0&D_{a}\\ -D_{a}&0\end{pmatrix}}_{\text{Antisymmetric}}+\underbrace{\begin{pmatrix}D_{b}&D_ {s}\\ D_{s}&-D_{b}\end{pmatrix}}_{\text{Symmetric-traceless}}+\underbrace{\begin{pmatrix}D _{t}&0\\ 0&D_{t}\end{pmatrix}}_{\text{Diagonal}}. \tag{2}\]
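Since the following sections repeatedly switch between the full tensor \(Q_{ij}\) and the four components of eq. (2), it may be useful to make the decomposition explicit. The following minimal sketch (Python/NumPy; the numerical values are purely illustrative) extracts \((D_{a},D_{s},D_{b},D_{t})\) from an arbitrary \(2\times 2\) DMI tensor; applied to a \(C_{2v}\)-type tensor it reproduces \(D_{a}=(D_{12}-D_{21})/2\) and \(D_{s}=(D_{12}+D_{21})/2\), used in Section III.1:

```python
import numpy as np

def decompose_dmi(Q):
    """Split a 2x2 DMI tensor into the (D_a, D_s, D_b, D_t) components of eq. (2)."""
    Q = np.asarray(Q, dtype=float)
    D_a = 0.5 * (Q[0, 1] - Q[1, 0])   # antisymmetric (iDMI-like) component
    D_s = 0.5 * (Q[0, 1] + Q[1, 0])   # symmetric off-diagonal component
    D_b = 0.5 * (Q[0, 0] - Q[1, 1])   # symmetric traceless diagonal component
    D_t = 0.5 * (Q[0, 0] + Q[1, 1])   # isotropic (bDMI-like) component
    return D_a, D_s, D_b, D_t

# hypothetical C_2v-type tensor with D_12 = 1.0, D_21 = 0.4 (units of mJ/m^2)
print(decompose_dmi([[0.0, 1.0], [0.4, 0.0]]))   # -> (0.3, 0.7, 0.0, 0.0)
```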
A purely anti-symmetric DMI tensor \((Q_{A})_{ij}=\sum_{k}D_{k}\varepsilon_{kij}\) yields Lifshitz invariant energy density terms of the form
\[\mathcal{E}_{A;DMI}=-2\mathbf{D}\cdot[\mathbf{m}(\nabla\cdot\mathbf{m})-(\mathbf{m}\cdot\nabla)\mathbf{m}], \tag{3}\]
which correspond to the interface DMI (iDMI) term often studied in the literature [28; 12]. The symmetric component of the DMI tensor, on the other hand, yields an energy contribution of the form
\[\mathcal{E}_{S;DMI}=-\mathbf{m}\cdot(\hat{\mathbf{Q}}_{S}\nabla\times\mathbf{m}), \tag{4}\]
where \(\hat{\mathbf{Q}}_{S}\nabla=\sum_{j}(Q_{S})_{ij}\partial_{j}\). A DMI of this form is related to the so called "anisotropic DMI" in the discrete microscopic treatment [29; 30; 31]. The special case of a purely diagonal matrix yields an energy term of the form
\[\mathcal{E}_{S;DMI}=-2(Q_{S})_{ii}(\mathbf{m}\times\partial_{i}\mathbf{m})_{i} \tag{5}\]
which, in the case of a single independent component \(Q_{ii}=D\), yields
\[\mathcal{E}_{S;DMI}=-2D\,\mathbf{m}\cdot(\nabla\times\mathbf{m}). \tag{6}\]
This energy contribution corresponds to a bulk DMI (bDMI) term responsible for stabilizing bulk chiral structures [32].
### Collective coordinate models with arbitrary DMI
Since the contributions of the iDMI terms (i.e. the \(D_{a}\) part of eq. (2)) and bDMI (i.e. the \(D_{t}\) part of eq. (2)) to ordinary collective coordinate models (CCMs) are known [33; 23], to account for the complete DMI tensor we just have to compute the energy density terms associated with the symmetric \(D_{s}\) and traceless \(D_{b}\) components. To this end, we consider a DMI tensor compatible with the \(S_{4}\) point group symmetry, which has the form
\[\hat{\mathbf{Q}}_{S_{4}}=\begin{pmatrix}D_{b}&D_{s}\\ D_{s}&-D_{b}\end{pmatrix}. \tag{7}\]
Plugging this DMI tensor into eq. (1) and writing the magnetization in spherical coordinates \(\mathbf{m}=\mathbf{M}/M_{s}=\big{(}\sin\theta\cos\varphi,\sin\theta\sin\varphi,\cos\theta\big{)}^{T}\), we can write the \(S_{4}\) DMI energy density as follows
\[\mathcal{E}_{DMI,S_{4}}=D_{b}(\sin\varphi\ \partial_{x}\,\theta+ \cos\varphi\ \partial_{y}\theta)+\] \[D_{s}(\sin\varphi\ \partial_{y}\theta-\cos\varphi\ \partial_{x}\theta). \tag{8}\]
To derive the CCM we must now substitute \(\theta\) and \(\phi\) with the Ansatz for the tilted DW [23]
\[\tan\left(\frac{\theta(q,\chi)}{2}\right) =\exp\left(Q\frac{(x-q)\cos\chi+y\sin\chi}{\Delta}\right) \tag{9}\] \[\varphi(t) =\phi(t), \tag{10}\]
where \(q\) represents the DW position along the \(x\)-axis, \(\chi\) the DW tilting angle, \(\Delta\) the DW width and \(Q=\pm 1\) the sense of rotation of the angle \(\theta\) (i.e. \(Q=\pm 1\Rightarrow m_{z}(-\infty)=\pm 1\) and \(m_{z}(+\infty)=\mp 1\)). For a schematic of the system and the angles, refer to Fig. 2-(a). Noticing that the Ansatz of eq. (9) allows us to compactly compute the derivatives of eq. (8) as
\[\partial_{x}\theta =Q\frac{\sin\theta\cos\chi}{\Delta} \tag{11}\] \[\partial_{y}\theta =Q\frac{\sin\theta\sin\chi}{\Delta} \tag{12}\]
we can write the energy density \(\mathcal{E}_{DMI,S_{4}}\) of eq.(8) as
\[\mathcal{E}_{DMI,S_{4}}=Q\frac{\sin\theta}{\Delta}\big{[}D_{b} \sin(\phi+\chi)-D_{s}\cos(\phi+\chi)\big{]}. \tag{13}\]
To obtain a DW surface energy, we quench the \(x\)-degree of freedom of eq.(13) by integrating it out
\[\sigma_{DW,S_{4}} =\int_{-\infty}^{+\infty}\mathcal{E}_{DMI,S_{4}}\ \mathrm{d}x\] \[=\pi Q\big{[}D_{b}\sin(\phi+\chi)-D_{s}\cos(\phi+\chi)\big{]}. \tag{14}\]
We can now add this DW energy component to the other energy terms already used in [23] to obtain a generalized DW energy density as a function of all the DMI tensor components
\[\sigma_{DW}(\phi,\chi)= 2\frac{A}{\Delta}+\pi Q\big{[}D_{a}\cos(\phi-\chi)-D_{s}\cos(\phi+ \chi)-D_{t}\sin(\phi-\chi)+D_{b}\sin(\phi+\chi)\big{]}+\] \[2\Delta(K_{0}+K\sin^{2}(\phi-\chi))-\pi\Delta M_{s}(H_{y}\sin\phi +H_{x}\cos\phi), \tag{15}\]
with \(K_{0}=K_{u}+\frac{M_{s}\mu_{0}}{2}(N_{x}-N_{z})\) and \(K=\frac{M_{s}\mu_{0}}{2}(N_{y}-N_{x})\) being the effective and shape anisotropy constants, respectively. \(N_{x},N_{y},N_{z}\) are the demagnetizing factors, which depend on the geometry of the sample [34; 35]. If the phenomenon of DW tilting is not to be considered, the properties of the DW can be studied within the simpler \(q-\phi\) model [12; 24], which can be obtained by setting \(\chi=H_{x}=H_{y}=0\) in eq. (15),
\[\sigma_{DW}(\phi)= 2\frac{A}{\Delta}+\pi Q\big{[}(D_{a}-D_{s})\cos(\phi)+(D_{b}-D_{ t})\sin(\phi)\big{]}+\] \[2\Delta(K_{0}+K\sin^{2}(\phi)) \tag{16}\]
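As a numerical illustration of how eqs. (15)-(16) are used in practice, the following sketch (Python/SciPy) minimizes the tilted-wall energy over \((\phi,\chi)\), using the Pt/Co/AlOx parameters quoted in Section III.1. It assumes a fixed Bloch width \(\Delta=\sqrt{A/K_{0}}\), a nominal shape anisotropy \(K\) (in the full model \(K\) follows from the demagnetizing factors), a \(\mu_{0}M_{s}\) Zeeman prefactor with fields in A/m, and it includes the \(1/\cos\chi\) factor accounting for the increase of the DW surface with tilting discussed in Section III.1; these choices are simplifying assumptions, not prescriptions of the model beyond what is stated in the text:

```python
import numpy as np
from scipy.optimize import minimize

mu0 = 4e-7 * np.pi
A, Ms, K0 = 1e-11, 1.09e6, 1.25e6    # J/m, A/m, J/m^3 (Pt/Co/AlOx values)
K = 2.0e4                             # J/m^3, nominal shape anisotropy (assumed)
Delta = np.sqrt(A / K0)               # fixed Bloch-wall width (assumption)
Qc = 1                                # sense of rotation of theta

def sigma_dw(x, Da, Ds, Db, Dt, Hx=0.0, Hy=0.0):
    """Energy density of eq. (15); the 1/cos(chi) factor accounts for the
    growth of the DW surface with tilting (see Sec. III.1)."""
    phi, chi = x
    s = (2 * A / Delta
         + np.pi * Qc * (Da * np.cos(phi - chi) - Ds * np.cos(phi + chi)
                         - Dt * np.sin(phi - chi) + Db * np.sin(phi + chi))
         + 2 * Delta * (K0 + K * np.sin(phi - chi) ** 2)
         - np.pi * Delta * mu0 * Ms * (Hy * np.sin(phi) + Hx * np.cos(phi)))
    return s / np.cos(chi)

# equilibrium (phi, chi) for an S_4-like tensor with D_s = D_b = 1 mJ/m^2, no IP field
res = minimize(sigma_dw, x0=[0.1, 0.1], args=(0.0, 1e-3, 1e-3, 0.0),
               bounds=[(-np.pi, np.pi), (-1.4, 1.4)])
print(np.degrees(res.x))   # a nonzero intrinsic tilt is expected (cf. Sec. III.2)
```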
## III Results
### In-plane field driven DW tilting in the presence of arbitrary DMI tensors
It is a well established fact that the presence of iDMI induces a tilt in the DW profile [23; 36] under the application of an external in-plane (IP) transverse field. The origin of this phenomenon is explained by considering the relative energy balance of the DW in the presence of chiral interactions and Zeeman fields. In the absence of applied IP fields, the DW reaches an internal equilibrium angle dictated by the relative strength of the DMI and demagnetizing contributions. If we apply an external IP field along the positive \(y\)-direction, the DW magnetization feels the added competing interaction requiring it to align along the direction of the external field. At the same time, the iDMI produces an effective field component that stabilizes Neel walls. To try and accommodate both torques, the DW tilts by an angle \(\chi\), increasing the DW energy by a factor \(1/\cos\chi\). In the following we extend what is known about DW tilting in the presence of iDMI to the case of arbitrary DMI tensors (see Fig. 1). As a first step, we analyze the new DMI energy terms of the \(\chi=0\) case
\[\sigma_{DMI}=\pi\big{[}(D_{a}-D_{s})\cos(\phi)+(D_{b}-D_{t})\sin(\phi)\big{]}. \tag{17}\]
and of the \(\chi\neq 0\) case
\[\sigma_{DMI}= \pi\big{[}D_{a}\cos(\phi-\chi)-D_{s}\cos(\phi+\chi)\] \[-D_{t}\sin(\phi-\chi)+D_{b}\sin(\phi+\chi)\big{]}, \tag{18}\]
where we have set \(Q=1\) for convenience. In the untilted case \(\chi=0\), eq. (17) suggests that the different DMI tensor components all simply induce either Neel- or Bloch-wall stabilizing effective fields; however, this intuitive picture is only valid as long as no tilting is observable. If tilting is present in the system (eq. (18)), we
Figure 1: DMI tensor components for all 21 non-centrosymmetric crystallographic point groups as imposed by the Neumann principle [27]. The 11 centrosymmetric point groups have a vanishing DMI tensor and are not shown. The components \(D_{a},D_{s},D_{b},D_{t}\) are the ones shown in the decomposition of eq. (2), while terms of the form \(D_{ij}\) are combinations of \(D_{a},D_{s},D_{b},D_{t}\).
need to take into account the fact that the \(D_{s}\) and \(D_{b}\) components minimize the energy of the DW as a function of \(\phi+\chi\) as opposed to \(\phi-\chi\). As a first step to understand the implications of this difference, we discuss the equilibrium angles stabilized by all the different DMI tensor components of eq. (15). The values of the physical parameters used in the micromagnetic simulations for the statics and dynamics of the DW represent the values measured in Pt/Co/AlOx nanowires [28]. We set the exchange constant \(A=10^{-11}\) J/m, the saturation magnetization \(M_{s}=1.09\) MA/m, the effective anisotropy constant \(K_{0}=1.25\) MJ/m\({}^{3}\) and the damping coefficient \(\alpha=0.5\). The chosen nanowire dimensions are \(L_{x}=1\) \(\mu\)m, \(L_{y}=160\) nm and \(L_{z}=0.6\) nm (see Fig. 2-(a) for a schematic of the setup). By observing Fig. 2-(b)-i. (assuming \(\chi>0\)), we notice how \(\phi-\chi\) represents the DW magnetization angle in the reference frame of the tilted DW. \(\phi+\chi\), on the other hand, represents the DW magnetization in the reference frame of a mirrored image of the tilted DW, i.e. with a canting angle of \(-\chi\) (see Fig. 2-(b)-ii). As obtained from [33] and [23], the \(D_{a}\) and \(D_{t}\) components of the DM tensor stabilize, respectively, Neel and Bloch DWs in the reference frame of the tilted DW (see Fig. 2-(c)-i. and -ii.). On the other hand, the dependence on the \(\phi+\chi\) angle of the \(D_{s}\) and \(D_{b}\) components results in the stabilization of Neel or Bloch DWs in a reference frame corresponding to a mirror image of the DW itself (see Fig. 2-(c)-iii. and -iv.). To emphasize how the effect of the \(D_{s}\) and \(D_{b}\) components can only be distinguished from the \(D_{a}\) and \(D_{t}\) contributions in the presence of DW tilting (i.e. \(\chi\neq 0\)), we analyze the equilibrium configurations obtained from the minimization of the untilted case and compare them with micromagnetic simulations performed with a version of the MuMax3 code [25] suitably modified to account for the new components of the DMI tensor of eq. (13). By observing eq. (16), it is immediately apparent that in the case \(\chi=0\) the effects of \(D_{s}\) and \(D_{a}\) (or \(D_{b}\) and \(D_{t}\)) cannot be untangled, as all these energy terms contribute to the stabilization of an untilted Neel wall (\(D_{s}\) and \(D_{a}\)) or an untilted Bloch wall (\(D_{b}\) and \(D_{t}\)). This effect is clearly visible in Fig. 3-(a), where a DM tensor composed only of a \(D_{a}\) part stabilizes a Neel wall (Fig. 3-(a)-ii.) while a DM tensor composed of a \(D_{s}\) part stabilizes a Neel wall with opposite chirality (Fig. 3-(a)-i.). In the presence of DW tilting (induced e.g. by an applied IP field along the \(y\)-axis), the different energy contributions become distinguishable, as can be seen in Fig. 3-(b)-i. and -ii., where the DW magnetization in the presence of \(D_{s}=\pm 1.5\) mJ/m\({}^{2},\ D_{a}=D_{b}=0\) and an IP field of \(H_{y}=100\) mT points in a direction compatible with a Neel wall in a reference frame tilted in the
Figure 2: (a) Scheme of the system used in the micromagnetic simulations. We show the dimensions \(L_{x}=1\) \(\mu\)m, \(L_{y}=160\) nm and \(L_{z}=0.6\) nm, as well as the internal DW angle \(\phi\) and the DW tilt angle \(\chi\). (b) \(\phi\) and \(\chi\) angles in the case of \(\chi>0\) (i.) and in the case of \(\chi<0\) (ii.). (c) Schematic representation of the internal DW angle stabilized by the presence of the different DMI tensor components of eq. (15) in the presence of an applied IP field.
opposite direction \(-\chi\) (see the dotted line in Fig. 3-(d,e)). The
simultaneous presence of all the different DMI contributions, as well as their relative importance, is more complex and is studied both numerically, via the minimization of eq. (15), and with micromagnetic simulations. In Fig. 4-(a) we observe the tilting angle \(\chi\) of the DW in the presence of a DMI tensor compatible with the \(C_{2v}\) crystal symmetry [17], i.e.
\[\mathbf{\hat{Q}}_{C_{2v}}=\begin{pmatrix}0&D_{12}\\ D_{21}&0\end{pmatrix}. \tag{19}\]
By observing the value of \(\chi\) for \(D_{21}=0\) we notice a vanishing of the DW tilting while a form of DMI (\(D_{12}\neq 0\)) is still present. This phenomenon can be understood using the intuitive picture of competing effective fields. As can be observed in the untilted model of eq. (16), the term stabilizing Neel walls has the form \((D_{a}-D_{s})\cos(\phi)\). In the \(C_{2v}\) case of eq. (19), we have \(D_{a}=(D_{12}-D_{21})/2\) and \(D_{s}=(D_{12}+D_{21})/2\), and therefore
\[\Rightarrow D_{a}-D_{s}=-D_{21} \tag{20}\]
implying that the component of the DMI tensor that stabilizes Neel walls (and is responsible for tilting, since it competes with the \(H_{y}\) torque) is the \(D_{21}\) component. In Fig. 4-(b,c), on the other hand, we observe the behavior of the DW tilting angle \(\chi\) in the presence of a DMI tensor compatible with the \(S_{4}\) crystal symmetry [17; 18] in 2 different cases. In Fig. 4-(b) we have
\[\mathbf{\hat{Q}}_{S_{4}}=\begin{pmatrix}0&D_{s}\\ D_{s}&0\end{pmatrix}, \tag{21}\]
while in Fig.4-(c) we have
\[\mathbf{\hat{Q}}_{S_{4}}=\begin{pmatrix}D_{b}&D_{s}\\ D_{s}&-D_{b}\end{pmatrix}. \tag{22}\]
By comparing the two graphs we can observe how the presence of \(D_{b}\) terms emphasizes the canting effect. This can be understood by recalling that the \(D_{b}\) term energetically favors the formation of Bloch walls. In the presence of a transverse field along the \(y\)-direction, the effective field coming from \(D_{b}\) acts constructively and exacerbates the canting one would normally observe without \(D_{b}\). In Fig.4-(d,e) we explore the canting angle \(\chi\) in the presence of a DMI tensor compatible with the point group symmetry \(T\) (or others [27]), i.e.
\[\mathbf{\hat{Q}}_{T}=\begin{pmatrix}D_{t}&0\\ 0&D_{t}\end{pmatrix}. \tag{23}\]
In Fig.4-(d,e) we study the behavior of \(\chi\) as a function of \(D_{t}\) in the presence of a transverse field along the \(y\)-direction (Fig.4-(d)) and in the presence of a transverse field along the \(x\)-direction (Fig.4-(e)). We observe that tilting is only present in the case of a transverse field applied along the \(x\)-direction. This can be explained by observing eq.(15), where we notice that the DMI associated with \(D_{t}\) tends to stabilize Bloch walls (Fig.3-(c)-ii.): as a consequence, a transverse \(H_{x}\) field tries to change the internal DW magnetization to a Neel configuration. Much like in the case of \(D_{a}\) and \(D_{s}\) (see Fig.2-(c)-i. and -iii.), the DW responds by tilting to try and accommodate both the Zeeman and the \(D_{t}\) effective fields.
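The mapping between a given DMI tensor and the four components used above can be made concrete with a short symbolic check. The sketch below (Python with sympy, not part of the original analysis) applies the decomposition \(D_{a}=(D_{12}-D_{21})/2\), \(D_{s}=(D_{12}+D_{21})/2\) quoted above, together with the analogous diagonal splitting into \(D_{t}\) and \(D_{b}\) implied by the forms of eqs.(22)-(23), to the tensors of eqs.(19) and (21)-(23):

```python
import sympy as sp

D11, D12, D21 = sp.symbols('D11 D12 D21')

def decompose(Q):
    """Split a 2x2 DMI tensor into trace (D_t), traceless-diagonal (D_b),
    symmetric off-diagonal (D_s) and antisymmetric (D_a) parts."""
    Dt = (Q[0, 0] + Q[1, 1]) / 2   # diagonal part, stabilizes Bloch walls
    Db = (Q[0, 0] - Q[1, 1]) / 2   # traceless diagonal part
    Ds = (Q[0, 1] + Q[1, 0]) / 2   # symmetric off-diagonal part
    Da = (Q[0, 1] - Q[1, 0]) / 2   # antisymmetric part, stabilizes Neel walls
    return {'D_t': Dt, 'D_b': Db, 'D_s': Ds, 'D_a': Da}

Q_C2v = sp.Matrix([[0, D12], [D21, 0]])       # eq.(19)
Q_S4  = sp.Matrix([[D11, D12], [D12, -D11]])  # eq.(22): D11 plays the role of D_b
Q_T   = sp.Matrix([[D11, 0], [0, D11]])       # eq.(23): D11 plays the role of D_t

for name, Q in [('C_2v', Q_C2v), ('S_4', Q_S4), ('T', Q_T)]:
    print(name, decompose(Q))
```

For the \(C_{2v}\) tensor the script returns \(D_{t}=D_{b}=0\) with \(D_{s}\) and \(D_{a}\) mixing \(D_{12}\) and \(D_{21}\), consistent with the discussion around eq.(20).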
### Intrinsic DW tilting in the presence of \(D_{b}\) and \(D_{s}\)
As mentioned in the discussion of Sec.III.1, the appearance of DW tilting in perpendicularly magnetized nanowires is a consequence of the internal equilibrium of torques trying to orient the DW magnetization toward some preferred configuration. According to eq.(15), if the DMI tensor of the system displays both diagonal and off-diagonal components, the conflict between Neel- and Bloch-wall-stabilizing torques is expected to be present even in the absence of an applied IP field. By observing Fig.5-(c), we can in fact see how, in the presence of a DMI tensor compatible with the \(S_{4}\) point group symmetry (see eq.(22)), DW tilting occurs even in the absence of IP fields. In the thin film limit, considering a situation where the DMI
Figure 3: (a-c) Internal DW magnetization angle stabilized by three different representative DMI tensors in the absence of an applied IP field. (d-e) Internal DW magnetization angle stabilized by two different representative DMI tensors in the presence of an applied IP field. The mirrored image of the tilted domain wall in (d) and (e) is included for clarity. The DMI tensor components are expressed in mJ/m\({}^{2}\).
strength dominates the demagnetizing field, the magnetization angle in the reference frame of the DW (i.e. \(\phi+\chi\)) can be easily derived by minimizing the simplified DW energy density
\[\sigma_{DW}(\phi,\chi)=2\frac{A}{\Delta}+\pi\big{[}D_{b}\sin(\phi+\chi)-D_{s} \cos(\phi+\chi)\big{]}, \tag{24}\]
which yields the simple solution (Fig.5-(a))
\[\phi+\chi=\arctan\left(-\frac{D_{b}}{D_{s}}\right). \tag{25}\]
To obtain an approximate solution for the tilting angle \(\chi\) in the \(D_{b}/D_{s}\ll 1\) limit as a function of the material parameters, we can follow the procedure outlined in ref.[23], making the analogy between the \(D_{b}\) DMI field and an applied field along the \(y\)-axis. As discussed in Sec.II.1, DW tilting is the result of an energy balance between satisfying the internal constraints of the DW and the energy cost due to its surface area increase. We imagine a scenario where the initial state of the DW is a Neel configuration (large \(D_{s}\) hypothesis), i.e. \(\sigma_{0}=2A/\Delta+\pi D_{s}+2\Delta K_{0}\). The energy of the DW surface scales with \(\sim 1/\cos\chi\), while the energy gain of the \(D_{b}\) DMI component in the DW scales approximately with \(\sin\chi\). If we assume a small \(D_{b}\) contribution (\(D_{b}/D_{s}\ll 1\)), we can treat the DW energy in the Neel configuration as fixed and approximate the energy of the DW as
\[\sigma_{DW}\approx\frac{\sigma_{0}-\pi D_{b}\sin\chi}{\cos\chi}, \tag{26}\]
which is minimized by
\[\sin\chi=\frac{\pi D_{b}}{\sigma_{0}}=\frac{\pi D_{b}}{2A/\Delta+\pi D_{s}+2 \Delta K_{0}}. \tag{27}\]
As we can see from Fig.5-(b), the above formula fits the simulation data reasonably well for small \(D_{b}\), where the dependence of the tilting angle \(\chi\) on the \(D_{b}\) component is approximately linear.
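As a quick plausibility check of eq.(27), the following minimal numerical sketch (Python; it assumes SI units, the Pt/Co/AlOx parameters quoted in Sec.III.1, and the zero-field DW width \(\Delta=\sqrt{A/K_{0}}\), i.e. the \(K\sin^{2}\phi\) correction is neglected) evaluates the predicted intrinsic tilt angle for a few small values of \(D_{b}\):

```python
import numpy as np

# Material parameters quoted in the text (Pt/Co/AlOx [28])
A  = 1e-11        # exchange constant, J/m
K0 = 1.25e6       # effective anisotropy constant, J/m^3
Ds = 1.5e-3       # symmetric off-diagonal DMI, J/m^2 (large-D_s hypothesis)

Delta = np.sqrt(A / K0)                                 # DW width (assumption)
sigma0 = 2 * A / Delta + np.pi * Ds + 2 * Delta * K0    # Neel DW energy density

for Db in [0.1e-3, 0.3e-3, 0.5e-3]:                     # small D_b values, J/m^2
    sin_chi = np.pi * Db / sigma0                       # eq.(27)
    chi = np.degrees(np.arcsin(sin_chi))
    print(f"D_b = {Db*1e3:.1f} mJ/m^2  ->  chi ~ {chi:.2f} deg")
```

In this regime the output grows almost linearly with \(D_{b}\) (a few degrees for \(D_{b}\) of a few tenths of mJ/m\({}^{2}\)), as expected from the linearization \(\chi\approx\pi D_{b}/\sigma_{0}\).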
### Measuring \(D_{a}\) and \(D_{s}\) DMI contributions with IP fields
According to the discussion of Sec.III.1 and Fig.3, it might seem impossible to use the canting angle as a function of applied IP fields to measure \(D_{a}\) and \(D_{s}\), since in the untilted case of eq.(16) the \(D_{s}\) energy density component simply contributes to the stabilization of a Neel wall and can either collaborate or compete with the \(D_{a}\)
Figure 4: Comparison of micromagnetic simulations [25] and numerical minimization of the energy density of eq.(15) for: (a) the tilting angle \(\chi\) as a function of the D\({}_{21}\) DMI tensor component in the case of a C\({}_{2v}\) symmetric DMI; (b) the tilting angle \(\chi\) as a function of the D\({}_{s}\) DMI tensor component in the case of an S\({}_{4}\) symmetric DMI with D\({}_{b}=0\); (c) the tilting angle \(\chi\) as a function of the D\({}_{s}\) DMI tensor component in the case of an S\({}_{4}\) symmetric DMI with D\({}_{b}=1.5\) mJ/m\({}^{2}\); (d) the tilting angle \(\chi\) as a function of the D\({}_{t}\) DMI tensor component in the case of a T symmetric DMI with an IP field applied in the y-direction with magnitude \(\mu_{0}H_{y}=100\) mT; (e) the tilting angle \(\chi\) as a function of the D\({}_{t}\) DMI tensor component in the case of a T symmetric DMI with an IP field applied in the x-direction with magnitude \(\mu_{0}H_{x}=100\) mT.
contribution, depending on the relative sign. We can in fact observe in Fig.6-(a) how the response of the tilting angle \(\chi\) to an IP H\({}_{y}\) field in the case \(D_{s}\neq 0\) is identical to the case \(-D_{a}\), and the two cannot be distinguished. However, according to eq.(15) and Fig.3-(b)-i. and -ii., even when the canting angle \(\chi\) is identical, the equilibrium angle \(\phi\) inside the DW in the presence of \(D_{s}\) is different when compared to a system with \(D_{a}\). This implies that the simultaneous action of H\({}_{y}\) and H\({}_{x}\) IP fields should induce a different response of the DW canting angle \(\chi\) in the nanowire. In Fig.6-(b), we show how the canting angle \(\chi\) responds differently in the presence of \(D_{a}\) or \(D_{s}\) under the application of a rotating IP field of the form
\[\mu_{0}\mathbf{H}=\mu_{0}H_{0}\begin{pmatrix}\cos(\omega t)\\ \sin(\omega t)\\ 0\end{pmatrix}, \tag{28}\]
where \(t\in[0,T]\), \(\omega=2\pi/T\) and \(\mu_{0}H_{0}=100\) mT. We stress that the variable \(t\) does not have the unit of a physical time, since in the simulation the canting angle \(\chi\) in response to the applied field is recorded after the system has had time to relax, and not after a fixed time interval. On the x-axis of Fig.6-(a,b) we refer to this variable as "steps". Fig.6-(b) also shows how the form of these curves could in principle be fitted to eq.(15) to extract the \(D_{a},D_{s}\) coefficients, potentially allowing for the magneto-optical measurement of different DMI tensor components with the canting angle method. The fit is performed by calculating \(\chi\) from a constrained minimization of the DW energy density of eq.(15), using \(H_{x},H_{y}\) as variables and \(D_{a}\) and \(D_{s}\) as fitting parameters.
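The fitting loop itself is straightforward to prototype. The sketch below (Python with scipy) implements the rotating-field protocol of eq.(28) on top of a schematic tilted-wall energy density: its DMI part combines the \(\phi\pm\chi\) dependencies discussed in Sec.III.1 so that it reduces to eq.(16) at \(\chi=0\) and to the \(D_{s}\) term of eq.(24) in that limit; the \(1/\cos\chi\) surface factor and the hard-axis constant \(K\) are assumptions of this sketch, and the full eq.(15) expression should be substituted for quantitative work.

```python
import numpy as np
from scipy.optimize import minimize

mu0, Ms = 4e-7 * np.pi, 1.09e6          # vacuum permeability, saturation magnetization
A, K0, K = 1e-11, 1.25e6, 5.2e3         # exchange, easy-axis, hard-axis (K assumed)
Delta = np.sqrt(A / K0)                 # DW width (zero-field estimate)

def sigma_dw(angles, Da, Ds, Hx, Hy):
    """Schematic stand-in for eq.(15): line energy of a tilted DW."""
    phi, chi = angles
    dmi = np.pi * (Da * np.cos(phi - chi) - Ds * np.cos(phi + chi))
    aniso = 2 * A / Delta + 2 * Delta * (K0 + K * np.sin(phi - chi) ** 2)
    zeeman = -np.pi * Delta * mu0 * Ms * (Hx * np.cos(phi) + Hy * np.sin(phi))
    return (aniso + dmi + zeeman) / np.cos(chi)   # 1/cos(chi) = surface increase

def chi_response(Da, Ds, steps=64, H0=100e-3 / mu0):
    """Equilibrium canting angle along the rotating-field protocol of eq.(28)."""
    chis, x0 = [], np.array([0.1, 0.1])
    for t in np.linspace(0, 2 * np.pi, steps):
        res = minimize(sigma_dw, x0, args=(Da, Ds, H0 * np.cos(t), H0 * np.sin(t)),
                       method='L-BFGS-B', bounds=[(-np.pi, np.pi), (-1.3, 1.3)])
        x0 = res.x                      # warm-start from the previous field step
        chis.append(res.x[1])
    return np.array(chis)

# The responses differ for pure D_a and pure D_s, which is what makes the fit possible:
print(np.max(np.abs(chi_response(1.5e-3, 0) - chi_response(0, 1.5e-3))))
```

Wrapping `chi_response` in a least-squares routine (e.g. `scipy.optimize.curve_fit`) with \((D_{a},D_{s})\) as free parameters then mirrors the fitting procedure described above.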
### Domain-wall dynamics in the presence of arbitrary DMI tensors
After having studied the effects of the different components of the DMI tensor on the static configurations of magnetic domain walls in nanowires, we now focus on the effects on the dynamics. Given that in the field-driven, steady-state regime the magnetization angle in the reference frame of the DW only depends on the IP torques exerted by the driving field \(H_{z}\), the anisotropy contributions \(H_{k}\) and the various components of the DMI tensor, we can avoid considering \(\chi\) as a collective coordinate in the dynamical equations and work with the simpler \(q-\phi\) model [23; 36], whose DW energy density \(\sigma_{DW}(q,\phi)\) with the generalized chiral interaction tensor from eq.(2) can be written as
\[\sigma_{DW}(\phi)=2\frac{A}{\Delta}+\pi\big{[}(D_{a}-D_{s})\cos(\phi)+(D_{b}-D_{t})\sin(\phi)\big{]}+2\Delta(K_{0}+K\sin^{2}(\phi))-\pi\Delta\mu_{0}M_{s}(H_{x}\cos\phi+H_{y}\sin\phi). \tag{29}\]
By explicitly writing the Lagrangian of the DW as \(\mathcal{L}=\sigma_{DW}+(M_{s}/\gamma)\,\dot{\phi}\cos\theta\) and the Rayleigh dissipation function, which correctly accounts for damping effects, as \(\mathcal{F}=(\alpha M_{s}/2\gamma)\,\dot{\mathbf{m}}^{2}\), we can derive the equations of motion
Figure 5: (a) DW angle \(\phi+\chi\) as a function of the off-diagonal DMI tensor component \(D_{s}\) in the \(S_{4}\) symmetric case. (b) DW tilt angle \(\chi\) as a function of the diagonal DMI tensor component \(D_{b}\) (\(D_{b}/D_{s}\ll 1\) limit) in the \(S_{4}\) symmetric case. (c) Intrinsic DW tilting in the simultaneous presence of \(D_{s}\) and \(D_{b}\) for three representative cases.
from the Euler-Lagrange-Rayleigh equation [24],
\[\frac{\partial\mathcal{L}}{\partial X}-\frac{d}{dt}\left(\frac{\partial\mathcal{L }}{\partial\dot{X}}\right)+\frac{\partial\mathcal{F}}{\partial\dot{X}}=0,\ \ X\in\{q,\phi,\Delta\}, \tag{30}\]
obtaining the following equations of motion
\[\dot{q}= \frac{\Delta\gamma_{0}}{1+\alpha^{2}}\bigg{[}\alpha QH_{z}+QH_{K} \frac{\sin 2\varphi}{2}-\frac{\pi}{2}\tilde{f}^{\prime}_{\text{DMI}}(\varphi)\] \[-Q\frac{\pi}{2}\left(H_{y}\cos\varphi-H_{x}\sin\varphi\right) \bigg{]}, \tag{31}\] \[\dot{\varphi}= \frac{\gamma_{0}}{1+\alpha^{2}}\bigg{[}H_{z}-\alpha\bigg{(}H_{K} \frac{\sin 2\varphi}{2}-Q\frac{\pi}{2}\tilde{f}^{\prime}_{\text{DMI}}(\varphi)-\] \[\frac{\pi}{2}\left(H_{y}\cos\varphi-H_{x}\sin\varphi\right) \bigg{)}\bigg{]},\] (32) \[\dot{\Delta}= \frac{12\gamma_{0}}{\mu_{0}M_{s}\alpha\pi^{2}}\bigg{[}\frac{A}{ \Delta}-\Delta\left(K_{0}+K\sin^{2}\varphi\right)+\] \[\mu_{0}M_{s}\Delta\frac{\pi}{2}\left(H_{x}\cos\varphi+H_{y}\sin \varphi\right)\bigg{]}. \tag{33}\]
where we define
\[H_{K}=\frac{2K}{M_{s}\mu_{0}}\,\ \tilde{f}^{\prime}_{DMI}(\phi)=\frac{1}{2 \Delta M_{s}\mu_{0}}\frac{\partial f_{DMI}(\phi)}{\partial\phi} \tag{34}\]
and \(f_{DMI}(\phi)\) represents the trigonometric function gathering all the different DMI contributions (antisymmetric \(D_{a}\), symmetric \(D_{s}\), traceless \(D_{b}\) and diagonal \(D_{t}\)):
\[f_{DMI}(\phi)=(D_{a}-D_{s})\cos\phi+(D_{t}-D_{b})\sin\phi \tag{35}\]
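Once \(f_{DMI}\) is specified, the collective-coordinate system (31)-(33) is cheap to integrate directly. A minimal sketch follows (Python with scipy; the gyromagnetic constant \(\gamma_{0}=2.21\times 10^{5}\) m A\({}^{-1}\)s\({}^{-1}\), the hard-axis constant \(K\) and the 50 mT drive are assumptions of this sketch, the drive being chosen to stay below the Walker field for these parameters). It treats the IP-field-free case for an up-down wall (\(Q=+1\)):

```python
import numpy as np
from scipy.integrate import solve_ivp

mu0, gamma0 = 4e-7 * np.pi, 2.21e5      # gamma0 in m/(A s) (assumed value)
Ms, A, K0, K, alpha, Qw = 1.09e6, 1e-11, 1.25e6, 5.2e3, 0.5, +1   # K assumed
Da, Ds, Db, Dt = 0.0, 1.5e-3, 0.0, 0.0  # DMI tensor components, J/m^2
Hz = 50e-3 / mu0                        # driving field, A/m
HK = 2 * K / (Ms * mu0)                 # eq.(34)

def f_dmi_prime(phi):                   # derivative of eq.(35)
    return -(Da - Ds) * np.sin(phi) + (Dt - Db) * np.cos(phi)

def rhs(t, y):                          # eqs.(31)-(33) with Hx = Hy = 0
    q, phi, Delta = y
    ft = f_dmi_prime(phi) / (2 * Delta * Ms * mu0)      # f-tilde', eq.(34)
    qdot = Delta * gamma0 / (1 + alpha**2) * (
        alpha * Qw * Hz + Qw * HK * np.sin(2 * phi) / 2 - np.pi / 2 * ft)
    phidot = gamma0 / (1 + alpha**2) * (
        Hz - alpha * (HK * np.sin(2 * phi) / 2 - Qw * np.pi / 2 * ft))
    Ddot = 12 * gamma0 / (mu0 * Ms * alpha * np.pi**2) * (
        A / Delta - Delta * (K0 + K * np.sin(phi)**2))
    return [qdot, phidot, Ddot]

sol = solve_ivp(rhs, [0, 20e-9], [0, 0.1, np.sqrt(A / K0)], max_step=1e-12)
v_avg = (sol.y[0, -1] - sol.y[0, 0]) / 20e-9
print(f"average DW velocity: {v_avg:.1f} m/s")   # ~ Delta*gamma0*Hz/alpha below breakdown
```

Sweeping `Hz` in such a loop and locating where \(\dot{\phi}\) stops relaxing to zero gives a numerical estimate of the Walker field discussed next.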
If we assume an up-down initial configuration (\(Q=+1\)) and an in-plane (IP) field-free stationary case (i.e. \(H_{x}=H_{y}=0\)), imposing the stationary conditions \(\dot{\phi}=\dot{\Delta}=0\) yields the condition [13; 8] for rigid motion of the DW magnetization
\[H_{z} =\alpha\bigg{(}H_{k}\ \frac{\sin 2\phi}{2}-\frac{\pi}{2}\big{(}(H_{ DMI,a}-H_{DMI,s})\sin\phi\] \[+\ (H_{DMI,b}-H_{DMI,t})\cos\phi\big{)}\bigg{)}, \tag{36}\]
where \(H_{DMI,i\in\{a,s,b,t\}}=D_{i}/(2\Delta\mu_{0}M_{s})\) is the effective field strength associated with the different DMI components. In order to make the notation more compact, we define
\[\kappa:=\frac{K}{K_{0}}\,\ \tilde{D}^{\prime}:=\frac{\pi(D_{a}-D_{s})}{ \mu_{0}H_{K}M_{s}\Delta_{0}}\,\ \tilde{D}^{\prime\prime}:=\frac{\pi(D_{t}-D_{b})}{H_{K}\mu_{0}M_{s}\Delta_{0}} \tag{37}\]
where \(\Delta_{0}=\sqrt{\frac{A}{K_{0}+K\sin^{2}\phi}}\) represents the equilibrium DW width that can be obtained by setting \(\dot{\Delta}=0\) in eq.(33). These definitions allow us to rewrite (36) in the form
\[H_{z}=\frac{\alpha H_{k}}{2}\left[(\tilde{D}^{\prime\prime}\cos\phi-\tilde{D }^{\prime}\sin\phi)\frac{\sqrt{1+\kappa\sin^{2}\phi}}{\kappa}+\sin 2\phi \right]. \tag{38}\]
For fixed \(\kappa,\tilde{D}^{\prime\prime},\tilde{D}^{\prime}\), the Walker field is identified as the largest \(H_{z}\) fulfilling eq.(38) and is obtained by maximising the right hand side of eq.(38) [33], i.e.
\[H_{W}:=\frac{\alpha H_{k}}{2}\times\] \[\max_{\phi\in[0,2\pi)}\Bigg{[}(\tilde{D}^{\prime\prime}\cos\phi- \tilde{D}^{\prime}\sin\phi)\frac{\sqrt{1+\kappa\sin^{2}\phi}}{\kappa}+\sin 2 \phi\Bigg{]} \tag{39}\]
The maximization of (39) is not possible in closed analytical form; however, one can treat the thin film limit, where the perpendicular magnetic anisotropy dominates over the shape anisotropy, i.e. \(N_{z}\gg N_{x},N_{y}\), implying the condition \(\kappa\ll 1\) in eq.(39). The asymptotic solution in that case has the following form
\[H_{W}\sim\begin{cases}\dfrac{\tilde{D}^{\prime\prime}\,|\tilde{D}^{\prime\prime}|+|\tilde{D}^{\prime}|\,\tilde{D}^{\prime}}{\kappa\sqrt{(\tilde{D}^{\prime\prime})^{2}+(\tilde{D}^{\prime})^{2}}}\ \text{if}\ sign(\tilde{D}^{\prime\prime}\cdot\tilde{D}^{\prime})=1\\[2mm] \dfrac{\tilde{D}^{\prime\prime}\,|\tilde{D}^{\prime\prime}|-|\tilde{D}^{\prime}|\,\tilde{D}^{\prime}}{\kappa\sqrt{(\tilde{D}^{\prime\prime})^{2}+(\tilde{D}^{\prime})^{2}}}\ \text{if}\ sign(\tilde{D}^{\prime\prime}\cdot\tilde{D}^{\prime})=-1\end{cases}\quad(\text{as}\ \kappa\to 0). \tag{40}\]
We validate the \(\kappa\ll 1\) approximation in our case by pointing out that the demagnetizing factors in
Figure 6: (a) DW tilting angle \(\chi\) response to an H\({}_{y}\) field sweep from \(-200\) mT to \(+200\) mT in the case of pure \(D_{a}\) (blue dots) and pure \(D_{s}\) (orange squares) contributions to DMI. As can be seen, the two responses overlap almost completely. (b) DW tilting angle \(\chi\) response to a rotating IP field with H\({}_{y}\) and H\({}_{x}\) components (see eq.(28)) in the case of pure \(D_{a}\) and pure \(D_{s}\) contributions to DMI. The dashed curves are obtained by fitting the energy minimum of eq.(15) onto the results obtained via micromagnetic simulations, using \(D_{a}\) and \(D_{s}\) as the fitting parameters.
the case of a slab geometry can be calculated analytically [35] and our geometry \(L_{x}=1\)\(\mu\)m, \(L_{y}=160\) nm and \(L_{z}=0.6\) nm yields the following values for the demagnetizing factors
\[N_{x}=0.0013,\quad N_{y}=0.0082,\quad N_{z}=0.990. \tag{41}\]
We now proceed to discuss the obtained analytical results by comparing them with numerical simulations. By observing eq.(40), we first of all notice how in the limit \(\tilde{D}^{\prime}\to 0\) (i.e. a DMI tensor with only elements on the diagonal) or the limit \(\tilde{D}^{\prime\prime}\to 0\) (i.e. a DMI tensor with only elements on the off-diagonal) the asymptotic behavior of eq.(40) becomes
\[H_{W}(\tilde{D}^{\prime}\to 0) \sim\tilde{D}^{\prime\prime}/\kappa \tag{42}\] \[H_{W}(\tilde{D}^{\prime\prime}\to 0) \sim\tilde{D}^{\prime}/\kappa \tag{43}\]
which shows a linear behavior compatible both with our numerical results (see Fig.7) and, in the \(H_{W}(\tilde{D}^{\prime}\to 0)\sim\tilde{D}^{\prime\prime}/\kappa\) case, with the results shown in [33]. We emphasize that these limiting cases show a linear dependence of the Walker-breakdown (WB) field only in the case of exclusive presence of diagonal or off-diagonal elements, but not both at the same time. By observing Fig.8-(a,b) we point out how the presence of both a diagonal and an off-diagonal component of the DMI tensor results in a departure from the linear behavior described by eqs.(42) and (43), hinting at the fact that the components of the effective field counteracting precessional motion do not cooperate additively but in a non-linear way. Furthermore, we emphasize how this behavior of the WB field directly translates into the attainable peak DW velocity since, in the \(\kappa\ll 1\) limit,
\[v_{max}\sim\frac{\Delta_{0}\gamma_{0}\alpha}{1+\alpha^{2}}H_{W}= \tag{45}\]
\[\frac{\Delta_{0}\gamma_{0}\alpha}{1+\alpha^{2}}\begin{cases}\dfrac{\tilde{D}^{\prime\prime}\,|\tilde{D}^{\prime\prime}|+|\tilde{D}^{\prime}|\,\tilde{D}^{\prime}}{\kappa\sqrt{(\tilde{D}^{\prime\prime})^{2}+(\tilde{D}^{\prime})^{2}}}\ \text{if}\ sign(\tilde{D}^{\prime\prime}\cdot\tilde{D}^{\prime})=1\\[2mm] \dfrac{\tilde{D}^{\prime\prime}\,|\tilde{D}^{\prime\prime}|-|\tilde{D}^{\prime}|\,\tilde{D}^{\prime}}{\kappa\sqrt{(\tilde{D}^{\prime\prime})^{2}+(\tilde{D}^{\prime})^{2}}}\ \text{if}\ sign(\tilde{D}^{\prime\prime}\cdot\tilde{D}^{\prime})=-1\end{cases}\quad(\text{as}\ \kappa\to 0). \tag{46}\]
In Fig.8-(c) we report the peak velocities calculated with eq.(46) and show how, with \(D_{s}=1.5\) mJ/m\({}^{2}\) and \(D_{b}=-1.5\) mJ/m\({}^{2}\), peak velocities as high as \(v_{max}\approx 1200\) m/s are theoretically achievable. Furthermore, using experimentally measured [20] parameters for the \(S_{4}\) symmetric schreibersite compound Fe\({}_{1.9}\)Ni\({}_{0.9}\)Pd\({}_{0.2}\)P (\(A=8\) pJ/m, \(K_{u}=31\) kJ/m\({}^{3}\), \(M_{s}=417\) kA/m) while keeping the nanowire dimensions unchanged, eq.(46) predicts that peak velocities \(v_{max}\approx 1700\) m/s can be achieved even with much smaller DMI tensor components (i.e. \(D_{s}=D_{b}=0.2\) mJ/m\({}^{2}\)).
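Such estimates only require maximizing eq.(39) numerically. In the sketch below (Python), the hard-axis constant \(K=\mu_{0}M_{s}^{2}(N_{y}-N_{x})/2\) built from the demagnetizing factors of eq.(41) and \(\gamma_{0}=2.21\times 10^{5}\) m A\({}^{-1}\)s\({}^{-1}\) are assumptions of this sketch; since the absolute scale depends sensitively on the assumed \(K\), the point is the internal agreement between the numerical maximum of eq.(38) and the \(\kappa\to 0\) asymptote of eq.(40), rather than the exact values of Fig.8-(c):

```python
import numpy as np

mu0, gamma0, alpha = 4e-7 * np.pi, 2.21e5, 0.5
Ms, A, K0 = 1.09e6, 1e-11, 1.25e6
Nx, Ny = 0.0013, 0.0082                      # demagnetizing factors, eq.(41)
K = mu0 * Ms**2 * (Ny - Nx) / 2              # shape anisotropy (assumed estimate)
kappa = K / K0                               # ~4e-3, validating kappa << 1
Delta0, HK = np.sqrt(A / K0), 2 * K / (Ms * mu0)

Da, Ds, Db, Dt = 0.0, 1.5e-3, -1.5e-3, 0.0   # DMI values used in Fig.8-(c)
Dp = np.pi * (Da - Ds) / (mu0 * HK * Ms * Delta0)    # D-tilde', eq.(37)
Dpp = np.pi * (Dt - Db) / (mu0 * HK * Ms * Delta0)   # D-tilde'', eq.(37)

phi = np.linspace(0, 2 * np.pi, 20001)
bracket = (Dpp * np.cos(phi) - Dp * np.sin(phi)) \
          * np.sqrt(1 + kappa * np.sin(phi)**2) / kappa + np.sin(2 * phi)  # eq.(38)
HW = alpha * HK / 2 * bracket.max()          # Walker field, eq.(39)

s = np.sign(Dpp * Dp)                        # selects the case in eq.(40)
HW_asym = alpha * HK / 2 * (Dpp * abs(Dpp) + s * abs(Dp) * Dp) \
          / (kappa * np.hypot(Dpp, Dp))
vmax = Delta0 * gamma0 * alpha / (1 + alpha**2) * HW   # eq.(46)
print(f"H_W = {HW:.3e} A/m (asymptote {HW_asym:.3e} A/m), v_max = {vmax:.0f} m/s")
```

For \(\kappa\) of a few \(10^{-3}\) the asymptote and the numerical maximum agree to well below a percent, illustrating why eq.(40) is a reliable proxy in the thin film limit.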
## IV Conclusion
In this work we modified the existing CCM [23; 24; 36] to include and study the effects of arbitrary DMI tensors on the statics and dynamics of domain walls in magnetic nanowires. We discuss how the effects of a DMI tensor can be described by inspecting how its symmetric traceless (\(D_{s},D_{b}\)), antisymmetric (\(D_{a}\)) and diagonal (\(D_{t}\)) components act on the effective field inside the DW. We first show how DW canting is well described by the energy density of the \(q-\phi-\chi\) model (see Fig.4) and discuss how the canting angle method is able to distinguish diagonal (\(D_{b}\),\(D_{t}\)) DMI contributions from off-diagonal DMI contributions (\(D_{a},D_{s}\)). We also observe how measuring the response of the canting angle \(\chi\) to the simultaneous application of an IP field with both H\({}_{x}\) and H\({}_{y}\) components could potentially be a means to magneto-optically measure symmetric (\(D_{s}\)) and antisymmetric (\(D_{a}\)) contributions to DMI (see Fig.6). Other IP field application schemes could be studied to further enhance the resolution power of this technique. We then proceed to show how, in the presence of both \(D_{s}\) and \(D_{b}\) DMI components, DW tilting can be present even in the absence of IP fields. We derive a simple analytic formula for the canting angle \(\chi\) as a function of \(D_{b}\), valid in the \(D_{b}/D_{s}\ll 1\) limit (see Fig.5). We then study the effect of the different DMI tensor components on the field-driven dynamic properties of DWs in magnetic nanowires. We discover that the effect of the interplay of the Neel- and Bloch-stabilizing DMI components on the magnitude of the WB field is not trivial and determines a departure from the simple linear dependency (Fig.8-(a)) found in the case of pure interface- [12] or bulk-DMI [33]. We then derive an analytic formula describing the dependency of the WB field on the different DMI tensor components (eq.(40)) in the thin film limit, comparing its predictions with micromagnetic simulations (Fig.8-(b)). The very high theoretically achievable DW velocities, on the order of km/s (Fig.8-(c)), are confirmed by simulations and could open the way to a new wave of experimental investigations in low-symmetry magnetic thin films. These results indeed hint at the fact that materials displaying these more exotic forms of DMI, combining both Bloch- and Neel-stabilizing effective fields, could be interesting candidates for novel DW-motion-based technology concepts.
## V Acknowledgement
This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 860060 "Magnetism and the effect of Electric Field" (MagnEFi). |
2307.00144 | Abide by the Law and Follow the Flow: Conservation Laws for Gradient
Flows | Understanding the geometric properties of gradient descent dynamics is a key
ingredient in deciphering the recent success of very large machine learning
models. A striking observation is that trained over-parameterized models retain
some properties of the optimization initialization. This "implicit bias" is
believed to be responsible for some favorable properties of the trained models
and could explain their good generalization properties. The purpose of this
article is threefold. First, we rigorously expose the definition and basic
properties of "conservation laws", that define quantities conserved during
gradient flows of a given model (e.g. of a ReLU network with a given
architecture) with any training data and any loss. Then we explain how to find
the maximal number of independent conservation laws by performing
finite-dimensional algebraic manipulations on the Lie algebra generated by the
Jacobian of the model. Finally, we provide algorithms to: a) compute a family
of polynomial laws; b) compute the maximal number of (not necessarily
polynomial) independent conservation laws. We provide showcase examples that we
fully work out theoretically. Besides, applying the two algorithms confirms for
a number of ReLU network architectures that all known laws are recovered by the
algorithm, and that there are no other independent laws. Such computational
tools pave the way to understanding desirable properties of optimization
initialization in large machine learning models. | Sibylle Marcotte, RΓ©mi Gribonval, Gabriel PeyrΓ© | 2023-06-30T21:32:32Z | http://arxiv.org/abs/2307.00144v2 | # Abide by the Law and Follow the Flow:
###### Abstract
Understanding the geometric properties of gradient descent dynamics is a key ingredient in deciphering the recent success of very large machine learning models. A striking observation is that trained over-parameterized models retain some properties of the optimization initialization. This "implicit bias" is believed to be responsible for some favorable properties of the trained models and could explain their good generalization properties. The purpose of this article is threefold. First, we rigorously expose the definition and basic properties of "conservation laws", which are maximal sets of independent quantities conserved during gradient flows of a given model (e.g. of a ReLU network with a given architecture) with any training data and any loss. Then we explain how to find the exact number of these quantities by performing finite-dimensional algebraic manipulations on the Lie algebra generated by the Jacobian of the model. Finally, we provide algorithms (implemented in SageMath) to: a) compute a family of polynomial laws; b) compute the number of (not necessarily polynomial) conservation laws. We provide showcase examples that we fully work out theoretically. Besides, applying the two algorithms confirms for a number of ReLU network architectures that all known laws are recovered by the algorithm, and that there are no other laws. Such computational tools pave the way to understanding desirable properties of optimization initialization in large machine learning models.
## 1 Introduction
State-of-the-art approaches in machine learning rely on the conjunction of gradient-based optimization with vastly "over-parameterized" architectures. A large body of empirical [27] and theoretical [4] works suggest that, despite the ability of these models to almost interpolate the input data, they are still able to generalize well. Analyzing the training dynamics of these models is thus crucial to gain a better understanding of this phenomenon. Of particular interest is to understand what properties of the initialization are preserved during the dynamics, which is often loosely referred to as being an "implicit bias" of the training algorithm. The goal of this article is to make this statement precise, by properly defining maximal sets of such "conservation laws", by linking these quantities to algebraic computations (namely a Lie algebra) associated with the model parameterization (in our framework, this parameterization is embodied by a mapping \(\phi\)), and finally by exhibiting algorithms to implement these computations in SageMath [26].
**Over-parameterized model.** Modern machine learning practitioners and researchers have found that over-parameterized neural networks (with more parameters than training data points), which are often trained until perfect interpolation, have impressive generalization properties [27; 4]. This performance seemingly contradicts classical learning theory [22], and a large part of the theoretical deep learning literature is aimed at explaining this puzzle. The choice of the optimization algorithm is crucial to the model generalization performance [9; 18; 12], thus inducing an _implicit bias_.
**Implicit bias.** The terminology "implicit bias" informally refers to properties of trained models which are induced by the optimization procedure, typically some form of regularization [19]. For gradient descent, in simple cases such as scalar linear neural networks or two-layer networks with a single neuron, it is actually possible to compute the implicit bias in closed form, and it induces some approximate or exact sparsity regularization [9]. Another interesting case is logistic classification on separable data, where the implicit bias selects the max-margin classifier both for linear models [23] and for two-layer neural networks in the mean-field limit [7]. The key hypothesis for making the implicit bias explicit is that the Riemannian metric associated to the over-parameterization is of Hessian type [9], which is a very strong constraint. Unfortunately, even for matrix factorization (so more than a single neuron), this is not the case, and no closed form is known for the implicit bias [10]. The work of [15] gives conditions on the over-parameterization for this to be possible (for instance the Lie brackets should vanish: they are, as could be expected, closely related to the Lie brackets at the heart of our analysis in Section 3).
**Conservation laws.** Finding functions conserved during the gradient flow optimization of neural networks (a continuous limit of gradient descent often used to model the optimization dynamics) is particularly useful to better understand the flow behavior. One can see conservation laws as a "weak" form of implicit bias: they explain, among a possibly infinite set of minimizers, which properties (e.g. in terms of sparsity, low-rank, etc.) are being favored by the dynamics. If there are enough conservation laws, one has an exact description of the dynamics, and in some cases one can even determine the implicit bias explicitly. Otherwise, one can still predict which properties of the initialization are retained at convergence, and possibly leverage this knowledge. For example, in the case of linear neural networks, certain _balancedness properties_ are satisfied and provide a class of conserved functions [21, 8, 1, 2, 13, 25, 16]. These conservation laws enable for instance to prove the global convergence of the gradient flow [3] under some assumptions. We detail these laws in Proposition 4.1. A subset of these "balancedness" laws still holds in the case of a ReLU activation [8], which reflects the scaling invariance of these networks (see Section 4 for more details). More generally, such conservation laws are a consequence [14] of the invariances of the model: to each one-parameter group of transformations preserving the loss, one can associate a conserved quantity, which is in some sense analogous to Noether's theorem [20]. Similar reasoning is used by [28] to show the influence of initialization on the convergence and generalization performance of the neural network. Our work is complementary to this line of research: instead of assuming a priori known symmetries, we directly analyze the model and give access to conservation laws using algebraic computations. For matrix factorization, as well as for certain ReLU network architectures, this allows us to show that the conservation laws reported in the literature are complete (there are no other independent quantities that would be preserved by all gradient flows).
### 1.1 Contributions
We formalize the notion of a conservation law, a quantity preserved through all gradient flows given a model architecture (e.g. a ReLU neural network with prescribed layers) and a family of "data-fidelity functions", typically associated to the empirical loss on a training set. Our main contributions are:
* to show that for several classical losses, characterizing conservation laws for deep linear (resp. shallow ReLU) networks boils down to analyzing a finite dimensional space of vector fields;
* to propose an algorithm (coded in SageMath) identifying polynomial conservation laws on linear / ReLU network architectures; it identifies all known laws on selected examples;
* to formally define the maximum number of (not necessarily polynomial) independent conservation laws and characterize it a) theoretically via Lie algebra computations; and b) practically via an algorithm (coded in SageMath) computing this number on worked examples;
* to illustrate that in certain settings these findings make it possible to rewrite an over-parameterized flow as an "intrinsic" low-dimensional flow;
* to highlight that the cost function associated to the training of linear and ReLU networks, shallow or deep, with various losses (quadratic and more) fully fits the proposed framework.
A consequence of our results is to show for the first time that conservation laws commonly reported in the literature are maximal: there is no other independent preserved quantity (see Propositions 4.2 and 4.3, Corollary 4.4, and Section 4.2).
## 2 Conservation Laws for Gradient Flows
After some reminders on gradient flows, we formalize the notion of conservation laws.
### 2.1 Over-parameterized models
We consider learning problems, where we denote by \(x_{i}\in\mathbb{R}^{m}\) the features and by \(y_{i}\in\mathcal{Y}\) the targets (for regression) or labels (for classification) in the case of supervised learning, while \(y_{i}\) can be considered constant for unsupervised/self-supervised learning. The prediction is performed by a parametric mapping \(g_{\theta}:\mathbb{R}^{m}\to\mathbb{R}^{n}\) (for instance a neural network) which is trained by empirical risk minimization of a **cost**\(\mathcal{E}\)
\[\min_{\theta\in\mathbb{R}^{D}}\mathcal{E}(\theta)\coloneqq\sum_{i}\ell(g_{ \theta}(x_{i}),y_{i}), \tag{1}\]
where \(\ell\) is the **loss** function (for regression, one typically has \(\mathcal{Y}=\mathbb{R}^{n}\)). The goal of this paper is to analyze what are the functions \(h(\theta)\) which are preserved during the optimization by gradient descent of the cost \(\mathcal{E}(\theta)\). To make the mathematical analysis tractable and provide algorithmic procedure to determine these functions, our fundamental hypothesis is that the cost \(\mathcal{E}\) can be factored - at least _locally_, in a sense that will be made precise - in the form
\[\forall\theta\in\Omega,\quad\mathcal{E}(\theta)=f_{X,Y}(\phi(\theta)) \tag{2}\]
where the **data fidelity**\(f_{X,Y}\) depends on the data \(X\coloneqq(x_{i})_{i}\), \(Y\coloneqq(y_{i})_{i}\) and the loss \(\ell\), while the **mapping**\(\phi\) must be independent from these quantities. Formally, \(\Omega\) is a non-empty open subset of the domain of trainable parameters, \(\mathbb{R}^{D}\) (introduced to capture the local training dynamics) and \(\phi\in\mathcal{C}^{\infty}(\Omega,\mathbb{R}^{d})\).
_Example 2.1_.: (Factorization for _linear_ neural networks) In the two-layer case, with \(r\) neurons, denoting \(\theta=(U,V)\in\mathbb{R}^{n\times r}\times\mathbb{R}^{m\times r}\) (so that \(D=(n+m)r\)), we can factorize \(g_{\theta}(x)\coloneqq UV^{\top}x\) by the mapping \(\phi(\theta)\coloneqq UV^{\top}\) using \(f_{X,Y}(\cdot)=\sum_{i}\ell(\cdot x_{i},y_{i})\). More generally for \(q\) layers, with \(\theta=(U_{1},\cdots,U_{q})\), we can still factorize \(g_{\theta}(x)\coloneqq U_{1}\cdots U_{q}x\) using \(\phi(\theta)\coloneqq U_{1}\cdots U_{q}\) and the same \(f_{X,Y}\). This factorization is _globally_ valid on \(\Omega=\mathbb{R}^{D}\) in the sense that \(f_{X,Y}\) does not depend on \(\theta\).
The notion of locality of the factorization \(f_{X,Y}\circ\phi\) is illustrated by the next example.
_Example 2.2_.: (Factorization for two-layer ReLU network without bias). Consider \(g_{\theta}(x)=\big{(}\sum_{j=1}^{r}u_{k,j}\sigma(\langle v_{j},x\rangle)\big{)}_{k=1}^{n}\), with \(\sigma(t)\coloneqq\max(t,0)\) the ReLU activation function and \(v_{j}\in\mathbb{R}^{m}\), \(u_{k,j}\in\mathbb{R}\). Then, denoting again \(\theta=(U,V)\) with \(U=(u_{k,j})_{k,j}=:(u_{1},\cdots,u_{r})\in\mathbb{R}^{n\times r}\) and \(V=(v_{1},\cdots,v_{r})\in\mathbb{R}^{m\times r}\) (so that \(D=(n+m)r\)), we rewrite \(g_{\theta}(x)=\sum_{j=1}^{r}u_{j}\varepsilon_{j,x}v_{j}^{\top}x\) where \(\varepsilon_{j,x}=\mathbb{I}(v_{j}^{\top}x>0)\) is piecewise constant with respect to \(\theta\). Thus, on any domain \(\Omega\subset\mathbb{R}^{n\times r}\times\mathbb{R}^{m\times r}\) such that \(\varepsilon_{j,x_{i}}(\theta)\coloneqq\mathbb{I}(v_{j}^{\top}x_{i}>0)\) is constant over \(\theta\in\Omega\) for each training sample \(x_{i}\), the model \(g_{\theta}(x)\) can be factorized by the mapping \(\phi(\theta)=(\phi_{jkl})_{jkl}\coloneqq(u_{j}v_{j}^{\top})_{j=1}^{r}\in\mathbb{R}^{r\times n\times m}\) (here \(d=rmn\)) using \(f_{X,Y}(\phi):=\sum_{i}\ell\big{(}\big{(}\sum_{j,l}\varepsilon_{j,x_{i}}\phi_{j,k,l}(x_{i})_{l}\big{)}_{k=1}^{n}\;,\;y_{i}\big{)}\). On \(\Omega\) we obtain a factorizing mapping \(\phi(\theta)\) containing \(r\) matrices of size \(n\times m\) (of rank at most one) associated to a "local" data-fidelity \(f_{X,Y}\) valid in a neighborhood of \(\theta\). A similar factorization is possible for deeper ReLU networks, including with biases [24], as further discussed in the proof of Theorem 2.8 in Appendix B.
A priori, one can consider different "levels" of conservation, depending on whether \(h\) is conserved:
1. during the optimization of \(\mathcal{E}\) for a given loss \(\ell\) and a given data set \((x_{i},y_{i})_{i}\); i.e. during the optimization of \(f_{X,Y}\circ\phi\), for a given \(f_{X,Y}\);
2. given a loss \(\ell\), during the optimization of \(\mathcal{E}\) for _any_ data set; i.e., during the optimization of \(f_{X,Y}\circ\phi\), for _every_ data set \((X,Y)\);
3. during the optimization of \(f\circ\phi\) for _any choice_ of smooth \(f\) (not necessarily associated to a data set \((X,Y)\)).
Our analysis focuses on the last two cases, and shows that under certain assumptions, being conserved _for a given loss and every dataset_ is indeed equivalent to being conserved _for every smooth_\(f\). As a consequence, our theoretical analysis studies functions \(h(\theta)\) preserved by flows of functions \(f\circ\phi\) for a fixed mapping \(\phi\) but any choice of fidelity \(f\). We call these functions "conservation laws" associated with the mapping \(\phi\); they are formally defined in Section 2.3. Theorem 2.8 shows that in the two examples given above, this is equivalent to conservation for all costs \(\mathcal{E}\) of the form (1).
### 2.2 Gradient dynamics
We consider training using the gradient flow (the continuous time limit of gradient descent) of \(f\circ\phi\):
\[\hat{\theta}(t)=-\nabla(f\circ\phi)(\theta(t))=-[\partial\phi(\theta(t))]^{ \top}\nabla f(\phi(\theta(t))),\text{ with }\theta(0)=\theta_{\text{init}}, \tag{3}\]
where the "data fidelity function" \(f\) is differentiable and arises from (2) with some dataset \((x_{i},y_{i})_{i}\) and some loss function \(\ell\). Here \(\partial\phi(\theta)\in\mathbb{R}^{d\times D}\) is the Jacobian of the factorizing mapping. Note that using stochastic optimization methods and discrete gradients would break the exact preservation of the conservation laws, and only approximate conservation would hold, as remarked in [14].
The core of our analysis is to study the algebraic structure of the Jacobian vector fields involved in (3). In practice, the dimensions often satisfy \(\mathtt{rank}\partial\phi(\theta)<\min(d,D)\), i.e., \(\phi(\theta)\) lives in a manifold of lower dimension. This corresponds to the fact that \(\theta\) is an over-parameterized variable, and \(\phi\) is an over-parameterized model with nontrivial conserved quantities during the optimization. Our goal is to determine the "number" of independent functions conserved through _all_ such flows (i.e. for _every choice_ of data-fidelity function \(f\)_restricted to_ the form (2)). We show in Section 2.3 that, under mild assumptions on the loss \(\ell\), these conserved functions are exactly the functions conserved through _all flows_ (3) _for every infinitely smooth_ data-fidelity function \(f\in\mathcal{C}^{\infty}(\phi(\Omega),\mathbb{R})\).
_Example 2.3_.: As a first simple example, consider a two-layer _linear_ neural network in dimension 1 (both for the input and output), with a single neuron. For such an (admittedly trivial) architecture, the function to minimize is factorized by the mapping \(\phi:(u\in\mathbb{R},v\in\mathbb{R})\mapsto uv\in\mathbb{R}\) with \(\theta\coloneqq(u,v)\). One can directly check that the function \(h(u,v)=u^{2}-v^{2}\) satisfies, for all initial conditions \((u_{\text{init}},v_{\text{init}})\in\mathbb{R}^{2}\), \(h(u(t),v(t))=h(u_{\text{init}},v_{\text{init}})\), as soon as \(\theta(t)\coloneqq(u(t),v(t))\) is a solution of the ODE (3) with _some_ differentiable data-fidelity function \(f\). We say in that case that \(h\) is a conservation law for \(\phi\). Are there other such functions? Example 3.6 explains that for this example the answer is negative. This results from algebraic computations, implemented in SageMath, see Section 3.3.
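This conservation can also be checked numerically. The following minimal sketch (plain Python with scipy, outside the paper's SageMath code base; the quadratic fidelity \(f(w)=\tfrac{1}{2}(w-1)^{2}\) and the initialization are arbitrary choices for illustration) integrates the gradient flow (3) for \(\phi(u,v)=uv\) and monitors \(h(u,v)=u^{2}-v^{2}\):

```python
import numpy as np
from scipy.integrate import solve_ivp

phi = lambda u, v: u * v
grad_f = lambda w: w - 1.0          # f(w) = (w - 1)^2 / 2, an arbitrary smooth fidelity

def flow(t, theta):
    u, v = theta
    g = grad_f(phi(u, v))           # chain rule: grad(f o phi) = dphi^T grad f
    return [-g * v, -g * u]         # dphi(u, v) = (v, u)

sol = solve_ivp(flow, [0, 10], [1.5, -0.5])
h = sol.y[0] ** 2 - sol.y[1] ** 2   # candidate conservation law
print(h.min(), h.max())             # stays at h(init) = 2.0 up to solver tolerance
```

Replacing `grad_f` with the gradient of any other smooth \(f\) leaves \(h\) conserved, which is exactly the "for any choice of \(f\)" quantifier in the definition below.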
### 2.3 Conserved functions and Conservation laws
We define conserved functions associated with (collections of) vector fields in \(\mathcal{X}(\Omega)\coloneqq\mathcal{C}^{\infty}(\Omega,\mathbb{R}^{D})\).
**Definition 2.4**.: Let \(\chi\in\mathcal{X}(\Omega)\) be an infinitely smooth vector field. By the Cauchy-Lipschitz theorem, for each initial condition \(\theta_{\text{init}}\), there exists a unique maximal solution \(t\in[0,T_{\theta_{\text{init}}})\mapsto\theta(t,\theta_{\text{init}})\) of the ODE \(\dot{\theta}(t)=\chi(\theta(t))\) with \(\theta(0)=\theta_{\text{init}}\). A function \(h:\Omega\subseteq\mathbb{R}^{D}\to\mathbb{R}\) is _conserved during the flow induced by \(\chi\)_ if \(h(\theta(t,\theta_{\text{init}}))=h(\theta_{\text{init}})\) for each choice of \(\theta_{\text{init}}\) and every \(t\in[0,T_{\theta_{\text{init}}})\).
It is _conserved through a subset_\(V\subset\mathcal{X}(\Omega)\) if \(h\) is conserved during all flows induced by all \(\chi\in V\).
A basic property of \(\mathcal{C}^{1}\) conserved functions (whose proof can be found in Appendix A) corresponds to an "orthogonality" between their gradient and the considered vector fields.
**Proposition 2.5**.: _Given a subset \(V\subset\mathcal{X}(\Omega)\), consider its trace at \(\theta\in\Omega\), defined as the linear space_
\[V(\theta)\coloneqq\operatorname{span}\{\chi(\theta):\chi\in V\}\subseteq \mathbb{R}^{D}. \tag{4}\]
_A function \(h\in\mathcal{C}^{1}(\Omega,\mathbb{R})\) is conserved through \(V\) if, and only if, \(\nabla h(\theta)\perp V(\theta)\) for every \(\theta\in\Omega\)._
Given a family \(F\subseteq\mathcal{C}^{\infty}(\phi(\Omega),\mathbb{R})\) of data-fidelity functions, the set of functions that are conserved during all flows defined by the ODE (3), with each \(f\in F\), corresponds by definition to the functions that are conserved through the subset
\[V_{\phi}[F]\coloneqq\{\chi:\exists f\in F,\;\chi=\nabla(f\circ\phi)\text{ on }\Omega\}. \tag{5}\]
Given a loss \(\ell\), our goal is to study the functions conserved through \(V_{\phi}[F_{\ell}]\), where \(F_{\ell}\) collects all smooth data-fidelity functions \(f\in\mathcal{C}^{\infty}(\phi(\Omega),\mathbb{R})\) that satisfy \((f\circ\phi)(\theta)=\sum_{i=1}^{N}\ell(g_{\theta}(x_{i}),y_{i})\) for some training set of arbitrary size, i.e.
\[F_{\ell}\coloneqq\left\{f\in\mathcal{C}^{\infty}(\phi(\Omega),\mathbb{R}): \exists(X,Y),f\circ\phi(\theta)=f_{X,Y}\circ\phi(\theta)\coloneqq\sum_{i=1}^{N }\ell(g_{\theta}(x_{i}),y_{i})\text{ on }\Omega\right\}. \tag{6}\]
For linear and ReLU networks we show in Theorem 2.8 and Proposition 2.9 that:
1. under (mild) assumptions on the loss \(\ell(\cdot,\cdot)\), being conserved through \(V_{\phi}[F_{\ell}]\) is the same as being conserved through \(V_{\phi}[\mathcal{C}^{\infty}]\coloneqq V_{\phi}[\mathcal{C}^{\infty}(\phi( \Omega),\mathbb{R})]\), i.e. through _any infinitely smooth data-fidelity_;
2. being conserved through the (a priori infinite-dimensional) subspace \(V_{\phi}[\mathcal{C}^{\infty}]\) is in turn equivalent to being conserved through the _finite-dimensional_ subspace \[V_{\phi}\coloneqq\mathrm{span}\{\nabla\phi_{1}(\cdot),\cdots,\nabla\phi_{d}( \cdot)\}=\left\{\theta\mapsto\sum_{i}a_{i}\nabla\phi_{i}(\theta):(a_{1},\dots,a _{d})\in\mathbb{R}^{d}\right\}\] (7) where we write \(\partial\phi(\theta)^{\top}=(\nabla\phi_{1}(\theta),\cdots,\nabla\phi_{d}( \theta))\in\mathbb{R}^{D\times d}\), with \(\nabla\phi_{i}\in\mathcal{X}(\Omega)\).
The first point (that we establish below with Theorem 2.8) motivates the following definition
**Definition 2.6**.: A real-valued function \(h\) is a _conservation law of \(\phi\)_ if it is conserved through \(V_{\phi}[\mathcal{C}^{\infty}]\).
Proposition 2.5 yields the following intermediate result.
**Proposition 2.7**.: \(h\in\mathcal{C}^{1}(\Omega,\mathbb{R})\) _is a conservation law of \(\phi\) iff \(\nabla h(\theta)\perp V_{\phi}[\mathcal{C}^{\infty}](\theta)\), \(\forall\;\theta\in\Omega\)._
The following theorem (whose proof can be found in Appendix B) establishes that in some cases the functions conserved through \(V_{\phi}[F_{\ell}]\) are exactly the conservation laws of \(\phi\).
**Theorem 2.8**.: _Assume that the loss \((z,y)\mapsto\ell(z,y)\) satisfies the condition:_
\[\operatorname*{span}_{y\in\mathcal{Y}}\{\nabla_{z}\ell(z,y)\}=\mathbb{R}^{n}, \forall z\in\mathbb{R}^{n}, \tag{8}\]
_then for linear neural networks, the conservation laws of \(\phi\) are **exactly** the conserved functions through \(V_{\phi}[F_{\ell}]\), with \(\phi\) from Example 2.1. The same result holds for two-layer ReLU networks with \(\phi\) from Example 2.2 under an additional hypothesis on \(\Omega\): the parameter \(\theta\) of the network is such that hidden neurons are associated to pairwise distinct "hyperplanes" (cf Appendix B for details)._
Condition (8) holds for classical losses \(\ell\) (e.g. quadratic/logistic losses), as shown in Lemma B.5 in Appendix B. Note that the additional hypothesis of pairwise distinct hyperplanes for the two-layer ReLU case is a generic hypothesis and is usual (see e.g. the notion of twin neurons in [24]). The tools from Appendix B extend Theorem 2.8 beyond (deep) linear and shallow ReLU networks. An open problem is whether Theorem 2.8 still holds for deep ReLU networks.
For the second point (the link between conservation through \(V_{\phi}[\mathcal{C}^{\infty}]\) and through \(V_{\phi}\)), an apparent difficulty is that the space \(V_{\phi}[\mathcal{C}^{\infty}]\) of all gradient fields is a priori infinite-dimensional. In contrast, the space \(V_{\phi}\) defined in (7) introduces a much simpler _finite-dimensional_ proxy. A cornerstone of our analysis is to show that the study of conservation laws boils down to the study of this finite-dimensional vector space. This will be crucial in Section 4.1, to provide a tractable scheme (i.e. operating in finite dimension) to analyze the algebraic relationship induced by these vector fields. By combining Proposition 2.7 with the observation that for all \(\theta\in\Omega\) we have \(V_{\phi}[\mathcal{C}^{\infty}](\theta)=\mathrm{span}\{\nabla\phi_{1}(\theta),\dots,\nabla\phi_{d}(\theta)\}=\mathrm{range}(\partial\phi(\theta)^{\top}) =\;V_{\phi}(\theta),\) we obtain:
**Proposition 2.9**.: \(h\in\mathcal{C}^{1}(\Omega,\mathbb{R})\) _is a conservation law for \(\phi\) (cf Definition 2.6) if and only if it is conserved though the finite-dimensional space \(V_{\phi}\) defined in (7), i.e. if_
\[\nabla h(\theta)\perp\nabla\phi_{j}(\theta),\;\forall\;\theta\in\Omega,\; \forall j\in\{1,\dots,d\}.\]
_Example 2.10_.: Revisiting Example 2.3, with \(\phi:(u\in\mathbb{R},v\in\mathbb{R})\mapsto uv\) and \(\theta\coloneqq(u,v)\), we saw that \(h((u,v))\coloneqq u^{2}-v^{2}\) is conserved: and indeed \(\langle\nabla h(u,v),\nabla\phi(u,v)\rangle=2uv-2vu=0\), \(\forall(u,v)\).
In this simple example, the characterization of Proposition 2.9 gives a _constructive_ way to find such a conserved function: we only need to find a function \(h\) such that \(\langle\nabla h(u,v),\nabla\phi(u,v)\rangle=\langle\nabla h(u,v),(v,u)^{\top} \rangle=0\). The situation becomes more complex in higher dimensions, since one needs to understand the interplay between the different vector fields in \(V_{\phi}\).
### 2.4 Constructibility of some conservation laws
Observe that in Example 2.10 both the mapping \(\phi\) and the conservation law \(h\) are polynomials, a property that surprisingly systematically holds in all examples of interest in the paper, making it possible to _algorithmically_ construct some conservation laws as detailed now.
By Proposition 2.9, a function \(h\) is conserved if it is in the kernel of the linear operator \(h\in\mathcal{C}^{1}(\Omega,\mathbb{R})\mapsto(\theta\in\Omega\mapsto(\langle \nabla h(\theta),\nabla\phi_{i}(\theta)\rangle)_{i=1,\dots,d})\). Thus, one could look for conservation laws in a prescribed finite-dimensional space by projecting these equations in a basis (as in finite-element
methods for PDEs). Choosing the finite-dimensional subspace could be tricky in general, but for the linear and ReLU cases all known conservation laws are actually polynomial "balancedness-type conditions" [1; 2; 8], see Section 4. In these cases, the vector fields in \(V_{\phi}\) are also polynomial (because \(\phi\) is polynomial, see Theorem B.4 and Lemma B.7 in Appendix B), hence \(\theta\mapsto\langle\nabla h(\theta),\nabla\phi_{i}(\theta)\rangle\) is a polynomial too. This allows us to compute a basis of independent polynomial conservation laws of a given degree (to be freely chosen) for these cases, by simply focusing on the corresponding subspace of polynomials. We coded the resulting equations in SageMath, and on selected examples (see Appendix I) we recovered all known conservation laws, both for ReLU and linear networks. Open-source code is available at [https://github.com/sibyllema/Conservation_laws](https://github.com/sibyllema/Conservation_laws).
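The linear-algebraic structure of this search is easy to illustrate. The sketch below (plain Python with sympy rather than the paper's SageMath code; the tiny architecture, a linear network with \(n=2\), \(m=1\), \(r=1\), i.e. \(\phi(u_{1},u_{2},v)=(u_{1}v,u_{2}v)\), is an arbitrary choice for brevity) sets up a degree-2 polynomial ansatz for \(h\) and solves the linear system expressing that \(\langle\nabla h,\nabla\phi_{i}\rangle\) vanishes identically:

```python
import sympy as sp

u1, u2, v = sp.symbols('u1 u2 v')
theta = [u1, u2, v]
phi = [u1 * v, u2 * v]                      # linear network, n=2, m=1, r=1

# Degree-2 polynomial ansatz for h with unknown coefficients
monomials = [u1**2, u2**2, v**2, u1*u2, u1*v, u2*v]
coeffs = sp.symbols('c0:6')
h = sum(c * mo for c, mo in zip(coeffs, monomials))

# <grad h, grad phi_i> must vanish identically as a polynomial in theta
equations = []
for p in phi:
    inner = sum(sp.diff(h, x) * sp.diff(p, x) for x in theta)
    equations += sp.Poly(sp.expand(inner), *theta).coeffs()

print(sp.linsolve(equations, coeffs))
```

The solution space is one-dimensional, spanned by \(h=u_{1}^{2}+u_{2}^{2}-v^{2}\): the familiar balancedness law for this architecture.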
### 2.5 Independent conserved functions
Having an algorithm to build conservation laws is nice, yet how can we know if we have built "all" laws? This requires first defining a notion of a "maximal" set of functions, which would in some sense be independent. This does not correspond to linear independence of the functions themselves (for instance, if \(h\) is a conservation law, then so is \(h^{k}\) for each \(k\in\mathbb{N}\) but this does not add any other constraint), but rather to pointwise linear independence of their gradients. This notion of independence is closely related to the notion of "functional independence" studied in [6; 17]. For instance, it is shown in [17] that smooth functionally dependent functions are characterized by having dependent gradients everywhere. This motivates the following definition.
**Definition 2.11**.: A family of \(N\) functions \((h_{1},\cdots,h_{N})\) conserved through \(V\subset\mathcal{X}(\Omega)\) is said to be _independent_ if the vectors \((\nabla h_{1}(\theta),\cdots,\nabla h_{N}(\theta))\) are linearly independent for all \(\theta\in\Omega\).
The goal is thus to find the largest set of independent conserved functions. An immediate upper bound holds on the number \(N\) of functionally independent functions \(h_{1},\ldots,h_{N}\) conserved through \(V\): for \(\theta\in\Omega\subseteq\mathbb{R}^{D}\), the space \(W(\theta)\coloneqq\operatorname{span}\{\nabla h_{1}(\theta),\ldots,\nabla h_{N}(\theta)\}\subseteq\mathbb{R}^{D}\) is of dimension \(N\) (by independence) and (by Proposition 2.5) orthogonal to \(V(\theta)\). Thus, it is necessary to have \(N\leq D-\dim V(\theta)\). As we will now see, this bound can be tight _under additional assumptions on \(V\) related to Lie brackets_ (corresponding to the so-called Frobenius theorem). This will in turn lead to a characterization of the maximum possible \(N\).
## 3 Conservation Laws using Lie Algebra
The study of hyper-surfaces trapping the solutions of ODEs is a recurring theme in control theory, since the existence of such surfaces is the basic obstruction to controllability of such systems [5]. The basic result to study these surfaces is the so-called Frobenius theorem from differential calculus (see Section 1.4 of [11] for a good reference). It relates the existence of such surfaces, and their dimensions, to a differential condition involving so-called "Lie brackets" \([u,v]\) between pairs of vector fields (see Section 3.1 below for a more detailed exposition of this operation). However, in most cases of practical interest (such as for instance matrix factorization), the Frobenius theorem is not suitable for a direct application to the space \(V_{\phi}\) because its Lie bracket condition is not satisfied. To identify the number of independent conservation laws, one needs to consider the algebraic closure of \(V_{\phi}\) under Lie brackets. The fundamental object of interest is thus the Lie algebra generated by the Jacobian vector fields, which we recall next.
**Notations.** Given a vector subspace of infinitely smooth vector fields \(V\subseteq\mathcal{X}(\Omega)\coloneqq\mathcal{C}^{\infty}(\Omega,\mathbb{R}^{D})\), we recall (cf Proposition 2.5) that its trace at some \(\theta\) is the subspace
\[V(\theta)\coloneqq\operatorname{span}\{\chi(\theta):\chi\in V\}\subseteq \mathbb{R}^{D}. \tag{9}\]
For each open subset \(\Omega^{\prime}\subseteq\Omega\), we introduce the subspace of \(\mathcal{X}(\Omega^{\prime})\): \(V_{|\Omega^{\prime}}\coloneqq\{\chi_{|\Omega^{\prime}}:\chi\in V\}\).
### 3.1 Background on Lie algebra
A Lie algebra \(A\) is a vector space endowed with a bilinear map \([\cdot,\cdot]\), called a Lie bracket, that verifies for all \(X,Y,Z\in A\): \([X,X]=0\) and the Jacobi identity: \([X,[Y,Z]]+[Y,[Z,X]]+[Z,[X,Y]]=0\).
For the purpose of this article, the Lie algebra of interest is the set of infinitely smooth vector fields \(\mathcal{X}(\Omega)\coloneqq\mathcal{C}^{\infty}(\Omega,\mathbb{R}^{D})\), endowed with the Lie bracket \([\cdot,\cdot]\) defined by
\[[\chi_{1},\chi_{2}]:\quad\theta\in\Omega\mapsto[\chi_{1},\chi_{2}](\theta) \coloneqq\partial\chi_{1}(\theta)\chi_{2}(\theta)-\partial\chi_{2}(\theta)\chi _{1}(\theta), \tag{10}\]
with \(\partial\chi(\theta)\in\mathbb{R}^{D\times D}\) the jacobian of \(\chi\) at \(\theta\). The space \(\mathbb{R}^{n\times n}\) of matrices is also a Lie algebra endowed with the Lie bracket \([A,B]\coloneqq AB-BA\). This can be seen as a special case of (10) in the case of _linear_ vector fields, i.e. \(\chi(\theta)=A\theta\).
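The claim that matrix commutators are a special case of (10) can be verified mechanically. A small symbolic check (hypothetical sympy snippet, not from the paper's code base; the matrices \(A\) and \(B\) are arbitrary examples):

```python
import sympy as sp

t1, t2 = sp.symbols('t1 t2')
theta = sp.Matrix([t1, t2])
A = sp.Matrix([[1, 2], [0, 1]])
B = sp.Matrix([[0, 1], [1, 0]])

chi1, chi2 = A * theta, B * theta      # linear vector fields chi(theta) = A theta

def lie_bracket(c1, c2):
    """Bracket (10): d(chi1) chi2 - d(chi2) chi1."""
    return c1.jacobian(theta) * c2 - c2.jacobian(theta) * c1

# For linear fields the bracket reduces to the matrix commutator [A, B] = AB - BA
assert sp.simplify(lie_bracket(chi1, chi2) - (A * B - B * A) * theta) == sp.zeros(2, 1)
```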
**Generated Lie algebra.** Let \(A\) be a Lie algebra and let \(V\subset A\) be a vector subspace of \(A\). There exists a smallest Lie algebra that contains \(V\). It is denoted \(\operatorname{Lie}(V)\) and called the generated Lie algebra of \(V\). The following proposition [5, Definition 20] constructively characterizes \(\operatorname{Lie}(V)\), where for vector subspaces \([V,V^{\prime}]\coloneqq\{[\chi_{1},\chi_{2}]:\chi_{1}\in V,\chi_{2}\in V^{\prime}\}\), and \(V+V^{\prime}=\{\chi_{1}+\chi_{2}:\chi_{1}\in V,\chi_{2}\in V^{\prime}\}\).
**Proposition 3.1**.: _Given any vector subspace \(V\subseteq A\) we have \(\operatorname{Lie}(V)=\bigcup_{k}V_{k}\) where:_
\[\left\{\begin{array}{ll}V_{0}&\coloneqq V\\ V_{k}&\coloneqq V_{k-1}+[V_{0},V_{k-1}]\end{array}\right.\text{for }\ k\geq 1.\]
We will see in Section 3.2 that the number of conservation laws is characterized by the dimension of the trace \(\operatorname{Lie}(V_{\phi})(\theta)\) defined in (9). The following lemma (proved in Appendix C) gives a stopping criterion to algorithmically determine this dimension (see Section 3.3 for the algorithm).
**Lemma 3.2**.: _Given \(\theta\in\mathbb{R}^{D}\), if for a given \(i\), \(\dim V_{i+1}(\theta^{\prime})=\dim V_{i}(\theta)\) for every \(\theta^{\prime}\) in a neighborhood of \(\theta\), then there exists a neighborhood \(\Omega\) of \(\theta\) such that \(V_{k}(\theta^{\prime})=V_{i}(\theta^{\prime})\) for all \(\theta^{\prime}\in\Omega\) and \(k\geq i\), where the \(V_{i}\) are defined by Proposition 3.1. Thus \(\operatorname{Lie}(V)(\theta^{\prime})=V_{i}(\theta^{\prime})\) for all \(\theta^{\prime}\in\Omega\). In particular, the dimension of the trace of \(\operatorname{Lie}(V)\) is locally constant and equal to the dimension of \(V_{i}(\theta)\)._
### 3.2 Number of conservation laws
The following theorem uses the Lie algebra generated by \(V_{\phi}\) to characterize precisely the number of conservation laws. The proof of this result is based on two successive uses of the Frobenius theorem and can be found in Appendix D (where we also recall Frobenius theorem for the sake of completeness).
**Theorem 3.3**.: _If \(\dim\operatorname{Lie}(V_{\phi})(\theta)\) is locally constant then each \(\theta\in\Omega\) admits a neighborhood \(\Omega^{\prime}\) such that there are \(D-\dim\operatorname{Lie}(V_{\phi})(\theta)\) (and no more) independent conserved functions through \(V_{\phi|\Omega^{\prime}}\)._
Combining Proposition 2.9 and Theorem 3.3 we obtain:
**Corollary 3.4**.: _If \(\dim\operatorname{Lie}(V_{\phi})(\theta)\) is locally constant then each \(\theta\in\Omega\) admits a neighborhood \(\Omega^{\prime}\) such that there are \(D-\dim\operatorname{Lie}(V_{\phi})(\theta)\) (and no more) independent conservation laws of \(\phi\) on \(\Omega^{\prime}\)._
_Remark 3.5_.: The proof of the Frobenius theorem (and therefore of our generalization Theorem 3.3) is actually constructive. From a given \(\phi\), conservation laws are obtained in the proof by integrating in time (_i.e._ solving an advection equation) the vector fields belonging to \(V_{\phi}\). Unfortunately, this cannot be achieved in _closed form_ in general, but in small dimensions, this could be carried out numerically (to compute approximate discretized laws on a grid or approximate them using parametric functions such as Fourier expansions or neural networks).
A fundamental aspect of Corollary 3.4 is to rely only on the computation of the _dimension of the trace_ of the Lie algebra associated with the finite-dimensional vector space \(V_{\phi}\). Yet, even if \(V_{\phi}\) is finite-dimensional, it might be the case that \(\operatorname{Lie}(V_{\phi})\) itself remains infinite-dimensional. Nevertheless, what matters is not the dimension of \(\operatorname{Lie}(V_{\phi})\), but that of _its trace_\(\operatorname{Lie}(V_{\phi})(\theta)\), which is _always_ finite (and potentially much smaller than \(\dim\operatorname{Lie}(V_{\phi})\) even when the latter is finite) and computationally tractable thanks to Lemma 3.2, as detailed in Section 3.3. In Section 4.1 we work out the example of matrix factorization, a non-trivial case where the full Lie algebra \(\operatorname{Lie}(V_{\phi})\) itself remains finite-dimensional.
Corollary 3.4 requires that the dimension of the trace at \(\theta\) of the Lie algebra is locally constant. This is a technical assumption, which typically holds outside a set of pathological points. A good example is once again matrix factorization, where we show in Section 4.1 that this condition holds generically.
### 3.3 Method and algorithm, with examples
Given a factorizing mapping \(\phi\) for the architectures to train, to determine the number of independent conservation laws of \(\phi\) we leverage the characterization of Proposition 3.1 to algorithmically compute \(\dim\mathrm{Lie}(V_{\phi})(\theta)\), using an iterative construction of bases for the subspaces \(V_{k}\) starting from \(V_{0}\coloneqq V_{\phi}\), and stopping as soon as the dimension stagnates, thanks to Lemma 3.2. Our open-sourced code is available at [https://github.com/sibyllema/Conservation_laws](https://github.com/sibyllema/Conservation_laws) and uses SageMath. As we now show, this algorithmic principle allows to fully work out certain settings where the stopping criterion of Lemma 3.2 is reached at the first step (\(i=0\)) or the second one (\(i=1\)). Section 4.2 also discusses its numerical use for an empirical investigation of broader settings.
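A compact version of this iteration fits in a few lines. The sketch below (plain Python with sympy; the released code uses SageMath instead, and evaluating generic ranks at a random rational point is a heuristic of this sketch rather than the paper's exact procedure) computes \(\dim\operatorname{Lie}(V_{\phi})(\theta)\) for the matrix-factorization mapping \(\phi(U,V)=UV^{\top}\) with \(n=m=r=2\), anticipating Example 3.7 below:

```python
import sympy as sp
from itertools import product
import random

n = m = r = 2
U = sp.Matrix(n, r, sp.symbols(f'u0:{n*r}'))
V = sp.Matrix(m, r, sp.symbols(f'v0:{m*r}'))
theta = list(U) + list(V)                       # D = (n+m)r = 8 parameters
phi = list(U * V.T)                             # d = nm = 4 coordinates

# V_0: gradients of the coordinates of phi, as symbolic vector fields
fields = [[sp.diff(p, x) for x in theta] for p in phi]

def bracket(c1, c2):
    """Lie bracket (10): d(chi1) chi2 - d(chi2) chi1, componentwise."""
    J1 = sp.Matrix([[sp.diff(a, x) for x in theta] for a in c1])
    J2 = sp.Matrix([[sp.diff(a, x) for x in theta] for a in c2])
    return list(J1 * sp.Matrix(c2) - J2 * sp.Matrix(c1))

def trace_dim(fs):
    """Rank of the trace at a random (generic) rational point -- a heuristic."""
    pt = {x: sp.Rational(random.randint(1, 100), random.randint(1, 100)) for x in theta}
    return sp.Matrix([[a.subs(pt) for a in f] for f in fs]).rank()

Vk, dim = list(fields), trace_dim(fields)
while True:                                     # iterate V_{k+1} = V_k + [V_0, V_k]
    Vk += [bracket(f0, fk) for f0, fk in product(fields, Vk)]
    new_dim = trace_dim(Vk)
    if new_dim == dim:
        break
    dim = new_dim

print(f"D = {len(theta)}, dim Lie(V_phi)(theta) = {dim}, #laws = {len(theta) - dim}")
```

For this instance the dimension increases once (from 4 to 5) and then stagnates, reporting three independent laws: the \(r(r+1)/2\) balancedness relations encoded by \(U^{\top}U-V^{\top}V\) recalled in Section 4.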
**Example where the iterations of Lemma 3.2 stop at the first step.** This corresponds to the case where \(\mathrm{Lie}(V_{\phi})(\theta)=V_{1}(\theta)=V_{0}(\theta)\coloneqq V_{\phi}(\theta)\) on \(\Omega\). This is the case if and only if \(V_{\phi}\) satisfies
\[[\chi_{1},\chi_{2}](\theta)\coloneqq\partial\chi_{1}(\theta)\chi_{2}(\theta)- \partial\chi_{2}(\theta)\chi_{1}(\theta)\in V_{\phi}(\theta),\qquad\text{for all $\chi_{1},\chi_{2}\in V_{\phi}$ and all $\theta\in\Omega$.} \tag{11}\]
i.e., when the Frobenius Theorem (see Theorem D.1 in Appendix D) applies directly. The first example is a follow-up to Example 2.2.
_Example 3.6_ (two-layer ReLU networks without bias).: Consider \(\theta=(U,V)\) with \(U\in\mathbb{R}^{n\times r},V\in\mathbb{R}^{m\times r}\), \(n,m,r\geq 1\) (so that \(D=(n+m)r\)), and the mapping \(\phi(\theta)\coloneqq(u_{i}v_{i}^{\top})_{i=1,\cdots,r}\in\mathbb{R}^{n\times m\times r}\), where \(U=(u_{1};\cdots;u_{r})\) and \(V=(v_{1};\cdots;v_{r})\). As detailed in Appendix E.1, since \(\phi(\theta)\) is a collection of \(r\) rank-one \(n\times m\) matrices, \(\dim V_{\phi}(\theta)=\mathsf{rank}\,\partial\phi(\theta)=(n+m-1)r\) is constant on the domain \(\Omega\) such that \(u_{i},v_{j}\neq 0\), and \(V_{\phi}\) satisfies (11); hence by Corollary 3.4 each \(\theta\) has a neighborhood \(\Omega^{\prime}\) such that there exist \(r\) (and no more) independent conserved functions through \(V_{\phi|_{\Omega^{\prime}}}\). The \(r\) known conserved functions [8] given by \(h_{i}:(U,V)\mapsto\|u_{i}\|^{2}-\|v_{i}\|^{2}\), \(i=1,\cdots,r\), are independent, hence they are complete.
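As a sanity check, the conservation of the \(h_{i}\) can be verified symbolically from the orthogonality characterization (a function is conserved when its gradient is orthogonal to \(V_{\phi}(\theta)\) everywhere); the snippet below is our own illustration in Python/SymPy.

```python
# Check that h_i(U, V) = ||u_i||^2 - ||v_i||^2 has a gradient orthogonal to
# the gradient fields spanning V_phi for phi(theta) = (u_i v_i^T)_i.
import sympy as sp

n, m, r = 2, 3, 2
U = sp.Matrix(n, r, lambda a, i: sp.Symbol(f"u{a}{i}"))
V = sp.Matrix(m, r, lambda b, i: sp.Symbol(f"v{b}{i}"))
theta = list(U) + list(V)

# Coordinates of phi: all entries u_{a,i} v_{b,i} of the r rank-one blocks.
phi = [U[a, i] * V[b, i] for i in range(r) for a in range(n) for b in range(m)]

for i in range(r):
    h = sum(U[a, i] ** 2 for a in range(n)) - sum(V[b, i] ** 2 for b in range(m))
    grad_h = [h.diff(t) for t in theta]
    for p in phi:
        grad_p = [p.diff(t) for t in theta]
        assert sp.expand(sum(x * y for x, y in zip(grad_h, grad_p))) == 0
print("each h_i is conserved through V_phi")
```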
Example where the iterations of Lemma 3.2 stop at the second step (but not the first one). Our primary example is matrix factorization, as a follow-up to Example 2.1.
_Example 3.7_ (two-layer _linear_ neural networks).: With \(\theta=(U,V)\), where \(U\in\mathbb{R}^{n\times r},V\in\mathbb{R}^{m\times r}\) as in Example 3.6, the mapping \(\phi(\theta)\coloneqq UV^{\top}\in\mathbb{R}^{n\times m}\) (here \(d=nm\)) factorizes the functions minimized during the training of linear neural networks with two layers (see Example 2.1). As shown in Appendix H, condition (11) is not satisfied when \(r>1\) and \(\mathsf{max}(n,m)>1\). Thus, the stopping criterion of Lemma 3.2 is not satisfied at the first step. However, as detailed in Proposition G.3 in Appendix G, \((V_{\phi})_{1}=(V_{\phi})_{2}=\mathrm{Lie}V_{\phi}\), hence the iterations of Lemma 3.2 stop at the second step.
We complete this example in the next section by showing that the known conservation laws are indeed complete (see Corollary 4.4). Whether known conservation laws remain valid and/or _complete_ in this setting and extended ones is further studied in Section 4 and Appendix E using the toolset that we have presented.
### Application: recasting over-parameterized flows as low-dimensional Riemannian flows
One striking application of Corollary 3.4 (in simple cases where \(\dim V_{\phi}(\theta)=\dim\mathrm{Lie}V_{\phi}(\theta)\) is constant on \(\Omega\), i.e., \(\mathsf{rank}\,\partial\phi(\theta)\) is constant on \(\Omega\) and \(V_{\phi}\) satisfies (11)) is to fully rewrite the high-dimensional flow \(\theta(t)\in\mathbb{R}^{D}\) as a low-dimensional flow \(z(t)\in\mathbb{R}^{d}\), where this flow is associated with a Riemannian metric tensor \(M\) that is induced by \(\phi\) and depends on the initialization \(\theta_{\text{init}}\). We insist on the fact that this is only possible in very specific cases, but this phenomenon underlies many existing works which aim at writing in closed form the implicit bias associated with some training dynamics (see Section 1 for some relevant literature). Our analysis sheds some light on cases where this is possible (see Appendix F for a proof). Note that the metric \(M(z,\theta_{\text{init}})\) can have a kernel, typically when \(\phi(\Omega)\) is a sub-manifold. The evolution (12) should then be understood as a flow on this manifold. The kernel of \(M(z,\theta_{\text{init}})\) is orthogonal to the tangent space at \(z\) of this manifold.
**Proposition 3.8**.: _Assume that \(\mathsf{rank}(\partial\phi(\theta))\) is constant on \(\Omega\) and that \(V_{\phi}\) satisfies (11). If \(\theta(t)\in\mathbb{R}^{D}\) satisfies the ODE (3) where \(\theta_{\text{init}}\in\Omega\), then there is \(0<T^{\star}_{\theta_{\text{init}}}\leq T_{\theta_{\text{init}}}\) such that \(z(t)\coloneqq\phi(\theta(t))\in\mathbb{R}^{d}\) satisfies the ODE_
\[\dot{z}(t)=-M(z(t),\theta_{\text{init}})\nabla f(z(t))\quad\text{for all $t\in[0,T^{\star}_{\theta_{\text{init}}}),$ with $z(0)=\phi(\theta_{\text{init}}),$} \tag{12}\]
_where \(M(z(t),\theta_{\text{init}})\in\mathbb{R}^{d\times d}\) is a symmetric semi-definite matrix._
Revisiting Example 3.6 leads to the following analytic example.
_Example 3.9_.: Given the mapping \(\phi:(u\in\mathbb{R}^{*},v\in\mathbb{R}^{d})\mapsto uv\in\mathbb{R}^{d}\), the variable \(z\coloneqq uv\) satisfies (12) with \(M(z,\theta_{\mathtt{init}})=\|z\|_{\delta}\mathbf{I}_{d}+\|z\|_{\delta}^{-1}zz^{ \top},\) with \(\|z\|_{\delta}\coloneqq\delta+\sqrt{\delta^{2}+\|z\|^{2}}\), \(\delta\coloneqq 1/2(u_{\mathtt{init}}^{2}-\|v_{\mathtt{init}}\|^{2})\).
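This closed form can be checked numerically. The sketch below (our own illustration, using explicit Euler integration and the quadratic loss \(f(z)=\tfrac{1}{2}\|z-a\|^{2}\) for an arbitrary target \(a\)) integrates the gradient flow on \(\theta=(u,v)\) and compares \(\dot{z}\) obtained by the chain rule with the right-hand side of (12):

```python
# Numerical check of Example 3.9: z = u v follows the Riemannian flow (12).
import numpy as np

rng = np.random.default_rng(0)
d = 3
u, v = 1.3, rng.normal(size=d)
a = rng.normal(size=d)                       # target in f(z) = 0.5*||z - a||^2
delta = 0.5 * (u**2 - v @ v)                 # conserved along the flow

def rhs(u, v):
    # gradient flow on theta = (u, v): du = -(grad f . v), dv = -u grad f
    g = u * v - a
    return -(g @ v), -u * g

dt, steps = 1e-4, 2000                       # explicit Euler integration
for _ in range(steps):
    du, dv = rhs(u, v)
    u, v = u + dt * du, v + dt * dv

z = u * v
nz = delta + np.sqrt(delta**2 + z @ z)       # ||z||_delta
M = nz * np.eye(d) + np.outer(z, z) / nz     # metric of Example 3.9
du, dv = rhs(u, v)
zdot_chain = du * v + u * dv                 # chain rule on z = u v
print(np.max(np.abs(zdot_chain + M @ (z - a))))   # ~0 up to Euler error
```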
Another analytic example is discussed in Appendix F. In light of these results, an interesting perspective is to better understand the dependence of the Riemannian metric with respect to initialization, to possibly guide the choice of initialization for better convergence dynamics.
## 4 Conservation Laws for Linear and ReLU Neural Networks
To showcase the impact of our results, we show how they can be used to determine whether known conservation laws for linear (resp. ReLU) neural networks are complete, and to recover these laws _algorithmically_ using factorizing mappings \(\phi\) adapted to these two settings. Concretely, we study the conservation laws for neural networks with \(q\) layers, and either a linear or ReLU activation, with an emphasis on \(q=2\). We write \(\theta=(U_{1},\cdots,U_{q})\) with \(U_{i}\in\mathbb{R}^{n_{i-1}\times n_{i}}\) the weight matrices and we assume that \(\theta\) satisfies the gradient flow (3) for some data fidelity function \(f\in\mathcal{C}^{\infty}(\phi(\Omega),\mathbb{R})\). In the linear case the mapping is \(\phi_{\mathtt{Lin}}(\theta)\coloneqq U_{1}\cdots U_{q}\). For ReLU networks, we use the (polynomial) mapping \(\phi_{\mathtt{ReLU}}\) of [24, Definition 6], which is defined for any (deep) feedforward ReLU network, with or without bias. In the simplified setting of networks without biases it reads explicitly as:
\[\phi_{\mathtt{ReLU}}(U_{1},\cdots,U_{q})\coloneqq\Big{(}U_{1}[:,j_{1}]U_{2}[j _{1},j_{2}]\cdots U_{q-1}[j_{q-2},j_{q-1}]U_{q}[j_{q-1},:]\Big{)}_{j_{1},\cdots,j_{q-1}} \tag{13}\]
with \(U[i,j]\) the \((i,j)\)-th entry of \(U\). This covers \(\phi(\theta)\coloneqq(u_{j}v_{j}^{\top})_{j=1}^{r}\in\mathbb{R}^{n\times m \times r}\) from Example 2.2.
Some conservation laws are known for the linear case \(\phi_{\mathtt{Lin}}\) [1, 2] and for the ReLU case \(\phi_{\mathtt{ReLU}}\) [8].
**Proposition 4.1** ([1, 2, 8] ).: _If \(\theta\coloneqq(U_{1},\cdots,U_{q})\) satisfies the gradient flow (3), then for each \(i=1,\cdots,q-1\) the function \(\theta\mapsto U_{i}^{\top}U_{i}-U_{i+1}U_{i+1}^{\top}\) (resp. the function \(\theta\mapsto\text{diag}\left(U_{i}^{\top}U_{i}-U_{i+1}U_{i+1}^{\top}\right)\)) defines \(n_{i}\times(n_{i}+1)/2\) conservation laws for \(\phi_{\mathtt{Lin}}\) (resp. \(n_{i}\) conservation laws for \(\phi_{\mathtt{ReLU}}\))._
Proposition 4.1 defines \(\sum_{i=1}^{q-1}n_{i}\times(n_{i}+1)/2\) conserved functions for the linear case. In general they are _not_ independent, and we give below in Proposition 4.2, for the case of \(q=2\), the _exact_ number of independent conservation laws among these particular laws. Establishing whether there are other (previously unknown) conservation laws is an open problem in the general case \(q>2\). We have already answered this question negatively in the two-layer ReLU case without bias (see Example 3.6). In the following section (Corollary 4.4), we show the same result in the linear case \(q=2\). Numerical computations suggest this is still the case for deeper linear and ReLU networks, as detailed in Section 4.2.
### The matrix factorization case (\(q=2\))
To simplify the analysis when \(q=2\), we rewrite \(\theta=(U,V)\) as a vertical matrix concatenation denoted \((U;V)\in\mathbb{R}^{(n+m)\times r}\), and \(\phi(\theta)=\phi_{\mathtt{Lin}}(\theta)=UV^{\top}\in\mathbb{R}^{n\times m}\).
How many independent conserved functions are already known? The following proposition refines Proposition 4.1 for \(q=2\) by detailing how many _independent_ conservation laws are already known. See Appendix G.1 for a proof.
**Proposition 4.2**.: _Consider \(\Psi:\theta=(U;V)\mapsto U^{\top}U-V^{\top}V\in\mathbb{R}^{r\times r}\) and assume that \((U;V)\) has full rank, denoted \(\mathtt{rk}\). Then the function \(\Psi\) gives \(\mathtt{rk}\cdot(2r+1-\mathtt{rk})/2\) independent conserved functions._
There exist no more independent conserved functions. We now come to the core of the analysis, which consists in actually computing \(\mathrm{Lie}(V_{\phi})\) as well as its traces \(\mathrm{Lie}(V_{\phi})(\theta)\) in the matrix factorization case. The crux of the analysis, which enables us to fully work out the case \(q=2\) theoretically, is that \(V_{\phi}\) is composed of _linear_ vector fields (explicitly characterized in Proposition G.2 in Appendix G); the Lie bracket of two linear fields is itself linear and explicitly characterized with skew matrices, see Proposition G.3 in Appendix G. Eventually, what we need to compute is the dimension of the trace \(\mathrm{Lie}(V_{\phi})(U,V)\) for any \((U,V)\). We prove the following in Appendix G.
**Proposition 4.3**.: _If \((U;V)\in\mathbb{R}^{(n+m)\times r}\) has full rank, denoted \(\mathtt{rk}\), then: \(\dim\mathrm{Lie}(V_{\phi})(U;V)=(n+m)r-\mathtt{rk}\cdot(2r+1-\mathtt{rk})/2\)._
With this explicit characterization of the trace of the generated Lie algebra and Proposition 4.2, we conclude that Proposition 4.1 has indeed exhausted the list of independent conservation laws.
**Corollary 4.4**.: _If \((U;V)\) has full rank, then all conserved functions are given by \(\Psi:(U,V)\mapsto U^{\top}U-V^{\top}V\). In particular, there exist no more independent conserved functions._
### Numerical guarantees in the general case
The expressions derived in the previous section are specific to the linear case \(q=2\). For deeper linear networks and for ReLU networks, the vector fields in \(V_{\phi}\) are non-linear polynomials, and computing Lie brackets of such fields can increase the degree, which could potentially make the generated Lie algebra infinite-dimensional. One can however use Lemma 3.2 and stop as soon as \(\dim\left((V_{\phi})_{k}(\theta)\right)\) stagnates. Numerically comparing this dimension with the number \(N\) of independent conserved functions known in the literature (predicted by Proposition 4.1) on a sample of depths/widths of small size, we empirically confirmed that there are no more conservation laws than the ones already known for deeper linear networks and for ReLU networks too (see Appendix I for details). Our code is open-source and available at [https://github.com/sibyllema/Conservation_laws](https://github.com/sibyllema/Conservation_laws). It is worth mentioning again that in all tested cases \(\phi\) is polynomial, and there is a maximal set of conservation laws that are also polynomial, which are found algorithmically (as detailed in Section 2.4).
## Conclusion
In this article, we proposed a constructive program for determining the number of conservation laws. An important avenue for future work is the consideration of more general classes of architectures, such as deep convolutional networks, normalization, and attention layers. Note that while we focus in this article on gradient flows, our theory can be applied to any space of displacements in place of \(V_{\phi}\). This could be used to study conservation laws for flows with higher order time derivatives, for instance gradient descent with momentum, by lifting the flow to a higher dimensional phase space. A limitation that warrants further study is that our theory is restricted to continuous time gradient flow. Gradient descent with finite step size, as opposed to continuous flows, disrupts exact conservation. The study of approximate conservation presents an interesting avenue for future work.
## Acknowledgement
The work of G. Peyré was supported by the European Research Council (ERC project NORIA) and the French government under the management of the Agence Nationale de la Recherche as part of the "Investissements d'avenir" program, reference ANR19-P3IA-0001 (PRAIRIE 3IA Institute). The work of R. Gribonval was partially supported by the AllegroAssai ANR project ANR-19-CHIA-0009.
|
2309.15783 | AutoEFT: Automated Operator Construction for Effective Field Theories | The program AutoEFT is described. It allows one to generate Effective Field
Theories (EFTs) from a given set of fields and symmetries. Allowed fields
include scalars, spinors, gauge bosons, and gravitons. The symmetries can be
local or global Lie groups based on U(1) and SU(N). The mass dimension of the
EFT is limited only by the available computing resources. The operators are
stored in a compact, human and machine-readable format. Aside from the program
itself, we provide input files for EFTs based on the Standard Model and a
number of its extensions. These include additional particles and symmetries,
EFTs with minimal flavor violation, and gravitons. | Robert V. Harlander, Magnus C. Schaaf | 2023-09-27T16:57:43Z | http://arxiv.org/abs/2309.15783v2 | # AutoEFT: Automated Operator Construction for Effective Field Theories
###### Abstract
The program AutoEFT is described. It allows one to generate Effective Field Theories (EFTs) from a given set of fields and symmetries. Allowed fields include scalars, spinors, gauge bosons, and gravitons. The symmetries can be local or global Lie groups based on \(U(1)\) and \(SU(N)\). The mass dimension of the EFT is limited only by the available computing resources. The operators are stored in a compact, human and machine-readable format. Aside from the program itself, we provide input files for EFTs based on the Standard Model and a number of its extensions. These include additional particles and symmetries, EFTs with minimal flavor violation, and gravitons.
keywords: EFT, SMEFT, Operator Basis

journal: TIFK-23-25
###### Contents
* 1 Introduction
* 2 Preliminaries
* 3 Installation
* 3.1 Installing AutoEFT from PyPI
* 3.2 Installing AutoEFT from conda-forge
* 3.3 Building AutoEFT from Source Code
* 4 The Model File
* 4.1 Basic Structure
* 4.2 Realistic Examples
* 4.2.1 QED
* 4.2.2 QCD
* 4.2.3 Standard Model
* 4.3 Extended Models
* 4.3.1 Additional Particles
* 4.3.2 Additional Gauge Symmetries
* 4.4 MFV Model
* 5 Constructing Operators
* 5.1 Running AutoEFT
* 5.2 Output Format
* 5.3 Counting SMEFT Operators
* 5.4 Limitations of AutoEFT
* 5.4.1 Computing Resources
* 5.4.2 Generators of the Symmetric Group
* 5.4.3 Conceptual Limitations
* 6 Working With Operator Files
* 6.1 Loading Operator Files
* 6.2 LaTeX Output
* 7 Conclusion
* A Relations to Conventional Notation
* B Invoking autoeft
* B.1 autoeft Commands
* B.1.1 sample-model Command
* B.1.2 construct Command
* B.1.3 count Command
* B.1.4 latex Command
* B.1.5 generators Command
* B.2 Environment Variables
* C Model File
* D Output Files
* E Vocabulary Glossary
## 1 Introduction
The mass of the Higgs boson is in a region where the Standard Model (SM) remains free of theoretical inconsistencies up to very large mass scales [1, 2, 3, 4]. Disregarding arguments of naturalness, which have lost much of their persuasive power due to the absence of any new particle discoveries in the TeV range, one may have to face the possibility that on-shell discoveries of particles belong to the past, and fundamental physics beyond the Standard Model will manifest itself at (current or future) particle colliders only through virtual effects [5].
Fortunately, such effects can be parameterized in a systematic way in terms of Effective Field Theories (EFTs). Ideally, the free parameters of an EFT, the Wilson coefficients, or characteristic subsets thereof, can be determined experimentally via precision measurements. Comparison to theoretical calculations of these coefficients via matching to theoretical models of the heavy physics could lead to new fundamental insights about the nature of UV physics.1
Footnote 1: Dubbed the βCinderella approachβ in Ref. [5].
An EFT is based on the field content of a renormalizable Lagrangian \({\cal L}_{\leq 4}\), and incorporates effects up to order \(\left(E/\Lambda\right)^{N}\) in processes at energies \(E\), where \(\Lambda\) is the scale of new physics. The corresponding effective Lagrangian can be written as
\[{\cal L}={\cal L}_{\leq 4}+\sum_{d=5}^{N+4}\sum_{n}\frac{C_{n}^{(d)}}{\Lambda^{d- 4}}{\cal O}_{n}^{(d)}\,, \tag{1}\]
where the higher-dimensional operators \({\cal O}_{n}^{(d)}\) are composed of all fields of \({\cal L}_{\leq 4}\), and \(C_{n}^{(d)}\) are the Wilson coefficients. For example, if \({\cal L}_{\leq 4}\) is the SM Lagrangian, then the higher-dimensional operators are composed of all SM fields, and \({\cal L}\) is referred to as the Standard Model Effective Field Theory (SMEFT) Lagrangian.
In a top-down approach, the effective operators follow from a UV-complete theory \(\mathcal{L}_{\rm UV}\) by integrating out the heavy degrees of freedom in the path integral of the generating functional. Such an approach is pursued in the UOLEA, for example, which also provides the Wilson coefficients in terms of the parameters of \(\mathcal{L}_{\rm UV}\)[6, 7, 8, 9, 10].
More common, however, is a bottom-up approach, which will be adopted in this paper. Here, one constructs all higher-dimensional operators by combining the fields of the low-energy theory \(\mathcal{L}_{\leq 4}\) in such a way that they obey all symmetry constraints. At the same time, however, one requires the set of operators to be non-redundant in order to ensure that the Wilson coefficients are well-defined. Redundancies among the set of operators can arise from several sources. First, since total derivatives in the Lagrangian do not contribute to the action, operators could be related by integration-by-parts (IbP) identities. Second, operators could be linearly dependent due to algebraic relations such as Fierz or Schouten identities. Third, higher-dimensional operators that vanish due to equations-of-motion (EoMs) can be eliminated from the EFT by field redefinitions in the path integral of the generating functional [11, 12]. Finally, operators could be related by permutations of the fields transforming in equal representations [13].
Obviously, the EFT which is most relevant from a phenomenological point of view is SMEFT. In fact, the dimension-five operators are strong candidates for being the source of neutrino masses. The multiple attempts needed to arrive at the SMEFT-bases at dimension six and seven testify to the complexity of constructing a complete and non-redundant set of operators, despite the fact that their numbers are still quite manageable (84 and 30, respectively) [14, 15, 16, 17].2 Towards higher mass dimension, this number increases roughly exponentially. It can actually be computed exactly using Hilbert-series techniques [18, 19, 20, 21, 22, 23], but also by more direct methods [24, 25, 26].3 Despite the fact that the number of operators in the SMEFT basis at mass dimension eight already amounts to 44807, it was still possible to construct them by largely manual efforts [28]. Nevertheless, an algorithmic procedure clearly becomes desirable.
Footnote 2: See Section 5.3 concerning the counting of operators.
Footnote 3: See also Ref. [27] for a summary of EFT software tools.
Indeed, Refs. [29, 30, 31] managed to establish and implement such an algorithm, partly building on earlier work in this direction [13, 32]. In Ref. [33], we reported on an independent implementation of that algorithm and used it to derive for the first time the SMEFT operator bases at mass dimensions 10, 11, and 12. The current paper accompanies the publication of the associated computer program, named AutoEFT.4
It is available as open source under the MIT license,5 is based on Python,6 and uses only publicly available software libraries, in particular SageMath.7 Since the algorithm of Refs. [29, 30] is not specific to SMEFT (see Refs. [34, 35, 36, 37, 38], for example), it is possible to use AutoEFT also in extended theories with additional light particles beyond the SM spectrum. For example, Ref. [33] also includes the operators of the gravity-extension of SMEFT (GRSMEFT) up to mass dimension 12.
Footnote 5: [https://spdx.org/licenses/MIT.html](https://spdx.org/licenses/MIT.html)
Footnote 6: [https://www.python.org/](https://www.python.org/)
Footnote 7: [https://www.sagemath.org](https://www.sagemath.org)
This paper provides an introduction to AutoEFT, describing the necessary notation, the preparation of the input file, the commands to generate the operator basis, and the format of the output files. Section 2 introduces the theoretical and notational background required to interpret the AutoEFT input and output. The installation of AutoEFT is described in Section 3. Section 4 explains the structure of the model file to be processed by AutoEFT, and provides comprehensive examples for various models. The operator construction using AutoEFT is showcased in Section 5, including a discussion on the output format, as well as AutoEFT's current limitations. Section 6 contains examples on how AutoEFT can further process the output. In addition, we include a reference manual in the appendix, which can be used to look up particular features or specifications related to the usage of AutoEFT.
## 2 Preliminaries
For fixed values of the Wilson coefficients and the parameters of the low-energy theory, an EFT can be considered as a vector in the space of all higher-dimensional operators. AutoEFT constructs a basis in this space of operators for a fixed, but in principle arbitrary value of \(d\). In doing so, it takes into account the constraints arising from external (Lorentz) and internal symmetries. It ensures that the basis is non-redundant, meaning that no two operators are interrelated through EoMs, IbP or algebraic identities.
The field content and the symmetry groups of the low-energy theory \(\mathcal{L}_{\leq 4}\) are supplied to AutoEFT via an input file, referred to as _model file_ in the following. Its detailed structure will be defined in Section 4 and Appendix C. In this section, we provide the notational background for its contents.
AutoEFT allows for particles with spin 0, 1/2, 1, and 2 in the spectrum of \(\mathcal{L}_{\leq 4}\).8 For spin 0 and spin 1/2, it makes no difference for the construction of an EFT [34]
whether they are massive or massless, and thus also for AutoEFT. Higher-spin particles represented by vector or tensor fields are currently restricted to the massless case though. Note that this is in line with SMEFT, which is formulated in the unbroken phase of the SM Lagrangian. The massive vector bosons are recovered by performing the electroweak symmetry breaking in SMEFT explicitly. Furthermore, since scalar and spinor fields are allowed to be massive, AutoEFT can also be used to generate EFTs in which all massive vector bosons are integrated out (e.g., Low-Energy Effective Field Theory (LEFT)/Weak Effective Field Theory (WEFT), parameterizing effects between the electroweak scale and \(\Lambda_{\rm QCD}\)). For AutoEFT, a particle is thus uniquely identified by its \(U(1)\) charges, the representations according to which it transforms under the Lorentz and the non-abelian internal symmetry groups, and a possible generation index.
The irreducible representations of the Lorentz group--which can be identified with \(SU(2)_{l}\times SU(2)_{r}\) for our purpose--are characterized by \((j_{l},j_{r})\), where \(j_{l/r}\) are non-negative integers or half-integers. The most important irreducible representations are given by \((0,0)\), \((1/2,0)\), and \((1,0)\), corresponding to scalars \(\phi\), left-handed Weyl spinors \(\psi_{{\rm L}\alpha}\), and self-dual 2-forms \(F_{{\rm L}\,\alpha\beta}\). For simplicity, we will refer to the latter also as "left-handed field-strength tensors" in the following. In addition, we consider self-dual ("left-handed") Weyl tensors \(C_{{\rm L}\,\alpha\beta\gamma\delta}\) transforming as \((2,0)\) which are required for gravity. Since \(j_{l}=0\) for all of these "elemental" representations, they can also be characterized by their _helicity_
\[h=j_{r}-j_{l}\,. \tag{2}\]
The conjugate ("right-handed") fields \(\psi_{\rm R}{}^{\dot{\alpha}}\), \(F_{\rm R}^{\,\dot{\alpha}\dot{\beta}}\), and \(C_{\rm R}^{\,\dot{\alpha}\dot{\beta}\dot{\gamma}\dot{\delta}}\) transform as \((0,1/2)\), \((0,1)\), and \((0,2)\) under the Lorentz group and thus carry helicities opposite to those of the corresponding left-handed fields. Here and in the following, \(\alpha\), \(\beta\), \(\dots\) and \(\dot{\alpha}\), \(\dot{\beta}\), \(\dots\) denote fundamental \(SU(2)_{l}\) and \(SU(2)_{r}\) spinor indices, respectively, unless indicated otherwise.9
Footnote 9: We do not consider the irreducible representation \((1/2,1/2)\) corresponding to Lorentz four-vectors explicitly, because we assume that vector fields always arise as gauge fields and thus appear only as part of a field-strength tensor or the covariant derivative. For more details, see the subsequent main text.
All other fields which occur in common Quantum Field Theories (QFTs) transform in representations which can be composed of these elemental representations \((|h|,0)\) and their conjugate versions \((0,|h|)\). For example, the bispinor and the field-strength tensor transform in the direct sums of the left- and right-handed Weyl spinor representations \((1/2,0)\oplus(0,1/2)\), and the self- and anti-self-dual 2-form representations
\((1,0)\oplus(0,1)\), respectively. In AutoEFT, however, one simply defines each irreducible component as a separate field. Concrete examples will be given in Section 4.
The output of AutoEFT is thus formulated in terms of the objects summarized in Table 1, as well as the covariant derivative \(D^{\dot{\alpha}}_{\alpha}\). The action of \(n\) derivatives on a field \(\Phi\) is understood in the AutoEFT output as the combined object
\[(D^{n}\Phi)^{(\dot{\alpha}\dot{\beta}\dots)}_{(\alpha\beta\dots)}\sim(D^{n}\Phi)^{\dot{\alpha}\dot{\beta}\dots}_{\alpha\beta\dots}+(D^{n}\Phi)^{\dot{\alpha}\dot{\beta}\dots}_{\beta\alpha\dots}+(D^{n}\Phi)^{\dot{\beta}\dot{\alpha}\dots}_{\alpha\beta\dots}+(D^{n}\Phi)^{\dot{\beta}\dot{\alpha}\dots}_{\beta\alpha\dots}+\dots\,, \tag{3}\]
where the dotted and undotted indices are separately symmetrized. For each term on the right hand side of Eq. (3), the first \(n\) pairs of dotted and undotted indices belong to the covariant derivatives, whereas all remaining indices are part of the field \(\Phi\).
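For instance (our own illustration of this notation), a single covariant derivative acting on a left-handed spinor carries one dotted and one undotted index, so Eq. (3) reduces to a symmetrization over the two undotted indices only:

\[(D\psi_{\rm L})^{\dot{\alpha}}_{(\alpha\beta)}\sim(D\psi_{\rm L})^{\dot{\alpha}}_{\alpha\beta}+(D\psi_{\rm L})^{\dot{\alpha}}_{\beta\alpha}\,,\]

where \(\dot{\alpha}\) and the first undotted index belong to the covariant derivative, and the remaining undotted index is the spinor index of \(\psi_{\rm L}\).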
In order to facilitate the translation of the operators into the more common notation of bispinors \(\Psi\), field-strength tensors \(F^{\mu\nu}\), Weyl tensors \(C^{\mu\nu\rho\sigma}\), and covariant derivatives \(D^{\mu}\), with Lorentz four-vector indices \(\mu,\nu,\dots\), we collect the necessary relations in Appendix A.
Concerning internal symmetries, AutoEFT allows for local and global \(U(1)\) and \(SU(N)\) groups.10 All fields are assumed to transform in an irreducible representation of the internal symmetry groups. AutoEFT requires that each \(U(1)\) charge of a field is given by a (fractional) multiple of some elementary charge (which does not need to be specified further). In the model file, the \(U(1)\) charges are thus defined by rational numbers. Examples will be given in Section 4.
Footnote 10: Concerning \(U(N)\), see Section 4.4.
\begin{table}
\begin{tabular}{c c c c} \hline \hline field & \((j_{l},j_{r})\) & \(h\) & name \\ \hline \(\phi\) & \((0,0)\) & \(0\) & scalar \\ \(\psi_{\rm L}\) & \((1/2,0)\) & \(-1/2\) & left-handed spinor \\ \(\psi_{\rm R}\) & \((0,1/2)\) & \(+1/2\) & right-handed spinor \\ \(F_{\rm L}\) & \((1,0)\) & \(-1\) & left-handed field-strength tensor \\ \(F_{\rm R}\) & \((0,1)\) & \(+1\) & right-handed field-strength tensor \\ \(C_{\rm L}\) & \((2,0)\) & \(-2\) & left-handed Weyl tensor \\ \(C_{\rm R}\) & \((0,2)\) & \(+2\) & right-handed Weyl tensor \\ \hline \hline \end{tabular}
\end{table}
Table 1: The list of irreducible representations of the Lorentz group supported by AutoEFT. Each representation is associated with a placeholder symbol for the field, and a unique value for the helicity \(h\).
The irreducible representations of \(SU(N)\) are encoded via their one-to-one correspondence to Young diagrams, which can be represented by lists of non-increasing positive integers (also referred to as _integer partitions_ in the following).11 For example, the fundamental representation of \(SU(N)\) can be specified as
Footnote 11: AutoEFT also supports the characterization of these representations by Dynkin labels; see Section 4.2.2 for a concrete example.
\[[1]\,. \tag{4}\]
For the anti-fundamental representation, whose Young diagram is a single column of \(N-1\) boxes, it is
\[[\underbrace{1,1,\ldots,1}_{N-1}]\equiv[1^{N-1}]\,, \tag{5}\]
and for the adjoint representation, whose Young diagram has two boxes in the first row followed by \(N-2\) rows of a single box, the correspondence is
\[[2,\underbrace{1,\ldots,1}_{N-2}]\equiv[2,1^{N-2}]\,, \tag{6}\]
where we used a common short-hand notation for integer partitions with long sequences of the same number.
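The dimension of the irreducible representation encoded by such a partition follows from the hook content formula; the snippet below (our own illustration, not part of AutoEFT) reproduces the dimensions \(N\), \(N\), and \(N^{2}-1\) of the representations (4)-(6):

```python
# Dimension of the SU(N) irrep labelled by a Young diagram, given as a
# non-increasing list of positive integers, via the hook content formula.
from fractions import Fraction

def su_n_dim(partition, N):
    # column lengths of the diagram (conjugate partition), used for hooks
    cols = [sum(1 for row in partition if row > j) for j in range(partition[0])]
    dim = Fraction(1)
    for i, row in enumerate(partition):
        for j in range(row):
            content = N + j - i                       # N + (column - row)
            hook = (row - j) + (cols[j] - i) - 1      # arm + leg + 1
            dim *= Fraction(content, hook)
    assert dim.denominator == 1
    return int(dim)

N = 3
print(su_n_dim([1], N))                  # fundamental: 3
print(su_n_dim([1] * (N - 1), N))        # anti-fundamental: 3
print(su_n_dim([2] + [1] * (N - 2), N))  # adjoint: 8
```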
Similar to the Lorentz group, the fields composing the operators in the output of AutoEFT carry only fundamental indices of the internal symmetry groups. For fields transforming in the anti-fundamental or adjoint representations, one can translate this directly to a more common notation using the relations provided in Appendix A. While this is sufficient for SMEFT, it may be desirable to translate other representations in extended theories with light fields. In this case, the corresponding Clebsch-Gordan coefficients need to be taken into account.12
Footnote 12: For example, the sextet representation \(\raisebox{-0.5pt}{\includegraphics[]{figures/C4.eps}}\sim[2]\) of \(SU(3)\) can be related to the symmetric product of two fundamental representations using the Clebsch-Gordan coefficients computed in Ref. [39]. Consequently, a field transforming in this representation can be denoted either by one sextet index or two fundamental indices, related by the Clebsch-Gordan coefficients.
## 3 Installation
AutoEFT is implemented in Python and makes use of several functions provided by the free open-source mathematics software system SageMath. Since intermediate
expressions during the construction procedure can become exceedingly large, certain algebraic operations are passed to FORM [40, 41]. All remaining dependencies are third-party Python libraries, included for the user's convenience, which provide functionality such as input validation and console markup. For a standard installation of AutoEFT, the following software needs to be installed on the system:
Python (_version 3.8_ or later)
This requirement is fulfilled by default in most cases. Either there is a system-wide Python installation that is also used by SageMath, or SageMath comes with its own version of the Python interpreter. If the installation is done via the conda/mamba package management system, a suitable Python version is automatically included in the virtual environment.
SageMath (_version 9.3_ or later)
The SageMath library only needs to be installed explicitly if AutoEFT is _not_ installed using the conda/mamba package management system.13 Installation details can be found at [https://doc.sagemath.org/html/en/installation/index.html](https://doc.sagemath.org/html/en/installation/index.html).
Footnote 13: Although there is some effort towards modularizing SageMath into separate distributions, the packages required by AutoEFT are only available in the complete library for now. We advise either installing SageMath by "hand", or using the conda/mamba package management system, which installs SageMath automatically in a virtual environment.
FORM (_version 4.3_ or later)
The FORM home page can be found at [https://www.nikhef.nl/~form/](https://www.nikhef.nl/~form/). To use AutoEFT together with FORM, make sure that there is an executable named form on the system path or on a path specified by the environment variable AUTOEFT_PATH (cf. Appendix B.2).
### Installing AutoEFT from PyPI
This installation method requires an already installed and running version of SageMath. Afterwards, AutoEFT and its dependencies can be installed from the _Python Package Index (PyPI)14_ by simply running:15
Footnote 14: [https://pypi.org/](https://pypi.org/)
Footnote 15: On macOS using Homebrew, it may be necessary to precede this statement by PYTHONEXECUTABLE=</path/to/sage> with the proper path to the SageMath executable inserted. In addition, it may be necessary to add the path to SageMathβs executables to the $PATH environment variable.
* sage -pip install autoeft
### Installing AutoEFT from conda-forge
Since the SageMath distribution is part of the _conda-forge_[42] channel, there is no requirement for a prior installation. Using the conda16 package manager, AutoEFT and its dependencies can be installed from the _conda-forge_ channel by running:
Footnote 16: [https://conda.io/](https://conda.io/)
conda install autoeft -c conda-forge
If the mamba17 package manager is used instead, the _conda-forge_ channel is enabled by default. Hence, AutoEFT and its dependencies can be installed by running:
Footnote 17: [https://github.com/mamba-org/mamba](https://github.com/mamba-org/mamba)
Footnote 18: [https://pypi.org/project/build/](https://pypi.org/project/build/)
* mamba install autoeft
### Building AutoEFT from Source Code
To build AutoEFT from its source code, make sure the latest version of the Python Packaging Authority's build18 is installed. The distribution packages can then be generated by running:
Footnote 18: [https://pypi.org/project/build/](https://pypi.org/project/build/)
git clone https://gitlab.com/auto_eft/autoeft.git
cd autoeft/
python -m build
Note that the last command must be executed in the directory containing the file pyproject.toml. After this, there should be two archive files in the newly created dist/ directory: The source distribution autoeft-1.0.0.tar.gz as well as the build distribution autoeft-1.0.0-py3-none-any.whl. To install the local package, run:
* sage -pip install dist/autoeft-1.0.0-py3-none-any.whl
As AutoEFT is developing, the version number will have to be replaced accordingly in these commands, of course.
## 4 The Model File
To construct an EFT operator basis, the user must define a model describing the relevant details of the low-energy theory. This is done via the _model file_ which
encodes all information about the symmetries and field content of the model.19 A detailed description of all keywords and their type can be found in Appendix C.
Footnote 19: Technically, the format of the model file is YAML ([https://yaml.org/](https://yaml.org/)); all required specifications will be implicitly discussed below though.
### Basic Structure
A valid model file has to contain a minimal set of keywords (simply referred to as _keys_ in the following), which must be assigned appropriate values. In particular, every model file must contain the key name, set to a valid string that identifies the model. The other required keys are symmetries and fields. These three keys are sufficient to define a valid model file that AutoEFT can process. For example, in Listing 1, both symmetries and fields are set to the empty set '{}', corresponding to the trivial model without any fields.20
Footnote 20: We adopt the convention that variable input provided by the user is set in typewriter font and surrounded by single quotes in the main text. In the code listings, they are set in black color. The single quotes are missing for fixed code words that are not to be changed by the user (blue color in the listings).
```
1 # AutoEFT model file
2 name: MinimalModel
3 symmetries: {}
4 fields: {}
```
Listing 1: Minimal data required in a model file.
As a non-trivial model, let us consider scalar Quantum Electrodynamics (QED), i.e. a \(U(1)\) gauge theory of a charged scalar field. The \(U(1)\) symmetry is imposed by adding the sub-key u1_groups to symmetries, as displayed in Listing 2.
```
1 symmetries:
2   u1_groups:
3     QED: {}
```
Listing 2: Symmetry definition of the scalar QED model file.
Note that the actual symmetry, identified by the string 'QED', has been added as another sub-key to u1_groups. In principle, we could specify additional attributes for this group (e.g., an allowed violation, a residual charge, or a LaTeX symbol; see Appendix C) by assigning it a non-trivial value. For our purposes, however, this is not necessary and we assign to it the empty set '{}'.
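As a sketch, assigning such a non-trivial value could look as follows; the attribute values shown here are purely illustrative, and the complete list of optional group properties is given in Appendix C:

```
symmetries:
  u1_groups:
    QED:
      tex: U(1)
      violation: 1
```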
Next, we include a single complex scalar field \(\phi\) in the model, by adding the entry 'phi' to fields as shown in Listing 3.
```
6 fields:
7   phi:
8     representations:
9       QED: -1
```
Listing 3: Definition of the scalar field 'phi' in the model file.
Again, the name 'phi' is arbitrary. To define the transformation properties of the field under the symmetry groups, the key representations must be added to 'phi'. In our case, there is only one symmetry group, so we add the entry 'QED: -1' to representations, which means that \(\phi\) carries one negative unit of the elementary \(U(1)\) charge, see line 9 in Listing 3. For every field defined in the model file, AutoEFT automatically takes into account the conjugate version and denotes it by appending the symbol '+' to the original field name. Thus, in our example, the conjugate field \(\phi^{\dagger}\) is taken into account automatically by AutoEFT, and it will be denoted by 'phi+' in the output.21
Footnote 21: The exceptions to this are fields all of whose representations are real (or equivalent to real ones). In this case, no conjugate field is generated. One can also prevent AutoEFT from including the conjugate field (for whatever reason one may have) by using the conjugate property; see Appendix C.
To make this theory an actual gauge theory, the \(U(1)\) gauge boson has to be defined as well. Gauge bosons can appear in two instances: encoded in field strength tensors or as part of the covariant derivative. The latter is automatically included by AutoEFT, while the former is decomposed into two separate fields which transform in irreducible representations of the Lorentz group, see Section 2. The first one, 'FL' \(\widehat{=}\,F_{\mathrm{L}}\), transforming as \((1,0)\), can be defined in the model file by adding another entry to fields; see Listing 4.
```
10   FL:
11     representations:
12       Lorentz: -1
```
Listing 4: Definition of the field-strength component \(F_{\mathrm{L}}\).

Combining the symmetry definition in Listing 2 and the field content definitions in Listings 3 and 4--and giving the model a suitable name--results in the entire model file, displayed in Listing 5.

```
1 # AutoEFT model file
2 name: ScalarQED
3 symmetries:
4   u1_groups:
5     QED: {}
6 fields:
7   phi:
8     representations:
9       QED: -1
10   FL:
11     representations:
12       Lorentz: -1
```
Listing 5: Scalar QED model file.
### Realistic Examples
In this section, more realistic examples will be considered, starting from QED, generalizing to Quantum Chromodynamics (QCD), and finally the SM. In the course of this, we will discuss the definition of spinors and non-abelian \(SU(N)\) symmetry groups in the model file.
#### 4.2.1 QED
To promote the example of scalar QED from the previous section to actual QED, one needs to introduce Dirac fermions. As described in Section 2, the Lorentz representation of bispinors is given by \((1/2,0)\oplus(0,1/2)\). The model file for QED with a single charged electron can thus be written as shown in Listing 6. Here, \(e_{\mathrm{L}}\,\widehat{=}\,\)'eL' and \(e_{\mathrm{R}}\,\widehat{=}\,\)'eR' denote the left- and right-handed components of the electron field, respectively.
```
1  # AutoEFT model file
2  name: QED-EFT
3  description: Effective Field Theory of QED interactions
4
5  symmetries:
6    u1_groups:
7      QED: {}
8
9  fields:
10   eL:  # EL = (eL, 0)^T, ELbar = (0, eL+)
11     representations:
12       Lorentz: -1/2
13       QED: -1
14   eR:  # ER = (0, eR)^T, ERbar = (eR+, 0)
15     representations:
16       Lorentz: 1/2
17       QED: -1
18   FL:
19     representations:
20       Lorentz: -1
```
Listing 6: QED model file.
In the literature it is quite common to adopt the _all-left_ chirality notation for the fundamental building blocks of an EFT. In this convention, all Weyl spinors are defined to be left-handed, so that the index "L" can be dropped. The right-handed components are then acquired by conjugation. In the above example this would mean that one defines 'e'\(\,\widehat{=}\,\,e\equiv e_{\text{L}}\) and its charge conjugate 'eC'\(\,\widehat{=}\,\,e_{\text{C}}\equiv e_{\text{R}}^{\dagger}\). One could thus define QED in AutoEFT also by replacing lines 10-17 in Listing 6 by the content of Listing 7.
```
10 e:   # EL = (e, 0)^T, ELbar = (0, e+)
11   representations:
12     Lorentz: -1/2
13     QED: -1
14 eC:  # ER = (0, eC+)^T, ERbar = (eC, 0)
15   representations:
16     Lorentz: -1/2
17     QED: 1
```
Listing 7: All-left notation for the electron.
#### 4.2.2 QCD
To generalize the example of QED to a non-abelian theory like QCD, \(SU(N)\) symmetries need to be introduced. They are defined in a similar way to \(U(1)\) symmetries in the model file but require additional information like their degree \(N\). A model file for QCD with a single quark flavor could be defined as displayed in Listing 8. The \(SU(3)\) symmetry of QCD is imposed by the lines 5-8. Under the keyword sun_groups, all
\(SU(N)\) symmetry groups of the model are listed; here, we only have 'QCD', for which we specify the degree by the entry 'N: 3' (note the indentation of line 8).
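For definiteness, the complete file can be written as follows; the name and description strings are merely illustrative:

```
1  # AutoEFT model file
2  name: QCD-EFT
3  description: Effective Field Theory of QCD interactions
4
5  symmetries:
6    sun_groups:
7      QCD:
8        N: 3
9
10 fields:
11   qL:
12     representations:
13       Lorentz: -1/2
14       QCD: [1]
15   qR:
16     representations:
17       Lorentz: 1/2
18       QCD: [1]
19   GL:
20     representations:
21       Lorentz: -1
22       QCD: [2,1]
```
Listing 8: QCD model file.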
Lines 11-22 declare the field content of the model. Analogously to the example of QED discussed in Section 4.2.1, a Dirac quark spinor is implemented by specifying its left- and right-handed components, named \(q_{\mathrm{L}}\,\widehat{=}\,\)'qL' and \(q_{\mathrm{R}}\,\widehat{=}\,\)'qR' here. The fact that they transform in the fundamental representation of QCD is encoded by specifying the integer partition '[1]' in lines 14 and 18, cf. Eq. (4). As discussed above, AutoEFT automatically takes into account the corresponding conjugate fields 'qL+'\(\,\widehat{=}\,q_{\mathrm{L}}^{\dagger}\) and 'qR+'\(\,\widehat{=}\,q_{\mathrm{R}}^{\dagger}\) which transform in the anti-fundamental representation \([1,1]\) of QCD, cf. Eq. (5). Since the adjoint representation \([2,1]\) is real, only the left-handed component of the gluon field-strength tensor 'GL' must be defined in the model file explicitly, see lines 19-22 of Listing 8.
Instead of integer partitions, one may also use _Dynkin labels_ to specify the irreducible representation of \(SU(N)\) in which a field transforms. For AutoEFT, the difference is indicated by using round brackets instead of square ones. The fundamental and adjoint representations of \(SU(3)\) are denoted by the Dynkin labels (1,0) and (1,1), respectively. Lines 14 and 18 of Listing 8 could thus also be written as 'QCD: (1,0)', for example, and line 22 as 'QCD: (1,1)'. Internally, any Dynkin label is converted to the respective partition.
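As a sketch, the left-handed quark declaration of Listing 8 expressed with Dynkin labels instead of integer partitions would then read:

```
11 qL:
12   representations:
13     Lorentz: -1/2
14     QCD: (1,0)
```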
#### 4.2.3 Standard Model
The previous sections provide all the information required to compose a model file for the entire SM in the unbroken phase--including all symmetries and fields. The transition to the broken phase can be performed at the level of the operators by appropriate replacements of the Higgs field.
The SM gauge group is given by \(SU(3)\times SU(2)\times U(1)\) which can be defined in just a few lines in the model file, see lines 6-10 in Listing 9 below. Each gauge group is equipped with an associated multiplet of gauge bosons by defining the components '\(\tt GL\)' \(\widehat{=}\,G_{\rm L}\), '\(\tt WL\)' \(\widehat{=}\,W_{\rm L}\), and '\(\tt BL\)' \(\widehat{=}\,B_{\rm L}\), respectively (cf. lines 13-23).
The matter fields of the SM come in five distinct representations. Taking the first generation of fermions as an example, they are characterized by the Weyl spinors
\['\texttt{QL}'\,\widehat{=}\,Q_{\mathrm{L}}\,,\quad'\texttt{uR}'\,\widehat{=}\,u_{\mathrm{R}}\,,\quad'\texttt{dR}'\,\widehat{=}\,d_{\mathrm{R}}\,,\quad'\texttt{LL}'\,\widehat{=}\,L_{\mathrm{L}}\,,\quad'\texttt{eR}'\,\widehat{=}\,e_{\mathrm{R}} \tag{8}\]
and their hermitian conjugate. Their representations w.r.t. the Lorentz and the SM gauge group are defined in lines 24-53 of the model file.24
Footnote 24: In the supplementary model files, the electromagnetic charge \(Q\) is defined by the relation \(Q=I_{3}+Y\) where \(I_{3}\) and \(Y\) are the 3rd component of weak-isospin and the \(U(1)\)-hypercharge, defined in Section 2.
In principle, the second and third generation of fermions could be implemented as separate copies of Eq. (8). More conveniently though, one may add the entry 'generations: 3' to every fermion declaration, see Listing 9. By using this option, AutoEFT will associate a generation index with these fields, which leads to a much more compact form of the output, of course. Note that, even though the sum over generation indices is not carried out explicitly in this case, the output does depend on the actual number of generations. This is because the external and internal symmetries may induce redundancies which depend on this number (see Ref. [13] for details).
To complete the SM, the complex Higgs doublet \(H\widehat{=}\) '\(\tt H\)' must be included as well. This is simply done by defining it as an \(SU(2)\) doublet and assigning an appropriate hypercharge, see lines 54-57 of Listing 9.
The entire model file for the SM with three generations of fermions is then given by Listing 9.

```
1  # AutoEFT model file
2  name: SMEFT
3  description: Standard Model Effective Field Theory
4
5  symmetries:
6    sun_groups:
7      SU3: {N: 3}
8      SU2: {N: 2}
9    u1_groups:
10     U1: {}
11
12 fields:
13   GL:
14     representations:
15       Lorentz: -1
16       SU3: [2,1]
17   WL:
18     representations:
19       Lorentz: -1
20       SU2: [2]
21   BL:
22     representations:
23       Lorentz: -1
24   QL:
25     representations:
26       Lorentz: -1/2
27       SU3: [1]
28       SU2: [1]
29       U1: 1/6
30     generations: 3
31   uR:
32     representations:
33       Lorentz: 1/2
34       SU3: [1]
35       U1: 2/3
36     generations: 3
37   dR:
38     representations:
39       Lorentz: 1/2
40       SU3: [1]
41       U1: -1/3
42     generations: 3
43   LL:
44     representations:
45       Lorentz: -1/2
46       SU2: [1]
47       U1: -1/2
48     generations: 3
49   eR:
50     representations:
51       Lorentz: 1/2
52       U1: -1
53     generations: 3
54   H:
55     representations:
56       SU2: [1]
57       U1: 1/2
```
Listing 9: SM model file.
### Extended Models
After reading Sections 4.1 and 4.2, and optionally consulting Appendix C, the user should be able to assemble custom model files from scratch. However, AutoEFT offers an alternative approach to creating model files using the sample-model command. Running this command will print the content of a predefined SM model file to the standard output (e.g., the terminal). Therefore, a custom model can also be obtained by running the command
autoeft sample-model > custom.yml
and subsequently modifying the newly created file custom.yml as desired. Alternatively, the user may base the custom model on one of the sample model files supplied with this paper. In the following, we consider specific examples for extending the SM as the low-energy theory.
#### 4.3.1 Additional Particles
Ref. [43] defines a list of possible extensions of the SM by adding new particles. In order to illustrate the simplicity of preparing a specific model file for AutoEFT, we explicitly describe the necessary modifications of the SM model file for all examples provided in that reference. Each model file can also be found in the supplementary material of this paper, or in the AutoEFT repository, see Footnote 4. This allows one to reconstruct the operator bases provided in Ref. [43], and to extend them to higher mass dimension.
Quite in general, new particles can be included in the EFT construction by adding new entries under the keyword fields and assigning them appropriate representations of the existing symmetry groups. In the following examples, we only show the lines that need to be added to the very end of the default model file produced by the sample-model command. Following Ref. [43] and adopting their notation, let us first consider the addition of uncolored particles.
A scalar \(\delta^{+}\,\widehat{=}\,\)'del' which only carries one unit of the hypercharge and otherwise transforms as a singlet can be implemented as:

```
79 del:
80   representations:
81     U1: 1
```

for example. Of course, other (rational) values of the hypercharge can be incorporated in an analogous way. For example, the doubly charged scalar named \(\rho^{++}\,\widehat{=}\,\)'rho' in Ref. [43] is obtained from:

```
79 rho:
80   representations:
81     U1: 2
```

Similarly, the complex scalar \(SU(2)\)-triplet \(\Delta\,\widehat{=}\,\)'Del' can be added as:

```
79 Del:
80   representations:
81     SU2: [2]
82     U1: 1
```

and the left-handed fermion triplet \(\Sigma\,\widehat{=}\,\)'Sig' is defined as:

```
79 Sig:
80   representations:
81     Lorentz: -1/2
82     SU2: [2]
```
For vector-like leptons of various charges \((V_{\mathrm{L,R}},E_{\mathrm{L,R}},N_{\mathrm{L,R}})\,\widehat{=}\,\)('VL', 'VR', ...), one also needs to define the right-handed components:
```
79  VL:
80    representations:
81      Lorentz: -1/2
82      SU2: [1]
83      U1: -1/2
84    generations: 3
85  VR:
86    representations:
87      Lorentz: 1/2
88      SU2: [1]
89      U1: -1/2
90    generations: 3
91  EL:
92    representations:
93      Lorentz: -1/2
94      U1: -1
95    generations: 3
96  ER:
97    representations:
98      Lorentz: 1/2
99      U1: -1
100   generations: 3
101 NL:
102   representations:
103     Lorentz: -1/2
104   generations: 3
105 NR:
106   representations:
107     Lorentz: 1/2
108   generations: 3
```
Finally, higher representations of the gauge group can also be accounted for. For example, the scalar \(SU(2)\)-quadruplet \(\Theta\,\widehat{=}\,\)'The' is given by:
```
79 The:
80   representations:
81     SU2: [3]
82     U1: 3/2
```
New _colored_ particles can be included in exactly the same way by assigning appropriate \(SU(3)\) representations. Again, we only show the lines that need to be added to the very end of the default model file. In particular, the various versions of lepto-quarks defined in Ref. [43] can be implemented as:
Lepto-Quark (\(\chi_{1}\,\widehat{=}\,\)'chi1'):

```
79 chi1:
80   representations:
81     SU3: [1]
82     SU2: [1]
83     U1: 1/6
```

Lepto-Quark (\(\varphi_{1}\,\widehat{=}\,\)'phi1'):

```
79 phi1:
80   representations:
81     SU3: [1]
82     U1: 2/3
```

Lepto-Quark (\(\chi_{2}\,\widehat{=}\,\)'chi2'):

```
79 chi2:
80   representations:
81     SU3: [1]
82     U1: 7/6
```

Lepto-Quark (\(\varphi_{2}\,\widehat{=}\,\)'phi2'):

```
79 phi2:
80   representations:
81     SU3: [1]
82     SU2: [1]
83     U1: 7/6
```
#### 4.3.2 Additional Gauge Symmetries
Additional gauge groups can be added by simply including their definition under the keyword symmetries. In the following example, there are two new abelian gauge groups \(U(1)^{\prime}\) and \(U(1)^{\prime\prime}\), extending the SM gauge group. Their respective gauge bosons are denoted by \(X\) and \(Y\) (corresponding to 'XL', 'YL' in the model file, plus the automatically included conjugate fields). In addition, global symmetries--like baryon- and lepton-number conservation--can be added in exactly the same way, with the only difference that there are no associated gauge bosons. In this example, each fermion gets assigned a specific baryon and lepton number and the resulting operators must conserve the total numbers exactly. Using the optional keys violation and residual, it would also be possible to allow for a certain degree of violation of the global \(U(1)\) symmetries, see Appendix C.
The entire model file is displayed in Listing 10, including the tex, tex_hc, and indices keys that tell AutoEFT how to represent the symmetries, fields, and indices in LaTeX format; see Appendix C. New non-abelian gauge groups can be added in close analogy to the procedure described above.
```
1  # AutoEFT model file
2  name: U(1)'-U(1)''-SMEFT
3  description: U(1)' x U(1)'' extended Standard Model Effective Field Theory
4
5  symmetries:
6    lorentz_group:
7      tex: SO^+(1,3)
8    sun_groups:
9      SU3:
10       N: 3
11       tex: SU(3)
12       indices: [a, b, c, d, e, f, g, h]
13     SU2:
14       N: 2
15       tex: SU(2)
16       indices: [i, j, k, l, m, n, p, q]
17   u1_groups:
18     U1:
19       tex: U(1)
20     Up:
21       tex: U(1)^\prime
22     Upp:
23       tex: U(1)^{\prime\prime}
24     Bno: {}
25     Lno: {}
26
27 fields:
28   GL:
29     representations:
30       Lorentz: -1
31       SU3: [2,1]
32     tex: G_{\mathrm{L}}
33     tex_hc: G_{\mathrm{R}}
34   WL:
35     representations:
36       Lorentz: -1
37       SU2: [2]
38     tex: W_{\mathrm{L}}
39     tex_hc: W_{\mathrm{R}}
40   BL:
41     representations:
42       Lorentz: -1
43     tex: B_{\mathrm{L}}
44     tex_hc: B_{\mathrm{R}}
45   XL:
46     representations:
47       Lorentz: -1
48     tex: X_{\mathrm{L}}
49     tex_hc: X_{\mathrm{R}}
50   YL:
51     representations:
52       Lorentz: -1
53     tex: Y_{\mathrm{L}}
54     tex_hc: Y_{\mathrm{R}}
55   QL:
56     representations:
57       Lorentz: -1/2
58       SU3: [1]
59       SU2: [1]
60       U1: 1/6
61       Bno: 1/3
62     generations: 3
63     tex: Q_{\mathrm{L}}
64   uR:
65     representations:
66       Lorentz: 1/2
67       SU3: [1]
68       U1: 2/3
69       Bno: 1/3
70     generations: 3
71     tex: u_{\mathrm{R}}
72   dR:
73     representations:
74       Lorentz: 1/2
75       SU3: [1]
76       U1: -1/3
77       Bno: 1/3
78     generations: 3
79     tex: d_{\mathrm{R}}
80   LL:
81     representations:
82       Lorentz: -1/2
83       SU2: [1]
84       U1: -1/2
85       Lno: -1
86     generations: 3
87     tex: L_{\mathrm{L}}
88   eR:
89     representations:
90       Lorentz: 1/2
91       U1: -1
92       Lno: -1
93     generations: 3
94     tex: e_{\mathrm{R}}
95   H:
96     representations:
97       SU2: [1]
98       U1: 1/2
99     tex: H
```
Listing 10: \(U(1)^{\prime}\times U(1)^{\prime\prime}\) extended SM model file.
### MFV Model
Instead of considering the three generations of fermions as independent entities, one can also introduce so-called flavor symmetries. In these models, the approximate flavor symmetry of the SM--which is only broken by the Yukawa sector--is also imposed on the EFT. A prominent example is Minimal Flavor Violation (MFV) [44, 45], which introduces a global \(U(3)^{5}\sim U(3)_{Q}\times U(3)_{u}\times U(3)_{d}\times U(3)_{L}\times U(3)_{e}\) flavor symmetry. Although AutoEFT does not support \(U(N)\) symmetries directly, one can always consider them as semidirect products \(U(N)=SU(N)\rtimes U(1)\). Hence, MFV is realized by assigning an \(SU(3)_{f}\,\widehat{=}\,\)'SU3<f>' fundamental representation and a \(U(1)_{f}\,\widehat{=}\,\)'U1<f>' (<f>\(\in\{\)q,u,d,l,e\(\}\)) charge of unity to every fermion:
\[Q\sim\Box_{SU(3)_{Q}}\otimes 1_{U(1)_{Q}}\,,\quad u\sim\Box_{SU(3)_{u}}\otimes 1_{U(1)_{u}}\,,\quad d\sim\Box_{SU(3)_{d}}\otimes 1_{U(1)_{d}}\,,\]
\[L\sim\Box_{SU(3)_{L}}\otimes 1_{U(1)_{L}}\,,\quad e\sim\Box_{SU(3)_{e}}\otimes 1_{U(1)_{e}}\,. \tag{9}\]
```
1  # AutoEFT model file
2  name: MFV-SMEFT
3  description: Minimal Flavor Violation Standard Model Effective Field Theory
4
5  symmetries:
6    sun_groups:
7      SU3: {N: 3}
8      SU2: {N: 2}
9      SU3q: {N: 3}
10     SU3u: {N: 3}
11     SU3d: {N: 3}
12     SU3l: {N: 3}
13     SU3e: {N: 3}
14   u1_groups: {U1: {}, U1q: {}, U1u: {}, U1d: {}, U1l: {}, U1e: {}}
15
16 fields:
17   GL:
18     representations: {Lorentz: -1, SU3: [2,1]}
19   WL:
20     representations: {Lorentz: -1, SU2: [2]}
21   BL:
22     representations: {Lorentz: -1}
23   QL:
24     representations: {Lorentz: -1/2, SU3: [1], SU2: [1], U1: 1/6, SU3q: [1], U1q: 1}
25   uR:
26     representations: {Lorentz: 1/2, SU3: [1], U1: 2/3, SU3u: [1], U1u: 1}
27   dR:
28     representations: {Lorentz: 1/2, SU3: [1], U1: -1/3, SU3d: [1], U1d: 1}
29   LL:
30     representations: {Lorentz: -1/2, SU2: [1], U1: -1/2, SU3l: [1], U1l: 1}
31   eR:
32     representations: {Lorentz: 1/2, U1: -1, SU3e: [1], U1e: 1}
33   H:
34     representations: {SU2: [1], U1: 1/2}
```
Listing 11: MFV model file.
Since now every fermion carries a fundamental \(SU(3)_{f}\) index, one must remove the entry 'generations: 3' of Listing 9 from all fermion declarations. The entire model file encoding MFV is shown in Listing 11. It can be used to construct the leading (i.e., flavor symmetric) terms in the MFV EFT basis.26 Of course, other realizations of flavor symmetry can be implemented in a similar fashion. For example, Refs. [46, 47, 48] examine various flavor symmetries in an EFT context.
Footnote 26: In principle, it would be possible to include the Yukawa couplings as spurion fields—also transforming under the flavor symmetry. This would allow one to construct the MFV EFT basis beyond the leading terms. However, the Yukawa couplings are dimensionless and should instead be expanded by some other small quantity. Such a declaration is not included in the model file specifications yet, but we intend to implement this feature in the next release of AutoEFT.
## 5 Constructing Operators
### Running AutoEFT
Given a valid model file, AutoEFT can be used to construct an EFT basis for a certain mass dimension. For example, to construct the SMEFT dimension-six operators, run the command:
autoeft construct sm.yml 6
where sm.yml denotes the model file of Listing 9. AutoEFT will first display a disclaimer followed by a summary of the loaded model. The summary includes the name and description of the model as well as a table containing all fields of the model, including the automatically generated conjugate fields. The table can be used to verify that the model file has been loaded correctly and the field representations are set up as desired. Afterwards, the operator construction starts and the number of families, types, terms, and operators is displayed in a live preview (see
Appendix E and Ref. [33] for the meaning of these expressions). After the operator construction is finished, AutoEFT terminates and returns to the shell prompt. During each run, AutoEFT writes a _log file_ called autoeft.log to the current working directory, capturing the console output.
During the construction, AutoEFT creates the output directory efts/sm-eft/6/ in the current working directory. The substring 'sm' is derived from the name of the model file sm.yml, and '6' is the requested mass dimension. All output files of AutoEFT will be written into this directory or its subdirectories. If during the construction an operator type which is already present in the output directory is encountered, AutoEFT will skip the construction of this particular type.27
Footnote 27: Unless the --overwrite flag is set; see Appendix B.1.2.
The operator basis itself is written into the subdirectory basis/. This directory always contains the file model.json, serving as a reference to the model used during the construction, and the hidden file .autoeft containing metadata of the generation. All constructed operator files of a given family and type (cf. Appendix E) are included in further subdirectories of the form <N>/<family>/<type>.yml, where N denotes the total number of fields in the operator. The format of the operator files is explained in the next section.
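Schematically, the resulting layout of the output directory is the following, where the placeholders <N>, <family>, and <type> stand for the actual values generated during the construction:

```
efts/sm-eft/6/
└── basis/
    ├── .autoeft
    ├── model.json
    └── <N>/
        └── <family>/
            └── <type>.yml
```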
A detailed description of all command-line options of the construct (short: c) command can be found in Appendix B.1.2. Here, we only mention the optional --select (short: -s) and --ignore (short: -i) options, which are particularly useful if only a specific subset of operators should be constructed. For example, to only construct dimension-six operators containing exactly two Higgs doublets, run the command:
autoeft c sm.yml 6 -s "{H: 2, H+: 0}" -s "{H: 0, H+: 2}" \ -s "{H: 1, H+: 1}"
On the other hand, the command
autoeft c sm.yml 6 -i "{GL: +}" -i "{GL+: +}"
will exclude all operators containing gluons. The -s and -i options can be combined, of course, whereupon the latter overrides the former in case of conflicts.
After a successful run, AutoEFT writes the file stats.yml to the output directory, containing the total number of families, types, terms, and operators in the basis. These numbers can also be obtained using the count command; see Appendix B.1.3.
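Assuming that the count command takes the path to the basis directory as its argument, analogous to the latex command described below, a call could look like:

autoeft count efts/sm-eft/6/basis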
### Output Format
The _operator files_ contain all information needed to reconstruct the EFT basis type-by-type. Here, we demonstrate how their content can be interpreted using the SMEFT operator type \(L_{\mathrm{L}}^{1}Q_{\mathrm{L}}^{3}\) as an example. The entire operator file, named 1LL_3QL.yml, is displayed in Listing 12.28
Footnote 28: In the supplemental material accompanying Ref. [33], which adopts the all-left notation for the fields, the corresponding file is named 1L_3Q.yml.
```
1  # '1LL_3QL.yml' generated by AutoEFT 1.0.0
2  version: 1.0.0
3  type:
4  - {LL: 1, QL: 3}
5  - complex
6  generations: {LL: 3, QL: 3}
7  n_terms: 3
8  n_operators: 57
9  invariants:
10   Lorentz:
11     O(Lorentz,1): +eps(1_1,3_1)*eps(2_1,4_1)*LL(1_1)*QL(2_1)*QL(3_1)*QL(4_1)
12     O(Lorentz,2): +eps(1_1,2_1)*eps(3_1,4_1)*LL(1_1)*QL(2_1)*QL(3_1)*QL(4_1)
13   SU3:
14     O(SU3,1): +eps(2_1,3_1,4_1)*LL*QL(2_1)*QL(3_1)*QL(4_1)
15   SU2:
16     O(SU2,1): +eps(1_1,3_1)*eps(2_1,4_1)*LL(1_1)*QL(2_1)*QL(3_1)*QL(4_1)
17     O(SU2,2): +eps(1_1,2_1)*eps(3_1,4_1)*LL(1_1)*QL(2_1)*QL(3_1)*QL(4_1)
18 permutation_symmetries:
19 - vector: Lorentz*SU3*SU2
20 - symmetry: {LL: [1], QL: [1,1,1]}
21   n_terms: 1
22   n_operators: 3
23   matrix: !-
24     [ 0 -1  1  0]
25 - symmetry: {LL: [1], QL: [2,1]}
26   n_terms: 1
27   n_operators: 24
28   matrix: !-
29     [-1  2  2 -1]
30 - symmetry: {LL: [1], QL: [3]}
31   n_terms: 1
32   n_operators: 30
33   matrix: !-
34     [ 2 -1 -1  2]
```
Listing 12: \(L_{\mathrm{L}}^{1}Q_{\mathrm{L}}^{3}\) operator file.
A summary of all keywords appearing in the operator files is included in Appendix D. For this particular example, they can be interpreted in the following way:
version:
The version of AutoEFT that was used to produce the output file.
type: The first entry denotes the operator type, in this example \(L_{\rm L}^{1}Q_{\rm L}^{3}\). The second entry states that this type is 'complex', meaning there is a distinct hermitian conjugate type (which is contained in 1LL+_3QL+.yml).
generations: For reference, the number of generations for each field is also displayed in the operator files. In the present case, the file was generated for three generations of leptons and quarks.
n_terms: The total number of operators with independent Lorentz and internal index contractions and definite permutation symmetry of the repeated fields (i.e. fields which differ at most in their generation index). It does not take into account the different generations though. In this example, the generation indices of the quarks can be decomposed into totally anti-symmetric \([1,1,1]\), mixed symmetric \([2,1]\), and totally symmetric \([3]\) tensors.
n_operators: The total number of independent operators, taking into account the independent values the generation indices can assume. Here, there are \(3\cdot(1+8+10)=57\) independent combinations of the \(L_{\rm L}\) and \(Q_{\rm L}\) generations.
invariants: The invariant contractions are given by:
\[{\cal O}_{1}^{\rm Lorentz} =\epsilon^{\alpha\gamma}\,\epsilon^{\beta\delta}\,L_{\rm L\alpha} \,Q_{\rm L\beta}\,Q_{\rm L\gamma}\,Q_{\rm L\delta}\,,\] \[{\cal O}_{2}^{\rm Lorentz} =\epsilon^{\alpha\beta}\,\epsilon^{\gamma\delta}\,L_{\rm L\alpha }\,Q_{\rm L\beta}\,Q_{\rm L\gamma}\,Q_{\rm L\delta}\,,\] \[{\cal O}_{1}^{SU(3)} =\epsilon^{bcd}\,L_{\rm L}\,Q_{\rm L\it b}\,Q_{\rm L\it c}\,Q_{ \rm L\it d}\,, \tag{10}\] \[{\cal O}_{1}^{SU(2)} =\epsilon^{ik}\,\epsilon^{jl}\,L_{\rm L\it i}\,Q_{\rm L\it j}\,Q_ {\rm L\it k}\,Q_{\rm L\it l}\,,\] \[{\cal O}_{2}^{SU(2)} =\epsilon^{ij}\,\epsilon^{kl}\,L_{\rm L\it i}\,Q_{\rm L\it j}\,Q_{ \rm L\it k}\,Q_{\rm L\it l}\,.\]
where only the relevant set of indices is displayed in each case.
permutation_symmetries: The first entry always denotes the order of the tensor product. In this case,
the combination is given by
\[\texttt{vector}:\quad\vec{\mathcal{O}}\equiv\mathcal{O}^{\text{Lorentz}}\otimes\mathcal{O}^{SU(3)}\otimes\mathcal{O}^{SU(2)}=\left(\begin{array}{l}\mathcal{O}_{1}^{\text{Lorentz}}\otimes\mathcal{O}_{1}^{SU(3)}\otimes\mathcal{O}_{1}^{SU(2)}\\ \mathcal{O}_{1}^{\text{Lorentz}}\otimes\mathcal{O}_{1}^{SU(3)}\otimes\mathcal{O}_{2}^{SU(2)}\\ \mathcal{O}_{2}^{\text{Lorentz}}\otimes\mathcal{O}_{1}^{SU(3)}\otimes\mathcal{O}_{1}^{SU(2)}\\ \mathcal{O}_{2}^{\text{Lorentz}}\otimes\mathcal{O}_{1}^{SU(3)}\otimes\mathcal{O}_{2}^{SU(2)}\end{array}\right). \tag{11}\]
The first element of this vector is to be read as
\[\mathcal{O}_{1}^{\text{Lorentz}}\,\otimes\,\mathcal{O}_{1}^{SU(3)}\,\otimes\,\mathcal{O}_{1}^{SU(2)}=\epsilon^{\alpha\gamma}\,\epsilon^{\beta\delta}\,\epsilon^{ik}\,\epsilon^{jl}\,\epsilon^{bcd}\,L_{\text{L}\,\alpha i}^{\,w}\,Q_{\text{L}\,\beta bj}^{x}\,Q_{\text{L}\,\gamma ck}^{y}\,Q_{\text{L}\,\delta dl}^{z}\,, \tag{12}\]
for example, with generation indices \(w,x,y,z\in\{1,2,3\}\), while the other indices are those of Eq. (10). The choice of generation indices in the repeated fields is not arbitrary though. The remaining entries take this into account via the permutation symmetries of the repeated fields and the associated linearly independent combinations of the invariant contractions. In this case, there are three distinct permutation symmetries of the quark generation indices: the totally anti-symmetric representation \([1,1,1]\), the mixed symmetric representation \([2,1]\), and the totally symmetric representation \([3]\). The first term,
\[\begin{split}\mathcal{O}^{[1],[1,1,1]}&\equiv\mathcal{K}^{[1],[1,1,1]}\cdot\vec{\mathcal{O}}\\ &=-\,\mathcal{O}_{1}^{\text{Lorentz}}\,\otimes\,\mathcal{O}_{1}^{SU(3)}\,\otimes\,\mathcal{O}_{2}^{SU(2)}+\,\mathcal{O}_{2}^{\text{Lorentz}}\,\otimes\,\mathcal{O}_{1}^{SU(3)}\,\otimes\,\mathcal{O}_{1}^{SU(2)}\,,\end{split} \tag{16}\]
is encoded by the matrix in line 24 of Listing 12. For this permutation symmetry, the only independent combination of the quark generation indices is \((x,y,z)=(1,2,3)\). This term thus represents
\(3\cdot 1=3\) different operators (see line 22 of Listing 12), if the generation of the fields is taken into account.
The second term is
\[\mathcal{O}^{[1],[2,1]} \equiv\mathcal{K}^{[1],[2,1]}\cdot\vec{\mathcal{O}} \tag{17}\] \[=-\,\mathcal{O}_{1}^{\text{Lorentz}}\,\otimes\,\mathcal{O}_{1}^ {SU(3)}\,\otimes\,\mathcal{O}_{1}^{SU(2)}+2\,\mathcal{O}_{1}^{\text{Lorentz}} \,\otimes\,\mathcal{O}_{1}^{SU(3)}\,\otimes\,\mathcal{O}_{2}^{SU(2)}\] \[\quad+2\,\mathcal{O}_{2}^{\text{Lorentz}}\,\otimes\,\mathcal{O }_{1}^{SU(3)}\,\otimes\,\mathcal{O}_{1}^{SU(2)}-\,\mathcal{O}_{2}^{\text{Lorentz }}\,\otimes\,\mathcal{O}_{1}^{SU(3)}\,\otimes\,\mathcal{O}_{2}^{SU(2)}\,.\]
For this permutation symmetry, there are eight independent combinations of the quark generation indices; one may choose them to be29
Footnote 29: They can be determined from the associated semi-standard Young tableaux, see Ref. [33] for details.
\[\begin{split}(x,y,z)\in&\{(1,1,2),(1,1,3),(1,2,2),(1,2,3),(1,3,2),\\ &(1,3,3),(2,2,3),(2,3,3)\}\,.\end{split} \tag{18}\]
Again taking into account the multiplicity of the lepton generations, this term represents \(3\cdot 8=24\) operators (see line 27).
The third term is
\[\begin{split}\mathcal{O}^{[1],[3]}&\equiv\mathcal{K }^{[1],[3]}\cdot\vec{\mathcal{O}}\\ &=2\,\mathcal{O}_{1}^{\text{Lorentz}}\,\otimes\,\mathcal{O}_{1} ^{SU(3)}\,\otimes\,\mathcal{O}_{1}^{SU(2)}-\,\mathcal{O}_{1}^{\text{Lorentz}} \,\otimes\,\mathcal{O}_{1}^{SU(3)}\,\otimes\,\mathcal{O}_{2}^{SU(2)}\\ &\quad-\,\mathcal{O}_{2}^{\text{Lorentz}}\,\otimes\,\mathcal{O }_{1}^{SU(3)}\,\otimes\,\mathcal{O}_{1}^{SU(2)}+2\,\mathcal{O}_{2}^{\text{Lorentz }}\,\otimes\,\mathcal{O}_{1}^{SU(3)}\,\otimes\,\mathcal{O}_{2}^{SU(2)}\,,\end{split} \tag{19}\]
where one can choose the following ten combinations of the quark generation indices:
\[\begin{split}(x,y,z)\in&\{(1,1,1),(1,1,2),(1,1,3),( 1,2,2),(1,2,3),\\ &(1,3,3),(2,2,2),(2,2,3),(2,3,3),(3,3,3)\}\,,\end{split} \tag{20}\]
and therefore this term represents \(3\cdot 10=30\) operators (see line 32).
More examples and detailed descriptions of the output format can be found in Ref. [33]. See also Section 6.1 for an example on how AutoEFT can be used to perform this expansion automatically.
### Counting SMEFT Operators
The number of families, types, terms, and operators (cf. Appendix E) can be obtained from an existing basis using the count command; see Appendix B.1.3.
AutoEFT also writes these numbers to the file stats.yml after each basis construction, see Section 5.1. In order to arrive at a well-defined number, operators (families, types, terms) and their distinct conjugate version are counted separately. This means that, for SMEFT at mass dimension five, AutoEFT counts two operators (equaling the number of families, terms, and types) if only one generation of leptons is taken into account: the Weinberg operator \(\sim L_{\mathrm{L}}L_{\mathrm{L}}HH\) and its conjugate \(\sim H^{\dagger}H^{\dagger}L_{\mathrm{L}}^{\dagger}L_{\mathrm{L}}^{\dagger}\). For three generations, each of the two terms leads to six operators (permutation symmetry eliminates three out of the nine possible combinations, cf. Section 5.2).
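The six operators per term can be made explicit: a term symmetric in the two lepton generation indices admits
\[\frac{3\,(3+1)}{2}=6\]
independent generation assignments, i.e. six of the \(3\cdot 3=9\) possible index pairs survive the symmetrization, while the three anti-symmetric combinations drop out.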
For SMEFT at mass dimension six, various different numbers can be found in the literature, depending on the way of counting, or possible additional symmetries that have been imposed. For example, assuming baryon-number conservation, AutoEFT generates 76 operators for a single generation of fermions. This is in line with the 59 operators reported in Ref. [15] if one counts the 17 hermitian conjugate operators of \(Q_{uG}\), \(Q_{dG}\), \(Q_{\varphi ud}\), \(Q_{ledq}\), \(Q_{quqd}^{(1)}\), \(Q_{quqd}^{(8)}\), \(Q_{lequ}^{(1)}\), \(Q_{lequ}^{(3)}\), and \(Q_{ij}\) with \(i\in\{e,u,d\}\), \(j\in\{\varphi,W,B\}\), as independent degrees of freedom. Relaxing the assumption of baryon-number conservation, AutoEFT reports a total of 84 operators, corresponding to the addition of the four baryon-number violating operators \(Q_{duq}\), \(Q_{quq}\), \(Q_{qqq}\), and \(Q_{duu}\), quoted in Ref. [15], and their hermitian conjugate versions. In the case of three fermion generations, AutoEFT generates 2499 operators if baryon-number conservation is imposed; otherwise the number increases to 3045. Enforcing flavor conservation as described in Section 4.4, AutoEFT only generates 47 operators at mass dimension six (cf. Ref. [47]).
The number of operators calculated with the Hilbert series [21; 22] matches the counting of AutoEFT for any mass dimension exactly, as we have verified for SMEFT up to mass dimension 12 [33].
### Limitations of AutoEFT
#### 5.4.1 Computing Resources
The algorithm implemented in AutoEFT works for an arbitrary mass dimension \(d\). However, the computational efforts increase exponentially with \(d\). For SMEFT, for example, while the basis at mass dimension 10 could be constructed within a few hours, dimension 12 took of the order of months. Currently, CPU time is therefore probably the most severe limiting factor for going to even higher mass dimension.
Other hardware limitations may arise from memory requirements, which mostly come from algebraic operations when projecting the general Lorentz and \(SU(N)\) tensors onto the tensor basis. Also the storage of the output files may exceed the available disk space. In Ref. [33], we estimated the required disk space for SMEFT at mass dimension 26 to amount to about 1 PB.
#### 5.4.2 Generators of the Symmetric Group
The elimination of redundancies due to the occurrence of repeated fields in an operator requires representation matrices of the symmetric group \(S_{n}\). Since these are independent of the specific EFT under consideration, AutoEFT comes with a hard-coded version of generator matrices that are used to generate all required representation matrices up to \(n=9\), which is sufficient for problems with up to nine repeated fields in an operator.31 If the construction of an EFT requires a representation matrix which is not hard-coded, AutoEFT will terminate and request the missing matrices. AutoEFT also provides a functionality to (re-)compute these matrices, using the generators command, see Appendix B.1. Calculating them beyond \(n=9\) is very CPU expensive though.
Footnote 31: For SMEFT, this would be sufficient to generate the basis up to mass dimension 18.
#### 5.4.3 Conceptual Limitations
As indicated already in Section 2, AutoEFT is currently limited to fields with spin 0, 1/2, 1, and 2, where the latter two must be massless. Furthermore, internal symmetries must only be due to \(U(1)\) or \(SU(N)\) groups.
The operator basis will be non-redundant on-shell, which means that operators which are related by equations-of-motion and integration-by-parts identities have been identified. For the purpose of renormalization, operators proportional to equations-of-motion as well as gauge-variant operators are required in general though, but currently these cannot be generated by AutoEFT. In addition, AutoEFT only constructs operators that mediate proper interactions, meaning that any operator must be composed of at least three fields. Furthermore, AutoEFT does not take into account evanescent operators, as they may be required for calculations in dimensional regularization (see, e.g., Ref. [49]).
## 6 Working With Operator Files
Besides providing a command-line script for the operator basis construction, AutoEFT also serves as a SageMath library to work with EFT operators. In this section, we illustrate some of the features available by importing AutoEFT into SageMath, which
will be extended further in future releases of AutoEFT. We also introduce some additional command-line functionalities that are not directly related to the operator construction but may be useful for the user.
### Loading Operator Files
Once an operator basis is constructed using the construct command, it is possible to load the operator files into SageMath for further manipulation. A _valid_ operator basis that can be processed by AutoEFT is represented by a directory containing the file model.json referencing the model, a (hidden) file .autoeft containing metadata, and the respective operator files in the format described in Section 5.2.32 By default, the directories called basis created by AutoEFT are structured such that they can be directly loaded into SageMath. See also Appendix E.
Footnote 32: Note that, while the files model.json and .autoeft must be contained in the top-level directory of the basis, the operator files can be structured in subdirectories. The get_basis function searches all subdirectories for files with extension '.yml' and loads them as operator files.
Consider for example Listing 13, displaying how one can load the mass dimension-six SMEFT basis located at efts/sm-eft/6/basis (cf. line 4). The command lines in this listing can be entered into an interactive SageMath session, for example, or stored in a <file> first and passed to sage via a shell command 'sage <file>'. AutoEFT provides the class autoeft.io.basis.BasisFile to interact with the basis stored on the disk. It must be initialized with the path pointing to the basis as displayed in line 6. Afterwards, the entire basis can be loaded into memory with the get_basis() function (cf. line 7), returning a dictionary that maps each operator type to the contents of the respective operator file. For example, to access the operators of type \(L^{1}_{\mathrm{L}}Q^{3}_{\mathrm{L}}\) (see also Section 5.2), the type can be identified by the mapping '{"LL": 1, "QL": 3}', following the same convention as the output files; see line 4 in Listing 12. The resulting object, assigned to the variable 'LQQQ' in line 9 of Listing 13, is an instance of the autoeft.base.basis.OperatorInfoPermutation class. This class contains all the information of the operator files as object properties, see for example lines 10-14. However, it also contains further properties and functions. For example, to obtain the actual terms of this type, the object can be expanded using the expanded() function, returning a new object that contains the terms as displayed in Eq. (16) (cf. lines 16-22 of Listing 13).
```
1  from pathlib import Path
2  from autoeft.io.basis import BasisFile
3
4  basis_path = Path("efts/sm-eft/6/basis")
5
6  basis_file = BasisFile(basis_path)
7  basis = basis_file.get_basis()
8
9  LQQQ = basis[{"LL": 1, "QL": 3}]
10 print(LQQQ)
11 # LL(1) QL(3)
12
13 print(LQQQ.n_terms, LQQQ.n_operators, sep="&")
14 # 3&57
15
16 for term in LQQQ.expanded():
17     print(term.symmetry)
18     print(term)
19     print(term.operators)
20 # {'LL': [1], 'QL': [1, 1, 1]}
21 # (-1)*eps(Lorentz_1_1,Lorentz_3_1)*eps(Lorentz_2_1,Lorentz_4_1)*eps(SU3_2_1,SU3_3_1,SU3_4_1)*eps(SU2_1_1,SU2_2_1)*eps(SU2_3_1,SU2_4_1)*LL(Lorentz_1_1,SU2_1_1)*QL(Lorentz_2_1,SU3_2_1,SU2_2_1)*QL(Lorentz_3_1,SU3_3_1,SU2_3_1)*QL(Lorentz_4_1,SU3_4_1,SU2_4_1) + (1)*eps(Lorentz_1_1,Lorentz_2_1)*eps(Lorentz_3_1,Lorentz_4_1)*eps(SU3_2_1,SU3_3_1,SU3_4_1)*eps(SU2_1_1,SU2_3_1)*eps(SU2_2_1,SU2_4_1)*LL(Lorentz_1_1,SU2_1_1)*QL(Lorentz_2_1,SU3_2_1,SU2_2_1)*QL(Lorentz_3_1,SU3_3_1,SU2_3_1)*QL(Lorentz_4_1,SU3_4_1,SU2_4_1)
22 # [(1, 1, 2, 3), (2, 1, 2, 3), (3, 1, 2, 3)]
23
24 # {'LL': [1], 'QL': [2, 1]}
25 # (-1)*eps(Lorentz_1_1,Lorentz_3_1)*eps(Lorentz_2_1,Lorentz_4_1)*eps(SU3_2_1,SU3_3_1,SU3_4_1)*eps(SU2_1_1,SU2_3_1)*eps(SU2_2_1,SU2_4_1)*LL(Lorentz_1_1,SU2_1_1)*QL(Lorentz_2_1,SU3_2_1,SU2_2_1)*QL(Lorentz_3_1,SU3_3_1,SU2_3_1)*QL(Lorentz_4_1,SU3_4_1,SU2_4_1) + (2)*eps(Lorentz_1_1,Lorentz_3_1)*eps(Lorentz_2_1,Lorentz_4_1)*eps(SU3_2_1,SU3_3_1,SU3_4_1)*eps(SU2_1_1,SU2_2_1)*eps(SU2_3_1,SU2_4_1)*LL(Lorentz_1_1,SU2_1_1)*QL(Lorentz_2_1,SU3_2_1,SU2_2_1)*QL(Lorentz_3_1,SU3_3_1,SU2_3_1)*QL(Lorentz_4_1,SU3_4_1,SU2_4_1) + (2)*eps(Lorentz_1_1,Lorentz_2_1)*eps(Lorentz_3_1,Lorentz_4_1)*eps(SU3_2_1,SU3_3_1,SU3_4_1)*eps(SU2_1_1,SU2_3_1)*eps(SU2_2_1,SU2_4_1)*LL(Lorentz_1_1,SU2_1_1)*QL(Lorentz_2_1,SU3_2_1,SU2_2_1)*QL(Lorentz_3_1,SU3_3_1,SU2_3_1)*QL(Lorentz_4_1,SU3_4_1,SU2_4_1) + (-1)*eps(Lorentz_1_1,Lorentz_2_1)*eps(Lorentz_3_1,Lorentz_4_1)*eps(SU3_2_1,SU3_3_1,SU3_4_1)*eps(SU2_1_1,SU2_2_1)*eps(SU2_3_1,SU2_4_1)*LL(Lorentz_1_1,SU2_1_1)*QL(Lorentz_2_1,SU3_2_1,SU2_2_1)*QL(Lorentz_3_1,SU3_3_1,SU2_3_1)*QL(Lorentz_4_1,SU3_4_1,SU2_4_1)
26 # [(1, 1, 1, 2), (1, 1, 1, 3), (1, 1, 2, 2), (1, 1, 2, 3), (1, 1, 3, 2), (1, 1, 3, 3), (1, 2, 2, 3), (1, 2, 3, 3), (2, 1, 1, 2), (2, 1, 1, 3), (2, 1, 2, 2), (2, 1, 2, 3), (2, 1, 3, 2), (2, 1, 3, 3), (2, 2, 2, 3), (2, 2, 3, 3), (3, 1, 1, 2), (3, 1, 1, 3), (3, 1, 2, 2), (3, 1, 2, 3), (3, 1, 3, 2), (3, 1, 3, 3), (3, 2, 2, 3), (3, 2, 3, 3)]
27
28 # {'LL': [1], 'QL': [3]}
29 # (2)*eps(Lorentz_1_1,Lorentz_3_1)*eps(Lorentz_2_1,Lorentz_4_1)*eps(SU3_2_1,SU3_3_1,SU3_4_1)*eps(SU2_1_1,SU2_3_1)*eps(SU2_2_1,SU2_4_1)*LL(Lorentz_1_1,SU2_1_1)*QL(Lorentz_2_1,SU3_2_1,SU2_2_1)*QL(Lorentz_3_1,SU3_3_1,SU2_3_1)*QL(Lorentz_4_1,SU3_4_1,SU2_4_1) + (-1)*eps(Lorentz_1_1,Lorentz_3_1)*eps(Lorentz_2_1,Lorentz_4_1)*eps(SU3_2_1,SU3_3_1,SU3_4_1)*eps(SU2_1_1,SU2_2_1)*eps(SU2_3_1,SU2_4_1)*LL(Lorentz_1_1,SU2_1_1)*QL(Lorentz_2_1,SU3_2_1,SU2_2_1)*QL(Lorentz_3_1,SU3_3_1,SU2_3_1)*QL(Lorentz_4_1,SU3_4_1,SU2_4_1) + (-1)*eps(Lorentz_1_1,Lorentz_2_1)*eps(Lorentz_3_1,Lorentz_4_1)*eps(SU3_2_1,SU3_3_1,SU3_4_1)*eps(SU2_1_1,SU2_3_1)*eps(SU2_2_1,SU2_4_1)*LL(Lorentz_1_1,SU2_1_1)*QL(Lorentz_2_1,SU3_2_1,SU2_2_1)*QL(Lorentz_3_1,SU3_3_1,SU2_3_1)*QL(Lorentz_4_1,SU3_4_1,SU2_4_1) + (2)*eps(Lorentz_1_1,Lorentz_2_1)*eps(Lorentz_3_1,Lorentz_4_1)*eps(SU3_2_1,SU3_3_1,SU3_4_1)*eps(SU2_1_1,SU2_2_1)*eps(SU2_3_1,SU2_4_1)*LL(Lorentz_1_1,SU2_1_1)*QL(Lorentz_2_1,SU3_2_1,SU2_2_1)*QL(Lorentz_3_1,SU3_3_1,SU2_3_1)*QL(Lorentz_4_1,SU3_4_1,SU2_4_1)
30 # [(1, 1, 1, 1), (1, 1, 1, 2), (1, 1, 1, 3), (1, 1, 2, 2), (1, 1, 2, 3), (1, 1, 3, 3), (1, 2, 2, 2), (1, 2, 2, 3), (1, 2, 3, 3), (1, 3, 3, 3), (2, 1, 1, 1), (2, 1, 1, 2), (2, 1, 1, 3), (2, 1, 2, 2), (2, 1, 2, 3), (2, 1, 3, 3), (2, 2, 2, 2), (2, 2, 2, 3), (2, 2, 3, 3), (2, 3, 3, 3), (3, 1, 1, 1), (3, 1, 1, 2), (3, 1, 1, 3), (3, 1, 2, 2), (3, 1, 2, 3), (3, 1, 3, 3), (3, 2, 2, 2), (3, 2, 2, 3), (3, 2, 3, 3), (3, 3, 3, 3)]
```
Listing 13: Example script loading an operator basis created by AutoEFT. The operator type \(L^{1}_{\mathrm{L}}Q^{3}_{\mathrm{L}}\) is selected and explicitly expanded into its terms as illustrated in Section 5.2.
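Building on this interface, further analyses are easy to script. As a minimal sketch--relying only on the dictionary-like behavior of get_basis() and the n_terms and n_operators properties described above--the following lines tabulate the size of every type in the basis:

```
from pathlib import Path

from autoeft.io.basis import BasisFile

# Load the basis as in Listing 13.
basis = BasisFile(Path("efts/sm-eft/6/basis")).get_basis()

# Print all types, sorted by their number of operators.
for op_type, info in sorted(basis.items(), key=lambda item: item[1].n_operators):
    print(f"{op_type}: {info.n_terms} terms, {info.n_operators} operators")
```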
### LaTeX Output
AutoEFT provides the latex (short: l) command for the automatic LaTeX markup of the constructed operators. For example, to produce the TeX files corresponding to the operator basis located at efts/sm-eft/6/basis,33 run the command:
Footnote 33: See Appendix E for the definition of a valid operator basis that can be processed by AutoEFT.
autoeft latex efts/sm-eft/6/basis
By default, AutoEFT stores all .tex files under the directory tex/sm-eft/6/. If this directory does not contain a file called main.tex, an appropriate file will be generated automatically. From this LaTeX file (for example with the help of pdflatex), one can produce a PDF document which contains a table encoding the model; a table representing the numbers of types, terms, and operators per family; the respective Hilbert series; and the information encoded in the operator files for each type.34 The latex command also supports the --select and --ignore options, to restrict the generation of TeX files to a subset of the entire basis (cf. Section 5 and Appendix B.1.4).
Footnote 34: Using the option -c, one can directly compile main.tex with the AutoEFT call, see Appendix B.1.4.
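For instance, to let AutoEFT compile main.tex directly after generating the TeX files, the -c option mentioned in Footnote 34 can be appended to the call above:

autoeft latex efts/sm-eft/6/basis -c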
## 7 Conclusion
We have presented AutoEFT, a completely independent implementation of the algorithms and concepts proposed in Refs. [13, 29, 30] for the automated bottom-up construction of on-shell operator bases for generic EFTs. AutoEFT has been successfully used to construct--for the first time--the complete and non-redundant SMEFT and GRSMEFT operator bases for mass dimensions 10, 11, and 12 [33]. Besides the SM, AutoEFT can accommodate various low-energy scenarios and generate the respective EFT operator bases, in principle up to arbitrary mass dimension. Due to the
simple format of the input files and the command-line utilities provided by AutoEFT, the user can compose custom models in a straightforward way and construct EFT operator bases with minimal effort. Its phenomenological purpose is to eliminate the task of manually constructing EFTs from low-energy theories which may involve as-of-yet undiscovered light particles. But AutoEFT may also help to understand deeper structures of EFTs at the general level (see, e.g., Ref. [50]).
AutoEFT provides a foundation for future EFT frameworks and we plan to extend its capabilities in various respects, including the automated translation of the output to the FeynRules[51, 52] format, or the capability to transform Wilson coefficients between different bases.
Acknowledgments. We would like to thank Jakob Linder and Maximilian Rzehak for helpful comments and extensive \(\beta\)-tests, and Tim Kempkens for the inspiring collaboration at the earlier stages of the project. This research was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under grant 400140256 - _GRK 2497: The physics of the heaviest particles at the LHC_, and grant 396021762 - _TRR 257: P3H - Particle Physics Phenomenology after the Higgs Discovery_.
## Appendix A Relations to Conventional Notation
In this section, we describe how the output of AutoEFT can be related to a more conventional notation, containing bispinors, field-strength tensors, and Weyl tensors with Lorentz four-vector indices as well as fields carrying anti-fundamental and adjoint indices of the internal symmetry groups.
The four-component bispinors can be decomposed into left- and right-handed parts, such that each is equally represented by a single two-component Weyl spinor:
\[\Psi_{\mathrm{L}}=\begin{pmatrix}\psi_{\mathrm{L}\,\alpha}\\ 0\end{pmatrix}\,,\quad\bar{\Psi}_{\mathrm{L}}=\begin{pmatrix}0\,,\;\psi^{\dagger}_{\mathrm{L}\,\dot{\alpha}}\end{pmatrix}\,,\quad\Psi_{\mathrm{R}}=\begin{pmatrix}0\\ \psi_{\mathrm{R}}^{\;\dot{\alpha}}\end{pmatrix}\,,\quad\bar{\Psi}_{\mathrm{R}}=\begin{pmatrix}\psi_{\mathrm{R}}^{\dagger\;\alpha}\,,\;0\end{pmatrix}\,.\] (A.1)
Note that in the all-left chirality notation, \(\psi\equiv\psi_{\mathrm{L}}\) and \(\psi_{\mathrm{C}}\equiv\psi_{\mathrm{R}}^{\dagger}\).
The output of AutoEFT containing Weyl spinors can be expressed in terms of bilinears involving the bispinors
\[\Psi=\begin{pmatrix}\psi_{\mathrm{L}\alpha}\\ \psi_{\mathrm{R}}^{\ \dot{\alpha}}\end{pmatrix}\,,\qquad\mathbf{\chi}= \begin{pmatrix}\chi_{\mathrm{L}\alpha}\\ \chi_{\mathrm{R}}^{\ \dot{\alpha}}\end{pmatrix}\,,\] (A.2)
using the relations
\[\begin{array}{ll}\bar{\Psi}_{\mathrm{R}}\boldsymbol{\chi}_{\mathrm{L}}=\psi_{\mathrm{R}}^{\dagger\;\alpha}\chi_{\mathrm{L}\,\alpha}\,,&\bar{\Psi}_{\mathrm{L}}\boldsymbol{\chi}_{\mathrm{R}}=\psi^{\dagger}_{\mathrm{L}\,\dot{\alpha}}\chi_{\mathrm{R}}^{\;\dot{\alpha}}\,,\\ \bar{\Psi}_{\mathrm{R}}\gamma^{\mu}\boldsymbol{\chi}_{\mathrm{R}}=\psi_{\mathrm{R}}^{\dagger\;\alpha}(\sigma^{\mu})_{\alpha\dot{\alpha}}\chi_{\mathrm{R}}^{\;\dot{\alpha}}\,,&\bar{\Psi}_{\mathrm{L}}\gamma^{\mu}\boldsymbol{\chi}_{\mathrm{L}}=\psi^{\dagger}_{\mathrm{L}\,\dot{\alpha}}(\bar{\sigma}^{\mu})^{\dot{\alpha}\alpha}\chi_{\mathrm{L}\,\alpha}\,,\\ \bar{\Psi}_{\mathrm{R}}\sigma^{\mu\nu}\boldsymbol{\chi}_{\mathrm{L}}=\psi_{\mathrm{R}}^{\dagger\;\alpha}(\sigma^{\mu\nu})_{\alpha}{}^{\beta}\chi_{\mathrm{L}\,\beta}\,,&\bar{\Psi}_{\mathrm{L}}\sigma^{\mu\nu}\boldsymbol{\chi}_{\mathrm{R}}=\psi^{\dagger}_{\mathrm{L}\,\dot{\alpha}}(\bar{\sigma}^{\mu\nu})^{\dot{\alpha}}{}_{\dot{\beta}}\chi_{\mathrm{R}}^{\;\dot{\beta}}\,,\\ \Psi_{\mathrm{R}}^{\mathrm{T}}\,\mathrm{C}\,\boldsymbol{\chi}_{\mathrm{R}}=\psi_{\mathrm{R}\,\dot{\alpha}}\chi_{\mathrm{R}}^{\;\dot{\alpha}}\,,&\Psi_{\mathrm{L}}^{\mathrm{T}}\,\mathrm{C}\,\boldsymbol{\chi}_{\mathrm{L}}=\psi_{\mathrm{L}}^{\;\alpha}\chi_{\mathrm{L}\,\alpha}\,,\\ \Psi_{\mathrm{R}}^{\mathrm{T}}\,\mathrm{C}\,\gamma^{\mu}\boldsymbol{\chi}_{\mathrm{L}}=\psi_{\mathrm{R}\,\dot{\alpha}}(\bar{\sigma}^{\mu})^{\dot{\alpha}\alpha}\chi_{\mathrm{L}\,\alpha}\,,&\Psi_{\mathrm{L}}^{\mathrm{T}}\,\mathrm{C}\,\gamma^{\mu}\boldsymbol{\chi}_{\mathrm{R}}=\psi_{\mathrm{L}}^{\;\alpha}(\sigma^{\mu})_{\alpha\dot{\alpha}}\chi_{\mathrm{R}}^{\;\dot{\alpha}}\,,\\ \Psi_{\mathrm{R}}^{\mathrm{T}}\,\mathrm{C}\,\sigma^{\mu\nu}\boldsymbol{\chi}_{\mathrm{R}}=\psi_{\mathrm{R}\,\dot{\alpha}}(\bar{\sigma}^{\mu\nu})^{\dot{\alpha}}{}_{\dot{\beta}}\chi_{\mathrm{R}}^{\;\dot{\beta}}\,,&\Psi_{\mathrm{L}}^{\mathrm{T}}\,\mathrm{C}\,\sigma^{\mu\nu}\boldsymbol{\chi}_{\mathrm{L}}=\psi_{\mathrm{L}}^{\;\alpha}(\sigma^{\mu\nu})_{\alpha}{}^{\beta}\chi_{\mathrm{L}\,\beta}\,,\\ \bar{\Psi}_{\mathrm{R}}\,\mathrm{C}\,\bar{\boldsymbol{\chi}}_{\mathrm{R}}^{\mathrm{T}}=\psi_{\mathrm{R}}^{\dagger\;\alpha}\chi_{\mathrm{R}\,\alpha}^{\dagger}\,,&\bar{\Psi}_{\mathrm{L}}\,\mathrm{C}\,\bar{\boldsymbol{\chi}}_{\mathrm{L}}^{\mathrm{T}}=\psi^{\dagger}_{\mathrm{L}\,\dot{\alpha}}\chi_{\mathrm{L}}^{\dagger\;\dot{\alpha}}\,,\\ \bar{\Psi}_{\mathrm{R}}\gamma^{\mu}\,\mathrm{C}\,\bar{\boldsymbol{\chi}}_{\mathrm{L}}^{\mathrm{T}}=\psi_{\mathrm{R}}^{\dagger\;\alpha}(\sigma^{\mu})_{\alpha\dot{\alpha}}\chi_{\mathrm{L}}^{\dagger\;\dot{\alpha}}\,,&\bar{\Psi}_{\mathrm{L}}\gamma^{\mu}\,\mathrm{C}\,\bar{\boldsymbol{\chi}}_{\mathrm{R}}^{\mathrm{T}}=\psi^{\dagger}_{\mathrm{L}\,\dot{\alpha}}(\bar{\sigma}^{\mu})^{\dot{\alpha}\alpha}\chi_{\mathrm{R}\,\alpha}^{\dagger}\,,\\ \bar{\Psi}_{\mathrm{R}}\sigma^{\mu\nu}\,\mathrm{C}\,\bar{\boldsymbol{\chi}}_{\mathrm{R}}^{\mathrm{T}}=\psi_{\mathrm{R}}^{\dagger\;\alpha}(\sigma^{\mu\nu})_{\alpha}{}^{\beta}\chi_{\mathrm{R}\,\beta}^{\dagger}\,,&\bar{\Psi}_{\mathrm{L}}\sigma^{\mu\nu}\,\mathrm{C}\,\bar{\boldsymbol{\chi}}_{\mathrm{L}}^{\mathrm{T}}=\psi^{\dagger}_{\mathrm{L}\,\dot{\alpha}}(\bar{\sigma}^{\mu\nu})^{\dot{\alpha}}{}_{\dot{\beta}}\chi_{\mathrm{L}}^{\dagger\;\dot{\beta}}\,,\end{array}\] (A.3)
where
\[\begin{array}{rcl}(\sigma^{\mu})_{\alpha\dot{\alpha}}=(I,\vec{\sigma})_{ \alpha\dot{\alpha}}\,,&(\bar{\sigma}^{\mu})^{\dot{\alpha}\alpha}=(I,-\vec{ \sigma})^{\dot{\alpha}\alpha}\,,\\ (\sigma^{\mu\nu})_{\alpha}^{\;\;\beta}=\frac{i}{2}(\sigma^{\mu}\bar{\sigma}^{ \nu}-\sigma^{\nu}\bar{\sigma}^{\mu})_{\alpha}^{\;\;\beta}\,,&(\bar{\sigma}_{ \mu\nu})^{\dot{\alpha}}{}_{\dot{\beta}}=\frac{i}{2}(\bar{\sigma}_{\mu}\sigma_ {\nu}-\bar{\sigma}_{\nu}\sigma_{\mu})^{\dot{\alpha}}{}_{\dot{\beta}}\,,\end{array}\] (A.4)
and
\[\begin{array}{rcl}\gamma^{\mu}=\begin{pmatrix}0&(\sigma^{\mu})_{\alpha\dot{ \beta}}\\ (\bar{\sigma}^{\mu})^{\dot{\alpha}\beta}&0\end{pmatrix}\,,&\sigma^{\mu\nu}=\frac {i}{2}\left[\gamma^{\mu},\gamma^{\nu}\right]\,,&\mathrm{C}=i\gamma^{0}\gamma^{ 2}\,.\end{array}\] (A.5)
Here, \(I\) is the \(2\times 2\) identity matrix and \(\vec{\sigma}=(\sigma^{1},\sigma^{2},\sigma^{3})\) denotes the Pauli matrices. Using the normalization of Li _et al._[36], the covariant derivative, field-strength tensor, and Weyl tensor with Lorentz four-vector indices are given by
\[\begin{array}{rcl}D_{\mu}=-\frac{1}{2}D_{\alpha}^{\dot{\alpha}}(\sigma_{\mu} )_{\dot{\alpha}}^{\alpha}\,,&D_{\alpha}^{\dot{\alpha}}=D_{\mu}(\sigma^{\mu})_{ \alpha}^{\dot{\alpha}}\,,\end{array}\] (A.6)
\[\begin{array}{rcl}F^{\mu\nu}=\frac{i}{4}F_{\rm L}^{\;\alpha\beta}\sigma_{ \alpha\beta}^{\mu\nu}-\frac{i}{4}F_{\rm R}^{\;\dot{\alpha}\dot{\beta}}\bar{ \sigma}_{\dot{\alpha}\dot{\beta}}^{\mu\nu}\,,\\ F_{\rm L}\,_{\alpha\beta}=\frac{i}{2}F_{\mu\nu}\sigma_{\alpha\beta}^{\mu\nu}\,,&F_{ \rm R}^{\;\dot{\alpha}\dot{\beta}}=-\frac{i}{2}F^{\mu\nu}\bar{\sigma}_{\mu\nu}^{ \dot{\alpha}\dot{\beta}}\,,\end{array}\] (A.7)
and
\[\begin{array}{rcl}C^{\mu\nu\rho\sigma}=-\frac{1}{16}C_{\rm L}^{\;\alpha\beta \gamma\delta}\sigma_{\alpha\beta}^{\mu\nu}\sigma_{\gamma\delta}^{\rho\sigma}- \frac{1}{16}C_{\rm R}^{\;\dot{\alpha}\dot{\beta}\dot{\gamma}\dot{\delta}}\bar{ \sigma}_{\dot{\alpha}\dot{\beta}}^{\mu\nu}\bar{\sigma}_{\dot{\gamma}\dot{\delta}}^ {\rho\sigma}\,,\\ C_{\rm L}\,_{\alpha\beta\gamma\delta}=-\frac{1}{4}C_{\mu\nu\rho\sigma}\sigma_{ \alpha\beta}^{\mu\nu}\sigma_{\gamma\delta}^{\rho\sigma}\,,&C_{\rm R}^{\; \dot{\alpha}\dot{\beta}\dot{\gamma}\dot{\delta}}=-\frac{1}{4}C^{\mu\nu\rho \sigma}\bar{\sigma}_{\mu\nu}^{\dot{\alpha}\dot{\beta}}\bar{\sigma}_{\rho\sigma} ^{\dot{\gamma}\dot{\delta}}\,,\end{array}\] (A.8)
respectively.
Similarly, fields with \(SU(N)\) anti-fundamental and adjoint indices are given by
\[\left(\psi^{\dagger}\right)^{b}=\frac{1}{(N-1)!}\epsilon^{a_{1}\ldots a_{N-1}b} \psi^{\dagger}_{a_{1}\ldots a_{N-1}}\,,\qquad\psi^{\dagger}_{a_{1}\ldots a_{N- 1}}=\epsilon_{a_{1}\ldots a_{N-1}b}(\psi^{\dagger})^{b}\,,\] (A.9)
and
\[\begin{array}{c}F^{A}=\frac{1}{T_{F}(N-1)!}\epsilon^{a_{1}a_{3}\ldots a_{N-1 }b}{(T^{A})}^{a_{2}}_{\phantom{a_{1}}b}F_{a_{1}a_{2}a_{3}\ldots a_{N}}\,,\\ F_{a_{1}a_{2}a_{3}\ldots a_{N}}=\epsilon_{a_{1}a_{3}\ldots a_{N-1}b}{(T^{A})} ^{b}_{\phantom{a_{2}}a_{2}}F^{A}\,,\end{array}\] (A.10)
respectively. Here, \(T^{A}\) denotes the \(SU(N)\) generators in the fundamental representation, and \(T_{F}\) defines their normalization via \(Tr(T^{A}T^{B})=T_{F}\,\delta^{AB}\).
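For instance, in the fundamental representation of \(SU(2)\) with \(T^{A}=\sigma^{A}/2\), one finds \(Tr(T^{A}T^{B})=\delta^{AB}/2\), i.e. \(T_{F}=1/2\).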
## Appendix B Invoking autoeft
_Synopsis_
autoeft [options] command [args]
_Description_
This is the main command starting autoeft. Its general behavior is affected by the options. autoeft offers several commands which are explained in the following sections. The behavior of each command is controlled by args, including positional arguments and further options.
_Command-Line Options_
-h
--help
Print a usage message briefly summarizing the command-line options.
-v
--version
Print the version number of autoeft to the standard output stream.
-q
--quiet
Suppress all output to the standard output stream.
### autoeft Commands
autoeft sample-model
Print the Standard Model definition in YAML format to the standard output stream.
autoeft construct (alias: autoeft c)
Construct an operator basis for a given model and mass dimension.
autoeft count
Count the number of families, types, terms, and operators for a given basis.
autoeft latex (alias: autoeft l)
Generate and compile TeX files for a given basis.
autoeft generators (alias: autoeft g)
View or create the symmetric group representation generators.
### sample-model Command
#### Description
This command prints the content of the model file (see [Model], page 48) for the Standard Model to the standard output stream. To obtain the model file sm.yml in the current working directory, simply run:
autoeft sample-model > sm.yml
### construct Command
#### Synopsis
autoeft construct [options] model dimension
#### Description
This command starts the construction of an operator basis. The details of the basis depend on the provided model file model.yml (see [Model], page 48) and
mass dimension D. The result will be saved in the (default) output directory under <model>-eft/<D>/basis/. The operators are further collected by their family (see [family], page 52) and type (see [type], page 53) in subdirectories of structure <N>/<family>/<type>.yml, where N is the number of fields in the operator. See [Output], page 51 for the format of the output.
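For example, assuming the sample model file sm.yml generated above, a dimension-6 operator basis would be constructed by running:

autoeft construct sm.yml 6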
#### Positional Arguments
model
Path to the model file that should be used for the construction (see [Model], page 48).
dimension
The integer mass dimension the constructed operators should have.
#### Options
-h
--help
Print a usage message briefly summarizing the command-line options.
-v
--verbose
Print a tree-like structure of operator families and types during the construction.
-t n
--threads=n
Set the number of threads n that start form processes. By default, n is the number of CPUs in the system.
-o path
--output=path
Set the output path, where the operators will be saved. By default, the operators are saved under efts/ in the current working directory.
-s pattern
--select=pattern
Only construct operator types (see [type], page 53) that match with pattern. The pattern must be given as a string representing a mapping--denoted by curly braces--from the fields to their number of occurrences in the desired type (e.g., '{H: 3, H+: 3}'). Besides explicit numbers, the symbol '+' can be used to require at least one occurrence. Alternatively, a range can be provided by
'x..y', meaning there must be at least 'x' and at most 'y' occurrences. If one of the bounds is omitted (e.g., 'x..', or '..y'), only the remaining one is enforced. By default, no selection is performed and any operator will be constructed. If this option is used multiple times, an operator type must match at least one pattern to be constructed. If this option is combined with the -i (--ignore) option, _select_ is applied before _ignore_.
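For instance, a hypothetical invocation that only constructs types containing exactly three Higgs fields and at least one conjugate Higgs field would be:

autoeft construct sm.yml 8 --select='{H: 3, H+: +}'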
-i pattern
--ignore=pattern
Do not construct operator types (see [type], page 53) that match with pattern. The pattern must be given as a string representing a mapping--denoted by curly braces--from the fields to their number of occurrences in the desired type (e.g., '{H: 3, H+: 3}'). Besides explicit numbers, the symbol '+' can be used to require at least one occurrence. Alternatively, a range can be provided by 'x..y', meaning there must be at least 'x' and at most 'y' occurrences. If one of the bounds is omitted (e.g., 'x..', or '..y'), only the remaining one is enforced. By default, no exclusion is performed and all operators will be constructed. If this option is used multiple times, an operator type that matches at least one pattern will not be constructed. If this option is combined with the -s (--select) option, _select_ is applied before _ignore_.
--dry-run
Only list the operator types (see [type], page 53) that match the provided selection. No explicit construction is performed.
--generators=path
Set the generators path, where autoeft searches for the symmetric group representation generators (see [Generators], page 46). By default, this is set to gens/ in the current working directory. If the directory does not exist or some generator files are missing, autoeft loads fallback generators for the representations up to S\({}_{9}\).
--overwrite
Overwrite existing operator files in the output directory.
### count Command
#### Synopsis
autoeft count [options] basis
#### Description
This command counts the number of families, types, terms, and operators for a given basis (see [Vocabulary], page 52).
#### Positional Arguments
basis
Path to the basis containing the operators (see [basis], page 54).
#### Options
-h
--help
Print a usage message briefly summarizing the command-line options.
-v
--verbose
Print a tree-like structure of operator families and types.
-o file
--output=file
Set the output file, where the numbers will be saved. By default, the numbers are saved in the file counts.yml in the current working directory.
-s pattern
--select=pattern
Only count operator types (see [type], page 53) that match with pattern. The pattern must be given as a string representing a mapping--denoted by curly braces--from the fields to their number of occurrences in the desired type (e.g., '{H: 3, H+: 3}'). Besides explicit numbers, the symbol '+' can be used to require at least one occurrence. Alternatively, a range can be provided by 'x..y', meaning there must be at least 'x' and at most 'y' occurrences. If one of the bounds is omitted (e.g., 'x..', or '..y'), only the remaining one is enforced. By default, no selection is performed and any operator will be counted. If this option is used multiple times, an operator type must match at least one pattern to be counted. If this option is combined with the -i (--ignore) option, _select_ is applied before _ignore_.
-i pattern
--ignore=pattern
Do not count operator types (see [type], page 53) that match with pattern. The pattern must be given as a string representing a mapping--denoted by curly braces--from the fields to their number of occurrences in the desired type (e.g., '{H: 3, H+: 3}'). Besides explicit numbers, the symbol '+' can be used to require at least one occurrence. Alternatively, a range can be provided by 'x..y', meaning there must be at least 'x' and at most 'y' occurrences. If one of the bounds is omitted (e.g., 'x..', or '..y'), only the remaining one is enforced. By default, no exclusion is performed and all operators will be counted. If this option is used multiple times, an operator type that matches at least one pattern will not be counted. If this option is combined with the -s (--select) option, _select_ is applied before _ignore_.
--dry-run
Only list the operator types (see [type], page 53) that match the provided selection. No explicit counting is performed.
### latex Command
#### Synopsis
autoeft latex [options] basis
#### Description
This command generates TeX files for a given basis (see [basis], page 54). The TeX files represent all the information encoded in the operator files as LaTeX markup. The resulting files compose a valid LaTeX document that can be compiled to a single PDF file.
#### Positional Arguments
basis
Path to the basis containing the operators (see [basis], page 54).
#### Options
-h
--help
Print a usage message briefly summarizing the command-line options.
-c command
--compile=command
Compile the TeX files by invoking the command in the output directory.
-o path
--output=path
Set the output path, where the TeX files will be saved. By default, the TeX files are saved under tex/ in the current working directory.
-s pattern
--select=pattern
Only generate TeX files for operator types (see [type], page 53) that match with pattern. The pattern must be given as a string representing a mapping--denoted by curly braces--from the fields to their number of occurrences in the desired type (e.g., '{H: 3, H+: 3}'). Besides explicit numbers, the symbol '+' can be used to require at least one occurrence. Alternatively, a range can be provided by 'x..y', meaning there must be at least 'x' and at most 'y' occurrences. If one of the bounds is omitted (e.g., 'x..', or '..y'), only the remaining one is enforced. By default, no selection is performed and any operator will be included. If this option is used multiple times, an operator type must match at least one pattern to be included. If this option is combined with the -i (--ignore) option, _select_ is applied before _ignore_.
-i pattern
--ignore=pattern
Do not generate TeX files for operator types (see [type], page 53) that match with pattern. The pattern must be given as a string representing a mapping--denoted by curly braces--from the fields to their number of occurrences in the desired type (e.g., '{H: 3, H+: 3}'). Besides explicit numbers, the symbol '+' can be used to require at least one occurrence. Alternatively, a range can be provided by 'x..y', meaning there must be at least 'x' and at most 'y' occurrences. If one of the bounds is omitted (e.g., 'x..', or '..y'), only the remaining one is enforced. By default, no exclusion is performed and all operators will be included. If this option is used multiple times, an operator type that matches at least one pattern will be excluded. If this option is combined with the -s (--select) option, _select_ is applied before _ignore_.
--dry-run
Only list the operator types (see [type], page 53) that match the provided selection. No explicit TeX files are generated.
### generators Command
#### Synopsis
autoeft generators [options]
#### Description
This command handles the pre-computed generator matrices for the symmetric group representations. If this command is executed _without_ the -S or -P options, a table of all generators that autoeft would load with the current options is printed to the standard output stream. Otherwise, the respective generators are computed and stored.
#### Options
-h
--help
Print a usage message briefly summarizing the command-line options.
-o path
--output=path
Set the output path, where the generators will be saved. By default, the generators are saved under gens/ in the current working directory.
-S \(N\)
Create the generators for all irreducible representations of the symmetric group \(\mathrm{S}_{N}\) of degree \(N\).
-P \(p\) [\(p\)...]
Create the generators for the irreducible representation given by the partition as a non-increasing list of integers \(p\).
--overwrite
Overwrite existing generator files in the output directory.
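For example, assuming the option syntax in the synopsis above, the generators for all irreducible representations of \(\mathrm{S}_{5}\) could be created and stored with:

autoeft generators -S 5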
### Environment Variables
AUTOEFT_PATH
The environment variable AUTOEFT_PATH can be set to a path (or a list of paths, separated by ':'), to specify where autoeft searches for the form executable. If AUTOEFT_PATH is not set, the system PATH will be used instead.

AUTOEFT_CS
The symbol appended to the field name to denote conjugate fields. By default, the symbol '+' is used.

AUTOEFT_DS
The symbol appended to spinor indices to denote dotted indices. By default, the tilde symbol '~' is used.
## Appendix C Model File
name Scalar
The name of the model. This is usually a short identifier like 'SMEFT' or 'LEFT'.

description Scalar
Optional (longer) description of the model.

symmetries Mapping
Definition of the model's symmetries. The symmetries are divided into the sub-entries lorentz_group, sun_groups, and u1_groups.

lorentz_group Mapping
Properties of the Lorentz group--realized as \(SU(2)_{l}\times SU(2)_{r}\). If omitted, autoeft loads the Lorentz group with default values.

name Scalar
The name associated with the Lorentz group.
* Group names must start with a letter ('A-z').
* Group names can only contain alpha-numeric characters and parentheses ('A-z', '0-9', '(', and ')').
* Group names must end with an alpha-numeric character or a parenthesis ('A-z', '0-9', '(', or ')').
By default, the name 'Lorentz' is used.

tex Scalar
The TeX string associated with the Lorentz group. By default, the group name surrounded by \mathtt{\(\bullet\)} is used.

indices Sequence
The list of TeX (spinor) indices associated with the Lorentz group. By default, the Greek letters '\(\alpha,\beta,\dots,\lambda\)' are used.

sun_groups Mapping
Definition of the model's non-abelian symmetries--realized as \(SU(N)\) groups. Each entry is a mapping from the symmetry's name (e.g., 'QCD' or 'SU(3)') to its properties.
* Group names must start with a letter ('A-z').
* Group names can only contain alpha-numeric characters and parentheses ('A-z', '0-9', '(', and ')').
* Group names must end with an alpha-numeric character or a parenthesis ('A-z', '0-9', '(', or ')').
By default, sun_groups is the empty mapping '{}'.
N Scalar
The degree of the respective \(SU(N)\) group. This must be a positive integer greater than '1'.

tex Scalar
The TeX string associated with the respective \(SU(N)\) group. By default, the group name surrounded by \mathtt{\(\bullet\)} is used.

indices Sequence
The list of TeX (fundamental) indices associated with the respective \(SU(N)\) group. By default, the characters 'a,b,...,k' are used.

u1_groups Mapping
Definition of the model's abelian symmetries--realized as \(U(1)\) groups. Each entry is a mapping from the symmetry's name (e.g., 'QED' or 'U(1)') to its properties.
* Group names must start with a letter ('A-z').
* Group names can only contain alpha-numeric characters and parentheses ('A-z', '0-9', '(', and ')').
* Group names must end with an alpha-numeric character or a parenthesis ('A-z', '0-9', '(', or ')').
By default, u1_groups is the empty mapping '{}'.
violation Scalar
A unit of charge the operators are **allowed** to deviate from the exact \(U(1)\) symmetry. The unit can be an integer value (e.g., '1') or a fractional value (e.g., '1/2'). The violation \(v\) can be combined with a residual charge \(R\), such that only operators with total charge \(Q\) satisfying \(|Q-R|\leq v\) are allowed. The default value for violation is '0'.

residual Scalar
A unit of charge the operators **must** deviate from the exact \(U(1)\) symmetry. The unit can be an integer value (e.g., '1') or a fractional value (e.g., '1/2'). See also violation. The default value for residual is '0'.

tex Scalar
The TeX string associated with the respective \(U(1)\) group. By default, the group name surrounded by \mathtt{\(\bullet\)} is used.
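To illustrate the structure described above, a minimal symmetries block might look as follows; the group names, index labels, and values are purely illustrative:

```yaml
symmetries:
  sun_groups:
    SU(3):
      N: 3
      indices: [a, b, c]
  u1_groups:
    U(1):
      violation: 0
```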
fields Mapping
Definition of the model's field content. Each entry is a mapping from the field's name (e.g., 'Q' or 'H') to its properties.
* Field names must start with a letter ('A-z').
* Field names can only contain alpha-numeric characters ('A-z' and '0-9').
* Field names must end with an alpha-numeric character or a plus ('A-z', '0-9', or '+'). If the environment variable AUTOEFT_CS is set, it replaces the plus symbol '+' in the above restrictions (see [Environment], page 47).
* The field name cannot be 'D', as this symbol is reserved for the covariant derivative.
By default, fields is the empty mapping '{}'.
representations Mapping
Definition of the field's representations. Each entry is a mapping from a group name--as defined under symmetries--to an irreducible representation associated with this group. Depending on the group, the irreducible representation is expressed as
* an integer/fraction (e.g., '1'/'1/2') denoting the helicity--Lorentz,
* a partition (e.g., '[2,1]') denoting the Young Diagram--\(SU(N)\),
* an integer/fraction (e.g., '1'/'1/2') denoting the charge--\(U(1)\).
The field is assumed to transform like a singlet under every group defined under symmetries that is not explicitly listed under representations. Hence, every representation not explicitly defined assumes an appropriate default value ('0' for the helicity/charge and '[]' for the Young Diagram).
anticommute Scalar
If set to 'False' ('True'), the field is treated as commuting (anticommuting). By default, this property is derived from the field's helicity, respecting spin-statistics.

conjugate Scalar
Whether to automatically include the conjugate field. By default, this property is derived from the field's representations. If all representations are equivalent to real representations, only the field explicitly defined is included. Otherwise, the conjugate field is included as well.

generations Scalar
The number of copies with the same representations appearing in the model. The number of generations must be a positive integer. By default, each field has just a single generation.

tex Scalar
tex_hc Scalar
The TeX strings associated with the field and its conjugate. By default, tex is set to the field name surrounded by \mathtt{\(\bullet\)}. If tex_hc is omitted, it is set to the value of tex and--depending on the value of conjugate--optionally appended by a dagger ('\(\dagger\)').
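As an illustrative sketch (not the shipped Standard Model file), a fields entry for a single left-handed doublet--assuming symmetry groups named 'SU(3)', 'SU(2)' and 'U(1)' were defined under symmetries--could read:

```yaml
fields:
  Q:
    representations:
      Lorentz: -1/2
      SU(3): [1]
      SU(2): [1]
      U(1): 1/6
    generations: 3
```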
## Appendix D Output

For the Lorentz group, '<i>_<j>~' denotes dotted indices. If the environment variable AUTOEFT_DS is set, it replaces the tilde symbol '~' in the above notation (see [Environment], page 47). Note that the dotted and undotted spinor indices of the building blocks (i.e. fields plus derivatives) are understood to be (separately) symmetrized. The symbol 'eps' denotes the \(\epsilon\)-tensor with eps(1,2)=eps(2~,1~)=1 for the Lorentz group and eps(1,2,...,n)=1 for any internal \(SU(N)\) group. All indices not associated with the symmetry group in question are suppressed on the fields. If the operators contain only fields that are singlets under a particular symmetry group, there is no index to be contracted and the entry contains just a single element '+1' multiplied by the fields without indices.
permutation_symmetries Sequence
Details of the type's permutation symmetries. The first entry is always a mapping from 'vector' to a product of group names--as defined under symmetries in the model file--separated by '*'. This denotes the order of the tensor product of the invariant contractions given under invariants. All other entries represent explicit permutation symmetries.

symmetry Mapping
The permutation symmetry of each field, identified by an integer partition (e.g., '[2,1]') corresponding to an irreducible representation of the symmetric group.

n_terms Scalar
The number of independent contractions respecting the permutation symmetry, not including multiple generations.

n_operators Scalar
The number of independent contractions respecting the permutation symmetry, including multiple generations.

matrix Scalar
The matrix representing the independent linear combinations of the invariant contractions respecting the permutation symmetry.
## Appendix E Vocabulary
family
A family represents all operators with the same **Lorentz** representations of the fields (i.e., same _kind_ of fields). For each field, the Lorentz representation
can be identified with the helicity value \(h\) and autoeft assigns the following symbols to each representation:
| object | helicity | symbol |
| --- | --- | --- |
| scalar | \(0\) | 'phi' |
| spinor | \(-1/2\) | 'psiL' |
| | \(+1/2\) | 'psiR' |
| rank-2 tensor | \(-1\) | 'FL' |
| | \(+1\) | 'FR' |
| rank-4 tensor | \(-2\) | 'CL' |
| | \(+2\) | 'CR' |
| covariant derivative | | 'D' |

Each family is then represented by a string consisting of these symbols, each preceded by its number of occurrences in the family and separated by an underscore '_'. To identify a family uniquely, the symbols are sorted by their helicity value and the covariant derivative is always added to the end. Representations not appearing in the family are simply dropped. For example, the family of the dimension-5 Weinberg operator--consisting of two spinors and two scalars--is given by '2psiL_2phi' and its conjugate by '2phi_2psiR'.
type

A type represents all operators with the same **Lorentz** and **internal** representations of the fields (i.e., same _content_ of fields). Each type is then represented by a string consisting of the field name--as defined under fields in the model file--and each field is preceded by its number of occurrences in the type and separated by an underscore '_'. To identify a type uniquely, the fields are first sorted by their helicity value and fields with the same helicity are sorted by their name alpha-numerically. The covariant derivative is always added to the end. Representations not appearing in the type are simply dropped. For example, the type of the dimension-5 Weinberg operator--consisting of two lepton doublets 'L' and two Higgs doublets 'H'--is given by '2L_2H' and its conjugate by '2H+_2L+'.
term

A term represents all operators with the same explicit contraction of the fields with the invariant tensor structures of the external and internal symmetries, retaining open generation indices.
For the number of terms to be unambiguous, a term is required to have a definite permutation symmetry for all repeated fields. That means that general terms are always decomposed into their irreducible representations under the symmetric group with respect to the repeated fields.
operator
An operator is a particular instance of a _term_ with fixed generation indices for all fields. Since any term has a definite permutation symmetry for any repeated field, the independent operators correspond to the independent components of the respective Wilson coefficient. This means that the number of operators is equal to the independent degrees of freedom of the EFT (Effective Field Theory).
basis
A valid operator _basis_ that can be processed further is represented by a directory containing the file model.json referencing the model, a (hidden) file .autoeft containing metadata, and the respective operator files (see [Output], page 51). Note that, while the files model.json and .autoeft must be contained in the top-level directory of the basis, the operator files can be structured in subdirectories. By default, the directories called basis created by autoeft construct compose valid bases.
real family/type
A _real_ family contains types that are either real or both the type and its conjugate are part of the same family.
A _real_ type contains terms that are either hermitian or the hermitian conjugate terms are not independent and can be expressed as a combination of the terms of the same type.
complex family/type
A _complex_ family only contains complex types and the conjugate types are part of the distinct conjugate family.
A _complex_ type only contains terms that are not hermitian and the hermitian conjugate terms can be expressed as a combination of the terms of the distinct conjugate type.
|
2309.06106 | How numerical treatments of the transition region modify energy flux
into the solar corona | The large temperature gradients in the solar transition region present a
significant challenge to large scale numerical modelling of the Sun's
atmosphere. In response, a variety of techniques have been developed which
modify the thermodynamics of the system. This sacrifices accuracy in the
transition region in favour of accurately tracking the coronal response to
heating events. Invariably, the modification leads to an artificial broadening
of the transition region. Meanwhile, many contemporary models of the solar
atmosphere rely on tracking energy flux from the lower atmosphere, through the
transition region and into the corona. In this article, we quantify how the
thermodynamic modifications affect the rate of energy injection into the
corona. We consider a series of one-dimensional models of atmospheric loops
with different numerical resolutions and treatments of the thermodynamics.
Then, using Alfv\'en waves as a proxy, we consider how energy injection rates
are modified in each case. We find that the thermodynamic treatment and the
numerical resolution significantly modify Alfv\'en travel times, the
eigenfrequencies and eigenmodes of the system, and the rate at which energy is
injected into the corona. Alarmingly, we find that the modification of the
energy flux is frequency dependent, meaning that it may be difficult to compare
the effects of different velocity drivers on coronal heating if they are
imposed below an under-resolved transition region, even if the sophisticated
thermodynamic adaptations are implemented. | Thomas Howson, Cosima Breu | 2023-09-12T10:17:42Z | http://arxiv.org/abs/2309.06106v1 | # How numerical treatments of the transition region modify energy flux into the solar corona
###### Abstract
The large temperature gradients in the solar transition region present a significant challenge to large scale numerical modelling of the Sun's atmosphere. In response, a variety of techniques have been developed which modify the thermodynamics of the system. This sacrifices accuracy in the transition region in favour of accurately tracking the coronal response to heating events. Invariably, the modification leads to an artificial broadening of the transition region. Meanwhile, many contemporary models of the solar atmosphere rely on tracking energy flux from the lower atmosphere, through the transition region and into the corona. In this article, we quantify how the thermodynamic modifications affect the rate of energy injection into the corona. We consider a series of one-dimensional models of atmospheric loops with different numerical resolutions and treatments of the thermodynamics. Then, using Alfven waves as a proxy, we consider how energy injection rates are modified in each case. We find that the thermodynamic treatment and the numerical resolution significantly modify Alfven travel times, the eigenfrequencies and eigenmodes of the system, and the rate at which energy is injected into the corona. Alarmingly, we find that the modification of the energy flux is frequency dependent, meaning that it may be difficult to compare the effects of different velocity drivers on coronal heating if they are imposed below an under-resolved transition region, even if the sophisticated thermodynamic adaptations are implemented.
keywords: Sun: oscillations - Sun: corona - Sun: transition region
## 1 Introduction
The solar corona is the largest and hottest layer of the Sun's atmosphere and is the subject of one of the great outstanding questions in astrophysics: the coronal heating problem. This concerns how the surprisingly high temperatures of the corona are maintained against energy losses, including thermal conduction to the cooler layers below and radiative losses into space. The complexity of the solar atmosphere has ensured that this problem has resisted decades of sustained effort including sophisticated observational, analytical and numerical studies. It is widely accepted that the required energy is ultimately injected into the atmosphere by complex convective motions at the solar surface and a wide variety of contemporary models show how a hot corona can be sustained as a result (e.g. Gudiksen and Nordlund, 2005; Bingert and Peter, 2011; Reale et al., 2016; Kanella and Gudiksen, 2018; Breu et al., 2022; Kuniyoshi et al., 2023; Reid et al., 2023). However, the specific nature of the energy dissipation mechanisms remains hotly contested. Thorough reviews of contemporary research in this area are available (e.g. Reale, 2014; Klimchuk, 2015; Van Doorsselaere et al., 2020; Viall et al., 2021).
In recent decades, significant growth in computational power has enabled the solar atmosphere to be modelled with large scale, high-resolution MHD simulations. Increasingly, these simulations include the full, gravitationally-stratified atmosphere with the different layers of the atmosphere considered within a single numerical model (e.g. Hansteen et al., 2015; Cheung et al., 2019; Howson and De Moortel, 2022; Robinson et al., 2022; Chen et al., 2023; Guo et al., 2023). As each layer is associated with distinct physical processes occurring on disparate spatial and temporal scales, incorporating the complete atmosphere within one simulation represents a significant numerical challenge. Despite this, contemporary models attempt to track the flux of energy and mass from convective layers at the base of the simulation volume, through a complex and dynamic chromosphere and into the corona, which is heated to realistic temperatures (e.g. simulations produced with the MuRAM and Bifrost codes Vogler et al., 2005; Gudiksen et al., 2011). The success of these models can then be assessed by generating synthetic emission from the simulation results for comparison with real observations. As emission from the corona is sensitive to the plasma density and temperature, the exchange of mass between the atmospheric layers and energy dissipation rates are important components of coronal heating models.
One particularly challenging aspect of solar atmospheric modelling concerns the transition region. This is a narrow layer of the solar atmosphere that sits at the interface between the relatively cool and dense chromosphere and the hot and tenuous corona. Over the transition region, the plasma temperature increases by more than two orders of magnitude, over a short distance. As a result, there are very large temperature gradients which present a considerable
problem for the finite difference schemes implemented within MHD solar atmospheric codes. Under-resolving these gradients can significantly impair the accuracy of simulations, including vastly underestimating the upflow of plasma into the corona following heating events Bradshaw and Cargill (2013) and artificially suppressing thermal non-equilibrium cycles Johnston et al. (2019). A potentially naive solution may be to simply increase the number of grid points used by the numerical schemes such that the transition region temperature gradients remain well-resolved. However, whilst this can be a prudent strategy in 1-D codes, the computational cost in three dimensions is prohibitive. In response, several numerical techniques have been developed (e.g. Lionello et al., 2009; Mikic et al., 2013; Johnston et al., 2017; Johnston and Bradshaw, 2019; Johnston et al., 2021). These are described in more detail in Section 2, however they generally work by adapting the effects of the transition region in order to accurately model the coronal response to heating events (e.g. density enhancement due to the evaporation of chromospheric and transition region plasma Hirayama, 1974; Fisher et al., 1985; Tian and Chen, 2018) even with relatively coarse numerical resolution. These techniques are designed to correctly track the evolution of plasma in the corona, such that synthetic emission can be generated from heating models for direct comparison with observations (e.g. Antolin et al., 2016; Pontin et al., 2017; Kanella and Gudiksen, 2019; Warren et al., 2020). However, they also broaden the transition region and thus modify the flux of other forms of energy (e.g. Poynting flux) from the lower atmosphere into the corona. By modifying this energy flux, the artificial TR broadening may have unintended consequences for large scale coronal heating simulations.
As these methods are being increasingly implemented in multi-dimensional models of the solar atmosphere (e.g. Van Damme et al., 2020; Zhou et al., 2021; Howson and De Moortel, 2022; Keppens et al., 2023; Li et al., 2023; Pelouze et al., 2023), it is now essential to quantify how coronal energy injection rates are affected by these thermodynamic treatments. To this end, in this paper, we consider how simple propagating Alfven waves are modified as they propagate through different simulated transition regions. By using these waves as a proxy for more complex atmospheric dynamics, we will discuss the frequency and resolution-dependent effects on energy transmission. This will allow us to estimate how mechanical energy flux is affected by transition region modifications in larger and more complex models. The remainder of the article is presented as follows: In section 2, we describe our simple models, in section 3, we describe our results and in section 4, we discuss the implications for contemporary modelling of the fully coupled solar atmosphere.
## 2 Numerical Methods
For the majority of the simulations conducted within this article, we used the Lare2D code (Arber et al., 2001; Arber, 2018). The code advances the full, non-ideal and non-linear MHD equations given in normalised form by:
\[\frac{\mathrm{D}\rho}{\mathrm{D}t}=-\rho\nabla\cdot\mathbf{v}, \tag{1}\]
\[\rho\frac{\mathrm{D}\mathbf{v}}{\mathrm{D}t}=\mathbf{j}\times\mathbf{B}- \nabla P-\rho\mathbf{g}+\mathbf{F}_{\mathrm{visc.}}, \tag{2}\]
\[\rho\frac{\mathrm{D}\epsilon}{\mathrm{D}t}=-P(\nabla\cdot\mathbf{v})-\nabla\cdot\mathbf{q}-\rho^{2}\Lambda(T)+\eta|\mathbf{j}|^{2}+Q_{\mathrm{visc.}}+Q_{\mathrm{bg.}}, \tag{3}\]
\[\frac{\mathrm{D}\mathbf{B}}{\mathrm{D}t}=\left(\mathbf{B}\cdot\nabla\right) \mathbf{v}-\left(\nabla\cdot\mathbf{v}\right)\mathbf{B}-\nabla\times\left( \eta\nabla\times\mathbf{B}\right), \tag{4}\]
\[P=2k_{B}nT. \tag{5}\]
Here \(\rho\) is the plasma density, \(\mathbf{v}\) is the velocity, \(\mathbf{j}\) is the current density, \(\mathbf{B}\) is the magnetic field, \(P\) is the gas pressure, \(\mathbf{g}\) is the gravitational acceleration, \(\epsilon\) is the specific internal energy density, \(\eta\) is the resistivity, \(k_{B}\) is the Boltzmann constant, \(n\) is the number density and \(T\) is the temperature. For numerical stability, small shock viscosity terms are included, which contribute a frictional force to the equation of motion (2) and an associated small heating term to the energy equation (3). These are described in detail in (Arber, 2018; Reid et al., 2020). By testing different dissipation coefficients, we confirmed that the viscous effects are small (for these transport coefficients), such that any wave damping is effectively negligible.
The energy equation includes contributions from thermal conduction, \(\nabla\cdot\mathbf{q}\), optically thin radiation, \(\rho^{2}\Lambda(T)\), and a background heating, \(Q_{\mathrm{bg.}}\) The radiative loss curve is described in detail by Klimchuk et al. (2008) and the background heating term is implemented to maintain an initial equilibrium. The magnitude of this heating term is discussed in Sect. 2.1. The vector, \(\mathbf{q}\), represents the heat flux and is defined according to the Braginskii (1965) model for thermal conduction in a magnetised plasma. In particular, \(\mathbf{q}\) is given by
\[\mathbf{q}=\left(\mathbf{k}\cdot\nabla T\right)\mathbf{n}+\left(\frac{b_{\mathrm{min}}^{2}}{B^{2}+b_{\mathrm{min}}^{2}}\right)\kappa\nabla T, \tag{6}\]

where \(\mathbf{n}=\mathbf{B}/(B^{2}+b_{\mathrm{min}}^{2})\) is parallel to the magnetic field and \(\mathbf{k}=\kappa\mathbf{n}\) with \(\kappa=\kappa_{0}T^{5/2}\) and \(\kappa_{0}=10^{-11}\) J m\({}^{-1}\) K\({}^{-7/2}\) s\({}^{-1}\). The constant \(b_{\mathrm{min}}\) is defined as a small constant to avoid numerical issues at magnetic null points (\(\mathbf{B}=\mathbf{0}\)) and in the limit \(B^{2}\gg b_{\mathrm{min}}^{2}\), equation 6 recovers the Spitzer-Härm parallel conductivity, with efficient heat conduction along field lines but negligible energy transfer across them. Although no magnetic null points occur within our simulations, for completeness, we note that if \(\mathbf{B}\rightarrow\mathbf{0}\), isotropic conduction would be recovered.
The second spatial derivatives of the plasma temperature required for advancing the energy equation 3 are particularly problematic in the transition region, where the temperature changes very rapidly with height. As discussed in Sect. 1, spatially under-resolving this region in coronal heating models has significant consequences for plasma evolution. In this article, we consider two numerical techniques that allow more accurate modelling of coronal plasma even at lower resolutions. These are the L09 method (Lionello et al., 2009; Mikic et al., 2013) and the TRAC approach (Johnston and Bradshaw, 2019; Johnston et al., 2021). The L09 method works by modifying the thermodynamics below a fixed cut-off temperature, \(T_{c}\). Specifically, at temperatures below \(T_{c}\), the parallel thermal conductivity is increased:
\[\kappa_{\parallel}^{*}(T)=\begin{cases}\kappa_{0}T^{5/2},&T\geq T_{c},\\ \kappa_{0}T_{c}^{5/2},&T<T_{c},\end{cases} \tag{7}\]
and the optically thin radiative losses are decreased:
\[\Lambda^{*}(T)=\begin{cases}\Lambda(T),&T\geq T_{c},\\ \Lambda(T)\left(\frac{T}{T_{c}}\right)^{5/2},&T<T_{c}.\end{cases} \tag{8}\]
Here, the asterisks represent the modified expressions implemented for our simulations. This will result in some broadening of the transition region, and this effect is greater for higher cut-off temperatures.
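As a minimal sketch (our own illustration, not the implementation in any of the codes cited above), the two L09 modifications can be written compactly; here, loss_curve stands in for the tabulated loss function \(\Lambda(T)\):

```python
import numpy as np

KAPPA_0 = 1e-11  # Spitzer-Harm coefficient [J m^-1 K^-7/2 s^-1], as in the text

def kappa_parallel_l09(T, T_c):
    # Equation (7): the conductivity is frozen at its cut-off value below T_c
    return KAPPA_0 * np.maximum(T, T_c) ** 2.5

def radiative_loss_l09(T, loss_curve, T_c):
    # Equation (8): optically thin losses are suppressed below T_c
    return np.where(T >= T_c, loss_curve(T), loss_curve(T) * (T / T_c) ** 2.5)
```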
The TRAC method, on the other hand, uses a variable cut-off temperature \(T_{c}\), which is continuously updated in response to simulation conditions. In particular, it sets \(T_{c}\) as low as possible such that sufficient numerical resolution is maintained in the transition region. As such, the TRAC method results in the minimal possible transition region broadening for a given numerical resolution. Full details of the implementation are described in Johnston and Bradshaw (2019) and Johnston et al. (2021). We note that, in general, the TRAC approach is typically preferable as it eliminates unnecessary transition region broadening and does not require a suitable cut-off temperature to be identified a priori. In this article, we consider four different treatments for the transition region; the unmodified Spitzer-Härm conduction (no temperature cut-off), the TRAC treatment and two L09 cases with fixed cut-off temperatures at \(T_{c}=2.5\times 10^{5}\) K and \(T_{c}=5\times 10^{5}\) K. These fixed temperature cut-offs are representative of values used in the existing literature (e.g. Howson and De Moortel, 2022; Van Damme et al., 2020). For brevity, we refer to these cases as SH, TRAC, L09a and L09b, respectively.
In addition to these numerical treatments, thermal conduction can be limited by the maximum conductive flux that the plasma is able to support (the free-streaming limit). This occurs when the energy-transporting particles are all travelling in the same direction at the electron thermal speed. In LareXd, this saturated flux, \(F_{s}\) is implemented as
\[F_{s}=\alpha nk_{B}Tv_{\rm th,\,e} \tag{9}\]
where \(\alpha\) is a user-imposed coefficient, \(n\) is the number density, \(k_{B}\) is the Boltzmann constant, \(T\) is the temperature and \(v_{\rm th,\,e}=\sqrt{k_{B}T/m_{e}}\) is the electron thermal speed (\(m_{e}\) is the electron mass). For the classically derived expression, where the limit is imposed by the thermal speed, a value of \(\alpha=1.5\) is required (e.g. Bradshaw and Cargill, 2006). However, it has been suggested that using \(v_{\rm th,\,e}/6\) is more appropriate (e.g. \(\alpha=1.5/6=0.25\)). In either case, for the relatively cool loops considered here, the maximum conductive flux is well below the saturation threshold and thus the choice of \(\alpha\) (between these two values) is moot. The choice would become significant, however, for much hotter plasma (e.g. during solar flares). For the current study, we selected the classical value, \(\alpha=1.5\).
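A minimal sketch of this saturation limit (our own illustration; the constants are standard SI values) is:

```python
import numpy as np

K_B = 1.380649e-23   # Boltzmann constant [J K^-1]
M_E = 9.1093837e-31  # electron mass [kg]

def saturated_flux(n, T, alpha=1.5):
    # Equation (9): free-streaming limit on the conductive heat flux,
    # with the classical coefficient alpha = 1.5 used in this study
    v_th_e = np.sqrt(K_B * T / M_E)  # electron thermal speed
    return alpha * n * K_B * T * v_th_e
```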
Figure 1: Initial conditions for the 256x (dashed) and 4096x (solid) resolution simulations with the SH (black), TRAC (blue), L09a (red) and L09b (green) thermodynamic treatments. The upper panels show the temperature profile over the whole loop (left) and zoomed in over the transition region (right). The crosses in the upper right hand panel reveal the simulation resolution (for the 256x case) by illustrating the location of the grid points. The lower left panel shows the logarithm of the density close to the lower foot point of the loop and the lower right panel shows the local Alfvén speed profile over the entire loop, for each model.
### Initial conditions and numerical relaxation
We followed standard techniques for modelling closed, curved coronal loops as straight magnetic field lines embedded in the solar photosphere at both magnetic foot points. We consider a field line of length \(l=120\) Mm (\(-60\) Mm \(\leq y\leq 60\) Mm) which is assumed to be aligned with the central axis of a coronal loop. The magnetic field is uniform, parallel with the \(y\) axis and has a field strength of 25 G. For simplicity, we neglect any expansion of the loop cross-section with height. We have implemented a range of numerical resolutions: \(\{256,512,1024,2048,4096\}\) grid points along the field line. We note that although we are only considering one-dimensional problems, for consistency with existing multi-dimensional models of the Sun's atmosphere, we elected to use a two-dimensional code. However, in this paper we assume perfect invariance across the magnetic field. The cross-field direction (\(x\) axis) is simulated with a nominal number of grid points (4) and periodic boundaries to maintain this invariance. The \(z\) direction is perfectly invariant but the magnetic and velocity fields are permitted to have non-zero components in this direction.
In order to mimic the curved geometry of a coronal loop, we define gravity to act in the \(y\) direction with a sinusoidal profile:
\[g_{\parallel}=g_{0}\sin\left(\frac{\pi y}{2y_{\rm max}}\right), \tag{10}\]
where \(g_{0}\simeq 274\) m s\({}^{-2}\) is the gravitational acceleration at the solar surface (\(y=\pm 60\) Mm). The acceleration points vertically downwards for \(y<0\) and upwards (towards \(y=y_{\rm max}\)) for \(y>0\). This field-aligned component of the gravity assumes that the loop has a semi-circular profile. We neglect any cross-field contribution due to gravity.
Coronal loops are embedded in the relatively cool and dense lower layers of the atmosphere (photosphere and chromosphere) and extend into the hot and tenuous corona. To include this in our initial conditions, we assume a temperature profile, \(T=T(y)\) defined by:
\[T(y)=T_{\rm Ch}+\frac{T_{\rm CO}-T_{\rm Ch}}{2}\left\{\tanh\left(\frac{y+a}{b }\right)-\tanh\left(\frac{y-a}{b}\right)\right\}, \tag{11}\]
where \(T_{\rm Ch}=2\times 10^{4}\) K is the chromospheric temperature, \(T_{\rm CO}=10^{6}\) K is the initial coronal temperature, \(a=55\) Mm controls the location of the transition region and \(b=0.5\) Mm controls the initial width of the temperature transition. In Bradshaw & Cargill (2013), the authors detail the numerical resolution requirements for accurately reproducing coronal evolution in gravitationally stratified one-dimensional solar atmospheric loops. As discussed in Sect. 1, spatially under-resolving the transition region will lead to lower evaporative upflows of dense plasma in response to heating, and ultimately, lower coronal densities. According to Table 1 in Bradshaw & Cargill (2013), for coronal loops with relatively cool apex temperatures (3 MK), a grid resolution of approximately 25 km in the transition region is sufficient. Given that the loops in our current study are cooler (1 MK), the resolution requirements are less stringent. As our highest resolution case has a grid size of 120 Mm/4096 \(\approx 29\) km, it is likely able to reproduce coronal behaviour accurately. For the lower resolution cases, however, accurate coronal apex densities and temperatures can only be reproduced with the modifications in thermodynamics discussed above.
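For reference, the imposed gravity and temperature profiles (equations 10 and 11) can be sketched as follows; the constants match the values quoted above:

```python
import numpy as np

G0 = 274.0     # surface gravitational acceleration [m s^-2]
Y_MAX = 60e6   # half-length of the loop [m]

def g_parallel(y):
    # Equation (10): field-aligned gravity for a semi-circular loop
    return G0 * np.sin(np.pi * y / (2.0 * Y_MAX))

def temperature(y, T_ch=2e4, T_co=1e6, a=55e6, b=0.5e6):
    # Equation (11): initial temperature profile [K]
    return T_ch + 0.5 * (T_co - T_ch) * (
        np.tanh((y + a) / b) - np.tanh((y - a) / b))
```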
To find the loop's initial density profile, we assumed hydrostatic force balance and solved the resulting differential equation using a Runge-Kutta scheme. Whilst this ensures the system is in pressure balance, the conduction, radiation and heating terms in the energy equation 3 mean it is not in thermodynamic balance. In order to achieve this for our initial equilibrium we then perform a numerical relaxation using the Lare2d code. In this article we set the background heating to be \(Q_{\rm bg.}=5\times 10^{-6}\) J m\({}^{-3}\) s\({}^{-1}\) in all cases, which generates apex temperatures of approximately 1.3 MK (see upper left panel of Fig. 1). We impose a large viscosity term to damp field-aligned flows until the system settles into a state of numerical equilibrium and we take this as the initial conditions for our simulations. This relaxation viscosity is then set to zero for the subsequent wave driving simulations (Sects. 3.3 & 3.4).
The resulting density and temperature profiles are shown in Fig. 1 for the 256x (dashed lines) and 4096x (solid lines) resolution cases with each of the thermodynamic treatments. The upper left panel shows the temperature profiles along the full loop length. We note that the majority of the curves are very similar, closely approximating the profile of the high resolution SH model (solid black line). The exceptions are the 256x SH case (dashed black line) and the two L09b simulations (green lines). The 256x SH case significantly underestimates the coronal density (see lower left panel and Bradshaw & Cargill 2013) and, as a result, coronal radiative losses will be much lower in this case (the losses scale with the square of the density, see equation 3). Even though energy losses in the corona are dominated by conduction, the reduction in the radiative cooling rate is sufficient to allow noticeably higher temperatures in this setup. Conversely, the thermodynamic treatment in the L09b simulations leads to significant over-broadening of the transition region (see green curves in right hand panel). This ultimately leads to the lower coronal temperatures, demonstrating that cut-off temperatures which are too high can be unhelpful for coronal loop models. However, such high cut-offs may remain necessary for very high temperature loops where the resolution requirements are even more demanding.
The effects of each treatment on the transition region are clear in the zoomed-in temperature profile displayed in the upper right panel of Fig. 1. At both resolutions (256x and 4096x) the SH treatments produce the steepest transition regions with broadening evident for all other cases. The broadening is almost independent of resolution for the fixed temperature cut-off cases (L09a in red and L09b in green), however, we see much more broadening in the low resolution TRAC case (dashed blue) than in the high resolution equivalent (solid blue). This is because the temperature cut-off adjusts in the TRAC treatment according to the numerical resolution and the properties of the loop (in order to ensure coronal evolution is modelled accurately). As the SH and TRAC transition regions are very similar for the 4096x case (compare solid blue and black curves), we will assume that this represents sufficient resolution for this loop and thus use the 4096x SH case (solid black) as a benchmark simulation representing the _correct_ evolution.
The lower left panel of Fig. 1 shows the logarithm of the plasma density in the bottom 20 Mm of the simulation domain. Again, this shows the effects of the thermodynamic treatment on the transition region broadening. Both SH cases (solid and dashed black lines) and the high resolution TRAC case (solid blue line) produce a steep transition region, where the density decreases by approximately two orders of magnitude over a very narrow layer. In all other simulations, this decrease occurs over a wider region, allowing for the gradients to be better resolved in the low resolution cases. We note that despite this broadening, all simulations (with the exception of the 256x SH case) exhibit similar density profiles in the corona (e.g. -40 Mm \(\leq y\leq\) 40 Mm). These density profiles are associated with variations in the local Alfven speed, \(v_{A}=B/\sqrt{\mu_{0}\rho}\), displayed in the lower right hand panel of Fig. 1. We note that the significantly lower coronal density attained in the 256x SH setup, produces much higher coronal Alfven speeds than in all the other cases. In comparison, all other models generate similar Alfven speed profiles, especially in the upper atmosphere (e.g. -40 Mm \(\leq y\leq\) 40 Mm). However, as we discuss
in detail below, even the relatively small differences that persist are sufficient to cause significantly different wave dynamics within each model.
### Boundary conditions
We generated Alfven waves within our model by transversely perturbing the magnetic field at the lower \(y\) boundary in the invariant \(z\) direction. We consider wave drivers of the form
\[v_{z}(t)=v_{0}\sin\omega t, \tag{12}\]
where \(v_{0}\) is a small amplitude to generate linear waves and \(\omega\) is the wave frequency. For the simulations described below, we use \(v_{0}=100\) m s\({}^{-1}\) which is less than 1% of the minimum Alfven speed in each simulation and much smaller than the coronal Alfven speeds (see lower right panel of Fig. 1). For the frequency, \(\omega\), we consider the natural frequencies of the system and this is discussed in more detail below. At the other foot point of the loop, we impose \(\mathbf{v}=\mathbf{0}\) for the velocity and a zero-gradient condition for all other quantities. This fixes a wave node at this upper boundary. As we are using a two dimensional MHD code, we also define the \(x\) boundaries to be periodic. This maintains the invariance in this direction.
## 3 Results
In order to illustrate why the different transition region treatments can have a large influence on Alfven wave dynamics (and hence, energy flux) in coronal loops, we begin by considering the Alfven travel time for each of our initial conditions. This represents how long it takes an Alfven wave front to propagate from the driven foot point to a particular height. In other words, we calculate the travel time \(\tau(y)\) as
\[\tau(y)=\int_{0}^{y}\frac{\mathrm{d}s}{v_{A}}, \tag{13}\]
where \(v_{A}\) is the local Alfven speed and \(s\) is an infinitesimal field line element. We show the function, \(\tau(y)\), for a variety of our initial conditions in Fig. 2. The left hand panel shows the profile over the entire length of the loop and, for clarity, the right hand panel shows \(\tau(y)\) close to the non-driven foot point. We see that the total travel time varies across the simulations from approximately 88 s (for the low resolution SH case) to around 116 s (for the high resolution L09b case). We note that here we have only displayed the lowest (256x) and highest (4096x) numerical resolutions and all other cases lie within the two extremes for any given thermodynamic treatment. We see that the right hand panel clearly demonstrates the significant differences between the Alfven travel times for the different resolutions and transition region treatments. There are two pertinent points which drive these differences. Firstly, the different Alfven speed profiles in each case (as discussed for the lower right panel of Fig. 1) naturally lead to different Alfven travel times. For example, the low coronal densities in the 256x SH simulation (dashed black line) result in higher Alfven speeds and shorter travel times. We also note that even in cases where the coronal Alfven speeds are very similar, differences in the wave speed in the transition region can result in discrepancies in the total travel time. For example, the SH and L09a treatments exhibit around a 10% difference in the travel time despite the relatively short length of the transition region (it is much less than 10% of the volume even in the broadened cases). Differences in Alfven speeds in the lower atmosphere can have a significant impact due to the low speeds and, hence, the relatively long time that the wave takes to propagate through this region.
Secondly, the numerical resolution can modify the travel time simply because the local Alfven speed is only known at a relatively low number of grid points. In practice, the integral in equation 13 is calculated as a discrete summation over the simulation grid points. Therefore, in the transition region, where the wave speed changes rapidly, the low resolution simulation does not track the travel time well. As a result, the low resolution curves (dashed lines) show different travel times to their higher resolution counterparts. This effect is clearest for the fixed temperature cut-off models (as the cut-off does not change as a function of resolution), L09a (red) and L09b (green curves) which show very similar density and Alfven speed profiles (see lower row of Fig. 1) but still produce different travel times. Using sub-grid interpolation methods for equation 13 would reduce this difference in the calculated Alfven travel times but this would not reflect the wave propagation as simulated by the Lare2d code.
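A minimal sketch of this discrete summation (our own illustration, assuming a simple midpoint average of the grid-point wave speeds, which need not match the Lare2d discretisation) is:

```python
import numpy as np

def travel_time(y, v_A):
    # Discrete form of equation (13): cumulative Alfven travel time at the
    # grid points y, using a midpoint average of the grid-point wave speeds
    ds = np.diff(y)
    v_mid = 0.5 * (v_A[1:] + v_A[:-1])
    return np.concatenate(([0.0], np.cumsum(ds / v_mid)))
```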
### Eigenfrequencies and eigenmodes
We can use the Alfven travel times calculated above, together with the WKB approximation, to provide estimates for the natural frequencies of the coronal loops. However, these will likely be inaccurate, especially for the fundamental mode and low number harmonics (e.g. Wright and Smith, 1990), due to the significant non-uniformity of the wave speed along the modelled loops. Instead, we calculate the eigenfrequencies and corresponding eigenmodes of the field lines by considering the wave equation for non-constant propagation speed:
\[\frac{\partial^{2}v_{z}}{\partial t^{2}}=v_{A}^{2}(y)\frac{\partial^{2}v_{z}} {\partial y^{2}}, \tag{14}\]
where \(v_{A}\) is the local Alfven speed displayed in the lower right panel of Fig. 1. Then, by considering non-trivial, oscillatory, separable solutions, \(v_{z}=Y(y)T(t)\), of this partial differential equation, we can express the spatial variation, \(Y(y)\) as
\[\frac{\mathrm{d}^{2}Y}{\mathrm{d}y^{2}}+\frac{\omega^{2}}{v_{A}^{2}(y)}Y=0, \tag{15}\]
where \(\omega\) is a real constant. The Alfven eigenmodes and corresponding eigenfrequencies are given by functions, \(v_{z}(y)\), and constants, \(\omega\), such that there are wave nodes (\(v_{z}=0\)) at the two field line foot points. In order to find these, we implement a shooting method to find numerical solutions of equation 15.
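A minimal sketch of such a shooting method is given below, assuming the Alfven speed is available as a tabulated profile on the simulation grid; the tolerances and function names are our own choices rather than the implementation used here.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def endpoint(omega, y, v_A):
    """Integrate equation 15 from the lower foot point with Y = 0, Y' = 1;
    eigenfrequencies are the omega for which Y vanishes at the far end."""
    vA = lambda s: np.interp(s, y, v_A)
    rhs = lambda s, Y: [Y[1], -(omega / vA(s))**2 * Y[0]]
    sol = solve_ivp(rhs, (y[0], y[-1]), [0.0, 1.0], rtol=1e-8, atol=1e-10)
    return sol.y[0, -1]

def eigenfrequencies(y, v_A, omega_grid):
    """Bracket sign changes of the end-point value on a trial frequency
    grid and refine each root with brentq."""
    f = [endpoint(w, y, v_A) for w in omega_grid]
    return [brentq(lambda w: endpoint(w, y, v_A), w0, w1)
            for w0, w1, f0, f1 in zip(omega_grid[:-1], omega_grid[1:], f[:-1], f[1:])
            if f0 * f1 < 0]
```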
In Fig. 3, we display the eigenmodes of the first seven harmonics for the 4096x SH (benchmark) simulation. For clarity, we have normalised the maximum of each curve to \(1-n/10\), where \(n\) is the harmonic number beginning with \(n=0\) for the fundamental mode, \(n=1\) for the first overtone, and so on. As such, the amplitude of each eigenmode is arbitrary. We note that due to the relatively low Alfven speeds in the chromosphere and lower transition region, for higher overtones, the majority of the wave nodes are located close to the two loop foot points. As such, low resolution simulations (e.g. 256x) will not be able to resolve the short wavelengths in the chromosphere for higher frequency modes. This is discussed in more detail in Sect. 3.4. We also note that, due to the high density at these low altitudes, the magnitude of the eigenmodes here is much smaller than in the coronal volume.
We note that the exact forms of these eigenmodes will be sensitive to a wide variety of factors, including but not limited to the relative sizes of the chromosphere and corona, loop curvature, loop
asymmetry and loop expansion. As such, they may not be representative of eigenmodes in real atmospheric loops. However, here we simply wish to compare the effects of the numerical resolution and thermodynamic treatment on the natural frequencies of the modelled loops. To this end, in the left hand column of Fig. 4, we display the eigenfrequencies for the first 7 harmonics (including the fundamental mode as the zeroth harmonic) for different numerical resolutions and thermodynamic treatments. In the upper three panels, we show how these vary for different numerical resolutions in the TRAC, L09b and SH cases, respectively. In each panel, we also include the eigenfrequencies for the 4096x SH simulation as a benchmark (dashed line). For clarity, the right hand column of Fig. 4 shows the relative error between each calculated eigenfrequency and the benchmark solution. It is clear that as the resolution is increased, the TRAC (first row) and SH (third row) treatments converge to the benchmark eigenfrequencies. However, for the resolutions typically attainable in large scale, 3D MHD codes (e.g. corresponding to 256 or 512 grid points in the loop-aligned direction), the error in the frequency calculation can be approximately 40%. At low resolutions, all thermodynamic treatments produce poor estimates of the eigenfrequency calculated for the fully resolved loop (particularly for higher harmonics). The relative error for each of these cases will depend on how important the transition region is for determining the eigenfrequency of a given loop. In particular, the relative accuracy may depend on loop lengths and temperatures and the depth of the chromosphere. We also note that for the L09b treatment (second row), whilst the eigenfrequencies do converge at high resolution, they do not converge to the benchmark case. This is easiest to see at higher harmonics. This behaviour arises because the fixed temperature cut-off consistently over-broadens the transition region even at high resolutions. This phenomenon is significantly reduced (albeit still present, particularly for higher harmonics) in the L09a case due to the reduced broadening of the transition region.
In the bottom row of Fig. 4, we display the effects of the thermodynamic treatment on the calculated eigenfrequencies (left panel) for the two extreme resolutions (256x, dashed lines; 4096x, solid lines) and the relative error (compared to the benchmark solution). At low resolution, we see that the choice of thermodynamic treatment has significant implications for the natural frequencies of the system (particularly at higher harmonics) and, as discussed before, that no choice accurately reproduces the benchmark solution. At high resolution, however, all methods (except L09b) reproduce the eigenfrequencies reasonably accurately (with small errors at high harmonics for the L09a case, due to the fixed transition region broadening). This is not a particularly surprising result; modifying the Alfven speed profile in the lower atmosphere will certainly impact on the natural frequencies of the modelled loop. However, this can have important consequences for energy flux in solar atmospheric models as different resonances can be excited in each of these setups. In Sect. 3.4, we will consider whether this can lead to systematic errors for energy injection rates in general coronal heating models. However, as a brief aside,
Figure 3: Eigenmodes of the fundamental (black) mode and first (purple), second (dark blue), third (light blue), fourth (green), fifth (orange) and sixth (red) overtones. These are eigenmodes of equation 15 calculated for the 4096x SH model.
Figure 2: Alfvén travel time, \(\tau\left(y\right)\) (see equation 13), for the 256x (dashed lines) and 4096x (solid lines) resolutions with the SH (black), TRAC (blue), L09a (red) and L09b (green) thermodynamic treatments. The left-hand panel shows the travel time over the whole loop and, for clarity, the right hand panel restricts the domain to the region close to the upper \(y\) boundary. We note that the 4096x SH curve (solid black) is not visible as it almost exactly coincides with the 4096x TRAC curve (solid blue).
Figure 4: _Left column_: Eigenfrequencies of the fundamental (zeroth harmonic) and higher overtones for loops modelled with different numerical resolutions and thermodynamic treatments. In the upper three panels, we show the effects of resolution on loops modelled with the TRAC, L09b and SH treatments, respectively. For these panels, the dashed black line shows the 4096x SH frequencies which we use as a benchmark. The colour used for each resolution is consistent across these panels. The bottom left panel shows the effects of the thermodynamic treatment on the eigenfrequencies in the 256x (dashed lines) and 4096x (solid line) models. _Right column_: The relative error between the curves in the adjacent panels in the left-hand column and the benchmark case (4096x SH).
we will first consider implications of these results for seismological inversions.
### Seismological implications
At the broadest level, coronal seismology uses the properties of observed waves to deduce properties of the solar corona that cannot be measured directly (e.g. see reviews by Nakariakov and Verwichte, 2005; Andries et al., 2009; Ruderman and Erdelyi, 2009; Arregui et al., 2012; De Moortel and Nakariakov, 2012; Nakariakov and Kolotkov, 2020). It is a well-developed field and has been used to provide estimates of coronal magnetic field strengths (e.g. Nakariakov and Ofman, 2001; Soler et al., 2010), the transverse density structuring of loops (e.g. Goddard et al., 2017; Pascoe et al., 2017; Goddard et al., 2018; Pascoe et al., 2018) and, through the use of Bayesian inference, to provide evidence for and against different solar atmospheric models (e.g. Arregui and Asensio Ramos, 2011; Montes-Solis and Arregui, 2019; Arregui, 2022). These studies demonstrate how expected wave behaviour (e.g. propagation speed, damping rates) derived from mathematical models can be used to deduce unknown parameters (e.g. field strength) from solar observations. However, these methods can have large uncertainties associated with them, not least because it is often difficult to definitively identify what wave mode is being observed.
As discussed in the previous section, the lower atmosphere can play a very important role in establishing the natural frequency of these magnetic structures. They are not simply coronal loops, but are embedded in the transition region and chromosphere too. If wave nodes are established at the upper transition region, then assuming standing oscillations are purely coronal is likely a good approximation. This may be a reasonable view of oscillations excited impulsively in the corona (e.g. from a nearby solar flare Nakariakov et al., 1999; Li et al., 2023). In this paradigm, the large density gradients in the transition region may act as effective reflectors of wave energy, effectively forming wave nodes in the upper transition region. However, if these standing waves are driven by oscillatory flows in the chromosphere/photosphere (as we are assuming in this article and are often invoked for driving decayless oscillations, e.g. Nistico et al., 2013; Anfinogentov et al., 2015; Zhong et al., 2022; Petrova et al., 2023), then the contribution of the lower atmosphere to the natural frequencies must be accounted for.
As a very simple example of this idea in practice, let's suppose we observe a fundamental standing Alfven wave (in reality we may be more likely to observe a kink mode but the same argument applies). For our setup, we may expect to observe a frequency of approximately 0.0563 s\({}^{-1}\) (the fundamental frequency for the 4096x SH model). However, these waves will typically be interpreted as coronal-only oscillations. Thus, if we instead consider the eigenfrequencies of a coronal-only system (using equation 15), then we find a fundamental frequency of 0.0947 s\({}^{-1}\). Indeed this is a similar value to the frequency of the second overtone of the full system which appears to have wave nodes at the base of the corona (see dark blue line in Fig. 3). As the observed frequency can be used to estimate the magnetic field strength, we will obtain an estimate that is 0.0947/0.0563 \(\approx\) 1.7 times too big. This simple calculation does not consider inaccuracies in numerical wave modelling due to the thermodynamic treatments discussed throughout this article. However, despite this, the simple argument highlights an important consideration for seismological inversions; the location of the nodes for observed standing modes must be clearly identified in order to understand the true natural frequencies of the oscillating structure.
### Modelling propagating waves
Returning to the effects of the thermodynamic model on simulated wave dynamics, we now consider the case of a wave that is continuously driven from the lower \(y\) boundary. Whereas in Sect. 3.1 we considered a purely analytical description, here we now model the propagation of Alfven waves using the Lare2d code. We impose a driver as described in equation 12 with frequency, \(\omega=0.069\) s\({}^{-1}\). We note that this is comparable to the magnitude of the fundamental frequency in each case but is non-resonant.
In Fig. 5, we display the wave velocity (\(v_{z}\)) as a function of position along the loop for the four high resolution (4096x) simulations. The left hand panel shows the wave at an early time (\(t\approx 60\) s), before the propagating wave fronts have reached the opposite foot point. The right hand panel, on the other hand, is much later (\(t\approx 550\) s), after several reflections have occurred. In both panels, we have normalised all curves by the maximum velocity obtained in the L09b simulation and the solid black line corresponds to the benchmark (4096x SH) solution. We see that at early times (left hand panel), the TRAC treatment (blue) provides very good agreement with the benchmark case. This is unsurprising given that TRAC produces minimal transition region broadening at this high resolution (compare solid blue and black lines in the upper right panel of Fig. 1). The L09a (red) and L09b (green) cases, on the other hand, both show lower mean propagation speeds, with the effect greatest for the more significant broadening in the L09b case. This is in agreement with the longer travel times for these setups shown in Fig. 2. We also note that they both exhibit larger amplitudes, suggesting that more energy is transmitted into the corona with these treatments. We will assess this point in more detail in Sect. 3.4. The small scale oscillations that follow the leading wave front in all cases are associated with wave reflections excited as the front passes through inhomogeneities in the local Alfven speed (e.g. Asgari-Targhi et al., 2021; Pascoe et al., 2022).
In the right hand panel, we see that at later times there is little agreement between the L09a, L09b and benchmark solutions. After a few reflections, the relatively small differences visible in the left panel compound to produce very different waves. Once again, the L09b case has the largest amplitude, suggesting that energy is injected into the corona more efficiently in this case. The TRAC treatment, on the other hand, still reproduces the benchmark solution with reasonable accuracy. However, there are two important caveats to note here. Firstly, as time progresses and more wave reflections take place, the differences between the TRAC and benchmark solutions will become increasingly pronounced. Furthermore, this favourable result for the TRAC method is less applicable at lower resolutions, where the transition region broadening will be significant in the TRAC case too. In general, the TRAC treatment is beneficial because it reduces broadening when possible. However, if the resolution or loop parameters are such that significant broadening is required (e.g. for high temperature loops), then the TRAC case will perform as poorly as the fixed temperature cut-offs.
In Fig. 6, we provide a measure of the accuracy of the models produced by each thermodynamic treatment in the 4096x (upper) and 512x (lower) resolution cases. In particular, at every time output from the simulation, we measure the correlation (with the Pearson correlation coefficient) between the wave velocity (\(v_{z}\)) in each model and the benchmark solution (4096x SH). A correlation of 1 indicates a perfect match between the solutions, a correlation of 0 indicates there is no relationship between solutions and a score of -1 indicates the solutions are perfectly out-of-phase. In practice, a correlation of close to 1 indicates a good match. For the 4096x case, unsurprisingly we see
that the TRAC solution with minimal transition region broadening produces a good match with the benchmark. As explained above this match steadily worsens as time progresses. In general we see that reducing the modification of the thermodynamics is beneficial at high resolution and thus the L09a case performs better than the L09b simulation (with both worse than the TRAC case). Meanwhile, in the lower panel of Fig. 6, we see that for the lower resolution cases (although this still represents a high resolution in terms of large scale 3D simulations), all thermodynamic treatments produce a poor match with the benchmark solution. The reduced broadening cases (TRAC, blue and L09a, red) produce marginally better results than the SH (black) and L09b (green) cases but in general the benchmark solution is not reproduced.
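For reference, once both solutions are resampled onto a common grid the measure used here reduces to a few lines; the array arguments in the sketch below are hypothetical simulation outputs.

```python
import numpy as np

def wave_correlation(y_model, vz_model, y_bench, vz_bench):
    """Pearson correlation between a model's v_z profile and the benchmark's
    at one output time, after resampling onto the benchmark grid so that
    runs of unequal resolution can be compared."""
    vz_resampled = np.interp(y_bench, y_model, vz_model)
    return np.corrcoef(vz_resampled, vz_bench)[0, 1]
```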
These results are concerning given that this resolution is representative of that used in state-of-the-art models of the solar atmosphere. That said, we know that many of our models do not accurately reproduce dynamics in the lower atmosphere anyway (e.g. due to neglecting partial ionisation, radiative transfer etc.). For some studies, coronal modellers may take the view that we know there is wave energy in the corona, so, as long as we model waves with the correct amplitudes/wavelengths etc., understanding precisely how waves are injected into the corona is unimportant in comparison to the dynamics (e.g. phase mixing, resonant absorption, Kelvin-Helmholtz instability) that occur as they propagate within the corona. However, increasingly coronal models are being driven by photospheric motions (e.g. through self-consistent convection or through drivers imposed below the chromosphere). Additionally, precisely tracking the energy flux through the lower atmosphere is important for understanding the energetics of the atmosphere as a whole. Indeed if the thermodynamic treatments permit an artificially high (or low) energy flux into the corona, then we will obtain incorrect heating rates, for example. There is also a more subtle issue. If the different transition region treatments are associated with errors of different magnitudes for different driver types, then comparing the effects of different drivers becomes problematic. In particular, if a low frequency driver is relatively unaffected by these errors in comparison to a high frequency driver, then making a fair comparison is challenging. In Howson et al. (2020) and Howson & De Moortel (2022), the authors presented comparisons of heating associated with long (DC) and short (AC) time scale driving and found DC driving to be more effective for heating the corona. The latter study used the L09a treatment and thus it is important to establish whether the energy injection rates are artificially enhanced for the different thermodynamic treatments and whether the DC or AC driving is affected more.
### Broadband driving
In order to explore this point in more detail, we will now turn our attention to a broadband wave driver to establish whether there are systematic errors in the energy injection rates. In the previous sections, we have discussed how the numerical treatment of the TR modifies the system in response to continuous transverse driving at a single frequency. However, it is unlikely that solar photospheric motions oscillate with a fixed frequency for long periods of time and thus it is important to quantify the impact of the TR treatment on energy flux for waves driven with a broadband driver. To this end, we consider drivers, \(v_{z}(t)\) defined by
\[v_{z}(t)=\sum_{i=1}^{N}u_{0}\sin\left(\omega_{i}t+\psi_{i}\right)\,. \tag{16}\]
These broadband wave drivers are defined as the sum of \(N\) sinusoidal components with different frequencies, \(\omega_{i}\) and phase shifts \(\psi_{i}\). These phase shifts are randomly selected from a uniform distribution on \([0,2\pi]\). For this article we take \(N=50\) and we restrict our consideration of the frequency space to a range between two cut-off frequencies, \(\omega_{\rm min}\) and \(\omega_{\rm max}=3\omega_{\rm min}\). In particular, the \(i^{\rm th}\) frequency is defined as
\[\omega_{i}=\frac{i}{N}\left(\omega_{\rm max}-\omega_{\rm min}\right)+\omega_{ \rm min}. \tag{17}\]
We consider two different frequency ranges, to represent low and high frequency broadband drivers defined by \(\omega_{\rm min}=0.8\omega_{f}\approx 0.041\) s\({}^{-1}\) and \(\omega_{\rm min}=3.2\omega_{f}\approx 0.164\) s\({}^{-1}\). Here, \(\omega_{f}\approx 0.0513\) s\({}^{-1}\) is the fundamental frequency in the 4096x SH loop (see Fig. 4). As the amplitude of each component, \(u_{0}\), is a constant, the power in the broadband driver is independent of frequency over the range
Figure 5: Transverse velocity, \(v_{z}(y)\), excited by a continuous, high frequency, sinusoidal wave driver imposed at the lower \(y\) boundary. We show results from the 4096x resolution simulations for the SH (black), TRAC (blue), L09a (red) and L09b (green) cases. The left panel shows an early time, \(t\approx 60\) s, and the right panel shows a later time, \(t\approx 550\) s, after several wave reflections have occurred. The solid black line (SH) is the benchmark result.
\([\omega_{\rm min},\omega_{\rm max}]\). As such, within our frequency range, the wave driver has a white noise profile. The temporal profile of the driver is shown in Fig. 7.
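A short sketch of how equations 16 and 17 generate such a driver is given below; the random seed and unit amplitude are arbitrary choices, and the fundamental frequency is taken from the value quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)       # arbitrary seed
N, u0 = 50, 1.0                      # number of components; arbitrary amplitude
omega_f = 0.0513                     # 4096x SH fundamental frequency, s^-1
omega_min = 0.8 * omega_f            # low frequency driver (3.2*omega_f for high)
omega_max = 3.0 * omega_min
i = np.arange(1, N + 1)
omega_i = i / N * (omega_max - omega_min) + omega_min   # equation 17
psi_i = rng.uniform(0.0, 2.0 * np.pi, N)                # random phase shifts

def v_z(t):
    """Broadband driver of equation 16: N equal-amplitude sinusoids,
    giving a flat (white noise) power profile on [omega_min, omega_max]."""
    t = np.atleast_1d(t)
    return (u0 * np.sin(np.outer(t, omega_i) + psi_i)).sum(axis=-1)
```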
From the discussion outlined above on Figs. 5 & 6, we know that the simulated wave dynamics will be different for each thermodynamic treatment and numerical resolution. However, the important question here is whether more or less energy is injected into the corona for some treatments/resolutions in comparison to the benchmark case. In Fig. 8, we show the change in the energy density, \(E(t)\), within the range \(-40\text{ Mm}\leq y\leq 40\text{ Mm}\) as a function of time for high (4096x, upper panels) and low (512x, lower panels) resolution simulations. For this analysis we have calculated,
\[E(t)=\int_{y=-40~{}{\rm Mm}}^{y=40~{}{\rm Mm}}\left(\frac{B^{2}}{2\mu_{0}}+ \frac{\rho v^{2}}{2}+\frac{P}{\gamma-1}+\rho\Phi\right)~{}{\rm d}y, \tag{18}\]
where the terms in the integrand are the magnetic, kinetic, internal and gravitational potential energies, respectively. For each case, we subtracted the initial value to show the change in energy density. The \(y\) range for the integral ensures that we restrict our attention to the coronal subvolume of the domain. The left and right columns correspond to the low and high frequency broadband drivers, respectively. We show the results from the SH (black), TRAC (blue), L09a (red) and L09b (green) simulations, with all curves normalised by the maximum energy content attained in the high frequency, high resolution L09b case.
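A sketch of this diagnostic is below; the snapshot arrays are assumed inputs on the simulation grid (with consistent, e.g. SI, units), and the trapezoidal quadrature is our own choice.

```python
import numpy as np

MU0 = 4e-7 * np.pi     # vacuum permeability (SI)
GAMMA = 5.0 / 3.0      # adiabatic index

def coronal_energy(y, B, rho, v, P, Phi, y_min=-40.0, y_max=40.0):
    """Equation 18: integrate the magnetic, kinetic, internal and
    gravitational potential energy densities over the coronal subvolume."""
    m = (y >= y_min) & (y <= y_max)
    e = B[m]**2 / (2 * MU0) + rho[m] * v[m]**2 / 2 \
        + P[m] / (GAMMA - 1) + rho[m] * Phi[m]
    # trapezoidal rule over the masked grid
    return np.sum(0.5 * (e[1:] + e[:-1]) * np.diff(y[m]))
```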
In the high resolution, low frequency case (upper left panel), we see that the TRAC simulation (blue) reproduces the coronal energy content of the benchmark case (black curve) well. The L09a and L09b treatments provide successively worse estimates, with the coronal energy content being overestimated by 5x at some points during the L09b case. This shows that artificially broadening the transition region permits a greater average flux of wave energy into the corona. This is because the reduced Alfven speed gradients in the broadened cases reduce the efficiency of wave reflections back into the lower atmosphere. For the high frequency, high resolution cases (upper right panel), we see that the TRAC method still reproduces the benchmark solution well, however, the L09a case now provides an estimate as poor as the L09b simulation. The higher frequency waves have shorter wavelengths which are more liable to reflect in the transition region and hence even the weak broadening produces significant overestimations in the energy flux. This shows that the overestimation of energy injection rates can also be frequency dependent.
In the lower resolution simulations (lower panels), we see that the mean energy injection rate is lower than in their high resolution counterparts. This effect is particularly profound for the high frequency cases (lower right panel). This is largely a consequence of relatively high numerical dissipation rates in the lower atmosphere due to the relatively low resolution and short wavelengths (particularly for the high frequency waves). This point notwithstanding, we now see different energy injection rates for all simulations and the TRAC cases no longer coincide with the SH results (or indeed with the high resolution benchmark). For both frequency drivers, we see that the wave energy in the coronal volume at any given point is very sensitive to the particular thermodynamic treatment. As with the high resolution cases, there is a tendency for broadened transition regions to permit enhanced energy flux into the corona and again these results are frequency dependent. These results show that accurately modelling MHD waves as they propagate through the lower atmosphere and into the corona is extremely challenging, particularly for high frequency modes.
Figure 6: Pearson correlation between the wave velocity in the benchmark solution (4096x SH) and in the models with other thermodynamic treatments. The panels show how the correlation changes as a function of time at high resolution (4096x; upper panel) and at low resolution (512x; lower panel). The black curve (SH case) is not included in the upper panel because this would be measuring the correlation between one result and itself.
Figure 7: Temporal variation of the wave velocity imposed at the lower \(y\) footpoint by the low frequency broadband driver.
## 4 Discussion and Conclusions
In this article, we have considered how numerical resolution and a variety of thermodynamic treatments (Lionello et al., 2009; Mikic et al., 2013; Johnston & Bradshaw, 2019; Johnston et al., 2021) can modify the flux of energy from the lower solar atmosphere into the corona. As they are well understood mathematically, we have used Alfven waves with a range of frequencies as a proxy for energy injection mechanisms. We have shown that the Alfven travel times, eigenmodes and eigenfrequencies, and energy injection rates are all highly sensitive to the resolution and thermal conduction models used within simulations. Additionally, we have highlighted the importance of the lower atmosphere on seismological inversions when using wave modes excited beneath the transition region.
Increasingly, numerical models of the solar atmosphere are including each of the distinct layers (photosphere, chromosphere, transition region, corona) within the simulation volume. These contemporary studies treat the physics of the lower atmosphere with varying degrees of completeness. For example, simply treating the chromosphere as a mass reservoir (e.g. Van Damme et al., 2020; Reid et al., 2021; Cozzo et al., 2023; Skirvin et al., 2023), or including more accurate chromospheric physics such as radiative transfer (e.g. Battaglia et al., 2021; Nobrega-Siverio & Moreno-Insertis, 2022; Hansteen et al., 2023; Martinez-Sykora et al., 2023). Inevitably, these different approaches will lead to different energy densities at the top of the chromosphere. However, the nature of the transition region can then have further consequences for the amount of energy reaching the corona. We have shown that the choice of numerical treatment for thermodynamics in the transition region will modify the mechanical energy at higher altitudes. Furthermore, and perhaps more alarmingly, we have also shown that these treatments are associated with energy injection errors which depend on the frequency of the driver. In light of this, significant care is required when undertaking direct comparisons of coronal heating driven by different photospheric convection profiles (e.g. long vs short time scale driving in Howson et al., 2020; Howson & De Moortel, 2022).
In the present study, we have considered loops that are relatively easy to model numerically. In particular, they are not especially hot (e.g. Wang et al., 2003) and they are not dynamic. As such, our negative findings may be even worse in many other situations. For
Figure 8: Energy content in the coronal portion of the domain as a function of time for low (left panels) and high (right panels) frequency drivers. We show results from the SH (black), TRAC (blue), L09a (red) and L09b (green) simulations at high (4096x, upper panels) and low (512x, lower panels) resolution. We have normalised all curves to the maximum energy content in the high frequency, high resolution L09b simulation.
example, hot loops, with apex temperatures several times larger than in our models, are commonplace in active regions, and are likely important for identifying heating events (e.g. Bradshaw & Klimchuk, 2011). These loops require higher numerical resolution or enhanced transition region broadening (either automatically with TRAC or with higher fixed temperature cut-offs for the L09 approach), and as such will likely exacerbate our negative results. It is also important to reiterate that short wavelengths in the chromosphere (due to the relatively low Alfven speeds) can lead to significant wave damping and reduced energy transmission, particularly for high frequencies. This may be unimportant for studies interested in wave propagation in the corona, however, as discussed above, this does have implications for comparing the effects of different photospheric drivers on coronal heating.
It may seem that waves reflected at the transition region may have further opportunities to be transmitted into the corona following second (and subsequent) reflections at the driven boundary. However, whilst this is an important and subtle point, this effect does not necessarily permit enhanced coronal energy injection over long time periods. In our simulations, upon returning to the driven boundary, reflected waves interact with the imposed velocity field. This modifies the Poynting flux injected into the domain and can have important consequences for the energetics of the system. If the reflected waves are in-phase with the wave driver, then resonances will be excited and the Poynting flux will increase. However, for non-broadband drivers this is unlikely, and the waves will typically be non-resonant. If non-resonant reflected waves have the same amplitude as the driver, then, on average, the Poynting flux will be equally positive and negative over the course of a wave period. In such a case, there will be no net energy injection after the first reflected waves reach the boundary. However, typically the reflection coefficient at the transition region will be less than unity and the reflected waves will have lower amplitudes. In this case, the driver will still inject energy into the system but at a reduced rate. As such, it may be misleading to think that the reflected waves have multiple attempts to propagate into the corona as they can also reduce the rate at which energy is injected into the system.
On the basis of our results, we recommend the use of the TRAC method as the most suitable treatment for resolutions currently attainable in large scale, multi-dimensional coronal heating models. As the TRAC approach is associated with the minimum possible broadening (for any given loop and numerical resolution), it will generally produce the smallest errors in the simulated wave dynamics. That said, at attainable numerical resolutions, it will still provide a poor match to a fully resolved benchmark case, particularly for high frequency waves. At this stage, we offer no concrete solution to these issues and believe an appropriate first step will be accurately quantifying the effects of the thermodynamic treatments for a variety of loop lengths and geometries, magnetic field strengths, wave modes (e.g. slow waves, kink waves) and also for long timescale driving (e.g. DC heating mechanisms). Only then will we be able to determine how significant these issues are for contemporary three-dimensional solar atmospheric modelling.
## Acknowledgements
The research leading to these results has received funding from the UK Science and Technology Facilities Council (consolidated grant ST/S000402/1). The authors would like to thank Dr J Reid for his help and comments during the preparation of this manuscript. Finally, the authors would also like to thank the anonymous referee for considering our work and providing helpful suggestions to improve our article.
## Data Availability
The data from the numerical simulations and analysis presented in this paper are available from the corresponding author upon reasonable request.
|
2309.07218 | High Capacity Noisy Unruh--DeWitt Quantum Channels with Bosonic
Dephasing | Unruh--DeWitt (UDW) detectors implemented as UDW quantum gates provide a
framework for evaluating quantum Shannon theory properties of qubit-field
systems. UDW quantum channels consist of qubits encoding/decoding quantum
information onto/off of quantum fields. With the controlled unitary structure
of UDW gates, the encoding/decoding process happens on the diagonals of the
coherent state density matrix describing the field. However, given the
non-orthogonality of coherent states the output of UDW channels consists of
unwanted states and unwanted mixing of states that lower the channel capacity.
In idealized models, these appear in the off-diagonals and diagonals of the
field's density matrix in the coherent state basis. For this reason, we show
that UDW quantum channels have an unexpected representation as certain bosonic
dephasing channels with dephasing parameters captured by a combination of the
coupling, smearing, and switching functions of the UDW detector model. We
demonstrate the unexpected consequence that a larger dephasing parameter
results in higher channel capacity and helps alleviate unwanted state mixing.
We illustrate these properties through two examples: inserting an additional
ideal dephasing channel into the quantum channel and inserting cross-talk noise
via a third UDW gate. Remarkably, the cross-talk noise channel qualitatively
improves a lower bound on the quantum capacity suggesting UDW gates will have
unexpected performance improvements if realized in condensed matter
experiments. | Eric Aspling, Michael Lawler | 2023-09-13T18:00:01Z | http://arxiv.org/abs/2309.07218v1 | # High Capacity Noisy Unruh-DeWitt Quantum Channels with Bosonic Dephasing
###### Abstract
Unruh-DeWitt (UDW) detectors implemented as UDW quantum gates provide a framework for evaluating quantum Shannon theory properties of qubit-field systems. UDW quantum channels consist of qubits encoding/decoding quantum information onto/off of quantum fields. With the controlled unitary structure of UDW gates, the encoding/decoding process happens on the diagonals of the coherent state density matrix describing the field. However, given the non-orthogonality of coherent states the output of UDW channels consists of unwanted states and unwanted mixing of states that lower the channel capacity. In idealized models, these appear in the off-diagonals and diagonals of the field's density matrix in the coherent state basis. For this reason, we show that UDW quantum channels have an unexpected representation as certain bosonic dephasing channels with dephasing parameters captured by a combination of the coupling, smearing, and switching functions of the UDW detector model. We demonstrate the unexpected consequence that a larger dephasing parameter results in higher channel capacity and helps alleviate unwanted state mixing. We illustrate these properties through two examples: inserting an additional ideal dephasing channel into the quantum channel and inserting cross-talk noise via a third UDW gate. Remarkably, the cross-talk noise channel qualitatively improves a lower bound on the quantum capacity suggesting UDW gates will have unexpected performance improvements if realized in condensed matter experiments.
## I Introduction
Relativistic Quantum Information (RQI) has benefited tremendously by introducing Unruh-DeWitt (UDW) detectors modeling qubit-field interactions. Among these benefits is the introduction of RQI to quantum Shannon theory [1; 2; 3; 4; 5; 6]. Often, quantum channels that utilize UDW gates involve encoding (decoding) quantum information onto (off of) fields using coherent states to represent the energy levels of the field. The non-orthogonality of coherent states can lead to information scrambling when the field is traced out as unwanted states carry non-zero weights. Conveniently, the parameters of the UDW model can be constrained to provide experimentally plausible quantum computers [7]. One particular constraint, that of strong coupling, is reminiscent of the role the dephasing parameter has in bosonic dephasing channels. As we will see, these are the same.
Encoding and decoding information onto and off of fields has been theoretically demonstrated with great success over the past decade. The coupling, smearing, and switching functions of the UDW model provide the necessary parameters to fine-tune the interactions between the qubits and fields, allowing for near-perfect quantum capacity in limits of strong coupling [2; 7]. The ability to demonstrate other quantities of Shannon theory, such as the impact of noise, will further solidify the current and future role UDW gates play in quantum computing and condensed matter systems.
Dephasing channels [8] represent the effects of noise on the off-diagonals of any input density matrix. Dephasing channels can be used to model decoherence as well as other effects that realize this prescription. Bosonic dephasing channels [9; 10; 11; 12; 13] have been the center of an open problem for more than a decade [12] and now with the recent advances in RQI, can be used to model decoherence of quantum channels between qubits and fields. The key connection here is the usage of coherent states to model the excitations of fields in the UDW quantum gate model.
Coherent states [14] are powerful tools for understanding bosonic channels, and in the setting of UDW gates, they provide the encoded states for carrying out state-transfer between qubits and fields [2; 4; 6]. Given the non-orthogonality of coherent states, the coupling, smearing, and switching parameters are designed to constrain the inner products of the coherent states. This is a matter of consequence for tracing over the field after the quantum information has been transmitted from one qubit to another (via the field). In the weak coupling regime, these parameters provide a channel capacity near zero. **For this reason, we require strong coupling where the unwanted terms go to zero in the limit of infinite coupling strength.**
Unruh-DeWitt quantum computing [7] is a new field that aims to understand entanglement propagation in quantum materials. As with any quantum computing setup, modeling noise is a key component of the theory. Understanding UDW channels from a bosonic dephasing channel perspective not only indicates where the noise comes from but provides an avenue for numerically modeling noise and subsequently offers additional solutions.
Therefore, we find several areas of these parameters worth exploring in the context of bosonic dephasing channels. Firstly, the current method of fine-tuned parameters fits nicely into the context of a "dephasing rate". The constant associated with the dephasing rate, commonly denoted as \(\gamma\) is some prefactor of the coherent amplitude. As we will show, this prefactor exists in the
coupling, smearing, and switching functions of the UDW detector model. Adjustments to these parameters explicitly change the coherent information of a given qubit-field UDW channel, as one would expect of any dephasing rate constant. Secondly, with this new perspective in place, noise via similar interactions, as well as interactions with the environment, should be a calculable quantity. We show that in the context of the idealized canonical bosonic dephasing channel, one where the system interacts with the environment, no such change to the quantum information is present. This lack of effect is a consequence of the input already being constrained to near-zero valued off-diagonals, so any additional constraints to the off-diagonals are negligible. Lastly, we demonstrate with a given probability distribution the ability to calculate the coherence of a noisy UDW quantum channel, where the noise is a consequence of the addition of cross-talk noise that follows from unwanted interactions with unknown detectors.
## II UDW quantum channels
The overlap between the communities of quantum Shannon theory and RQI is minimal. Namely, the overlap consists mostly of those working directly with UDW detectors in RQI. For those familiar with this setup, feel free to skip to Sec. III. For the rest, we take this section to outline the setup of the UDW detector channel and explain the constraints that will be utilized when recasting our UDW channel from a dephasing perspective.
### Coherent States
A 1-D scalar field \(\hat{\varphi}(x)\) and its associated conjugate momentum \(\hat{\Pi}(x)\) are given as
\[\hat{\varphi}(x) =\int\frac{dk}{2\pi}\sqrt{\frac{v}{2\omega(k)}}[\hat{a}(k)e^{ikx} +\hat{a}^{\dagger}(k)e^{-ikx}] \tag{1}\] \[\hat{\Pi}(x) =\int\frac{dk}{2\pi}\sqrt{\frac{\omega(k)}{2v}}[-i\hat{a}(k)e^{ ikx}+i\hat{a}^{\dagger}(k)e^{-ikx}]. \tag{2}\]
In this continuous space, these field observables can be expressed as displacement operators \(D[\alpha(x)]\) with a point-wise coherent amplitude
\[\alpha_{\varphi}(x)=\sqrt{\frac{v}{2\omega(k)}}e^{-ikx}. \tag{3}\]
For convenience we have introduced subscripts to differentiate between coherent amplitudes of the field and conjugate momentum observables. A Fourier transform following the usual prescription
\[f(k)\coloneqq\frac{1}{\sqrt{(2\pi)}}\int dxf(x)e^{ikx} \tag{4}\]
allows us to express a continuous displacement operator of a scalar bosonic field [15] as
\[\exp\left(\pm i\hat{\varphi}\right)\left|0\right>=\hat{D}[\alpha (k)]\left|0\right>\\ =\exp\left(\int dk\left[\alpha(k)\hat{a}_{k}^{\dagger}-\alpha(k) ^{*}\hat{a}_{k}\right]\right)\left|0\right>\equiv\left|\pm\alpha\right>. \tag{5}\]
The final equivalence is a notational convenience we will adopt in Sec. IV when carrying out simulations. This notational convenience can be understood as a single-mode excitation of a continuous spectrum. For now we proceed with the more general form.
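In this single-mode reading, the states \(|\pm\alpha\rangle\) are simply displaced vacua. A minimal sketch using QuTiP (the library choice, truncation and amplitude are our own assumptions) is:

```python
from qutip import basis, coherent, displace

N = 40                                 # Fock-space truncation for one mode
alpha = 0.8                            # illustrative amplitude
vac = basis(N, 0)
ket_plus = displace(N, alpha) * vac    # D[alpha]|0> = |+alpha>
ket_minus = displace(N, -alpha) * vac  # D[-alpha]|0> = |-alpha>
# the displaced vacuum coincides with the coherent state of the same amplitude
assert abs(ket_plus.overlap(coherent(N, alpha))) > 0.999
assert abs(ket_minus.overlap(coherent(N, -alpha))) > 0.999
```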
These field observables follow the equal-time canonical commutation relations \([\hat{\varphi}(x),\hat{\varphi}(x^{\prime})]=0\), \([\hat{\Pi}(x),\hat{\Pi}(x^{\prime})]=0\), and \([\hat{\varphi}(x),\hat{\Pi}(x^{\prime})]=i\delta(x-x^{\prime})\) and consequently produce displacement operators that obey commutation relations
\[[\hat{a}_{\varphi}(k^{\prime}),\hat{D}[\alpha_{\varphi}(k)]]=\alpha_{\varphi}(k^{\prime})\hat{D}[\alpha_{\varphi}(k)]. \tag{6}\]
### Unruh-DeWitt Quantum Gates
A unitary operator in the form of a quantum gate can generate and/or propagate quantum information through a quantum circuit [16]. We expect this same consequence to be achievable with UDW detector models [2]. Incorporating our scalar field from Eq. 1 we present the well known UDW detector model as
\[U_{UDW}(t_{1},t_{2})=Te^{-i\int_{t_{1}}^{t_{2}}dt\,\hat{\Pi}_{int}(t)} \tag{7}\]
with
\[\hat{\Pi}_{int}(t)=\lambda\chi(t)\int_{\mathbb{R}}dk\ F(k)\hat{\mu}(t)\otimes\hat{\varphi}(k,t). \tag{8}\]
where \(\chi(t)\) and \(F(k)\) are the switching and smearing functions respectively, \(\lambda\) is a time-dependent coupling constant that indicates which gate is interacting at which time, and \(\hat{\mu}(t)\) is a two-state (qubit) detector that has the form
\[\hat{\mu}(t)=\frac{1}{2}(\hat{S}_{+}e^{-i\Omega t}+\hat{S}_{-}e^{+i\Omega t}) \tag{9}\]
where \(\hat{S}_{i}\) is the projector onto some spin state and \(\Omega\) is some real-valued number that indicates a non-trivial energy difference between spin states. By using a delta-like switching function we can fix the time ordering of our unitary and find that it simplifies to
\[\hat{U}_{UDW}=\exp{(-i\lambda\hat{\mu}\otimes\hat{\varphi})}. \tag{10}\]
The structure of \(\hat{\mu}\), as discussed above, is that of a qubit observable, and for this reason we can rewrite the gate by introducing projectors \(\hat{P}_{s}\) onto the eigenstates of \(\hat{\mu}\) as
\[\hat{U}_{UDW}=\sum_{s\in\pm}\hat{P_{s}}\otimes e^{is\hat{\varphi}(F)}. \tag{11}\]
In this form we see that \(\hat{U}_{UDW}\) is like a controlled gate, it acts with one unitary on the field in one eigenspace of the qubit and another unitary in the other eigenspace of the qubit. This is part of the "encoding" process that imparts the quantum information from the qubit to the field. Here we have redefined the field observables to include the smearing and coupling of the UDW model
\[\hat{\varphi}(F) \coloneqq\lambda_{\varphi}\int dk\,F(k)\hat{\varphi}(k,t) \tag{12}\] \[\hat{\Pi}(F) \coloneqq\lambda_{\Pi}\int dk\,F(k)\hat{\Pi}(k,t). \tag{13}\]
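The controlled structure of equation 11 can be sketched explicitly in a single-mode truncation, where \(e^{\pm i\hat{\varphi}(F)}\) acts as a displacement by a smeared, coupled amplitude; the amplitude value below is a stand-in assumption.

```python
from qutip import basis, displace, tensor

N = 40                        # Fock truncation for the field mode
alpha_F = 0.8                 # stand-in for the smeared, coupled amplitude of eqs. 12-13
P_plus = basis(2, 0).proj()   # projectors onto the two mu-hat eigenstates
P_minus = basis(2, 1).proj()
# each qubit eigenspace displaces the field mode in the opposite direction
U_udw = tensor(P_plus, displace(N, alpha_F)) + tensor(P_minus, displace(N, -alpha_F))
assert U_udw.isunitary
```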
Equation 11 captures one of the two criteria discussed at the beginning of this section, namely entanglement generation. However, given that the interest of the dephasing channel scheme is to monitor changes in quantum capacity, we instead aim for a channel that preserves quantum information.
### UDW State Transfer Channel
#### ii.3.1 Setting the Channel Up
It was shown in Ref. [2] that to write down a channel that preserves entanglement and performs a state transfer, we would need two of the controlled unitaries presented in Eq. 11. This will lead to the unitary
\[\hat{U}_{\nu\varphi}=\sum_{z,x\in\pm}\hat{P}_{x}\hat{P}_{z}\otimes e^{ix\hat{ \Pi}(F)_{\nu}}e^{iz\hat{\varphi}(F)_{\nu}} \tag{14}\]
where we introduce the notation that \(\hat{P}_{x}\), with an \(x\)-index, and \(\hat{P}_{z}\), with a \(z\)-index, are the projection operators onto the eigenstates of Pauli matrices \(\hat{\sigma}_{x}\) and \(\hat{\sigma}_{z}\) respectively and \(\nu\) indicates the qubit interacting with the field at a given time \(t_{\nu}\). Figure 1 presents a circuit diagram for this channel which has the mathematical structure
\[\Xi_{A\to B}=\mathrm{Tr}_{A\varphi}[U_{A\varphi}U_{\varphi B}(\hat{\rho}_{A,0}\otimes\hat{\rho}_{\varphi}\otimes\hat{\rho}_{B,0})U_{\varphi B}^{\dagger}U_{A\varphi}^{\dagger}] \tag{15}\]
and can be expanded out as
\[\Xi_{A\to B}=\\ \sum_{l,l^{\prime},x_{i},z_{i}}\langle 0|e^{iz_{1}\hat{\varphi}_{A}}e^ {ix_{1}\hat{\Pi}_{A}}e^{ix_{2}\hat{\Pi}_{A}}e^{iz_{2}\hat{\varphi}_{A}}\\ \times e^{iz_{1}\hat{\varphi}_{A}}e^{ix_{1}\hat{\Pi}_{A}}e^{ix_{2} \hat{\Pi}_{A}}e^{iz_{2}\hat{\varphi}_{A}}|0\rangle\\ \times\langle l^{\prime}_{z}|\hat{P}_{-z_{1}}\hat{P}_{-x_{1}}\hat {P}_{x_{4}}\hat{P}_{z_{4}}|l_{z}\rangle_{C}\left\langle l^{\prime}_{z}|\right. \\ \left.\otimes\hat{P}_{-z_{3}}\hat{P}_{-x_{3}}|+_{y}\right\rangle_ {B}\left\langle+_{y}\right|\hat{P}_{x_{2}}\hat{P}_{z_{2}}. \tag{16}\]
The correlator in Eq. 16, which is a result of the trace over the field, is a tricky value to calculate. One may recognize the correlator as a vacuum expectation value of eight vertex operators, often utilized in conformal field theory and string theory. If utilizing coherent states, we recognize that they will lead to non-orthogonal inner products \(|\left\langle+\alpha(k)\right|-\alpha(k)\rangle|\) that contain the specific unwanted states that, if we could dephase them away, would improve the coherent information of the channel. Furthermore, any additional interactions on the field will only yield a larger correlator. Therefore, we also explore the possibility that a dephasing perspective may eliminate the necessity to increase the size of the correlator for additional interactions.
#### ii.3.2 Choosing the Correct Parameters
Within Eq. 8 and Eqs. 12 and 13, there are still a few free parameters remaining. Namely, the coupling constants \(\lambda_{\varphi}\) and \(\lambda_{\Pi}\) as well as the smearing function \(F(k)\). We therefore utilize these parameters to remove unwanted final states.
These unwanted states show up in two ways: firstly, during the application of \(e^{ix\hat{\Pi}}\), which, when applied to a coherent state, will generally change the value of the state, and secondly, with non-zero values of the inner product \(|\left\langle+\alpha(k)\right|-\alpha(k)\rangle|\). In RQI literature, it is common to redefine the coherent amplitude, given in Eq. 3, to include the coupling constants and the smearing function. By doing this we can utilize equation 6 and the following identity
\[e^{\pm i\Pi}\hat{a}_{k}e^{\mp i\Pi}=\hat{a}_{k}\mp\alpha_{\Pi}(k) \tag{17}\]
to prove the relation
\[\hat{\Pi}\left|\pm\alpha(k)\right\rangle=\pm\Gamma\left|\pm\alpha(k)\right\rangle+e^{\pm i\hat{\varphi}}\hat{\Pi}\left|0\right\rangle \tag{18}\]
which in the limit of \(\Gamma^{2}\gg\langle 0|\Pi^{2}|0\rangle\) gives us the following constraint in (1+1) dimensions
\[\left(\lambda_{\varphi}\int dk\left|\tilde{F}_{\nu}(k)\right|^{2}\right)^{2} \gg\frac{1}{2}\int dk\,\omega(k)\left|\tilde{F}_{\nu}(k)\right|^{2}. \tag{19}\]
Figure 1: Operators \(\hat{U}_{A\varphi}\) and \(\hat{U}_{\varphi B}\) are employed to encode and decode quantum information onto and off of the field \(\hat{\varphi}\) thus transferring the quantum information from qubit A to qubit B.
This constraint specifically allows for the application of \(e^{ix\hat{\Pi}}\) to a coherent state and results in a phase constant. Furthermore, constraining this phase constant by
\[\Gamma\coloneqq\lambda_{\Pi}\lambda_{\varphi}\int dk\,|\tilde{F}_{\nu}(k)|^{2}= \frac{\pi}{4}\,\mathrm{mod}\,2\pi \tag{20}\]
results in a Bloch rotation of the output qubit A's states that eliminates the first batch of unwanted states1. This constraint will remain throughout the rest of this letter.
Footnote 1: For a more detailed explanation of these constraints see Ref. [2].
The second group of unwanted states, and the states directly targeted for dephasing, follow from the non-orthogonal inner product discussed at the end of the previous section. The inner product has the form,
\[|\bra{+\alpha(k)}-\alpha(k)\rangle\,|=\exp\left[-(\lambda_{\varphi})^{2}\, \int\frac{dk}{2\omega(k)}\,|\tilde{F}_{\nu}(k)|^{2}\right]. \tag{21}\]
Choosing a Gaussian smearing function with Gaussian width \(\sigma\), and setting \(\lambda\gg\sigma\), renders these terms approximately zero. With these constraints, the coherent information of the field-mediated communication channel from qubit A to qubit B in Fig. 1 grows toward one as we increase the strength of the coupling, as demonstrated in Fig. 4 of Ref. [2] as well as Fig. 3.
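To make the tuning concrete, the sketch below fixes a Gaussian smearing, solves equation 20 for \(\lambda_{\Pi}\), and evaluates the overlap of equation 21. The dispersion relation \(\omega(k)=\sqrt{k^{2}+m^{2}}\) with a small mass is purely our assumption, introduced to regulate the \(k=0\) end of the integral.

```python
import numpy as np
from scipy.integrate import quad

sigma, lam_phi = 1.0, 10.0
# hypothetical Gaussian smearing with |F(k)|^2 normalised to one
F2 = lambda k: np.exp(-(k * sigma)**2) * sigma / np.sqrt(np.pi)
norm, _ = quad(F2, -np.inf, np.inf)
# equation 20: choose lam_Pi so Gamma = lam_Pi * lam_phi * int |F|^2 dk = pi/4
lam_pi = (np.pi / 4) / (lam_phi * norm)
# equation 21 with omega(k) = sqrt(k^2 + m^2) (assumed, to regulate k = 0)
m = 0.1
I, _ = quad(lambda k: F2(k) / (2 * np.sqrt(k**2 + m**2)), -np.inf, np.inf)
print("|<+a|-a>| =", np.exp(-lam_phi**2 * I))   # -> 0 as lam_phi grows
```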
These recent results are well understood in the RQI community, but follow from the linear nature of the field observables in Eqs. 1 and 2. For certain nonlinear cases, it becomes a challenge to redefine the coherent amplitudes to include the coupling and smearing constants. It is for this reason, as well as those previously discussed, that we aim to generalize these UDW gates to be understood as bosonic dephasing channels.
## III Unruh-DeWitt bosonic dephasing channels
### The UDW Channel as a Bosonic Dephasing Channel
In quantum Shannon theory it is often useful to discuss multipartite channels that experience decoherence as dephasing channels, usually under the guise of interactions with the environment. In that regard, we may see a dephasing channel2 written in the Fock basis as
Footnote 2: It is the prerogative of the author that quantum channels including fields and qubits be denoted with \(\Xi\) and channels that contain only qubits \(\mathcal{N}\).
\[\mathcal{N}_{\gamma}(\rho)=\sum_{m,n=0}^{\infty}e^{-\frac{\gamma}{2}(n-m)^{2}}(c_{n}c_{m}^{*})\ket{n}\bra{m} \tag{22}\]
where \(\gamma\) is the factor that determines the rate of decoherence. For example, as \(\gamma\to\infty\) the off-diagonal terms vanish. We revisit the concept of dephasing due to the environment in Sec. IV.1.
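As a minimal sketch of this map (assuming the Gaussian damping factor written above), the channel acts element-wise on a truncated Fock-basis density matrix:

```python
import numpy as np

def dephase(rho, gamma):
    """Apply the map of equation 22 to a Fock-basis density matrix:
    each |n><m| element is damped by exp(-gamma (n - m)^2 / 2), so the
    diagonal (n = m) populations are untouched."""
    n = np.arange(rho.shape[0])
    damping = np.exp(-0.5 * gamma * (n[:, None] - n[None, :])**2)
    return rho * damping
```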
In the case of the standard UDW channel, we aim to show that the dephasing is a necessity following from the non-orthogonality of the coherent states. Generating the off-diagonals of the coherent state density matrix scrambles the encoded information from \(\hat{U}_{A\varphi}\), as the controlled structure of our unitaries encodes (decodes) QI onto (off of) the diagonals of the coherent state density matrix. Utilizing the dephasing perspective, along with our free parameters to send these states to zero, can accomplish near-perfect channel capacity.
To show the relation between the current formalism and the proposed dephasing perspective, we can reformulate the state transfer channel of Ref. [2] as
\[\hat{U}_{\nu\varphi}=\sum_{z,x\in\pm}\hat{P}_{x}\hat{P}_{z}\otimes \exp\left[ix\int dk\,\sqrt{\gamma_{\Pi}(k)}\hat{\Pi}_{\nu}(k)\right]\\ \times\exp\left[iz\int dk\,\sqrt{\gamma_{\varphi}(k)}\hat{\varphi} _{\nu}(k)\right] \tag{23}\]
where we have set
\[\hat{\varphi}(\tilde{\gamma})_{\nu}\coloneqq\lambda_{\varphi}\int dk\tilde{F}_ {\nu}(k)\hat{\varphi}(k,t_{\nu})=\int dk\sqrt{\gamma_{\varphi}(k)}\hat{\varphi }(k,t_{\nu}) \tag{24}\]
\[\hat{\Pi}(\tilde{\gamma})_{\nu}\coloneqq\lambda_{\Pi}\int dk\tilde{F}_{\nu}(k) \hat{\Pi}(k,t_{\nu})=\int dk\sqrt{\gamma_{\Pi}(k)}\hat{\Pi}(k,t_{\nu}). \tag{25}\]
Instead of redefining the coherent amplitudes to include our free parameters as we did in Sec. II.3.2, we have wrapped the free parameters up in some function \(\gamma_{O}(k)\) and leave the coherent amplitudes as they exist in the field observables \(\hat{O}\). This simplification allows for general treatments of the channel regardless of the linearity of the ladder operators in the field observables.
Now carrying out the inner product of Eq. 21 yields
\[|\bra{+\sqrt{\gamma_{\varphi}(k)}\alpha(k)}-\sqrt{\gamma_{\varphi}(k)}\alpha (k)\rangle\,|=e^{-2\gamma_{\varphi}(k)|\alpha(k)|^{2}} \tag{26}\]
where we have utilized the non-orthogonality of coherent states identity
\[\langle\beta(k)|\alpha(k)\rangle=\exp\left(-\frac{1}{2}|\alpha(k)|^{2}-\frac {1}{2}|\beta(k)|^{2}+\beta^{*}(k)\alpha(k)\right). \tag{27}\]
It is obvious from Eq. 26 that the coherent states are part of a bosonic dephasing channel \(\Xi_{\varphi\to\varphi^{\prime}}\) with dephasing constant \(\sqrt{\gamma_{O}(k)}\). However, unlike in canonical dephasing channels, the coupling, smearing, and switching are the mechanisms of dephasing. Sending our dephasing function \(\gamma_{\varphi}(k)\to\infty\) for any value of \(k\) removes the off-diagonal elements from this channel. This should be expected following the constraints in Sec. II.3.2.
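Equation 26 is easy to verify numerically in a single-mode truncation; the QuTiP sketch below uses illustrative values for the amplitude and dephasing constant.

```python
import numpy as np
from qutip import basis, displace

N = 60                    # truncation, large enough for the amplitudes below
alpha, gamma = 0.7, 2.0   # illustrative values
vac = basis(N, 0)
plus = displace(N, np.sqrt(gamma) * alpha) * vac
minus = displace(N, -np.sqrt(gamma) * alpha) * vac
# reproduces equation 26: |<+sqrt(g) a|-sqrt(g) a>| = exp(-2 g |a|^2)
print(abs(plus.overlap(minus)), np.exp(-2 * gamma * abs(alpha)**2))
```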
## IV Applications of a UDW channel from a bosonic dephasing perspective
The main result thus far has been that in the RQI channel outlined in Fig. 1, we can model the coupling constant and smearing function as a dephasing function. For the remainder of this letter, we demonstrate the results of dephasing models using numerical simulations. For this reason, we utilize a Gaussian smearing function and subsequently treat the dephasing function as a constant \(\gamma_{\hat{O}}\). Since we aim to take advantage of the new perspective, let's consider the canonical dephasing channel.
### Canonical Bosonic Dephasing Channel
The canonical dephasing channel is one where the system interacts with the environment, and the multipartite quantum signal degrades. Therefore, given the paradoxical result in our UDW system, outlining such a dephasing channel (one where the field is allowed to interact with the environment) may provide a similar boost in QI through our channel. However, given that dephasing channels act on the off-diagonal components of the density matrix, we may find that the unwanted states remain undeterred.
To evaluate this channel, we begin with a standard unitary that takes advantage of the number operator
\[\hat{U}_{\text{E}}=e^{-i\sqrt{\gamma_{E}}\hat{a}^{\dagger}\hat{a}(\hat{b}+\hat{b}^{\dagger})} \tag{28}\]
where \(\gamma_{E}\) is the dephasing constant associated with the interaction between the field and environment and \(\hat{b}^{\dagger}(\hat{b})\) are the creation (annihilation) operators of the environment. We can model the channel's effect on the field by writing out the composite channel
\[\Xi_{\varphi\rightarrow\varphi^{\prime},E}=\text{Tr}_{E}[\hat{U}_{\text{E}}( \hat{\rho}_{1,\varphi}\otimes\left|0\right\rangle\left\langle 0\right|)\hat{U}_{ \text{E}}^{\dagger}]. \tag{29}\]
Implementing this in the usual prescription of bosonic dephasing channels, similar to that of Eq. 36, we get
\[\Xi_{\varphi\rightarrow\varphi^{\prime},E}(\hat{\rho}_{\varphi})=\int_{- \infty}^{\infty}d\phi\,p(\phi)\,e^{-i\hat{a}^{\dagger}\hat{a}\phi}\hat{\rho}_ {1,\varphi}e^{i\hat{a}^{\dagger}\hat{a}\phi}, \tag{30}\]
where \(\Xi_{\varphi\rightarrow\varphi^{\prime},E}(\hat{\rho}_{\varphi})\) represents the dephasing channel and the effects on our field \(\varphi\), and \(p(\phi)\) is a Gaussian probability density given by
\[p(\phi)=\sqrt{\frac{1}{2\pi\gamma_{E}}}e^{-\frac{1}{2}\frac{\phi^{2}}{\gamma_ {E}}}. \tag{31}\]
To evaluate this channel we convert to the Fock basis and continue by substituting in Eq. 38 and simplifying Eq. 30 which then becomes
\[\Xi_{\varphi\rightarrow\varphi^{\prime},E}(\hat{\rho}_{\varphi})=\sum_{s,s^{\prime}\in\pm}\int_{-\infty}^{\infty}d\phi\,p(\phi)\\ \times e^{-i\hat{a}^{\dagger}\hat{a}\phi}\left|s\sqrt{\gamma_{\varphi}}\alpha\right\rangle\left\langle s^{\prime}\sqrt{\gamma_{\varphi}}\alpha\right|e^{i\hat{a}^{\dagger}\hat{a}\phi}\\ =\sum_{s,s^{\prime}\in\pm}\sum_{m,n}\int_{-\infty}^{\infty}d\phi\,p(\phi)e^{-\frac{1}{2}\gamma_{\varphi}\left|\alpha\right|^{2}(s^{2}+s^{\prime 2})}\\ \times\frac{(s\sqrt{\gamma_{\varphi}}\alpha)^{n}(s^{\prime}\sqrt{\gamma_{\varphi}}\alpha^{*})^{m}}{\sqrt{n!}\sqrt{m!}}e^{-i\hat{a}^{\dagger}\hat{a}\phi}\left|n\right\rangle\left\langle m\right|e^{i\hat{a}^{\dagger}\hat{a}\phi}. \tag{32}\]
Evaluating the operators we obtain
\[\Xi_{\varphi\rightarrow\varphi^{\prime},E}(\hat{\rho}_{\varphi})=\sum_{s,s^{\prime}\in\pm}\sum_{m,n}\int_{-\infty}^{\infty}d\phi\,p(\phi)e^{-i\phi(m-n)}\\ \times e^{-\frac{1}{2}\gamma_{\varphi}\left|\alpha\right|^{2}(s^{2}+s^{\prime 2})}\frac{(s\sqrt{\gamma_{\varphi}}\alpha)^{n}(s^{\prime}\sqrt{\gamma_{\varphi}}\alpha^{*})^{m}}{\sqrt{n!}\sqrt{m!}}\left|n\right\rangle\left\langle m\right|. \tag{33}\]
Carrying out the integral yields
\[\Xi_{\varphi\rightarrow\varphi^{\prime},E}(\hat{\rho}_{\varphi})=\sum_{s,s^{\prime}\in\pm}\sum_{m,n}e^{-\frac{\gamma_{E}}{2}(m-n)^{2}}\\ \times e^{-\frac{1}{2}\gamma_{\varphi}\left|\alpha\right|^{2}(s^{2}+s^{\prime 2})}\frac{(s\sqrt{\gamma_{\varphi}}\alpha)^{n}(s^{\prime}\sqrt{\gamma_{\varphi}}\alpha^{*})^{m}}{\sqrt{n!}\sqrt{m!}}\left|n\right\rangle\left\langle m\right|. \tag{34}\]
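The Gaussian integral behind equation 34 can be checked directly; the sketch below compares the numerical integral against the closed form for an illustrative \(\gamma_{E}\).

```python
import numpy as np
from scipy.integrate import quad

gamma_E = 0.3   # illustrative dephasing constant

def kernel(d):
    """Evaluate int dphi p(phi) exp(-i phi d) with p(phi) the Gaussian of
    equation 31; the imaginary part vanishes by symmetry."""
    p = lambda phi: np.exp(-phi**2 / (2 * gamma_E)) / np.sqrt(2 * np.pi * gamma_E)
    val, _ = quad(lambda phi: p(phi) * np.cos(phi * d), -np.inf, np.inf)
    return val

for d in range(4):   # d = m - n
    print(d, kernel(d), np.exp(-0.5 * gamma_E * d**2))
```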
It is evident that when \(m=n\), the dephasing channel has no effect. Moreover, when considering the change that this dephasing makes to our original correlator in Eq. 16, we trace over the final state of the field and assess the inner product of our new coherent states. What we find is that non-zero results require \(m=n\). Therefore, acting on the field with the number operator results in an inner product identical to Eq. 26 and subsequently will not change the coherent information of the UDW channel. This result indicates that in this idealized dephasing channel, the environment has no effect on the quantum information through our channel. Coincidentally, the condensed matter systems where these channels have been proposed take advantage of topologically protected edge states to transmit quantum information [7]. Therefore, interactions with the environment were expected to be minimal regardless.
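The vanishing effect of this idealized channel is easy to verify numerically. The short sketch below, which is illustrative only and not taken from any existing code, builds a truncated-Fock-basis coherent state, applies the Gaussian phase average of Eq. 30, and checks that the off-diagonals are damped by the factor \(e^{-\frac{1}{2}\gamma_{E}(m-n)^{2}}\) of Eq. 34 while the \(m=n\) terms are untouched; the amplitude `beta` stands in for \(\sqrt{\gamma_{\varphi}}\alpha\), and all variable names are assumptions of the sketch.

```python
# Minimal numerical check of Eqs. 30-34 (illustrative sketch, not the paper's code).
import numpy as np
from math import factorial
from scipy.integrate import quad

N = 15            # Fock-space truncation
beta = 1.2        # stand-in for sqrt(gamma_phi)*alpha
gamma_E = 0.3     # environment dephasing constant

n = np.arange(N)
# Fock amplitudes of a coherent state |beta>
c = np.exp(-beta**2 / 2) * beta**n / np.sqrt([float(factorial(k)) for k in n])
rho = np.outer(c, c)

# Eq. 30: Gaussian average over number-operator phase rotations
p = lambda phi: np.exp(-phi**2 / (2 * gamma_E)) / np.sqrt(2 * np.pi * gamma_E)
rho_out = np.empty_like(rho)
for a in range(N):
    for b in range(N):
        # average of exp(-i*phi*(a-b)); the sine part vanishes by symmetry
        avg = quad(lambda phi: p(phi) * np.cos(phi * (a - b)), -30, 30)[0]
        rho_out[a, b] = rho[a, b] * avg

# Eq. 34 prediction: off-diagonals damped, diagonal (m = n) untouched
pred = rho * np.exp(-0.5 * gamma_E * (n[:, None] - n[None, :])**2)
print(np.allclose(rho_out, pred, atol=1e-6))         # True
print(np.allclose(np.diag(rho_out), np.diag(rho)))   # True: no effect for m = n
```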
### Cross-Talk Noise
#### iv.2.1 Setting up a UDW Noise Channel
One possible source of unwanted disturbances of quantum information in proposed condensed-matter approaches to UDW channels is unwanted interactions with detectors, a type of Cross-Talk (CT) noise (see Fig. 2). Traditional calculations of these additional interactions would increase the number of vertex operators in the correlator of Eq. 16, making the computation significantly longer and more involved. From the dephasing perspective, however, we can instead incorporate the additional detector effects into the dephasing parameter.
Let's assume that the new gate is defined as
\[\hat{U}_{\text{CT}}=\sum_{z\in\pm}\hat{P}_{z}\otimes\exp\left[iz\sqrt{\gamma_{N}} \hat{\varphi}_{\nu}\right], \tag{35}\]
similar to that of Eq. 11. Following the prescription in Sec. IV.1 we model the channel as
\[\Xi_{\varphi\rightarrow\varphi^{\prime},\text{N}}(\hat{\rho}_{\varphi})=\int_ {-\infty}^{\infty}d\phi\,p(\phi)\,e^{i\hat{\varphi}\phi}\hat{\rho}_{1,\varphi} e^{-i\hat{\varphi}\phi} \tag{36}\]
which expresses the channel \(\Xi_{\varphi\rightarrow\varphi^{\prime},\text{N}}(\hat{\rho}_{\varphi})\), a noisy composite channel describing the field evolution throughout the channel \(\Xi_{A\to B,\text{N}}\). The channel encodes the probability (which follows a probability distribution \(p(\phi)\)[9; 10]) that the CT detector produces a different coherent state. For this calculation we will utilize the probability distribution of Eq. 31 with the substitution of the dephasing parameter \(\gamma_{N}\). \(\hat{\rho}_{1,\varphi}\) is the state after the interaction between the field and qubit A and has the explicit density form
\[\hat{\rho}_{1,\varphi}=\sum_{s,s^{\prime}\in\pm}\left|s\sqrt{\gamma_{\varphi}}\alpha\right\rangle\left\langle s^{\prime}\sqrt{\gamma_{\varphi}}\alpha\right| \tag{37}\] \[=\sum_{s,s^{\prime}\in\pm}e^{is\sqrt{\gamma_{\varphi}}\hat{\varphi}}\left|0\right\rangle\left\langle 0\right|e^{-is^{\prime}\sqrt{\gamma_{\varphi}}\hat{\varphi}}. \tag{38}\]
Substituting Eq. 38 into Eq. 36 we get
\[\Xi_{\varphi\rightarrow\varphi^{\prime},\text{N}}(\hat{\rho}_{\varphi})=\sum_{s,s^{\prime}\in\pm}\int_{-\infty}^{\infty}d\phi\,p(\phi)\\ \times e^{is(\phi+\sqrt{\gamma_{\varphi}})\hat{\varphi}}\left|0\right\rangle\left\langle 0\right|e^{-is^{\prime}(\phi+\sqrt{\gamma_{\varphi}})\hat{\varphi}} \tag{39}\] \[=\sum_{s,s^{\prime}\in\pm}\int_{-\infty}^{\infty}d\phi\,p(\phi)\\ \times\left|s(\phi+\sqrt{\gamma_{\varphi}})\alpha\right\rangle\left\langle s^{\prime}(\phi+\sqrt{\gamma_{\varphi}})\alpha\right|. \tag{40}\]
As was shown in Sec. II.3, the unwanted states result from the correlator formed by the partial trace over the field and the non-orthogonality of coherent states. Tracing over \(\Xi_{\varphi\rightarrow\varphi^{\prime},\text{N}}(\hat{\rho}_{\varphi})\) to reproduce the effect of the correlator in Eq. 16 will demonstrate the changes made to the inner product of the coherent states which we have denoted by \(\langle\Xi_{\varphi\rightarrow\varphi^{\prime},\text{N}}(\hat{\rho}_{\varphi})\rangle\) and defined as
\[\text{Tr}\,\Xi_{\varphi\rightarrow\varphi^{\prime},\text{N}}(\hat{\rho}_{\varphi})\equiv\langle\Xi_{\varphi\rightarrow\varphi^{\prime},\text{N}}(\hat{\rho}_{\varphi})\rangle=\sum_{s,s^{\prime}\in\pm}\int_{-\infty}^{\infty}d\phi\,p(\phi)\\ \times\langle s^{\prime}(\phi+\sqrt{\gamma_{\varphi}})\alpha|s(\phi+\sqrt{\gamma_{\varphi}})\alpha\rangle\\ =\sum_{s,s^{\prime}\in\pm}\int_{-\infty}^{\infty}d\phi\,p(\phi)\,\exp\bigg{(}-\frac{1}{2}s^{\prime 2}(\phi+\sqrt{\gamma_{\varphi}})^{2}|\alpha|^{2}\\ -\frac{1}{2}s^{2}(\phi+\sqrt{\gamma_{\varphi}})^{2}|\alpha|^{2}+s^{\prime}s(\phi+\sqrt{\gamma_{\varphi}})^{2}|\alpha|^{2}\bigg{)}\\ =\sum_{s,s^{\prime}\in\pm}\int_{-\infty}^{\infty}d\phi\,p(\phi)\,\exp\bigg{(}-\frac{1}{2}(\phi+\sqrt{\gamma_{\varphi}})^{2}|\alpha|^{2}(s^{\prime}-s)^{2}\bigg{)} \tag{41}\]
where we have utilized Eq. 27 as we did in Sec. III.1. Notice Eq. 41 has the same structure as Eq. 26 but allows a new dephasing construction of the parameters. As expected, when \(s^{\prime}=s\) the expectation value is trivially one and will not affect the outcome of the channel in Eq. 15. However, when \(s^{\prime}\neq s\) we now have two parameters, \(\gamma_{\varphi}\) and \(\phi\), to enforce dephasing of unwanted states.
Plugging the above probability distribution into Eq. 41 and evaluating the integral for \(s^{\prime}\neq s\) we get
\[\langle\Xi_{\varphi\rightarrow\varphi^{\prime},\text{N}}(\hat{\rho}_{\varphi}) \rangle=\frac{\exp\bigg{(}-\frac{2\gamma_{\varphi}|\alpha|^{2}}{1+4|\alpha|^{ 2}b^{2}\gamma_{\varphi}}\bigg{)}}{\sqrt{1+4|\alpha|^{2}b^{2}\gamma_{\varphi}}} \tag{42}\]
where we have redefined our CT dephasing parameter as a multiple of the original, \(\gamma_{N}=b^{2}\gamma_{\varphi}\). Eq. 42 is directly comparable to Eq. 26. It is straightforward to see in Fig. 3a that as \(b\to 0\) the interaction with the noise is turned off, at \(b=1\) the channel is the noisiest, and as \(b\rightarrow\infty\), \(\gamma_{N}\) acts as the primary dephasing factor for small values of \(\gamma_{\varphi}\), as demonstrated in Fig. 3b.
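As a cross-check on Eq. 42, the closed form can be compared against direct numerical quadrature of Eq. 41 for the \(s^{\prime}\neq s\) branch, where \((s^{\prime}-s)^{2}=4\). The sketch below is illustrative, not the paper's code, and assumes the Gaussian \(p(\phi)\) of Eq. 31 with variance \(\gamma_{N}=b^{2}\gamma_{\varphi}\), the identification under which the closed form above follows.

```python
# Comparing the closed form of Eq. 42 with quadrature of Eq. 41 (s' != s).
import numpy as np
from scipy.integrate import quad

def overlap_closed(gamma_phi, alpha2, b):
    x = 4 * alpha2 * b**2 * gamma_phi
    return np.exp(-2 * gamma_phi * alpha2 / (1 + x)) / np.sqrt(1 + x)

def overlap_numeric(gamma_phi, alpha2, b):
    g_n = b**2 * gamma_phi                     # assumed variance of p(phi)
    p = lambda phi: np.exp(-phi**2 / (2 * g_n)) / np.sqrt(2 * np.pi * g_n)
    f = lambda phi: p(phi) * np.exp(-2 * alpha2 * (phi + np.sqrt(gamma_phi))**2)
    return quad(f, -np.inf, np.inf)[0]

gamma_phi, alpha2 = 0.5, 1.0
for b in (0.1, 0.5, 1.0, 3.0):
    print(b, overlap_closed(gamma_phi, alpha2, b), overlap_numeric(gamma_phi, alpha2, b))
# As b -> 0 both tend to exp(-2*gamma_phi*|alpha|^2), the noiseless inner
# product of Eq. 26, confirming that the cross-talk is switched off.
```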
#### iv.2.2 Noisy UDW Effects on Coherent Information
To understand the difference these new parameters make to the coherent information we can look at the definition of \(\gamma_{\varphi}\) in Eq. 24. Since \(\gamma_{\varphi}\) is a function of \(\lambda_{\varphi}\) we can set up the new inner product in terms of a new coupling constant \(\lambda_{\varphi,b}\) given by the relation
\[\frac{\exp\bigg{(}-\frac{2\gamma_{\varphi}|\alpha|^{2}}{1+4b^{2} \gamma_{\varphi}|\alpha|^{2}}\bigg{)}}{\sqrt{1+4b^{2}\gamma_{\varphi}|\alpha|^{ 2}}}=\exp\bigg{[}-2(\lambda_{\varphi,b})^{2}\\ \times\int dk\,|\tilde{F}_{\nu}(k)|^{2}|\alpha|^{2}\bigg{]}. \tag{43}\]
Figure 2: Introducing another UDW gate acts as additional information entering the channel and subsequently will introduce cross-talk noise to the original channel in Eq. 15.
We then solve for \(\lambda_{\varphi,b}\) in terms of \(\lambda_{\varphi}\) and ascertain
\[\lambda_{\varphi,b}=\left[\frac{\lambda_{\varphi}^{2}}{1+\frac{4b^{ 2}|\alpha|^{2}\lambda_{\varphi}^{2}}{\sqrt{(2\pi)^{3}}\sigma}}-\frac{\sqrt{(2\pi )^{3}}\sigma}{2|\alpha|^{2}}\\ \times\ln\left(1+\frac{4b^{2}|\alpha|^{2}\lambda_{\varphi}^{2}}{ \sqrt{(2\pi)^{3}}\sigma}\right)\right]^{\frac{1}{2}} \tag{44}\]
where the term \(\sqrt{(2\pi)^{3}}\sigma\) is a consequence of Gaussian smearing with width \(\sigma\). Calculating the channel capacity with this new value \(\lambda_{\varphi,b}\), while keeping the remaining parameters intact, still yields near-perfect channel capacity, but it is reached more slowly when \(b\leq 1\), as demonstrated by Fig. 3(c). Furthermore, for high values of \(b\) and low values of \(\lambda_{\varphi}\), one would expect \(b\) to act as the primary dephasing factor, as shown in Fig. 3(d).
## V Summary and Results
We have shown that the formalism of quantum channels produced by UDW detectors provides a bosonic dephasing channel perspective. With this perspective, we have demonstrated that the purpose of the dephasing is to remove unwanted states that lead to the scrambling of the quantum information encoded on the field. We have shown that the dephasing constant can be written in terms of the strength of coupling between the qubit and field.
We aimed to demonstrate several applications of the dephasing perspective that provide interpretations of unwanted interactions. Firstly, we applied the canonical form of a bosonic dephasing channel, allowing the system to interact with the environment. In this idealized dephasing channel, the prior constraints on the off-diagonals were the only effects that remained when tracing over the field. Subsequently, this indicated no additional noise generated from interacting with the environment under this prescription.
Figure 3: Comparing the coherent information of channels with and without noise, it is clear that the noisy channel \(\Xi_{A\to B,N}(\lambda_{\varphi,b})\) reaches a channel capacity of one at a slower rate. At \(b=1\) the noisy signal is the highest; this is due to a signal boost in the unwanted states. Regardless, we can see in (d) an increase in the lower bound of coherent information for small values of \(\gamma_{\varphi}\).
Secondly, how does coherent information change given some noise due to additional UDW detectors? To evaluate this numerically, we presented a unitary operation that introduces CT noise to the system and calculated how that noise affects the strength of coupling between the qubits and fields. Figure 3 demonstrates how the additional noise can affect the propagation of quantum information through the channel given in Eq. 15. These values make intuitive sense as one may expect an additional signal boost to non-orthogonal off-diagonal elements of the density matrix in Eq. 40 which act to scramble the QI.
An open problem remaining is the possibility of writing down a unitary dephasing channel that increases coherent information overall. One might notice that non-unitary interactions that increase the strength of the diagonal elements of the coherent state density matrix while decreasing the off-diagonal elements are possible. However, given the nature of dephasing channels, accomplishing this with unitary operations requires much care.
## VI Acknowledgments
We thank Ludovico Lami and Mark Wilde for useful discussions on dephasing channels and key insights into the role coherent states play in quantum information theory. We would also like to thank Justin Kulp for several conversations, resulting in more precise and quicker data collection and interpretation.
|
2309.10764 | An optical Ising spin glass simulator with tuneable short range
couplings | Non-deterministic polynomial-time (NP) problems are ubiquitous in almost
every field of study. Recently, all-optical approaches have been explored for
solving classic NP problems based on the spin-glass Ising Hamiltonian. However,
obtaining programmable spin-couplings in large-scale optical Ising simulators,
on the other hand, remains challenging. Here, we demonstrate control of the
interaction length between user-defined parts of a fully-connected Ising
system. This is achieved by exploiting the knowledge of the transmission matrix
of a random medium and by using diffusers of various thickness. Finally, we
exploit our spin-coupling control to observe replica-to-replica fluctuations
and its analogy to standard replica symmetry breaking. | Louis Delloye, Gianni Jacucci, Raj Pandya, Davide Pierangeli, Claudio Conti, Sylvain Gigan | 2023-09-19T17:02:37Z | http://arxiv.org/abs/2309.10764v1 | # An optical Ising spin glass simulator with tuneable short range couplings
###### Abstract
Non-deterministic polynomial-time (NP) problems are ubiquitous in almost every field of study. Recently, all-optical approaches have been explored for solving classic NP problems based on the spin-glass Ising Hamiltonian. However, obtaining programmable spin-couplings in large-scale optical Ising simulators, on the other hand, remains challenging. Here, we demonstrate control of the interaction length between user-defined parts of a fully-connected Ising system. This is achieved by exploiting the knowledge of the transmission matrix of a random medium and by using diffusers of various thickness. Finally, we exploit our spin-coupling control to observe replica-to-replica fluctuations and its analogy to standard replica symmetry breaking.
## I Introduction
Non-deterministic polynomial-time problems (NP-problems) are important in many fields of study, from the physical to the social sciences [1; 2; 3; 4; 5; 6; 7]. However, they are often intractable on classical computers, as their solve-time scales exponentially with the size of the input [8]. An archetypal NP-problem is finding the ground-state of an Ising spin-system [9; 10; 11; 12; 13]. This is of particular interest as many widespread NP-problems can be analytically mapped [14; 7] onto an Ising Hamiltonian (c.f. Equation 1). Solving any of these NP-problems thus reduces to finding the ground state of the corresponding Ising system.
Recently, Ising models have been experimentally simulated in a number of ways, using both classical and quantum systems [15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40]. Two very promising classes of systems are optical Ising simulators based on optical parametric oscillators (OPOs) [28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42] and photonic annealers based on wavefront shaping (WS) [43; 44; 45; 46; 47]. The former exploits OPOs and time-multiplexing, whereas the latter uses propagation in complex media. OPO-based methods have high tunability but lack scalability. WS-based methods, on the other hand, offer high scalability and connectivity but are limited in their tunability. The degree of customization of WS-based simulators can be increased by leveraging the knowledge of the transmission matrix [14].
In this Letter, we demonstrate control of the spin interaction length by using thin diffusive media. By tuning the interaction length, we observe replica-to-replica fluctuations analogous to replica symmetry breaking (RSB) in spin glasses [48; 49]. Our results represent a relevant step toward the realization of fully-programmable Ising machines on free-space optical platforms, capable of solving complex spin-glass Hamiltonians on a large scale.
A system of \(N\) coupled spins is described by the Hamiltonian in Equation 1. For the Sherrington-Kirkpatrick model [11], the couplings (\(J_{ij}\)) are all-to-all random couplings drawn from a Gaussian distribution.
Figure 1: **Optical spin-glass simulator.** A laser is collimated on a spatial light modulator (SLM) and passes through a thin diffuser to then reach the imaging plane. The surface of the SLM is directly imaged (via a 4f-system) on the surface of the diffuser. The light is focused on two output modes. The inset shows the two rows of the transmission matrix \((t_{im})_{im}\) summed and then reshaped to SLM dimensions. The light reaching the pink (resp. yellow) focus can only come from certain pixels within the pink (resp. yellow) cone on the SLM plane \((\sigma_{i})\), of which the extent is given by the distance \(d\) and/or the diffuserβs matrix \(t_{im}\), corresponding to solving an Ising Hamiltonian with local couplings.
Finding the ground-state of Equation 1 is an NP-hard problem. This problem can be mapped onto physical hardware via coherent light propagation in a disordered medium. In this configuration, one can show that the transmitted intensity after a multiply-scattering medium takes the form [16]:
\[H=-\sum_{i,j=1}^{N}J_{ij}\sigma_{i}\sigma_{j}=-I \tag{1}\]
with \(J_{ij}=\sum_{m}^{M}\mathrm{Re}\big{\{}\overline{t_{im}}t_{jm}\big{\}}\), where \(m\) runs over the output modes and \(t_{im}\) and \(t_{jm}\) are transmission matrix (\(T\)) elements of the random medium. The matrix \(T\) links the input modes on the spatial light modulator (SLM) (\(E_{in}\)) and the output modes on the camera (\(E_{out}\)) via \(E_{out}=TE_{in}\)[50]. The spins are encoded on the input modes (pixels) with a binary phase of 0 and \(\pi\), corresponding to \(\pm 1\) spin states, respectively [51, 16]. Using the physical system described in Figure 1 together with an optimization algorithm, one can find the ground state of the Ising problem [15].
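As a concrete illustration, the sketch below (an assumption-laden toy model, not the experimental code) draws an i.i.d. complex Gaussian transmission matrix, encodes the spins as binary phases, and verifies that the detected intensity reproduces \(H=-I\) for the couplings defined above, with the sign convention fixed so that maximizing the intensity minimizes the Hamiltonian.

```python
# Toy model of the optical Ising mapping: intensity = -H for J_ij = sum_m Re(t_im* t_jm).
import numpy as np

rng = np.random.default_rng(42)
N, M = 256, 2                     # N spins (SLM macro-pixels), M output modes
T = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2 * N)
sigma = rng.choice([-1.0, 1.0], size=N)       # binary phases 0/pi -> spins +/-1

intensity = np.sum(np.abs(T @ sigma)**2)      # total intensity on the camera
J = np.einsum('mi,mj->ij', T.conj(), T).real  # J_ij = sum_m Re(conj(t_im) t_jm)
H = -sigma @ J @ sigma
print(np.isclose(H, -intensity))              # True: maximizing I minimizes H
```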
The couplings distribution (\(J_{ij}\)) can be tuned by using the knowledge of the transmission matrix \(T\)[14]. In the present work, we use thin diffusive media [52, 53]. These allow us --in contrast to thick scattering media [15, 16, 17]-- to control the extent of spins contributing to the Hamiltonian. The further the camera is from the diffuser, the more the speckle pattern spatially expands and the more the light coming from various input modes is mixed. The insets in Figure 2 show a graphical representation of the number of pixels contributing to the intensity on a single CCD pixel at two different distances. In terms of spin couplings, this means that for a given spin, the number of connected spins increases as we move further from the medium, an effect that can be used to control the spatial extent of the spin interaction. Remarkably, there exist two extreme points. The first is when the distance is such that the speckle is fully developed, leading to an all-to-all coupled spin system (e.g. at position \(d_{2}\) in Figure 2). The second is when the distance is such that the spins are not coupled at all (e.g. at position \(d_{1}\) in Figure 2). In this situation, the distance is so small that the light reaching a specific output can only come from one input mode (or SLM macro-pixel). In other words, it is an Ising system of \(N\) decoupled spins.
In Reference [14] we demonstrated the ability to tune the Hamiltonian of the system such that we obtain magnetized ground states for a given set of couplings. As shown in Figure 2, we apply the same technique to thin diffusers to tune the couplings such that we obtain a ground state with a localized magnetization. Figure 2b shows, for both simulations and experiments, three ground states for three different (increasing) distances between diffuser and CCD. We observe that the magnetized region is localized and has a finite size, whereas fully-magnetized
Figure 3: **Controlling the interaction between spin clusters.****a)** Simulation (first columns) and experimental (second columns) spin ground states. **b)** TM amplitudes obtained summing all rows of the measured TM. Two isolated regions of magnetized spins (when optimizing over two foci at the CCD plane) as displayed on the SLM (first row). As the distance between CCD and diffuser increases, the extent of the magnetized regions also increases; the regions start to overlap and therefore interact (second row).
Figure 2: **Tuning the spin correlation length by optical propagation.****a)** Schematic representation of the setup in two different distance configurations: \(d_{1}\) in red and \(d_{2}\) in orange, where the colored cones represent where the light reaching one CCD pixel comes from on the SLM. **b)** Simulation (first col) and experimental (second col) spin maps at three different imaging plane CCD-DIFF distances. The further away the plane, the wider the correlation. **c)** The amplitudes of the experimental transmission matrices at these same three distances.
ground states are expected for a thick medium [14]. Figure 2c shows the amplitude of the transmission matrix coefficients (\(|T|\)) for the three previous distances. It is evident that the light reaching the given CCD pixel (chosen for the optimization) comes from only a subset of the input modes (i.e. spins). One can also note that the interaction length between the spins grows with the distance to the diffuser (c.f. Figure 2c).
The ability to tune the spatial extent of the region where spins are mutually coupled allows us to create several clusters and explore the interactions between them by selecting two pixels at some distance and optimizing over the sum of their intensities. We define the interaction area as the zone where the two regions overlap. The spins within this area contribute to both foci at the two camera pixels corresponding to the two regions. Figure 3 displays an example of two such regions, varying their distance and the interaction length. In detail, Figure 3a shows the ground states, for both simulations and experiments, obtained for the whole system (both regions) and for two different distances between diffuser and CCD. The first column shows the case where the two regions are not overlapping; the second column shows the case where they are. Figure 3b shows the transmission matrix amplitudes for the two previous cases. We can also tune the overlap between spin clusters by keeping the distance \(d\) fixed and translating the two pixels on the CCD, effectively optimizing for two closer pixels.
We finally investigate the probability of the two interaction regions being magnetized in an uncorrelated way as their distance varies. When the regions are not coupled, they are independent and their magnetizations thus agree only 50% of the time. When they get closer, their magnetizations tend to be correlated. This is due to the fact that the regions are no longer independent: the overlap between them corresponds to a coupling between the two regions. To quantify the correlations between various replicas, we defined, in a manner analogous to the Parisi order parameter [54, 55, 56, 19], the following metric for fully-magnetized spin configurations:
\[q_{\alpha\beta}=1-\frac{\left|\sum_{i=1}^{N}\left(X_{i}^{\alpha}-X_{i}^{\beta}\right)\right|}{N}\]
where \(\alpha\) and \(\beta\) refer to two replicas, \(N\) is the number of spins in the region of interest and \(X_{i}^{\alpha,\beta}\in\{-1,1\}\) are spin configurations. The zone of interest in which the correlation between replica ground states is calculated is a smaller region than the full input modes mask (c.f. green square in inset schematics of Figure 4a).
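For concreteness, the metric can be evaluated as in the short sketch below, where the replica configurations are synthetic stand-ins for the measured ground states within the green square.

```python
# Illustrative evaluation of the replica-correlation metric q_ab defined above.
import numpy as np

def q_overlap(Xa, Xb):
    """q_ab = 1 - |sum_i (X_i^a - X_i^b)| / N for fully magnetized replicas."""
    return 1.0 - abs(np.sum(Xa - Xb)) / Xa.size

up = np.ones(100)
print(q_overlap(up, up))    #  1.0: identical (correlated) replicas
print(q_overlap(up, -up))   # -1.0: fully anti-correlated replicas
# Collecting q over many replica pairs yields P(q), whose peak defines q_max.
```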
Figure 4a shows the correlation between replicas as a function of the distance between the two regions. The correlation is quantified by the maximum of the probability distribution of the correlation between replicas, \(q_{\text{max}}=\text{argmax}_{q}P(q)\), where \(P\) represents the distribution of the degree of correlation \(q\) between ground states. The correlation is close
Figure 4: **Controlling frustration and replica symmetry breaking.****a)** Two regions of fixed size are increasingly getting closer on the SLM plane. Each point corresponds to ten replicas for one given distance. The graph shows the maximum of the probability distribution of the correlation between replicas \(q_{max}\) as a function of the distance between the regions (in px on the CCD image). Replica-to-replica fluctuations vanish when the correlation is close to 1. **b)** and **c)** Histograms of the ground-state correlation for different replicas at maximum and minimum distance respectively.
to unity when replica-to-replica fluctuations drop to 0. Figure 4b and Figure 4c show the histograms of the correlation between ground states of different replicas when the two regions are the furthest and the closest, respectively. We observe that by increasing the overlap between the two areas --and therefore their interaction-- the correlation increases, in analogy with the replica symmetry breaking transition typical of random spin systems (c.f. Figure 4c). A simulation of our system can be found in Appendix C; it describes the conditions on the overlap under which one observes RSB-like behavior and the assumptions made on the coupling amplitude distribution.
In short, we have observed and demonstrated experimental control over the interaction length of an Ising spin-glass system based on free-space optics and disordered media. We have also shown that we can control the interaction between two regions of spins and induce replica symmetry breaking.
The proposed system is a step towards encoding more complex Hamiltonians in the hope of solving more complex NP-problems. It also constitutes a new platform for studying replica symmetry breaking. Moreover, the change in degeneracy when regions interact suggests that the system develops long-range couplings from short-range couplings. This effect could be leveraged as a new type of annealing approach: one could anneal a subset of the system and, driven by the interaction, the whole system would then anneal.
Furthermore, one could even consider generalizing this Ising system to a Hopfield system [51]. Since our experimental setup is algorithm-agnostic and the spins are defined with a continuous phase encoding, a continuous orientation of the spins could be explored. Other platforms could also be envisioned, such as using multiple SLMs or using non-linear or more complex media to obtain more complex coupling distributions (bimodal, multimodal, etc.).
###### Acknowledgements.
L.D., G.J., R.P., D.P., C.C., and S.G. designed the project. L.D. carried out experiments and data analysis, G.J. and R.P. numerical simulations. L.D. and D.P. wrote the paper with contributions from all the authors. This project was funded by the European Research Council under the grant agreement No. 724473 (SMARTIES). R.P. thanks Clare College, University of Cambridge for funding via a Junior Research Fellowship.
## Methods
**Experimental setup.** We experimentally evaluated our approach for controlling the couplings of the spin simulator (c.f. Figure 1) as well as its extension with the additions described below. A laser (Coherent Sapphire SF 532, \(\lambda=532nm\)) is directed onto a reflective phase-only, liquid-crystal SLM (Meadowlark Optics HSP192-532, \(1920\times 1152\) pixels) divided into \(N=256\) macro-pixels (spins). The Fourier transform of the modulated light is projected on the objective back focal-plane (OBJ1, \(10\times\), \(\text{NA}=0.1\)) and focused on a scattering medium (DIFF). As a scattering medium, we used a commercially available surface diffuser (Edmund, 12.5mm, 25'). In practice, using a thin volumetric diffuser or combining a surface diffuser and free-space propagation are equivalent in our scheme. The scattered light is then collected by a second objective (OBJ2, \(20\times\), \(\text{NA}=0.4\)) and the transmitted intensity is detected by a CCD camera (Basler acA2040-55\(\mu\)m, \(2048\times 1536\) pixels). The spins and bias (from the TM [14]) are encoded by the SLM in a phase pattern whose binary part is sequentially updated until the ground-state is reached. Note that any optimization algorithm can be used, i.e., the setup is algorithm-agnostic, as the advantage of the aforementioned simulator resides in the parallel measurement of the energy [16].
**Transmission matrix calculation and ground-state search.** The transmission matrix of the scattering medium was estimated as in [50]. In detail, each row of the TM can be reconstructed by monitoring how the intensity on a given CCD pixel changes when a phase modulation is applied to the input patterns on the SLM. These interferometric measurements provide the TM. Taking the phase conjugate of this matrix gives the SLM mask necessary for proper focusing [14]. The TM is sensitive to translations and rotations of the scattering medium as well as to the input and detection hardware. In this work, we define the stability-time as the time over which the TM varies within \(10\%\) of its original value. This time (typically \(\sim 120\) minutes) is long enough to run our experiments, but larger systems would need more stable architectures. The ground-state search is conducted sequentially by means of a recurrent digital feedback. Computation starts from a random configuration of N binary macro-pixels (spins) on the SLM. The measured intensity distribution determines the feedback signal. At each iteration, an arbitrary batch of spins is randomly flipped and kept if it increases the intensity at a chosen output mode. The batch size decreases over the optimization procedure, starting from \(12\%\) of the pixels down to a single pixel for the last \(\sim 600\) iterations.
**Numerical methods.** The numerical model used in this work is a generalization of [16]. The optical spin glass is numerically simulated by forming N pixel blocks (SLM plane). The initial optical field has a constant amplitude, and its phase is a random configuration of N binary phases, \(\phi_{i}=0,\pi\). A transmission matrix T with i.i.d. complex Gaussian entries is generated. At each iteration, a randomly selected single spin is flipped. The input phase is updated if the output total intensity increased after the linear propagation of the field. The bias in the numerical framework is calculated as in the experiment, starting from the knowledge of T. Numerical evaluation of \(I_{T}\) corresponds to a measurement with a detector in a noiseless system. In general, within this scheme, \(\sim 10N\) iterations are sufficient for good convergence, i.e., until the focus intensity reaches a plateau. All codes are implemented in MATLAB on an Intel processor with 14 cores running at 3.7 GHz and supported by 64 GB of RAM.
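A condensed Python transcription of this single-spin-flip search is sketched below under the same assumptions (i.i.d. complex Gaussian T, binary input phases, noiseless intensity readout); it is illustrative and not the MATLAB code used in this work.

```python
# Single-spin-flip ground-state search on a simulated optical spin glass.
import numpy as np

rng = np.random.default_rng(7)
N, M = 256, 4
T = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2 * N)
sigma = rng.choice([-1.0, 1.0], size=N)

def total_intensity(T, sigma):
    return float(np.sum(np.abs(T @ sigma)**2))

I = total_intensity(T, sigma)
for _ in range(10 * N):                 # ~10N iterations suffice for convergence
    i = rng.integers(N)
    sigma[i] *= -1                      # flip a randomly selected spin
    I_new = total_intensity(T, sigma)
    if I_new > I:
        I = I_new                       # keep the flip: intensity (i.e. -H) grew
    else:
        sigma[i] *= -1                  # otherwise revert the flip
print("final intensity (= -H at the found state):", I)
```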
|
2305.19495 | Random Vibration Testing of Microelectromechanical Deformable Mirrors
for Space-based High-Contrast Imaging | Space-based stellar coronagraph instruments aim to directly image exoplanets
that are a fraction of an arcsecond separation and ten billion times fainter
than their host star. To achieve this, one or more deformable mirrors (DMs) are
used in concert with coronagraph masks to control the wavefront and minimize
diffracted starlight in a region of the image known as the ``dark zone" or
``dark hole." The DMs must have a high number of actuators (50 to 96 across) to
allow dark holes that are large enough to image a range of desired exoplanet
separations. In addition, the surfaces of the DMs must be controlled at the
picometer level to enable the required contrast. Any defect in the mechanical
structure of the DMs or electronic system could significantly impact the
scientific potential of the mission. Thus, NASA's Exoplanet Exploration Program
(ExEP) procured two 50$\times$50 microelectromechanical (MEMS) DMs manufactured
by Boston Micromachines Corporation (BMC) to test their robustness to the
vibrational environment that the DMs will be exposed to during launch. The DMs
were subjected to a battery of functional and high-contrast imaging tests
before and after exposure to flight-like random vibrations. The DMs did not
show any significant functional nor performance degradation at $10^{-8}$
contrast levels. | Axel Potier, Camilo Mejia Prada, Garreth Ruane, Hong Tang, Wesley Baxter, Duncan Liu, A J Eldorado Riggs, Phillip K. Poon, Eduardo Bendek, Nick Siegler, Mary Soria, Mark Hetzel, Charlie Lamb, Paul Bierden | 2023-05-31T02:06:00Z | http://arxiv.org/abs/2305.19495v1 | Random Vibration Testing of Microelectromechanical Deformable Mirrors for Space-based High-Contrast Imaging
###### Abstract
Space-based stellar coronagraph instruments aim to directly image exoplanets that are a fraction of an arcsecond separation and ten billion times fainter than their host star. To achieve this, one or more deformable mirrors (DMs) are used in concert with coronagraph masks to control the wavefront and minimize diffracted starlight in a region of the image known as the "dark zone" or "dark hole." The DMs must have a high number of actuators (50 to 96 across) to allow dark holes that are large enough to image a range of desired exoplanet separations. In addition, the surfaces of the DMs must be controlled at the picometer level to enable the required contrast. Any defect in the mechanical structure of the DMs or electronic system could significantly impact the scientific potential of the mission. Thus, NASA's Exoplanet Exploration Program (ExEP) procured two 50\(\times\)50 microelectromechanical (MEMS) DMs manufactured by Boston Micromachines Corporation (BMC) to test their robustness to the vibrational environment that the DMs will be exposed to during launch. The DMs were subjected to a battery of functional and high-contrast imaging tests before and after exposure to flight-like random vibrations. The DMs did not show any significant functional nor performance degradation at \(10^{-8}\) contrast levels.
Deformable mirrors, High-contrast imaging, Wavefront sensing and control, Exoplanets.
*Axel Potier, [email protected]
## 1 Introduction
The 2020 Decadal Survey on Astronomy and Astrophysics[1] prioritized the development of technologies for directly imaging Earth-like exoplanets with the future Habitable World Observatory (HWO) flagship mission. The document recommended a large (\(\sim\)6-meter) infrared/ optical/ ultraviolet (IR/O/UV) telescope with a stellar coronagraph or starshade to be launched in the first half of the 2040s. If a coronagraph instrument is selected, it will be designed to attenuate the diffracted light from the host star to create a region of high contrast in the image (known as the "dark zone" or "dark hole") where exoplanets that are \(\sim 10^{10}\) times fainter than the star may be imaged at angular separations of \(<\)1 arcsecond. To accomplish this, a series of coronagraph masks
and one or more deformable mirrors (DMs) will be used to minimize the stellar intensity in the dark hole [2, 3].
DM technologies are being developed for both ground- and space-based high-contrast imaging applications. Electrostrictive devices [4, 5] and microelectromechanical (MEMS) [6, 7, 8] systems are the most advanced for space applications and have been at least partially flight qualified. On one hand, the Roman Space Telescope (RST) Coronagraph Instrument will make use of two 48\(\times\)48 electrostrictive devices manufactured by AOA Xinetics [9]. These DMs have demonstrated high reliability and high performance, but the contact between the electrodes and the reflective face sheet may increase the potential for unwanted motions due to thermal variations. Moreover, these devices have a relatively large (\(\geq\)1 mm) inter-actuator pitch. On the other hand, MEMS DMs have also demonstrated promising performance [10, 11, 12]. Their contactless technology mitigates hysteresis and other instabilities caused by environmental factors and allows a smaller pitch (0.3-0.4 mm), which makes them attractive candidates for the future HWO mission [13, 14]. However, the technological readiness of MEMS DMs lags behind that of the electrostrictive DMs that will be demonstrated in flight by RST. Indeed, lead magnesium niobate (PMN) electroceramic actuators, manufactured by AOA Xinetics, have successfully completed the full space qualification process for use in the RST Coronagraph Instrument. However, the mechanical construction differs, making it impossible to transfer this heritage to MEMS devices.
Boston Micromachines Corporation (BMC) MEMS DMs have been extensively tested in vacuum at High Contrast Imaging Testbed (HCIT) demonstrating their ability to function and endure in a vacuum. However, it was found that the absence of air allows residual high-frequency electrical noise to cause mechanical resonance of the DM membrane. This issue was resolved by implementing RC filters on each channel to dampen the electrical noise before it could induce mechanical
vibration [8]. Thereafter, proper operation in a vacuum chamber was successfully demonstrated [12].
The next milestone in space qualification for the MEMS DM is proving its ability to survive the General Environmental Verification Standard (GEVS) vibration profile. We note, however, that fully assessing the survivability and operational capabilities of a DM during launch and in space would require multiple additional stages. Before allocating more resources for further tests, such as acoustics, shock, radiation, and EMI, we first sought to confirm the MEMS DM's endurance under these conditions. Prior attempts to evaluate MEMS vibration survivability revealed that the tested devices exhibited altered behavior following the shake and vibe process [15]. However, after discussing the experimental setup with the authors, we concluded that the change might have been caused by other factors, such as particle contamination. Lacking sufficient information to evaluate the MEMS DM's vibration resistance, we decided to conduct a new study, which is presented in this paper.
In that context, and as part of a NASA Small Business Innovation Research project (SBIR) titled "Improved Yield, Performance and Reliability of High-Actuator-Count Deformable Mirrors", BMC developed a new fabrication process and several design modifications that were integrated in a complete fabrication cycle, producing fully operational 2040-actuator continuous face-sheet MEMS DMs.
With support from NASA's Exoplanet Exploration Program's (ExEP) coronagraph technology development efforts, several sets of these DMs were tested for reliability in a flight-like environment. Expanding on our previous results [16], this paper reports on experiments carried out at the HCIT facility at NASA's Jet Propulsion Laboratory to demonstrate the robustness of BMC's MEMS DM technology to random vibrations during rocket launch. In Sec. 2, we present the manufacturing and integration of these DMs and their design from the dedicated electronics to the DM
face-sheet. In Sec. 3, we describe the workflow of the tests performed in HCIT and the different facilities used in these experiments. Finally, Sec. 4 presents the results that demonstrate the survivability and robustness of high-actuator-count MEMS DMs to random vibrations.
## 2 Design and fabrication of the MEMS DMs
### DM design and wafer fabrication
For this study, we procured two 2040-actuator DMs from BMC with the characteristics specified in Table 1 and the layout illustrated in Fig. 1. The DMs were manufactured on 1.1 mm thick substrates that are more than four times stiffer than the standard substrate. This thickness was optimized through a finite element analysis studying the stresses on the mirror caused by random vibrations, using conservative RST coronagraph instrument flight acceptance specifications (see blue curve in Fig. 7). It brings about a higher resistance to the bending stresses exerted by the thin films. This change also reduces the surface deformation of the unpowered DM, thereby increasing the usable DM stroke for wavefront control after flattening. BMC developed custom tooling at the commercial MEMS foundry to work with the new substrate and process these thicker wafers.
Figure 1: Diagram of the devices under test. (a) A zoomed-in view of a 2\(\times\)2 actuator region. Each actuator (0.4 mm across) consists of a flexure anchored at its edges. (b) The full DM layout with 2040 actuators as well as wire traces extending radially to the edges of the 32.8 mm substrate. (c) The full wafer layout as manufactured, which typically has several DMs of different formats to make best use of the available wafer area.
Aside from the substrate thickness, no step in the manufacturing process differs from that of standard BMC DMs. The procedure, illustrated in Fig. 2, was as follows:
1. Silicon dioxide and a low stress silicon nitride layer were deposited on a single crystal silicon substrate to electrically insulate the conductive substrate from the MEMS devices (Fig. 2, step 1).
2. The first layer of polysilicon (referred to as Poly 0a) was then deposited, patterned, and etched to create actuator base electrodes and wire routing for the array (Fig. 2, step 2).
3. A low-temperature oxide (LTO) layer was deposited and polished using chemo-mechanical polishing techniques to flatten the layer. A second dielectric film, low-stress silicon nitride, was then deposited (Fig. 2, steps 3-4).
4. The LTO and silicon nitride layers were lithographically patterned and etched to provide a path for electrical connectivity between the wire traces, actuator electrodes, and grounded landing pads, which were produced in a subsequent polysilicon thin film (Poly 0b) deposition and patterning process (Fig. 2, step 4).
5. An array of actuator electrodes was fabricated (Fig. 2, step 5). Then, a thick sacrificial
\begin{table}
\begin{tabular}{l c} \hline \hline \# of actuators in active area & 2040 \\ \hline \# of actuators across the active area & 50 \\ \hline Pupil diameter & 19.6 mm \\ \hline Actuator pitch & 400 \(\mu\)m \\ \hline Substrate thickness & 1.1 mm \\ \hline Actuator stroke & 1.0 \(\mu\)m \\ \hline Operating voltage & 0-98V \\ \hline \hline \end{tabular}
\end{table}
Table 1: High-actuator-count MEMS DMs properties
layer (Oxide 1) made up of phosphosilicate glass (PSG) and a thin barrier layer of LTO was deposited on the Poly 0b layer to create the actuator gap (Fig. 2, step 6). The stroke of the electrostatic actuators depends on its thickness.
6. The Oxide 1 film was patterned and etched once more to create the actuator anchor features
Fig 2: BMCβs MEMS DM fabrication process.
(Fig. 2, step 7).
7. A second layer of polysilicon, Poly 1, was deposited, patterned, and etched to create actuator anchors and compliant actuator flexures with integrated hard stops (Fig. 2, step 8).
8. A second sacrificial layer, Oxide 2, was deposited and polished to remove print-through from the underlying films. The mirror attachment post features were then patterned and etched into the Oxide 2 film (Fig. 2, steps 9-11).
9. A final polysilicon layer, Poly 2, was then deposited, polished, and patterned (Fig. 2, step 12).
10. Metal bond-pads were added to allow for wire-bonding of the device, and to allow the wafers to be diced into individual DM devices.
11. Finally, the sacrificial oxide layers were removed using a wet etch "release" process (Fig. 2, step 13).
Once the devices were received from the foundry, BMC inspected the DMs using visible and IR optical microscopy and interferometry to identify potential optical, electrical, and subsurface manufacturing defects. Using a custom probe station, each candidate die that passed this initial inspection was tested to determine actuator response. Their electro-mechanical and optical performance were then characterized, including responsiveness of each actuator, stroke limits, unpowered surface error, and actuator defects.
We selected two devices for this study, which we refer to as Device Under Test (DUT) 1 and 2. Their initial surface properties are summarized in Table 2. DUT 1 is a 100% functional unit that was coated with an evaporated thin film of aluminum at BMC's facility. The purpose of DUT 1
is to confirm or reject the hypothesis that a fully functional 2K MEMS DM can survive a launch environment. On the other hand, DUT 2 had some unresponsive actuators. We kept its face sheet uncoated to allow post-vibration infrared inspection of the DM surface and help understand any failure mode. This DUT aimed to test the hypothesis that defects causing anomalous actuators can propagate to neighboring actuators during random vibrations [16].
### Electrical connections
For both DUTs, the dies were attached with adhesive to a ceramic package specifically designed for the 2K DMs (see Fig. 3). The DM die and the ceramic package were electrically connected using gold wire-bonds applied with a high-precision automated tool at BMC's facility. JPL also fabricated flex cables that were connected to the back of the new chip carrier through a pin-grid array (PGA) and terminated in 528-pin MEG-Array connectors (see Fig. 4).
The packaged DMs were tested using high-voltage drivers commercially available from BMC to characterize their electro-mechanical and optical performance. BMC's electronics connect to the DM via the MEG-Array connectors.
\begin{table}
\begin{tabular}{l c c} \hline \hline & DUT 1 & DUT 2 \\ \hline Initial yield & 100\% & 99.3\% \\ \hline Unpowered surface deformation (PV/RMS, in nm) & 604/116 & 797/100 \\ \hline Maximum powered surface deformation (PV/RMS, in nm) & 586/113 & 1382/115 \\ \hline Flat map deformation (PV/RMS, in nm) & 89/3.4 & 787/17 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Initial surface quality properties of DUT 1 and DUT 2. Unpowered and powered surface errors are dominated by a strong astigmatism, characteristic of MEMS DMs.
## 3 Procedure
### Testing overview
Both DUT 1 and 2 underwent a battery of tests before and after exposure to random vibrations. The carrier, die, die bonds, PGA joints and carrier to test mount bonds were inspected in all phases. The actuator functionality and performance were tested using the steps outlined below. The MEG-Array connectors and receptacles are not envisioned as part of the flight system and are therefore not included in our analysis. The workflow of these experiments for both DUTs is illustrated in Fig. 5. Results of these tests are described in Sec. 4.
Figure 4: Pin-grid array (PGA) assembly at the back of the ceramic chip. The MEG-Array connector was not considered part of the random vibe test.
Figure 3: Front schematic of the ceramic chip carrier, mount, and rigid flex cables designed for the 2K MEMS DMs.
### Infrared inspection
Before applying the face sheet coating to DUT 1, both DUTs were inspected at BMC using transmissive infrared microscopy. The system was automated to translate the DM and image many actuators in sequence. To achieve this, an MLS203 fast X-Y stage was installed on a BX51 Olympus microscope. The microscope was also equipped with an MFC1 motorized focus controller to focus either on the wiring or on the mirror layer. The wiring layer was inspected over the entire 32.8\(\times\)32.8 mm die in steps of 400 \(\mu\)m (the actuator pitch) for a total of 6,724 images. The mirror layer inspection was restricted to the device area to image each actuator individually, and 4,080 images were taken in a serpentine clockwise path. In total, 10,804 images were recorded per DM, to be compared before and after random vibe in case of failure. DUT 2 had three actuators with clearly visible defects just after fabrication (see the infrared image of an anomalous actuator in Fig. 14).
Figure 5: Testing workflow. The tests before exposure to random vibrations consisted of an IR microscope inspection and functional tests using a Fizeau interferometer. DUT 1 was also used for wavefront control (WFC) performance testing on a coronagraph testbed. After random vibe, each DM was functionally tested using an interferometer. DUT 1 was then used for WFC performance testing and DUT 2 was inspected using the IR microscope.
### Functional testing
DUT 1 was then coated and both DMs were sent to the HCIT facility at JPL for testing. Functional testing was done using a Fizeau interferometer (Zygo Verifire). The DMs were placed inside a plastic enclosure that was purged with a continuous flow of dry air to maintain a relative humidity of \(<\)30% during operation and to avoid any electrostatic discharge event [17]. The devices were connected through the MEG-Array connectors to commercial 14-bit electronics provided by BMC for the 2K-DM to perform a battery of functional spatio-temporal tests. The input voltage was limited to 90 V.
The HCIT team developed a series of functionality tests aimed at highlighting defective actuators before and after random vibe, which were sorted into several categories. A "pinned" actuator is fixed to its unpowered position and is easily noticeable when a uniform voltage is applied to the DM. A "free-floating" actuator does not move in response to electric commands but remains free to follow the displacements of its neighbors. "Tied" actuators occur when more than one actuator responds to a command sent to a single actuator index. A "weak" actuator moves significantly less than its neighbors for the same command. Finally, an "anomalous" actuator can be any of the above categories or otherwise defective. The standard functionality test process consists of the routines described below. For each surface measurement recorded by the Zygo interferometer, the uncommanded surface was subtracted and piston, tip, and tilt aberrations were removed from the data in post-processing.
The functionality test routines are as follows:
Applying a uniform voltage. A uniform voltage is applied to all actuators. The measurement is taken at increasing voltage levels. Pinned actuators are particularly apparent in the resulting data.
Poking individual rows and columns. A uniform voltage is applied to the actuators in the same rows and then columns (100 measurements for a 2K MEMS DM). This highlights anomalous actuators and helps determine their index. This is also used to confirm the mapping between high-voltage channels in the electronics and the actuators. Tied actuators are also noticeable in the data, unless they are located in the same row or column.
Poking grids of actuators. The actuators are divided into \(4\times 4\) regions and one actuator of each region is poked simultaneously such that a regular grid is created, with large enough separation to avoid coupling effects. This process is repeated 16 times to cover all the DM actuators. Free-floating actuators are particularly visible in the resulting data. This is a standard calibration routine for our DMs since it can be used to estimate the voltage-to-surface-displacement conversion for each actuator using a limited number of images.
Poking individual actuators. The actuators are poked one-by-one. This is used to find the index of each anomalous actuator noticed in earlier stages and to resolve ambiguities.
Stability measurement. First, a uniform voltage is applied once to all actuators and one measurement is recorded every minute for two hours (uncommanded stability). Second, a uniform voltage is applied every minute for two hours, immediately followed each time by a measurement with the interferometer (commanded stability). These tests aim to measure DM drifts over time. The low-order spatial aberrations are filtered to monitor the drift of individual actuators. The mean of the recorded time series is subtracted from each image and the standard deviation is measured. An animation of the processed images is also visually inspected for anomalies.
Repeatability measurement. The voltage is cycled between zero and a uniform voltage for all actuators every five seconds for 50 s and a measurement is recorded after each cycle. This aims to highlight any hysteresis due to the DM or the electronics. The images are processed the same way as the stability measurements.
Temporal response measurement. We apply zero volts to the DM followed by a uniform bias. Ten measurements are then recorded as quickly as possible with the interferometer for about 20 s. The aim is to identify slow-responding actuators. One measurement takes about two seconds on average, which prevents detection of temporal frequencies higher than 0.25 Hz. The low-order spatial aberrations are removed in post-processing and the time series is subtracted by its last image to highlight any differences in our visual inspection.
Calibrating the DM. The grid of actuators is used to determine the location of each actuator with respect to the Zygo beam and the surface displacement for a given voltage. The DM is then flattened iteratively using the information collected in this way. The linear or quadratic voltage-to-surface-height conversion is then measured at the flat DM state, which is then used in the model for the wavefront sensing and control method used to create the dark hole in the coronagraph instrument.
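As an illustration of this last step, a minimal least-squares fit of the (approximately quadratic) electrostatic response is sketched below; the voltages and the synthetic gain are illustrative stand-ins for the processed Zygo grid-poke data, not measured values.

```python
# Hedged sketch of a per-actuator voltage-to-displacement calibration, h(V) ~ g*V^2.
import numpy as np

rng = np.random.default_rng(1)
voltages = np.array([20.0, 40.0, 60.0, 80.0])   # commanded volts (illustrative)
true_gain = 120e-12                              # m/V^2; gives ~1 um stroke near 90 V
heights = true_gain * voltages**2 + rng.normal(0.0, 5e-10, voltages.size)

# Least-squares fit of h = g*V^2 (electrostatic actuators respond ~quadratically)
g = np.sum(heights * voltages**2) / np.sum(voltages**4)
print(f"fitted gain: {g * 1e12:.1f} pm/V^2")

def volts_for_height(h, g):
    """Invert the quadratic response to command a target surface displacement."""
    return np.sqrt(np.clip(h, 0.0, None) / g)

print(f"{volts_for_height(500e-12, g):.1f} V for 500 pm of stroke")
```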
### Performance testing
DUT 1 was also tested for high-contrast imaging purposes using the In-Air Coronagraph Testbed (IACT)[18] in the HCIT facility at JPL. The IACT optical layout is shown in Fig. 6. A 637 nm monochromatic light source was injected into the enclosed testbed through a single mode fiber to simulate a star. A charge 6 vector vortex coronagraph (VVC) was used to limit the sensitivity to low order wavefront aberrations due to air turbulence.[19, 20, 21, 22]
The injected light was passed through a linear polarizer (LP) and a quarter wave plate (QWP) to circularly polarize the light source upstream of the focal plane mask (FPM). The LP had an extinction ratio of \(10^{5}\) and the QWP had a retardance of 0.24\(\lambda\) at 637 nm. The off-axis parabola (OAP) 1 then collimated the beam and reflected it toward an 18.48 mm pupil, immediately followed by DUT 1. There were 46.2 actuators across the beam at the DM. Similar to the functional test described above, DUT 1 was placed inside a plastic box with a continuous flow of dry air to reduce the humidity. To limit air turbulence, the flow was optimized to reach a relative humidity of 25% to meet the DM specification. A Fluke DewK thermo-hygrometer was inserted in the dry box to actively sense the humidity level through a software watchdog. The box had an opening on the front side to allow the beam to reflect off the DM to the 1524 mm focal length OAP2 that focused the light on the VVC FPM. The FPM was fixed to a 3-axis mount. The diffracted light was then reflected on the 762 mm focal length OAP3 and blocked by a Lyot Stop (LS) of diameter 7.5 mm
Figure 6: Schematic of the optical layout of the In-Air Coronagraph Testbed (IACT) in the HCIT facility at JPL. Not to scale.
on a 2-axis mount. Considering the magnification of the OAPs, the LS diameter was 81.2% of the pupil image.
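As a quick sanity check, assuming the pupil demagnification between the DM and the Lyot plane is simply the ratio of the OAP3 and OAP2 focal lengths, the quoted 81.2% ratio is recovered:

```python
# Back-of-the-envelope check of the Lyot stop ratio (assumed 2-OAP relay).
pupil_d = 18.48           # mm, beam diameter at the DM
mag = 762.0 / 1524.0      # OAP3 / OAP2 focal lengths
ls_d = 7.5                # mm, Lyot stop diameter
print(f"{ls_d / (pupil_d * mag):.3f}")   # ~0.812, i.e. 81.2% of the pupil image
```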
A "D" shaped field stop (FS) of size 3-10\(\lambda/D\) was added in the downstream conjugated focal plane. The purposed of the FS was to enhance the contrast in the final focal plane at the camera by minimizing stray light or photo-electrons inside the corrected regions adjacent to saturated regions. The FS was placed on a 3-axis mount to optimize focus and can be moved in and out for calibration purposes. The dark images that are later subtracted from the dark hole images were recorded by fully blocking the light at the FS plane.
After the FS, the beam was collimated by OAP5 to pass through another set of QWP and LP that minimize the incoherent leakage caused by the imperfect retardance of the VVC FPM. The rotation angles of the QWP+LP were optimized by minimizing the signal on the science detector with the VVC FPM fully removed from the beam. Finally, OAP6 directed the light to the science detector where the final image was formed. A neutral density filter wheel can be used for calibration purposes, for instance to prevent over-exposure of the unocculted PSF used for calibration, and also has an optical lens to allow pupil imaging. The science camera, an Andor Neo sCMOS, was electrically cooled to -40 \({}^{\circ}\)C and the generated heat was removed with a water cooler. The camera was mounted on a single-axis stage to control the focus. The pixel pitch was 6.5 \(\mu m\) and the resolution of the focal plane images was 24.7 pixels per \(\lambda/D\).
Standard wavefront sensing and control (WS&C) algorithms and calibration procedures dedicated to high-contrast imaging were used to minimize the simulated stellar intensity at the detector plane and improve the raw contrast level (intensity of the attenuated starlight normalized by the maximum of the unocculted PSF).[23] Phase retrieval algorithms based on both Gerchberg-Saxton formalism[24] and the fitting of low-order Zernike modes were used to flatten the DM and calibrate
its response. At least three images close to the focal plane and three images close to the pupil plane were used to run the algorithm. After flattening the DM, the Strehl ratio of the unocculted PSF was very close to 1.0. The VVC FPM and the LS were automatically centered on the beam iteratively through the acquisition of pupil images. Dark images and off-axis PSFs were then recorded and the FS was introduced in the beam to allow the desired off-axis dark hole region to pass through.
Wavefront sensing and wavefront control were performed with pair-wise probing (PWP) and electric field conjugation (EFC), respectively [25, 26], through the FALCO software [27]. Both algorithms require a high-performance DM to achieve contrast levels of \(\sim 10^{-8}\). The dark hole (DH) region where the stellar residuals are attenuated is defined by the FS aperture, which extends from 3 to 10 \(\lambda/D\). As the contrast improved in the DH, the exposure time on the science camera was increased from 100 s to 300 s to maintain a sufficient signal-to-noise ratio. The \(\beta\)-bumping technique [28] was regularly used to achieve the best possible coherent contrast in the dark hole region. The raw contrast in the dark hole is intricately linked to the DM performance. Performance testing results for DUT 1 are presented in Sec. 4.1.
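To make the control step concrete, a schematic and deliberately simplified EFC iteration is sketched below; it is not the FALCO implementation. Here `G` stands for the Jacobian relating DM commands to the complex field at the dark-hole pixels (estimated by PWP in practice), and the Tikhonov regularization parameter is what the \(\beta\)-bumping technique periodically relaxes; all array sizes and values are illustrative assumptions.

```python
# Schematic single EFC iteration: regularized least-squares field conjugation.
import numpy as np

rng = np.random.default_rng(0)
n_act, n_pix = 2040, 500
G = 1e-5 * (rng.standard_normal((n_pix, n_act)) + 1j * rng.standard_normal((n_pix, n_act)))
E = 1e-4 * (rng.standard_normal(n_pix) + 1j * rng.standard_normal(n_pix))  # estimated field

def efc_step(G, E, log10_beta):
    # Stack real/imaginary parts so the problem becomes a real least-squares solve
    Gr = np.vstack([G.real, G.imag])
    Er = np.concatenate([E.real, E.imag])
    reg = 10.0**log10_beta * np.max(np.sum(Gr**2, axis=0))
    return -np.linalg.solve(Gr.T @ Gr + reg * np.eye(G.shape[1]), Gr.T @ Er)

du = efc_step(G, E, log10_beta=-2)   # DM command update for this iteration
print("mean contrast before:", np.mean(np.abs(E)**2))
print("mean contrast after :", np.mean(np.abs(E + G @ du)**2))
```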
### Random vibration environment
In previous work, we demonstrated the robustness of actuators surrounded by functional actuators under flight-like thermal cycles and vibrations, as well as the compatibility of partially-functional 50\(\times\)50 MEMS DMs with vacuum environments[16]. This work expands on that study: here we specifically test the robustness of actuators in the vicinity of defective actuators, as well as fully-functional 50\(\times\)50 MEMS DMs. The DMs and their respective mounts were shaken at JPL on a 10-inch cube shaker. The flex cable was fixed to the edge of the mount with a flex clamp while the other end of the flex was curled loosely and taped down onto the moving platform of the
shaker to avoid any damage. Particle contamination and humidity were controlled during the test to ensure the DMs were not subject to alternative sources of electric degradation and failure [29], as suspected in previous studies [15]. The applied signal ranged between 20 Hz and 2000 Hz, which corresponds to typical frequency ranges for most launch vehicles. The temporal acceleration spectral density of the vibrations in each of the three spatial axes was between 0.01 g\({}^{2}\)/Hz and 0.4 g\({}^{2}\)/Hz and is shown in Fig. 7. The DMs therefore underwent 11.7 gRMS over all frequencies for 2 min per axis. This qualification test is conservative with respect to any potential launch vehicle and surpasses the flight-acceptance levels of the Roman Space Telescope Coronagraph Instrument specifications.
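The quoted overall level follows from integrating the acceleration spectral density over frequency, \(g_{\mathrm{RMS}}=\sqrt{\int\mathrm{PSD}(f)\,df}\). A minimal sketch is shown below; the breakpoints are illustrative placeholders rather than the exact qualification profile, and we assume the conventional log-log interpolation between breakpoints.

```python
# Sketch: overall gRMS of a random-vibe specification, computed as the
# square root of the area under the acceleration PSD. The breakpoint
# values are illustrative, not the tested profile.
import numpy as np

breakpoints = [(20, 0.01), (80, 0.4), (500, 0.4), (2000, 0.01)]  # (Hz, g^2/Hz)

f = np.logspace(np.log10(20), np.log10(2000), 4096)
log_f = [np.log10(a) for a, _ in breakpoints]
log_p = [np.log10(b) for _, b in breakpoints]
psd = 10 ** np.interp(np.log10(f), log_f, log_p)   # log-log interpolation

g_rms = np.sqrt(np.trapz(psd, f))
print(f"overall level: {g_rms:.1f} gRMS")
```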
## 4 Test results
### DUT1
DUT1 is a 100% yield device intended to validate that a fully functioning MEMS DM passes random vibration testing. The criteria of success are visual inspection of the structure; the actuator responsiveness, stroke, and voltage-to-displacement gain; and the ability to create a dark hole at relevant contrast levels (\(\sim 10^{-8}\)).
Figure 7: Power spectral density of the vibration experienced by the MEMS DMs in the three axes.
This section compares the results of functional and performance testing before and after random vibration at JPL.
Fig. 8 represents one of the 16 regular grids of actuators that were poked during the functional test of DUT1 before and after the random vibe test. The image resolution is slightly lower after vibe because we did not ensure the same Zygo zoom settings before and after the random vibe test. Such a resolution remains acceptable for direct comparison with pre-vibe data since we have many more than the required 4 pixels per DM actuator. As in the 15 other grids, the behavior of the actuators was identical before and after random vibe, regardless of the applied voltage. No anomaly in the influence-function shape or displacement of the DM surface occurred. After DM flattening, the final shape was measured to be 3.41 nm RMS wavefront error before and 3.37 nm RMS after random vibe. Around the flat setting, the quadratic relationship between the surface displacement amplitude of each actuator and the applied voltage remained identical before and after random vibe.
Figure 8: Optical path difference (in nanometers) that corresponds to a functional test where a regular grid of actuators was poked, before (left) and after (right) the shaking of DUT1.
None of the functional tests showed any anomalies on DUT1, either before or after the DM underwent random vibration.
After functional testing, DUT1 was installed on IACT to test the DM performance in a coronagraph instrument. Figure 9 shows the normalized intensities in the science image before and after random vibe, and after a few dozen WS&C iterations. The mean contrast in the DH is \(1.19\times 10^{-8}\) (before random vibe) and \(9.53\times 10^{-9}\) (after random vibe) while the spatial standard deviation is equal to \(1.42\times 10^{-8}\) and \(1.28\times 10^{-8}\), respectively. The mean coherent contrast is equal to \(3.80\times 10^{-9}\) before and \(4.34\times 10^{-9}\) after, with respective standard deviations of \(3.71\times 10^{-9}\) and \(3.86\times 10^{-9}\). The small difference in coherent contrast is explained by slightly higher internal turbulence on the testbed after random vibe. We also observe in Fig. 9 some horizontal artifacts that result from diffraction effects caused by a slight misalignment of the field stop in the z-direction. These highly localized effects did not impact the convergence of the PWP+EFC algorithm nor the computation of contrast performance. The decrease of flux after random vibe might induce an underestimation of the mean incoherent intensity, leading to a slight overestimation of the contrast performance after random vibe.
Figure 9: Post-WS&C contrast maps (\(\times 10^{8}\)) before (top) and after (bottom) the DUT1 underwent flight-like shaking. The total (left), coherent (center) and incoherent (right) intensities are presented. The exposure time for both total intensity images is 300 s but the source injection unit has been moved between the experiments, explaining the noise discrepancy.
Figure 10 overlays the radial profiles of the total intensity images, with the mean contrast calculated in annuli of \(\lambda/8D\). On one hand, both Fig. 9 and Fig. 10 emphasize that the IACT performance is limited at low separations by an Airy pattern. This pattern is not sensed by PWP and is known to be incoherent leakage due to manufacturing defects in the VVC FPM and in the LP and QWP [23]. This leakage could be further reduced by improving the retardance error of the QWP and the extinction of the LP currently used. On the other hand, the coherent component in the DH was measured below \(10^{-8}\) in both cases and its speckle intensity structure was modified at each iteration. We therefore attribute the remaining coherent component to the internal turbulence in IACT on timescales of a single WS&C iteration. This effect could be reduced on IACT by lowering the dry air-flow or by installing an additional WS&C system to specifically control low-order spatial aberrations at higher temporal frequencies [30].
Figure 10: Post-WS&C radial profiles of the raw contrast on the science detector before (blue) and after (red) the DUT1 underwent random vibe testing.
Nonetheless, from the results of both the performance and functional tests on DUT1, we can conclude that DUT1, a 100% functional 2K MEMS DM, survived random vibrations similar to those of a launch vehicle.
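The radial profiles of Fig. 10 can be reproduced from the normalized-intensity maps with a short routine. The sketch below is ours; it assumes a known star position in pixels and the 24.7 pixels per \(\lambda/D\) plate scale given above, with annuli of width \(\lambda/8D\) to match the figure.

```python
# Sketch: azimuthally averaged raw-contrast profile in annuli of width
# 1/8 lambda/D. `image` is a dark-subtracted, PSF-peak-normalized map and
# `center` = (x, y) is the star position in pixels (assumed known).
import numpy as np

def radial_profile(image, center, pix_per_lod=24.7, width_lod=0.125):
    y, x = np.indices(image.shape)
    r = np.hypot(x - center[0], y - center[1]) / pix_per_lod  # in lambda/D
    edges = np.arange(0.0, r.max(), width_lod)
    seps, means = [], []
    for r0, r1 in zip(edges[:-1], edges[1:]):
        ring = (r >= r0) & (r < r1)
        if ring.any():
            seps.append(0.5 * (r0 + r1))      # annulus center
            means.append(image[ring].mean())  # mean raw contrast
    return np.array(seps), np.array(means)
```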
### DUT2
DUT2 had a few defective actuators and no metallic coating, with the intention of testing the hypothesis that anomalous actuators can propagate to neighbors during rocket launch. DUT2 was left uncoated to allow IR inspection after random vibe. DUT2 also underwent the battery of functional tests described above, but was not used for performance testing.
In pre-vibe functional testing, DUT2 was found to be \(\sim 99.3\)% functional (see Fig. 11): it had three pinned actuators, two pairs of tied actuators, and two pairs and one triplet of weak and tied actuators (whose voltage-to-amplitude gain is divided by the number of associated actuators).
Figure 11: Grid of DUT2 actuators. Yellow: Tied actuators (60-89, 129-164). Green: Tied and weak actuators (502-549, 593-641-690, 1337-1338). Red: Pre-vibe pinned actuators (1283, 1701, 1999). Orange: Post-vibe pinned actuators due to poor connections at the MEG-Array connectors level (1392, 1616).
One result of the functional test is shown in Fig. 12, where the same grid of actuators was poked as in Fig. 8. The poke-grid measurements show that DUT2 did not change behavior at the actuator level. As an example, Fig. 13 shows the deflection of the neighbors of one defective actuator (index 1283) while applying an individual voltage of 0.025 BMC unit on top of a flat bias of 0.05 BMC unit. Their influence functions were fitted with a Gaussian whose maximum amplitude is reported in this plot. Error bars are computed as the standard deviation of the measured amplitude over the 8 neighbors. The deflection of actuator 1283's neighbors remained identical before and after random vibration: the failure did not propagate during the rocket-launch simulation. These results were confirmed by the remaining functional tests, described in Sec. 3.3.
Preliminary post-vibe tests of DUT2 revealed new anomalous actuators with respect to pre-vibe, in particular actuators with tied characteristics. After further investigation, we realized these anomalies were due to poor connections at the MEG-Array connector level rather than the DM itself. Indeed, the connectors were disconnected before the random vibe and then reconnected for the functional tests.
Figure 12: Optical path difference (in nanometers) that corresponds to a stage of functional test, where a regular grid of actuators was poked, before (left) and after (right) random vibe testing of DUT2.
This process is sensitive since the pins can be easily bent if the connectors are not carefully handled. The defective connectors were fixed either by disconnecting and then reconnecting the faulty MEG-Array connector, or by replacing the connector savers if the initial reconnection appeared unsuccessful. The state of the MEG-Array connectors was carefully inspected throughout the whole process. Given the challenges faced by our team related to connectors, we advocate for the development of more robust high-density connector technology and more practical DM driver electronics[8].
We also imaged the initial defective actuators and their neighbors with the infrared microscope to confirm the DM was not affected by the simulated rocket launch. No changes to the anomalous actuators or their neighbors were noticed during the post-vibe infrared inspection. Figure 14 shows the infrared image of one pinned actuator as well as the neighbor of another, focusing either on the wiring layer or on the mirror layer, before and after random vibe. The anomaly shown on the pinned actuator is apparent on both layers. The comparison of the images before and after random vibe shows that the damage has not propagated from the initial defect.
Figure 13: Deflection of actuator 1283's neighbors. We applied 0.025 BMC unit on these individual actuators on top of 0.05 BMC unit applied to all DUT2 actuators.
The second set of images shows that none of the neighboring carrier, die, die bonds, PGA joints, or actuators were affected by the random vibe test. From these results, we saw no evidence that anomalous actuators propagate to neighbors during random vibe.
## 5 Conclusion
As part of a NASA SBIR, BMC and JPL jointly developed a new fabrication process for 50\(\times\)50 MEMS DMs. Two of these DMs underwent a battery of experiments to test their ability to survive in a launch vehicle. We have demonstrated that 1) a 100% functional 2K MEMS DM maintains 100% functionality and 2) anomalous actuators do not propagate to neighboring actuators after undergoing launch-level vibrations. In conclusion, BMC's 2K continuous face-sheet MEMS DMs have passed three-axis random vibe environmental testing at bounding launch loads encompassing those of future launch vehicles.
Figure 14: Pre- (left) and post- (right) vibe infrared images of both a pinned actuator (top) and the direct neighbor of a pinned actuator (bottom). Left images are focused on the wiring layer. Right images are focused on the mirror layer in reflection due to the DM package.
Acoustics, shock, and radiation testing remain key steps towards achieving TRL 6 for BMC's MEMS DM technology. In addition, we recommend further development of connector systems for flight DMs in order to lower the risk of creating anomalous actuators during future DM testing, flight qualification, and mission development.
### Acknowledgments
The research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80NM0018D0004). The authors thank the anonymous reviewers for their detailed feedback on this manuscript.
### Disclosure
This paper is the end product of the intermediary work presented in the SPIE Proceedings: Prada et al. 2021 ("Environmental testing of high-actuator-count MEMS deformable mirrors for space-based applications," Proc. SPIE 11823, Techniques and Instrumentation for Detection of Exoplanets X, 118230M (1 September 2021); doi: 10.1117/12.2594263).
|
2308.00138 | No Strings Attached: Boundaries and Defects in the Cubic Code | Haah's cubic code is the prototypical type-II fracton topological order. It
instantiates the no string-like operator property that underlies the favorable
scaling of its code distance and logical energy barrier. Previously, the cubic
code was only explored in translation-invariant systems on infinite and
periodic lattices. In these settings, the code distance scales superlinearly
with the linear system size, while the number of logical qubits within the
degenerate ground space exhibits a complicated functional dependence that
undergoes large fluctuations within a linear envelope. Here, we extend the
cubic code to systems with open boundary conditions and crystal lattice
defects. We characterize the condensation of topological excitations in the
vicinity of these boundaries and defects, finding that their inclusion can
introduce local string-like operators and enhance the mobility of otherwise
fractonic excitations. Despite this, we use these boundaries and defects to
define new encodings where the number of logical qubits scales linearly without
fluctuations, and the code distance scales superlinearly, with the linear
system size. These include a subsystem encoding with open boundary conditions
and a subspace encoding using lattice defects. | Cory T. Aitchison, Daniel Bulmash, Arpit Dua, Andrew C. Doherty, Dominic J. Williamson | 2023-07-31T20:12:09Z | http://arxiv.org/abs/2308.00138v1 | # No Strings Attached: Boundaries and Defects in the Cubic Code
###### Abstract
Haah's cubic code is the prototypical type-II fracton topological order. It instantiates the no string-like operator property that underlies the favorable scaling of its code distance and logical energy barrier. Previously, the cubic code was only explored in translation-invariant systems on infinite and periodic lattices. In these settings, the code distance scales superlinearly with the linear system size, while the number of logical qubits within the degenerate ground space exhibits a complicated functional dependence that undergoes large fluctuations within a linear envelope. Here, we extend the cubic code to systems with open boundary conditions and crystal lattice defects. We characterize the condensation of topological excitations in the vicinity of these boundaries and defects, finding that their inclusion can introduce local string-like operators and enhance the mobility of otherwise fractonic excitations. Despite this, we use these boundaries and defects to define new encodings where the number of logical qubits scales linearly without fluctuations, and the code distance scales superlinearly, with the linear system size. These include a subsystem encoding with open boundary conditions and a subspace encoding using lattice defects.
###### Contents
* I Introduction
* I.1 Summary of Results
* I.2 Outline of Paper
* II Background
* II.1 Review of Quantum Error-Correcting Codes
* II.2 The Cubic Code
* II.2.1 Lattice Symmetries
* II.2.2 Encoding Properties
* II.2.3 Fracton Mobility
* II.3 Numerical Methods
* III Boundaries
* III.1 Semi-Infinite Boundaries
* III.1.1 Boundary Excitations
* III.1.2 Periodic Behavior
* III.1.3 Translational-Symmetry Violations
* III.2 Boundary Seams
* IV Superlinear-Distance Boundary Codes
* IV.1 Tennis Ball
* IV.2 Subsystem Codes
* V Defects
* V.1 Vacancies
* V.2 Edge Dislocations
* V.3 Screw Dislocations
* VI Superlinear-Distance Defect Codes
* VI.1 Vacancy Encodings
* VI.2 Edge Dislocation Encodings
* VII Conclusion
* A Additional Boundary Codes
* A.1 Simple Cases
* A.1.1 Tennis Ball
* A.1.2 Tube
* A.2 Periodic Boundaries
* B Additional Defect Codes
* B.1 Additional Vacancy Encodings
* B.2 Additional Edge Dislocation Encodings
* B.3 Screw Dislocation Encodings
* C Additional Material
* C.1 Proof of Eq. (9)
* C.2 Color Code Correspondence
* C.3 Code Availability
## I Introduction
Quantum computers are required to operate effectively in the presence of errors and noisy operations [1; 2; 3; 4]. A primitive component of a quantum computer is the quantum hard drive: a system capable of safely storing quantum information for long periods of time.
In comparison to leading approaches such as the surface code [5; 6], which require _active_ procedures to continually detect and correct for errors [7], such a hard drive should be _passively_ self-correcting. To this end, one can envision a system where quantum information is encoded in an energetic ground state and errors that corrupt this information are suppressed by macroscopic energy barriers [8]. Unfortunately, this behavior is impossible to achieve in many cases - such as the surface code - due to no-go theorems that prohibit self-correction in \((2+1)\)D systems [8; 9; 10; 11; 12; 13; 14; 15].
Fortunately, these theorems do not apply to higher spatial dimensions. Already in \((3+1)\)D there are topological codes with no _string-like logical operators_ that have significantly better energy barriers than any \((2+1)\)D code [16; 17]. The earliest such example is Haah's _cubic code_, which was found via a computational search [16]. The cubic code model is part of a larger classification of unconventional topological phases of matter, known as _fracton topological orders_[16; 17; 18; 19; 20; 21; 22; 23; 24; 25]. In this classification, topological codes with no string-like operators are called type-II fracton phases [24]. As a type-II fracton phase, the cubic code only supports topological excitations that are completely immobile [16]. When used as an error-correcting code, this immobility results in a code distance that scales superlinearly with the linear system size i.e., \(d\sim\mathcal{O}(L^{\alpha})\) for \(\alpha>1\), where \(L\) is the number of lattice sites along one axis. Moreover, the minimum energy required to map between degenerate ground states via local operations - also known as the energy barrier - scales as \(\mathcal{O}(\log(L))\)[26]. This energy barrier enables the cubic code to be partially self-correcting: its quantum memory time increases with the system size only up to a finite threshold that decreases with temperature [26]. For comparison, the surface code in \((2+1)\)D has a memory time independent of system size. This property makes the cubic code a leading candidate for creating a quantum hard drive in \((3+1)\)D.
There are, however, additional features of the cubic code that are undesirable for applications to quantum error correction (QEC). The number of encoded qubits, \(k\), varies sporadically with the system size [17] (see Fig. 3). In an actual implementation, achieving a large \(k\), therefore, requires the system to be from a family of carefully chosen system sizes that have large jumps between them. Moreover, the model was formulated in a translation-invariant setting with periodic boundary conditions, arranging the physical qubits on a 3-torus. This topology is not feasible in a strictly local architecture.
Since the discovery of the cubic code, there has not yet been an examination into how the model's topology or geometry may be modified, and how these modifications may affect its core characteristics, such as its no-string property or fracton topological order. A significant question is whether open boundary conditions or the inclusion of lattice defects affect the error-correcting properties of the code, either by improving or worsening them. In other codes, including such modifications has the potential to increase the number of encoded qubits or enable additional fault-tolerant quantum gates [6; 27; 28; 29]. However, in our setting, open boundary conditions and defects have the potential to introduce string-like operators and reduce the energy barrier for logical errors.
In this work, we characterize the properties of boundaries and defects in the cubic code, including their interactions with quasiparticle excitations. We investigate whether modifying the cubic code by introducing boundaries and defects can affect the scaling of the number of qubits while maintaining the superlinear code distance and favorable energy barrier of the periodic model. We approach this with a combination of analytic arguments, visualizations, and numerical computations. For the latter, we simulate lattices with up to approximately \(20^{3}\) qubits, and assume a consistent extrapolation for larger systems.
### Summary of Results
In this paper, we consider the construction of \(X\)- or \(Z\)-type open boundaries normal to a crystallographic axis. These boundaries are gapped using _plaquette_ stabilizers formed from truncated bulk \(X\) and \(Z\) cube stabilizers respectively. Both cases exhibit two topologically distinct interactions with the fractonic excitations: \(X\)-type boundaries on the negative-oriented faces of the lattice (and \(Z\)-type on the positive faces) condense single fractons. Conversely, \(X\)-type positive faces (and \(Z\)-type negative faces) cause their corresponding fractons to gain a \((2+1)\)D mobility within diagonal subsystems along the surface. These boundary layers have a direct correspondence to the 6-6-6 color code [30].
Our results demonstrate that the no-string property of the closed cubic code model is not readily retained in the presence of open boundary conditions. Because of this, it is nontrivial to construct QEC codes with open boundary conditions that have the desired superlinear code distance. Table 1 summarizes the different configurations of open boundary conditions considered in this paper; none contain only logical operators with superlinear weight. Notably, however, the scaling of \(k\) in all nontrivial cases is now a simple function of the linear system size \(L\) that does not exhibit large fluctuations.
It is nevertheless possible to create a cubic code with open boundary conditions that has a superlinear distance. For this, we use the _tennis ball 1_ configuration from Table 1. In the corresponding code, logical \(\bar{X}\) and \(\bar{Z}\) operators stretch between two boundaries in the \(\hat{x}\) and \(\hat{y}\) lattice directions respectively. There exist logical \(\bar{X}\) operators supported solely near the \(+\hat{z}\) face, and \(\bar{Z}\) operators near the \(-\hat{z}\) face, that have linear weights. However, those further in the bulk have weight superlinear in \(L\). A subsystem code [31; 32] can be used to gauge out the logical qubits with either an \(\bar{X}\) or \(\bar{Z}\) that can be supported near a boundary face - thus producing a code with superlinear distance.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Notation & Name & Encoded Qubits (\(k\)) & Logical Weight \\ \hline \((eee;eee)\) & Only \(e\) & 0 & - \\ \((eee;mee)\) & One \((m)\) & 0 & - \\ \((mee;eee)\) & One \((m_{ABC})\) & 0 & - \\ \((eee;mem)\) & Two \((m)\) & \(2\min\{L_{x},L_{z}\}-6\) & Constant, Superlinear \\ \((mem;eee)\) & Two \((m_{ABC})\) & 0 & - \\ \((mee;eme)\) & \((m)\) \& \((m_{ABC})\) & 0 & - \\ \((mem;mee)\) & \textit{Tennis ball 1} & \(2L_{z}\) & Linear, Superlinear \\ \((mee;mem)\) & \textit{Tennis ball 2} & \(2L_{z}-6\) & Constant, Superlinear \\ \((mee;mee)\) & \textit{Tube} & \(2(L_{y}+L_{z}-L_{x})-3\) & Constant, Linear, Superlinear \\ \((mmm;eee)\) & \textit{Half-half 1} & 0 & - \\ \((eee;mmm)\) & \textit{Half-half 2} & \(4\min\{L_{x},L_{y},L_{z}\}-12\) & Constant, Superlinear \\ - & \textit{Triangular} & \(4L-4\) & Linear, Superlinear \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of the properties for codes with open boundaries on a lattice with linear system size \((L_{x},L_{y},L_{z})\). The diagrams accompanying each configuration in the original table (not reproduced here) show the positive faces on the left and the negative faces on the right, with red and dark red (blue and light blue) indicating \(X\) (\(Z\)) stabilizers; vertex stabilizers are implied when three edges of the same color meet, and the half-length edges at the vertices of the _triangular_ configuration have vertex stabilizers but no edge stabilizers. The notation column corresponds to the \((xyz;\bar{x}\bar{y}\bar{z})\) faces of the lattice, where \(p\) indicates a periodic boundary and \(e\) and \(m\) are open boundaries that interact with \(e\) and \(m\) excitations respectively. The final column identifies the presence of at least one logical operator (\(\bar{X}\) or \(\bar{Z}\)) with a minimum weight that scales with the indicated function of \(L_{x},L_{y}\), and/or \(L_{z}\).
Alternatively, with boundary conditions that are periodic in the \(\hat{z}\) direction only, we are able to construct stabilizer codes with simple linear scaling of \(k\) and a superlinear code distance without resorting to subsystem codes. This result, along with other periodic boundary condition codes, is summarized in Table 2.
In addition to open boundary conditions, we study the inclusion of crystal lattice defects including vacancies, edge dislocations, and screw dislocations. Similar to the open boundaries, we focus on configurations that are aligned with the crystallographic axes. While modified stabilizer terms are provided for vacancies and edge dislocations, screw dislocations do not admit additional deformed stabilizers. We explore how condensation of fractons on defects affects the fracton mobility in the vicinity of these features. We propose several encodings using defects; configurations such as a pair of edge dislocations or multiple vacancies wrapped around a periodic boundary can form stabilizer code families with superlinear distances and a simple linear scaling of \(k\). These results are discussed in detail in Section VI.
To the best of our knowledge, this work constitutes the first exploration of boundaries and defects in a type-II fracton topological order. Our results demonstrate that introducing defects and boundaries into the cubic code leads to encodings with new features that could prove advantageous over encodings based on periodic boundary conditions. This includes encodings with a number of logical qubits that scales linearly with the linear system size, without fluctuations. This work is a first step towards a general theory of translation symmetry enrichment in type-II fracton topological orders.
### Outline of Paper
This paper is organized as follows: In Section II we present background on quantum error-correcting codes and outline the key properties of the cubic code. In Section III we characterize the open boundary conditions of the cubic code. In Section IV we discuss constructions of superlinear-distance codes. In Section V we characterize the inclusion of defects - vacancies, edge dislocations, and screw dislocations. In Section VI we discuss the use of defects to construct superlinear-distance codes. In Section VII we present our conclusions. The appendices include a discussion of further open and periodic boundary codes (Appendix A) and defect codes (Appendix B) considered in this study. A summary of all the potential boundary encodings is provided in Tables 1 and 2.
## II Background
We begin with a brief review of quantum error-correcting codes and self-correcting quantum memories, before discussing the cubic code in particular.
### Review of Quantum Error-Correcting Codes
A quantum error-correcting (QEC) code is a scheme to encode one or more quantum states within a higher-dimensional Hilbert space in order to provide the ability to detect and correct a class of errors. Arbitrary errors are generated by the algebra of single-qubit Pauli errors, spanned by the Pauli \(X\) and \(Z\) operators
\[X:=\begin{pmatrix}0&1\\ 1&0\end{pmatrix},\qquad Z:=\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}, \tag{1}\]
written in the computational basis \(\{\ket{0},\ket{1}\}\) corresponding to the two states of a physical qubit. One approach for constructing QEC codes is to employ the _stabilizer formalism_[33, 34]: Consider a system of \(n\) physical qubits. We select a commuting collection of (tensor) products of Pauli operators and consider the stabilizer group \(\mathcal{S}\) that they generate. We require that \(-I\notin\mathcal{S}\), where \(I\) is the identity operator. Quantum information is then encoded in the eigenvectors of the degenerate \(+1\)-eigenspace common to all elements of \(\mathcal{S}\). That is, any physical measurement of the encoded state using \(S_{i}\in\mathcal{S}\) returns a \(+1\) value. Importantly, single-qubit \(X\) and \(Z\) operators anti-commute with some \(S_{i}\), thus mapping any encoded state out of the \(+1\)-eigenspace and producing a change in the measurement outcomes. This can be detected and corrected with appropriate QEC codes.
For a system with \(s\) independent stabilizer generators and \(n\) physical qubits, the degeneracy of the \(+1\)-eigenspace is \(2^{n-s}\). Equivalently, the number of encoded _logical qubits_ that are protected from errors is \(k=n-s\).
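These requirements are conveniently phrased in the binary symplectic representation used for the numerical computations of Section II.3. As a minimal self-contained sketch (ours), two Pauli strings commute exactly when the symplectic form of their binary vectors vanishes mod 2:

```python
# Pauli strings over {I, X, Y, Z} as binary (x, z) vectors; P and Q commute
# iff x_P . z_Q + z_P . x_Q = 0 (mod 2).
import numpy as np

def to_symplectic(pauli):
    x = np.array([p in "XY" for p in pauli], dtype=int)
    z = np.array([p in "ZY" for p in pauli], dtype=int)
    return x, z

def commute(p, q):
    (x1, z1), (x2, z2) = to_symplectic(p), to_symplectic(q)
    return (x1 @ z2 + z1 @ x2) % 2 == 0

assert commute("XX", "ZZ")      # anticommute at both sites: overall commute
assert not commute("XI", "ZI")  # anticommute at exactly one site
```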
An effective QEC code should have a large number of encoded qubits, but it should also make it difficult for errors to affect the encoded information.
\begin{table}
\begin{tabular}{c c c} \hline \hline Notation & Encoded Qubits (\(k\)) & Code Distance \\ \hline \((ppp;ppp)\) & Eq. (6) & Superlinear \\ \((ppe;ppe)\) & \(\frac{1}{2}k_{(ppp;ppp)}+2\tau(L;\,\infty)\) & Linear \\ \((ppm;ppe)\) & \(4\min\{\tau(L_{x};\,L_{z}),\,\tau(L_{y};\,L_{z})\}\) & Linear \\ \((ppe;ppm)\) & \(0\) & - \\ \((pmm;pem)\) & \(2\tau(L_{x};\,L_{z})\) & Linear \\ \((pem;pem)\) & \(2L_{x}\) & Superlinear\({}^{*}\) \\ \((pem;pme)\) & \(0\) & - \\ \((pemm)\) & \(0\) & - \\ \((pmm;pem)\) & \(0\) & - \\ \((pem;pee)\) & \(0\) & - \\ \hline \hline \end{tabular}
\end{table}
Table 2: Summary of the properties for boundary codes on a lattice with linear system size (\(L_{x},L_{y},L_{z}\)), where some direction is periodic. In cases where the behavior of anisotropic systems is unclear, we take \(L_{x}=L_{y}=L_{z}\equiv L\). The notation column corresponds to the \((xy\bar{z};\bar{x}\bar{y}\bar{z})\) faces of the lattice, where \(p\) indicates a periodic boundary and \(e\) and \(m\) are open boundaries that interact with \(e\) and \(m\) excitations respectively. \(\tau(L_{1};\,L_{2})\) is a fractal-like function defined in Section III.1.2 that encapsulates the number of string-like operators that can form logical operators by wrapping around a periodic boundary. \({}^{*}\)Superlinearity is only ensured when \(3\!\nmid\!L_{x}\), otherwise there are linear-weight operators.
A logical operator, denoted as \(\bar{X}\) or \(\bar{Z}\), is an operator that commutes with all stabilizers, yet is not itself in the stabilizer group. In this way, logical operators act on the encoded states within the \(+1\)-subspace, changing the state of the logical qubit while not being detectable using stabilizer measurements. Effective QEC codes must therefore make it difficult for errors to create a logical operator (logical error). We quantify this difficulty in two ways: code distance and energy barriers.
_Code Distance._ Operators that act on encoded quantum states are only uniquely defined modulo multiplication by stabilizers. The weight of a logical operator - the number of single-qubit Pauli operators required to construct it - is therefore variable. The code distance is defined as the minimum weight of an operator that can create a logical error on the code, taking into account this multiplication by stabilizers.
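For very small codes, this definition can be evaluated by exhaustive search. The sketch below is ours and scales exponentially in \(n\), so it is purely illustrative: an operator is a logical error if it commutes with every stabilizer but is not itself a product of stabilizers, which we test with a GF(2) rank computation.

```python
from itertools import product

def parity(v):
    return bin(v).count("1") & 1

def commutes(p, s):  # p, s are (x, z) bitmask pairs
    return parity(p[0] & s[1]) ^ parity(p[1] & s[0]) == 0

def gf2_rank(rows):  # Gaussian elimination over GF(2) on packed rows
    m, rank = list(rows), 0
    while m:
        piv = m.pop()
        if piv == 0:
            continue
        rank += 1
        lead = piv.bit_length() - 1
        m = [r ^ piv if (r >> lead) & 1 else r for r in m]
    return rank

def distance(stabs, n):
    rows = [(x << n) | z for x, z in stabs]
    base, best = gf2_rank(rows), None
    for paulis in product("IXZY", repeat=n):  # exhaustive: tiny n only
        x = sum(1 << i for i, p in enumerate(paulis) if p in "XY")
        z = sum(1 << i for i, p in enumerate(paulis) if p in "ZY")
        if (x, z) == (0, 0):
            continue
        if all(commutes((x, z), s) for s in stabs):
            if gf2_rank(rows + [(x << n) | z]) > base:  # not in the group
                w = sum(p != "I" for p in paulis)
                best = w if best is None else min(best, w)
    return best

# 3-qubit repetition code (stabilizers ZZI, IZZ): a single Z is an
# undetectable logical error, so the distance is 1.
print(distance([(0b000, 0b011), (0b000, 0b110)], n=3))  # -> 1
```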
_Energy Barriers._ Additionally, we can consider the physical qubits in a QEC code as forming a quantum condensed matter system, evolving under a Hamiltonian
\[H=-\sum_{i=1}^{s}S_{i} \tag{2}\]
where \(\{S_{i}\}_{i=1}^{s}\) are spatially-local operators that generate the stabilizer group \(\mathcal{S}\). Since the encoded states belong to the \(+1\)-eigenspace of all \(S_{i}\), they also correspond to the ground state (minimum energy state) of this system. Pauli errors then map an encoded state into the \(-1\)-eigenspace of some \(S_{i}\), thus increasing the energy. These _flipped_ or _excited_ stabilizers can be interpreted as the location of excitations or quasiparticles, with emergent behavior such as mobility, charge, and even braiding statistics. The ability of a code to correct against local errors is equivalent to the condition of _topological order_: the state of the system cannot be determined solely by local operations [35, 36, 37, 7].
The energy barrier of this system is then defined as the minimum energy that must be surpassed in order to create a logical error by sequentially applying single-qubit Pauli errors. Importantly, larger energy barriers will cause the evolution of the system to naturally suppress the creation of logical errors when at nonzero temperatures. That is, for an energy barrier \(E\), entropy change \(S\), temperature \(T\) and Boltzmann constant \(k_{B}\), the time a system can remain in its encoded state (quantum memory time) scales approximately via the Arrhenius Law [38]
\[\tau_{\text{lifetime}}\sim\exp\left(\frac{E-TS}{k_{B}T}\right) \tag{3}\]
A _self-correcting quantum memory_ at finite temperature is then defined as a QEC code where the lifetime grows without bound in the number of physical qubits, at sufficiently small nonzero temperature [39]. A necessary condition for this, therefore, is that the energy barrier must grow with the system size. However, this behavior is impossible in all stabilizer codes formed by arranging the qubits in \((2+1)\)D and demanding the stabilizers be spatially local [8, 9, 10, 11, 12, 13, 14, 15]. These no-go theorems do not necessarily apply to higher dimensions. In particular, self-correction is readily possible in \((4+1)\)D [12, 7]. In \((3+1)\)D, there have been several attempts at creating such behavior [8, 13, 38, 39, 40, 41, 42]. One of the more promising candidates, and one of the only exactly-solvable candidates, is known as the _cubic code_.
### The Cubic Code
Initially proposed in Ref. [16], the cubic code is defined on a \((3+1)\)D simple cubic lattice with two qubits at each lattice site and periodically identified boundaries in all directions, forming a 3-torus.
Figure 1: Generators of the cubic code stabilizers, comprised of single-qubit Pauli operators. (Left) The \(C_{X}\) operator. (Right) The \(C_{Z}\) operator.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Charge & Created By & Excited Stabilizer & Color \\ \hline \(e\) & \(Z\)-type Pauli (\(ZI\), \(IZ\)) & \(C_{X}\) & \\ \(m\) & \(X\)-type Pauli (\(XI\), \(IX\)) & \(C_{Z}\) & \\ \hline \hline \end{tabular}
\end{table}
Table 3: Elementary charges of the cubic code model. Filled triangles and circles are used throughout this manuscript to refer to the Pauli operators. Color refers to the convention for indicating created charges.
Figure 2: As per Table 3, each single-qubit Pauli operator creates 4 excitations of cubic code stabilizers, arranged in a tetrahedral shape. Transparency is used to show cubes hidden by the 3D perspective.
We use the notation \(XI\) to denote the operator \(X\otimes I\) acting on the two qubits at a particular lattice site, with the identity on all other qubits. The model has two kinds of stabilizer generators, both with support on a subset of the 16 qubits at the 8 vertices of a unit cube. These generators, referred to here as \(C_{X}\) and \(C_{Z}\), are shown in Fig. 1.
Pauli operators create tetrahedral excitation patterns in the neighboring stabilizers, highlighted in Fig. 2. Following the convention used for the surface code, we denote these two types of excitations as \(e\) and \(m\), as in Table 3.
#### II.2.1 Lattice Symmetries
Noting the form of the stabilizers and their excitation patterns, there are three lattice symmetries of the cubic code relevant for discussions in this paper:
1. 3-fold rotation about \(\hat{x}+\hat{y}+\hat{z}\).
2. Mirror symmetry about the plane normal to \(\hat{x}-\hat{y}\).
3. The map \(\{IX\leftrightarrow ZI,IZ\leftrightarrow XI\}\) combined with spatial inversion.
These symmetries are used to relate boundaries and defects with different orientations in later sections.
#### II.2.2 Encoding Properties
The defining property of the cubic code is that all its nontrivial topological quasiparticle excitations\({}^{1}\) are fundamentally immobile _fractons_. This is equivalent to a property called "no string-like operators" [16]. A string-like operator is any operator that creates two constant-sized regions of nontrivial topological excitations, separated by an arbitrary distance \(\Delta\), with a weight that scales linearly with \(\Delta\). In doing so, these operators incur a maximum energy cost that is independent of \(\Delta\). Since such string-like operators do not exist in the cubic code, fractons cannot be moved large distances with constant energy. This immobility property is discussed further in Section II.2.3.
Footnote 1: A nontrivial topological excitation is one that cannot be created locally.
Importantly, this behavior results in a code distance that scales superlinearly with the linear system size \(L\) (the number of sites in each lattice direction), and an energy barrier that scales logarithmically with \(L\). Although seemingly promising for use in self-correction, we also need to consider the effects of entropy. The entropy of a system is typically extensive, scaling polynomially with system size [26]. As per Eq. (3), at finite temperature there will thus exist a given system size where increasing \(n\) further results in a decrease to the lifetime, as entropic contributions overwhelm the system. This energy barrier is therefore enough to ensure only _partial self-correction_ of the system [43]. Nevertheless, it remains one of the only stabilizer codes to achieve even this behavior.
A key theorem for self-correcting quantum memories is that for there to be no string-like operators, the ground state degeneracy - or equivalently, the number of encoded qubits \(k\) - in a \((3+1)\)D translation-invariant stabilizer code must depend on the system size [13]. In the case of the cubic code, \(k\) fluctuates significantly, bounded by \(2\leq k\leq 4L-2\), where we take a geometry with equal linear system size \(L\) in \(x,y,\) and \(z\). For notation, we define
\[q_{n}(L)=\begin{cases}1&n|L\\ 0&\text{otherwise}\end{cases} \tag{4}\]
and
\[\zeta(L)=\begin{cases}\max\left\{2^{z}\,:\,2^{z}|L,\,z\in\mathbb{Z}\right\}& L\in\mathbb{Z}\\ 0&\text{otherwise}\end{cases} \tag{5}\]
Using this, the exact empirical formula for \(2\leq L\leq 200\) is given by (see Ref. [16])
\[k=2\left[1-2q_{2}+2\zeta(L)\left(q_{2}+12q_{15}+60q_{63}\right)\right] \tag{6}\]
where we have written \(q_{2}=q_{2}(L)\) etc. for readability. This relationship is plotted in Fig. 3. A number-theoretic exact formula for \(k(L)\) is known for all \(L\) but the formula is cumbersome to write down and not enlightening for our purposes [44]. Importantly, there is a strong dependence on the exact divisibility of \(L\); changing \(L\mapsto L+1\) can cause \(k\) to fluctuate by several orders of magnitude. A guiding question for this work is whether the scaling can become a more consistent - ideally linear - function of \(L\) by introducing open boundaries and lattice defects.
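Eq. (6) is straightforward to tabulate. The sketch below (ours) reproduces the values plotted in Fig. 3 and checks the bound \(2\leq k\leq 4L-2\):

```python
def q(n, L):   # Eq. (4): indicator that n divides L
    return 1 if L % n == 0 else 0

def zeta(L):   # Eq. (5): largest power of 2 dividing the integer L
    z = 1
    while L % (2 * z) == 0:
        z *= 2
    return z

def k(L):      # Eq. (6), empirical for 2 <= L <= 200
    return 2 * (1 - 2 * q(2, L)
                + 2 * zeta(L) * (q(2, L) + 12 * q(15, L) + 60 * q(63, L)))

assert all(2 <= k(L) <= 4 * L - 2 for L in range(2, 201))
```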
Figure 3: Number of encoded qubits, \(k\), in the periodic cubic code model with linear system size \((L,L,L)\) in \(x,y,z\), as per Eq. (6). The value is bounded by \(2\leq k\leq 4L-2\).
#### II.2.3 Fracton Mobility
As noted previously, there are no string-like operators through the bulk of the cubic code model. Consequently, the topological quasiparticle excitations are strictly immobile and cannot be moved through the lattice without incurring an additional energy penalty that scales with the distance, \(\Delta\). There are three sufficient behaviors, presented below, that describe this generalized motion and form the basis of our arguments for code distance and energy barriers in later sections:
_Fractal Operators._ Firstly, fractons can be moved through the bulk of the model using operators arranged in the shape of _fractal_ tetrahedra. Originally described in Ref. [16], the excitation patterns in Fig. 2 can be repeated in a fractal pattern to create increasingly larger separations of charge, as in Fig. 4. Notably, doing so requires multiple excitations to move outwards and it involves intermediary high-energy states. It was from this fractal behavior that the logarithmic energy barrier of the periodic cubic code was derived [26]. Due to this fractal nature, this process can create excitations separated by a distance \(\Delta\) where \(\Delta=2^{j}\) for \(j=0,1,2,\ldots\). This dependence on powers of 2 contributes towards the sporadic scaling of logical qubits in the periodic cubic code: if the width of the lattice is not a power of 2, multiple smaller tetrahedra need to be combined in a nontrivial way, wrapping around the periodic boundary to annihilate all charges. This motivates the form of Eq. (5).
_Cascade Operators._ Secondly, excitations can be moved in a _cascading_ operation (using the terminology from Ref. [29]) that creates additional excitations at each step of the motion. To highlight this, we introduce three operators for \(e\) and \(m\), as in Fig. 5. Using these operators repeatedly creates the cascading procedure shown in Fig. 6. Importantly, this process moves excitations through the bulk while requiring a Pauli weight that scales superlinearly with the distance \(\Delta\), and creates additional excitations whose number scales linearly with \(\Delta\). Numerically continuing this process for larger separations produces the results in Fig. 7.
_Cage Operators._ Thirdly, following the terminology of Ref. [29], a _cage_ operator is a generalization of a Wilson loop that moves excitations around a closed path in the bulk, starting and ending from the vacuum state (ground state). This process is shown in Fig. 8. Notably, the locations of Pauli operators, as well as the intermediary excitations, occupy a cylindrical shell with the required height of the cylinder scaling linearly with the radius. When viewed from a particular direction (\(\hat{z}\) in Fig. 8), this height can be compacted, leaving a 2D loop. Cage operators are particularly relevant for the discussion of defects in Section V.
### Numerical Methods
In this work, we aim to derive formulae for the number of encoded qubits, \(k\), and the distance, \(d\), of variants of the cubic code with defects and modified boundary conditions. In general, however, it is difficult to rigorously derive exact equations for these properties. Therefore, we present motivating discussions, backed up by numerical computations of small system sizes where we can determine empirical formulae for these results. We then assume a consistent extrapolation for larger models.
To perform these calculations, the stabilizers in a system of \(n\) qubits are represented by a binary vector in \(\mathbb{Z}_{2}^{2n}\), with a 1 corresponding to a Pauli \(X\) or \(Z\) acting on a particular qubit [45]. Commutation is a bilinear map between two such vectors, and finding nontrivial logical operators reduces to determining the kernel of this map when applied to the stabilizers. In this way, the number of logical operators (or equivalently, the ground-state degeneracy), and also examples of particular logical algebras, can be computed exactly for system sizes up to the order of \(20^{3}\) qubits. By restricting to a subset of the physical qubits, we can also determine the properties of the support of these logical operators. These results form the basis for the conclusions drawn in the following sections.
Figure 4: Fractal pattern used to expand the separation between a set of \(4\)\(m\) excitations in the bulk, using repeated \(XI\) Pauli operators (purple circles). The shaded purple regions are used to highlight repeated applications of the leftmost operator, showing similarities to a Sierpinski pyramid.
Figure 5: The \(F\) operators that can cascade excitations through the lattice, as in Fig. 6. (Left) \(F_{e}^{xy}\), \(F_{e}^{xz}\), and \(F_{e}^{yz}\) operators. The labels indicate the plane and directions along which the excitations can move. (Right) \(F_{m}^{xy}\), \(F_{m}^{xz}\), and \(F_{m}^{yz}\) operators, which move excitations in the negative lattice directions.
Figure 6: The "cascading procedure": The \(F_{m}^{yz}\) operator in (a) can move excitations in the \(-\hat{z}\) direction. Starting with the excitation pattern in (a), we translate and re-apply \(F_{m}^{yz}\) so that the \(\star\) is aligned with the \(\star\) in (b). In doing so, the stabilizers marked by the black squares are flipped. Repeating this at the \(\star\) in (c) removes all excitations in the second row. We can repeat this process in each row to separate \(m\) charges in the \(-\hat{z}\) direction. Larger separations \(\Delta\) create additional excitations and the weight of the operator (number of single-qubit Pauli operators) scales superlinearly with \(\Delta\).
Figure 7: Scaling properties of the \(F\) operators as the separation between charges (\(\Delta\)) is increased, computed by numerically continuing the process in Fig. 6. This demonstrates a superlinear upper bound on the weight and a near-linear polynomial upper bound on the energy barrier of a logical operator.
Code for these calculations is provided in Appendix C.3.
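As a minimal illustration of this pipeline (our own sketch, independent of the code released with the paper), the number of encoded qubits follows from a GF(2) rank computation on the binary stabilizer matrix:

```python
# Sketch: k = n - rank(S) over GF(2), with each stabilizer stored as a
# (x|z) binary row packed into an integer for fast elimination.
def gf2_rank(rows):
    m, rank = list(rows), 0
    while m:
        piv = m.pop()
        if piv == 0:
            continue
        rank += 1
        lead = piv.bit_length() - 1
        m = [r ^ piv if (r >> lead) & 1 else r for r in m]
    return rank

def num_logical_qubits(stabs, n):
    """stabs: list of (x, z) bitmask pairs over n qubits."""
    return n - gf2_rank([(x << n) | z for x, z in stabs])

# Toy check: ZZI and IZZ on three qubits leave one encoded qubit.
print(num_logical_qubits([(0b000, 0b011), (0b000, 0b110)], n=3))  # -> 1
```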
## III Boundaries
In this section, we present and analyze formulations of the cubic code that incorporate combinations of open and periodic boundaries. By doing so, we identify the behavior of excitations on and near these boundaries, to inform the discussion of encoding properties in Section IV. We first consider isolated open boundaries in a semi-infinite system, before combining multiple boundaries via edges and vertices.
To characterize these boundaries, we introduce the following notation: Consider a rectangular prism centered at the origin of a Cartesian \((3+1)\)D coordinate system, with the terminating boundaries normal to the axes. In similar notation to Miller indices, we use \((100)\) to denote the boundary that forms across the positive \(x\) side of the prism and \((\bar{1}00)\) to denote the boundary on the negative \(x\) side. Additionally, define the _sign_ of a boundary to be _positive_ for \((100)\), \((010)\), and \((001)\) orientations and _negative_ for \((\bar{1}00)\), \((0\bar{1}0)\), and \((00\bar{1})\). An additional notation for specifying the types of stabilizers on these boundary faces is introduced in Section III.2. It is possible to consider other more general boundaries, such as those with Miller index \((110)\). We present one such example in Appendix C.2 but will defer the systematic study of these boundaries to future work.
### Semi-Infinite Boundaries
We first consider the family of semi-infinite systems, such as \(\mathbb{R}^{2}\times(-\infty,0]\) with a single terminating boundary corresponding to a crystallographic plane, and propose a set of maximal stabilizer generators to populate this boundary. There are two possible constructions of translation-invariant Hamiltonian terms that maintain the required commutation relations with both the neighboring bulk and boundary stabilizers. These correspond to truncated \(C_{X}\) and \(C_{Z}\) operators, as shown in Fig. 9, where the bulk operator is continued outwards and all terms that lie beyond the system boundaries are ignored. We denote these single-face (or plaquette) operators as \(P_{X}\) and \(P_{Z}\).
Let \(X\)-type denote a boundary Hamiltonian consisting of only \(P_{X}\), and \(Z\)-type for \(P_{Z}\). More general configurations are possible, such as the _triangular_ code in Table 1, but their properties can be explained solely by a discussion of the purely \(X\) or \(Z\)-type boundaries.
Importantly, the orientation of a boundary affects its behavior. That is, the boundary Hamiltonian of an \(X\)-type \((001)\) is not equivalent to that of an \(X\)-type \((00\bar{1})\). We see this by observing the two inequivalent forms of truncated \(C_{X},C_{Z}\) operators, as well as the symmetries noted in Section II.2.1.
#### III.1.1 Boundary Excitations
Various applications of Pauli operators on a boundary are shown in Fig. 10. When a single particle is able to be created or destroyed in isolation by a local operator in the vicinity of the boundary layer, we refer to this process as _condensation_.
Figure 8: Using \(F_{e}^{yz}\) and a new operator, \(G\), to move excitations around a closed path ("cage") in the \(xy\) plane. (a) The \(F_{e}^{yz}\) operator that we use to move excitations in the \(\hat{y}\) direction. (b) A new operator, which we call \(G\) (see Fig. 25 in the appendix), that can be used to move excitations in the \(\hat{x}+\hat{y}\) direction. (c) Starting with \(F_{e}^{yz}\), we move an excitation in the second column into the \(\hat{x}+\hat{y}\) direction using the tip of the new operator, \(G\). Wireframe cubes indicate the annihilated charges from the previous step. (d) We then repeat this, using \(G\) to move the second excitation in the second column. (e) We now move all three excitations in the third column into the \(\hat{x}+\hat{y}\) direction, using \(G\). (f) We now use \(F_{e}^{yz}\) to begin to move these charges back in the \(-\hat{y}\) direction. (g) We continue this process to move the remaining charges back in the \(-\hat{y}\) direction. To complete the loop, we would use \(G\) once again. This process can be generalized to larger loops, in other planes, and also with \(m\) charges.
Figure 9: From top to bottom: Truncated stabilizers on the edges, plaquettes, and vertices of a finite lattice, formed by taking a section of the bulk \(C_{X},C_{Z}\) stabilizers.
Of note, positive \(Z\)-type boundaries condense \(e\) excitations, while negative \(X\)-type boundaries condense \(m\) excitations; we refer to these boundaries as \((e)\) and \((m)\) type respectively. However, on negative \(Z\)-type and positive \(X\)-type boundaries, single excitations cannot be condensed. Instead, to describe their behavior, we introduce three new charges as subtypes of both the \(e\) and \(m\): denoted \(e_{A},e_{B},e_{C}\) and \(m_{A},m_{B},m_{C}\).
Consider a positive \(X\)-type boundary in Fig. 11, where we striate the lattice to color squares along the diagonals as \(A,B\), or \(C\) (such that an \(m\) excitation on an \(A\)-type square will have an \(m_{A}\) charge, etc.). Importantly, a single \(m_{A}\) (or an \(m_{A}\) and an \(m_{B}\), for example) cannot be condensed in isolation. Instead, it gains \((2+1)\)D mobility along the boundary plane: \(m_{A}\) charges are mobile along the \(A\) diagonals via applications of \(IX\), and can hop to other \(A\) diagonals using a combination of \(IX,XI\). Equivalent results hold for \(m_{B}\) and \(m_{C}\). This increased mobility resembles phenomena observed on the boundaries of type-I fracton models, like the X-cube model [24; 29].
Moreover, \(XI\) creates a topologically nontrivial composite of \(m_{A}\), \(m_{B}\), and \(m_{C}\) excitations on the boundary. Motivated by these behaviors, we define the fusion rules
\[m_{A}\times m_{A}\sim 1,\quad m_{B}\times m_{B}\sim 1,\quad m_{C} \times m_{C}\sim 1,\] \[m_{A}\times m_{B}\times m_{C}\sim 1 \tag{7}\]
that describe how combinations of excitations can be created via local operators. We note that these are \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) fusion rules. An analogous result holds for \(e_{A},e_{B}\), and \(e_{C}\) by substituting \(m\mapsto e\). Given this fundamentally different behavior to \((m)\) boundaries, we denote these positive \(X\)-type boundaries as \((m_{ABC})\), and similarly \((e_{ABC})\) for negative \(Z\)-type boundaries. A summary of the new boundary notation is given in Table 4.
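These rules can be made concrete by representing the three boundary charges as the nonzero elements of \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\), with fusion given by addition mod 2. The following sketch is our own illustration of the bookkeeping:

```python
# Boundary charges as elements of Z_2 x Z_2; fusion is componentwise
# addition mod 2, and the zero vector is the vacuum (trivial charge).
import numpy as np

vac = np.array([0, 0])
mA, mB = np.array([1, 0]), np.array([0, 1])
mC = (mA + mB) % 2                               # m_C ~ m_A x m_B

fuse = lambda *charges: sum(charges) % 2

assert np.array_equal(fuse(mA, mA), vac)         # m_A x m_A ~ 1
assert np.array_equal(fuse(mA, mB, mC), vac)     # m_A x m_B x m_C ~ 1
assert not np.array_equal(fuse(mA, mB), vac)     # m_A x m_B ~ m_C != 1
```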
For these charges to be considered topologically distinct, there must not be a local operator that can fuse \(m_{A}\times m_{B}\sim 1\), for example. By considering the action of \(IX\) and \(XI\), this is trivially true using only operators with support on the boundary. Since such a fusion operator will create excitations only along the boundary, we complete the argument by using the cleaning process from Refs. [16; 46] to reduce the support of any bulk operator to just terms on the boundary; we refer the reader there for a more detailed description. This argument holds when we consider operators stretching from the \((m_{ABC})\) boundary into the bulk, as long as the operator does not have support on other boundaries (such as an opposing \((m)\)). Therefore, it is valid to consider \(m_{A}\), \(m_{B}\), and \(m_{C}\) as distinct charges with the boundary fusion properties above, if they are separated from additional boundaries by distances larger than the correlation length of the ground state.
Due to the triplet \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) fusion nature of these charges, these fusion rules bear a resemblance to that of the _color code_[47; 30; 48]. In fact, the color code can be directly transformed into this \((2+1)\)D layer: Consider the hexagonal lattice of the 6-6-6 color code in Fig. 12, with a qubit at each lattice point (white circle). The stabilizers of this code are the product of \(X\) or \(Z\) on all 6 qubits of each hexagon. We begin the transformation by first overlaying a rhomboidal lattice such that its vertices lie between exactly two qubits \(i,j\), as shown in the inset. Note that acting on both qubits with \(X_{i}X_{j}\), for example, excites the two adjacent stabilizers on the green hexagons. This is equivalent to exciting two \(A\) squares along the diagonal of the cubic code's \((m_{ABC})\) using \(IX\) in Fig. 11. Moreover, acting with just \(X_{i}I_{j}\) excites the three adjacent red, green, and blue hexagons - equivalent to exciting \(m_{A},m_{B}\), and \(m_{C}\) using \(XI\) on the cubic code. Corresponding similarities also apply for \(Z\) operators acting on an \((e_{ABC})\) boundary.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Boundary & Stabilizer & Sign & Color & Behavior \\ \hline \((m)\) & \(X\) & \(-\) & \(\blacksquare\) & \(m\) condenses \\ \((m_{ABC})\) & \(X\) & \(+\) & \(\blacksquare\) & Eq. (7) \\ \((e)\) & \(Z\) & \(+\) & \(\blacksquare\) & \(e\) condenses \\ \((e_{ABC})\) & \(Z\) & \(-\) & \(\blacksquare\) & Eq. (7) (for \(e\)) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Boundary labels based on the type of topological charge that condenses. Boundary sign is defined as per Section III. Color refers to the convention used in the figures.
Figure 11: An \(X\)-type positive boundary, colored to indicate three types of \(m\) charges: \(m_{A},m_{B}\) and \(m_{C}\). Each charge is mobile within its particular set of diagonals.
Figure 10: Positive (left) and negative (right), \(X\)-type (red) and \(Z\)-type (blue) boundaries, showing the excitation patterns for each combination of single-qubit Pauli operators.
We therefore have the following map relating excitations of the color code to excitations of the cubic code boundaries:
\[X_{i}X_{j}\mapsto IX,\,X_{i}I_{j}\mapsto XI,\,Z_{i}Z_{j}\mapsto ZI,\,I_{i}Z_{j} \mapsto IZ \tag{8}\]
Notably, in the Heisenberg representation this transformation is equivalent to acting on each \(i,j\) pair with a CNOT gate, controlled on qubit \(j\)[34]. If we consider the \((m_{ABC})\) boundary on \((100)\) and \((e_{ABC})\) on \((\bar{1}00)\), then this transformation directly maps the truncated boundary \(X\) and \(Z\) stabilizers of the cubic code onto the \(X\) and \(Z\) stabilizers of the color code. This property is discussed further in Appendix C.2. It remains an open question as to how or whether this correspondence generalizes to the bulk stabilizers of the cubic code, and moreover for the other cubic codes proposed by Haah [16].
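This equivalence is easy to check numerically. The sketch below (ours) conjugates the two-qubit Paulis, ordered \((i,j)\), by a CNOT with control \(j\) and target \(i\), and recovers the map of Eq. (8):

```python
import numpy as np
from functools import reduce

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
kron = lambda *ops: reduce(np.kron, ops)

# CNOT with control on the second factor (qubit j), target on the first (i).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0]])

conj = lambda P: CNOT @ P @ CNOT  # CNOT is its own inverse

assert np.allclose(conj(kron(X, X)), kron(I, X))  # X_i X_j -> IX
assert np.allclose(conj(kron(X, I)), kron(X, I))  # X_i I_j -> XI
assert np.allclose(conj(kron(Z, Z)), kron(Z, I))  # Z_i Z_j -> ZI
assert np.allclose(conj(kron(I, Z)), kron(I, Z))  # I_i Z_j -> IZ
```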
#### III.1.2 Periodic Behavior
Consider a lattice that is periodic in \(x,y\in[0,L]\) and occupies \(z\in[0,\infty)\). Along the \((00\bar{1})\) face we construct an \((e_{ABC})\) boundary using \(P_{Z}\) stabilizers. If \(3|L\), then the diagonal striation (as in Fig. 11) of this boundary layer is self-consistent and we have distinct \(e_{A},e_{B}\), and \(e_{C}\) charges that are mobile within their \((2+1)\)D diagonal subspaces. That is, if an \(e_{A}\) were to move around the periodic boundary, it would retain its \(e_{A}\) charge. We can thus consider an operator that creates two \(e_{A}\) out of the vacuum, hops one around the periodic boundary, and annihilates it with the other to create a logical operator - comparable to those in the \((2+1)\)D toric code, for example [5]. Due to the two unique charges (as \(A\times B\sim C\) by the fusion rules), these surface string operators define two independent logical operators along an \((e_{ABC})\) or \((m_{ABC})\) boundary. When \(3\nmid L\), it can be checked that hopping an excitation three times around a boundary (to return it to its original labeling) produces the identity operator.
However, these string operators can be extended to additional layers beyond the boundary, creating operators that extend into the bulk while requiring longer periods. To describe this procedure, we consider each \(xy\) plane to be a generalization of the boundary \((e_{ABC})\) layer. That is, acting with \(IZ\) or \(ZI\) at \(z=1\) creates \(e_{ABC}\) charges in both the \(z=0\) and \(z=1\) layers. Repeating the hopping process at \(z=1\) to move an \(e_{A}\) around the periodic boundary will remove all charges in that layer while introducing _residual_ charge at \(z=0\). If this residual charge is _trivial_ - that is, it is equivalent to the vacuum state up to the fusion operators in Eq. (7) - then the charge in that layer can also be cleaned away. Importantly, these processes only deposit residual charge in layers at smaller \(z\), not affecting layers that have already been cleaned of excitations. Since \((e_{ABC})\) restricts any remaining charge from forming beyond the \((00\bar{1})\) face itself, completing the process by annihilating charges at \(z=0\) results in a (non-trivial) logical operator.
If this process were started instead at \(z>1\), then this cleaning can be continued downwards until either the top-most remaining layer has a nontrivial residual charge or all charges are annihilated or condensed into the \((e_{ABC})\) boundary. In this way, nontrivial logical operators can be constructed at varying depths within the bulk, which wrap around the periodic boundary and iteratively clean layers of charge down to \(z=0\).
It can be shown (see Appendix C.1) that for a given lattice that is periodic in linear system size \(L\), the maximum number of \(z\) layers that can be cleaned until a nontrivial residual charge is created is
\[z_{\text{max}}(L)=\zeta(L/3) \tag{9}\]
using the notation in Eq. (5). This function is plotted in Fig. 13. Each layer introduces two additional nontrivial mutually commuting logical operators, giving
\[k=2\zeta(L/3) \tag{10}\]
Figure 12: A hexagonal color code, with a qubit at each white circle. Overlayed is a rhomboidal lattice, with vertices corresponding to the lattice points in the boundary layer of the cubic code. As per the inset, at each vertex the adjacent two color code qubits \(i,j\) are identified with the two qubits at each cubic code lattice site. To map onto the cubic code stabilizers a CNOT is used on each pair, controlled on qubit \(j\).
Figure 13: Numerical calculation of \(\tau(L_{x};\,L_{\infty})\equiv\zeta(L_{x}/3)\) from Eq. (5) for increasing linear system sizes \(L_{x}\) with \(L_{\infty}\gg L_{x}\). There is a clear fractal nature to this behavior.
Note that if \(L_{z}<z_{\text{max}}(L)\) and the (001) boundary is of type \((m)\) or \((m_{ABC})\), then \(k=2L_{z}\) since the maximum number of layers cannot fit in the given lattice. Incorporating this, we define the function
\[\tau(L_{1};\;L_{2})=\min\{\zeta(L_{1}/3),\,L_{2}\} \tag{11}\]
For rectangular lattices with \(L_{x}\neq L_{y}\), we instead have
\[k=2\min\{\tau(L_{x};\;L_{z}),\,\tau(L_{y};\;L_{z})\} \tag{12}\]
defined by the shortest path around the periodic boundary. As a result, \(k=0\) if one of \(L_{x}\) or \(L_{y}\) is not a multiple of \(3\).
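For concreteness, Eqs. (11) and (12) can be evaluated as in the sketch below. Since Eq. (5) lies outside this excerpt, the fractal function \(\zeta\) is passed in as a caller-supplied callable, and, following the remark above, the contribution is taken to vanish when \(3\) does not divide the linear size; the function names are our own.

```python
def tau(L1, L2, zeta):
    """Eq. (11): tau(L1; L2) = min{zeta(L1/3), L2}.  The fractal function
    zeta comes from Eq. (5), outside this excerpt, so the caller supplies it."""
    if L1 % 3 != 0:
        return 0  # per the text, the contribution vanishes when 3 does not divide L1
    return min(zeta(L1 // 3), L2)

def k_rectangular(Lx, Ly, Lz, zeta):
    """Eq. (12): encoded qubits on a rectangular periodic lattice,
    set by the shortest path around the periodic boundary."""
    return 2 * min(tau(Lx, Lz, zeta), tau(Ly, Lz, zeta))
```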
#### III.1.3 Translational-Symmetry Violations
If translational invariance is relaxed, there are additional possible commuting configurations of the boundary Hamiltonian. The commutation relation between neighboring plaquette terms is given in Fig. 14. Notably, these relations allow for the construction of a natural _diagonal_ commuting interface between \(P_{X},P_{Z}\) operators, such as the configuration labeled _triangular_ in Table 1.
As with the other boundaries discussed so far, each sector of the mixed boundary is associated with an \((e),(m),(e_{ABC}),\) or \((m_{ABC})\) behavior. Due to this equivalence, this paper focuses on pure \(X\) and \(Z\) boundary configurations.
### Boundary Seams
As in Fig. 9, two adjacent boundaries can be joined along an edge, and three boundaries at a vertex. These features are required to construct the full codes discussed in Section IV.
Notably, the plaquette-plaquette commutation relations in Fig. 14 are equivalent for edge-plaquette commutation since geometrically, neighboring plaquettes only share support on at most two sites - identical to plaquettes. Given these constraints, we can thus specify a fully-gapped configuration for a finite prism such as in Table 1. To describe these configurations, the notation \((xyz;\bar{x}\bar{y}\bar{z})\) is used to specify the boundary type on each of the six lattice boundaries (see Fig. 15). For open boundaries, we use \(e\) and \(m\) as defined in Table 4, where the \(ABC\) subscript is dropped for readability. Periodic boundaries are denoted by \(p\). It is assumed in each case that the edge and vertex stabilizers are chosen to create a maximally commuting stabilizer group of local operators (that is, where possible there are no local Pauli operators with support on a single plaquette, edge, or vertex that commute with all the stabilizers).
Using this notation, we can revisit the symmetries described in Section II.2.1:
1. \((abc;def)\) is equivalent to \((cab;fde)\) and \((bca;efd)\).
2. \((abc;def)\) is equivalent to \((bac;edf)\).
3. \((abc;def)\) is equivalent to \((def;abc)\) with all \(e\leftrightarrow m\) swapped.
Combined, the first two symmetries imply that for \((abc;def)\), all permutations of \(abc\) produce equivalent codes, given that the same permutation is also applied to \(def\).
## IV Superlinear-Distance Boundary Codes
A core motivation for this work is to maintain the partial self-correction of the cubic code, without the requirement of nonlocality to implement a 3-torus. Given the discussion in Section II, operators with superlinear weight (necessary for self-correction) arise when excitations _cascade_ through the bulk of the lattice, while unwanted string-like operators appear near certain boundary configurations. Motivated by this, we thus consider codes that contain opposing \((e),(e_{ABC})\) boundaries and opposing \((m),(m_{ABC})\) boundaries. This specifies four of the six faces of the lattice, leaving (up to symmetries) three potentially unique configurations. In the following section, we highlight one such case, dubbed _tennis ball 1_, while the others are discussed in Appendix A.
Figure 14: Anti-commutation relations for plaquette operators on the boundary. (Left) When \(P_{Z}\) is applied to the blue square on the positive faces of the lattice, the red squares on the same face correspond to \(P_{X}\) that anti-commute. (Right) When \(P_{X}\) is applied to the red square on the negative faces of the lattice, the blue squares correspond to \(P_{Z}\) that anti-commute.
Figure 15: Notation used to specify the choice of boundaries, \((abc;def)\), where each letter is \(e\) or \(m\) (as defined in Table 4 and the \(ABC\) subscript is implied), or \(p\) for a periodic boundary.
### Tennis Ball 1
This configuration is constructed with \((e_{ABC})\) and \((m_{ABC})\) on the remaining two unspecified faces. For example, \((mem;mee)\) as shown in Table 1.
To construct \(\bar{X}\) logical operators, \(m\) charges must cascade from (100) and condense at \((\bar{1}00)\) using \(F_{m}^{\bar{x}\bar{y}}\) and \(F_{m}^{\bar{x}\bar{z}}\), as shown in Fig. 16. If the \(m\) excitations produced in the cascade cannot appear on the (001) boundary, this procedure produces an operator with a weight superlinear in \(L_{x}\) and an increasing energy barrier, since excitations must cascade through the bulk by a distance \(L_{x}\). However, if \(m\) excitations _can_ be produced on the (001) boundary during the cascade, these excitations become mobile on the surface and can hop to the \((\bar{1}00)\) face using a string operator. Hence, any cascade that begins less than a distance of order \(\mathcal{O}(L_{x})\) from the (001) boundary creates a string-like operator: the height is constant in \(L_{x}\), the weight is linear, and the energy cost is constant. An analogous argument holds for producing \(\bar{Z}\) logical operators by cascading \(e\) excitations; only cascades which begin a distance at least \(\simeq L_{y}\) from the \((00\bar{1})\) boundary produce logical operators with superlinear weight.
Notably, the \(\bar{X}\) logical operators extend in the \(\hat{z}\) direction and \(\bar{Z}\) in the \(-\hat{z}\) direction when cascading. Pairs of \(\bar{X}\) and \(\bar{Z}\) must then anti-commute along their shared \(xy\)-plane. To determine the number of independent logical pairs, first note that using the _long_ edge of the \(F\) operators to move charges through the bulk is equivalent to the hopping operator in Fig. 11. We can then identify this cascading procedure with hopping an \(A,B\) or \(C\) charge from one boundary to the other, creating neighboring intermediary excitations that must also be hopped into a condensing boundary. In a similar argument to Section III.1.2, we therefore expect each layer to contribute 2 encoded qubits. Since only one orientation, namely the \(xy\) planes, has opposing \(e\)- and \(m\)-condensing boundaries - and therefore can support mutually anti-commuting pairs of both \(e\) and \(m\) operators - the scaling is thus expected to be \(k=2L_{z}\). We confirm this by numerically computing the ground state degeneracy for small values of \(L_{x},L_{y},L_{z}\); an example of the computed logical algebra cleaned into a single \(xy\) plane is shown in Fig. 17. This same analysis cannot be conducted with \(xz\) planes, for example, since these will have three adjoining \(m\)-type boundaries and only one \(e\)-type. As with the surface code, such a configuration does not support any nontrivial logical operators.
Notably, this scaling confirms the results in Ref. [49], where Dua et al. showed that the periodic cubic code can be interpreted as interwoven layers of toric code. Specifically, a model of linear system size \((L_{x},L_{y},L_{z})\) is equivalent to \(2L_{z}\) copies of toric code, up to a unitary that is non-local in \(z\). When placed on open boundaries as done here, the corresponding surface codes should each contain one encoded qubit, thus giving \(k=2L_{z}\) total.
### Subsystem Codes
The presence of the string-like operators initially appears detrimental to the desired self-correcting behavior of the _tennis ball_ code. However, we can still ensure a superlinear-distance encoding by using a subsystem code [50]. By treating all logical qubits with linear distance as gauge qubits, the subsequent dressed logical operators should remain superlinear in weight. We argue this claim as follows:
First, note that the support of the bare superlinear-weight logical operators extends further in the \(z\) direction than the bare linear-weight logical operators, by definition. This extra support includes a region in the bulk where the \(\bar{X}\) and \(\bar{Z}\) pairs intersect and anti-commute. Since \(\bar{X}\) commutes with all stabilizers and all other \(\bar{Z}\) operators apart from its pair, this anti-commutation relation must be maintained when multiplied with these other operators. Therefore, the product of bare superlinear and bare linear weight operators will always contain support on a region in the bulk of the lattice. This region is either entirely isolated from the boundary or extends to the boundary. In the former case, by the properties of the original cubic code, there are no string-like operators in the bulk of the lattice. For the latter, the operator now occupies a support that is larger than the original superlinear-weight support, which did not extend to the boundary. In both situations, the dressed operator must still have superlinear weight.
Figure 16: Logical operators on the first _tennis ball_ configuration, \((mem;mee)\). Solid shapes indicate repeated applications of the \(F\) operators to cascade \(m\) (yellow) and \(e\) (cyan) charges. Red faces represent \(X\), and blue \(Z\), stabilizer choices on the boundaries.
With _tennis ball 1_, we can, for example, label all logical operators that can be supported solely in \(z\geq 2L_{z}/3\) or \(z\leq L_{z}/3\) as ancillary. Computing the remaining ground state degeneracy numerically, the number of logical qubits is
\[k=2\left(\left\lfloor\frac{L_{z}}{3}\right\rfloor+(L_{z}\ \mathrm{mod}\ 3)\right) \tag{13}\]
The periodic scaling is a result of specifying discrete lattice points for the cut-off regions when 3 may not divide \(L_{z}\).
Figure 17: Two pairs of logical operators on the first _tennis ball_ code, \((mem;mee)\), for a linear system size \((11,11,L_{z})\), considering a slice of the \(xy\) plane at some \(z\) far from the \((001)\) and \((00\bar{1})\) boundaries. These were calculated and plotted computationally; a slight jitter is applied to each point to distinguish when \(XI\) and \(IX\) occur on the same lattice point. Note that there are only two such pairs of logical operators that can be cleaned onto just this particular \(xy\) plane. If we consider the plane at \(z+1\) we would find an analogous result of two new independent pairs of logical operators; in this way, the _tennis ball 1_ code has \(k=2L_{z}\) logical pairs.
It is therefore possible to modify the family of cubic codes such that we maintain the superlinear distance and partial self-correction while improving \(k\) to be a simple linear function of \(L_{i}\), with only a constant-order periodic correction. Moreover, these codes do not require any periodic boundaries and are thus comparably more realistic for implementations in physical systems.
## V Defects
Having presented the key properties of boundaries in the cubic code in the previous section, in the following section we similarly introduce and characterize three defect types: vacancies, edge dislocations, and screw dislocations. A discussion of their use in QEC codes is then provided in Section VI. Defects have been used in previous work to encode additional quantum information and to perform logical Clifford operations [27, 28, 51, 52, 53, 54, 55].
From a condensed matter perspective, defects are an important consideration when forming a complete understanding of a given phase of matter. For example, encircling a defect can induce a lattice translation. Since fractons are fundamentally immobile, this translation has the potential to cause nontrivial changes to the fracton topological order [56], such as in the form of increased mobility.
### Vacancies
Vacancies, the removal of lattice sites (and qubits) within the bulk of the model, behave similarly to boundaries. Stabilizers obtained by truncating the operators acting on the vacant site still commute with all stabilizers away from the vacancy and with other truncated stabilizers of the same type. For simplicity, we consider only vacancies with single choices of truncated stabilizers, which we denote as \(\langle m\rangle\) and \(\langle e\rangle\) for \(X\) and \(Z\) respectively. As with the boundaries, these results could be generalized to more complex constructions, such as (110) orientations or combinations of \(e\) and \(m\) faces, in future works.
We identify here three key ways in which bulk excitations interact with nearby vacancies, the first two of which are equivalent to the boundary interactions.
Firstly, fractons can condense into certain faces of \(\langle m\rangle\) and \(\langle e\rangle\) vacancies, just as with the exterior lattice boundaries. The analogous results to Table 4 are summarized in Table 5.
Secondly, if the vacancy extends in a particular direction, such as \(\hat{z}\), mobilized charges on \((m_{ABC})\) and \((e_{ABC})\) faces of the vacancy are able to repeatedly _hop_ along \(\hat{z}\). If the vacancy extends around a lattice periodic in \(z\), and with \(3|L_{z}\), then an operator can create two \(m_{A}\) out of the vacuum, hop one charge around the periodic boundary and annihilate it with the other. This is equivalent to the boundary behavior in Section III.1.2.
Finally, by employing a _cage operator_, fractons can encircle a vacancy. As described in Ref. [29] and Section II.2.3.3, these operators create a set of fractons out of the vacuum, separate them via a cascading procedure in two directions, before inverting the cascading operation to bring the excitations back together to annihilate - returning to the vacuum state. Such an operation in the bulk will typically be trivial, resulting in a product of stabilizers. However, the vacancy removes terms from the stabilizer group, causing a cage that encircles a vacancy to be potentially nontrivial.
### Edge Dislocations
Edge dislocations in (3+1)D are analogous to those in the (2+1)D surface code [27], with the additional feature that the dislocation becomes a line extending into the third spatial dimension, as in Fig. 18. Along the dislocation line, we include trapezoidal prism stabilizer terms - known as _twists_ in Refs. [51, 27]. Joining the twists is a region of slanted \(C_{X}\) and \(C_{Z}\) operators (shown as white in Fig. 18) that naturally commute with the adjoining bulk stabilizers. For the twists themselves, however, there are only two choices of stabilizers: one from \(X\) terms (denoted \(T_{X}\)) and another from \(Z\) (\(T_{Z}\)). These are provided in Fig. 19. Importantly, although \(T_{X}\) and \(T_{Z}\) commute at the same location, adjacent \(T_{X}\) and \(T_{Z}\) anti-commute. Constructing a gapped edge dislocation therefore requires taking either pure \(T_{X}\), pure \(T_{Z}\), or \(T_{X}\) and \(T_{Z}\) jointly on every second stabilizer (or some combination of the above). We could further consider a non-commuting, potentially gapless, Hamiltonian along the defect including all \(T_{X}\) and \(T_{Z}\) terms with arbitrary weights. Due to the commutation relations, this Hamiltonian is seen to be equivalent to two copies of the \((1+1)\)D quantum Ising model. We proceed to consider only the pure \(T_{X}\) or \(T_{Z}\) defects, leaving more complicated configurations to future work.
Figure 18: Edge dislocations on the cubic code contain twists (blue). An operator that moves a fracton around a closed cage (red loop) in the bulk will no longer be closed when encircling a twist. This translation potentially increases the mobility of fractons when near a defect.
As sketched in Fig. 18b, using a cage operator to propagate a fracton around a twist does not return the fracton to its original position; instead, the fracton moves by the Burgers vector of the dislocation. The once-immobile fractons thus gain limited mobility in the vicinity of a twist. This behavior has the potential to introduce non-trivial logical operators.
As with boundaries and vacancies, bulk excitations can condense onto particular twists. A \(T_{X}\) twist condenses \(m\) charges, while a \(T_{Z}\) twist condenses \(e\). Since an edge dislocation is characterized by two twists, we denote, for example, \(\langle em\rangle\) to be a \(T_{X}\) twist on the positive side of the pair, and \(T_{Z}\) on the negative side.
### Screw Dislocations
The final defect type we consider is the _screw dislocation_, shown in Fig. 20. We label the screw as \(\langle L\rangle\) or \(\langle R\rangle\) by the handedness of the Burgers vector when traversing around the dislocation. Unlike the edge dislocation, the screw invokes a translation vector along the defect line itself. Importantly, this means that a fracton can continuously wind around the defect via cage operators while translating along the screw. Fractons thus gain 1D mobility in the vicinity of screw dislocations.
Unlike vacancies and twists, however, for the screw dislocations with a Burgers vector of 1 considered here, there is no local operator supported on the dislocation line that commutes with the bulk stabilizers. Because of this, \(\langle L\rangle\) and \(\langle R\rangle\) screws have no inherent stabilizer and can condense both \(m\) and \(e\) excitations. Notably, this condensation behavior largely negates any change to the mobility of fractons near this screw, since individual \(e\) and \(m\) can be created or destroyed at any location along the defect, and thus become trivial.
## VI Superlinear-distance defect codes
Similar to the discussion of boundary codes in Section IV, in this section we examine how defects can be used with the cubic code to create QEC codes with superlinear distances. Further configurations without this property are discussed in Appendix B. Notably, although there are codes using screw dislocations with a superlinear distance, the number of encoded qubits there is constant. We therefore defer the discussion of screw-dislocation codes to Appendix B.3.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Vacancy Type & Vacancy Boundary & Stabilizer & Boundary Sign & Behavior \\ \hline \(\langle m\rangle\) & \((m)\) & \(X\) & \(+\) & \(m\) condenses \\ \(\langle m\rangle\) & \((m_{ABC})\) & \(X\) & \(-\) & Eq. (7) \\ \(\langle e\rangle\) & \((e)\) & \(Z\) & \(-\) & \(e\) condenses \\ \(\langle e\rangle\) & \((e_{ABC})\) & \(Z\) & \(+\) & Eq. (7) (for \(e\)) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Terminology for vacancies, analogous to Table 4 for open boundaries.
Figure 19: Stabilizers for the twists occurring at the ends of edge dislocations. There are four types, corresponding to \(T_{X}\) (left pane) and \(T_{Z}\) (right pane), as well as at the bottom and top of the edge dislocation. The stabilizers were found by considering the commutation with the neighboring bulk \(C_{X}\) and \(C_{Z}\) stabilizers.
Figure 20: The two independent forms of screw dislocations in a (3+1)D lattice, such as the cubic code. The screws are labeled by the handedness of the Burgers vector when traversing around the dislocation. On the left is a right-handed screw, \(\langle R\rangle\), and on the right is a left-handed screw, \(\langle L\rangle\).
### Vacancy Encodings
Consider a lattice of size \((L_{x},L_{y},L_{z})=(L_{\infty},L_{\infty},L_{z})\), where \(L_{\infty}\gg L_{z},\kappa\) with \(\kappa\) the correlation length of the ground state, such that interactions with the \(x,y\) boundaries are negligible. We also place periodic boundary conditions in the \(z\) direction. Within this, there is an \(\langle m\rangle\) vacancy of width \((w_{x},w_{y},L_{z})\) such that it extends around the periodic boundary. We expect that a single vacancy in this configuration does not modify the ground state degeneracy, since creating and annihilating \(m\) on the same vacancy is trivial. Indeed, this result is confirmed by numerical computation, in all cases except when \(3|L_{z}\). As discussed in Section III.1.2, \(m\) charges gain 1D mobility along a periodic boundary, producing additional logical operators. This gives a number of encoded qubits
\[k=4\tau(L_{z};\;L_{\infty}) \tag{14}\]
where \(\tau\) is defined in Eq. (11), with the factor of 4 arising from the two unique string-like operators on the two (\(m_{ABC}\)) boundaries of the vacancy. These logical operators all have a linear weight.
Once additional \(\langle m\rangle\) vacancies are introduced, however, additional logical operators arise. Numerically computing the ground state degeneracies, the number of encoded qubits scales with
\[k=2(v-1)L_{z}+4\tau(L_{z};\;L_{\infty}) \tag{15}\]
where \(v\) is the number of vacancies.
\(m\) excitations can now cascade through the bulk from one vacancy to another, creating an \(\bar{X}\) logical operator with a weight that is superlinear in the vacancy separation \(\Delta\) (using the Manhattan distance). Each additional vacancy introduces an additional independent site for condensation, thus increasing \(k\). Moreover, since translation is a nontrivial action in the bulk of the model, performing the cascading procedure at different values of \(z\) produces independent operators. This gives the dependence on \(L_{z}\).
On the other hand, the \(\bar{Z}\) logical operators are formed from cages of \(e\) excitations encircling a vacancy. Importantly, because these cage operators are moving charges through the bulk around the vacancy, these must also have a weight that scales superlinearly with the widths \(w_{x},w_{y}\). If these widths were scaled with the separation between the vacancies, this encoding has the potential to be partially self-correcting (when \(3\!\nmid\!L_{z}\)), while also supporting a number of encoded qubits that scale linearly in \(L_{z}\) and the number of vacancies.
If the assumption of \(L_{x},L_{y}\gg L_{z}\) is relaxed, such as for an \((mmp;mmp)\) code with \(v\) number of \(\langle m\rangle\) vacancies wrapping around the periodic boundary, we get two additional logical operators per \(L_{z}\), giving
\[k=2vL_{z}+4\tau(L_{z};\;L_{\infty}) \tag{16}\]
in comparison to Eq. (15). This is because \(m\) charges can now also cascade from the vacancies to a boundary.
Overall, this code is a notable improvement over the original cubic code model and does not require a subsystem code as in Section IV. Unlike vacancies in the surface code, there is also no significant trade-off between the code distance and the number of encoded qubits, since \(v\) can be kept constant while using \(L_{z}\) to increase \(k\), and using the \(x,y\) dimensions (vacancy width and linear system size) to increase the code distance. However, this construction does require a topology with one periodic direction.
### Edge Dislocation Encodings
A single edge dislocation in the bulk, far from any boundary, encodes no additional logical qubits.
However, more appealing behavior arises using multiple edge dislocations. Consider two dislocations, as in Fig. 21, each of height \(h\) in the \(z\) direction and separated by perpendicular distance \(\Delta\) in the \(y\) direction. The dislocation line extends along \(x\). Numerically computing the ground space degeneracy yields
\[k=4L_{x}+\mathcal{O}(1) \tag{17}\]
for some additional constant arising depending on the exact choice of twist stabilizers, \(\Delta\), and \(h\).
Effectively, these logical operators consist of movement of \(e\) and \(m\) excitations around and between two of the four twists, as in Fig. 21. Importantly, because these cage operators are moving charges through the bulk, the prohibition of string-like operators means that the width of the support in the \(x\) direction must increase with \(h\) and \(\Delta\). This result is confirmed by numerically computing the minimum width that supports logical operators as we scale \(\Delta=h\) in Fig. 21. Slight variations in the widths are due to the particulars for constructing the operators around the twists. However, in all cases, we observe an overall linear trend.
If the analysis above is repeated except with open or periodic boundary conditions in \(x\), we observe the same scaling as in Eq. (17). There is an additional, constant number of logical operators that have support solely on a single twist and are string-like in the \(x\) direction. Unlike the case with vacancies, these operators remain even if \(3\!\nmid\!L_{x}\). As with the case for _tennis ball 1_, considering a subsystem code to ignore these extra logical operators may be sufficient to ensure the dressed code has a superlinear distance as \(\Delta\) and \(h\) are increased. A rigorous proof of this solution is deferred to future works.
As with periodic vacancies, pairs of twists provide a promising approach to improve the cubic code: maintaining the desirable code distance while encoding a number of qubits that scales linearly with \(L_{x}\).
## VII Conclusion
In this work, we have presented a systematic study of open boundaries and defects in Haah's cubic code. We focused on planar (100)-like boundaries normal to the crystallographic axes and constructed \(X\)- and \(Z\)-type open boundary conditions using truncated plaquette, edge and vertex stabilizer terms. The interaction of these boundaries with fractonic topological excitations depends intrinsically on their orientation: \(X\)-type negative faces and \(Z\)-type positive faces condense single fractons of the respective type, while the opposing faces lead to increased fracton mobility within their vicinity. These otherwise fractonic excitations become mobile within a \((2+1)\)D diagonal subsystem along the surface. This implies that the fundamental no string-like operator property of the original cubic code is violated in the vicinity of these boundaries.
Similar behavior was observed in the vicinity of vacancies, edge dislocations, and screw dislocations: patterns of fracton condensation that lead to increased mobility were seen to depend on the orientation and stabilizer type of each defect. The nontrivial action of translation symmetry on a type-II fracton topological phase means that encircling a dislocation defect with a cage operator can also lead to increased topological excitation mobility. We found that dislocation defect encodings were therefore able to support additional forms of logical operators that do not appear for encodings constructed from boundaries and vacancies.
The cubic code is known to form a partially self-correcting quantum memory on periodic boundary conditions [43]. The absence of any string-like logical operator is essential to enable such a quantum memory [16]. With this in mind, we aimed to determine if it is possible to retain the superlinear distance of the original cubic code model while making the number of encoded qubits scale as a simple linear function of the linear system size, without sporadic fluctuation. We have shown that it is possible to achieve this using a combination of open and periodic boundaries, vacancies, and defects, despite the no string-like operator property potentially being lost when translation-invariance is violated by the introduction of defects. For cubic codes with open boundary conditions - which are typically easier to realize in a physical implementation - it is possible to achieve a superlinear code distance by restricting the encoded states to a subsystem, with all dressed logical operators maintaining a weight that scales superlinearly with the linear system size. We have shown this to be possible with the _tennis ball 1_ code construction. It is an open question whether this can be generalized to other open boundary condition codes, such as those in Appendix A. Our results also focused on planar (100)-like boundaries; it remains an open question as to whether alternate constructions such as (110) boundaries yield fundamentally different behavior.
We showed that emergent \((2+1)\)D topological order arises on certain open boundary conditions, supporting particles with increased mobility. Interestingly, this phase can be unitarily transformed into the 6-6-6 color code. We leave a further investigation of this correspondence, such as how it extends into the bulk, to future research. Similarly, it would be interesting to relate the bulk cubic code defects to defects of the color code [48] on the boundary. Presumably, the emergent color code on the boundary has an associated non-invertible anomaly [57] which is a property of the type-II fracton bulk phase. Understanding the nature of the surface topological order could uncover further insights into type-II fracton topological order via a bulk-boundary correspondence.
Figure 21: Pairs of edge dislocations support superlinear-weight logical operators.
In this work we have only explored Pauli-\(X\) or \(Z\) type boundaries of the cubic code, rather than more complicated mixed or twisted boundary conditions. We suspect that such boundary conditions exist and are inequivalent to those we have studied. Our reasoning uses the construction of twisted boundaries via gauging symmetry-protected topological (SPT) domain walls [58]. For the cubic code, this allows a construction of boundaries via the gauging duality to a 3D fractal Ising model [24; 25]. For one type of boundary condition of the fractal Ising model, the symmetry action on the boundary is simply the action of the bulk fractal symmetry restricted to the boundary plane. This symmetry action on the boundary appears to also be a fractal of the form considered in Ref. [59]. Following Ref. [59], one can consider stacking the fractal Ising model with a reflected copy of itself such that the resulting boundary fractal symmetry supports a nontrivial fractal SPT phase. We can then twist the boundary condition by the associated SPT entangler and gauge the resulting model to obtain a twisted boundary condition of a cubic code stacked with a reflected cubic code. We leave the detailed study and classification of such twisted boundary conditions to future work. We anticipate that existing constructions of type-II fracton models from layers of fractal SPTs may prove advantageous for this study [60].
We now highlight several further directions for future consideration. It remains to be seen how the results in this paper can be generalized to other type-II fracton topological phases, such as the additional codes introduced in Ref. [16] or the fractal spin liquid codes [22]. It would be interesting to relate defects in the latter codes to known defects in \((2+1)\)D topological codes via the fractalization procedure of Ref. [61]. Our work, together with previous work on type-I fracton phases [29], raises the question of developing a general theory of boundaries and defects for fracton topological orders. A promising approach to this is the inclusion of conventional topological boundaries and defects into the topological defect network framework for fracton topological order [62; 63; 64]. Another interesting open question is the development of a notion of Lagrangian algebra objects [65; 66; 67; 68] for fracton topological orders that can be used to classify possible gapped boundaries in relation to fracton braiding statistics [69; 70]. A further challenge is the extension of these concepts to nonabelian fracton models [71; 72; 73; 74; 75; 76; 77; 78].
Our work leaves open the question of superlinear code distances for subspace stabilizer encodings in type-II codes with open boundary conditions. It could be interesting to search for bounds on the best achievable parameters for a topological subspace stabilizer code with open boundary conditions in \((3+1)\)D. Proving rigorous lower bounds on the code distance and energy barriers, as well as modifying the decoding algorithms from Ref. [17] and [79] in the presence of boundaries and defects, would allow us to make a more definitive judgment on the feasibility of these codes as self-correcting quantum memories. It would also be useful to consider how lattice defects can be braided to produce Clifford gate sets, such as with the surface code [27]. Another open direction for future work is calculating the fusion of multiple twist and screw dislocations to create defects with larger Burgers vectors, and the relationship between braiding and condensations on these defects. This raises the challenge of developing a theory of translation symmetry enrichment for fracton topological orders, extending the \((2+1)\)D results of Ref. [52]. This is a promising lens through which to understand the general structure of fracton topological orders on crystal lattices. We remark that the possible phenomena exhibited by known fracton models reveal a far richer theory than the two-dimensional analog [80]. In particular, a general theory should capture the interplay between subgroups of translation symmetry and bifurcating entanglement-renormalization group flows satisfied by fracton topological orders including the cubic code [81; 82; 83; 84].
###### Acknowledgements.
The authors acknowledge fruitful collaboration with Tom Iadecola and Meng Cheng during the early stages of this work. AD and DB are supported by the Simons Collaboration on Ultra-Quantum Matter, which is a grant from the Simons Foundation (651438, AD; 651440, DB). AD is also supported by the Institute for Quantum Information and Matter, an NSF Physics Frontiers Center (PHY-1733907). ACD is supported by the Australian Research Council Centre of Excellence for Engineered Quantum Systems (EQUS, CE170100009). DW is supported by the Australian Research Council Discovery Early Career Research Award (DE220100625).
|
2309.16473 | QUBO Resolution of the Job Reassignment Problem | We present a subproblemation scheme for heuristical solving of the JSP (Job
Reassignment Problem). The cost function of the JSP is described via a QUBO
hamiltonian to allow implementation in both gate-based and annealing quantum
computers. For a job pool of $K$ jobs, $\mathcal{O}(K^2)$ binary variables --
qubits -- are needed to solve the full problem, for a runtime of
$\mathcal{O}(2^{K^2})$. With the presented heuristics, the average variable
number of each of the $D$ subproblems to solve is $\mathcal{O}(K^2/2D)$, and
the expected total runtime $\mathcal{O}(D2^{K^2/2D})$, achieving an exponential
speedup. | Iñigo Perez Delgado, Beatriz García Markaida, Alejandro Mata Ali, Aitor Moreno Fdez. de Leceta | 2023-09-28T14:37:23Z | http://arxiv.org/abs/2309.16473v2 | # QUBO Resolution of the Job Reassignment Problem
###### Abstract
We present a subproblemation scheme for heuristic solving of the JRP (Job Reassignment Problem). The cost function of the JRP is described via a QUBO hamiltonian to allow implementation in both gate-based and annealing quantum computers. For a job pool of \(K\) jobs, \(\mathcal{O}(K^{2})\) binary variables -qubits- are needed to solve the full problem, for a runtime of \(\mathcal{O}(2^{K^{2}})\). With the presented heuristics, the average variable number of each of the \(D\) subproblems to solve is \(\mathcal{O}(K^{2}/2D)\), and the expected total runtime is \(\mathcal{O}(D2^{K^{2}/2D})\), achieving an exponential speedup.
Simulation and Modeling; Intelligent Logistics; Management of Exceptional Events: Incidents, Evacuation, Emergency Management
## I Introduction
QUBO (_Quadratic Unconstrained Binary Optimization_) problems [1], which can be solved both by quantum annealing devices and by gate-based quantum computers, have already been used to address relevant NP-hard problems [2] such as the Travelling Salesperson Problem [3, 4] or the Hamiltonian Cycle problem [5]. In this paper we treat one of these QUBOs, named the Job Reassignment Problem (JRP). In the JRP, a number \(J\) of agents - workers, machines, vehicles - have to be reallocated to a new configuration of jobs due to some unexpected circumstance, such as a shift in production priorities or the incapacitation of some of the originally assigned agents. Due to those unexpected causes, there exists a number \(I\) of relevant jobs with high priority scores which do not have a paired agent. The resolution of the JRP implies finding the agents that best suit those jobs, moving them from their current low-priority jobs to those unassigned high-priority ones. This is done by taking into account some \(\mathcal{S}_{ij}\) scores defined between each vacant job \(i\) and each agent \(j\). The goal of the optimization problem is to choose the \(i,j\) pairs such that the sum of their \(\mathcal{S}_{ij}\) is maximized, with the full problem using \(N=J\times I\) binary variables.
At first glance the number of variables required to solve the complete JRP grows linearly with each of the two independent parameters \(J\) and \(I\). However, if typically a fraction \(p\) of the total number \(K=J+I\) of jobs is emptied, meaning that \(I=pK\) and \(J=(1-p)K\), then both the number \(J\) of agents ready to perform the jobs and the number \(I\) of unexpectedly empty jobs scale equally with \(K\). Then, \(N=J\times I=\mathcal{O}(K^{2})\). The search space of possible solutions, and thus the time required to find its optimum, has size \(2^{J\times I}=\mathcal{O}(2^{K^{2}})\).
In this paper we present a heuristic approach to the JRP, with several variable-reduction methods that aim to mitigate the \(N=\mathcal{O}(K^{2})\) scaling of the full problem. This scheme divides the full \(\mathcal{O}(K^{2})\)-variable problem into \(D\) different \(\mathcal{O}(K^{2}/2D)\)-variable subproblems, reducing the search space of each subproblem to size \(\mathcal{O}(2^{K^{2}/2D})\) and thus the total runtime of the \(D\) subproblems to \(\mathcal{O}(D2^{K^{2}/2D})\).
It is important to note that these proportionality figures are expected values, and the true scaling will vary between instances of the problem. Moreover, these methods, being heuristic in nature, do not guarantee an acceptable solution for every instance of the problem. This does not mean, as is the case with other heuristics, that no advantage is to be expected from their usage. In fact, the subproblemation allows for agents to be reallocated to originally non-vacant jobs, which is not the case in the original full problem. This expanded answer space suggests that, in some cases, the optimum of the approximated problem can be of better quality than the optimum of the original full hamiltonian.
The presented methods have been tested in the real context of a joint project between i3b (_Instituto de Innovacion Ibermatica_) and the ONCE (_Organizacion Nacional de Ciegos Espanoles_).
## II Problem description
The \(\mathcal{S}_{ij}\) coefficients of the cost function of the problem that relate each agent with each vacant job are given by the combination of several values. The first term to take into account is the priority gain \(\Delta_{ij}^{\mathcal{P}}=\mathcal{P}_{i}^{V}-\mathcal{P}_{j}^{C}\) between the priority of the vacant job \(\mathcal{P}_{i}^{V}\in(0,1]\) and the priority of the job the agent is currently covering \(\mathcal{P}_{j}^{C}\in(0,1]\). These priority values are known for each job, being given by the statement of the problem. However, it is allowed for these values to be changed between instances of the resolution of the problem, since a certain job can have different priorities in different contexts. In any case, none of the considered jobs should have
a priority of \(0\), because in that case the job could just not be considered part of the problem. Priority can have a discrete range of values when each job is ranked in one of \(D\) priority categories. In that case, all jobs of the same category will have the same \(\mathcal{P}_{i}^{V}\) or \(\mathcal{P}_{j}^{C}\) value, usually \(\in\{1,2,...,D\}\).
The second term of \(\mathcal{S}_{ij}\) is the affinity gain \(\Delta_{ij}^{\mathcal{A}}=\mathcal{A}_{ij}^{V}-\mathcal{A}_{jj}^{C}\) between the personal affinity of agent \(j\) with the vacant job \(i\), \(\mathcal{A}_{ij}^{V}\in[0,1)\), and the personal affinity of that agent with the job they are currently covering, \(\mathcal{A}_{jj}^{C}\in[0,1)\). These affinities are denoted collectively as \(\mathcal{A}_{kj}\), where \(k\) encompasses all \(K=I+J\) jobs, vacant or assigned.
The total score is then calculated as
\[\mathcal{S}_{ij}=c^{\mathcal{P}}\Delta_{ij}^{\mathcal{P}}+c^{\mathcal{A}} \Delta_{ij}^{\mathcal{A}} \tag{1}\]
where \(c^{\mathcal{P}}\) and \(c^{\mathcal{A}}\) are the two positive constants that give the relative weight of the optimization terms. They are given by the statement of the problem.
A simple way to model the \(\mathcal{A}_{kj}\) personal affinities is by counting the number of times the agent \(j\) was assigned to a particular job \(k\) in a historical record and mapping that count to a monotonically ascending function such as
\[\mathcal{A}_{kj}=1-\frac{1}{1+M_{kj}}\, \tag{2}\]
where \(M_{kj}\) is the number of times agent \(j\) has been assigned to job \(k\). Note that, since an assigned agent has been assigned to their current job at least for the current instance, in this model \(\mathcal{A}_{jj}^{C}\in[0.5,1)\).
## III Variable selection
In order to solve this optimization problem one binary variable \(x_{ij}\in\{0,1\}\) will be assigned to each 'vacant job - agent' pair. If \(x_{ij}=1\), then agent \(j\) will be reassigned to the vacant job \(i\), leaving vacant their current assigned job. If \(x_{ij}=0\), then agent \(j\) will not be reassigned to job \(i\), but could be reassigned for other vacant job. This means that
\[\text{if }\sum_{j}x_{ij}=0\text{ then vacant job }i\text{ remains vacant}, \tag{3}\]
and
\[\text{if }\sum_{i}x_{ij}=0\text{ then agent }j\text{ is not reassigned}. \tag{4}\]
For \(J\) agents with assigned jobs and \(I\) vacant jobs, the number of binary variables needed to solve the full problem is \(N=J\times I\). In Fig. 1 each of the \(x_{ij}\in\{0,1\}\) variables of the full problem is represented with a grey line: if the agent from job \(a_{j}\) has been reallocated to the vacant job \(v_{i}\), then \(x_{ij}=1\) and the line is colored black. Else, \(x_{ij}=0\) and the line remains grey.
## IV Hamiltonian construction
The cost function hamiltonian \(H\) will be divided into two parts: the core hamiltonian \(H^{0}\), where we will encode the optimization problem, and the restriction hamiltonian \(H^{R}\), whose terms will effectively reduce the search space to only physically plausible states. Then, \(H\equiv H^{0}+H^{R}\).
The core hamiltonian takes into account the total score of all active variables, and is composed of two terms, as shown in (5): the priority gain and the affinity gain.
\[H^{0}=-\sum_{ij}\mathcal{S}_{ij}x_{ij}=-c^{\mathcal{P}}\sum_{ij}\Delta_{ij}^{ \mathcal{P}}x_{ij}-c^{\mathcal{A}}\sum_{ij}\Delta_{ij}^{\mathcal{A}}x_{ij}. \tag{5}\]
Note how, even though the JRP is a maximization problem, the value of \(H^{0}\) is smaller for high \(\Delta_{ij}^{\mathcal{P}}\) and \(\Delta_{ij}^{\mathcal{A}}\). This happens because annealing systems decay towards the minimum-energy state of their hamiltonian, so the original maximization cost function has to be translated into the analogous minimization cost function by the introduction of a \(-1\) factor.
The restriction hamiltonian has two terms too, as shown in (6):
\[H^{R}=H_{1}^{R}+H_{2}^{R}\, \tag{6}\]
where the first term
\[H_{1}^{R}=\lambda_{1}^{R}\sum_{i}\left(\sum_{j}x_{ij}-0.5\right)^{2} \tag{7}\]
ensures, for a large enough \(\lambda_{1}^{R}>0\), that each job \(i\) can be done by at most one agent and the second term
\[H_{2}^{R}=\lambda_{2}^{R}\sum_{j}\left(\sum_{i}x_{ij}-0.5\right)^{2} \tag{8}\]
ensures, for a large enough \(\lambda_{2}^{R}>0\), that each agent \(j\) is reassigned to at most one job. Allowing the restriction coefficient to be the fractional number \(0.5\) allows the number of active binary variables in the sums inside the parentheses to be either \(0\) or \(1\), since those are the integer values closest to \(0.5\)[6], without introducing any dummy variables. Using only integer restriction coefficients would force the introduction of \(I+J\) dummy variables.
Figure 1: Representation of the full hamiltonian for a problem with \(J\) agents -that is, \(J\) jobs with assigned agents- and \(I\) vacant jobs, where the solution to the problem has involved moving agents \(a_{1}\) and \(a_{3}\) from their original jobs to the originally vacant jobs \(v_{2}\) and \(v_{I}\). Notice how not all vacants are necessarily filled and not all agents are necessarily reallocated.
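As a quick sanity check of this \(0.5\) trick, the snippet below evaluates the penalty \((s-0.5)^{2}\) for \(s\) active variables in one constraint sum: \(s=0\) and \(s=1\) are degenerate minima, while \(s\geq 2\) is increasingly penalized, which is exactly the desired "at most one" behavior.

```python
# Penalty (s - 0.5)^2 as a function of the number s of active variables:
for s in range(4):
    print(s, (s - 0.5) ** 2)  # 0 -> 0.25, 1 -> 0.25, 2 -> 2.25, 3 -> 6.25
```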
The complete hamiltonian is then, merging (5), (7) and (8),
\[\begin{split} H=-\sum_{ij}&\bigg{[}c^{\mathcal{P}} \left(\mathcal{P}_{i}^{V}-\mathcal{P}_{j}^{C}\right)+c^{\mathcal{A}}\left( \mathcal{A}_{ij}^{V}-\mathcal{A}_{jj}^{C}\right)\bigg{]}x_{ij}\\ &+\lambda_{1}^{R}\sum_{i}\bigg{(}\sum_{j}x_{ij}-0.5\bigg{)}^{2}\\ &+\lambda_{2}^{R}\sum_{j}\bigg{(}\sum_{i}x_{ij}-0.5\bigg{)}^{2} \,\end{split} \tag{9}\]
summed over the \(i,j\) pairs represented by the lines of the graph.
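A possible in-code form of Eq. (9) is sketched below. For binary \(x\), expanding \((\sum x-0.5)^{2}\) leaves only the pairwise couplings \(2\lambda\,xx^{\prime}\) plus a constant, so the restriction terms enter the QUBO purely as quadratic penalties. The function names, the dictionary encoding, the pre-filtering of non-positive scores (anticipating the first heuristic of the next section), and the brute-force solver are our own illustrative choices; \(\lambda_{1}^{R},\lambda_{2}^{R}\) must exceed the largest score for the constraints to hold.

```python
import itertools
import numpy as np

def build_qubo(S, lam1, lam2):
    """QUBO coefficients of Eq. (9) over the variables with S_ij > 0.
    Constant offsets are dropped, as they do not change the argmin."""
    I, J = S.shape
    keep = [(i, j) for i in range(I) for j in range(J) if S[i, j] > 0]
    # Linear part: with x^2 = x the linear penalty terms cancel exactly,
    # leaving only the -S_ij scores on the diagonal.
    Q = {(v, v): -S[v] for v in keep}
    for v, w in itertools.combinations(keep, 2):
        coup = 2 * lam1 * (v[0] == w[0]) + 2 * lam2 * (v[1] == w[1])
        if coup:
            Q[(v, w)] = coup  # penalize sharing a vacant job i or an agent j
    return Q, keep

def brute_force(Q, variables):
    """Exhaustive minimizer, feasible only for small subproblems."""
    best, best_e = None, float("inf")
    for bits in itertools.product((0, 1), repeat=len(variables)):
        x = dict(zip(variables, bits))
        e = sum(c * x[v] * x[w] for (v, w), c in Q.items())
        if e < best_e:
            best, best_e = x, e
    return best, best_e

# Toy usage: I = 3 vacant jobs, J = 4 agents, random scores in [-1, 1].
rng = np.random.default_rng(0)
S = rng.uniform(-1, 1, size=(3, 4))
Q, keep = build_qubo(S, lam1=2.0, lam2=2.0)
solution, energy = brute_force(Q, keep)
print([v for v, b in solution.items() if b])  # chosen (vacant, agent) pairs
```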
## V Heuristic variable reduction and problem segmentation
In order to diminish the needed number of qubits \(N\), which in the full problem of Fig. 1 equals \(J\times I\), certain simplifications will be made.
Firstly, only changes with \(\mathcal{S}_{ij}>0\) will be taken into account. Changes with a negative score will be ignored, even if they would allow a second, positive-scored change which resulted in a net gain. It is assumed that the initial distribution is already in a sensible state, and as such it would be difficult to obtain a gain with that kind of second-order movement.
Secondly, changes with \(\Delta_{ij}^{\mathcal{P}}<0\) will also be ignored, since the ultimate goal of the optimization is to maximize the total priority of the assigned jobs.
Assuming about half of the \(i,j\) pairs are discarded for having \(\mathcal{S}_{ij}\leq 0\) or \(\Delta_{ij}^{\mathcal{P}}<0\) -since the two quantities are strongly correlated; otherwise \(\nicefrac{{3}}{{4}}\) of the pairs would be discarded-, these simplifications reduce the expected number of qubits needed to approximately \(N\approx(J\times I)/2\), thus reducing the search space to a size of \(\sqrt{2^{J\times I}}\) and achieving a quadratic speedup.
Moreover, ignoring changes with \(\Delta_{ij}^{\mathcal{P}}<0\) allows us to divide the problem into smaller subproblems. These subproblems will be generated as a function of the \(\mathcal{P}_{i}^{V}\) priorities of the vacant jobs. If those values are discrete and can take \(D\) different values, one subproblem will be created for each value. If the values are continuous, they can be grouped into \(D\) intervals of length \(1/D\). Then for the \(d\)th subproblem, only those vacant jobs with \(\mathcal{P}_{i}^{V}\geq 1-\frac{d}{D}\) are considered. After having solved the \(d=1\) subproblem, those vacant jobs which have been successfully filled will not be included in the \(d=2\) subproblem. However, those jobs that have been emptied as a result of their agent being reassigned will be included as vacant jobs. This means all \(\mathcal{A}_{kj}\) are potentially needed for the resolution, not only the \(\mathcal{A}_{ij}^{V}\) and \(\mathcal{A}_{jj}^{C}\) of the original full problem. Moreover, as \(\Delta_{ij}^{\mathcal{P}}<0\) variables are ignored, only those agents with jobs with \(\mathcal{P}_{j}^{C}\leq\mathcal{P}_{d\ MAX}^{V}\) will be considered, where \(\mathcal{P}_{d\ MAX}^{V}\) is the maximum priority among all considered \(\mathcal{P}_{i}^{V}\geq 1-\frac{d}{D}\).
We can also estimate the effect of the subproblemation on the size of the search space and the runtime. It segments the \(I\) vacant jobs of the complete problem into \(D\) subproblems, giving an average of \(\langle I_{D}\rangle=I/D\) vacant jobs per subproblem. As one can see in Fig. 2, the number of available agents is also reduced with each step, each step having \(\alpha\langle I_{D}\rangle\) fewer agents than the previous one, where \(\alpha\) is the average proportion of vacant jobs that are covered in each subproblem. Then, the expected number of agents of the \(d\)th subproblem, for \(d\in\{1,...,D\}\), is \(J-\alpha\langle I_{D}\rangle(d-1)\), for an average of \(\langle J_{D}\rangle=J-\alpha\langle I_{D}\rangle(D-1)/2\) over the \(D\) subproblems. In total, each subproblem will have an expected size of \(\langle N_{D}\rangle=\langle J_{D}\rangle\times\langle I_{D}\rangle=JI/D-\alpha I^{2}(D-1)/(2D^{2})=\mathcal{O}(K^{2}/D)\). This in turn makes the search space of each subproblem size \(\mathcal{O}(2^{K^{2}/D})\), and the total runtime of the \(D\) subproblems \(\mathcal{O}(D2^{K^{2}/D})\).
With all the heuristics combined, the search space of each subproblem is expected to be reduced to size \(\mathcal{O}(2^{K^{2}/2D})\), and the total runtime of the \(D\) subproblems to \(\mathcal{O}(D2^{K^{2}/2D})\).
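These averages are easy to tabulate. The helper below simply evaluates the expressions above; the function name and the example values are our own illustrative choices.

```python
def expected_subproblem_size(J, I, D, alpha):
    """Average subproblem dimensions <J_D> x <I_D> from this section,
    before the additional factor-of-2 pruning of non-positive scores."""
    I_d = I / D                              # vacant jobs per subproblem
    J_d = J - alpha * I_d * (D - 1) / 2      # average available agents
    return J_d * I_d

# Example: J = 30 agents, I = 10 vacant jobs, D = 4 subproblems, alpha = 0.8.
print(30 * 10, expected_subproblem_size(30, 10, 4, 0.8))  # 300 vs 67.5 variables
```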
In Fig. 2 a representative example of such a subproblemation scheme is shown, for a \(D=2\) case. Notice how, in the example, \(a_{j}\) jobs emptied in the first subproblem can be taken by other agents in the second, lower-priority subproblem. In this case, the subproblemation has reduced a problem of size \(J\times I=4\times 5=20\) down to a \(4\times 3=12\) and a \(2\times 3=6\) problem: not only is the total size smaller \((18<20)\), but each of the subproblems can be executed on a machine with a significantly lower number of logical qubits. Moreover, applying our first variable-reduction criterion, only \(x_{ij}\) variables whose \(\mathcal{S}_{ij}\) scores are positive are taken into account. This is represented by the omission of some of the gray connections in the figure. In this example, this reduces the needed number of qubits of the two subproblems to \(7\) and \(4\) respectively.
Figure 2: Representation of the subproblem hamiltonians for a problem with \(J=4\) agents and \(I=5\) vacant jobs, divided into \(D=2\) subproblems. The first solves for the high \(\mathcal{P}_{i}^{V}\) vacants \(\{v_{i}\}_{1}=\{v_{1},v_{2},v_{3}\}\), and then a second subproblem takes care of the remaining \(\{v_{4},v_{5}\}\), as well as the newly-emptied \(\{a_{3}\}\), which makes \(\{v_{i}\}_{2}=\{v_{4},v_{5},a_{3}\}\). Meanwhile \(\{a_{1}\}\), an especially low-priority job, is not even considered in this second subproblem. After resolution, the list of jobs with assigned agents has become \(\{v_{1},v_{4},v_{3},a_{3}\}\).
## VI Summary of the quantum algorithm
**Inputs:**
* The \(\{a_{j}\}\) list of the \(J\) jobs with an assigned agent, including the identity of the assigned agent \(j\).
* The \(\mathcal{P}^{C}_{j}\) priorities of the \(\{a_{j}\}\) jobs.
* The \(\{v_{i}\}\) list of the \(I\) vacant jobs.
* The \(\mathcal{P}^{V}_{i}\) priorities of the \(\{v_{i}\}\) jobs.
* The \(\mathcal{A}_{kj}\) affinities between each agent \(j\) and each job \(k\). These affinity scores will be used in the form of the affinity of agent \(j\) with the vacant job \(i\), \(\mathcal{A}^{V}_{ij}\), and the affinity of agent \(j\) with the job they are currently covering, \(\mathcal{A}^{C}_{jj}\).
* The \(c^{\mathcal{P}}\) and \(c^{\mathcal{A}}\) coefficients regulating the relative weights of the priority and affinity terms.
**Procedure:**
1. Decide, depending on the characteristics of \(\{v_{i}\}\) and \(\mathcal{P}^{V}_{i}\), the number \(D\) of subproblems:
   1. If the values of \(\mathcal{P}^{V}_{i}\) are discrete, one subproblem will be created for each value.
   2. If the values of \(\mathcal{P}^{V}_{i}\) are continuous, the elements of \(\{v_{i}\}\) can be grouped into \(D\) intervals of average length \(1/D\).
2. Divide \(\{v_{i}\}\) into the \(D\) different sublists \(\{v_{i}\}_{d}\), with \(d\) ordered from high to low \(\mathcal{P}^{V}_{i}\). Each \(\{v_{i}\}_{d}\) will contain vacant jobs with priorities \(\mathcal{P}^{V}_{i}\in[\mathcal{P}^{V}_{d\ min},\mathcal{P}^{V}_{d\ MAX}]\).
3. For \(\delta\in\{1,...,D\}\), repeat iteratively steps a) \(\rightarrow\) e) (a code sketch of this loop is given after the output description below):
   a) Knowing only \(\Delta^{\mathcal{P}}_{ij}>0\) changes can happen, create a sublist \(\{a_{j}\}_{\delta}\) with only those assigned jobs with \(\mathcal{P}^{C}_{j}<\mathcal{P}^{V}_{\delta\ MAX}\).
   b) Create a graph connecting all elements of \(\{a_{j}\}_{\delta}\) with all elements of \(\{v_{i}\}_{\delta}\), and then remove all \(i,j\) connections with \(\mathcal{S}_{ij}\leq 0\).
   c) Find the minimum of the \(H\) hamiltonian described by the graph, which has the QUBO form given by (9), taking into account that each connection is a representation of one \(x_{ij}\) binary variable.
   d) Update the \(\{a_{j}\}\) list of jobs which have an assigned worker, taking into account the new locations of the reallocated workers by removing new vacants and adding newly filled jobs. Note that the elements of the list change and thus the \(j\) indices refer to different jobs in different iterations. However, the number of elements \(J\) does not change.
   e) Update all the \(\{v_{i}\}_{d}\) to include the jobs the reallocated agents have just left vacant. Only \(\{v_{i}\}_{d}\) with \(d\in\{\delta+1,...,D\}\) will need an update.
      1. For discrete values of \(\mathcal{P}^{V}_{i}\), each of the new vacants will naturally be included in the \(\{v_{i}\}_{d}\) group that corresponds to their \(\mathcal{P}^{C}_{j}\).
      2. For continuous values of \(\mathcal{P}^{V}_{i}\), sometimes jobs with \(\mathcal{P}^{C}_{j}\in[\mathcal{P}^{V}_{\delta\ min},\mathcal{P}^{V}_{\delta\ MAX}]\) will be left vacant. In that case, they should be included in the immediately subsequent list \(\{v_{i}\}_{\delta+1}\), for them to be included in the subproblem of the next iteration. \(\mathcal{P}^{V}_{\delta+1\ MAX}\) will then need to be updated, taking the value of the highest \(\mathcal{P}^{C}_{j}\) included.
**Output:** An updated version of the \(\{a_{j}\}\) list with the jobs that have ended up with an assigned agent -and the identity of that agent for each job.
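To make steps 1-3b concrete, below is a minimal Python sketch of the vacant-list partitioning and candidate-graph construction. The function names, the normalization of priorities to \([0,1]\), and the `score` callable standing in for \(\mathcal{S}_{ij}\) are illustrative assumptions rather than part of the original algorithm.

```python
def partition_vacants(vacants, prio_v, D):
    """Step 2: group vacant jobs into D priority intervals, highest first.
    Assumes the priorities in prio_v are normalized to [0, 1]."""
    groups = [[] for _ in range(D)]
    for v in vacants:
        d = min(int((1.0 - prio_v[v]) * D), D - 1)  # interval index, 0 = highest priority
        groups[d].append(v)
    return groups

def candidate_edges(assigned, vacants_d, prio_c, prio_v, score):
    """Steps 3a-3b: keep only agents that could gain priority, then drop
    every (i, j) connection whose score S_ij is non-positive."""
    p_max = max(prio_v[v] for v in vacants_d)
    agents_d = [a for a in assigned if prio_c[a] < p_max]  # only Delta^P_ij > 0 possible
    return [(i, j) for i in vacants_d for j in agents_d if score(i, j) > 0]
```

Each surviving edge \((i,j)\) then corresponds to one binary variable \(x_{ij}\) of the QUBO hamiltonian in (9).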
## VII Conclusions
In this paper we described the heuristic subproblem scheme developed for solving the Job Reassignment Problem posed in the joint project between i3b (_Instituto de Innovación Ibermática_) and the ONCE (_Organización Nacional de Ciegos Españoles_).
This algorithm, like all heuristic methods, does not guarantee a speedup or a solution of quality for all possible instances of the problem. However, the performance of the method for an average problem can be calculated, showing an exponential speedup over the resolution of the full problem without subproblemation. Moreover, the subproblemation scheme uniquely allows for agents to be reallocated to originally non-vacant jobs, which means that in some cases the given answer can be of better quality than the optimum of the original full problem.
The presented algorithm works over a QUBO hamiltonian cost function to allow resolution by both gate-based and annealing quantum computers, as well as classical and quantum-inspired resolution methods. As it divides the QUBO hamiltonian of the full problem into several smaller problems which are still in QUBO form, the advantage of the subproblem scheme is device-agnostic and stacks with the advantages of other QUBO resolution methods. As usual, all the advantages of the method remain when using the Ising forms of the hamiltonians.
## Acknowledgments
We thank ONCE INNOVA for the project proposal which kickstarted this work.
The research leading to this paper has received funding from the Q4Real project (Quantum Computing for Real Industries), HAZITEK 2022, no. ZE-2022/00033.
(c) 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
|
2309.08895 | CDDM: Channel Denoising Diffusion Models for Wireless Semantic
Communications | Diffusion models (DM) can gradually learn to remove noise, which have been
widely used in artificial intelligence generated content (AIGC) in recent
years. The property of DM for eliminating noise leads us to wonder whether DM
can be applied to wireless communications to help the receiver mitigate the
channel noise. To address this, we propose channel denoising diffusion models
(CDDM) for semantic communications over wireless channels in this paper. CDDM
can be applied as a new physical layer module after the channel equalization to
learn the distribution of the channel input signal, and then utilizes this
learned knowledge to remove the channel noise. We derive corresponding training
and sampling algorithms of CDDM according to the forward diffusion process
specially designed to adapt the channel models and theoretically prove that the
well-trained CDDM can effectively reduce the conditional entropy of the
received signal under small sampling steps. Moreover, we apply CDDM to a
semantic communications system based on joint source-channel coding (JSCC) for
image transmission. Extensive experimental results demonstrate that CDDM can
further reduce the mean square error (MSE) after minimum mean square error
(MMSE) equalizer, and the joint CDDM and JSCC system achieves better
performance than the JSCC system and the traditional JPEG2000 with low-density
parity-check (LDPC) code approach. | Tong Wu, Zhiyong Chen, Dazhi He, Liang Qian, Yin Xu, Meixia Tao, Wenjun Zhang | 2023-09-16T06:32:13Z | http://arxiv.org/abs/2309.08895v1 | # CDDM: Channel Denoising Diffusion Models for Wireless Semantic Communications
###### Abstract
Diffusion models (DM) can gradually learn to remove noise, which have been widely used in artificial intelligence generated content (AIGC) in recent years. The property of DM for eliminating noise leads us to wonder whether DM can be applied to wireless communications to help the receiver mitigate the channel noise. To address this, we propose channel denoising diffusion models (CDDM) for semantic communications over wireless channels in this paper. CDDM can be applied as a new physical layer module after the channel equalization to learn the distribution of the channel input signal, and then utilizes this learned knowledge to remove the channel noise. We derive corresponding training and sampling algorithms of CDDM according to the forward diffusion process specially designed to adapt the channel models and theoretically prove that the well-trained CDDM can effectively reduce the conditional entropy of the received signal under small sampling steps. Moreover, we apply CDDM to a semantic communications system based on joint source-channel coding (JSCC) for image transmission. Extensive experimental results demonstrate that CDDM can further reduce the mean square error (MSE) after minimum mean square error (MMSE) equalizer, and the joint CDDM and JSCC system achieves better performance than the JSCC system and the traditional JPEG2000 with low-density parity-check (LDPC) code approach.
Diffusion models, wireless image transmission, semantic communications, joint source-channel coding.
## I Introduction
Diffusion models (DM) [2, 3, 4] have recently achieved unprecedented success in artificial intelligence generated content (AIGC) [5], including multimodal image generation and editing [6, 7], text, and video generation [8, 9]. DM is a class of latent variable models inspired by non-equilibrium thermodynamics. They directly model the score function of the likelihood function through variational lower bounds, resulting in advanced generative performance. Compared to previous generative models such as variational auto-encoder (VAE) [10], generative adversarial network (GAN) [11], and normalization flow (NF) [12], DM can learn fine-grained knowledge of the distribution, allowing it to generate contents with rich details. Additionally, diffusion models are capable of generating more diverse images and have been shown to be resistant to mode collapse. The emergence of implicit classifiers endows diffusion models with flexible controllability and enhanced efficiency, ensuring faithful generation in conditional generation tasks.
More specifically, DM gradually adds Gaussian noise to the available training data in the forward diffusion process until the data becomes pure noise. Then, in the reverse sampling process, it learns to recover the data from the noise, as shown in Fig. 1. Generally, given a data distribution \(\mathbf{x}_{0}\sim q(\mathbf{x}_{0})\), the forward diffusion process generates the \(t\)-th sample \(\mathbf{x}_{t}\) by sampling a Gaussian vector \(\epsilon\sim\mathcal{N}(0,\mathbf{I})\) as follows
\[\mathbf{x}_{t}=\sqrt{\bar{\alpha}_{t}}\mathbf{x}_{0}+\sqrt{1-\bar{\alpha}_{t }}\epsilon, \tag{1}\]
where \(\bar{\alpha}_{t}=\prod_{i=1}^{t}\alpha_{i}\) and \(\alpha_{i}\in(0,1)\) are hyperparameters.
In wireless communications, it is well known that the received signal \(y\) is a noisy and distorted version of the transmitted signal \(x\), e.g., we have the following for the additive white Gaussian noise (AWGN) channel
\[y=x+n, \tag{2}\]
where \(n\) is white Gaussian noise.
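To make this structural parallel concrete before comparing the two, the following minimal NumPy sketch places (1) and (2) side by side; the schedule value `alpha_bar_t` and the noise level `sigma` are arbitrary illustrative choices, not values from this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_diffusion(x0, alpha_bar_t):
    """Eq. (1): x_t = sqrt(a_bar_t) x_0 + sqrt(1 - a_bar_t) eps, eps ~ N(0, I)."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar_t) * x0 + np.sqrt(1.0 - alpha_bar_t) * eps

def awgn_channel(x, sigma):
    """Eq. (2): y = x + n, with white Gaussian noise n."""
    return x + sigma * rng.standard_normal(x.shape)

x0 = rng.standard_normal(8)        # a toy transmitted signal
x_t = forward_diffusion(x0, 0.9)   # one diffused sample
y = awgn_channel(x0, 0.1)          # one channel observation
# Both outputs are Gaussian-corrupted versions of x0, which motivates
# reusing a DM-style denoiser at the receiver.
```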
Interestingly, comparing (1) and (2), we can find that the designed process of DM and the wireless communications system are similar. DM progressively learns to effectively remove noise, thereby generating data that closely resembles the original distribution, while the receiver in the wireless communications system aims to recover the transmitted signal from the received signal. This naturally raises the question: **can DM be applied to the wireless communications system to help the receiver remove noise?**
Motivated by this, in this paper, we design channel denoising diffusion models (CDDM) for semantic communications in the wireless communications system. The proposed CDDM is conditioned on the received signal and channel estimation
Figure 1: The forward diffusion process with transition kernel \(q(\mathbf{x}_{t}|\mathbf{x}_{t-1})\) and the reverse sampling process with learnable transition kernel \(p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})\) of diffusion model in [3].
results to eliminate channel noise. In contrast to conventional generative models that only generate data adhering to the original data distribution, CDDM directly generates data that closely resembles the transmitted signal \(\mathbf{x}\), consequently enhancing the performance of the communications system. By employing carefully designed forward diffusion and reverse sampling processes based on an explicit conditional probabilistic model of the received signal, CDDM can adapt to different channel conditions, such as AWGN channel and Rayleigh fading channel with different signal-to-noise ratios (SNR). To leverage the received signal, we start the reverse sampling process from the received signal rather than pure noise, greatly reducing the number of reverse sampling steps and thus accelerating the process.
In contrast to the extensive research on DM in AIGC, there have been few works on DM in wireless communications so far. In [13], DM is employed to generate the wireless channel for an end-to-end communications system, achieving almost the same performance as the channel-aware case. In [14], DM with an adapted diffusion process is proposed for the decoding of algebraic block codes. Additionally, [15] applies DM as the semantic decoder to generate the image conditioned on the transmitted semantic segment labels of the original image, achieving excellent mean intersection over union (mIoU) and learned perceptual image patch similarity (LPIPS) performance.
On the other hand, semantic communications [16, 17] have emerged as a novel paradigm that facilitates the seamless integration of information and communication technology with artificial intelligence (AI), which have been recognized as a highly promising solution for the sixth-generation (6G) wireless networks [18]. Semantic communications emphasize the transmission of valuable semantic information rather than bits, thereby guaranteeing improved transmission efficiency and reliability. One fundamental concept behind semantic communications is to bridge the source and channel components of Shannon theory [19], thereby enhancing the overall performance of end-to-end transmission. The paradigm focusing on the integrated design of source and channel coding processing is known as joint source-channel coding (JSCC), which is a classical subject in the coding theory and information theory [20, 21, 22]. However, traditional JSCC techniques are predominantly rooted in complex and explicit probabilistic models, heavily relying on expert manual designs which often face challenges when dealing with complex sources. Moreover, these JSCC techniques overlook semantic aspects and lack optimization for specific tasks or human visual perception.
Many previous studies investigate deep-learning based JSCC techniques for semantic communications [23, 24, 25, 26, 27, 28]. Most studies concentrate on designing specific frameworks for different data modalities and have achieved better performance compared with traditional wireless transmission schemes. For wireless image transmission, [24] proposes a novel JSCC method based on attention mechanisms, which can automatically adapt to various channel conditions. In [26], an entropy model is proposed to achieve adaptive rate control for a deep learning based JSCC architecture for semantic communications. In [27], the swin transformer [29] is integrated into the deep JSCC framework to improve the performance of wireless image transmission. [28] develops a joint coding-modulation method and achieves an end-to-end digital semantic communication system for image transmission, outperforming the analog-based JSCC system at low SNRs. Generally, deep-learning based JSCC has shown great performance surpassing classic separation-based JPEG2000 source coding and advanced low-density parity-check (LDPC) channel coding, especially for small-size images and under human visual perception evaluation metrics such as the multi-scale structural similarity index measure (MSSSIM) [30].
Despite its great potential, previous studies predominantly concentrate on the development of a more sophisticated model architecture with increased capacity to enhance overall performance. The channel distortion is handled through direct end-to-end optimization. In this case, the JSCC models solely learn coding and decoding strategies by utilizing received signal samples, combating channel interference. To more effectively mitigate channel interference, we integrate the CDDM with the JSCC-based semantic communications system for wireless image transmission, where the signal after CDDM is fed into the JSCC decoder to recover the image. As previously discussed, our CDDM is specially developed to mitigate channel distortion by eliminating channel noise based on an explicit probability of the received signal, thereby improving the performance of the JSCC-based semantic communication system.
The contributions of this paper can be summarized as follows.
* We design a CDDM module based on the U-Net framework in wireless communications, which lies after the channel equalization (or without channel equalization) over the Rayleigh channel (or AWGN channel). The CDDM module learns the distribution of the channel input signal to predict the channel noise and remove it. The model is trained through the forward diffusion process specially designed to adapt to the channel models, requiring no knowledge of the current channel state. After training, the CDDM processes the received signal after equalization with the corresponding sampling algorithm, succeeding in eliminating the channel noise.
* We derive the explicit conditional probability of the received signal after equalization according to the mathematical channel model and the equalization algorithm, which instructs us to design the corresponding forward diffusion process to match the conditional distribution. The training of the proposed CDDM is accomplished by maximizing the variational lower bound of the log-likelihood function, which is relaxed by introducing a series of latent variables in the forward diffusion process. Furthermore, we decompose the variational lower bound into multiple components associated with the latent variables and derive the final loss function using re-parameterization and re-weighting techniques to optimize these components respectively. By utilizing the Bayesian conditional posterior probability, we obtain a sampling algorithm that successfully and effectively mitigates the channel noise.
* We derive the sufficient condition for the reverse sampling algorithm to reduce the conditional entropy of the received signal. Through Monte Carlo experiments, we discover that the magnitude of the reduction in the upper bound of the conditional entropy differs across sampling steps, providing insights for selecting the maximum sampling step.
* We apply the CDDM to a semantic communications system based on the JSCC technique for wireless image transmission, called the joint CDDM and JSCC system. Experiments on the mean square error (MSE) between the transmitted signal and the received signal after CDDM prove that, compared to the system without CDDM, the system with CDDM achieves a smaller MSE for both Rayleigh fading channel and AWGN channel, indicating that the proposed CDDM can effectively reduce the impact of channel noise through learning. Finally, extensive experimental results on different datasets demonstrate that the joint CDDM and JSCC system outperforms both the JSCC system and the traditional JPEG2000 with LDPC codec system under both AWGN and Rayleigh fading channels in terms of the peak signal-to-noise ratio (PSNR) and MSSSIM. We also evaluate its inherent robustness to channel estimation errors and its adaptability to various SNRs.
The rest of this paper is organized as follows. The system model is introduced in Section II. The detail of the proposed CDDM is presented in Section III. The joint CDDM and JSCC system for semantic communications is introduced in Section IV. Finally, extensive experimental results are presented in Section V, and conclusions are drawn in Section VI.
## II System model
In this section, we describe the system in which the proposed CDDM is employed after the channel equalization, as shown in Fig. 2. CDDM is trained using a specialized noise schedule adapted to the wireless channel, which enables it to effectively eliminate channel noise through the sampling algorithm.
Let \(\mathbf{x}\in\mathbb{R}^{2k}\) be the real-valued symbols. Here, \(k\) is the number of channel uses. \(\mathbf{x}_{\mathbf{c}}\in\mathbb{C}^{k}\) is the complex-valued symbols which can be transmitted through the wireless channel, and the \(i\)-th transmitted symbol of \(\mathbf{x}_{\mathbf{c}}\) can be expressed as \(x_{c,i}=x_{i}+jx_{i+k}\), for \(i=1,...,k\).
Thus, the \(i\)-th received symbol of the received signal \(\mathbf{y}_{\mathbf{c}}\) is
\[y_{c,i}=h_{c,i}x_{c,i}+n_{c,i}, \tag{3}\]
where \(h_{c,i}\sim\mathbb{CN}(0,1)\) are independent and identically distributed (i.i.d.) Rayleigh fading gains, \(\mathbf{x}_{\mathbf{c}}\) has a power constraint \(\mathbb{E}[||\mathbf{x}_{\mathbf{c}}||_{2}^{2}]\leq 1\), and \(n_{c,i}\sim\mathbb{CN}(0,2\sigma^{2})\) are i.i.d. AWGN samples.
\(\mathbf{y}_{\mathbf{c}}\) is then processed by equalization into \(\mathbf{y}_{\mathbf{eq}}\in\mathbb{C}^{k}\), followed by a normalization-reshape module outputting a real vector \(\mathbf{y}_{\mathbf{r}}\in\mathbb{R}^{2k}\). We consider that the receiver can obtain the channel state \(\mathbf{h}_{\mathbf{c}}=[h_{c,1},...,h_{c,k}]\) through channel estimation, and in this paper we apply the minimum mean square error (MMSE) equalizer. Therefore, we can derive the conditional distribution of \(\mathbf{y}_{\mathbf{r}}\) with known \(\mathbf{x}\) and \(\mathbf{h}_{\mathbf{c}}\), which can be formulated to instruct the forward diffusion and reverse sampling processes of CDDM.
**Proposition 1**.: _With MMSE, the conditional distribution of \(\mathbf{y}_{\mathbf{r}}\) with known \(\mathbf{x}\) and \(\mathbf{h}_{\mathbf{c}}\) under Rayleigh fading channel is_
\[p(\mathbf{y}_{\mathbf{r}}|\mathbf{x},\mathbf{h}_{\mathbf{c}})\sim\mathcal{N} (\mathbf{y}_{\mathbf{r}};\frac{1}{\sqrt{1+\sigma^{2}}}\mathbf{W}_{\mathbf{s}} \mathbf{x},\frac{\sigma^{2}}{1+\sigma^{2}}\mathbf{W}_{\mathbf{n}}^{2}), \tag{4}\]
_where \(\mathbf{H}_{\mathbf{r}}=diag(\mathbf{h}_{\mathbf{r}})\), \(\mathbf{h}_{\mathbf{r}}=\begin{bmatrix}|\mathbf{h}_{\mathbf{c}}|\\ |\mathbf{h}_{\mathbf{c}}|\end{bmatrix}\in\mathbb{R}^{2k}\), and_
\[\mathbf{W}_{\mathbf{s}}=\mathbf{H}_{\mathbf{r}}^{2}(\mathbf{H}_{\mathbf{r}}^ {2}+2\sigma^{2}\mathbf{I})^{-1},\mathbf{W}_{\mathbf{n}}=\mathbf{H}_{\mathbf{r }}(\mathbf{H}_{\mathbf{r}}^{2}+2\sigma^{2}\mathbf{I})^{-1}. \tag{5}\]
Proof.: Based on the definition, \(\mathbf{W}_{\mathbf{s}}\) and \(\mathbf{W}_{\mathbf{n}}\) are diagonal matrices, where the \(i\)-th and (\(i+k\))-th diagonal elements are
\[W_{s,i}=W_{s,i+k}=\frac{|h_{c,i}|^{2}}{|h_{c,i}|^{2}+2\sigma^{2}},\] \[W_{n,i}=W_{n,i+k}=\frac{|h_{c,i}|}{|h_{c,i}|^{2}+2\sigma^{2}}. \tag{6}\]
Figure 2: Architecture of the joint CDDM and JSCC system.
The \(i\)-th output of MMSE \(y_{eq,i}\) can be expressed as
\[y_{eq,i}=\frac{|h_{c,i}|^{2}x_{c,i}+h_{c,i}^{H}n_{c,i}}{|h_{c,i}|^{2}+2\sigma^{2}}. \tag{7}\]
Based on (6), we have
\[\frac{|h_{c,i}|^{2}x_{c,i}}{|h_{c,i}|^{2}+2\sigma^{2}}=W_{s,i}x_{c,i}. \tag{8}\]
With the resampling trick, the conditional distributions of real part and imaginary part of \(\frac{h_{c,i}^{H}n_{c,i}}{|h_{c,i}|^{2}+2\sigma^{2}}\) are
\[p(Re(\frac{h_{c,i}^{H}n_{c,i}}{|h_{c,i}|^{2}+2\sigma^{2}})|h_{c,i}) \sim\mathcal{N}(0,\sigma^{2}(\frac{|h_{c,i}|}{|h_{c,i}|^{2}+2 \sigma^{2}})^{2})\] \[=\mathcal{N}(0,\sigma^{2}W_{n,i}^{2}), \tag{9}\]
\[p(Im(\frac{h_{c,i}^{H}n_{c,i}}{|h_{c,i}|^{2}+2\sigma^{2}})|h_{c,i})\sim \mathcal{N}(0,\sigma^{2}W_{n,i}^{2}). \tag{10}\]
Accordingly, we can rewrite \(\mathbf{y_{r}}\) as
\[\mathbf{y_{r}}=\frac{1}{\sqrt{1+\sigma^{2}}}(\mathbf{W_{s}x}+\mathbf{n_{r}}), \tag{11}\]
and the distribution \(p(\mathbf{n_{r}}|\mathbf{h_{c}})\) is \(\mathcal{N}(0,\sigma^{2}\mathbf{W_{n}^{2}})\).
Therefore, we have
\[p(\mathbf{y_{r}}|\mathbf{x},\mathbf{h_{c}})\sim\mathcal{N}(\mathbf{y_{r}}; \frac{1}{\sqrt{1+\sigma^{2}}}\mathbf{W_{s}x},\frac{\sigma^{2}}{1+\sigma^{2}} \mathbf{W_{n}^{2}}). \tag{12}\]
Similarly, we have the following proposition for AWGN channel.
**Proposition 2**.: _Under AWGN channel, the conditional distribution of \(\mathbf{y_{r}}\) with known \(\mathbf{x}\) is_
\[p(\mathbf{y_{r}}|\mathbf{x})\sim\mathcal{N}(\mathbf{y_{r}};\frac{1}{\sqrt{1+ \sigma^{2}}}\mathbf{W_{s}x},\frac{\sigma^{2}}{1+\sigma^{2}}\mathbf{W_{n}^{2} }), \tag{13}\]
_where \(\mathbf{W_{s}}\) and \(\mathbf{W_{n}}\) both become \(\mathbf{I}_{2k}\) under AWGN channel._
Proposition 1 and Proposition 2 demonstrate that the channel noise after equalization and normalization-reshape can be re-sampled using \(\epsilon\sim\mathcal{N}(0,\mathbf{I}_{2k})\). Additionally, the noise coefficient matrix \(\mathbf{W_{n}}\) is related to the modulus of \(\mathbf{h_{c}}\). As a result, \(\mathbf{y_{r}}\) can be expressed as
\[\mathbf{y_{r}}=\frac{1}{\sqrt{1+\sigma^{2}}}\mathbf{W_{s}x}+\frac{\sigma}{ \sqrt{1+\sigma^{2}}}\mathbf{W_{n}}\epsilon. \tag{14}\]
Therefore, the proposed CDDM is trained to obtain \(\epsilon_{\theta}(\cdot)\), which is an estimation of \(\epsilon\). Here, \(\theta\) is all parameters of CDDM. By using \(\epsilon_{\theta}(\cdot)\) and \(\mathbf{W_{n}}\), a sampling algorithm is proposed to obtain \(\mathbf{y}\) with the aim to recover \(\mathbf{W_{s}x}\), which will be described in the next section.
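As a concrete illustration of the receiver chain in (3)-(14), the hedged sketch below simulates a Rayleigh channel, applies the MMSE equalizer of (7), and forms \(\mathbf{y_{r}}\) together with the diagonals of \(\mathbf{W_{s}}\) and \(\mathbf{W_{n}}\) from (6). It is a minimal sketch for intuition, not the authors' implementation.

```python
import numpy as np

def mmse_receive(x, h_c, sigma, rng):
    """Simulate (3)-(14): Rayleigh fading, MMSE equalization, normalize-reshape.
    x: real vector of length 2k; h_c: complex fading gains of length k."""
    k = h_c.shape[0]
    x_c = x[:k] + 1j * x[k:]                                  # real -> complex symbols
    n_c = sigma * (rng.standard_normal(k) + 1j * rng.standard_normal(k))  # CN(0, 2*sigma^2)
    y_c = h_c * x_c + n_c                                     # eq. (3)
    y_eq = np.conj(h_c) * y_c / (np.abs(h_c) ** 2 + 2 * sigma ** 2)       # MMSE, eq. (7)
    y_r = np.concatenate([y_eq.real, y_eq.imag]) / np.sqrt(1 + sigma ** 2)  # eq. (11)
    h_r = np.abs(np.concatenate([h_c, h_c]))
    W_s = h_r ** 2 / (h_r ** 2 + 2 * sigma ** 2)              # diagonal of W_s, eq. (6)
    W_n = h_r / (h_r ** 2 + 2 * sigma ** 2)                   # diagonal of W_n, eq. (6)
    return y_r, W_s, W_n
```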
## III Channel Denoising Diffusion Models
The whole structure of the CDDM forward diffusion and reverse sampling processes is illustrated in Fig. 3. In this section, we first describe the training algorithm and sampling algorithm of the proposed CDDM. We then derive the sufficient condition for the reverse sampling algorithm to reduce the conditional entropy of the received signal.
### _Training Algorithm of CDDM_
For the forward process of the proposed CDDM, the original source \(\mathbf{x}_{0}\) is
\[\mathbf{x}_{0}=\mathbf{W_{s}x}. \tag{15}\]
Let \(T\) be the hyperparameter. Similar to (1), for all \(t\in\{1,2,...,T\}\), we define
\[\mathbf{x}_{t}=\sqrt{\alpha_{t}}\mathbf{x}_{t-1}+\sqrt{1-\alpha_{t}}\mathbf{W _{n}}\epsilon, \tag{16}\]
and then it can be re-parametered as
\[\mathbf{x}_{t}=\sqrt{\bar{\alpha}_{t}}\mathbf{x}_{0}+\sqrt{1-\bar{\alpha}_{t}} \mathbf{W_{n}}\epsilon, \tag{17}\]
such that the distribution \(q(\mathbf{x}_{t}|\mathbf{x}_{0},\mathbf{h_{r}})\) is
\[q(\mathbf{x}_{t}|\mathbf{x}_{0},\mathbf{h_{r}})\sim\mathcal{N}(\mathbf{x}_{t} ;\sqrt{\bar{\alpha}_{t}}\mathbf{x}_{0},(1-\bar{\alpha}_{t})\mathbf{W_{n}^{2} }). \tag{18}\]
Based on (4) and (18), if \(\bar{\alpha}_{m}=\frac{1}{1+\sigma^{2}}\), the Kullback-Leibler (KL) divergence is
\[D_{KL}(q(\mathbf{x}_{m}|\mathbf{x}_{0},\mathbf{h_{r}})||p(\mathbf{y_{r}}|\mathbf{x}_{0},\mathbf{h_{c}}))=0. \tag{19}\]
This indicates that, through the defined forward diffusion process, we progressively generate a signal following the same distribution as the one passed through the real channel and equalizer, so that **CDDM can be trained on \(\mathbf{x}_{m}\) instead of \(\mathbf{y_{r}}\)**. \(\mathbf{x}_{m}\) is defined by \(m\) steps as in (16), such that in the sampling process the distribution predicted by CDDM can be decomposed into \(m\) small steps, each of which is \(p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{h_{r}})\) for \(t\in\{1,2,...,m\}\).
The goal of CDDM is to recover \(\mathbf{x}_{0}\) by learning the distribution of \(\mathbf{x}_{0}\) and removing the channel noise. Therefore, the
Figure 3: The forward diffusion process and reverse sampling process of the proposed CDDM.
training of CDDM is performed by optimizing the variational bound on the negative log-likelihood \(L\). The variational bound of \(L\) is formed by \(\mathbf{x}_{0:m}\) and \(\mathbf{y_{r}}\), and is given by
\[L=\mathbb{E}\,[-\log p_{\theta}(\mathbf{x}_{0}|\mathbf{h}_{\mathbf{r}})]\leq\mathbb{E}_{q}\Big[-\log\frac{p_{\theta}(\mathbf{x}_{0:m},\mathbf{y_{r}}|\mathbf{h}_{\mathbf{r}})}{q(\mathbf{x}_{1:m},\mathbf{y_{r}}|\mathbf{x}_{0},\mathbf{h}_{\mathbf{r}})}\Big]\] \[=\mathbb{E}_{q}\,[\underbrace{D_{KL}(q(\mathbf{y_{r}}|\mathbf{x}_{0},\mathbf{h}_{\mathbf{r}})||p(\mathbf{y_{r}}|\mathbf{h}_{\mathbf{r}}))}_{L_{y}}-\underbrace{\log p_{\theta}(\mathbf{x}_{0}|\mathbf{x}_{1},\mathbf{h}_{\mathbf{r}})}_{L_{0}}\] \[+\underbrace{D_{KL}(q(\mathbf{x}_{m}|\mathbf{y_{r}},\mathbf{x}_{0},\mathbf{h}_{\mathbf{r}})||p_{\theta}(\mathbf{x}_{m}|\mathbf{y_{r}},\mathbf{h}_{\mathbf{r}}))}_{L_{m}}\] \[+\sum_{t=1}^{m}\underbrace{D_{KL}(q(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{x}_{0},\mathbf{h}_{\mathbf{r}})||p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{h}_{\mathbf{r}}))}_{L_{t-1}}], \tag{20}\]
where \(L_{m}\) guides the selection of the hyperparameter \(m\). In this paper, we select \(m\) by
\[arg\min_{m}~{}2\sigma^{2}-\frac{1-\bar{\alpha}_{m}}{\bar{\alpha}_{m}}. \tag{21}\]
Similar to the process in [3], \(L_{t-1}\) can be calculated in closed form using the Rao-Blackwellized method. The optimization objective of \(L_{t-1}\) can be simplified by adopting re-parameterization and re-weighting methods as follows
\[\mathbb{E}_{\mathbf{x}_{0},\epsilon}(||\mathbf{W}_{\mathbf{n}} \epsilon-\mathbf{W}_{\mathbf{n}}\epsilon_{\theta}(\mathbf{x}_{t},\mathbf{h}_{ \mathbf{r}},t)||_{2}^{2}), \tag{22}\]
where \(\epsilon_{\theta}(\mathbf{x}_{t},\mathbf{h}_{\mathbf{r}},t)\) is the output of CDDM. Moreover, (22) can be re-weighted by ignoring the noise coefficient matrix \(\mathbf{W}_{\mathbf{n}}\) as follows
\[\mathbb{E}_{\mathbf{x}_{0},\epsilon}(||\epsilon-\epsilon_{\theta}(\sqrt{\bar {\alpha}_{t}}\mathbf{x}_{0}+\sqrt{1-\bar{\alpha}_{t}}\mathbf{W}_{\mathbf{n}} \epsilon)||_{2}^{2}). \tag{23}\]
Finally, to optimize (23) for all \(t\in\{1,2,...,T\}\), the loss function of the proposed CDDM is expressed as follows
\[L_{CDDM}(\theta)=\mathbb{E}_{\mathbf{x}_{0},\epsilon,t}(|| \epsilon-\epsilon_{\theta}(\sqrt{\bar{\alpha}_{t}}\mathbf{x}_{0}+\sqrt{1-\bar {\alpha}_{t}}\mathbf{W}_{\mathbf{n}}\epsilon)||_{2}^{2}). \tag{24}\]
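The loss (24) translates directly into a standard denoiser-training step. Below is a hedged PyTorch sketch of one such step; `eps_model` stands in for the network \(\epsilon_{\theta}(\mathbf{x}_{t},\mathbf{h_{r}},t)\), and its exact interface is an assumption rather than the paper's API.

```python
import torch

def cddm_training_step(eps_model, x0, h_r, W_n, alpha_bar, optimizer):
    """One gradient step on L_CDDM in (24); x0 = W_s x, W_n and alpha_bar are 1-D tensors."""
    B = x0.shape[0]
    t = torch.randint(1, alpha_bar.numel(), (B,))                     # uniform diffusion step
    a_bar = alpha_bar[t].view(B, 1)
    eps = torch.randn_like(x0)
    x_t = torch.sqrt(a_bar) * x0 + torch.sqrt(1 - a_bar) * W_n * eps  # eq. (17)
    loss = ((eps - eps_model(x_t, h_r, t)) ** 2).mean()               # re-weighted objective (24)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```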
In summary, the proposed CDDM is capable of estimating the noise because, during training, it learns to approximate the real posterior distribution \(q(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{x}_{0},\mathbf{h}_{\mathbf{r}})\) with its parameterized distribution \(p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{h}_{\mathbf{r}})\). This distribution approximation can be recast as noise estimation, as shown in (24). The training procedures of the proposed CDDM are summarized in Algorithm 1.
```
Input: \(\mathbf{y_{r}}\), \(\mathbf{h}_{\mathbf{r}}\), hyperparameter \(m\)
Output: \(\mathbf{y}\)
1:\(\mathbf{x}_{m}=\mathbf{y_{r}}\)
2:for\(t=m,...,2\)do
3:\(\mathbf{z}=\mathbf{W}_{\mathbf{n}}\epsilon_{\theta}(\mathbf{x}_{t},\mathbf{h}_ {\mathbf{r}},t)\)
4:\(\mathbf{x}_{t-1}=\sqrt{\bar{\alpha}_{t-1}}(\frac{\mathbf{x}_{t}-\sqrt{1-\bar{ \alpha}_{t}}\mathbf{z}}{\sqrt{\bar{\alpha}_{t}}})+\sqrt{1-\bar{\alpha}_{t-1} }\mathbf{z}\)
5:endfor
6:\(t=1\)
7:\(\mathbf{z}=\mathbf{W}_{\mathbf{n}}\epsilon_{\theta}(\mathbf{x}_{1},\mathbf{h}_ {\mathbf{r}},1)\)
8:\(\mathbf{y}=\frac{\mathbf{x}_{1}-\sqrt{1-\bar{\alpha}_{1}}\mathbf{z}}{\sqrt{ \bar{\alpha}_{1}}}\)
```
**Algorithm 2** Sampling algorithm of CDDM
### _Sampling Algorithm of CDDM_
To reduce the time consumption of the sampling process and recover the transmitted signal accurately, (20) implies that selecting \(m\) according to (21) and setting \(\mathbf{x}_{m}=\mathbf{y_{r}}\) is a promising way. By utilizing the received signal \(\mathbf{y_{r}}\), only \(m\) steps need to be executed. For each time step \(t\in\{1,2,...,m\}\), the trained CDDM outputs \(\epsilon_{\theta}(\mathbf{x}_{t},\mathbf{h}_{\mathbf{r}},t)\), which attempts to predict \(\epsilon\) from \(\mathbf{x}_{t}\) without knowledge of \(\mathbf{x}_{0}\). A sampling algorithm is required to sample \(\mathbf{x}_{t-1}\). The process is executed \(m\) times so that \(\mathbf{x}_{0}\) can finally be computed.
We first define the sampling process \(f(\mathbf{x}_{t-1})\) with the knowledge of \(\epsilon\) as follows
\[f(\mathbf{x}_{t-1})=q(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{x}_{0},\mathbf{h}_ {\mathbf{r}}). \tag{25}\]
Applying Bayes rule, the distribution can be expressed as a Gaussian distribution
\[q(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{x}_{0},\mathbf{h}_{ \mathbf{r}})\] \[\sim\mathcal{N}(\mathbf{x}_{t-1};\sqrt{\bar{\alpha}_{t-1}} \mathbf{x}_{0}+\sqrt{1-\bar{\alpha}_{t-1}}\frac{\mathbf{x}_{t}-\sqrt{\bar{ \alpha}_{t}}\mathbf{x}_{0}}{\sqrt{1-\bar{\alpha}_{t}}},0), \tag{26}\]
where \(\mathbf{x}_{0}\) is acquired by re-writing (17) as following
\[\mathbf{x}_{0}=\frac{1}{\sqrt{\bar{\alpha}_{t}}}(\mathbf{x}_{t}- \sqrt{1-\bar{\alpha}_{t}}\mathbf{W}_{\mathbf{n}}\epsilon). \tag{27}\]
However, only \(\epsilon_{\theta}(\mathbf{x}_{t},\mathbf{h}_{\mathbf{r}},t)\) is available for sampling. \(\mathbf{x}_{0}\) is derived through an estimation process by replacing \(\epsilon\) with \(\epsilon_{\theta}(\mathbf{x}_{t},\mathbf{h}_{\mathbf{r}},t)\) as follows
\[\hat{\mathbf{x}}_{0}=\frac{1}{\sqrt{\bar{\alpha}_{t}}}(\mathbf{x}_{t}- \sqrt{1-\bar{\alpha}_{t}}\mathbf{W}_{\mathbf{n}}\epsilon_{\theta}(\mathbf{x}_{t}, \mathbf{h}_{\mathbf{r}},t)). \tag{28}\]
As a result, the sampling process is replaced with
\[f_{\theta}(\mathbf{x}_{t-1})=p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t},\hat{ \mathbf{x}}_{0},\mathbf{h}_{\mathbf{r}}). \tag{29}\]
Without the knowledge of \(\epsilon\), a sample of \(\mathbf{x}_{t-1}\) is
\[\mathbf{x}_{t-1}= \sqrt{\bar{\alpha}_{t-1}}(\underbrace{\frac{1}{\sqrt{\bar{ \alpha}_{t}}}(\mathbf{x}_{t}-\sqrt{1-\bar{\alpha}_{t}}\mathbf{W}_{\mathbf{n}} \epsilon_{\theta}(\mathbf{x}_{t},\mathbf{h}_{\mathbf{r}},t))}_{estimate~{}\mathbf{x}_{0}}\] \[+\underbrace{\sqrt{1-\bar{\alpha}_{t-1}}\mathbf{W}_{\mathbf{n}} \epsilon_{\theta}(\mathbf{x}_{t},\mathbf{h}_{\mathbf{r}},t)}_{sample~{}\mathbf{x}_{t-1}}. \tag{30}\]
Note that for the last step \(t=1\), we only predict \(\mathbf{x}_{0}\) such that sampling is taken as
\[\mathbf{y}=\frac{1}{\sqrt{\bar{\alpha}_{1}}}(\mathbf{x}_{1}- \sqrt{1-\bar{\alpha}_{1}}\mathbf{W}_{\mathbf{n}}\epsilon_{\theta}(\mathbf{x}_{1}, \mathbf{h}_{\mathbf{r}},1)). \tag{31}\]
The sampling method is summarized in Algorithm 2.
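Algorithm 2 maps to a short deterministic loop. The sketch below assumes `alpha_bar` is a 1-D tensor indexable by the integer step \(t\) and that `eps_model` follows the interface assumed earlier; it is an illustrative rendering of lines 1-8, not released code.

```python
import torch

@torch.no_grad()
def cddm_sample(eps_model, y_r, h_r, W_n, alpha_bar, m):
    """Algorithm 2: start from x_m = y_r and denoise for m steps."""
    x = y_r                                                    # line 1: x_m = y_r
    for t in range(m, 1, -1):                                  # lines 2-5
        z = W_n * eps_model(x, h_r, t)
        x0_hat = (x - torch.sqrt(1 - alpha_bar[t]) * z) / torch.sqrt(alpha_bar[t])        # eq. (28)
        x = torch.sqrt(alpha_bar[t - 1]) * x0_hat + torch.sqrt(1 - alpha_bar[t - 1]) * z  # eq. (30)
    z = W_n * eps_model(x, h_r, 1)                             # lines 6-8
    return (x - torch.sqrt(1 - alpha_bar[1]) * z) / torch.sqrt(alpha_bar[1])              # eq. (31)
```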
### _Analysis on the conditional entropy_
To explain the denoising ability of the CDDM, we compare the conditional entropy between \(\mathbf{x}_{t}\) and \(\mathbf{x}_{t-1}\), where \(\mathbf{x}_{t}\) is regarded as the received signal because (19) shows that \(\mathbf{x}_{t}\) follows the same conditional distribution as the received signal.
For all \(t\in\{1,2,...,T\}\), \(\mathbf{x}_{t}\) is acquired as in (17). According to (18), we can get the conditional entropy of the \(i\)-th element of \(\mathbf{x}_{t}\) as \(\mathcal{H}(x_{t,i}|\mathbf{x}_{0},\mathbf{h})=\frac{1}{2}\ln(W_{n,i}^{2}(1-\bar{\alpha}_{t}))+C\), \(i=1,2,...,2k\). Here, \(C\) is a constant. \(\mathbf{x}_{t-1}\) is sampled as in (30). However, \(\mathbf{x}_{t}\) is unknown in \(\mathcal{H}(x_{t-1,i}|\mathbf{x}_{0},\mathbf{h})\). We can re-parameterize (30) with (17) and obtain
\[\mathbf{x}_{t-1}=\sqrt{\bar{\alpha}_{t-1}}\mathbf{x}_{0}+\beta_{t}\mathbf{W} _{\mathbf{n}}\epsilon-\beta_{t}\mathbf{W}_{\mathbf{n}}\epsilon_{\theta}( \cdot)+\gamma_{t-1}\mathbf{W}_{\mathbf{n}}\epsilon_{\theta}(\cdot), \tag{32}\]
where \(\beta_{t}=\frac{\sqrt{\bar{\alpha}_{t-1}}\sqrt{1-\bar{\alpha}_{t}}}{\sqrt{\bar{\alpha}_{t}}}\) and \(\gamma_{t}=\sqrt{1-\bar{\alpha}_{t}}\), which follows from substituting (17) into (30). \(\epsilon\sim\mathcal{N}(0,\mathbf{I})\) and thus \(\mathbf{x}_{t-1}\) is a random variable with respect to \(\epsilon\) with unknown distribution.
Now, we introduce two assumptions for the following analysis.
**Assumption 1**.: _There exists a constant bound \(\tau>0\) on the element-wise loss function:_
\[\mathbb{E}_{\epsilon}(||\epsilon_{i}-\epsilon_{\theta,i}(\cdot)||_{2}^{2}) \leq\tau. \tag{33}\]
This reasonable and necessary assumption is derived from the fact that the network is optimized sufficiently, meaning the loss function \(\mathbb{E}_{\epsilon}(||\epsilon-\epsilon_{\theta}(\cdot)||_{2}^{2})\leq\chi\), which can be written into element-wise form as (33).
**Assumption 2**.: _The mathematical expectation of network output is 0, i.e.,_
\[\mathbb{E}_{\epsilon}(\epsilon_{\theta,i}(\cdot))=0. \tag{34}\]
This assumption will be verified through Monte-Carlo in the following. Thus, we have the following theorem.
**Theorem 1**.: _Based on the two assumptions mentioned above, for all \(t\in\{1,2,...,T\}\) and \(i=1,2,...,2k\), a sufficient condition for_
\[\mathcal{H}(x_{t-1,i}|\mathbf{x}_{0},\mathbf{h})\leq\mathcal{H}(x_{t,i}| \mathbf{x}_{0},\mathbf{h}) \tag{35}\]
_is_
\[\mathbb{E}_{\epsilon}(\epsilon_{\theta,i}^{2}(\cdot))\geq\frac{1-\bar{\alpha} _{t}-\beta_{t}\gamma_{t-1}}{\gamma_{t-1}^{2}-\beta_{t}\gamma_{t-1}}-\frac{ \beta_{t}^{2}-\beta_{t}\gamma_{t-1}}{\gamma_{t-1}^{2}-\beta_{t}\gamma_{t-1}}\tau. \tag{36}\]
Proof.: According to Assumption 1, we can derive the cross-correlation coefficient of the two random variables \(\epsilon_{i}\) and \(\epsilon_{\theta,i}(\cdot)\) as follows
\[\mathbb{E}_{\epsilon}(||\epsilon_{i}-\epsilon_{\theta,i}(\cdot)||_{2}^{2})= \mathbb{E}(\epsilon_{i}^{2}-2\epsilon_{i}\epsilon_{\theta,i}(\cdot)+\epsilon_ {\theta,i}^{2}(\cdot))\leq\tau. \tag{37}\]
We then have
\[2\mathbb{E}(\epsilon_{i}\epsilon_{\theta,i}(\cdot))\geq 1-\tau+\mathbb{E}( \epsilon_{\theta,i}^{2}(\cdot)). \tag{38}\]
Let \(\pi_{t-1,i}^{2}\) be the variance of \(x_{t-1,i}\). According to (32), (38) and Assumption 2, we have
\[\pi_{t-1,i}^{2}=\mathbb{E}(x_{t-1,i}^{2})-\mathbb{E}^{2}(x_{t-1,i})\] \[=W_{n,i}^{2}\mathbb{E}(\beta_{t}^{2}\epsilon_{i}^{2}+(\beta_{t}- \gamma_{t-1})^{2}\epsilon_{\theta,i}^{2}(\cdot)-2\beta_{t}(\beta_{t}-\gamma_{ t-1})\epsilon_{i}\epsilon_{\theta,i}(\cdot))\] \[\leq W_{n,i}^{2}(\beta_{t}^{2}+(\beta_{t}-\gamma_{t-1})^{2} \mathbb{E}(\epsilon_{\theta,i}^{2}(\cdot))\] \[-\beta_{t}(\beta_{t}-\gamma_{t-1})(1-\tau+\mathbb{E}(\epsilon_{ \theta,i}^{2})))\] \[=W_{n,i}^{2}((\gamma_{t-1}^{2}-\beta_{t}\gamma_{t-1})\mathbb{E}( \epsilon_{\theta,i}^{2}(\cdot))\] \[+\beta_{t}\gamma_{t-1}+(\beta_{t}^{2}-\beta_{t}\gamma_{t-1})\tau). \tag{39}\]
Let \(u_{\tau}(t,\mathbf{h})\) be the upper bound of \(\mathcal{H}(x_{t-1,i}|\mathbf{x}_{0},\mathbf{h})\). With the maximum entropy principle, we have
\[\mathcal{H}(x_{t-1,i}|\mathbf{x}_{0},\mathbf{h}) \leq\frac{1}{2}\ln(\pi_{t-1,i}^{2})+C\] \[\leq\frac{1}{2}\ln(W_{n,i}^{2}((\gamma_{t-1}^{2}-\beta_{t}\gamma_ {t-1})\mathbb{E}(\epsilon_{\theta,i}^{2}(\cdot))\] \[+\beta_{t}\gamma_{t-1}+(\beta_{t}^{2}-\beta_{t}\gamma_{t-1})\tau))+C\] \[\triangleq u_{\tau}(t,\mathbf{h}). \tag{40}\]
Figure 4: Experiment results of \(\mathbb{E}(\epsilon_{\theta}(\cdot))\) and \(\mathbb{E}(\epsilon_{\theta}^{2}(\cdot))\) with theoretical values of \(f_{\tau}(t)\) versus sampling step \(t\). The black dot marked the maximum sampling step, below which the model satisfies the sufficient condition under AWGN channel.
Here, we have \(\gamma_{t-1}^{2}-\beta_{t}\gamma_{t-1}<0\). Therefore, it is easy to obtain the necessary and sufficient condition for the inequality \(u_{\tau}(t,\mathbf{h})\leq\mathcal{H}(x_{t,i}|\mathbf{x}_{0},\mathbf{h})\) as follows
\[\mathbb{E}(e_{\theta,i}^{2}(\cdot))\geq\frac{1-\bar{\alpha}_{t}- \beta_{t}\gamma_{t-1}}{\gamma_{t-1}^{2}-\beta_{t}\gamma_{t-1}}-\frac{\beta_{t} ^{2}-\beta_{t}\gamma_{t-1}}{\gamma_{t-1}^{2}-\beta_{t}\gamma_{t-1}}\tau\triangleq f _{\tau}(t). \tag{41}\]
Substituting this necessary and sufficient condition into (40), we obtain the sufficient condition stated in the theorem.
In Fig. 4, the dashed line represents the Monte Carlo results of \(\mathbb{E}(\epsilon_{\theta}(\cdot))\) approaching zero, which verifies that Assumption 2 holds for the proposed model. It also demonstrates that there exists a threshold \(\varsigma\): if \(t\leq\varsigma\), condition (41) holds. This suggests that the number of sampling steps should be limited in order to achieve performance improvements. Fig. 5 shows the value of \(\mathcal{H}(x_{t,i}|\mathbf{x}_{0},\mathbf{h})-u_{\tau}(t,\mathbf{h})\) at \(\tau=0.3\) versus sampling step \(t\). It is observed that the curve initially exhibits a sharp decline and subsequently levels off rapidly. Considering the two figures together, the sampling step of CDDM cannot be determined using (21) in cases where the channel noise power is excessively high, as it would exceed the threshold \(\varsigma\). Furthermore, even if the sampling step is below \(\varsigma\), the gradient becomes very small when it falls within the flattened region. This can lead to the conditional entropy remaining stagnant, resulting in no performance improvement. On the other hand, if the sampling step is too small, the channel noise may not be eliminated sufficiently. Based on the analysis above, we recommend setting the maximum sampling step \(t_{max}\in[10,150]\), as shown in Fig. 5 with the red line. Correspondingly, (21) is revised into
\[m=\min(t_{max},\arg\min_{m}~{}2\sigma^{2}-\frac{1-\bar{\alpha}_{m}}{\bar{ \alpha}_{m}}). \tag{42}\]
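Reading the argmin in (21) and (42) as matching the equalized channel-noise level to the diffusion noise level, i.e., minimizing the absolute gap between \(2\sigma^{2}\) and \((1-\bar{\alpha}_{m})/\bar{\alpha}_{m}\) (this reading is our assumption), the step selection can be coded as:

```python
import numpy as np

def select_sampling_step(sigma, alpha_bar, t_max):
    """Eq. (42): m = min(t_max, argmin_m |2*sigma^2 - (1 - abar_m)/abar_m|)."""
    gap = np.abs(2 * sigma ** 2 - (1 - alpha_bar) / alpha_bar)
    return min(t_max, int(np.argmin(gap)))
```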
## IV The joint CDDM and JSCC for semantic communications
In this section, the proposed CDDM is applied into a semantic communications system based on JSCC for wireless image transmission.
### _System Structure_
An overview architecture of the joint CDDM and JSCC system is shown in Fig. 2. An RGB source image \(\mathbf{s}\) is encoded as the transmitted signal \(\mathbf{x}\in\mathbb{R}^{2k}\) by a JSCC encoder. In this paper, the JSCC is built upon the Swin Transformer [29] backbone, which has a more powerful expression ability than the vision transformer by replacing the standard multi-head self-attention in the vision transformer with a shift-window multi-head self-attention. \(\mathbf{x}\) is then transmitted and processed into \(\mathbf{y_{r}}\) at the receiver, as described in Section II. At the receiver, the proposed CDDM removes the channel noise from \(\mathbf{y_{r}}\) using Algorithm 2. Following this, the output of CDDM is fed into the JSCC decoder to reconstruct the source image \(\mathbf{\hat{s}}\).
### _Training algorithm_
The entire training algorithm of the joint CDDM and JSCC system consists of three stages. In the first stage, the JSCC encoder and decoder are trained jointly through the channel shown in Fig. 2, except for the CDDM module, to minimize the distance \(d(\mathbf{s},\mathbf{\hat{s}})\). Therefore, the loss function for this stage is given by
\[L_{1}(\phi,\varphi)=\mathbb{E}_{\mathbf{s}\sim p_{\mathbf{s}}} \mathbb{E}_{\mathbf{y_{r}}\sim p_{\mathbf{y_{r}}|\mathbf{s}}}d(\mathbf{s}, \mathbf{\hat{s}}). \tag{43}\]
where \(\phi\) and \(\varphi\) encapsulate all parameters of JSCC encoder and decoder respectively.
In the second stage, the parameters of the JSCC encoder are fixed so that CDDM can learn the distribution of \(\mathbf{x}_{0}\) via Algorithm 1. The training process is not affected by the channel noise power because the forward diffusion process in Algorithm 1 is specially designed to simulate the distribution of the channel noise. Benefiting from this, CDDM can handle various channel conditions and requires only one training process.
In the third stage, the JSCC decoder is re-trained jointly with the trained JSCC encoder and CDDM to minimize \(d(\mathbf{s},\mathbf{\hat{s}})\). The entire joint CDDM and JSCC system is performed through the real channel, while only the parameters of the decoder are updated. The loss function is derived as
\[L_{3}(\varphi)=\mathbb{E}_{\mathbf{y}\sim p_{\mathbf{y}|\mathbf{s}}}d( \mathbf{s},\mathbf{\hat{s}}). \tag{44}\]
The training algorithm is summarized in Algorithm 3.
```
Input: Training set \(\mathbf{S}\), hyper-parameter \(T\), \(\bar{\alpha}_{t}\), and the channel estimation results \(\mathbf{h_{c}}\) and \(\sigma^{2}\)
Output: The well-trained joint CDDM and JSCC system.
1:while the training stop condition of stage one is not met do
2: Randomly sample \(\mathbf{s}\) from \(S\)
3: Perform forward propagation through channel without CDDM.
4: Compute \(L_{1}(\phi,\varphi)\) and update \(\phi,\varphi\)
5:endwhile
6:while the training stop condition of stage two is not met do
7: Randomly sample \(\mathbf{s}\) from \(S\)
8: Encode \(\mathbf{s}\) into \(\mathbf{x}\) with the fixed JSCC encoder
9: Train CDDM with Algorithm 1.
10:endwhile
11:while the training stop condition of stage three is not met do
12: Randomly sample \(\mathbf{s}\) from \(S\)
13: Perform forward propagation through channel with noise power \(\sigma^{2}\) with the trained CDDM
14: Compute \(L_{3}(\varphi)\) and update \(\varphi\)
15:endwhile
```
**Algorithm 3** Training algorithm of the joint CDDM and JSCC
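For orientation, the three stages of Algorithm 3 can be condensed into the following hedged Python outline. Every callable (`encoder`, `decoder`, `channel`, `distortion`, `cddm_step`, `cddm_denoise`) is a caller-supplied placeholder, not an API from the paper.

```python
def train_joint_system(encoder, decoder, channel, distortion,
                       cddm_step, cddm_denoise, loaders, opt_jscc, opt_dec):
    """Condensed view of Algorithm 3; loaders = (stage1, stage2, stage3) iterators."""
    for s in loaders[0]:                  # stage 1: JSCC end-to-end without CDDM, loss (43)
        loss = distortion(s, decoder(channel(encoder(s))))
        opt_jscc.zero_grad(); loss.backward(); opt_jscc.step()
    for s in loaders[1]:                  # stage 2: CDDM on the frozen encoder's output
        cddm_step(encoder(s).detach())    # one step of Algorithm 1 / loss (24)
    for s in loaders[2]:                  # stage 3: only the decoder, behind the trained CDDM, loss (44)
        loss = distortion(s, decoder(cddm_denoise(channel(encoder(s)))))
        opt_dec.zero_grad(); loss.backward(); opt_dec.step()
```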
head (Conv Head) layer is adopted to compute the features as the transmitted signal \(\mathbf{x}\). The structure of the JSCC decoder is identical to that of the JSCC encoder, with the exception that the downsample modules in the JSCC encoder are replaced with upsample modules.
The model structure of CDDM is predominantly based on the convolutional improved U-Net architecture [31]. Initially, \(\mathbf{y}_{r}\) undergoes a convolution layer and then serves as the input of the U-Net. Subsequently, the output of the U-Net is further processed by another convolutional layer to generate the final output \(\mathbf{y}\). The U-Net is comprised of various components, including convolutional residual (Conv-Res) blocks [32], convolutional attention (Conv-Attn) blocks, Down-Sampling blocks, and Up-Sampling blocks. A Down-Sampling block is a convolutional layer that performs down-sampling and maintains the same number of input and output channels. The Up-Sampling block consists of an interpolation layer followed by a convolutional layer. A Conv-Attn block is an attention block commonly adopted in the classic transformer [33], but with the notable distinction of employing convolutional layers as a replacement for fully-connected (FC) layers. The structure of Conv-Res is depicted in Fig. 2. In comparison to the classic residual block, the Conv-Res block substitutes FC layers with convolutional layers. Moreover, an additional convolutional layer is incorporated into the residual path to adjust the data dimension and enhance the model's capacity. The sampling step \(t\) is processed by an MLP and then embedded within the middle of the Conv-Res block. Multiple instances of these blocks are sequentially connected, incorporating two additional residual paths, ultimately forming the U-Net architecture.
## V Experiments Results
In this section, we provide a detailed description of the experimental setup and present extensive experimental results, which comprehensively demonstrate the effectiveness of our proposed CDDM system. Additionally, we assess its natural robustness to channel estimation errors and its adaptability to different SNRs.
### _Experiment Setup_
**Datasets**: To obtain comprehensive and universally applicable results, we train and evaluate the proposed joint CDDM and JSCC system on two image datasets. The CIFAR10 [34] dataset is employed for low-resolution images with dimensions of \(32\times 32\), comprising 50000 color images for training and 10000 images for testing. The high-resolution images are obtained from the DIV2K dataset [35], which includes 800 images for training and 100 images for testing. These images are collected from a wide range of real-world scenes and have a uniform resolution of 2K. During the training process, the high-resolution images are randomly cropped into patches with a size of \(256\times 256\).
**Comparison schemes**: We conduct a comparative analysis between the proposed joint CDDM and JSCC system and two other systems: the JSCC system without CDDM and the classical handcrafted separation-based source and channel coding system. More specifically, the JSCC system shares an identical structure and training configuration with the joint CDDM and JSCC system. It is worth emphasizing that in the event of a change in channel SNR, both systems undergo retraining to optimize their performance under the specific SNR condition. For the classical system, we employ the JPEG2000 codec for compression and the LDPC [36] codec for channel coding, marked as "JPEG2000+LDPC". Here, we consider DVB-T2 LDPC codes with a block length of 64800 bits for different coding rates and quadrature amplitude modulations (QAM) adapted to the channel conditions.
**Evaluation Metrics**: We quantify the performance of all three schemes with both PSNR and MSSSIM. PSNR is a widely-used pixel-wise metric that measures the visibility of errors between the reconstructed image and the reference image. A higher PSNR value indicates a smaller loss in image quality. In this case, we adopt MSE to calculate \(d(\cdot)\) during optimizing our networks. MSSSIM is a perceptual metric that specifically concentrates on the structural similarity and content of images, which aligns more closely with the evaluation results of the human visual system (HVS). The multi-scale design allows it to demonstrate consistent performance across images with varying resolutions. The value of MSSSIM ranges from 0 to 1, where a higher value indicates a higher similarity to the reference image. Also in this case, we adopt 1-MSSSIM to calculate \(d(\cdot)\) during optimizing our networks. When testing the performance, we convert MSSSIM into dB form for more intuitive observation and comparison. The formula is \(\mathrm{MSSSIM\ (dB)}=-10\log_{10}(1-\mathrm{MSSSIM})\).
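The dB conversion is a one-liner; for instance, an MSSSIM of 0.99 maps to roughly 20 dB.

```python
import numpy as np

def msssim_to_db(msssim):
    """MSSSIM (dB) = -10 log10(1 - MSSSIM), for MSSSIM in [0, 1)."""
    return -10.0 * np.log10(1.0 - np.asarray(msssim))

print(msssim_to_db(0.99))  # ~20 dB
```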
**Training details**: For the CDDM training and sampling algorithms, we configure the parameter \(T=1000\) and set \(\alpha_{t}\) to constants decreasing linearly from an initial value of \(\alpha_{1}=0.9999\) to a final value \(\alpha_{T}=0.9800\). We set \(t_{max}=93\) for the CIFAR10 dataset and \(t_{max}=52\) for the DIV2K dataset. During optimizing the CDDM, we employ the Adam optimizer [37] and implement a cosine warm-up learning rate schedule [38] with an initial learning rate of 0.0001. In terms of
Figure 6: MSE performance of DIV2K versus SNRs under AWGN and Rayleigh fading channel with or without channel estimation errors. The CBR is \(3/128\).
the JSCC structure, the number of basic-blocks and patches varies depending on the dataset. For CIFAR10 dataset, the number of Basicblocks, denoted as \(M\), is set to \(2\), Swin Transformer numbers \([N_{1},N_{2}]=[2,4]\) and channel dimensions \([P_{1},P_{2}]=[128,256]\). On the other hand, for DIV2K dataset comprising high-resolution images, \(M\) is set to \(4\), Swin Transformer numbers \([N_{1},N_{2},N_{3},N_{4}]=[2,2,6,2]\) and channel dimensions \([P_{1},P_{2},P_{3},P_{4}]=[128,192,256,320]\). We employ Adam optimizer with a learning rate 0.0001 to optimize the JSCC [27].
### _MSE performance and visualization results_
Fig. 6 illustrates the MSE performance of CDDM in different SNR regimes. The results are based on DIV2K dataset with JSCC trained for maximizing PSNR and channel bandwidth ratio (CBR) is set to \(3/128\). In the case of using CDDM, we calculate the MSE between \(\mathbf{x}\) and \(\mathbf{y}\), while in the case of not using CDDM, we calculate the MSE between \(\mathbf{x}\) and \(\mathbf{y}_{r}\). As shown in Fig. 2, \(\mathbf{y}_{r}\) and \(\mathbf{y}\) are the input and output of CDDM, respectively. The solid line in Fig. 6 shows that the system with CDDM performs much better than the system without CDDM in all SNR regimes under both AWGN and Rayleigh fading channels. For example, for AWGN channel, the proposed CDDM reduces the MSE by \(0.27\) dB at SNR\(=\)\(20\) dB. Meanwhile, it can be seen that as the SNR decreases, the gain of CDDM in MSE increases. This indicates that as the SNR decreases, i.e., the channel noise increases, the proposed CDDM is easier to remove more noise, e.g. \(1.44\) dB gain at SNR\(=\)\(5\) dB for AWGN channel. Moreover, it is
Figure 7: Examples of visualization results under Rayleigh fading channel at SNR=10 dB. The four columns display the original images and the reconstructed images obtained from their respective systems. The red number corresponds to the percentage of additional bandwidth cost in comparison to the joint CDDM and JSCC system.
important to note that under Rayleigh fading channel, MMSE has theoretically minimized the MSE, but CDDM can further reduce the MSE after MMSE. The reason for this fact is that CDDM can learn the distribution of \(\mathbf{x}_{0}=\mathbf{W_{s}}\mathbf{x}\), and utilizes this learned knowledge to remove the noise, thereby further reducing the MSE.
Additionally, to conduct a more comprehensive evaluation of our model, we assess the robustness of the proposed CDDM under Rayleigh fading channel in the presence of channel estimation errors. The receiver obtains a noisy estimation of \(\mathbf{h}\), denoted as \(\hat{\mathbf{h}}\), which is formulated as \(\hat{\mathbf{h}}=\mathbf{h}+\Delta\mathbf{h}\), where \(\Delta\mathbf{h}\sim\mathbb{CN}(0,\sigma_{h}^{2}\mathbf{I})\). In Fig. 6, the dashed lines correspond to smaller estimation errors with \(\sigma_{h}=0.05\) and the dotted lines represent larger estimation errors with \(\sigma_{h}=0.1\). It is observed that under \(\sigma_{h}=0.05\), the joint CDDM and JSCC system maintains gains relative to perfect channel estimation across all SNR ranges. However, as \(\sigma_{h}\) increases to \(0.1\), the gains tend to decrease. This reduction is particularly notable at SNRs of \(10\) and \(20\) dB.
Fig. 7 visualizes the reconstructions generated by the three systems. The results are obtained under Rayleigh fading channel with perfect channel estimation and an SNR of \(10\) dB. It can be observed clearly that both JSCC-based systems outperform JPEG2000+LDPC in terms of visual quality, despite a slightly lower CBR. However, the reconstructed images obtained from the JSCC system demonstrate significant color aberration when compared to their corresponding original images. For example, the first image leans towards a pale yellow hue, while the second and third images tend to lean towards a cyan color tone. On the contrary, our joint CDDM
Figure 11: PSNR performance of DIV2K dataset versus CBR under AWGN channel. The SNR is \(10\) dB.
Figure 8: PSNR performance of DIV2K dataset versus SNR under AWGN channel. The CBR is set to \(3/128\).
Figure 10: PSNR performance of CIFAR10 versus SNR under Rayleigh fading channel with or without channel estimation errors. The CBR is \(1/8\).
Figure 9: PSNR performance of DIV2K versus SNR under Rayleigh fading channel with or without channel estimation errors. The CBR is \(3/128\).
and JSCC system simultaneously demonstrates superior color consistency and better visual quality.
### _PSNR performance_
Fig. 8 illustrates the PSNR performance for the DIV2K dataset versus SNR under AWGN channel. The CBR is configured to \(3/128\). Our joint CDDM and JSCC system demonstrates superior performance compared to the JSCC system across a range of SNRs from \(5\) to \(20\) dB. Furthermore, the joint CDDM and JSCC system achieves significantly better performance when compared to the JPEG2000+LDPC system. Specifically, at an SNR of \(20\) dB, the performance of the JPEG2000+LDPC system is comparable to that of the JSCC system, but still remains \(0.5\) dB inferior to our joint CDDM and JSCC system.
Fig. 9 and 10 illustrate the PSNR performance for both the DIV2K and CIFAR10 datasets under Rayleigh fading channel. The CBR is \(3/128\) for DIV2K and \(1/8\) for CIFAR10. The solid line, dashed line and dotted line represent \(\sigma_{h}\) values of \(0\), \(0.05\) and \(0.1\), respectively. It can be observed that the joint CDDM and JSCC system consistently outperforms the JSCC system across both datasets and all SNRs, e.g. \(0.83\) dB for the CIFAR10 dataset and \(0.53\) dB for the DIV2K dataset at SNR=\(10\) dB with perfect channel estimation. Meanwhile, it is worth noting that the gain in PSNR performance for the DIV2K dataset tends to decrease as the SNR increases when \(\sigma_{h}=0.1\), which aligns with the decrease in MSE performance gain. The experimental results under both datasets, conducted at a channel estimation error level of \(\sigma_{h}=0.1\), highlight the lack of natural robustness in our system when exposed to high
Fig. 14: MSSSIM performance of DIV2K versus SNR under both AWGN and Rayleigh fading channels. The CBR is \(3/128\).
Fig. 12: PSNR performance of DIV2K dataset versus CBR under Rayleigh fading channel. The SNR is \(10\) dB.
Fig. 13: PSNR performance, trained at an SNR of \(20\) dB, for DIV2K versus SNR under both AWGN and Rayleigh fading channels.
Fig. 15: MSSSIM performance of CIFAR10 versus SNR under Rayleigh fading channel. The CBR is \(1/8\).
channel estimation errors and high SNR conditions. This finding underscores the need to devise a specialized framework to mitigate the influence of channel estimation errors and enhance the system's robustness in the future.
Fig. 11 and 12 show the PSNR performance for the DIV2K dataset at different CBRs under AWGN and Rayleigh fading channels, respectively. The SNR is set to \(10\) dB. It is evident that our joint CDDM and JSCC system maintains effectiveness for the complex high-resolution DIV2K dataset across various CBRs, although the performance gain decreases as the CBR increases. This phenomenon can be attributed to the increase in the dimensionality of the transmitted signal \(\mathbf{x}\) when the CBR increases, thereby leading to a notable augmentation in the complexity of the learned distribution. However, to maintain experimental fairness, the structure and depth of the CDDM remain unchanged for different CBRs, consequently impeding the model's capacity to effectively learn the complex distribution and leading to a decline in performance gain.
Fig. 13 illustrates the PSNR performance versus SNR for DIV2K dataset over both AWGN and Rayleigh fading channel. In this experiment, the joint CDDM and JSCC system, as well as the JSCC system, are trained at a fixed SNR of \(20\) dB and evaluated across various SNR values. It is evident that our joint CDDM and JSCC system consistently outperforms the JSCC system. More importantly, the performance gain becomes more pronounced as the SNR decreases in the Rayleigh fading channel. We attribute this phenomenon to the training of our CDDM utilizing Algorithm 1, which encompasses a wide range of SNRs. Consequently, when the SNR varies, our CDDM still effectively reduces noise by adjusting the sampling step \(m\), leading to enhanced performance. In contrast, the performance of the JSCC system deteriorates rapidly as the SNR decreases. This observation validates the adaptability of our joint CDDM and JSCC system to different SNRs.
### _MSSSIM performance_
Fig. 14 shows the MSSSIM performance versus SNR for the DIV2K dataset over both AWGN channel and Rayleigh fading channel. The solid lines represent performance under AWGN channel and the dotted lines represent performance under Rayleigh fading channel. The results demonstrate that under AWGN channel, our joint CDDM and JSCC system achieves a notable improvement in MSSSIM performance, particularly at SNRs of \(15\) dB and \(20\) dB, e.g. \(0.6\) dB at SNR=\(15\) dB. At lower SNRs, we can still achieve enhanced performance, albeit with a small magnitude. Under Rayleigh fading channel, we achieve significant improvement across all SNRs. Fig. 15 demonstrates the MSSSIM performance for the CIFAR10 dataset over Rayleigh fading channel. It can be observed that the joint CDDM and JSCC system outperforms both the JSCC system and the JPEG2000+LDPC system across all SNRs.
Fig. 16 demonstrates the MSSSIM performance versus CBR for DIV2K under both AWGN channel and Rayleigh fading channel, respectively. The results demonstrate that our joint CDDM and JSCC system outperforms the JSCC system under all examined conditions. Analogous to the PSNR performance, the magnitude of the gain decreases when the CBR is large, for the same reason. Moreover, all the experimental results on MSSSIM performance show the consistent phenomenon that the MSSSIM performance of the JPEG2000+LDPC system is remarkably poor across all experimental configurations, showcasing a substantial disparity compared to both JSCC-based systems. These phenomena show that when considering the HVS, the JSCC system exhibits a dominant advantage over the JPEG2000+LDPC system. Furthermore, in this scenario, our joint CDDM and JSCC system can still enhance the performance.
The experiments conducted consistently demonstrate the efficacy of our joint CDDM and JSCC system, surpassing the performance of both the JSCC system and the JPEG2000+LDPC system across a wide range of conditions. These conditions encompass various SNRs, different CBRs, diverse evaluation metrics, distinct channel types and varying image resolutions.
## VI Conclusion
In this paper, we have proposed the channel denoising diffusion models to eliminate the channel noise under the Rayleigh fading and AWGN channels. CDDM is trained with a specialized noise schedule adapted to the wireless channel, which permits effective elimination of the channel noise via a suitable sampling algorithm in the reverse sampling process. Further, we derived a sufficient condition under which our CDDM can reduce the conditional entropy of the received signal, and demonstrated through Monte Carlo experiments that the well-trained model satisfies this condition for smaller sampling steps. CDDM is then applied to the semantic communications system based on JSCC. Extensive experimental results on the CIFAR10 and DIV2K datasets show that under both AWGN and Rayleigh fading channels, the joint CDDM and JSCC system performs much better than the JSCC system without CDDM in terms of MSE, PSNR and MSSSIM.
Figure 16: MSSSIM performance of DIV2K dataset versus CBR under both AWGN and Rayleigh fading channels. The SNR is \(10\) dB. |
2309.03718 | Compactness of conformal Chern-minimal surfaces in Hermitian surface | The Chern-minimal surfaces in Hermitian surface play a similar role as
minimal surfaces in K\"ahler surface (see \cite{[PX-21]}) from the viewpoint of
submanifolds. This paper studies the compactness of Chern-minimal surfaces. We
prove that any sequence $\{f_n\}$ of conformal Chern-minimal maps from closed
Riemann surface $(\Sigma,\emph{\texttt{j}})$ into a compact Hermitian surface
$(M, J, h)$ with bounded area has a bubble tree limit, which consisting of a
Chern-minimal map $f_0$ from $\Sigma$ into $M$ and a finite set of
Chern-minimal maps from $S^2$ into $M$. We also show that the limit preserves
area and homotopy class. | Xiaowei Xu | 2023-09-07T13:50:02Z | http://arxiv.org/abs/2309.03718v1 | # Compactness of conformal Chern-minimal surfaces in Hermitian surface
###### Abstract.
The Chern-minimal surfaces in Hermitian surface play a similar role as minimal surfaces in Kahler surface (see [11]) from the viewpoint of submanifolds. This paper studies the compactness of Chern-minimal surfaces. We prove that any sequence \(\{f_{n}\}\) of conformal Chern-minimal maps from closed Riemann surface \((\Sigma,\mathpzc{j})\) into a compact Hermitian surface \((M,J,h)\) with bounded area has a bubble tree limit, consisting of a Chern-minimal map \(f_{0}\) from \(\Sigma\) into \(M\) and a finite set of Chern-minimal maps from \(S^{2}\) into \(M\). We also show that the limit preserves area and homotopy class.
## 1. Introduction
It is known that there are abundant results on minimal surfaces in four-dimensional Riemannian manifolds, especially those in Kahler surfaces. For instance, S. Webster (see [14], [15]) counted the complex and anti-complex points of a generic minimal immersion from a Riemann surface into a Kahler surface by using topological data. Here generic means that the immersion is neither holomorphic nor anti-holomorphic. More explicitly, let \(f\) be a conformal minimal immersion from closed Riemann surface \((\Sigma,\mathpzc{j})\) into Kahler surface \((M,J,\omega)\); a point \(x\in\Sigma\) is called _complex_ (resp. _anti-complex_) if \(df\circ\mathpzc{j}=J\circ df\) (resp. \(df\circ\mathpzc{j}=-J\circ df\)) holds at \(x\), where \(\mathpzc{j}\) and \(J\) are the complex structures. S. Webster proved that the numbers of complex points and anti-complex points of a generic minimal immersion \(f\), denoted by \(P\) and \(Q\) respectively, can be counted by
\[P+Q=-(\chi(T\Sigma)+\chi(T^{\perp}\Sigma)), \tag{1.1}\]
\[P-Q=-f^{*}c_{1}(M)[\Sigma], \tag{1.2}\]
where \(\chi(T\Sigma)\), \(\chi(T^{\perp}\Sigma)\) are the Euler characteristics of the tangent bundle \(T\Sigma\) and the normal bundle \(T^{\perp}\Sigma\), respectively, \(c_{1}(M)\) is the first Chern class of \(M\) and \([\Sigma]\) is the fundamental class of \(\Sigma\); note that (1.1) and (1.2) determine the two counts individually, as recorded below. There has been some progress on Webster's formulae. J.G. Wolfson (see [17]) gave a new proof and deep applications of these formulae in the theory of minimal surfaces in Kahler surface; J.Y. Chen and G. Tian (see [1]), X.L. Han and J.Y. Li (see [3]) proved that (1.1) holds for minimal surfaces in symplectic four-manifolds and for symplectic critical surfaces in Kahler surface, respectively. Because a Riemann surface immersed in a Hermitian surface still carries the Euler characteristic of the tangent bundle, the Euler characteristic of the normal bundle and the first Chern class, one can ask: _Do Webster's formulae hold for a certain class of closed Riemann surfaces immersed in Hermitian surface_? To answer this problem, C.K. Peng and the author (see [11]) introduced the Chern-minimal surface and proved that Webster's formulae (1.1) and (1.2) hold for closed generic Chern-minimal surfaces in compact Hermitian surface. In this paper we will study the compactness of Chern-minimal surfaces.
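For the reader's convenience, adding and subtracting (1.1) and (1.2) yields each count separately:

\[P=-\frac{1}{2}\big(\chi(T\Sigma)+\chi(T^{\perp}\Sigma)+f^{*}c_{1}(M)[\Sigma]\big),\qquad Q=-\frac{1}{2}\big(\chi(T\Sigma)+\chi(T^{\perp}\Sigma)-f^{*}c_{1}(M)[\Sigma]\big).\]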
We first view the Chern-minimal surface from the analytic viewpoint. More generally, for a smooth immersion \(f\) from Riemann surface \((\Sigma,\mathpzc{j},ds_{\Sigma}^{2})\) into Hermitian surface \((M,J,h)\), the tangent map \(df\) is a smooth section of the bundle \(T^{*}\Sigma\otimes f^{-1}TM\). Then, by using the connection \(\nabla\) on \(T^{*}\Sigma\otimes f^{-1}TM\) induced from the Chern connection and taking the covariant
differentiation, we obtain the second fundamental form \(\nabla df\). We call \(f\) _Chern-harmonic_ if and only if it satisfies the equation
\[tr\,\nabla df=0, \tag{1.3}\]
which is a new elliptic equation involving the metrics and complex structures on the manifolds. Here the trace is taken with respect to the metric \(ds^{2}_{\Sigma}\). In particular, we call \(f\) _Chern-minimal_ if it is moreover an isometry. These definitions are analogous to the geometric definitions of harmonic map and minimal surface. It should be pointed out that Chern-harmonic (resp. Chern-minimal) is just harmonic (resp. minimal) when the target manifold is Kahlerian, and that holomorphic/anti-holomorphic maps are automatically Chern-harmonic. Although we have not found a variational structure for equation (1.3), it is a conformally invariant equation (see Proposition 2.1) on two-dimensional domains. So, it is possible to obtain a compactness result for Chern-minimal surfaces.
There are two celebrated works on the compactness of two-dimensional geometric objects: the work of Sacks and Uhlenbeck (see [13]) on harmonic maps and M. Gromov's work on pseudo-holomorphic curves. M. Gromov's original proof is entirely geometric. Inspired by the works on harmonic maps, T.H. Parker and J.G. Wolfson (see [9]) and R.G. Ye (see [18]) independently gave analytic proofs of Gromov's compactness theorem. Their proofs reveal the unity of these two compactness problems. Namely, the entire bubble tree procedure requires only a conformally invariant elliptic equation with the properties of energy estimate, energy gap and removable singularity.
Notice that the Chern-harmonic equation is analogous to the harmonic map equation and that holomorphic curves are special Chern-harmonic maps, so we can study the compactness of Chern-minimal surfaces following the approach used for harmonic maps. We first deduce a Bochner formula for the energy density of Chern-harmonic maps to get the energy estimate; then we prove the isoperimetric inequality and a removable singularity theorem for Chern-minimal maps. Based on these preliminaries, we obtain the \(C^{\infty}\)-convergence and bubbling.
**Theorem 4.1.**_Let \(\{f_{n}\}\) be a sequence of conformal Chern-minimal immersions from closed Riemann surface \((\Sigma,\mathpzc{j})\) into compact Hermitian surface \((M,J,h)\) whose areas are uniformly bounded by \(\mathcal{A}_{0}\). Then there are a subsequence \(\{f_{n,k}\}\), a finite set of points \(\{p_{1},\dots,p_{\kappa}\}\subset\Sigma\) and a conformal Chern-minimal immersion \(f_{0}\) from \(\Sigma\) into \(M\) such that \(f_{n,k}\) converges to \(f_{0}\) in \(C^{\infty}\)-topology on \(\Sigma\setminus\{p_{1},\dots,p_{\kappa}\}\). Furthermore, there is a Chern-minimal immersion \(f_{p_{i}}\) from \(S^{2}\) into \(M\) associated to each \(p_{i}\)._
The proof of Theorem 4.1 is essentially due to Sacks and Uhlenbeck (see [13]); it is now known as the Sacks-Uhlenbeck procedure. This procedure shows the existence of Chern-minimal surfaces, but it characterizes neither the amount of energy nor the position of the bubbles. By modifying the renormalization procedure of T.H. Parker and J.G. Wolfson (see [9]) and T.H. Parker (see [10]), working in conformal coordinates instead of the normal coordinates used therein, we prove the energy identity and the necklessness, and hence obtain the bubble tree convergence. The no-energy-loss statement for harmonic maps was previously proved by J. Jost in [5].
**Theorem 5.3.**_Let \(\{f_{n}\}\) be a sequence of conformal Chern-minimal immersions from Riemann surface \((\Sigma,\mathpzc{j})\) into compact Hermitian surface \((M,J,h)\) whose areas are uniformly bounded by \(\mathcal{A}_{0}\). Then there are a subsequence \(\{f_{n,k}\}\), a Chern-minimal immersion \(f_{0}:\Sigma\longrightarrow M\), a finite set of renormalized Chern-minimal sequences \(\{\tilde{f}_{n,k,I}\}\) and a finite set of Chern-minimal two-spheres \(f_{p_{I}}:S^{2}\longrightarrow M\) so that_
(1) _The sequences \(\{f_{n,k}\}\), \(\{\tilde{f}_{n,k,I}\}\) converge to \(f_{0}\), \(f_{p_{I}}\) in \(C^{\infty}\)-topology on \(\Sigma\setminus\{p_{1},\ldots,p_{\kappa}\}\), \(S^{2}\setminus\{p_{I1},\ldots,p_{I\kappa_{I}},\mathbf{s}\}\), respectively, where \(\mathbf{s}\) is the south pole of \(S^{2}\)._
(2) _There is no energy loss. That is_
\[\lim_{k\to\infty}E(f_{n,k})=E(f_{0})+\sum_{I}E(f_{p_{I}}). \tag{1.4}\]
(3) _There is no distance bubbling. Namely, for each bubble point \(p_{I}\), we have \(f_{p_{I}}(\mathbf{s})=f_{p_{I^{\prime}}}(p_{I})\) with indices \(I^{\prime}=i_{1}\cdots i_{\ell-1}\) and \(I=i_{1}\cdots i_{\ell-1}i_{\ell}\)._
The paper is organized as follows. In Section 2, we deduce a Bochner formula for Chern-harmonic maps, and use it to get the energy estimate. In Section 3, we prove the removable singularity theorem for Chern-minimal surfaces by using the isoperimetric inequality. In Section 4, we use the Sacks-Uhlenbeck procedure to get the \(C^{\infty}\)-convergence and bubbling. In Section 5, we prove the energy identity and necklessness to get the bubble tree limit.
## 2. A Bochner formula and energy estimate
The purpose of this section is to deduce a Bochner formula for the energy density of Chern-harmonic maps, which we then use to derive the energy estimate for Chern-harmonic maps.
Let \((\Sigma,\mathpzc{j})\) be a Riemann surface, and let \(ds^{2}_{\Sigma}\) be a Riemannian metric on \(\Sigma\) which is conformal to the complex structure \(\mathpzc{j}\). Locally, we choose a complex valued 1-form \(\varphi\) of \((1,0)\)-type such that \(ds^{2}_{\Sigma}=\varphi\;\overline{\varphi}\). Then the structure equations of \(ds^{2}_{\Sigma}\) are given by
\[d\varphi=-\sqrt{-1}\rho\wedge\varphi,\;\;\;\overline{\rho}=\rho, \tag{2.1}\]
\[d\rho=-\sqrt{-1}K\varphi\wedge\overline{\varphi}, \tag{2.2}\]
where \(\rho\) is the connection 1-form and \(K\) is the Gaussian curvature.
Let \((M,J,h)\) be a Hermitian surface with the complex structure \(J\) and the Hermitian metric \(h\). We choose the Chern connection on the tangent bundle \(TM\), namely, the unique connection that preserves \(J\) and \(h\) and has vanishing \((1,1)\)-part of the torsion. We choose a local field of unitary frames \(\{e_{1},e_{2}\}\) with dual coframe \(\{\omega^{1},\omega^{2}\}\) on \(M\). Then the structure equations of the Chern connection are given by
\[d\omega^{i}=-\omega^{i}_{j}\wedge\omega^{j}+\Theta^{i},\;\;\;\;\;\omega^{j}_{ i}+\overline{\omega^{i}_{j}}=0, \tag{2.3}\]
\[d\omega^{i}_{j}=-\omega^{i}_{k}\wedge\omega^{k}_{j}+\Omega^{i}_{j},\;\;\;\; \Omega^{j}_{i}+\overline{\Omega^{i}_{j}}=0, \tag{2.4}\]
where \(\omega^{i}_{j}\), \(\Theta^{i}\), \(\Omega^{i}_{j}\) are connection 1-forms, torsion 2-forms and curvature 2-forms, respectively. Explicitly, we can write
\[\Theta^{i}:=L^{i}_{jk}\,\omega^{j}\wedge\omega^{k},\;\;\;L^{i}_{jk}=-L^{i}_{ kj}, \tag{2.5}\]
\[\Omega^{i}_{j}:=R^{i}_{jk\ell}\,\omega^{k}\wedge\omega^{\ell}+R^{i}_{jk\ell} \,\omega^{k}\wedge\overline{\omega^{\ell}}+R^{i}_{j\overline{k}\overline{ \ell}}\,\overline{\omega^{k}}\wedge\overline{\omega^{\ell}}, \tag{2.6}\]
with \(R^{i}_{jk\ell}=-R^{i}_{j\ell k}\), \(R^{i}_{j\overline{k}\,\overline{\ell}}=-R^{i}_{j\overline{\ell}\,\overline{k}}\), \(R^{i}_{jk\ell}=\overline{R^{j}_{i\overline{\ell}\,\overline{k}}}\) and \(R^{i}_{jk\overline{\ell}}=\overline{R^{j}_{i\ell\overline{k}}}\).
Let \(f\) be a smooth immersion from surface \((\Sigma,\mathpzc{j})\) into Hermitian surface \((M,J,h)\). Set
\[f^{*}\omega^{i}:=a^{i}_{1}\,\varphi+a^{i}_{\overline{1}}\,\overline{\varphi}. \tag{2.7}\]
Taking the exterior differentiation of (2.7), we define \(a^{i}_{11}\), \(a^{i}_{1\overline{1}}\), \(a^{i}_{\overline{1}1}\), \(a^{i}_{\overline{1}\,\overline{1}}\) as follows
\[a^{i}_{11}\,\varphi+a^{i}_{1\overline{1}}\,\overline{\varphi}:=da^{i}_{1}- \sqrt{-1}\rho\,a^{i}_{1}+a^{j}_{1}\,\omega^{i}_{j}, \tag{2.8}\]
\[a^{i}_{\overline{1}1}\,\varphi+a^{i}_{\overline{1}\,\overline{1}}\,\overline{\varphi}:=da^{i}_{\overline{1}}+\sqrt{-1}\rho\,a^{i}_{\overline{1}}+a^{j}_{\overline{1}}\,\omega^{i}_{j}. \tag{2.9}\]
Then, we have
\[(a^{i}_{11}\,\varphi+a^{i}_{1\overline{1}}\,\overline{\varphi})\wedge\varphi+(a^{i}_{\overline{1}1}\,\varphi+a^{i}_{\overline{1}\,\overline{1}}\,\overline{\varphi})\wedge\overline{\varphi}=f^{*}\Theta^{i}, \tag{2.10}\]
which implies
\[-a^{i}_{1\overline{1}}+a^{i}_{\overline{1}1}=2L^{i}_{jk}a^{j}_{1}a^{k}_{ \overline{1}}. \tag{2.11}\]
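To see (2.11) explicitly, compare the coefficients of \(\varphi\wedge\overline{\varphi}\) in (2.10): since \(\overline{\varphi}\wedge\varphi=-\varphi\wedge\overline{\varphi}\), the left-hand side contributes \((-a^{i}_{1\overline{1}}+a^{i}_{\overline{1}1})\,\varphi\wedge\overline{\varphi}\), while (2.5) and (2.7) give

\[f^{*}\Theta^{i}=L^{i}_{jk}\,f^{*}\omega^{j}\wedge f^{*}\omega^{k}=L^{i}_{jk}\big(a^{j}_{1}a^{k}_{\overline{1}}-a^{j}_{\overline{1}}a^{k}_{1}\big)\,\varphi\wedge\overline{\varphi}=2L^{i}_{jk}\,a^{j}_{1}a^{k}_{\overline{1}}\,\varphi\wedge\overline{\varphi},\]

where the last equality uses the skew-symmetry \(L^{i}_{jk}=-L^{i}_{kj}\).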
We take the exterior differentiation of (2.8) and (2.9), respectively, and define \(a^{i}_{111}\), \(a^{i}_{11\overline{1}}\), \(a^{i}_{1\overline{1}1}\), \(a^{i}_{1\overline{1}\,\overline{1}}\), \(a^{i}_{\overline{1}11}\), \(a^{i}_{\overline{1}1\overline{1}}\), \(a^{i}_{\overline{1}\,\overline{1}1}\), \(a^{i}_{\overline{1}\,\overline{1}\,\overline{1}}\) as follows
\[a^{i}_{111}\varphi+a^{i}_{11\overline{1}}\,\overline{\varphi}:=da^{i}_{11}-2a^ {i}_{11}\sqrt{-1}\rho+a^{j}_{11}\omega^{i}_{j}, \tag{2.12}\]
\[a^{i}_{1\overline{1}1}\varphi+a^{i}_{1\overline{1}\,\overline{1}}\,\overline{\varphi}:=da^{i}_{1\overline{1}}+a^{j}_{1\overline{1}}\,\omega^{i}_{j}, \tag{2.13}\]
\[a^{i}_{\overline{1}11}\varphi+a^{i}_{\overline{1}1\overline{1}}\,\overline{ \varphi}:=da^{i}_{\overline{1}1}+a^{j}_{\overline{1}1}\,\omega^{i}_{j}, \tag{2.14}\]
\[a^{i}_{\overline{1}\,\overline{1}1}\varphi+a^{i}_{\overline{1}\,\overline{1}\,\overline{1}}\,\overline{\varphi}:=da^{i}_{\overline{1}\,\overline{1}}+2a^{i}_{\overline{1}\,\overline{1}}\sqrt{-1}\rho+a^{j}_{\overline{1}\,\overline{1}}\omega^{i}_{j}. \tag{2.15}\]
Then, we have
\[(a^{i}_{111}\varphi+a^{i}_{11\overline{1}}\,\overline{\varphi})\wedge\varphi+(a^{i}_{1\overline{1}1}\varphi+a^{i}_{1\overline{1}\,\overline{1}}\,\overline{\varphi})\wedge\overline{\varphi}=-Ka^{i}_{1}\varphi\wedge\overline{\varphi}+a^{j}_{1}\Omega^{i}_{j}, \tag{2.16}\]

\[(a^{i}_{\overline{1}11}\varphi+a^{i}_{\overline{1}1\overline{1}}\,\overline{\varphi})\wedge\varphi+(a^{i}_{\overline{1}\,\overline{1}1}\,\varphi+a^{i}_{\overline{1}\,\overline{1}\,\overline{1}}\,\overline{\varphi})\wedge\overline{\varphi}=Ka^{i}_{\overline{1}}\varphi\wedge\overline{\varphi}+a^{j}_{\overline{1}}\,\Omega^{i}_{j}, \tag{2.17}\]
which imply the Ricci identities
\[a^{i}_{1\overline{1}1}-a^{i}_{11\overline{1}}=-Ka^{i}_{1}+a^{j}_{1}\,\Omega^{ i}_{j}/(\varphi\wedge\overline{\varphi}), \tag{2.18}\]
\[a^{i}_{\overline{1}\,\overline{1}1}-a^{i}_{\overline{1}1\overline{1}}=Ka^{i}_{\overline{1}}+a^{j}_{\overline{1}}\,\Omega^{i}_{j}/(\varphi\wedge\overline{\varphi}), \tag{2.19}\]
where
\[\Omega^{i}_{j}/(\varphi\wedge\overline{\varphi}):=2R^{i}_{jk\ell}a^{k}_{1}a^{\ell}_{\overline{1}}+R^{i}_{jk\overline{\ell}}(a^{k}_{1}\overline{a^{\ell}_{1}}-a^{k}_{\overline{1}}\overline{a^{\ell}_{\overline{1}}})+2R^{i}_{j\overline{k}\,\overline{\ell}}\,\overline{a^{k}_{\overline{1}}}\,\overline{a^{\ell}_{1}} \tag{2.20}\]
stands for the coefficient of the pull-back of \(\Omega^{i}_{j}\) with respect to the 2-form \(\varphi\wedge\overline{\varphi}\).
By using (2.7), (2.8) and (2.9), we can give a local expression of the second fundamental form as
\[\nabla df = a^{i}_{11}\,\varphi\otimes\varphi\otimes e_{i}+a^{i}_{1\overline{1}}\,\overline{\varphi}\otimes\varphi\otimes e_{i}+a^{i}_{\overline{1}1}\,\varphi\otimes\overline{\varphi}\otimes e_{i}+a^{i}_{\overline{1}\,\overline{1}}\,\overline{\varphi}\otimes\overline{\varphi}\otimes e_{i} \tag{2.21}\] \[+\,\overline{a^{i}_{11}}\,\overline{\varphi}\otimes\overline{\varphi}\otimes\overline{e_{i}}+\overline{a^{i}_{1\overline{1}}}\,\varphi\otimes\overline{\varphi}\otimes\overline{e_{i}}+\overline{a^{i}_{\overline{1}1}}\,\overline{\varphi}\otimes\varphi\otimes\overline{e_{i}}+\overline{a^{i}_{\overline{1}\,\overline{1}}}\,\varphi\otimes\varphi\otimes\overline{e_{i}}.\]
So, \(f\) is Chern-harmonic if and only if
\[a^{i}_{1\overline{1}}+a^{i}_{\overline{1}1}=0. \tag{2.22}\]
This together with (2.11) implies
\[a^{i}_{1\overline{1}}=-a^{i}_{\overline{1}1}=-L^{i}_{jk}a^{j}_{1}a^{k}_{ \overline{1}}. \tag{2.23}\]
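In particular, for a Chern-harmonic map the two mixed components of the second fundamental form agree up to sign by (2.23), so the sum of the squared norms of the components of \(\nabla df\) in (2.21) takes the form

\[|a^{i}_{11}|^{2}+|a^{i}_{1\overline{1}}|^{2}+|a^{i}_{\overline{1}1}|^{2}+|a^{i}_{\overline{1}\,\overline{1}}|^{2}=|a^{i}_{11}|^{2}+2|a^{i}_{1\overline{1}}|^{2}+|a^{i}_{\overline{1}\,\overline{1}}|^{2},\]

which explains the coefficient \(2\) appearing in \(|A|^{2}\) in Theorem 2.2 below.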
The following proposition shows that the Chern-harmonic equation (1.3) is conformally invariant.
**Proposition 2.1**.: _Let \(f\) be a smooth map from Riemann surface \((\Sigma,\mathpzc{j})\) into Hermitian surface \((M,J,h)\), and let the metrics \(ds^{2}_{\Sigma}\), \(d\tilde{s}^{2}_{\Sigma}\) be conformal to \(\mathpzc{j}\). Then \(f\) is Chern-harmonic with respect to \(ds^{2}_{\Sigma}\) if and only if it is Chern-harmonic with respect to \(d\tilde{s}^{2}_{\Sigma}\)._
_Proof._ Locally, we write \(h=\omega^{1}\overline{\omega^{1}}+\omega^{2}\overline{\omega^{2}}\), \(ds^{2}_{\Sigma}=\varphi\overline{\varphi}\) and \(d\tilde{s}^{2}_{\Sigma}=\theta\overline{\theta}\), and set
\[f^{*}\omega^{i}=a^{i}_{1}\varphi+a^{i}_{\overline{1}}\overline{\varphi},\ \ \ \ \ \ \ \ f^{*}\omega^{i}=b^{i}_{1}\theta+b^{i}_{\overline{1}}\overline{\theta}. \tag{2.24}\]
Since both \(ds^{2}_{\Sigma}\) and \(d\tilde{s}^{2}_{\Sigma}\) are conformal to \(\mathpzc{j}\), there is a local smooth function \(\mu\) such that
\[\varphi=\mu\,\theta. \tag{2.25}\]
We define the covariant derivatives \(\mu_{1}\), \(\mu_{\overline{1}}\) by
\[\mu_{1}\theta+\mu_{\overline{1}}\,\overline{\theta}:=d\mu-\sqrt{-1}\mu\,\rho_{ 0}+\sqrt{-1}\mu\,\rho.\]
Taking the exterior differentiation of (2.25), we have
\[(\mu_{1}\theta+\mu_{\overline{1}}\,\overline{\theta})\wedge\theta=0,\]
which implies that \(\mu_{\overline{1}}=0\). On the other hand, it follows from (2.24) and (2.25) that \(b^{i}_{1}=\mu\,a^{i}_{1}\), \(b^{i}_{\overline{1}}=\overline{\mu}\,a^{i}_{\overline{1}}\). So, we have
\[b^{i}_{1\overline{1}}=(\mu\,a^{i}_{1})_{\overline{1}} = \mu_{\overline{1}}\,a^{i}_{1}+\mu\,(a^{i}_{1})_{\overline{1}} \tag{2.26}\] \[= \mu_{\overline{1}}\,a^{i}_{1}+\mu\,a^{i}_{1\overline{1}}\, \overline{\mu}\] \[= |\mu|^{2}\,a^{i}_{1\overline{1}}.\]
Similarly, we also have
\[b^{i}_{\overline{1}1}=|\mu|^{2}a^{i}_{\overline{1}1}. \tag{2.27}\]
Then the conclusion follows from (2.22), (2.26) and (2.27).
\(\Box\)
We next provide an alternative description of the Chern-harmonic map. Recalling that the differential operator \(d^{\nabla}\) acting on \(\phi\otimes e\in\Omega^{p}(f^{-1}TM)\) is defined by \(d^{\nabla}(\phi\otimes e)=d\phi\otimes e+(-1)^{p}\phi\wedge\nabla e\), one can check that \(f\) is Chern-harmonic if and only if
\[(d^{\nabla})^{*}df=0, \tag{2.28}\]
where \((d^{\nabla})^{*}\) is the adjoint of \(d^{\nabla}\). Moreover, for a Chern-harmonic map \(f\), we have
\[\big{(}(d^{\nabla})^{*}+d^{\nabla}\big{)}df=p(df,df), \tag{2.29}\]
and
\[\big{(}d^{\nabla}(d^{\nabla})^{*}+(d^{\nabla})^{*}d^{\nabla}\big{)}df=q(\nabla df,df,df,df), \tag{2.30}\]
where
\[p(df,df):=2L^{i}_{jk}a^{j}_{1}a^{k}_{\overline{1}}\,\varphi\wedge\overline{\varphi}\otimes e_{i}-2\,\overline{L^{i}_{jk}a^{j}_{1}a^{k}_{\overline{1}}}\,\varphi\wedge\overline{\varphi}\otimes\overline{e_{i}},\]
\[q(\nabla df,df,df,df) := 2\big[(L^{i}_{jk\ell}a^{\ell}_{1}+L^{i}_{jk\overline{\ell}}\,\overline{a^{\ell}_{\overline{1}}})a^{j}_{1}a^{k}_{\overline{1}}+L^{i}_{jk}(a^{j}_{11}a^{k}_{\overline{1}}+a^{j}_{1}a^{k}_{\overline{1}1})\big]\varphi\otimes e_{i}\] \[-2\big[(L^{i}_{jk\ell}a^{\ell}_{\overline{1}}+L^{i}_{jk\overline{\ell}}\,\overline{a^{\ell}_{1}})a^{j}_{1}a^{k}_{\overline{1}}+L^{i}_{jk}(a^{j}_{1\overline{1}}a^{k}_{\overline{1}}+a^{j}_{1}a^{k}_{\overline{1}\,\overline{1}})\big]\overline{\varphi}\otimes e_{i}\] \[-2\big[(\overline{L^{i}_{jk\ell}}\,\overline{a^{\ell}_{\overline{1}}}+\overline{L^{i}_{jk\overline{\ell}}}\,a^{\ell}_{1})\overline{a^{j}_{1}a^{k}_{\overline{1}}}+\overline{L^{i}_{jk}}(\overline{a^{j}_{1\overline{1}}a^{k}_{\overline{1}}}+\overline{a^{j}_{1}a^{k}_{\overline{1}\,\overline{1}}})\big]\varphi\otimes\overline{e_{i}}\] \[+2\big[(\overline{L^{i}_{jk\ell}}\,\overline{a^{\ell}_{1}}+\overline{L^{i}_{jk\overline{\ell}}}\,a^{\ell}_{\overline{1}})\overline{a^{j}_{1}a^{k}_{\overline{1}}}+\overline{L^{i}_{jk}}(\overline{a^{j}_{11}a^{k}_{\overline{1}}}+\overline{a^{j}_{1}a^{k}_{\overline{1}1}})\big]\overline{\varphi}\otimes\overline{e_{i}},\]
where \(L^{i}_{jk\ell}\), \(L^{i}_{jk\overline{\ell}}\) are the covariant derivatives of the torsion. It should be pointed out that \((d^{\nabla})^{*}+d^{\nabla}\), \(d^{\nabla}(d^{\nabla})^{*}+(d^{\nabla})^{*}d^{\nabla}\) are elliptic operators of order one and two, respectively.
**Theorem 2.2**.: _Let \(f\) be a Chern-harmonic map from Riemann surface \((\Sigma,\mathpzc{j},ds^{2}_{\Sigma})\) into Hermitian surface \((M,J,h)\). Then, for the energy density \(e(f)\), we have_

\[\Delta e(f) = 2|A|^{2}+2K\,e(f)+2\overline{a^{i}_{1}}a^{j}_{1}\,\Omega^{i}_{j}/(\varphi\wedge\overline{\varphi})+2a^{i}_{\overline{1}}\overline{a^{j}_{\overline{1}}}\,\Omega^{j}_{i}/(\varphi\wedge\overline{\varphi}) \tag{2.31}\] \[-4Re\big[a^{i}_{1}(\overline{L^{i}_{jk\ell}}\,\overline{a^{\ell}_{1}}+\overline{L^{i}_{jk\overline{\ell}}}\,a^{\ell}_{\overline{1}})\overline{a^{j}_{1}}\,\overline{a^{k}_{\overline{1}}}+a^{i}_{1}\overline{L^{i}_{jk}}(\overline{a^{j}_{11}a^{k}_{\overline{1}}}+\overline{a^{j}_{1}a^{k}_{\overline{1}1}})\big]\] \[+4Re\big[a^{i}_{\overline{1}}(\overline{L^{i}_{jk\ell}}\,\overline{a^{\ell}_{\overline{1}}}+\overline{L^{i}_{jk\overline{\ell}}}\,a^{\ell}_{1})\overline{a^{j}_{1}}\,\overline{a^{k}_{\overline{1}}}+a^{i}_{\overline{1}}\overline{L^{i}_{jk}}(\overline{a^{j}_{1\overline{1}}a^{k}_{\overline{1}}}+\overline{a^{j}_{1}a^{k}_{\overline{1}\,\overline{1}}})\big],\]

_where \(|A|^{2}=|a^{i}_{11}|^{2}+2|a^{i}_{1\overline{1}}|^{2}+|a^{i}_{\overline{1}\,\overline{1}}|^{2}\) is the half of the squared norm of \(\nabla df\)._
_Proof_.: Notice that the _energy density_\(e(f)=|a^{i}_{1}|^{2}+|a^{i}_{\overline{1}}|^{2}\), we have

\[\frac{1}{2}\Delta e(f) = (a^{i}_{1}\overline{a^{i}_{1}}+a^{i}_{\overline{1}}\overline{a^{i}_{\overline{1}}})_{1\overline{1}} = |A|^{2}+\overline{a^{i}_{1}}a^{i}_{11\overline{1}}+a^{i}_{1}\overline{a^{i}_{1\overline{1}1}}+\overline{a^{i}_{\overline{1}}}a^{i}_{\overline{1}1\overline{1}}+a^{i}_{\overline{1}}\,\overline{a^{i}_{\overline{1}\,\overline{1}1}}. \tag{2.32}\]
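For clarity, the first equality in (2.32) can be expanded step by step: with the convention that the \(\varphi\)-derivative of \(\overline{a^{i}_{1}}\) is \(\overline{a^{i}_{1\overline{1}}}\), a first differentiation gives

\[\big(e(f)\big)_{1}=a^{i}_{11}\overline{a^{i}_{1}}+a^{i}_{1}\overline{a^{i}_{1\overline{1}}}+a^{i}_{\overline{1}1}\overline{a^{i}_{\overline{1}}}+a^{i}_{\overline{1}}\,\overline{a^{i}_{\overline{1}\,\overline{1}}},\]

and differentiating once more in the \(\overline{\varphi}\)-direction produces \(|A|^{2}\) together with the four third-order terms in (2.32).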
It follows from (2.23) that

\[a_{1}^{i}\overline{a_{1\overline{1}1}^{i}} = a_{1}^{i}\big(-\overline{L_{jk}^{i}a_{1}^{j}a_{\overline{1}}^{k}}\big)_{\overline{1}} = -a_{1}^{i}(\overline{L_{jk\ell}^{i}}\,\overline{a_{1}^{\ell}}+\overline{L_{jk\overline{\ell}}^{i}}\,a_{\overline{1}}^{\ell})\overline{a_{1}^{j}}\,\overline{a_{\overline{1}}^{k}}-a_{1}^{i}\overline{L_{jk}^{i}}(\overline{a_{11}^{j}a_{\overline{1}}^{k}}+\overline{a_{1}^{j}a_{\overline{1}1}^{k}}), \tag{2.33}\]

and

\[\overline{a_{\overline{1}}^{i}}a_{\overline{1}1\overline{1}}^{i}=\overline{a_{\overline{1}}^{i}}(L_{jk\ell}^{i}a_{\overline{1}}^{\ell}+L_{jk\overline{\ell}}^{i}\overline{a_{1}^{\ell}})a_{1}^{j}a_{\overline{1}}^{k}+\overline{a_{\overline{1}}^{i}}L_{jk}^{i}(a_{1\overline{1}}^{j}a_{\overline{1}}^{k}+a_{1}^{j}a_{\overline{1}\,\overline{1}}^{k}). \tag{2.34}\]
For the other terms in (2.32), by using the Ricci identities (2.18), (2.19) and (2.23) again, we have

\[\overline{a_{1}^{i}}a_{11\overline{1}}^{i} = K|a_{1}^{i}|^{2}+\overline{a_{1}^{i}}a_{1}^{j}\;\Omega_{j}^{i}/(\varphi\wedge\overline{\varphi})-\overline{a_{1}^{i}}(L_{jk\ell}^{i}a_{1}^{\ell}+L_{jk\overline{\ell}}^{i}\overline{a_{\overline{1}}^{\ell}})a_{1}^{j}a_{\overline{1}}^{k}-\overline{a_{1}^{i}}L_{jk}^{i}(a_{11}^{j}a_{\overline{1}}^{k}+a_{1}^{j}a_{\overline{1}1}^{k}), \tag{2.35}\]

and

\[a_{\overline{1}}^{i}\overline{a_{\overline{1}\,\overline{1}1}^{i}} = K|a_{\overline{1}}^{i}|^{2}+a_{\overline{1}}^{i}\overline{a_{\overline{1}}^{j}}\;\overline{\Omega_{j}^{i}/(\varphi\wedge\overline{\varphi})}+a_{\overline{1}}^{i}(\overline{L_{jk\ell}^{i}}\,\overline{a_{\overline{1}}^{\ell}}+\overline{L_{jk\overline{\ell}}^{i}}\,a_{1}^{\ell})\overline{a_{1}^{j}}\,\overline{a_{\overline{1}}^{k}}+a_{\overline{1}}^{i}\overline{L_{jk}^{i}}(\overline{a_{1\overline{1}}^{j}a_{\overline{1}}^{k}}+\overline{a_{1}^{j}a_{\overline{1}\,\overline{1}}^{k}}). \tag{2.36}\]
Substituting (2.33)-(2.36) into (2.32), we get the Bochner formula (2.31).
\(\Box\)
_Remark_. The _energy_\(E(f)\) of a smooth map \(f\) from Riemann surface \((\Sigma,\,\mathpzc{j},ds_{\Sigma}^{2})\) is defined by the integration of its energy density. Namely,

\[E(f)=\int_{\Sigma}e(f)\,dA, \tag{2.37}\]
where \(dA\) is the area element of \(ds_{\Sigma}^{2}\). It is clear that \(E(f)\) is equal to the area \(\mathcal{A}(f(\Sigma))\) when \(f\) is conformal.
**Corollary 2.3**.: _Let \(f\) be a Chern-harmonic map from closed Riemann surface \((\Sigma,\,\mathpzc{j},ds_{\Sigma}^{2})\) into compact Hermitian surface \((M,J,h)\). Then there are positive constants \(C_{1}\) and \(C_{2}\) so that_
\[\Delta e(f)\geq-C_{1}\,e(f)-C_{2}\,e^{2}(f), \tag{2.38}\]
_where \(C_{1}\) depends on the curvature of \(ds_{\Sigma}^{2}\), \(C_{2}\) depends on the torsion and curvature of \(h\)._
_Proof_. It follows from (2.20), (2.31) and Cauchy's inequality with \(\epsilon\). \(\Box\)
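To indicate the absorption step (a sketch, summing over repeated indices, with \(C\) a constant depending on \(\epsilon\) and on the torsion bound of \(h\)): by \(2|xy|\leq\epsilon|x|^{2}+\epsilon^{-1}|y|^{2}\), the second-derivative terms in (2.31) are controlled as, e.g.,

\[4\big|a^{i}_{1}\overline{L^{i}_{jk}}\,\overline{a^{j}_{11}a^{k}_{\overline{1}}}\big|\leq 2\epsilon\,|a^{j}_{11}|^{2}+\frac{2}{\epsilon}\,|L^{i}_{jk}|^{2}\,|a^{i}_{1}|^{2}|a^{k}_{\overline{1}}|^{2}\leq 2\epsilon\,|A|^{2}+C\,e^{2}(f),\]

and for \(\epsilon\) small these terms are absorbed by the positive term \(2|A|^{2}\) in (2.31), leaving the bound (2.38).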
_Remark_. If we scale the metric \(ds_{\Sigma}^{2}\) as \(d\bar{s}_{\Sigma}^{2}=\lambda^{2}ds_{\Sigma}^{2}\) for a positive constant \(\lambda\), then the corresponding constants are \(\tilde{C}_{1}=C_{1}/\lambda^{2}\) and \(\tilde{C}_{2}=C_{2}\).
Once we have the differential inequality (2.38), one can obtain the following energy estimate for Chern-harmonic maps. This approach is the same as the corresponding one in the theory of harmonic maps and pseudo-holomorphic curves, which has been used by R. Schoen [12], J.G. Wolfson [16] and T.H. Parker, J.G. Wolfson [9].
**Theorem 2.4**.: _Let \(f\) be a Chern-harmonic map from closed Riemann surface \((\Sigma,\,\mathpzc{j},ds_{\Sigma}^{2})\) into compact Hermitian surface \((M,J,h)\). Then there are constants \(C_{3}\), \(\epsilon_{1}>0\) depending on the metrics \(ds_{\Sigma}^{2}\) and \(h\) so that for any geodesic disk \(D_{2r}\) of radius \(2r\) with the energy \(E(2r):=\int_{D_{2r}}e(f)\,dA\leq\epsilon_{1}\), we have_
\[\sup_{D_{r}}\,e(f)\leq C_{3}\,\frac{E(2r)}{r^{2}}. \tag{2.39}\]
_Proof._ We consider the function \(u(\tau):=\tau^{2}\sup\limits_{D_{2(r-\tau)}}e(f)\) for \(\tau\in[0,r]\), and let \(\tau_{0}\) be a maximum point of \(u(\tau)\). Set \(e_{0}:=\sup\limits_{D_{2(r-\tau_{0})}}e(f)\), and let \(x_{0}\) be a point in \(D_{2(r-\tau_{0})}\) such that \(e(f)(x_{0})=e_{0}\). It follows from \(u(\tau_{0}/2)\leq u(\tau_{0})\) that
\[\sup\limits_{D_{2r-\tau_{0}}}e(f)\leq 4e_{0}. \tag{2.40}\]
Notice that \(D_{\tau_{0}}(x_{0})\subset D_{2r-\tau_{0}}\); then from (2.40) we have \(e(f)\leq 4e_{0}\) on \(D_{\tau_{0}}(x_{0})\). Set \(d\tilde{s}_{\Sigma}^{2}=4e_{0}\,ds_{\Sigma}^{2}\); considering the Chern-harmonic map \(f\) from \((\Sigma,\,\mathpzc{j},d\tilde{s}_{\Sigma}^{2})\) into Hermitian surface \((M,J,h)\), we have \(\tilde{e}(f)=(4e_{0})^{-1}\,e(f)\leq 1\) on \(D_{\tau_{0}}(x_{0})\). Since \(\tilde{\Delta}=(4e_{0})^{-1}\Delta\), Corollary 2.3 and its remark imply
\[\Delta\tilde{e}(f)\geq-C_{1}\,\tilde{e}(f)-4C_{2}e_{0}\,\tilde{e}^{2}(f), \tag{2.41}\]
where \(C_{1}\), \(C_{2}\) are the constants in Corollary 2.3.
We first consider the case that \(e_{0}\geq 1\). On \(D_{\tau_{0}}(x_{0})\), since \(\tilde{e}(f)\leq 1\), (2.41) gives
\[\Delta\tilde{e}(f)\geq-C_{3.1}\,e_{0}, \tag{2.42}\]
where \(C_{3.1}=C_{1}+4C_{2}\). Applying the Theorem 9.20 in [2] to (2.42) on \(D_{\tau}(x_{0})\) with \(0<\tau\leq\tau_{0}\), we have
\[\frac{1}{4}\leq\sup\limits_{D_{\tau/2}(x_{0})}\tilde{e}(f) \leq C_{3.2}\big{[}\frac{1}{4e_{0}\tau^{2}}\int_{D_{\tau}(x_{0})}e(f) \;dA+C_{3.1}\;e_{0}\tau^{2}\big{]}\] \[= \frac{C_{3.2}}{4e_{0}\tau^{2}}\,E(2r)+C_{3.1}C_{3.2}\,\tau^{2}\,e _{0},\]
which implies
\[e_{0}\leq 4C_{3.1}C_{3.2}\,\tau^{2}e_{0}^{2}+\frac{C_{3.2}}{\tau^{2}}\,E(2r), \tag{2.43}\]
where \(C_{3.2}\) depends on the metric \(ds_{\Sigma}^{2}\). We claim that \(4C_{3.1}C_{3.2}\tau_{0}^{2}e_{0}<1/2\) for \(\epsilon_{1}<\frac{1}{16C_{3.1}C_{3.2}^{2}}\). If not, suppose that \(4C_{3.1}C_{3.2}\tau_{0}^{2}e_{0}\geq 1/2\) and taking \(\tau=\frac{1}{\sqrt{8C_{3.1}C_{3.2}e_{0}}}\leq\tau_{0}\) in (2.43), we obtain
\[e_{0}\leq\frac{e_{0}}{2}+8C_{3.1}C_{3.2}^{2}\,E(2r)\,e_{0},\]
which is a contradiction when \(E(2r)\leq\epsilon_{1}\). Taking \(\tau=\tau_{0}\) in (2.43) and using \(4C_{3.1}C_{3.2}\tau_{0}^{2}e_{0}<1/2\) for \(\epsilon_{1}<\frac{1}{16C_{3.1}C_{3.2}^{2}}\), we have
\[u(\tau_{0})=\tau_{0}^{2}\sup\limits_{D_{2(r-\tau_{0})}}e(f)=\tau_{0}^{2}e_{0} \leq 2C_{3.2}\,E(2r). \tag{2.44}\]
So, (2.39) follows from \(u(r/2)\leq u(\tau_{0})\) with \(C_{3}=8C_{3.2}\).
We next consider the case that \(e_{0}<1\). Since \(\tilde{e}(f)\leq 1\) on \(D_{\tau_{0}}(x_{0})\), (2.41) gives
\[\Delta\tilde{e}(f)\geq-C_{3.1}\,\tilde{e}(f), \tag{2.45}\]
where \(C_{3.1}=C_{1}+4C_{2}\) as before. Applying the Theorem 9.20 in [2] again, (2.45) gives
\[\frac{1}{4}\leq\sup_{D_{\tau_{0}/2}(x_{0})}\tilde{e}(f) \leq \frac{C_{3.4}}{4\tau_{0}^{2}e_{0}}\int_{D_{\tau_{0}}(x_{0})}e(f)\;dA\leq \frac{C_{3.4}}{4\tau_{0}^{2}e_{0}}\,E(2r),\]
which implies
\[u(\tau_{0})=\tau_{0}^{2}\sup\limits_{D_{2(r-\tau_{0})}}e(f)=\tau_{0}^{2}e_{0} \leq C_{3.4}\,E(2r). \tag{2.46}\]
Then the conclusion follows from \(u(r/2)\leq u(\tau_{0})\) with \(C_{3}=4C_{3.4}\).
\(\Box\)
By the definition of Chern-minimal, we know that \(ds^{2}_{\Sigma}\) is equal to the pull-back metric \(f^{*}h\), which means that \(ds^{2}_{\Sigma}\) depends on \(f\). However, one needs uniform estimates when considering the compactness problem of conformal Chern-minimal maps. So, it is necessary to choose a good background metric on \((\Sigma,\,\mathpzc{j})\). According to the uniformization theorem, we can always choose the metric \(ds^{2}_{0}\) on \(\Sigma\) so that it is conformal to \(\mathpzc{j}\) and has constant curvature \(1\), \(0\), \(-1\) when the genus \(g(\Sigma)=0\), \(1\), \(g(\Sigma)\geq 2\), respectively.
## 3. Isoperimetric inequality and removable singularity
The goal of this section is to prove the removable singularity theorem for Chern-minimal maps, which is based on an isoperimetric inequality, Morrey's decay lemma, the energy estimate and elliptic estimates.
**Theorem 3.1**.: _Let \(f\) be a Chern-minimal immersion from Riemann surface \((\Sigma,\mathpzc{j},ds^{2}_{\Sigma})\) into compact Hermitian surface \((M,J,h)\). Then there are a universal positive constant \(C_{4}\) and a positive constant \(\epsilon_{2}\) depending on \(h\) such that for any domain \(\Omega\subset\Sigma\) with boundary satisfying \(\mathcal{A}(f(\Omega))\leq\epsilon_{2}\), we have_
\[\mathcal{A}(f(\Omega))\leq C_{4}\,\mathcal{L}^{2}(f(\partial\Omega)), \tag{3.1}\]
_where \(\mathcal{L}(f(\partial\Omega))\) is the length of \(f(\partial\Omega)\)._
_Proof._ We denote by \(H\) the mean curvature of \(f\) in the sense of Levi-Civita connection of \(ds^{2}_{\Sigma}\) and \(h\). By the Proposition 2.2 in [11], we have
\[H=2(a^{j}_{\overline{1}}\overline{L^{j}_{ki}}a^{k}_{\overline{1}}+\overline{a^ {j}_{1}}\overline{L^{k}_{ji}}a^{k}_{1})e_{i}+2(\overline{a^{j}_{1}}\overline{L ^{j}_{ki}}a^{k}_{\overline{1}}+a^{j}_{1}L^{k}_{ji}\overline{a^{k}_{1}}) \overline{e_{i}}. \tag{3.2}\]
Since \(f\) is an isometry, we know that \(|a^{i}_{1}|,\,|a^{i}_{\overline{1}}|\leq 1\). So, from (3.2), we have \(|H|\leq C_{4.1}\) for a positive constant \(C_{4.1}\) depending on the torsion of the Chern connection of \(h\). Then, it follows from the Theorem 2.2 in [4] that
\[\mathcal{A}^{1/2}(f(\Omega))\leq C_{4.2}\Big{(}\mathcal{L}(f(\partial\Omega)) +\int_{\Omega}|H|\,dA\Big{)} \tag{3.3}\]
provided that \(\mathcal{A}(f(\Omega))\leq C_{4.3}\), where \(C_{4.2}\) is a universal positive constant and \(C_{4.3}\) is a positive constant depending on the injectivity radius and sectional curvature of \(h\). The conclusion (3.1) then follows from (3.3) once \(2C_{4.1}C_{4.2}\,\mathcal{A}(f(\Omega))\leq\mathcal{A}^{1/2}(f(\Omega))\), which holds if we choose \(\epsilon_{2}<\min\{C_{4.3},\frac{1}{4C_{4.1}^{2}C_{4.2}^{2}}\}\); this gives \(C_{4}=4C_{4.2}^{2}\), as the absorption below shows.
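Indeed, under this smallness assumption, (3.3) and \(|H|\leq C_{4.1}\) give

\[\mathcal{A}^{1/2}(f(\Omega))\leq C_{4.2}\,\mathcal{L}(f(\partial\Omega))+C_{4.1}C_{4.2}\,\mathcal{A}(f(\Omega))\leq C_{4.2}\,\mathcal{L}(f(\partial\Omega))+\frac{1}{2}\,\mathcal{A}^{1/2}(f(\Omega)),\]

so that \(\mathcal{A}^{1/2}(f(\Omega))\leq 2C_{4.2}\,\mathcal{L}(f(\partial\Omega))\), which squares to (3.1) with \(C_{4}=4C_{4.2}^{2}\).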
\(\Box\)
_Remark._ Essentially, the isoperimetric inequality holds for Chern-minimal maps because a Chern-minimal surface in compact Hermitian surface has bounded mean curvature.
**Corollary 3.2**.: _Let \(f\) be a Chern-minimal immersion from Riemann surface \((\Sigma,\mathpzc{j})\) into compact Hermitian surface \((M,J,h)\). Then, for any \(x\in f(\Sigma)\) and any sufficiently small ball \(B_{r}(x)\subseteq M\) with no boundary points in \(B_{r}(x)\), we have_
\[C_{5}\,r^{2}\leq\mathcal{A}(f(\Sigma)\cap B_{r}(x)), \tag{3.4}\]
_where \(C_{5}=1/4C_{4}\)._
_Proof._ Set \(\mathcal{A}(t):=\mathcal{A}(f(\Sigma)\cap B_{t}(x))\). It follows from Theorem 3.1 that \(\sqrt{\mathcal{A}(t)}\leq\sqrt{C_{4}}\,\mathcal{A}^{\prime}(t)\), whose integration from \(0\) to \(r\) yields (3.4).
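Explicitly, the differential inequality integrates as

\[\frac{d}{dt}\sqrt{\mathcal{A}(t)}=\frac{\mathcal{A}^{\prime}(t)}{2\sqrt{\mathcal{A}(t)}}\geq\frac{1}{2\sqrt{C_{4}}},\qquad\text{so}\qquad\sqrt{\mathcal{A}(r)}\geq\frac{r}{2\sqrt{C_{4}}},\]

which is (3.4) with \(C_{5}=1/4C_{4}\).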
\(\Box\)
**Corollary 3.3**.: _Let \(f\) be a conformal Chern-harmonic map from Riemann surface \((\Sigma,\,\mathpzc{j})\) into compact Hermitian surface \((M,J,h)\). Then \(f\) is a constant map if \(E(f)\leq\epsilon_{2}\)._
_Proof._ It follows from the remark after Theorem 2.2 and from Theorem 3.1 that, for any geodesic disk \(D_{r}\) in \(\Sigma\), we have
\[E(f|_{\Sigma\setminus D_{r}})={\mathcal{A}}(f(\Sigma\setminus D_{r}))\leq C_{4} \,{\mathcal{L}}^{2}(f(\partial D_{r})).\]
Letting \(r\to 0\), we obtain \(E(f)=0\), which implies that \(f\) is a constant map.
\(\Box\)
Inspired by (2.28) and integration by parts, we say a map \(f\in W^{1,2}(\Sigma,M)\) is _weakly Chern-minimal_ if it satisfies
\[\int_{\Sigma}\langle d^{\nabla}\xi,df\rangle\,dA_{0}=0, \tag{3.5}\]
for all \(\xi\in\Omega^{0}(f^{-1}TM)\) with compact support. Notice that the regularity problem is a local problem, so we always work on the geodesic disk \(D_{r}\) with the center \(p\), and the punctured disk will be denoted by \(D_{r}^{*}\).
**Lemma 3.4**.: _Let \(f\) be a smooth conformal Chern-minimal map from a punctured geodesic disk \(D^{*}\) into Hermitian surface \((M,J,h)\). Suppose that \(f\) is continuous on \(D\) with finite area, then \(f\) is a weakly Chern-minimal map from the geodesic disk \(D\) into \(M\)._
_Proof._ It is sufficient to show that the identity (3.5) holds for any \(\xi\in\Omega^{0}(f^{-1}TM)\) with \(\operatorname{supp}(\xi)\subset D\). For any \(0<\epsilon<1\), we take a cut-off function \(\eta_{\epsilon}\) such that
\[\operatorname{supp}(\eta_{\epsilon})\subset D_{\epsilon},\ \ \ \ 0\leq\eta_{ \epsilon}\leq 1,\ \ \ \eta_{\epsilon}|_{D_{\epsilon/2}}=1,\ \ \ \ |d\eta_{\epsilon}|\leq\frac{C^{\prime}}{\epsilon},\]
where \(C^{\prime}\) is a uniform positive constant. Then
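For concreteness, one admissible choice of such a cut-off (a sketch; any smooth function with these properties will do) is

\[\eta_{\epsilon}(x):=\chi\Big(\frac{\operatorname{dist}(x,p)}{\epsilon}\Big),\qquad \chi\in C^{\infty}([0,\infty)),\quad \chi\equiv 1\ \text{on}\ [0,\tfrac{1}{2}],\quad \chi\equiv 0\ \text{on}\ [1,\infty),\quad |\chi^{\prime}|\leq 4,\]

for which \(\operatorname{supp}(\eta_{\epsilon})\subset D_{\epsilon}\), \(\eta_{\epsilon}|_{D_{\epsilon/2}}=1\) and \(|d\eta_{\epsilon}|\leq 4/\epsilon\) up to a factor depending on \(ds_{0}^{2}\).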
\[\int_{D}\langle d^{\nabla}\xi,df\rangle\,dA_{0}=\int_{D}\langle d^{\nabla}((1 -\eta_{\epsilon})\xi),df\rangle\,dA_{0}+\int_{D}\langle d^{\nabla}(\eta_{ \epsilon}\xi),df\rangle\,dA_{0}. \tag{3.6}\]
The first term in (3.6) is equal to zero by the divergence theorem and (2.28). For the last term, notice that \(d^{\nabla}(\eta_{\epsilon}\xi)=d\eta_{\epsilon}\otimes\xi+\eta_{\epsilon}\nabla\xi\); then the Hölder inequality and (2.46) imply
\[\int_{D}\langle d^{\nabla}(\eta_{\epsilon}\xi),df\rangle dA_{0}\leq C^{\prime \prime}(\epsilon\,|\nabla\xi|_{L^{\infty}}+|\xi|_{L^{\infty}})\;{\mathcal{A}} ^{1/2}(f(D_{\epsilon})), \tag{3.7}\]
where \(C^{\prime\prime}\) is a positive constant depends on \(ds_{0}^{2}\). So, the last term vanishes by letting \(\epsilon\to 0\) in (3.7). This shows that \(f\) is a weakly Chern-minimal map from \(D\) into \(M\).
\(\Box\)
_Remark._ The definition of weakly Chern-minimal and the proof of Lemma 3.4 are the same as the case of pseudo-holomorphic curves, which are given by T.H. Parker and J.G. Wolfson in [9].
We now prove the _removable singularity theorem_ of Chern-minimal maps.
**Theorem 3.5**.: _Let \(f\) be a smooth conformal Chern-minimal map from the punctured geodesic disk \(D^{*}\) into compact Hermitian surface \((M,J,h)\) with finite area. Then \(f\) can be extended to a smooth Chern-minimal map from \(D\) into \((M,J,h)\)._
_Proof._ We first use the energy estimate and Morrey's decay lemma to prove that \(f\) is \(C^{\alpha}\) on \(D_{r_{0}}^{*}\) for some \(\alpha\in(0,1)\) and \(r_{0}\ll 1\), and hence \(f\) can be extended continuously to \(D\), then we use the elliptic estimates to show that \(f\) is smooth on \(D\).
Since \(\mathcal{A}(f(D^{*}))\) is finite, we have
\[\lim_{r\to 0}{\mathcal{A}}(f(D_{r}^{*}))=0. \tag{3.8}\]
Thus, there is a constant \(r_{1}>0\) such that \(D_{r_{1}}^{*}\subset D^{*}\) and \({\mathcal{A}}(f(D_{r_{1}}^{*}))=E_{0}(f|_{D_{r_{1}}^{*}})\leq\min\{\epsilon_{1},\epsilon_{2}\}\), where \(\epsilon_{1},\epsilon_{2}\) are the constants in Theorem 2.4 and Theorem 3.1, respectively. Let \(D_{r}(x)\) be a geodesic disk with the center \(x\) and radius \(r\). Obviously, the isoperimetric
inequality holds for the domain \(D_{r}(x)\subset D_{r_{1}}^{*}\). We will show that the isoperimetric inequality still holds for the domain \(D_{r}^{*}(x):=D_{r}(x)\setminus\{p\}\subset D_{r_{1}}^{*}\), where \(p\) is the center of \(D\). It follows from Proposition 2.1 that \(f\) is a Chern-harmonic map from \((D^{*},ds_{0}^{2})\) into \((M,J,h)\). So, by the Theorem 2.4, for any \(x\in\partial D_{\rho}\) with \(0<2\rho<r_{1}\), we have
\[|df|(x)=\sqrt{e_{0}(f)(x)}\leq\sup_{D_{\rho/2}(x)}\sqrt{e_{0}(f)}\leq\frac{ \sqrt{C_{3}}}{\rho}\sqrt{\mathcal{A}(f(D_{2\rho}^{*}))}. \tag{3.9}\]
By using the polar coordinate \((\rho,\theta)\) of \(D\) and (3.9), we have
\[\mathcal{L}(f(\partial D_{\rho}))\leq\int_{0}^{2\pi}|df(\partial_{\theta})|\, d\theta\leq C_{1}^{\prime}\int_{0}^{2\pi}\rho\,|df|\,d\theta\leq 2\pi C_{1}^{ \prime}\sqrt{C_{3}}\sqrt{\mathcal{A}(f(D_{2\rho}^{*}))}, \tag{3.10}\]
where \(C_{1}^{\prime}\) depends on \(ds_{0}^{2}\). We choose \(\rho\) sufficiently small such that \(D_{\rho}^{*}\subset D_{r}^{*}(x)\). Taking \(\Omega=D_{r}^{*}(x)\setminus\overline{D_{\rho}^{*}}\) in Theorem 3.1, we have
\[\mathcal{A}(f(D_{r}^{*}(x)))-\mathcal{A}(f(D_{\rho}^{*}))\leq C_{4}\big{[} \mathcal{L}\big{(}f(\partial D_{r}(x))\big{)}+\mathcal{L}\big{(}f(\partial D _{\rho})\big{)}\big{]}^{2}. \tag{3.11}\]
Letting \(\rho\to 0\) in (3.11) and using (3.8), (3.10), we have
\[\mathcal{A}(f(D_{r}^{*}(x)))\leq C_{4}\,\mathcal{L}^{2}\big{(}f(\partial D_{r}(x)) \big{)}. \tag{3.12}\]
We now fix \(r_{0}\) such that \(0<2r_{0}<r_{1}\). Let \(D_{r}(x)\) be any geodesic disk contained in \(D_{r_{0}}^{*}\), and set \(\alpha^{\prime}=\log r_{0}/\log(r_{0}/2)<1\). It is easy to check that \(D_{r^{\alpha^{\prime}}}(x)\) or \(D_{r^{\alpha^{\prime}}}^{*}(x)\) is contained in \(D_{2r_{0}}^{*}\). Denote by \(\mathcal{A}(\rho)\) the area of \(D_{\rho}(x)\) or \(D_{\rho}^{*}(x)\). In the polar coordinate \((\rho,\theta)\) of \(D_{\rho}(x)\) or \(D_{\rho}^{*}(x)\), the isoperimetric inequality and the Hölder inequality imply
\[\mathcal{A}(\rho) \leq C_{4}\,\Big(\int_{0}^{2\pi}|df(\partial_{\theta})|\;d\theta\Big)^{2} \leq C_{4}\,\Big(\int_{0}^{2\pi}\rho\,w(\rho)\,|df|\;d\theta\Big)^{2} \leq 2\pi C_{4}\,\rho^{2}\,w^{2}(\rho)\,\int_{0}^{2\pi}\,|df|^{2}\,d\theta \leq 2\pi C_{4}C_{1}^{\prime}\,\rho\,\frac{d}{d\rho}\mathcal{A}(\rho), \tag{3.13}\]
where \(C_{1}^{\prime}\) is a positive constant as in (3.10). Here \(w(\rho)=\sin\rho\), \(1\), \(\sinh\rho\) if the curvature of background metric \(ds_{0}^{2}\) is \(1\), \(0\), \(-1\), respectively. Hence, we have
\[\frac{(2\pi\,C_{4}\,C_{1}^{\prime})^{-1}}{\rho}\leq\frac{d}{d\rho}\log\mathcal{A}(\rho). \tag{3.14}\]
Integrating (3.14) from \(r\) to \(r^{\alpha^{\prime}}\) gives
\[\mathcal{A}(r)=\mathcal{A}\big{(}f(D_{r}(x))\big{)}\leq C_{2}^{\prime}\;r^{ \alpha}, \tag{3.15}\]
where \(C_{2}^{\prime}=e^{(2\pi\,C_{4}\,C_{1}^{\prime})^{-1}}\mathcal{A}(D_{2r_{0}}^{*})\) and \(\alpha=1-\alpha^{\prime}\). Thus Morrey's decay lemma (see Theorem 3.5.2 in [8] or Lemma 2.1.10 in [7]) gives that \(f\) is \(C^{\alpha}\) on \(D_{r_{0}}^{*}\), and hence \(f\) can be extended continuously to \(D\).
Notice that \(f\) is weakly Chern-minimal by the Lemma 3.4 and the local expression of (3.5) has the same form as (8.4.1) in [6]; then the Lemma 8.4.3 in [6] gives \(f\in W^{2,2}(D,M)\), and hence \(df\in W^{1,2}(D,M)\). The Sobolev embedding implies that \(df\in L^{q}(D,M)\). The index \(q\) here and below stands for different constants which are strictly greater than \(2\). So, the term \(p(df,df)\) in the right hand side of (2.29) belongs to \(L^{q}(D,M)\). The elliptic estimates imply that \(df\in W^{1,q}(D,M)\), and hence the term \(q(\nabla df,df,df,df)\) in the right hand side of (2.30) belongs to \(L^{q}(D,M)\). It follows from (2.30) and the elliptic estimates that \(df\in W^{3,q}(D,M)\). Thus, the bootstrapping arguments show that \(f\in C^{\infty}(D,M)\).
## 4. \(C^{\infty}\)-convergence and bubbling
In this section we use the Sacks-Uhlenbeck procedure (see [13]) to get the \(C^{\infty}\)-convergence of conformal Chern-minimal surfaces and the existence of Chern-minimal two-spheres in Hermitian surface around the bubble points.
**Theorem 4.1**.: _Let \(\{f_{n}\}\) be a sequence of conformal Chern-minimal immersions from closed Riemann surface \((\Sigma,\mathpzc{j})\) into compact Hermitian surface \((M,J,h)\) whose areas are uniformly bounded by \(\mathcal{A}_{0}\). Then there are a subsequence \(\{f_{n,k}\}\), a finite set of points \(\{p_{1},\dots,p_{\kappa}\}\subset\Sigma\) and a conformal Chern-minimal immersion \(f_{0}\) from \(\Sigma\) into \(M\) such that \(f_{n,k}\) converges to \(f_{0}\) in \(C^{\infty}\)-topology on \(\Sigma\setminus\{p_{1},\dots,p_{\kappa}\}\). Furthermore, there is a Chern-minimal immersion \(f_{p_{i}}\) from \(S^{2}\) into \(M\) associated to each \(p_{i}\)._
_Proof._ We choose a constant \(r_{0}>0\) and set \(r_{m}:=2^{-m}r_{0}\) with \(m\in\mathbf{N}\). For each \(m\), we choose a finite covering \(\mathcal{C}_{m}=\{D_{r_{m}}(p_{\alpha})\}\) of \(\Sigma\) such that each point in \(\Sigma\) is covered at most \(N\) times by disks in \(\mathcal{C}_{m}\) and \(\{D_{r_{m}/2}(p_{\alpha})\}\) is still a covering of \(\Sigma\). It is clear that \(N\) only depends on \(\Sigma\). So, for each \(n\), we have
\[\sum_{\alpha}\int_{D_{r_{m}}(p_{\alpha})}e_{0}(f_{n})\,dA_{0}\leq N\mathcal{A }_{0}. \tag{4.1}\]
This implies that there are at most \(N\mathcal{A}_{0}/\epsilon_{1}\) disks in \(\mathcal{C}_{m}\), on which
\[\int_{D_{r_{m}}(p_{\alpha})}e_{0}(f_{n})\,dA_{0}>\epsilon_{1}, \tag{4.2}\]
where \(\epsilon_{1}\) is the constant in Theorem 2.4. The center points of these disks are at most \(N\mathcal{A}_{0}/\epsilon_{1}\) sequences of points in \(\Sigma\). Notice that \(\mathcal{C}_{m}\) is a finite covering and \(\Sigma\) is compact, we may assume that these center points are fixed by passing to a subsequence of \(\{f_{n}\}\). For each \(m\), we call these center points \(\{p_{1,m},\dots,p_{\kappa,m}\}\) with \(\kappa\leq N\mathcal{A}_{0}/\epsilon_{1}\). By the Theorem 2.4, the elliptic estimates and the Arzela-Ascoli theorem, we can successively choose a subsequence of \(\{f_{n}\}\) that converges in \(C^{\infty}\)-topology on every disk \(D_{r_{m}/2}(p_{\alpha})\) for each \(D_{r_{m}}(p_{\alpha})\in\mathcal{C}_{m}\setminus\mathcal{C}_{m}^{\prime}\), where \(\mathcal{C}_{m}^{\prime}:=\{D_{r_{m}}(p_{1,m}),\dots,D_{r_{m}}(p_{\kappa,m})\}\). Since \(\Sigma\) is compact, after choosing a subsequence of \(\{m\}\), we can assume that \(p_{1,m},\dots,p_{\kappa,m}\) converge to points \(p_{1},\dots,p_{\kappa}\) as \(m\to\infty\), respectively. By a diagonal argument, there is a subsequence of \(\{f_{n}\}\) converges in \(C^{\infty}\)-topology on \(\Sigma\setminus\{p_{1},\dots,p_{\kappa}\}\), and the limit is denoted by \(f_{0}\). By the elliptic regularity, \(f_{0}\) is smooth and Chern-minimal on \(\Sigma\setminus\{p_{1},\dots,p_{\kappa}\}\). Then the Theorem 3.5 implies that \(f_{0}\) can be extended smoothly to \(\Sigma\).
We next use the Sacks-Uhlenbeck procedure to get a bubble for each \(p\in\{p_{1},\dots,p_{\kappa}\}\). For notational convenience, we still use \(\{f_{n}\}\) to relabel the subsequence obtained above. Suppose that \(r_{0}>0\) is such that \(2r_{0}\) is less than the injectivity radius of \((\Sigma,ds_{0}^{2})\) and no other \(p_{i}\) lies in the geodesic disk \(D_{r_{0}}(p)\). Set
\[b_{n}:=\sup_{q\in D_{r_{0}}(p)}\,|df_{n}(q)|.\]
It is clear that \(\{b_{n}\}\) is unbounded. Otherwise, the elliptic estimates and Arzela-Ascoli theorem imply that a subsequence converges on \(D_{r_{0}}(p)\) in \(C^{\infty}\)-topology. So, without loss of generality, we can assume that \(b_{n}\to\infty\) as \(n\to\infty\). Let \(q_{n}\in\overline{D_{r_{0}}(p)}\) be the point such that \(|df_{n}(q_{n})|=b_{n}\). It is easy to check that \(q_{n}\to p\) as \(n\to\infty\).
Locally, we choose a conformal coordinate \((U,\psi;x)\) around \(p\) in \(D_{r_{0}}(p)\) such that \(\psi(U)=B_{2}(0)\) with \(\psi(p)=0\), where \(B_{2}(0)\subset\mathbb{C}\) is the ball centered at \(0\) with radius \(2\). The local expression of the metric \(ds_{0}^{2}\) on \(B_{2}(0)\) can be written as \(ds_{0}^{2}=\lambda^{2}(x)\,dxd\overline{x}\). For sufficiently large \(n\), we assume that \(\psi(q_{n})=x_{n}\), \(B_{1}(x_{n})\subset B_{2}(0)\) and define
\[\tilde{f}_{n}:B_{b_{n}}(0)\longrightarrow M,\;\;y\mapsto f_{n}\circ\psi^{-1}( x_{n}+y/b_{n}). \tag{4.3}\]
We equip \(B_{b_{n}}(0)\) with the metric \(ds_{n}^{2}=\lambda_{n}^{2}(y)\,dyd\overline{y}\) with \(\lambda_{n}(y)=\lambda(x_{n}+y/b_{n})\). Then \(\tilde{f}_{n}\) is a Chern-harmonic map from \((B_{b_{n}}(0),ds_{n}^{2})\) into \((M,J,h)\) with \(|d\tilde{f}_{n}|_{n}\leq 1\) on \(B_{b_{n}}(0)\) and \(|d\tilde{f}_{n}(0)|_{n}=1\). By identifying \(S^{2}\setminus\{\boldsymbol{s}\}\) with \(\mathbb{C}\) via the southern stereographic projection, where \(\boldsymbol{s}=(0,0,-1)\) is the south pole, we can regard \(\tilde{f}_{n}\) as maps from domains in \(S^{2}\setminus\{\boldsymbol{s}\}\) into \(M\). Notice that \(ds_{n}^{2}\) converges smoothly to \(ds_{\infty}^{2}=\lambda^{2}(0)\,dyd\overline{y}\) on \(\mathbb{C}\) and they are conformal, so Proposition 2.1 implies that \(\tilde{f}_{n}\) is Chern-harmonic from \((B_{b_{n}}(0),ds_{\infty}^{2})\) into \((M,J,h)\) with finite area. Since \(ds_{n}^{2}\) and \(ds_{\infty}^{2}\) are uniformly equivalent, \(\{|d\tilde{f}_{n}|_{\infty}\}\) is bounded. By using a sequence of compact sets exhausting \(S^{2}\setminus\{\boldsymbol{s}\}\) and a diagonal argument, the elliptic estimates yield a subsequence of \(\{\tilde{f}_{n}\}\) that converges in \(C^{\infty}\)-topology to a Chern-harmonic map \(\tilde{f}\) from \((S^{2}\setminus\{\boldsymbol{s}\},ds_{\infty}^{2})\) into \((M,J,h)\) with finite area. It is also Chern-harmonic with respect to the standard metric on \(S^{2}\) restricted to \(S^{2}\setminus\{\boldsymbol{s}\}\). The Theorem 3.5 gives a Chern-harmonic map \(\tilde{f}\) from \(S^{2}\) into \(M\). By using the Proposition 2.1 again, with respect to the induced metric on \(S^{2}\), we obtain a Chern-minimal map \(f_{p}\) from \(S^{2}\) into \((M,J,h)\), which is not a constant map by the fact \(|d\tilde{f}(0)|_{\infty}=1\).
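The normalization claims above follow from the scaling, under the identifications just made: since \(\lambda_{n}(y)=\lambda(x_{n}+y/b_{n})\) exactly compensates the conformal factor at the corresponding point, one computes

\[|d\tilde{f}_{n}|_{n}(y)=\frac{1}{b_{n}}\,|df_{n}|\big(\psi^{-1}(x_{n}+y/b_{n})\big)\leq 1,\qquad |d\tilde{f}_{n}|_{n}(0)=\frac{1}{b_{n}}\,|df_{n}|(q_{n})=1.\]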
\(\Box\)
_Remark_. By passing to a subsequence (still relabeled as \(\{f_{n}\}\)) in Theorem 4.1, we can define
\[m_{i}:=\lim_{r\to 0}\limsup_{n\to\infty}\int_{D_{r}(p_{i})}e_{0}(f_{n})\, dA_{0}. \tag{4.4}\]
It follows from Theorem 4.1 that we get the measures convergence
\[e_{0}(f_{n})\,dA_{0}\to e_{0}(f_{0})\,dA_{0}+\sum_{i=1}^{\kappa}m_{i}\,\delta _{p_{i}}, \tag{4.5}\]
where \(e_{0}(f_{n})\,dA_{0}\) are viewed as measures on \(\Sigma\) and \(\delta_{p_{i}}\) is the point measure at \(p_{i}\).
## 5. The bubble tree convergence
In this section we give another renormalization procedure to construct bubbles by controlling the energy on each bubble. This procedure can be iterated and terminates in finitely many steps, which is now known as bubble tree convergence in the theory of harmonic maps and pseudo-holomorphic curves.
We choose a conformal coordinate \((U;x)\) around any \(p_{i}\in\{p_{1},\ldots,p_{\kappa}\}\) such that there are no other bubble points in \(U\), with
\[x(p_{i})=0,\qquad x(U)=B_{4}(0)\subset\mathbb{C},\qquad ds_{0}^{2}=\lambda^{2 }dxd\overline{x},\]
and we will identify \(B_{4}(0)\) with \(U\) in the sequel. Let \(\{f_{n}\}\) be the subsequence has been chosen in Theorem 4.1, and set
\[r_{n}:=\sup\Big{\{}\;r\;\Big{|}\;\;\int_{B_{2r}(0)}e_{0}(f_{0})\,dA_{0}\leq \frac{m_{i}}{32n^{2}}\mbox{ with }r\leq\frac{1}{n}\Big{\}}. \tag{5.1}\]
By (4.5) and (5.1), we choose a subsequence \(\{f_{n,k}\}\) inductively such that
\[(1-\frac{1}{16k^{2}})\,m_{i}\leq\int_{B_{r_{k}/16k^{2}}(0)}e_{0}(f_{n,j})dA_{0} \leq(1+\frac{1}{16k^{2}})\,m_{i}, \tag{5.2}\]
\[\int_{B_{2r_{k}}(0)\setminus B_{r_{k}/16k^{2}}(0)}e_{0}(f_{n,j})dA_{0}\leq \frac{m_{i}}{16k^{2}}, \tag{5.3}\]
for any \(j\geq k\). By the \(C^{\infty}\)-convergence in Theorem 4.1 we can further assume that
\[\sup_{x\in\partial B_{2r_{k}}(0)}\mbox{dist}\big{(}f_{n,j}(x),f_{0}(x)\big{)} \leq\frac{1}{k}, \tag{5.4}\]
\[\sup_{B_{2r_{1}}(0)\setminus B_{r_{k}/16k^{2}}(0)}\big{|}e_{0}(f_{n,j})-e_{0} (f_{0})\big{|}\leq 1, \tag{5.5}\]
for any \(j\geq k\). We define the center of mass of the measure \(e_{0}(f_{n,k})\,dA_{0}\) on \(B_{2r_{k}}(0)\) by
\[\tilde{x}_{k}:=\int_{B_{2r_{k}}(0)}x\,e_{0}(f_{n,k})dA_{0}\Big{/}\int_{B_{2r_{k}}(0)}e_{0}(f_{n,k})dA_{0}. \tag{5.6}\]
It follows from (5.2) that
\[\int_{B_{2r_{k}}(0)}|x|e_{0}(f_{n,k})dA_{0} \leq \frac{r_{k}}{16k^{2}}\int_{B_{r_{k}/16k^{2}}(0)}e_{0}(f_{n,k})dA_{0}+2r_{k}\int_{B_{2r_{k}}(0)\setminus B_{r_{k}/16k^{2}}(0)}e_{0}(f_{n,k})dA_{0} \leq \Big(\frac{3}{16k^{2}}+\frac{1}{256k^{4}}\Big)\,r_{k}m_{i},\]
which implies
\[|\tilde{x}_{k}|\leq\frac{r_{k}}{4k^{2}}, \tag{5.7}\]
for sufficiently large \(k\). Set
\[\mu_{k}:=\sup\Big{\{}\ \mu\ \Big{|}\ \ \int_{B_{2r_{k}}(0)\setminus B_{\mu}( \tilde{x}_{k})}e_{0}(f_{n,k})dA_{0}\geq C_{0}\Big{\}}, \tag{5.8}\]
where \(0<C_{0}<\epsilon_{1}/2\) is a fixed renormalization constant, which is small enough and to be chosen later. It follows from (5.7) that \(B_{r_{k}/16k^{2}}(0)\subset B_{r_{k}/k^{2}}(\tilde{x}_{k})\). So, for sufficiently large \(k\), (5.3) gives
\[\int_{B_{2r_{k}}(0)\setminus B_{r_{k}/k^{2}}(\tilde{x}_{k})}e_{0}(f_{n,k})dA_ {0} \leq \int_{B_{2r_{k}}(0)\setminus B_{r_{k}/16k^{2}}(0)}e_{0}(f_{n,k}) dA_{0}<C_{0}.\]
By the definition of \(\mu_{k}\) in (5.8), we have
\[\mu_{k}\leq\frac{r_{k}}{k^{2}}, \tag{5.9}\]
for sufficiently large \(k\).
Notice that the Theorem 2.4 and a covering argument imply that there is a positive constant \(C_{1}^{\prime}\) depending on \((\Sigma,ds_{0}^{2})\) so that
\[\sup_{\Sigma}|df_{0}|^{2}\leq C_{1}^{\prime}\,E_{0}(f_{0}). \tag{5.10}\]
Then (5.4) and (5.10) give
\[\mbox{dist}\big{(}f_{n,k}(\partial B_{2r_{k}}(0)),f_{0}(p_{i}) \big{)} \leq 4r_{k}\sup_{\Sigma}|df_{0}|+\sup_{x\in\partial B_{2r_{k}}(0)} \mbox{dist}\big{(}f_{n,k}(x),f_{0}(x)\big{)} \tag{5.11}\] \[\leq \frac{C_{2}^{\prime}}{k},\]
where \(C^{\prime}_{2}\) depends on \(C^{\prime}_{1}\) and \(E_{0}(f_{0})\).
For each \(k\), we define the conformal transformation on \(\mathbb{C}\) as
\[T_{k}:\mathbb{C}\longrightarrow\mathbb{C},\ x\mapsto T_{k}(x)=\frac{1}{\mu_{k} }(x-\tilde{x}_{k}).\]
Let \(\pi\) be the southern stereographic projection from \(S^{2}\setminus\{\boldsymbol{s}\}\) to \(\mathbb{C}\), where \(\boldsymbol{s}=(0,0,-1)\) is the south pole. Set \(\Omega_{k}:=\pi^{-1}\circ T_{k}(B_{2r_{k}}(0))\), \(D_{k}:=\pi^{-1}\circ T_{k}(B_{k\mu_{k}}(\tilde{x}_{k}))\subset S^{2}\). It is clear that \(\Omega_{k}\) and \(D_{k}\) exhaust \(S^{2}\setminus\{\boldsymbol{s}\}\) as \(k\) goes to infinity. Then, define the renormalized maps as
\[\tilde{f}_{n,k}:=f_{n,k}\circ T_{k}^{-1}\circ\pi:\Omega_{k}\longrightarrow M,\]
whose images agree with \(f_{n,k}|_{B_{2r_{k}}(0)}\). Notice that \(T_{k}^{-1}\) and \(\pi\) are conformal, so \(\tilde{f}_{n,k}\) is also Chern-harmonic with respect to the standard metric on \(S^{2}\). It follows from (4.5), (5.2), (5.3) and (5.8) that we have
\[\lim_{k\to\infty}\int_{\Omega_{k}}e_{0}(\tilde{f}_{n,k})\,dA_{0}=m_{i} \tag{5.12}\]
and
\[\lim_{k\to\infty}\int_{\Omega_{k}\setminus S^{2}_{+}}e_{0}(\tilde{f}_{n,k})\, dA_{0}=C_{0}, \tag{5.13}\]
where \(S^{2}_{+}\) is the northern hemisphere of \(S^{2}\).
Choose a sequence of compact sets \(\{G_{j}\}\) with \(D_{j}\subset G_{j}\subset D_{j+1}\). Applying the Theorem 4.1 to the sequence \(\{\tilde{f}_{n,k}\}\) on \(G_{j}\) successively in \(j\) together with a diagonal argument, we obtain a subsequence of \(\{\tilde{f}_{n,k}\}\) (still denoted by \(\{\tilde{f}_{n,k}\}\)), a finite subset \(\{p_{i1},\ldots,p_{i\kappa_{i}}\}\subset S^{2}\) with \(\kappa_{i}\leq\mathcal{A}_{0}/\epsilon_{1}\), and a Chern-harmonic map \(f_{p_{i}}:S^{2}\longrightarrow M\) so that the subsequence converges to \(f_{p_{i}}\) in \(C^{\infty}\)-topology on \(S^{2}\setminus\{p_{i1},\ldots,p_{i\kappa_{i}},\boldsymbol{s}\}\), with the measures convergence
\[e_{0}(\tilde{f}_{n,k})\,dA_{0}\to e_{0}(f_{p_{i}})\,dA_{0}+\sum_{j=1}^{\kappa_ {i}}m_{ij}\,\delta_{p_{ij}}+m_{i\boldsymbol{s}}\,\delta_{\boldsymbol{s}}, \tag{5.14}\]
where \(m_{ij}>\epsilon_{1}\). This together with (5.13) implies that there are no bubble points \(p_{ij}\) in the southern hemisphere. As in the choice of the sequence \(\{f_{n,k}\}\), by passing to a subsequence, for any \(j\geq k\) we can further assume that
\[\sup_{x\in\partial D_{k}}\operatorname{dist}\bigl{(}\tilde{f}_{n,j}(x),f_{p_{ i}}(x)\bigr{)}\leq\frac{1}{k} \tag{5.15}\]
and
\[\sup_{D^{\prime}_{k}\setminus D_{k}}\big{|}e_{0}(\tilde{f}_{n,j})-e_{0}(f_{p_ {i}})\big{|}\leq 1, \tag{5.16}\]
where \(D^{\prime}_{k}:=\pi^{-1}\circ T_{k}\big(B_{4k\mu_{k}}(\tilde{x}_{k})\big)\subseteq S^{2}\). Similarly, as in the proofs of (5.10) and (5.11), these imply
\[\sup_{S^{2}}\left|df_{p_{i}}\right|^{2}\leq C^{\prime}_{3}\,E_{0}(f_{p_{i}}), \tag{5.17}\]
and for any \(j\geq k\) that
\[\operatorname{dist}\bigl{(}\tilde{f}_{n,j}(\partial D_{k}),f_{p_{i}}( \boldsymbol{s})\bigr{)}\leq\frac{C^{\prime}_{4}}{k}, \tag{5.18}\]
where \(C^{\prime}_{3}\) depends on \((S^{2},ds_{0}^{2})\), \(C^{\prime}_{4}\) depends on \(C^{\prime}_{3}\) and \(E_{0}(f_{p_{i}})\). Notice that \(\partial B_{r_{k}}(\tilde{x}_{k})\subseteq B_{2r_{1}}(0)\setminus B_{r_{k}/16k^ {2}}(0)\), so for any \(x\in\partial B_{r_{k}}(\tilde{x}_{k})\), \(y\in\partial B_{2r_{k}}(0)\), (5.5) and (5.11) give
\[\mathrm{dist}\big(f_{n,k}(x),f_{0}(p_{i})\big) \leq \mathrm{dist}\big(f_{n,k}(x),f_{n,k}(y)\big)+\mathrm{dist}\big(f_{n,k}(y),f_{0}(p_{i})\big)\] \[\leq 4r_{k}\,\sup_{B_{2r_{1}}(0)\setminus B_{r_{k}/16k^{2}}(0)}\,\big|df_{n,k}\big|+\frac{C^{\prime}_{2}}{k}\] \[\leq \frac{C^{\prime}_{5}}{k},\]
where \(C^{\prime}_{5}\) depends on \(C^{\prime}_{2}\) and \(\sup_{\Sigma}e_{0}(f_{0})\). This implies
\[\mathrm{dist}\big{(}f_{n,k}(\partial B_{r_{k}}(\tilde{x}_{k})),f_{0}(p_{i}) \big{)}\leq\frac{C^{\prime}_{5}}{k}. \tag{5.19}\]
We next describe precisely how \(f_{0}(\Sigma)\) and \(f_{p_{i}}(S^{2})\) are positioned in \(M\). We divide the energy on \(B_{4}(0)\) into three parts as follows
\[\int_{B_{4}(0)}e_{0}(f_{n,k})\,dA_{0} = \int_{B_{4}(0)\setminus B_{r_{k}}(\tilde{x}_{k})}e_{0}(f_{n,k}) \,dA_{0}+\int_{B_{r_{k}}(\tilde{x}_{k})\setminus B_{k\mu_{k}}(\tilde{x}_{k})} e_{0}(f_{n,k})\,dA_{0} \tag{5.20}\] \[+\int_{B_{k\mu_{k}}(\tilde{x}_{k})}e_{0}(f_{n,k})\,dA_{0}.\]
So, it is natural to define the _neck map_
\[f_{n,k}|_{A_{k}}:A_{k}\longrightarrow M,\ \ \ \ A_{k}:=B_{r_{k}}(\tilde{x}_{k}) \setminus B_{k\mu_{k}}(\tilde{x}_{k}), \tag{5.21}\]
and the _bubble map_
\[\tilde{f}_{n,k}:D_{k}\longrightarrow M, \tag{5.22}\]
whose image agrees with \(f_{n,k}(B_{k\mu_{k}}(\tilde{x}_{k}))\). The corresponding domains \(A_{k}\) and \(B_{k\mu_{k}}(\tilde{x}_{k})\) will be called the _neck domain_ and the _bubble domain_, respectively. Since \(f_{n,k}\) converges to \(f_{0}\) in \(C^{\infty}\)-topology on \(\Sigma\setminus\{p_{1},\ldots,p_{\kappa}\}\), we have
\[\lim_{k\to\infty}\int_{B_{4}(0)\setminus B_{r_{k}}(\tilde{x}_{k})}e_{0}(f_{n,k })\,dA_{0}=\int_{B_{4}(0)}e_{0}(f_{0})\,dA_{0}. \tag{5.23}\]
If we can show
\[\lim_{k\to\infty}\int_{B_{r_{k}}(\tilde{x}_{k})\setminus B_{k\mu_{k}}(\tilde{ x}_{k})}e_{0}(f_{n,k})\,dA_{0}=0, \tag{5.24}\]
then (4.5), (5.20) and (5.24) imply
\[\lim_{k\to\infty}\int_{B_{k\mu_{k}}(\tilde{x}_{k})}e_{0}(f_{n,k})\,dA_{0}=m_{i}. \tag{5.25}\]
To prove (5.24), we reparameterize the neck domain \(A_{k}\) in polar coordinates as
\[\phi_{k}:[0,T_{k}]\times S^{1}\longrightarrow\overline{A_{k}},\ (t,\theta) \mapsto r_{k}e^{-t+\mathrm{i}\theta},\]
where \(T_{k}=\log(r_{k}/k\mu_{k})\to\infty\) as \(k\to\infty\). We define a family of loops by \(\gamma_{k,t}:=f_{n,k}\circ\phi_{k}(t,\cdot)\), \(t\in[0,T_{k}]\).
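For orientation, note that the pullback of the flat metric under \(\phi_{k}\) is
\[\phi_{k}^{*}(dx\,d\overline{x})=r_{k}^{2}\,e^{-2t}\bigl{(}dt^{2}+d\theta^{2}\bigr{)},\]
which is conformal to the standard cylinder metric \(dt^{2}+d\theta^{2}\), so the neck domain \(A_{k}\) may be viewed, conformally, as a long flat cylinder of length \(T_{k}\).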
**Proposition 5.1**.: _For the lengths of the loops \(\gamma_{k,t}\) with \(t\in[0,T_{k}]\) and sufficiently large \(k\), we have_
\[\mathcal{L}(\gamma_{k,t})\leq 2\pi\sqrt{C_{0}C_{3}C^{\prime}_{6}},\]
_where \(C^{\prime}_{6}\) is a constant depending on \(ds_{0}^{2}\). Moreover, we have_
\[\max\bigl{\{}\mathcal{L}(\gamma_{k,0}),\mathcal{L}(\gamma_{k,T_{k}})\bigr{\}} \leq\frac{C_{6}}{k},\]
_where \(C_{6}\) is a constant depending on the metric \(ds_{0}^{2}\) and \(f_{0}\)._
_Proof._ By the definition of \(\mu_{k}\) in (5.8), we have \(E_{0}(f_{n,k}|_{A_{k}})\leq C_{0}\). So, by Theorem 2.4, on the unit disk around each point in \([1,T_{k}-1]\times S^{1}\), we have \(e_{0}(f_{n,k}|_{A_{k}})\leq C_{0}C_{3}\). For \(1\leq t\leq T_{k}-1\), as in the estimate (3.13), we have
\[\mathcal{L}^{2}(\gamma_{k,t}) = \big{(}\int_{0}^{2\pi}\big{|}df_{n,k}(\partial_{\theta})\big{|}\,d\theta\ \big{)}^{2}\] \[\leq C^{\prime}_{6}\big{(}\int_{0}^{2\pi}d\theta\ \big{)}\ \big{(}\int_{0}^{2\pi}\big{|}df_{n,k}\big{|}^{2}d\theta\ \big{)}\] \[\leq 4\pi^{2}C_{0}C_{3}C^{\prime}_{6},\]
where \(C^{\prime}_{6}\) is a constant depending on \(ds_{0}^{2}\). For \(0\leq t\leq 1\), since \(\gamma_{k,t}\subseteq f_{n,k}(\overline{B_{r_{k}}(\tilde{x}_{k})}\backslash B_{r_{k}/e}(\tilde{x}_{k}))\) and \(\overline{B_{r_{k}}(\tilde{x}_{k})}\setminus B_{r_{k}/e}(\tilde{x}_{k})\subset B_{2r_{1}}(0)\setminus B_{r_{k}/16k^{2}}(0)\) for sufficiently large \(k\), it follows from (5.5) and (5.10) that
\[\sup_{\theta}\big{|}d\gamma_{k,t}(\partial_{\theta})\big{|}^{2} \leq C^{\prime}_{6}\sup_{B_{r_{k}}(\tilde{x}_{k})\backslash B_{r_{k} /e}(\tilde{x}_{k})}r^{2}\left|df_{n,k}\right|^{2}\] \[\leq \frac{C^{\prime}_{6}C^{\prime}_{7}}{k^{2}},\]
where \(C^{\prime}_{7}\) depends on \(f_{0}\). This implies that \(\mathcal{L}(\gamma_{k,0})\leq C_{6}/k\), where \(C_{6}\) depends on \(C^{\prime}_{6}\) and \(C^{\prime}_{7}\). For \(T_{k}-1\leq t\leq T_{k}\), a similar estimate follows from (5.16) and (5.17).
\(\Box\)
**Proposition 5.2**.: _For the constants \(m_{i\boldsymbol{s}}\) in (5.14), we have_
\[m_{i\boldsymbol{s}}=\limsup_{k\to\infty}\int_{A_{k}}e_{0}(f_{n,k})\,dA_{0}.\]
_Proof._ Since the \(\Omega_{k}\) exhaust \(S^{2}\setminus\{\boldsymbol{s}\}\) as \(k\) goes to infinity, it follows from (5.14) that
\[m_{i\boldsymbol{s}} = \lim_{j\to\infty}\lim_{k\to\infty}E_{0}(\tilde{f}_{n,k}|_{ \Omega_{k}\backslash D_{j}}) \tag{5.26}\] \[= \lim_{j\to\infty}\lim_{k\to\infty}\big{(}E_{0}(\tilde{f}_{n,k}|_ {\Omega_{k}\backslash D_{k}})+E_{0}(\tilde{f}_{n,k}|_{D_{k}\backslash D_{j}}) \big{)}\] \[= \lim_{j\to\infty}\lim_{k\to\infty}\big{(}E_{0}(f_{n,k}|_{B_{2r_{k }}(0)\setminus B_{r_{k}}(\tilde{x}_{k})})+E_{0}(f_{n,k}|_{A_{k}})+E_{0}(\tilde{ f}_{n,k}|_{D_{k}\backslash D_{j}})\big{)}.\]
Since \(B_{2r_{k}}(0)\setminus B_{r_{k}}(\tilde{x}_{k})\subseteq B_{2r_{k}}(0)\setminus B_{r_{k}/16k^{2}}(0)\), (5.3) implies that the first term in (5.26) is equal to zero. On the other hand, for any \(k\geq j\), (5.16) and (5.17) give
\[\sup_{D_{j+1}\backslash D_{j}}e_{0}(\tilde{f}_{n,k})\leq\sup_{D_{j+1} \backslash D_{j}}\big{(}\big{|}e_{0}(\tilde{f}_{n,k})-e_{0}(f_{p_{i}})\big{|} +|e_{0}(f_{p_{i}})|\big{)}\leq C^{\prime}_{8},\]
where \(C^{\prime}_{8}\) depends on \(f_{p_{i}}\). This implies that \(\sup_{D_{k}\backslash D_{j}}e_{0}(\tilde{f}_{n,k})\leq C^{\prime}_{8}\), and hence the third term in (5.26) is also equal to zero. So, the statement holds by taking a subsequence converging to \(\limsup_{k\to\infty}E_{0}(f_{n,k}|_{A_{k}})\).
\(\Box\)
To continue, we define a new map
\[F_{k}:B_{4}(0)\longrightarrow M\times\mathbb{C},\ \ x\mapsto(f_{n,k}(x),x).\]
Notice that \(F_{k}\) is conformal, so it is easy to check that \(F_{k}\) is also Chern-harmonic with respect to the product metric \(h+dxd\overline{x}\) on \(M\times\mathbb{C}\). We will use \(F_{k}\) to prove no energy loss and necklessness. This approach has been used by R. Schoen [12], J. Jost [5] and T.H. Parker [10].
**Proposition 5.3**.: _For the neck maps \(f_{n,k}|_{A_{k}}\), we have_
\[\limsup_{k\to\infty}\int_{A_{k}}e_{0}(f_{n,k})\,dA_{0}=0\ \ \text{and}\ \ \limsup_{k\to\infty}\sup_{x,x^{\prime}\in A_{k}}\operatorname{dist}\bigl{(}f_{n,k}(x),f_{n,k}(x^{\prime})\bigr{)}=0.\]
_Proof._ For the energy of \(F_{k}\) on \(A_{k}\), we have
\[\int_{A_{k}}e_{0}(F_{k})\,dA_{0}=\int_{A_{k}}\left(e_{0}(f_{n,k})+1\right)dA_{0 }\leq C_{0}+C_{9}^{\prime}r_{k}^{2}, \tag{5.27}\]
where \(C_{9}^{\prime}\) depends on \(ds_{0}^{2}\). This implies \(\mathcal{A}(F_{k}(A_{k}))\leq\epsilon_{1}\) for sufficiently large \(k\). For the length of \(F_{k}(\partial B_{r}(\tilde{x}_{k}))\) with \(k\mu_{k}\leq r\leq r_{k}\), we have
\[\mathcal{L}(F_{k}(\partial B_{r}(\tilde{x}_{k})))\leq\mathcal{L}(f_{n,k}( \partial B_{r}(\tilde{x}_{k})))+2\pi r. \tag{5.28}\]
Applying the isoperimetric inequality to \(F_{k}(A_{k})\) for sufficiently large \(k\), we have
\[E_{0}(f_{n,k}|_{A_{k}})\leq E_{0}(F_{k}|_{A_{k}})=\mathcal{A}(F_{k}|_{A_{k}}) \leq C_{10}^{\prime}\,\mathcal{L}^{2}(F_{k}|_{\partial A_{k}}), \tag{5.29}\]
where \(C_{10}^{\prime}\) depends on \((M\times\mathbb{C},h+dxd\overline{x})\). So, it follows from (5.28), (5.29) and Proposition 5.1 that \(E_{0}(f_{n,k}|_{A_{k}})\to 0\) as \(k\to\infty\).
To prove the second identity, we only need to show that for any fixed \(\epsilon>0\), there exists a point \(z\in M\times\mathbb{C}\) such that \(F_{k}(A_{k})\subseteq B_{\epsilon}(z)\) for large \(k\). It follows from (5.28) that there exist \(z,\ z^{\prime}\in M\times\mathbb{C}\) such that \(F_{k}(\partial A_{k})\subseteq B_{\epsilon/4}(z)\cup B_{\epsilon/4}(z^{\prime})\) for sufficiently large \(k\). We claim that \(F_{k}(A_{k})\subseteq B_{\epsilon/2}(z)\cup B_{\epsilon/2}(z^{\prime})\). If not, for some large \(k\), there is a point \(y\in F_{k}(A_{k})\) with \(y\notin B_{\epsilon/2}(z)\cup B_{\epsilon/2}(z^{\prime})\). Then Corollary 3.2 gives
\[\frac{1}{4C_{10}^{\prime}}\epsilon^{2}\leq\mathcal{A}(F_{k}(A_{k})\cap B_{ \epsilon}(y)),\]
which contradicts (5.29) for sufficiently large \(k\). Since \(F_{k}(A_{k})\) is connected, we obtain that \(F_{k}(A_{k})\subseteq B_{\epsilon}(z)\) or \(F_{k}(A_{k})\subseteq B_{\epsilon}(z^{\prime})\) for large \(k\).
_Remark._ It follows from Proposition 5.2 and Proposition 5.3 that \(m_{i\boldsymbol{s}}=0\), and hence (5.14) reduces to
\[e_{0}(\tilde{f}_{n,k})\,dA_{0}\longrightarrow e_{0}(f_{p_{i}})\,dA_{0}+\sum_ {j=1}^{\kappa_{i}}m_{ij}\,\delta_{p_{ij}}. \tag{5.30}\]
We finally iterate the previous procedure to obtain bubble tree convergence for Chern-minimal immersions. Recall that for a given sequence of conformal Chern-minimal immersions \(\{f_{n}\}\) from Riemann surfaces \((\Sigma,\,\mathpzc{j})\) into a compact Hermitian surface \((M,J,h)\) with areas uniformly bounded by \(\mathcal{A}_{0}\), the Sacks-Uhlenbeck procedure says that there are a subsequence \(\{f_{n,k}\}\), a Chern-minimal map \(f_{0}:\Sigma\longrightarrow M\), and a finite set of bubble points \(\{p_{1},\ldots,p_{\kappa}\}\subset\Sigma\) with concentrated energies \(m_{1},\ldots,m_{\kappa}>\epsilon_{1}\) so that \(f_{n,k}\) converges to \(f_{0}\) in \(C^{\infty}\)-topology on \(\Sigma\setminus\{p_{1},\ldots,p_{\kappa}\}\) and the energy convergence
\[\lim_{k\to\infty}E(f_{n,k})=E(f_{0})+\sum_{i_{1}=1}^{\kappa}m_{i_{1}}. \tag{5.31}\]
The renormalization procedure around any bubble point \(p_{i_{1}}\in\{p_{1},\ldots,p_{\kappa}\}\) says that there are a sequence of renormalized Chern-minimal immersions \(\tilde{f}_{n,k,i_{1}}\) defined on domains \(\Omega_{k}\subset S^{2}\) exhausting \(S^{2}\setminus\{\mathpzc{s}\}\), a Chern-minimal map \(f_{p_{i_{1}}}:S^{2}\longrightarrow M\), and a finite set of bubble points \(\{p_{i_{1}1},\ldots,p_{i_{1}\kappa_{i_{1}}}\}\subset S^{2}\) with concentrated energies \(m_{i_{1}1},\ldots,m_{i_{1}\kappa_{i_{1}}}>\epsilon_{1}\) so that
\(\tilde{f}_{n,k,i_{1}}\) converges to \(f_{p_{i_{1}}}\) in \(C^{\infty}\)-topology on \(S^{2}\setminus\{p_{i_{1}1},\ldots,p_{i_{1}\kappa_{i_{1}}},\boldsymbol{s}\}\) and (5.12) and (5.30) imply the energy convergence
\[m_{i_{1}}=\lim_{k\to\infty}E(\tilde{f}_{n,k,i_{1}})=E(f_{p_{i_{1}}})+\sum_{i_{ 2}=1}^{\kappa_{i_{1}}}m_{i_{1}i_{2}}. \tag{5.32}\]
Inductively, if we have obtained a Chern-minimal map \(f_{p_{I^{\prime}}}:S^{2}\longrightarrow M\) and a finite set of bubble points \(\{p_{I^{\prime}1},\ldots,p_{I^{\prime}\kappa_{I^{\prime}}}\}\) with concentrated energies \(m_{I^{\prime}1},\ldots,m_{I^{\prime}\kappa_{I^{\prime}}}>\epsilon_{1}\), then by repeating the renormalization procedure around each bubble point \(p_{I^{\prime}i_{\ell}}\in\{p_{I^{\prime}1},\ldots,p_{I^{\prime}\kappa_{I^{\prime}}}\}\), there are a sequence of renormalized Chern-minimal immersions \(\tilde{f}_{n,k,I^{\prime}i_{\ell}}\) defined on domains \(\Omega_{k}\subset S^{2}\) exhausting \(S^{2}\setminus\{\boldsymbol{s}\}\), a Chern-minimal map \(f_{p_{I^{\prime}i_{\ell}}}:S^{2}\longrightarrow M\), and a finite set of bubble points \(\{p_{I^{\prime}i_{\ell}}\,1,\ldots,p_{I^{\prime}i_{\ell}}\,\kappa_{I^{\prime}i_{\ell}}\}\subset S^{2}\) with concentrated energies \(m_{I^{\prime}i_{\ell}}\,1,\ldots,m_{I^{\prime}i_{\ell}}\,\kappa_{I^{\prime}i_{\ell}}>\epsilon_{1}\) so that \(\tilde{f}_{n,k,I^{\prime}i_{\ell}}\) converges to \(f_{p_{I^{\prime}i_{\ell}}}\) in \(C^{\infty}\)-topology on \(S^{2}\setminus\{p_{I^{\prime}i_{\ell}}\,1,\ldots,p_{I^{\prime}i_{\ell}}\,\kappa_{I^{\prime}i_{\ell}},\boldsymbol{s}\}\) and the energy convergence
\[m_{I^{\prime}i_{\ell}}=\lim_{k\to\infty}E(\tilde{f}_{n,k,I^{\prime}i_{\ell}}) =E(f_{p_{I^{\prime}i_{\ell}}})+\sum_{i_{\ell+1}=1}^{\kappa_{I^{\prime}i_{\ell} }}m_{I^{\prime}i_{\ell}i_{\ell+1}}, \tag{5.33}\]
where \(I^{\prime}=i_{1}\cdots i_{\ell-1}\) is a multi-index in \(\mathbf{N}^{\ell-1}\). We adopt the convention that \(f_{p_{0}}\) stands for the Chern-minimal map \(f_{0}\) from \(\Sigma\) into \(M\). Notice that (5.13) and (5.30) imply that \(E(f_{p_{I}})\geq C_{0}\), so this procedure terminates in finitely many steps, at most \(\mathcal{A}_{0}/C_{0}\).
**Theorem 5.4**.: _Let \(\{f_{n}\}\) be a sequence of conformal Chern-minimal immersions from Riemann surfaces \((\Sigma,\,\mathfrak{j})\) into a compact Hermitian surface \((M,J,h)\) with areas uniformly bounded by \(\mathcal{A}_{0}\). Then there are a subsequence \(\{f_{n,k}\}\), a Chern-minimal immersion \(f_{0}:\Sigma\longrightarrow M\), a finite set of renormalized Chern-minimal sequences \(\{\tilde{f}_{n,k,I}\}\) and a finite set of Chern-minimal two-spheres \(f_{p_{I}}:S^{2}\longrightarrow M\) so that_
\((1)\) _The sequences \(\{f_{n,k}\}\), \(\{\tilde{f}_{n,k,I}\}\) converge to \(f_{0}\), \(f_{p_{I}}\) in \(C^{\infty}\)-topology on \(\Sigma\setminus\{p_{1},\ldots,p_{\kappa}\}\), \(S^{2}\setminus\{p_{I1},\ldots,p_{I\kappa_{I}},\boldsymbol{s}\}\), respectively._
\((2)\) _There is no energy loss. That is_
\[\lim_{k\to\infty}E(f_{n,k})=E(f_{0})+\sum_{I}E(f_{p_{I}}). \tag{5.34}\]
\((3)\) _There is no distance bubbling. Namely, for each bubble point \(p_{I}\), we have \(f_{p_{I}}(\boldsymbol{s})=f_{p_{I^{\prime}}}(p_{I})\) with indices \(I^{\prime}=i_{1}\cdots i_{\ell-1}\) and \(I=i_{1}\cdots i_{\ell-1}i_{\ell}\)._
_Proof._ The statement of no energy loss follows from (5.31)-(5.33), and the no distance bubbling follows from (5.18), (5.19) and Proposition 5.3.
\(\Box\).
The conclusions of Theorem 5.4 imply that the images of \(\{f_{n,k}\}\) converge pointwise to the connected image of \(\{f_{0},f_{p_{I}}\}\). As in T.H. Parker's proof for harmonic maps (see Corollary 2.3 in [10]), no distance bubbling implies that the bubble tree limit preserves the homotopy class.
**Corollary 5.5**.: _Let \(\{f_{n}\}\) be a sequence of conformal Chern-minimal immersions from Riemann surfaces \((\Sigma,\,\mathfrak{j})\) into a compact Hermitian surface \((M,J,h)\) with areas uniformly bounded by \(\mathcal{A}_{0}\). If each \(f_{n}\) represents the same homotopy class \(\alpha\) in the set \([\Sigma,M]\) of free homotopy classes, then_
\[\alpha=[f_{0}]+\sum_{I}[f_{p_{I}}].\]
**Acknowledgments**. This project is supported by the NSFC (No.11871445), the Stable Support for Youth Team in Basic Research Field, CAS(YSBR-001) and the Fundamental Research Funds for the Central Universities.
|
2309.10016 | Evaluation of GPT-3 for Anti-Cancer Drug Sensitivity Prediction | In this study, we investigated the potential of GPT-3 for the anti-cancer
drug sensitivity prediction task using structured pharmacogenomics data across
five tissue types and evaluated its performance with zero-shot prompting and
fine-tuning paradigms. The drug's smile representation and cell line's genomic
mutation features were predictive of the drug response. The results from this
study have the potential to pave the way for designing more efficient treatment
protocols in precision oncology. | Shaika Chowdhury, Sivaraman Rajaganapathy, Lichao Sun, James Cerhan, Nansu Zong | 2023-09-18T16:17:44Z | http://arxiv.org/abs/2309.10016v2 | # Evaluation of GPT-3 for Anti-Cancer Drug Sensitivity Prediction
###### Abstract
Cancer is a complex genetic disease that originates from the accumulation of gene mutations within a cell and is ranked as the second leading cause of death in the United States according to the American Cancer Society1. Given the tumor heterogeneity arising from the genetic variations among patients even with the same cancer type, substantial differences in the anti-cancer drug response can be expected, thereby highlighting the urgent need for targeted therapies. Owing to the high cost and time associated with developing and validating anti-cancer drugs in clinical trials, which is further exacerbated by the 96% failure rate, the development of preclinical computational models that can accurately predict whether a cell line is sensitive or resistant to a particular drug is imperative. The availability of large-scale pharmacogenomics datasets collected via high-throughput screening technologies offers feasible resources to develop robust drug response models and identify the important biomarkers predictive of drug sensitivity.
Large language models (LLMs), such as the Generative Pre-trained Transformer (GPT-3) from OpenAI, are "task-agnostic models" pre-trained on large textual corpora crawled from the Web that have exhibited unprecedented capabilities on a broad array of NLP tasks. Recent studies have noted the potential of GPT-3 in the biomedical domain [2, 3]; however, these studies focus on processing NLP datasets that include unstructured text, and their applicability to biomedical tasks with structured data (e.g., pharmacogenomics data) remains unexplored. To this end, this work aims to investigate GPT-3's potential for anti-cancer drug sensitivity prediction on the Genomics of Drug Sensitivity in Cancer (GDSC)[4] database containing tabular pharmacogenomic information. The main contributions of this work include: (1) task-specific prompt engineering of the structured data, (2) evaluating and comparing the performance of GPT-3 for drug sensitivity prediction in the zero-shot and fine-tuning settings, (3) analyzing the effect of simplified molecular input line entry specification (SMILES) sequences of drugs and genomic mutation features of cancer cell lines on the model's generalization and (4) we release a web app for using the GPT-3 variant fine-tuned on the GDSC data for drug sensitivity classification at [https://huggingface.co/spaces/ShaikaChy/SensitiveCancerGPT](https://huggingface.co/spaces/ShaikaChy/SensitiveCancerGPT).
**Methods**
_Dataset:_ We utilized the drug-cancer cell line pairs and their corresponding drug response data (i.e., the half maximal inhibitory concentration (IC50)) from the new version of the GDSC database (GDSC2) across 5 tissue types - Lung adenocarcinoma (LUAD), Breast invasive carcinoma (BRCA), Colon and rectum adenocarcinoma (COREAD), Thyroid carcinoma (THCA) and Brain Lower Grade Glioma (LGG) - which in total cover 288 unique drugs and 186 unique cell lines. We created a dataset per tissue type, which resulted in 17725, 22247, 38487, 1212 and 2607 drug-cell line pairs for the LUAD, BRCA, COREAD, THCA and LGG cohorts respectively; each cohort was trained and evaluated using GPT-3 separately with an 80%-20% stratified split for the training and test sets. In addition, to inspect the effect of integrating additional context with the input in the form of the drug's chemical structure (SMILES) and gene mutation information on the model's performance, we create ablated datasets with respect to different input combinations for LUAD as the illustrative tissue type. The following are the ablated input combinations and their data sizes: drug + cell line + smile (12003), drug + cell line + mutation (4900), drug + cell line + smile + mutation (3500).
_Task Overview:_ We formulate drug sensitivity prediction as a binary classification problem: predict whether a drug-cell line pair is sensitive or resistant. To convert the IC50 drug response values to binary labels, we adopt the strategy employed in Chang et al. [5] and set a fixed threshold \(\theta=-2\): a pair with \(\ln(\mathrm{IC50})<\theta\) is considered sensitive, and resistant otherwise.
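For concreteness, a minimal sketch of this thresholding in Python follows; the dataframe column names are placeholders, while the threshold itself is from the paper.

```python
import numpy as np
import pandas as pd

THETA = -2.0  # fixed threshold on ln(IC50), following Chang et al. [5]

def binarize_response(ln_ic50: pd.Series, theta: float = THETA) -> pd.Series:
    """Map continuous ln(IC50) drug-response values to binary labels."""
    labels = np.where(ln_ic50 < theta, "sensitive", "resistant")
    return pd.Series(labels, index=ln_ic50.index)

# Example (hypothetical column name): gdsc["label"] = binarize_response(gdsc["ln_ic50"])
```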
_Zero-Shot Prompting:_ Considering the test set as a M x N table of M drug-cell line pairs as the rows and N feature columns (i.e., "drug name", "drug target", "drug smile", "gene mutation", "drug response"), we first convert the structured cell values in each row to a natural language text \(T\) using the corresponding column names (e.g., "The drug name is pci-34051. The drug target is hdac1. The drug smile is COC1=CC=C(C=C1)CN2C=CC3=C2C=C(C=C3)C(=O)NO. The gene mutation is crebbp. Drug response:"). Note that the last column is left blank for the model to predict. We then prepare a task-specific instruction \(I\) "Decide in a single word if the drug's response to the target is sensitive or resistant." and concatenate it with \(T\) to get the final
prompt \(P\). The prompt \(P\) is fed directly into the GPT-3 Ada model via the OpenAI Completions API to generate the response \(R\) corresponding to the model's drug response prediction.
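A minimal sketch of this zero-shot pipeline is given below, using the legacy (pre-1.0) OpenAI Python client that exposed the Completions API for Ada; the decoding settings, the instruction-then-text concatenation order, and the response parsing are our assumptions rather than details reported above.

```python
import openai  # legacy (pre-1.0) client exposing the Completions API

INSTRUCTION = ("Decide in a single word if the drug's response to the "
               "target is sensitive or resistant.")

def row_to_text(row: dict) -> str:
    # Verbalize one structured row using its column names, per the paper.
    return (f"The drug name is {row['drug name']}. "
            f"The drug target is {row['drug target']}. "
            f"The drug smile is {row['drug smile']}. "
            f"The gene mutation is {row['gene mutation']}. "
            "Drug response:")

def zero_shot_predict(row: dict) -> str:
    prompt = INSTRUCTION + " " + row_to_text(row)  # concatenation order assumed
    resp = openai.Completion.create(model="ada", prompt=prompt,
                                    max_tokens=2, temperature=0)
    return resp["choices"][0]["text"].strip().lower()
```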
_Fine-Tuning:_ We prepare the training and test data in the form of prompt-completion pairs. The prompt is similar to that used in the zero-shot setting but more concise, as we prepend the column names (e.g., 'drug name:', 'drug target:', 'gene mutation:') before the respective cell values and concatenate them together using a new line as the delimiter (e.g., drug: pci-34051\ndrug target: hdac1\ngene mutation: crebbp). The completion is the ground truth sensitive/resistant label. We fine-tune the GPT-3 Ada model on the training set for 4 epochs and evaluate performance on the test set.
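A sketch of this data preparation, assuming each row is available as a Python dict with a ground-truth label field; the leading space in the completion and the CLI invocation follow common conventions of the legacy OpenAI fine-tuning workflow and are not details stated in the paper.

```python
import json

COLUMNS = ("drug name", "drug target", "gene mutation")

def row_to_example(row: dict) -> dict:
    # 'column: value' lines joined by newlines, as described above.
    prompt = "\n".join(f"{col}: {row[col]}" for col in COLUMNS)
    return {"prompt": prompt, "completion": " " + row["label"]}

def write_jsonl(rows, path: str = "gdsc_finetune.jsonl") -> None:
    with open(path, "w") as f:
        for row in rows:
            f.write(json.dumps(row_to_example(row)) + "\n")

# Fine-tuning is then launched with the legacy CLI, e.g.:
#   openai api fine_tunes.create -t gdsc_finetune.jsonl -m ada
```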
#### Results
We first compare the performance of GPT-3 in the zero-shot (Figure 1 subplot (i)) and fine-tuning (Figure 1 subplot (ii)) settings evaluated on the five tissue type datasets. We observe that although zero-shot prompting outperforms the fine-tuning counterparts across all the tissue types in F1 generally, the per-class performance for zero-shot is heavily skewed toward the 'sensitive' class, as the F1 scores for the 'resistant' class (F1-Resistant) are comparatively very low. We then analyze the results for fine-tuning with different feature combinations from the LUAD dataset, as reported in Figure 1 subplot (iii). The results reveal that gene mutation features alone and also in combination with the drug's SMILES representations are more informative of drug response, with 24% and 29% performance gains in F1 respectively. Subsequently, the fine-tuning performance on the other tissue cohorts with the best performing feature combination (i.e., Drug + Cell line + Smile + Mutation) is summarized in Figure 1 subplot (iv).
#### Discussion
This study yields encouraging results for anti-cancer drug sensitivity prediction by employing a generative large language model on structured pharmacogenomics data and offers a new direction in AI-based precision oncology. Comparative analysis was performed to demonstrate GPT-3's drug response generalizability in the zero-shot vs fine-tuning settings, where the fine-tuning performance was further enhanced with the use of gene mutation and the drug's SMILES features. We believe the per-class performance with zero-shot prompting could potentially be improved through the pre-training of GPT-3 on biomedical corpora.
#### Acknowledgement
This study is supported by the National Institute of Health (NIH) NIGMS (R00GM135488).
|
2309.16467 | Compositional Program Generation for Few-Shot Systematic Generalization | Compositional generalization is a key ability of humans that enables us to
learn new concepts from only a handful examples. Neural machine learning
models, including the now ubiquitous Transformers, struggle to generalize in
this way, and typically require thousands of examples of a concept during
training in order to generalize meaningfully. This difference in ability
between humans and artificial neural architectures, motivates this study on a
neuro-symbolic architecture called the Compositional Program Generator (CPG).
CPG has three key features: \textit{modularity}, \textit{composition}, and
\textit{abstraction}, in the form of grammar rules, that enable it to
generalize both systematically to new concepts in a few-shot manner, as well as
productively by length on various sequence-to-sequence language tasks. For each
input, CPG uses a grammar of the input language and a parser to generate a
parse in which each grammar rule is assigned its own unique semantic module, a
probabilistic copy or substitution program. Instances with the same parse are
always processed with the same composed modules, while those with different
parses may be processed with different modules. CPG learns parameters for the
modules and is able to learn the semantics for new rules and types
incrementally, without forgetting or retraining on rules it's already seen. It
achieves perfect generalization on both the SCAN and COGS benchmarks using just
14 examples for SCAN and 22 examples for COGS -- state-of-the-art accuracy with
a 1000x improvement in sample efficiency. | Tim Klinger, Luke Liu, Soham Dan, Maxwell Crouse, Parikshit Ram, Alexander Gray | 2023-09-28T14:33:20Z | http://arxiv.org/abs/2309.16467v2 | # Compositional Program Generation for Systematic Generalization
###### Abstract
Compositional generalization is a key ability of humans that enables us to learn new concepts from only a handful of examples. Machine learning models, including the now ubiquitous transformers, struggle to generalize in this way, and typically require thousands of examples of a concept during training in order to generalize meaningfully. This difference in ability between humans and artificial neural architectures motivates this study on a neuro-symbolic architecture called the Compositional Program Generator (CPG). CPG has three key features: _modularity_, _type abstraction_, and recursive _composition_, that enable it to generalize both systematically to new concepts in a few-shot manner, as well as productively by length on various sequence-to-sequence language tasks. For each input, CPG uses a grammar of the input domain and a parser to generate a type hierarchy in which each grammar rule is assigned its own unique semantic module, a probabilistic copy or substitution program. Instances with the same hierarchy are processed with the same composed program, while those with different hierarchies may be processed with different programs. CPG learns parameters for the semantic modules and is able to learn the semantics for new types incrementally. Given a context-free grammar of the input language and a dictionary mapping each word in the source language to its interpretation in the output language, CPG can achieve perfect generalization on the SCAN and COGS benchmarks, in both standard and extreme few-shot settings.
## Introduction
One of the long-standing issues with general-purpose neural architectures like transformers [23] is that they struggle to learn systematic behavior [14]. For example, a model that has been trained to follow instructions like "_walk_ left twice", "_run_ left twice", and "_turn_ left twice" may fail on the similar "_jump_ left twice" -- even if the system can correctly follow the instruction _jump_ by itself. This is often characterized as a failure of compositionality, since the model seems unable to compose its knowledge of how to "jump" with its knowledge of how to "... left twice", assuming that the true function to be learned is compositional in this way. Similarly, a model may fail to generalize to a recursive solution, working for some length inputs but not others.
There have been many different approaches to this problem, some of which we discuss further in the related work. They vary in the kind and amount of additional task-specific information they require (if any). Here we describe one such approach, a neuro-symbolic architecture, the Compositional Program Generator (CPG), which compositionally generalizes on sequence-to-sequence language tasks like translation and semantic parsing. It requires a context-free grammar **of the input language only** and, for some experiments, a dictionary mapping each input word to its interpretation in the output language.
CPG has three key features which together support compositional generalization and incremental learning: _modularity_, _type abstraction_, and _recursive composition_. Specifically, it uses context-free grammar rules to generate a hierarchical, abstract parse of the input, and generates rule-specific probability distribution parameters for probabilistic copy or substitution programs. These functions are composed in a structure that mirrors the parse structure of the input sentence as shown in Figure 1. Crucially, the generated distributions such as \(P_{\theta}(\alpha)\) are conditioned only on the abstract grammar rules, not the input sequence itself. This supports generalization to new sentences which use the same grammar rule. In other words, CPG enforces the invariant that expressions with the same abstract parse will be handled by the same program of composed modules, but allows different parses to have different programs.
The contributions of this paper are:
* We propose a novel, neuro-symbolic architecture (CPG) for sequence-to-sequence language tasks that is modular, type abstract, and recursive.
* We demonstrate a simple, intuitive curricular training algorithm that allows CPG to incrementally learn new types.
* Our evaluation shows that CPG perfectly generalizes to two popular Compositionality benchmarks, SCAN and COGS, in the standard, few-shot, and even extreme few-shot settings.
* Our approach is interpretable.
## Approach
One strategy for achieving compositional generalization is to learn a _compositional function_. These are often described
informally as _meaning_ functions where _the meaning of the whole is a function of the meaning of the parts_. Usually attributed to Frege, this idea appears in some form as early as the sixth century BCE [20].
Our formulation of a compositional function follows [20]: \(\mu(\alpha(u_{1},u_{2},\ldots,u_{n}))=r(\alpha)(\mu(u_{1}),\mu(u_{2}),\ldots,\mu( u_{n}))\). In words, the function \(\mu\) (_meaning_) is defined recursively over \(u_{1},u_{2},\ldots,u_{n}\) (_parts_) of the input expression \(\alpha(u_{1},u_{2},\ldots,u_{n})\) (_whole_) and composed by the function \(r(\alpha)\), which depends on a context-free grammar rule \(\alpha\). Every expression in the language is associated with an abstract type hierarchy corresponding to its parse (it may have multiple parses if the grammar is ambiguous, in which case one is selected by the parser). In our implementation, rules are identified with tuples of their input and output type indices (or token indices for primitive rules), so no information about the constituents of a class is available from its representation. This is an important feature of the model which prevents over-fitting the choice of composition function to the specifics of the class to which it applies.
Compositional functions are a useful hypothesis class for learning models which compositionally generalize. First, they are recursively defined, and so support productivity. Second, they are systematic by construction: two expressions parsed with the same rule \(\alpha\) are consistently mapped to the same semantic module \(r(\alpha)\), while expressions with different rules may use different modules. This is in contrast to a monolithic model, such as a transformer [21], which processes all inputs with the same function (\(r(\alpha)=g\) for a constant function \(g\)). Modularity at the level of abstract grammar rules allows changes to the underlying grammar to be localized, so that learning is efficient, as we discuss in the Experiments section.
## Model
The high-level CPG architecture is shown in Figure 2. The model uses a parser for a context-free grammar to produce a parse, which is applied by \(CPG_{\theta}\) to a dictionary translation of the input to produce the output.
We learn the compositional function \(CPG_{\theta}\) by _compositional program generation_, by which we mean both that the process of generating a semantic function from an input is composed recursively, bottom-up over the structure of the parse, and that the generated program is itself compositional with respect to that structure in the relational sense of [20], obeying the requirement that the meaning of the whole be a function of the meaning of its parts.
The CPG recursive inference procedure shown in Algorithm 1 takes three arguments. The first is a dictionary function \(D\), which maps input vocabulary words to their interpretations in the output language. The second is a map \(R\) from types to the rule in the parse that produced that type. The third is the root type \(\tau\) of the parse for which we are computing the interpretation.
For the base case it checks if \(\tau\) is primitive using the function is_primitive and, if so, returns the dictionary translation. Primitive rules apply directly to the input, like \(Emma\to t_{1}\) and \(ate\to t_{2}\), etc. in Figure 3. For the recursive case the inference procedure recursively computes the meaning of the root type's child types supplied by the function _child_types_. It then computes the probability distribution parameter \(P_{\theta}(r)\) for the rule, where \(r\) is the root rule (\(\delta\) in Figure 3), using a Gumbel softmax function \(\sigma_{G}\) applied to a feed forward network which is parameterized by \(\theta\) and applied to a vector of 1's. The resultant distribution is supplied as a parameter to either a substitution or copy module (denoted MODULE), which is then applied to compose the child results.
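A schematic Python rendering of this procedure (not the authors' code) is given below; is_primitive, child_types, and MODULE mirror names used in the text, while token_of, the temperature handling, and the PyTorch details are our assumptions.

```python
import torch
import torch.nn.functional as F

def cpg_infer(D, R, tau, nets, temperature=1.0):
    """Schematic rendering of Algorithm 1.

    D:    dictionary from input tokens to output-language interpretations
    R:    map from types to the grammar rule that produced them
    tau:  root type of the (sub)parse being interpreted
    nets: per-rule feed-forward networks producing distribution logits
    """
    rule = R[tau]
    if is_primitive(tau):                  # base case: leaf of the parse
        return D[token_of(rule)]           # dictionary translation
    children = [cpg_infer(D, R, c, nets, temperature)
                for c in child_types(tau)]
    logits = nets[rule](torch.ones(1))     # conditioned on the rule only
    dist = F.gumbel_softmax(logits, tau=temperature, hard=True)
    return MODULE[rule](dist, children)    # copy or substitution program
```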
The dictionary maps the input tokens individually to their output language interpretation. For SCAN, each token is mapped to a corresponding command as shown in Figure 4. Some input tokens have no interpretation in the output language, like twice, which is mapped to an empty sequence denoted \(\phi\). For COGS, where the output is a logical form, variables are required and the dictionary maps each input token to a _pair_ consisting of an _expression_ and an _object list_ as shown in Figure 3.

Figure 1: An example of compositional program generation on SCAN.

Figure 2: CPG architecture
Expressions may contain _slots_ denoted \(y\) (e.g. \(y.agent(y,y)\)) and _Objects_ may be existential or universal variables, denoted by an integer index, or a constant like \(Emma\). The names of the predicates are themselves slots which are undetermined initially. The expressions for proper nouns are empty (\(\phi\)) since everything needed to interpret them is in the object list. For verbs like _ate_ the expression includes predicates describing the subject (or agent), the object (or theme) and others, depending on type. The COGS dictionary is shown in Figure 5. The SCAN and COGS grammars are in the Appendix for reference.
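For concreteness, fragments of such dictionaries might look as follows in Python; the entries are illustrative renderings consistent with the descriptions above, not verbatim rows of Figures 4 and 5.

```python
# SCAN: token -> sequence of output commands
SCAN_DICT = {
    "jump":  ["I_JUMP"],
    "left":  ["I_TURN_LEFT"],
    "twice": [],                      # phi: no output-language interpretation
}

# COGS: token -> (expression with 'y' slots, object list); schematic only
COGS_DICT = {
    "Emma": ("", ["Emma"]),           # proper noun: empty expression
    "ate":  ("y.agent(y, y) AND y.theme(y, y)",  # predicate names start as slots
             [1, "eat"]),             # event variable and normalized verb
}
```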
We use two general kinds of modules shown in Algorithm 2 and Algorithm 3: (1) _copy_ programs for SCAN, which assign to each output element an element of the input sequence (not all input elements must be used and elements can be used multiple times) and (2) _substitution_ programs for COGS which substitute objects into slots, sampling them according to a the parameter distribution \(P_{\theta}\). Both algorithms use a function _concat_ which takes a list of arguments and returns their concatenation. Substitution relies on the function _find_slots_ which returns the indices of the \(y\) variables in the supplied expression (0-indexed). All distributions \(P_{\theta}(\alpha)\) for SCAN rules \(\alpha\) are implemented as a 1-layer feed forward network followed by a Gumbel softmax estimator for sampling from a categorical distribution [10][11]. For COGS rules we use 2-layer feed forward networks. There are \(\sim 242k\) parameters in total in the model.
To make the substitution and copy operations differentiable we do not use indexing to produce the output but instead compute a binary selector mask and multiply. Because the symbolic operations required to substitute and copy are complicated to formulate in a tensor style, we use loops and do not batch the input, so one iteration corresponds to one training example.
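A simplified, index-based sketch of the substitution module follows (the differentiable version replaces the indexing with multiplication by binary selector masks, as just described); concat and find_slots mirror the text, while NOP and the return convention are our assumptions.

```python
NOP = -1  # placeholder index meaning "leave this slot unfilled"

def concat(parts):
    out = []
    for p in parts:
        out.extend(p)
    return out

def find_slots(expr):
    # indices of the 'y' slot symbols, in order of appearance (0-indexed)
    return [i for i, sym in enumerate(expr) if sym == "y"]

def substitute(template, expressions, objects):
    """Simplified substitution module (cf. Algorithm 3)."""
    expr = concat(expressions)
    objs = concat(objects)
    out = list(expr)
    for slot_idx, obj_idx in zip(find_slots(expr), template):
        if obj_idx != NOP:            # a nop leaves the slot open
            out[slot_idx] = objs[obj_idx]
    return out, objs
```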
### Example
In the inference shown in Figure 3, the input sentence Emma ate the bread is first represented as a sequence of tokens [Emma, ate, the, bread] and supplied to a parser (generated by Lark from the grammar) to produce the map from types to the parse rules that output them. In the figure the types of each expression are shown in the upper right corner of each node box (\(t_{1}\), \(t_{2}\), etc.). Here we map \(t_{1}\) to \(Emma\to t_{1}\), \(t_{2}\) to \(ate\to t_{2}\), \(t_{3}\) to \(the\to t_{3}\), \(t_{4}\) to \(bread\to t_{4}\), \(t_{5}\) to \(t_{2}\xrightarrow{\alpha}t_{5}\), and so on.
For the \(\alpha\) rule, we have sampled \([1,0,2,1,0,2]\sim P_{\theta}(\alpha)\). This _template_ is a map from object indices to \(y\) variable slots. It is applied to substitute the objects into the slots in order of their positions in the expression (no substitution is made for a nop). For the rule \(\gamma\) the template is \([5,3]\sim P_{\theta}(\gamma)\) which corresponds to the objects \([\texttt{nop},3]\) from the concatenated object lists: \([1,\texttt{eat},2,3,\texttt{bread}]\). These are substituted in order in the concatenated expression eat.agent(1, y) eat.theme(1, y) *bread(3) to yield the expression in the node above: eat.agent(1, y) eat.theme(1, 3) *bread(3) (spaces compressed).
We use a version of COGS which doesn't require multi-token variable names and we programmatically sort the sequence (a set of atoms) to match the canonical ordering required by COGS.
### Training
Our optimization objective is to find \(\theta\) which minimizes the cross entropy of the \(CFG_{\theta}\) function with the true value \(y\). Here \(D\) represents the training data. \(CFG_{\theta}\) is determined by the distributions \(P_{\theta}(\alpha)\) for each rule \(\alpha\).
\[\operatorname*{argmin}_{\theta}\operatorname*{\mathbf{E}}_{(x,y)\in D}[ \texttt{cross\_entropy}(\texttt{CFG}_{\theta}(x),y)]\]
Optimization is by stochastic gradient descent using the Adam optimizer with learning rate adjustment on plateaus. Training is curricular by stages (small batches of 2-3 consecutive length inputs). At the beginning of each stage the stage accuracy is set to 0.0 and the temperature for all Gumbel softmax distributions is set to 10.0. As training proceeds the temperature is annealed proportional to the stage accuracy. When the stage accuracy reaches 1.0, the parameters \(\theta\) for probability distributions \(P_{\theta}(\alpha)\) are frozen for all rules \(\alpha\) learned in that stage and the model is evaluated on the validation set before moving to the next stage. The set of types needed to parse a sentence grows as the length increases until a fixpoint is reached, so a curricular approach requires an incremental learning of types. For COGS there are 60 types in the grammar (see Appendix) and all types appear somewhere in a parse of a sentence in each dataset. Training was performed on a MacBook Pro laptop without the use of GPUs. Batch size is 1, word representation dim is 30, distribution representation size is 30.

Figure 4: The SCAN dictionary

Figure 3: An example of compositional program generation on COGS.
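A schematic rendering of one curriculum stage might look as follows; the annealing formula, the accuracy bookkeeping, and the freezing helper are our interpretations of the description above, not the authors' code.

```python
import torch
import torch.nn.functional as F

def train_stage(model, stage_data, optimizer, max_iters=100_000):
    """Run one curriculum stage until the stage accuracy reaches 1.0."""
    model.temperature = 10.0                      # reset Gumbel temperature
    stage_acc, iters = 0.0, 0
    while stage_acc < 1.0 and iters < max_iters:
        correct = 0
        for x, y in stage_data:                   # batch size is 1
            pred = model(x)
            loss = F.cross_entropy(pred, y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            correct += int(torch.equal(pred.argmax(-1), y))
            iters += 1
        stage_acc = correct / len(stage_data)
        # anneal the temperature downward as stage accuracy improves
        model.temperature = max(10.0 * (1.0 - stage_acc), 1e-3)
    freeze_stage_rules(model)  # hypothetical helper: freeze learned P_theta
```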
## Experiments
We evaluate CPG on the standard SCAN and COGS benchmarks in both standard (classic) and few-shot settings. In all experiments there is a rise to competency on each stage followed by a drop as the curriculum moves to the next stage and longer sentences with new types are introduced. Because templates used in earlier stages can be composed to handle some sentences in later stages, the accuracy does not go to 0 in the beginning of a new stage and in fact the drop often decreases in later stages as more semantic modules are learned. The process repeats until all stages are complete or the maximum number of iterations is reached. For the COGS experiments we evaluate on the generalization set which provides the most difficult challenge. **In all experiments the generalization accuracy evaluated at the end of training is 1.0**.
Training is performed in a curricular fashion in stages of consecutive length sentences from shortest to longest. Because more types are generally required as the length of the sentence grows until all types have been seen, CPG learns in a type-incremental way.
### Data sets
The SCAN benchmark (Lake and Baroni, 2018) requires the model to translate a simple English command like run left twice to a sequence of actions (I_TURN_LEFT I_RUN I_TURN_LEFT I_RUN) and tests longer lengths and generalization to held-out compositions like jump left twice.
There are 4 splits in SCAN: _simple_, _length_, _add-jump_, and _around-right_. We focus on the two most difficult: _length_ and _add-jump_ in our experiments. The _length_ split tests whether the model is able to generalize to longer input sequences than seen in training. The _add-jump_ split gives training on commands such as walk left twice, turn left twice, and run left twice as well as jump and tests systematic generalization to sentences like jump left twice.
The COGS benchmark task requires semantic parsing -- translation from English to a first-order logical form which captures the meaning of the sentence (Kim and Linzen, 2020). For example, the model must translate Emma ate the bread to the logical form * bread ( 3 ) ; eat. agent ( 1, Emma ) AND eat. theme ( 1, 3 ). Here, * bread ( 3 ) means that there is some specific bread object, denoted by the variable \(3\). The \(\star\) indicates that \(3\) is existentially quantified. The verb ate is normalized to the form eat and associated with an event of eating denoted by the variable 1. The agent is the subject of event \(1\), in this case Emma, while the bread variable \(3\) is the theme or object of the event \(1\).
For both datasets we also produce two few-shot versions of the training set, named _few-shot_ and _extreme few-shot_, which are much reduced in size from the originals. For COGS the original dataset, which we call _classic_ in the experiments, has \(\sim 24k\) examples, the few-shot dataset has 349 examples, and the extreme few-shot training set contains just 22 examples, which are shown in Figure 8.

Figure 5: The COGS dictionary
### Classic
Test set curves for 5 runs of the SCAN _length_ and _add-jump_ splits are shown in Figure 6 and Figure 7.
### Few-shot
In the _few-shot_ experiment we reduce the original dataset by first sorting it by length of the input, then looping in order, parsing each sentence, and keeping only those sentences whose parse has not been generated previously. The resulting dataset has distinct parses for each input sentence and is much smaller -- two orders of magnitude smaller in the case of COGS -- and therefore presents a much more difficult challenge than classic COGS. Training is similar to that for classic COGS and generalization accuracy is 1.0 for both SCAN and COGS. Additional training curves are shown in the Appendix.
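A sketch of this reduction is shown below; parser stands in for the Lark-generated parser and canonical for some hashable rendering of a parse tree (both placeholders). The extreme few-shot variant described next differs only in keying on previously unseen types rather than whole parses.

```python
def few_shot_reduce(dataset, parser):
    """Keep one example per distinct parse, scanning shortest-first."""
    seen, kept = set(), []
    for x, y in sorted(dataset, key=lambda ex: len(ex[0])):
        key = canonical(parser.parse(x))  # hashable parse-tree rendering
        if key not in seen:
            seen.add(key)
            kept.append((x, y))
    return kept
```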
### Extreme few-shot
For the _extreme few-shot_ experiments we start with the original datasets for SCAN and COGS and then reduce them as in the few-shot case but retaining only sentences whose parse trees include _types_ not seen in previously processed examples. This results in even further reduced datasets, just 22 instances instead of \(\sim 24k\) in the case of COGS as shown in Figure 8. The parses for these sentences collectively include every type in the grammar so it is possible for a model to learn, though it doesn't cover all parses involving those types as the few-shot dataset does, so presents an even more difficult challenge. Training proceeds very much the same as for the Classic case and generalization accuracy is 1.0 for both SCAN and COGS. Additional training curves are shown in the Appendix.
## Sensitivity to grammar structure
When we constructed (or modified) context-free grammars for the COGS experiments we occasionally saw runs that got stuck in local minima. Because the sampled mappings (templates) are interpretable and associated with a specific type in the grammar, we were able to quickly understand which small set of types were involved and could trace them to their grammar rules.
We found two common cases in our hand-constructed grammars: (1) places in the grammar where sequences of types were repeated and could be replaced with a single type to avoid relearning the same concept multiple times independently and (2) places where two closely linked types exist which can be merged to reduce the search space (type merging). The need to split types is also a possible issue but we did not need to do it.
One example occurred with the grammar snippet below where we found a single seed (6) of COGS Classic performed sub-optimally at about 0.8 accuracy.
np_det \(\leftarrow\)det common_noun
np_pp \(\leftarrow\)det common_noun pp_loc
pp_loc \(\leftarrow\)pp np

An examination of the types involved showed that the template sampled from \(P_{\theta}(\text{np\_pp})\), the distribution for the rule np_pp, was incorrect. To remediate it, we refactored the grammar rule np_pp \(\leftarrow\)det common_noun pp_loc to replace the det and common_noun elements with the np_det type. This results in the grammar shown below.
Figure 6: SCAN Classic length split accuracy: (top) test, (bot) train
Figure 7: SCAN Classic add-jump split accuracy. (top) test, (bot) train
np_det \(\leftarrow\)det common_noun
np_pp \(\leftarrow\) np_det pp_loc
pp_loc \(\leftarrow\)pp np
After applying both the type re-use and merge refactorings the performance becomes optimal. For human-supplied grammars the ability to adjust the grammar during training to control the learning, focusing on just the types that are problematic, can be a valuable debugging tool unavailable in a purely neural model. Automating this process may also provide an interesting strategy for learning the grammar in future work by searching through the space of grammar refactorings.
## Discussion
What our results show is that with a correct grammar (and dictionary for COGS), CPG achieves perfect generalization on standard benchmarks in extreme few-shot settings using two very simple and general modules for copying and substitution. When the grammar types are well-aligned with the semantic task the performance is very strong. When the grammar types are misaligned it can increase the search space for the distributions and impact performance, though the impact is localized.
Although CPG is directly applicable to problems for which the grammatical structure and dictionary are known, like programming language translation, our aim is to scale it to a broader set of real-world problems. For this, factoring the problem into the sub-tasks of learning the grammar/parser, dictionary, and CPG is a helpful decomposition which corresponds to cognitively coherent tasks: learn a concept hierarchy, learn the meanings of words, and learn to compose those meanings systematically to scale to novel combinations of words.
Human studies such as discussed in Carey (2000) provide evidence that people learn these different cognitive tasks with different strategies. For example, it is easier for people to learn to compose and reason with an existing hierarchy of concepts than it is for them to learn to refine their concepts when new information is provided. Dictionary learning is another example of this. As described in Carey (2000) in a discussion about how children learn that particular words express particular concepts, she points out that "if the concept _give_ includes a giver, a receiver, and a gift, then the child may exploit syntactic evidence that a given verb has three arguments to guess that it might express this concept...". Here the giver, receiver, and gift correspond to the COGS semantic concepts of agent, recipient, and theme respectively.
From a machine learning perspective there are several challenges in solving all three tasks jointly. First, the grammar/parser, dictionary, and CPG are intertwined and learning CPG with the parser or dictionary is non-stationary so training needs to be carefully orchestrated. This issue is common in modular architectures and has been discussed in detail here Rosenbaum et al. (2019). Second, the number of types is unknown and must be estimated ahead of time. Third, the objective of the parser needs to be aligned with the task so that it discovers a latent grammar for which the associated modules are easily learned. The ideal case is when the types of the grammar align directly with important semantic classes which are easy to learn.
Based on our experiments learning the dictionary for SCAN we believe it should be possible for the model to learn the abstract dictionary expressions in COGS (see Figure 5), but the model must avoid two pitfalls: local minima, such as might occur when generating a constant where a slot \(y\) is required, and over-generalization, where the model predicts a slot and substitutes values when a constant is required. The issue of non-stationarity appears here as well, since the model is learning jointly to generate slots and to substitute objects for them.

Figure 8: COGS extreme few-shot instances
## Related Work
The ability of neural networks to generalize compositionally has been the subject of a long-standing debate in cognitive science [12, 13]. Recently, there have been several benchmarks proposed that have renewed interest in the compositional generalization abilities of modern neural networks, demonstrating that such models still struggle with systematicity and productivity. Lake and Baroni (2018) introduce the SCAN dataset to test compositional generalization. They evaluate several sequence-to-sequence models, all of which generalize poorly. Follow-up work confirmed and strengthened these findings [10].
Similarly, Hupkes et al. (2020) propose the PCFG (probabilistic context-free grammar) task and evaluate LSTMs [15], convolutional nets, and transformers, yielding mixed results, with all performing poorly on productivity tests. Later, Csordás et al. (2021) found that transformer performance could be improved with more careful tuning, yielding increased performance on PCFG and COGS, though the performance was still poor on the CFQ (Compositional Freebase Questions) dataset of Keysers et al. (2019).
The COGS [16] benchmark is designed to test model performance on semantic parsing in natural language1. Again, they demonstrate that sequence-to-sequence encoder-decoder architectures fail to compositionally generalize, supporting evidence from several other recent works [11, 12].
Footnote 1: COGS is synthetic but uses everyday English words and has a relatively comprehensive recursive grammar with 60 distinct types.
The works outlined above provide substantial evidence that standard neural architectures fail to generalize compositionally in a consistent way. Recently, new architectures have been proposed that help mitigate this problem.
LANE [15] proposes a recursive and compositional approach that performs perfectly on all the splits of SCAN. However, the approach is non-trivial to adapt to other datasets like COGS. It does not make explicit use of abstraction or modules, which limits its ability to perform few-shot generalization or to benefit from incremental training. It is an end-to-end model which learns to parse and requires no human-supplied features.
LeAR [15] performs well on COGS, among other datasets, but requires semantic function definitions and does not learn in a modular way. However, it does learn to parse expressions and does not require an input grammar or dictionary.
Herzig and Berant (2021) propose an approach specifically for semantic parsing called SPAN_BASED_SP which uses a parser to recognize spans and generates programs for their semantics. Their approach is bottom-up, recursive, and predicts primitive type categories, but does not learn rule-specific modules and is not evaluated on compositional generalization benchmarks like SCAN and COGS2.
Footnote 2: They evaluate on a modified form of SCAN for semantic parsing, which differs from the original SCAN task.
Russin et al. (2019) take the position, as we do, that syntax and semantics should be separated. They present a method of alignment called _syntactic attention_ which can be used in an end-to-end model without additional user-specified features. They achieve a performance of \(0.91\pm 0.274\) on the add-jump (systematicity) split of SCAN but fail on the length (productivity) split, and do not evaluate on other datasets.
Perhaps closest in philosophy and spirit to our work is Nye et al. (2020), which also follows a program synthesis approach. In this work, they learn an interpretation grammar that represents the program needed to translate input examples to their outputs. Their approach differs from ours in several respects: (1) they follow a meta-learning paradigm, (2) they do not use abstract types or adopt a type-specific modular generation procedure, and (3) they do not learn incrementally with a curriculum.
Our approach rests on the functional definition of compositionality formalized and rigorously developed by Pagin and Westerståhl (2010).
The benefits of a modular approach to learning in natural language domains are well-documented (Andreas et al. 2017; Russin et al. 2019). As far as we are aware, we are the first to apply a modular architecture to the problem of compositional generalization, and the use of routing at the abstract type level is novel here as well.
## Conclusion
We have presented CPG, a novel neuro-symbolic architecture for sequence-to-sequence language problems, which has three key attributes that encourage systematic, productive and efficient learning: _type abstraction_ in the form of context-free grammar rules, _modularity_ in the form of rule-specific parameter generation, and _compositionality_ in the method of recursively composing an input-specific program to produce the prediction. Given an input language grammar and dictionary, our implementation of CPG is able to solve the difficult COGS benchmark when reduced to just 22 exemplar sentences. Training is curricular, efficient, and low-variance when the grammar is well-factored and aligned with the semantic task, so that all elements of a type can share a semantic module while different types can have different modules.
## Acknowledgements
We thank Najoung Kim and Tal Linzen for making the COGS grammar and dataset generator available, Joe Cappadona for his contributions to an earlier iteration of this project, and Ernest Davis for his support. |
2309.06112 | Characterizing Latent Perspectives of Media Houses Towards Public
Figures | Media houses reporting on public figures, often come with their own biases
stemming from their respective worldviews. A characterization of these
underlying patterns helps us in better understanding and interpreting news
stories. For this, we need diverse or subjective summarizations, which may not
be amenable for classifying into predefined class labels. This work proposes a
zero-shot approach for non-extractive or generative characterizations of person
entities from a corpus using GPT-2. We use well-articulated articles from
several well-known news media houses as a corpus to build a sound argument for
this approach. First, we fine-tune a GPT-2 pre-trained language model with a
corpus where specific person entities are characterized. Second, we further
fine-tune this with demonstrations of person entity characterizations, created
from a corpus of programmatically constructed characterizations. This twice
fine-tuned model is primed with manual prompts consisting of entity names that
were not previously encountered in the second fine-tuning, to generate a simple
sentence about the entity. The results were encouraging, when compared against
actual characterizations from the corpus. | Sharath Srivatsa, Srinath Srinivasa | 2023-09-12T10:27:39Z | http://arxiv.org/abs/2309.06112v1 | # Characterizing Latent Perspectives of Media Houses Towards Public Figures
###### Abstract
Media houses reporting on public figures often come with their own biases stemming from their respective worldviews. A characterization of these underlying patterns helps us in better understanding and interpreting news stories. For this, we need diverse or subjective summarizations, which may not be amenable for classifying into predefined class labels. This work proposes a zero-shot approach for non-extractive or generative characterizations of person entities from a corpus using GPT-2. We use well-articulated articles from several well-known news media houses as a corpus to build a sound argument for this approach. First, we fine-tune a GPT-2 pre-trained language model with a corpus where specific person entities are characterized. Second, we further fine-tune this with demonstrations of person entity characterizations, created from a corpus of programmatically constructed characterizations. This twice fine-tuned model is primed with manual prompts consisting of entity names that were not previously encountered in the second fine-tuning, to generate a simple sentence about the entity. The results were encouraging, when compared against actual characterizations from the corpus.
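A rough Hugging Face-based sketch of the workflow described above follows; the model checkpoint, prompt template, and decoding settings are all assumptions, and the two language-model fine-tuning stages themselves (first on the news corpus, then on the constructed characterization demonstrations) are elided.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
# ... stage-1 fine-tune on the news corpus, then stage-2 fine-tune on the
# programmatically constructed characterization demonstrations ...

def characterize(entity: str, max_new_tokens: int = 30) -> str:
    """Prime the twice fine-tuned model with an entity-name prompt."""
    prompt = f"{entity} is"                  # hypothetical manual prompt
    inputs = tokenizer(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=max_new_tokens,
                         do_sample=True, top_p=0.9,
                         pad_token_id=tokenizer.eos_token_id)
    return tokenizer.decode(out[0], skip_special_tokens=True)
```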
## 1. Introduction

The generated texts were inspected for characterizations of entities. Since the test entity sentences were not used in the demonstrations, we attribute the generated text to zero-shot generation.
## 2. Related Work
In the recent past, text classification with language models and pattern training has shown promising results on key datasets. Schick and Schütze (2018) show that language models can perform text classification when inputs are converted into _cloze question_ patterns for training. Substantial soft-labeled inputs are generated with semi-supervised and ensemble language models to train the final classifier. Good results are observed with as few as zero initial labeled examples.
GPT-3, with hundreds of billions of parameters, shows remarkable few-shot performance on SuperGLUE. Schick and Schütze (2018) show that equivalent few-shot performance can be achieved by training the much smaller language model ALBERT with _cloze question_ patterns. They also present a strategy for handling the SuperGLUE tasks that require _cloze question_ patterns with multiple masks.
In the text classification task, even though training with patterns improves results, mapping predicted tokens to predefined labels is challenging and requires domain expertise. Schick et al. (2018) show an approach to automatically map the predicted tokens to labels. Training language models with patterns has shown adequate performance on text classification. In this work, we propose a similar approach with manual prompt patterns to generate non-extractive information about person entities from a corpus.
Choosing prompts and the words equivalent to classification labels, whether manually or algorithmically, is challenging since there are significant variations. Hambardzumyan et al. (2017) show an approach to finding these as embeddings in the continuous space of word embeddings. Trainable embeddings are added around the input to make the masked language model predict the masked token, and the approach is evaluated on the natural language understanding tasks of the GLUE benchmark.
With natural language prompts and a few demonstrations, GPT-3 achieves impressive performance on language understanding tasks. However, since GPT-3 has 175B parameters, it is challenging to use in real-world applications. Gao et al. (2018) show prompt-based fine-tuning with demonstrations on the moderately sized language models BERT and RoBERTa. In this work, we fine-tune with person-entity characterizing sentences as demonstrations.
Mining commonsense knowledge is an important natural language processing task, and language models are known to contain such knowledge. Davison et al. (2018) show an approach to mine commonsense knowledge from pre-trained language models. A uni-directional model generates sentences with a specific template for each relation type in information triples, and each generated sentence is validated by masking and predicting its tokens using a bi-directional language model.
Apart from linguistic knowledge, language models may also contain relational knowledge from their training data. Petroni et al. (2018) analyze relational knowledge in state-of-the-art pre-trained language models with the LAMA (Language Model Analysis) probe, a corpus of facts in subject-relation-object triples or question-answer pairs derived from diverse factual and commonsense knowledge sources. Kassner and Schütze (2018) show that pre-trained language models do not learn factual knowledge as well as humans do, by probing for facts with Negated LAMA and Misprimed LAMA. Ideally, these probe variants should result in contradictions, but they do not, suggesting that factual knowledge extraction is based on pattern matching rather than inference. Jiang et al. (2018) study factual knowledge in multilingual language models with manually created probes in 23 languages, similar to LAMA.
Kumar and Talukdar (2018) show that ordering the training examples appropriately significantly reduces the number of samples required for few-shot learning on sentiment classification, NLI, and fact retrieval tasks.
Nishida et al. (2018) show an approach where pre-trained BERT is first adapted to the target domain and then fine-tuned on a reading comprehension (RC) task in a source domain. Finally, this model performs RC tasks in the target domain. The key idea is that the model is trained for a task on one domain and used to perform the task on another domain.
Domain adaptation is crucial for solving any task related to a domain. Gururangan et al. (2017) show that even pre-trained language models with hundreds of millions of parameters are ineffective at encoding the nuances of a given textual domain. It is therefore necessary to specialize the model with a domain- or task-relevant corpus before solving any task in that domain.
The state of the art in using pre-trained language models for NLP tasks shows that domain adaptation and fine-tuning with demonstrations of patterns are the most plausible approaches, to a reasonable extent. In this work, we propose an approach to characterize entities along similar lines.
## 3. GPT-2 Domain Adaptation
A GPT-2 Pre-trained Language Model (PLM) with 345M parameters was fine-tuned following the steps from GitHub.1 The PLM was fine-tuned individually on four popular news media corpora. Due to limitations of the available compute instance, the 345M (medium) PLM was used, and this model proved sufficient to obtain convincing results. Domain adaptation, i.e., fine-tuning the PLM on domain corpora, is a prerequisite before task-specific training. The domain-adapted PLM was further fine-tuned with programmatically constructed demonstration sentences.
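As a concrete illustration of this domain-adaptation step, the sketch below fine-tunes `gpt2-medium` (the 345M model) on a plain-text corpus with the HuggingFace `transformers` library. The file name, hyperparameters, and use of the `Trainer` API are assumptions for illustration; the original work followed a separate GitHub recipe.

```python
# Minimal sketch of domain-adapting GPT-2 (345M = "gpt2-medium") on a news corpus.
# Assumes the scraped articles are concatenated in "media_house_articles.txt".
from transformers import (AutoTokenizer, AutoModelForCausalLM,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("gpt2-medium")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2-medium")

dataset = load_dataset("text", data_files={"train": "media_house_articles.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

# GPT-2 is a causal LM, so no masked-LM objective is applied by the collator.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(output_dir="ft1_checkpoint",
                         num_train_epochs=1,
                         per_device_train_batch_size=2)
Trainer(model=model, args=args, train_dataset=tokenized,
        data_collator=collator).train()
```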
### Textual Media Source
The GDELT Project2 records the world's broadcast, print, and web news from nearly every corner of every country in over 100 languages. From the GDELT database, URLs of textual news articles from four popular media houses, published between 2015 and 2021, were extracted, and the article texts were scraped for domain adaptation. Table 1 shows the details of each media house corpus.
Footnote 2: GDELT Project: [https://www.gdeltproject.org/](https://www.gdeltproject.org/)
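The following hedged sketch shows one way to pull article URLs for a single outlet from the GDELT DOC 2.0 API; the domain filter and date window are placeholders, and the paper's actual extraction from the GDELT database for 2015 to 2021 may have used a different interface.

```python
# Illustrative sketch of fetching article URLs for one media house from the
# GDELT DOC 2.0 full-text API. "example-mediahouse.com" is a placeholder.
import requests

API = "https://api.gdeltproject.org/api/v2/doc/doc"
params = {
    "query": "domain:example-mediahouse.com",
    "mode": "ArtList",
    "format": "json",
    "maxrecords": 250,
    "startdatetime": "20210101000000",
    "enddatetime": "20211231235959",
}
resp = requests.get(API, params=params, timeout=30)
urls = [a["url"] for a in resp.json().get("articles", [])]
print(f"Fetched {len(urls)} article URLs")
```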
## 4. Person Entity Characterization with Manual Prefix Prompts
_Cloze_ and _prefix_ prompts are two types of prompts used as inputs for a language model to solve NLP tasks. In cloze prompts, the token to be predicted is masked and the model fills it in. Prefix prompts (Friedman, 2016; Goyal et al., 2016), or the prompts used for priming, when given as input to a language model, condition it to generate a text sequence auto-regressively.
Priming in this work can be attributed to _"programming in natural language"_ as detailed by Reynolds and McDonell (Reynolds and McDonell, 2015). This work attempts to prompt a language model to generate characteristics of a person entity using prompts that are ubiquitous in spoken and written English. The idea is that when one wants to describe a person, one would, in most contexts, begin with _"John is described as..."_ or a semantically similar prefix. These prefixes and synonymous ones are very common in the corpora used to train language models, and priming with natural language phrases like _"John is described as..."_ constrains the continuation to something about _John_. The intuition is to prime the language model in a _"ubiquitous or natural language way."_ Since such demonstrations are not very frequent in the corpus, we construct a corpus of sentences of this type for fine-tuning. To test this hypothesis, the following steps were followed with each media house corpus, as depicted in **Figure 1**.
**Block 1**: Person Entity Mention Disambiguation in Articles
* Co-reference Replacement3
Footnote 3: Co-reference Replacement: [https://github.com/NeuroSYS-pl/coreference-resolution](https://github.com/NeuroSYS-pl/coreference-resolution)
* Replace short names with full name
* First fine-tuning: the GPT-2 PLM (345M) is fine-tuned on the disambiguated articles corpus from **Block 1**, and the result is named the **FT1 Checkpoint**
* Extract clauses and their parts from sentences about person entities in the **Block 1** disambiguated articles corpus, using _spacy-clausie_4
Footnote 4: spacy-clausie: [https://github.com/mmxgn/spacy-clausie](https://github.com/mmxgn/spacy-clausie)
* With the parts of the clauses (**Block 3**), convert the lemmatized verb of each clause to a gerund and construct a corpus of simple entity characterization demonstration sentences in the following pattern:
_"<Person_Entity_Name> is described as <gerund + grammatically valid combination of the parts of the clause>"_. From this corpus of sentences, the sentences of ten entities with high frequencies in different ranges were set aside as the _Test Corpus_ and the rest as the _Demonstrations (Training) Corpus_
* With the _Demonstrations Corpus_ (**Block 4**), the **FT1 Checkpoint** was fine-tuned, and the result was named the **FT2 Checkpoint**
* The **FT2 Checkpoint** was used to generate sentences about the entities in the _Test Corpus_ with the prompts defined in **Table 2**
* Sentences generated about entities in **Block 6** were tested for _non-extractive characterization_ against FT1 and FT2 corpus sentences using Semantic Textual Similarity5 and sentiment analysis (**Block 7**)
Footnote 5: STS: [https://www.sbert.net/docs/usage/semantic_textual_similarity.html](https://www.sbert.net/docs/usage/semantic_textual_similarity.html)
The first fine-tuning, producing the FT1 Checkpoint, is stopped _at a loss below 0.6_; the second fine-tuning, producing the FT2 Checkpoint, is stopped _at a loss below 0.1_.
The following subsections detail each of the above steps.
### Person Entity Mention Disambiguation
Co-reference resolution improves the accuracy of NLP tasks like machine translation, sentiment analysis, paraphrase detection, and summarization (Sukthanker et al., 2016). We disambiguated person entity mentions in the articles to ensure that every sentence about a person entity contains the full name of that entity.
Table 1. Scraped Media House Articles between 2015 and 2021

| Media house | No. of Articles | Size on Disk |
| --- | --- | --- |
| Media House A | 40,514 | 282M |
| Media House B | 53,024 | 364M |
| Media House C | 31,029 | 298.6M |
| Media House D | 27,044 | 171M |
The first pre-processing step replaced entity co-references with the actual entity name; on this output, partial name references were replaced with the full name, finally yielding a processed document with the full name of the entity in the maximum number of sentences in each news article.
NeuroSYS coreference-resolution9 proposes three intersection strategies, or ensemble methods, over the outputs of the AllenNLP and Huggingface coreference models. The strategies are: _strict_, where only clusters identical in both models are considered; _partial_, where spans identical in both model outputs are considered; and _fuzzy_, where exact and overlapping spans from both models are considered. In this work we leveraged the _fuzzy_ ensemble to process the raw articles.
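A minimal sketch of the _fuzzy_ intersection idea, assuming mention spans are token-offset pairs; this illustrates the strategy, not the NeuroSYS implementation.

```python
# Hedged sketch of a "fuzzy" ensemble over two coreference models' outputs:
# keep mention spans that either match exactly or overlap across models.
from typing import List, Tuple

Span = Tuple[int, int]  # (start, end) token offsets

def overlaps(a: Span, b: Span) -> bool:
    return a[0] < b[1] and b[0] < a[1]

def fuzzy_ensemble(model_a: List[Span], model_b: List[Span]) -> List[Span]:
    kept = []
    for span in model_a:
        if any(span == other or overlaps(span, other) for other in model_b):
            kept.append(span)
    return kept

# Example: the exact match (5, 7) and the overlapping span (10, 13) survive.
print(fuzzy_ensemble([(0, 2), (5, 7), (10, 13)], [(5, 7), (11, 14)]))
```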
The objective of this work is to generate single, concise sentences of person entity characterizations. To align with this objective, the sentences in each media house's articles were processed to contain unambiguous entity mentions: the co-reference-replaced texts were further processed to replace partial name references with the full name
Table 2. Four types of _prefix prompts_ used to generate sentences about entities

| Prefix prompt |
| --- |
| "<Person_Entity_Name> is described as being" |
| "<Person_Entity_Name> is described as having characteristics" |
| "<Person_Entity_Name> is described as performing" |
| "<Person_Entity_Name> is described as stating" |
Figure 1. Pipeline for processing a media house corpus, generating sentences about entities, and validating them as characterizations. _Block 1_ uses NeuroSYS. _Block 3_ uses ClauCy. Details of the FT2 (Demonstrations) Corpus are shown in Table 3. In _Block 6_, sentences about test entities are generated with the prompts listed in Table 2. _Block 7_ uses Semantic Textual Similarity (STS) to compare each generated sentence with corpus sentences. Examples of generated and semantically similar corpus sentences are shown in Table 6.
of the entity, so that every sentence has a fully qualified mention of the entity and information about the entity. To achieve this, we process one article at a time, mapping partial names (either first or last name) to the full name by comparing tokens. The intuition is that an entity is referred to by full name in the initial parts of an article, while later sentences use either the first or last name. The partial name should be either the first or last name of the entity in the previously used longer name.
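A toy sketch of this partial-name expansion logic follows; the helper name `expand_partial_names` and the regex-based replacement are illustrative assumptions, since the actual pipeline operates on resolved entity mentions.

```python
# Illustrative sketch: within one article, a name token seen earlier as part
# of a longer (full) name is rewritten to that full name.
import re

def expand_partial_names(sentences, full_names):
    """full_names: e.g. ["John Smith"]; maps "John" or "Smith" -> "John Smith"."""
    partial_to_full = {}
    for full in full_names:
        parts = full.split()
        partial_to_full[parts[0]] = full    # first name
        partial_to_full[parts[-1]] = full   # last name
    expanded = []
    for sent in sentences:
        for partial, full in partial_to_full.items():
            # Replace standalone partial mentions, leaving full names intact.
            if full not in sent:
                sent = re.sub(rf"\b{re.escape(partial)}\b", full, sent)
        expanded.append(sent)
    return expanded

article = ["John Smith addressed the rally.", "Smith later met the press."]
print(expand_partial_names(article, ["John Smith"]))
```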
This final corpus of articles with full entity disambiguation was used for the first fine-tuning, FT1. For all the media houses the loss plateaued around 0.6, and hence a checkpoint around this loss was used for the next fine-tuning, FT2.
### Characterization Sentences Corpus of Person Entities
To generate simple, concise demonstration sentences of entity characterizations, the FT1 Checkpoint was fine-tuned with a manual prompt prefixed to clauses about entities. Clauses contain the main information about entities. A corpus of simple sentences about anything said or done by, or any event related to, the entities was constructed using clauses and their parts extracted from each article using ClauCy (_spacy-clausie_). Clauses and their parts were extracted from each sentence in the articles. The parts of a clause are: _Type, Subject, Verb, Indirect_Object, Direct_Object, Complement, and Adverbials_. There are seven clause types formed by combinations of these parts: _SV, SVA, SVC, SVO, SVOA, SVOC, and SVOO_. Every clause has a subject and a verb; the other parts vary depending on the input sentence. Entities were mapped to the sentences they appear in, and mappings with more than 500 sentences were considered for the FT2 corpus. Table 3 shows the details of the FT2 sentence corpus for each media house.
FT2 sentences were constructed by suffixing the Subject with "is described as", converting the Verb into its gerund form, and grammatically joining the other parts of the clause to form a complete, readable sentence. The gerund, or present participle, is the adjectival form of the verb (like showing, saying, claiming, winning, etc.) and functions to attribute the other parts of the clause (Object, Complements, and Adverbials) to the Subject. Ten subjects (person entities) with the highest counts in different ranges were separated as the test corpus, and the rest of the entity sentences were used for the second fine-tuning. This was done to ensure testing with entity counts across broad ranges. The checkpoint from FT1 was further fine-tuned with the FT2 corpus. For all the media houses, the second fine-tuning plateaued around a loss of 0.1, and hence fine-tuning was stopped when the loss fell below 0.1.
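The sketch below illustrates how such a demonstration sentence could be assembled from clause parts; the naive `to_gerund` heuristic is an assumption, as the paper does not specify its morphology rules.

```python
# Hedged sketch of turning extracted clause parts into an FT2 demonstration
# sentence of the pattern "<Entity> is described as <gerund> ...".
def to_gerund(verb_lemma: str) -> str:
    # Naive English gerund rules; a production system would use a morphology
    # library for irregular spellings (e.g., doubled consonants).
    if verb_lemma.endswith("ie"):
        return verb_lemma[:-2] + "ying"     # die -> dying
    if verb_lemma.endswith("e") and verb_lemma != "be":
        return verb_lemma[:-1] + "ing"      # state -> stating
    return verb_lemma + "ing"               # show -> showing

def make_demonstration(subject: str, verb_lemma: str, *rest: str) -> str:
    parts = [subject, "is described as", to_gerund(verb_lemma)] + list(rest)
    return " ".join(p for p in parts if p) + "."

# Example from an SVO clause (Subject, Verb, Direct_Object):
print(make_demonstration("John Smith", "claim", "a landslide victory"))
# -> "John Smith is described as claiming a landslide victory."
```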
### Generative Entity Characterization
Manual prompts widely prevalent in spoken and written language for talking about a person were chosen to prime the language model, and sentences were generated with the FT2 Checkpoint. The second fine-tuning, FT2, used a corpus of sentences with the "is described as" prompt. The results generated with this prompt alone were not convincing, so we experimented with the semantically equivalent alternative prompts shown in Table 2. With these prompts, we observed entity-characterizing generated sentences. Ideally, all the test sentences should be generated; hence, for each entity, as many sentences were generated as that entity's count in the test corpus. Novel combinations of information in the corpus, or summarized opinions of test entities, were expected in the generated texts. The generated texts were compared for Semantic Textual Similarity (STS) with FT1 and FT2 corpus sentences using the Sentence Transformers library. Since language models are probabilistic and generate novel sentences, we chose a cosine similarity greater than or equal to 0.6 as a positive result.
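A minimal sketch of this priming step, assuming the FT2 checkpoint is saved in a HuggingFace-compatible directory (`ft2_checkpoint` is a placeholder) and sampling settings that the paper does not specify:

```python
# Prime the FT2 checkpoint with a prefix prompt and keep the first generated
# sentence, mirroring the setup above (generation length capped at 30 tokens).
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("ft2_checkpoint")
model = AutoModelForCausalLM.from_pretrained("ft2_checkpoint")

prompt = "John Smith is described as having characteristics"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=30, do_sample=True,
                        top_k=40, pad_token_id=tokenizer.eos_token_id)
text = tokenizer.decode(output[0], skip_special_tokens=True)
first_sentence = text.split(".")[0] + "."
print(first_sentence)
```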
To the best of our knowledge, there is no state-of-the-art corpus for entity characterization demonstrations, nor established evaluation criteria. For this purpose, we compiled the FT2 dataset and defined evaluation criteria with the confusion matrix shown in Table 4.
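The criteria in Table 4 reduce to a small decision rule, sketched below with the 0.6 threshold used in this work:

```python
# Classify a generated sentence by (a) whether its best semantic match in the
# FT1/FT2 corpus refers to the same entity as the prompt, and (b) whether the
# match's cosine similarity clears the threshold.
def classify(prompt_entity: str, matched_entity: str, cosine: float,
             threshold: float = 0.6) -> str:
    same_entity = prompt_entity == matched_entity
    if cosine >= threshold:
        return "TP" if same_entity else "FP"
    return "FN" if same_entity else "TN"

assert classify("Entity A", "Entity A", 0.72) == "TP"
assert classify("Entity A", "Entity B", 0.65) == "FP"
assert classify("Entity A", "Entity A", 0.41) == "FN"
```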
The following section details the results of entity characterizations generated with the prefix prompts in Table 2.
## 5. Results
With the FT2 checkpoint of each media house, sentences were generated for ten test entities, with four prompts shown
Table 3. FT2 sentence corpus details for each media house: count of each extracted clause type, total number of sentences, and unique person entities in each corpus

| Clause Type | Media House 1 | Media House 2 | Media House 3 | Media House 4 |
| --- | --- | --- | --- | --- |
| SV | 11,349 | 3,244 | 17,863 | 7,768 |
| SVA | 2,732 | 695 | 3,829 | 1,698 |
| SVC | 26,042 | 10,403 | 40,899 | 16,617 |
| SVO | 23,522 | 8,750 | 34,164 | 15,795 |
| SVOA | 1,223 | 488 | 1,915 | 937 |
| SVOC | 2,832 | 1,249 | 4,619 | 1,857 |
| SVOO | 597 | 246 | 738 | 370 |
| FT2 Dataset Sentences Count | 68,297 | 25,075 | 104,027 | 45,042 |
| Unique Person Entities Count | 117 | 69 | 140 | 83 |
Table 5. Metrics, based on the evaluation criteria in Table 4, for sentences generated by the FT2 checkpoint, evaluated against the FT2 and FT1 corpus sentences. Each paired column reports FT2 / FT1.

| Prompt | Distinct Generated Sentences | % Distinct Semantic Matches | Avg. Sentiment Score Difference of TPs | F1 Score | Precision | Recall |
| --- | --- | --- | --- | --- | --- | --- |
| **Media House 1** | | | | | | |
| _is described as having characteristics_ | 3010 | 41% / 28% | 0.157 / 0.108 | **0.864** / 0.54 | **0.888** / 0.504 | **0.842** / 0.981 |
| _is described as being_ | 7347 | 50% / 34% | 0.154 / 0.136 | **0.898** / 0.497 | **0.852** / 0.414 | **0.948** / 0.973 |
| _is described as performing_ | 4243 | 28% / 20% | 0.034 / 0.027 | 0.725 / 0.317 | 0.607 / 0.294 | 0.899 / 0.968 |
| _is described as stating_ | 8901 | 65% / 44% | 0.138 / 0.097 | 0.807 / 0.406 | 0.726 / 0.363 | 0.907 / 0.935 |
| **Media House 2** | | | | | | |
| _is described as having characteristics_ | 4407 | 27% / 17% | 0.062 / 0.054 | **0.910** / 0.55 | **0.892** / 0.622 | **0.929** / 0.492 |
| _is described as being_ | 4985 | 51% / 32% | 0.155 / 0.139 | **0.894** / 0.54 | **0.837** / 0.519 | **0.960** / 0.563 |
| _is described as stating_ | 5794 | 67% / 41% | 0.126 / 0.099 | 0.825 / 0.469 | 0.734 / 0.476 | 0.942 / 0.461 |
| _is described as performing_ | 2506 | 30% / 20% | 0.023 / 0.016 | 0.557 / 0.263 | 0.404 / 0.184 | 0.898 / 0.467 |
| **Media House 3** | | | | | | |
| _is described as having characteristics_ | 5418 | 22% / 20% | 0.102 / 0.079 | **0.953** / **0.743** | 0.945 / **0.682** | **0.960** / **0.816** |
| _is described as being_ | 9591 | 47% / 59% | 0.177 / 0.142 | **0.921** / 0.597 | **0.889** / 0.525 | **0.954** / 0.692 |
| _is described as performing_ | 6430 | 35% / 39% | 0.064 / 0.039 | 0.869 / 0.576 | 0.828 / 0.517 | 0.915 / 0.650 |
| _is described as stating_ | 11222 | 59% / 30% | 0.150 / 0.117 | 0.844 / 0.515 | 0.767 / 0.465 | 0.940 / 0.576 |
| **Media House 4** | | | | | | |
| _is described as having characteristics_ | 177 | 42% / 23% | 0.024 / 0.038 | 0.789 / **0.824** | 0.679 / **0.860** | 0.942 / **0.791** |
| _is described as performing_ | 4478 | 29% / 20% | 0.025 / 0.011 | 0.754 / 0.622 | 0.660 / 0.638 | 0.879 / 0.607 |
| _is described as being_ | 5375 | 48% / 32% | 0.156 / 0.110 | **0.903** / 0.574 | **0.874** / 0.548 | **0.934** / 0.601 |
| _is described as stating_ | 6420 | 60% / 39% | 0.139 / 0.090 | 0.837 / 0.464 | 0.789 / 0.476 | 0.892 / 0.452 |
Table 4. Person Entity Characterization Evaluation Criteria

| | Best-match cosine score >= 0.6 | Best-match cosine score < 0.6 |
| --- | --- | --- |
| Prompt entity == ground-truth entity | **True Positive (TP):** a novel and meaningful (non-extractive) characterization. The generated sentence has a highly semantically matching sentence in the FT1 or FT2 dataset, and the person entity in both sentence contexts is the same. | **False Negative (FN):** the generated sentence has only a weakly matching sentence in the FT1 or FT2 dataset, although the person entity in both sentence contexts is the same. |
| Prompt entity != ground-truth entity | **False Positive (FP):** the generated sentence has a highly semantically matching sentence in the FT1 or FT2 dataset, but the person entity in the two sentence contexts differs. | **True Negative (TN):** the generated sentence has only a weakly matching sentence in the FT1 or FT2 dataset, and the person entity in the two sentence contexts differs. |
Table 6. True Positive examples for the top metrics in Table 5: novel and meaningful (non-extractive) person entity characterizations. Ellipses mark text truncated or lost in the source.

| Generated Text | Corpus Text |
| --- | --- |
| **Media House 1 - FT1** | |
| Entity A is described as having characteristics that can end up forming the government in State. | As per sources, Entity A is tipped to be the next Chief of elected Members of State. |
| Entity B is described as having characteristics like threatening, stoking violence, etc. | Entity B's comments come after he was likened to a terrorist by a prominent leader. |
| Entity C is described as having characteristics of a caring truly, a living truly, and a pious truly | Entity C was a great leader with a great sense of compassion and humour. |
| Entity D is described as having the characteristics of an Angel. | A prominent chronicle of a Powerful person, a character that bears an uncanny resemblance to Entity D. |
| Entity E is described as being critical of the Prominent Party government in state. | Leader Entity E had remained highly critical of the Prominent Party government in the past. |
| Entity D is described as being a strong advocate for the interests of the people. | Listing out the various pro-people initiatives launched by Entity D, a Rebel leader… |
| Entity F is described as being an extremely beautiful face | The smoky eyes and male lips further complimented Entity F's look. |
| Entity G is described as being very quick in taking the decision, in such a situation. | Entity G, however, is the first politician from the ruler's family to have reacted to the step. |
| **Media House 2 - FT1** | |
| Entity H is described as being under house arrest, at his residence. | Does this mean party head and elected member Entity H is under house arrest? |
| Entity I is described as being the new Chief of State. | "People of State want Entity I to become the Chief of State," he added. |
| Entity H is described as being unwell. | "We have heard that Entity H is unwell, which is understandable" |
| Entity J is described as being mature. | Another significant development is that Entity J has emerged as a mature leader during the General elections. |
| Entity J is described as having the characteristics like a true leader and… | He said that Entity J has a good vision and thoughts. |
| Entity K is described as having characteristics such as being able to take people on the path of development. | "Through better economic management, we could take the common man ahead on the path of progress," Entity K claimed. |
| Entity L is described as having characteristics like a seasoned politician and leader and an ideal organisational person. | Entity L is an extremely qualified & respected leader; Entity L has served this nation with dedication & humility. |
| Entity M is described as having characteristics such as reconciling to the family | She said, "actor Entity M has really had my back, and has been there for me as a friend and support over the years, unfailingly and intuitively." |
| Entity N is described as having the characteristics like a true leader and… | Entity N said that he was restricted only to his region as he does not hold any official post in the city unit. |
| Entity N is described as being no entry, in the roadshow. | … |
| Entity M is described as being an awareness campaign to urge people to follow. | During this time, Entity M has appeared in several public safety videos, urging his fans to obey laws. |
| **Media House 3 - FT1** | |
| Entity D is described as having characteristics of a strong personality. | On one side, you see in Entity D a woman who was the personification of authoritarianism. |
| Entity O is described as having characteristics of a classic leader born to influential parents. | With a massive campaign focused on Entity O's personality, he has towered over other… |
| Entity J is described as being to become the President. | … to become President. |
| Entity P is described as being the primary link between the party and the people | "Entity P is the unifying factor for the party," the party affairs representative told… |
| **Media House 3 - FT2** | |
| Entity P is described as having characteristics of a leader who has a habit of wearing… | Entity P is described as coming in her uniform. |
| Entity E is described as having characteristics of a leader who may be able to win City elections. | Entity E is described as claiming he built his… from the ground up by addressing dozens of… |
| Entity Q is described as having characteristics of a successful order. | Entity Q is described as making that comment, in his personal capacity. |
| Entity R is described as having characteristics of a leader who may need to rein in elements on the ground | Entity R is described as saying that he will take all efforts to help authorities contain the spread of the disease. |
| **Media House 4 - FT1** | |
| Entity B is described as being in State, for a two-day visit to State. | Entity B is on a two-day visit to State. |
| Entity S is described as being active, on social media. | Entity S is an avid social media player and also writes a blog regularly. |
| Entity T is described as being the new go-to girl. | New 'Country Girl' Entity T is making a lot of headlines these days. |
| Entity U is described as being in no mood to waste time. | "I do not waste my time on what he says," said the leader Entity U. |
| **Media House 4 - FT2** | |
| Entity J is described as having characteristics of a revolutionary. | Entity J is described as showing his mettle. |
| Entity V is described as having characteristics of an attitude. | Entity V is described as winning several accolades for his work, including the Country Award for his debut role as a child artist. |
| Entity W is described as having characteristics of a leader. | … |
in Table 2, to test the hypothesis. For each entity, sentences were generated up to that entity's sentence count in the FT2 sentence corpus. The length of the generated text was limited to 30 tokens, and the first sentence in the generated text was considered for evaluation. The first evaluation was against the FT2 sentence corpus: entity names in the FT2 sentences were masked, embeddings were constructed, and each generated sentence was matched against all sentence embeddings. Masking entity names in the FT2 corpus yielded more relevant matches. The match with the highest cosine score was taken as the best semantic match. Next, a similar evaluation was done with the FT1 article corpus. Every sentence was extracted from each FT1 article, and only sentences containing person entities and longer than ten tokens were compared with the generated text, so as to consider sentences with reasonable information content and to exclude insignificant ones. In this evaluation, entity names were not masked in the FT1 corpus sentences.
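A sketch of this matching step with the `sentence-transformers` library; the specific encoder model is an assumption, as the paper cites only the STS usage documentation.

```python
# Embed the entity-masked corpus once, then score each generated sentence
# against it by cosine similarity, keeping the best match.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

corpus = ["<ENTITY> is described as winning the city polls.",
          "<ENTITY> is described as addressing a rally."]
generated = "Entity X is described as securing a win in the city elections."

corpus_emb = model.encode(corpus, convert_to_tensor=True)
gen_emb = model.encode(generated, convert_to_tensor=True)

scores = util.cos_sim(gen_emb, corpus_emb)[0]
best = scores.argmax().item()
print(corpus[best], float(scores[best]))  # best match and its cosine score
```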
We define the evaluation criteria as detailed in Table 4. The rationale is that if a generated sentence is semantically similar to an FT1 or FT2 sentence and the entity referred to is the same, then the generated sentence should be about that entity. In FT2, we processed sentences so that anything said or done by, or any event related to, an entity follows the entity's name and the phrase "is described as"; we refer to these as entity-characterizing sentences. The sentences generated by the FT2 checkpoint are of the same kind as those in the FT2 corpus (examples in Table 6). Hence we treat the FT2-generated sentences as characterizations and validate them with the confusion matrix definitions in Table 4. Good metrics on either the FT1 or the FT2 dataset are sufficient to conclude the soundness of the approach.
Table 5 shows the metrics derived from the evaluation criteria. F1, Precision, and Recall are computed over the _Distinct Generated Sentences Count_. As shown, the _"is described as having characteristics"_ and _"is described as being"_ prompts resulted in good F1, Precision, and Recall (True Positive Rate) scores across media houses, confirming that FT2 leads to generating the sentences most relevant to the entity. More than one generated sentence can be semantically similar to a corpus sentence. For True Positives, the average difference in sentiment scores between the generated sentence and its semantically matching sentence is marginal. It is therefore encouraging to conclude that FT2-generated sentences are about the prompted entities and characterize the entities with the sentiment present in the corpus. Exhaustive examples of generated True Positives and the corresponding semantically matching sentences for the top metrics in Table 5 are shown in Table 6.
With the approach, evaluation criteria, and test prompts detailed in this work, the _"is described as having characteristics"_ and _"is described as being"_ manual prompts function reasonably well for generating non-extractive characterizations of entities, as is evident from the examples. Examples of top characterizations of three test entities appearing across media houses are shown in Table 7, to contrast the characterizations generated from each media house's corpus. The generated characterizations have a cosine similarity greater than 0.75 with FT1 corpus sentences. It is evident that the top characterizations differ distinctly across media houses for the same entities.
Table 7. Examples of Entity Characterizations across media houses

| Test Entity | Examples of Generated Characterizations Across Media Houses |
| --- | --- |
| **Entity 1** | MH1: _is described as having characteristics_ of an immature, perhaps naive, leader |
| | MH1: _is described as having characteristics_ of an immature, perhaps anti-national, protestor |
| | MH2: _is described as having characteristics_ like a true leader and a man to trust |
| | MH3: _is described as having characteristics_ of a classic Party loyalist |
| | MH3: _is described as having characteristics_ of a leader who is adept at top command |
| | MH4: _is described as being_ at loggerheads with the Party leadership |
| | MH4: _is described as being_ fit, also, to be a prime minister |
| **Entity 2** | MH1: _is described as having characteristics_ of a strong woman |
| | MH1: _is described as having characteristics_ of a strong political personality |
| | MH2: _is described as having characteristics_ such as long history with the State and its unique culture and languages |
| | MH2: _is described as having characteristics_ like a person, strong willpower, and political instincts |
| | MH3: _is described as having characteristics_ of a classic leader |
| | MH3: _is described as having characteristics_ of a strong regional leader |
| **Entity 3** | MH3: _is described as having characteristics_ of a leader who is adept at stoking passions through the Party's various programs |
| | MH3: _is described as having characteristics_ like a leader with firm control over the party, a decisive figure, and an ability to move the front |
| | MH4: _is described as being_ successful in expanding the Party |
| | MH4: _is described as being_ a "prominent face" of the Party |
## 6. Conclusion

There are diverse perspectives about any person entity we know, and even more so with famous personalities. Media house discourses are diverse and shape the world views about famous personalities. In today's Information Age, gaining insight into these world views leads to faster and better awareness. In this work, we proposed an approach to derive such common perceptions in a zero-shot way. The evaluation criteria and metrics show good performance of the approach.
|
2306.17842 | SPAE: Semantic Pyramid AutoEncoder for Multimodal Generation with Frozen
LLMs | In this work, we introduce Semantic Pyramid AutoEncoder (SPAE) for enabling
frozen LLMs to perform both understanding and generation tasks involving
non-linguistic modalities such as images or videos. SPAE converts between raw
pixels and interpretable lexical tokens (or words) extracted from the LLM's
vocabulary. The resulting tokens capture both the semantic meaning and the
fine-grained details needed for visual reconstruction, effectively translating
the visual content into a language comprehensible to the LLM, and empowering it
to perform a wide array of multimodal tasks. Our approach is validated through
in-context learning experiments with frozen PaLM 2 and GPT 3.5 on a diverse set
of image understanding and generation tasks. Our method marks the first
successful attempt to enable a frozen LLM to generate image content while
surpassing state-of-the-art performance in image understanding tasks, under the
same setting, by over 25%. | Lijun Yu, Yong Cheng, Zhiruo Wang, Vivek Kumar, Wolfgang Macherey, Yanping Huang, David A. Ross, Irfan Essa, Yonatan Bisk, Ming-Hsuan Yang, Kevin Murphy, Alexander G. Hauptmann, Lu Jiang | 2023-06-30T17:59:07Z | http://arxiv.org/abs/2306.17842v3 | # SPAE: Semantic Pyramid AutoEncoder for Multimodal Generation with Frozen LLMs
###### Abstract
In this work, we introduce Semantic Pyramid AutoEncoder (SPAE) for enabling frozen LLMs to perform both understanding and generation tasks involving non-linguistic modalities such as images or videos. SPAE converts between raw pixels and interpretable lexical tokens (or words) extracted from the LLM's vocabulary. The resulting tokens capture both the semantic meaning and the fine-grained details needed for visual reconstruction, effectively translating the visual content into a language comprehensible to the LLM, and empowering it to perform a wide array of multimodal tasks. Our approach is validated through in-context learning experiments with frozen PaLM 2 and GPT 3.5 on a diverse set of image understanding and generation tasks. Our method marks the first successful attempt to enable a frozen LLM to generate image content while surpassing state-of-the-art performance in image understanding tasks, under the same setting, by over 25%.
## 1 Introduction
Large language models (LLMs) empowered by Transformers [39] have achieved remarkable progress in addressing a broad spectrum of Natural Language Processing (NLP) tasks [4; 8; 29; 2]. With the continuous increases in model size and training data, LLMs are gradually becoming more versatile and agnostic to specific tasks, unlocking new capabilities in solving complex AI tasks [42], like question answering, code generation, reasoning, mathematics problem-solving, and understanding humor, among various other applications [2; 29].
LLMs capture rich conceptual knowledge about the world in their lexical embeddings. This raises a question: if provided with the appropriate visual representations as input, _are frozen LLMs capable of solving tasks in visual modalities?_ Very recently, there have been notable advancements in extending the capabilities of frozen LLMs to tackle image understanding and retrieval tasks [21; 28]. However, generating a different modality using a frozen LLM that has not been explicitly trained on that modality has proven to be challenging and has had little success.
To facilitate LLMs for such cross-modal tasks, we propose to learn a vector quantizer that maps an image, or some other non-linguistic ("foreign") modality, to the token space of a frozen LLM. This effectively translates the image into a language the LLM can comprehend, enabling us to leverage the generative abilities of the LLM to perform conditional image understanding and generation tasks without training on image-text pairs. Specifically, given an image prompt, our new approach converts it to the token space with our learned encoder, uses the LLM to generate suitable lexical tokens, and converts back to pixel space with our learned decoder.
We introduce a novel Semantic Pyramid AutoEncoder (SPAE) that produces a lexical word sequence that (1) carries rich semantics, and (2) retains fine details for signal reconstruction. In contrast to the majority of VQ-VAE approaches [38], our encoder maps to an interpretable discrete latent space, _i.e._, words. As depicted in Fig. 1, SPAE tokens have a multi-scale representation arranged in a pyramid structure. The upper layers of the pyramid comprise semantic-central concepts, while the lower layers prioritize appearance representations that captures the fine details for image reconstruction. This design enables us to dynamically adjust the token length to accommodate various tasks, such as using fewer tokens for understanding tasks and more tokens for generation tasks.
We verify the plausibility of our approach in an extreme setting of in-context learning [4], without any parameter updates to the LLM. Our SPAE model is trained standalone, without backpropagating through any language model. We evaluate our approach on image understanding tasks including image classification, image captioning, and visual question answering. We showcase the image generation capabilities of LLMs by leveraging in-context denoising techniques. Our method is LLM-agnostic and has been tested with PaLM 2 [2] and GPT-3.5 [29], suggesting compatibility with arbitrary LLMs. Code and models will be made available at [https://github.com/google-research/magvit/projects/spae](https://github.com/google-research/magvit/projects/spae) for the purpose of reproducible research.
The main contributions of this work are summarized as follows:
* This is the first successful method, to the best of our knowledge, that uses a frozen language model, trained solely on language tokens, to directly generate image content through in-context learning.
* We introduce a new SPAE tokenizer producing interpretable representations of semantic concepts and fine-grained details in the form of multilingual linguistic tokens with adjustable lengths.
* We propose a new progressive prompting method that facilitates in-context generation of long cross-modal sequences.
* We evaluate our method on visual understanding and generation tasks, and notably, our approach outperforms the best-published few-shot image classification accuracy [28] by an absolute 25% under the same in-context setting.
## 2 Related Work
**Multimodal generation with LLMs.** Advances have been made to expand the capabilities of LLMs beyond language. For example, Visual ChatGPT [43] uses ChatGPT to generate prompts and executes multimodal tasks through other models, _e.g._, generating images from text prompts with Stable Diffusion [33]. FROMAGe [21] feeds CLIP [31] embeddings to OPT [48] for image understanding and retrieval; however, it requires backpropagation through the LLM and does not support image synthesis. This work enables a standalone frozen LLM to understand and generate other modalities that are unseen during training.
Tokenization via vector quantization.VQ-VAE [38] compresses data into a discrete latent space defined by a codebook via vector quantization. VQGAN [14] enhances the reconstruction quality
Figure 1: **Framework of the proposed SPAE model.** An image is encoded into a pyramid of lexical tokens capturing semantic concepts and appearance details necessary for reconstruction.
with adversarial and perceptual objectives. These discrete latent quantities, often referred to as _tokens_, are widely used to learn generative transformer models for image [33; 7], video [45; 15], and audio [3; 9]. Our SPAE model is built upon the VQGAN framework and is applicable to different modalities.
**Tokenization into lexical representations.** The codebooks in typical VQGANs are learned jointly with the encoder and decoder stacks, which are not directly interpretable via natural languages. LQAE [28] replaces the learned codebook with frozen word embeddings from BERT [12] to connect with an English vocabulary. However, the LQAE tokens seldom contain semantic concepts in an image, and the reconstruction quality is worse than that with a learned codebook. Our SPAE quantizes an input sample into semantically related tokens in a multilingual vocabulary while preserving the high reconstruction quality of a VQGAN for generative tasks. In addition, SPAE tokens are organized in a multi-layer coarse-to-fine pyramid for flexible usage in different tasks.
**Few-shot learning with LLMs.** In-context learning [4; 8; 2] facilitates LLMs for few-shot learning via the text interface without parameter updates. This approach is commonly employed to assess the performance of LLMs on numerous NLP benchmarks, _e.g._, classification and question answering [41], mathematical reasoning [24], and code generation [44], which yields competitive results to their fine-tuned counterparts. However, existing few-shot vision-language understanding and generation frameworks [1; 21] still require LLM parameter updates. In contrast, our work inherits the in-context learning ability from frozen LLMs.
## 3 Method
Our goal is to model an image, or some other non-linguistic modality (_e.g._, video or audio), as a language sequence that LLMs can comprehend. _Semantic Pyramid AutoEncoder_ (SPAE) generates a lexical word sequence with dynamically adjustable length that carries rich semantics and retains fine details for signal reconstruction. To work with a frozen LLM via in-context learning, we introduce a progressive in-context denoising method to facilitate image generation. We use the image modality in this section to introduce our SPAE model in 2D, and later showcase the results of a 3D variant with the video modality in our experiments.
### Semantic Pyramid AutoEncoder
Our SPAE model extends the VQ-VAE [38] framework, which comprises an encoder, a quantizer, and a decoder. The CNN encoder maps an image \(\mathbf{I}\in\mathbb{R}^{H\times W\times 3}\) into continuous embeddings \(\mathbf{Z}\in\mathbb{R}^{h\times w\times c}\). Each element \(\mathbf{z}\in\mathbf{Z}\) is then passed through the quantizer, which assigns it to the closest entry in a codebook, resulting in the quantized embedding. Let \(\hat{\mathbf{Z}}\) represent the quantized embeddings for the entire image. The CNN decoder receives \(\hat{\mathbf{Z}}\) as input and generates the reconstructed image \(\hat{\mathbf{I}}\). Below we highlight the design differences in SPAE.
As illustrated in Fig. 1, SPAE generates lexical tokens arranged in a pyramid structure, which contains semantic concepts in the upper layers and appearance with progressively refined details in the lower layers. We introduce a semantic loss to encourage the usage of conceptually relevant tokens.
**Frozen language codebook.** To generate lexical tokens, we utilize a pretrained LLM codebook \(\mathbb{C}=\{(k,\mathbf{e}(k))\mid k\in\mathbb{T}\}\) and freeze it during training, where \(\mathbb{T}\) is a subset of the LLM vocabulary. Here, \(\mathbf{e}(\cdot)\) produces the text embedding for a sub-word \(k\) which may be obtained from any layer of the LLM. Since the codebook is aligned with the language vocabulary, we use the terms "token" and "word" interchangeably.
**Token pyramid.** The SPAE quantizer produces \(D\) layers of tokens where the tokens at layer \(l\) are denoted as \(\mathbf{k}_{l}\in\mathbb{T}^{h_{l}\times w_{l}}\). Prior works use Residual Quantization (RQ) to generate multi-layer tokens [22; 46]. In these methods, tokens from all layers have uniform shapes and do not carry specific semantic meanings. In contrast, we propose a pyramid token structure by enforcing the constraint \(h_{l}\leq h_{l+1}\wedge w_{l}\leq w_{l+1}\). The pyramid structure is purposefully designed to concentrate semantics within the upper layers of the pyramid. This design allows for representing semantic concepts using a significantly reduced number of tokens, _e.g._, as few as five tokens. RQ does not face this challenge as its tokens do not carry any inherent semantic meaning. A dilation subsample \(\mathbf{P}(l)\) is used, which selects the positions for quantization at layer \(l\) as
\[\mathbf{P}(l)=\{(h^{\prime}i-\left\lceil\frac{h^{\prime}}{2}\right\rceil+1,w^{ \prime}j-\left\lceil\frac{w^{\prime}}{2}\right\rceil+1)\mid(i,j)\in([1,h_{l}] \times[1,w_{l}])\cap\mathbb{Z}^{2}\} \tag{1}\]
where \(h^{\prime}=\frac{h_{D}}{h_{l}}\), and \(w^{\prime}=\frac{w_{D}}{w_{l}}\) are the downsample ratios.
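For concreteness, a small sketch that enumerates the positions of Eq. (1); grid sizes are toy values:

```python
# For layer l, pick h_l x w_l grid positions out of the full h_D x w_D latent
# grid, centered in each h' x w' cell. Coordinates are 1-indexed to match Eq. (1).
import math

def positions(h_l, w_l, h_D, w_D):
    hp, wp = h_D // h_l, w_D // w_l  # downsample ratios h', w'
    return {(hp * i - math.ceil(hp / 2) + 1,
             wp * j - math.ceil(wp / 2) + 1)
            for i in range(1, h_l + 1) for j in range(1, w_l + 1)}

# E.g., a 2x2 layer over an 8x8 bottom grid selects one centered position
# in each 4x4 cell:
print(sorted(positions(2, 2, 8, 8)))  # [(3, 3), (3, 7), (7, 3), (7, 7)]
```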
For each embedding \(\mathbf{z}\) at position \((x,y)\), we obtain its discrete tokens \(k_{l}\) sequentially from layer \(1\) to \(D\). At layer \(l\), if \((x,y)\in\mathbf{P}(l)\), the quantizer assigns \(k_{l}=\operatorname*{arg\,min}_{k\in\mathbb{T}}\|\mathbf{z}_{l}-\mathbf{e}(k)\| _{2}^{2}\), where \(\mathbf{z}_{l}\) is the current remainder embedding, calculated from
\[\mathbf{z}_{l}=\mathbf{z}+\sum_{i=1}^{l-1}\mathbf{1}_{(x,y)\in\mathbf{P}(i)}( \mathbf{z}-\mathbf{e}(k_{i})) \tag{2}\]
The quantized embedding reconstructed with the first \(l\) layers is given by the average of the existing token embeddings as
\[\hat{\mathbf{z}}_{\leq l}=\frac{\sum_{i=1}^{l}\mathbf{1}_{(x,y)\in\mathbf{P}(i )}\mathbf{e}(k_{i})}{\sum_{i=1}^{l}\mathbf{1}_{(x,y)\in\mathbf{P}(i)}} \tag{3}\]
Using the input of \(\hat{\mathbf{Z}}_{\leq l}\) from tokens up to layer \(l\), the decoder can progressively reconstruct the image with dynamic token lengths, resulting in improved quality with refined appearance details. We term this approach _Streaming Average Quantization_ (SAQ) due to its resemblance to computing the average on streaming data, where \(\hat{\mathbf{z}}_{\leq l+1}=\hat{\mathbf{z}}_{\leq l}+\frac{1}{\hat{l}+1}(\mathbf{e}(k_{l+1})-\hat{\mathbf{z}}_{\leq l})\) with \(\hat{l}=\sum_{i=1}^{l}\mathbf{1}_{(x,y)\in\mathbf{P}(i)}\) counting the tokens selected so far.
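The numpy sketch below walks through SAQ at one spatial position, following Eqs. (2)-(3); the random codebook and its size and dimension are illustrative stand-ins for the frozen LLM embedding table. Quantizing the remainder \(\mathbf{z}_{l}\) steers the running average toward \(\mathbf{z}\), so appearance details are refined as layers accumulate.

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(4096, 8))    # stand-in for the frozen e(k) table
z = rng.normal(size=8)                    # encoder embedding at position (x, y)
selected = [False, True, True, True]      # whether (x, y) is in P(l), per layer

chosen = []                               # embeddings of tokens picked so far
for l, active in enumerate(selected, start=1):
    if not active:
        continue                          # layer skipped by the subsampler
    z_l = z + sum(z - e for e in chosen)  # remainder embedding, Eq. (2)
    k = int(np.argmin(((codebook - z_l) ** 2).sum(axis=1)))
    chosen.append(codebook[k])
    z_hat = np.mean(chosen, axis=0)       # quantized embedding, Eq. (3)
    print(f"layer {l}: token {k}, error {np.linalg.norm(z - z_hat):.3f}")
```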
RQ [22, 46] is applicable but yields worse results in this context, as revealed by our ablation studies. This can be attributed to (1) varying scales of embeddings in residual layers, potentially dividing the codebook into multiple parts, and (2) misalignment in the summation of word embeddings, which undermines learning semantically meaningful tokens in later layers.
**Semantic loss.** We encourage semantic similarity between the image \(\mathbf{I}\) and each lexical token \(k\), denoted by \(s(\mathbf{I},k)\). During training, we build per-layer candidate token pools as
\[\mathbf{C}_{l}(\mathbf{I})=\{k\in\mathbb{T}\mid s(\mathbf{I},k)\geq\rho_{l}\} \tag{4}\]
where \(\rho_{l}\) is a threshold. We set \(\rho_{l}\geq\rho_{l+1}\) to allow deeper layers to have a larger pool of candidate tokens while sacrificing some semantics.
Using image-text pairs [27] could lead to an ideal definition of \(s(\mathbf{I},k)\), but it limits the utilization of large-scale unpaired data. To define the similarity score, this paper employs a pretrained CLIP model [30]. In more detail, let \(f_{\mathcal{I}}\) and \(f_{\mathcal{T}}\) be a pair of image and text CLIP embedding functions. We precompute the text feature for each token \(k\in\mathbb{T}\) as
\[f_{\mathcal{T}}^{\prime}(k)=\frac{1}{|\mathbf{p}|}\sum_{i=1}^{|\mathbf{p}|}f_{ \mathcal{T}}(\mathbf{p}_{i}(k)) \tag{5}\]
where \(\mathbf{p}\) is a list of prompt templates, such as "a photo of...". During training, we extract the image feature \(f_{\mathcal{I}}(\mathbf{I})\) and compute the dot-product similarity as \(\mathbf{s}^{\prime}(\mathbf{I},k)=f_{\mathcal{I}}(\mathbf{I})\cdot f_{ \mathcal{T}}^{\prime}(k)\). The similarity score is then normalized to account for the varying scales across different images.
\[\mathbf{s}(\mathbf{I},k)=\frac{\mathbf{s}^{\prime}(\mathbf{I},k)-\min_{j} \mathbf{s}^{\prime}(\mathbf{I},j)}{\max_{j}\mathbf{s}^{\prime}(\mathbf{I},j)- \min_{j}\mathbf{s}^{\prime}(\mathbf{I},j)} \tag{6}\]
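A small numpy sketch of Eqs. (4)-(6) follows; the feature dimension, vocabulary size, and random features are placeholders for the precomputed CLIP embeddings.

```python
import numpy as np

def candidate_pools(image_feat, token_feats, thresholds):
    """Normalize dot-product similarities (Eq. (6)) and build the per-layer
    candidate pools C_l(I) of Eq. (4)."""
    s_raw = token_feats @ image_feat                         # s'(I, k)
    s = (s_raw - s_raw.min()) / (s_raw.max() - s_raw.min())  # Eq. (6)
    return [np.flatnonzero(s >= rho) for rho in thresholds]

rng = np.random.default_rng(0)
image_feat = rng.normal(size=512)            # f_I(I)
token_feats = rng.normal(size=(1000, 512))   # prompt-averaged f'_T(k), Eq. (5)
pools = candidate_pools(image_feat, token_feats, [0.98, 0.95, 0.9, 0.85, 0.8])
print([len(p) for p in pools])               # deeper layers get larger pools
```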
We define the semantic loss for the encoder parameters \(\theta_{e}\) as
\[\mathcal{L}_{\text{semantic}}(\theta_{e};\mathbf{I})=\mathop{\mathbb{E}}_{l\in[1,D^{\prime}]}\mathop{\mathbb{E}}_{\mathbf{z}_{l}}\mathop{\mathbb{E}}_{c\in\mathbf{C}_{l}(\mathbf{I})}-\log\frac{\exp(-\|\mathbf{z}_{l}-\mathbf{e}(c)\|_{2}^{2})}{\sum_{k\in\mathbb{T}}\exp(-\|\mathbf{z}_{l}-\mathbf{e}(k)\|_{2}^{2})} \tag{7}\]
where we randomly sample semantically similar target codes \(c\) for each remainder embedding in the first \(D^{\prime}\) layers.
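For illustration, here is a hedged PyTorch sketch of Eq. (7) for a single remainder embedding; the codebook and candidate pool are toy placeholders, and gradients flow only to the encoder side since the codebook is frozen.

```python
import torch

def semantic_loss(z_l, codebook, pool):
    """Eq. (7) for one remainder embedding: softmax over negative squared
    distances to all codebook entries, evaluated at a sampled target c."""
    d2 = ((codebook - z_l) ** 2).sum(dim=1)             # ||z_l - e(k)||^2 over T
    log_probs = torch.log_softmax(-d2, dim=0)
    c = pool[torch.randint(len(pool), (1,))].squeeze()  # random target token
    return -log_probs[c]

codebook = torch.randn(1000, 8)            # frozen, illustrative
z_l = torch.randn(8, requires_grad=True)   # stands in for the encoder output
loss = semantic_loss(z_l, codebook, torch.tensor([3, 17, 42]))
loss.backward()                            # in SPAE this updates only theta_e
print(float(loss))
```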
**Appearance loss.** Using an improved VQGAN objective from [45], the appearance loss is calculated as follows:
\[\mathcal{L}_{\text{appearance}}(\theta_{e},\theta_{d};\mathbf{I}) =\!\|\mathbf{I}\!-\!\hat{\mathbf{I}}\|_{2}^{2}\!+\!\beta\sum_{l=1}^{D} \|\mathbf{Z}\!-\!\text{sg}(\hat{\mathbf{Z}}_{\leq l})\|_{2}^{2}\!+\!\lambda \mathcal{L}_{\text{GAN}}\!+\!\eta\mathcal{L}_{\text{Perceptual}}\!+\!\phi \mathcal{L}_{\text{LeCAM}} \tag{8}\]
where \(\mathcal{L}_{\text{GAN}}\), \(\mathcal{L}_{\text{Perceptual}}\), and \(\mathcal{L}_{\text{LeCAM}}\) are the VQGAN [15], perceptual [19], and LeCAM [35] losses. In addition, \(\text{sg}(x)\equiv x,\,\frac{d}{dx}\text{sg}(x)\equiv 0\) is the stop-gradient operation. The appearance loss is applied to both the encoder \(\theta_{e}\) and decoder parameters \(\theta_{d}\), excluding the frozen codebook embedding.
To stabilize the training and balance between appearance and semantics, we add a dynamic weight for the semantic guidance loss as \(w=\text{sg}\!\left(\frac{\mathcal{L}_{\text{appearance}}(\mathbf{I})}{\mathcal{L }_{\text{semantic}}(\mathbf{I})}\right)\). The total training loss excluding the GAN discriminator is
\[\mathcal{L}_{\text{SPAE}}(\theta_{e},\theta_{d})=\mathop{\mathbb{E}}_{\mathbf{I}}\left[\mathcal{L}_{\text{appearance}}(\theta_{e},\theta_{d};\mathbf{I})+\alpha w\mathcal{L}_{\text{semantic}}(\theta_{e};\mathbf{I})\right] \tag{9}\]
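The dynamic weighting is mechanically simple; the sketch below shows how the detached ratio rescales the semantic term without routing gradients through the weight itself. The scalar losses are toy stand-ins for Eqs. (7)-(8).

```python
import torch

def spae_loss(appearance, semantic, alpha=1.0):
    """Eq. (9): w = sg(appearance / semantic) balances the two terms."""
    w = (appearance / semantic).detach()   # stop-gradient, sg(.)
    return appearance + alpha * w * semantic

appearance = torch.tensor(2.0, requires_grad=True)
semantic = torch.tensor(0.5, requires_grad=True)
print(float(spae_loss(appearance, semantic)))  # 2.0 + 1.0 * 4.0 * 0.5 = 4.0
```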
### Progressive In-Context Denoising
While our method is more effective when backpropagating through LLMs by prompt [23] or adapter tuning [17, 18], this work focuses on verifying the plausibility in an extreme setting of in-context learning [4]. We demonstrate that LLMs are capable of performing new tasks in foreign modalities without any parameter updates. Specifically, a set of \(K\) examples \(\{(\mathbf{u}^{i},\mathbf{v}^{i})\}_{i=1}^{K}\) are fed to the LLM to learn a new task and answer a query \(\hat{\mathbf{u}}\) with
\[\hat{\mathbf{v}}\sim\mathrm{P}_{\mathrm{LLM}}(\cdot|\hat{\mathbf{u}};\{( \mathbf{u}^{i},\mathbf{v}^{i})\}_{i=1}^{K}) \tag{10}\]
Sampling \(\hat{\mathbf{v}}\) by a single-pass autoregressive decoding is suboptimal due to the distributional shift in the representation and the presence of exceptionally long sequences, _e.g._, an image is quantized into over 500 tokens. To this end, we introduce a progressive in-context denoising method.
**Progressive generation.** We generalize Eq. (10) into a multi-step generation process. Let \(s_{t}\) denote the number of tokens generated in \(t\) steps. At each step, we sample a segment of the full sequence
\[\hat{\mathbf{v}}_{s_{t}:s_{t+1}}\sim\mathrm{P}_{\mathrm{LLM}}(\cdot|[\hat{ \mathbf{u}},\hat{\mathbf{v}}_{<c_{t}}];\{([\mathbf{u}^{i},\mathbf{v}^{i}_{<c_{ t}}],\mathbf{v}^{i}_{s_{t}:s_{t+1}})\}_{i=1}^{K}) \tag{11}\]
where \([\cdot,\cdot]\) concatenates the sequences. We use \(c_{t}\) to control the length of previous segments to condition on. When \(c_{t}=s_{t}\), it is still an autoregressive (AR) process, where each \(\hat{\mathbf{v}}_{s_{t}:s_{t+1}}\) conditions on all previously decoded segments \(\hat{\mathbf{v}}_{<s_{t}}\). When \(c_{t}=0\), it corresponds to a segment-wise non-autoregressive (NAR) process, where each \(\hat{\mathbf{v}}_{s_{t}:s_{t+1}}\) is independently sampled. Note that the generation within each segment is always an autoregressive decoding procedure of the LLM. In practice, we use AR to generate the first few token layers given task-specific conditions. Then we use NAR to generate the remaining token layers conditioned on the previous layers in an unconditional latent refinement process.
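The following Python sketch mirrors the decoding loop of Eq. (11) with a stubbed LLM call; `llm_complete`, the prompt layout, and the token strings are all hypothetical placeholders (the actual prompt formats are listed in the Appendix).

```python
def llm_complete(prompt, n_tokens):
    # Hypothetical stub; a real system would query the frozen LLM here.
    return ["tok"] * n_tokens

def progressive_decode(query, examples, segments, autoregressive=True):
    """Generate segment by segment. autoregressive=True conditions each
    segment on all previously decoded tokens (c_t = s_t); False samples
    segments independently, i.e., the segment-wise NAR case (c_t = 0)."""
    out = []
    for s_t, s_next in segments:
        ctx = out if autoregressive else []
        prompt = "\n".join(
            f"C:{u} Q:{' '.join(v[:len(ctx)])} A:{' '.join(v[s_t:s_next])}"
            for u, v in examples)
        prompt += f"\nC:{query} Q:{' '.join(ctx)} A:"
        out += llm_complete(prompt, s_next - s_t)
    return out

examples = [("cond1", ["a"] * 16), ("cond2", ["b"] * 16)]
print(progressive_decode("query", examples, [(0, 4), (4, 8), (8, 12), (12, 16)]))
```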
**In-context denoising.** The learning capacity of an in-context setup is far from sufficient for the entirety of a foreign modality \(\mathcal{M}\). So far, there have been no successful attempts in the literature demonstrating that a frozen LLM can generate image content. Therefore, we operate in a denoising subspace to achieve generation. Take the image-to-image task in Fig. 2 as an example. The provided context comprises images randomly corrupted in the token space by \(\epsilon(\cdot;r)\), where the corruption ratio \(r\) follows a cosine schedule [7].
\[(\mathbf{u}^{i},\mathbf{v}^{i})\sim\Big{(}\epsilon\big{(}\mathcal{Q}(\mathtt{ mask}(\mathbf{I}));r_{i}\big{)},\epsilon\big{(}\mathcal{Q}(\mathbf{I});r_{i} \big{)}\Big{)},\mathbf{I}\in\mathcal{M}^{\prime}\subset\mathcal{M} \tag{12}\]
where \(\mathcal{Q}(\cdot)\) represents the SPAE tokenizer and \(\mathcal{M}^{\prime}\) is a small subset of raw images. \(\mathtt{mask}(\cdot)\) zeros out pixels of the real image to create the condition image, such as masking out the bottom half for out-painting. The query \(\hat{\mathbf{u}}\) is always sampled from \(\mathcal{M}^{\prime}\) without noise \(\epsilon(\cdot;r)\).
To ensure the generation is not simply copying the context, we enforce a minimal corruption rate of 20% such that no identical image from the context matches the real target image.
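A sketch of assembling the denoising context of Eq. (12) is given below; the identity tokenizer/mask and the linear rate schedule are simplifying assumptions (the paper uses the SPAE tokenizer \(\mathcal{Q}(\cdot)\) and a cosine schedule), with the 20% corruption floor kept.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 65536  # size of the lexical vocabulary T

def corrupt(tokens, rate):
    """epsilon(.; r): replace a fraction `rate` of token positions at random."""
    out = tokens.copy()
    idx = rng.choice(len(out), size=int(rate * len(out)), replace=False)
    out[idx] = rng.integers(VOCAB, size=len(idx))
    return out

def build_context(images, tokenize, mask, rates):
    """Pairs (u_i, v_i) of Eq. (12): corrupted (condition, target) tokens."""
    return [(corrupt(tokenize(mask(im)), r), corrupt(tokenize(im), r))
            for im, r in zip(images, rates)]

rates = np.linspace(0.5, 0.2, 10)  # decays to the 20% floor
images = [rng.integers(VOCAB, size=597) for _ in range(10)]  # toy "images"
context = build_context(images, tokenize=lambda x: x, mask=lambda x: x, rates=rates)
print(len(context))
```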
## 4 Experimental Results
### Experimental Settings
To verify the flexibility of our approach and the compatibility with different LLMs, we train two variants of SPAE, namely SPAE\({}_{\text{PaLM}}\) and SPAE\({}_{\text{GPT}}\). The SPAE\({}_{\text{PaLM}}\) codebook is taken from the input embedding layer of a PaLM 2-S checkpoint with a 65k vocabulary of the most frequent sentence pieces. The PaLM 2-L API [2] is used for in-context learning with SPAE\({}_{\text{PaLM}}\). SPAE\({}_{\text{GPT}}\) uses a byte-pair encoding vocabulary with 100k UTF-8 tokens ([https://github.com/openai/tiktoken](https://github.com/openai/tiktoken)), where we obtain the contextual token embeddings from OpenAI text-embedding-ada-002 ([https://platform.openai.com/docs/models/embeddings](https://platform.openai.com/docs/models/embeddings)). For a fair comparison with prior works [28],
Figure 2: **In-context denoising**. The context comprises images randomly corrupted in the token space, gradually ranging from 50% to 20%.
we use SPAE\({}_{\text{GPT}}\) with the GPT 3.5 text-davinci-003 API ([https://platform.openai.com/docs/models/gpt-3-5](https://platform.openai.com/docs/models/gpt-3-5)).
We configure SPAE to encode a 128\(\times\)128 image into a token pyramid of 6 layers where each layer has \(2^{k}\times 2^{k}\) tokens and \(k=[0,1,2,3,4,4]\). Additionally, we train a video-based SPAE model on the Kinetics-600 dataset [5], and further details can be found in the Appendix. We apply the semantic guidance loss to the first five layers, with thresholds of 0.98, 0.95, 0.9, 0.85, and 0.8. A CLIP model with a ViT-L/14 [13] vision backbone is used. We use 80 prompt templates from the zero-shot ImageNet classification setup to precompute the CLIP text embeddings for the vocabulary. In addition, we use the Adam [20] optimizer with loss weights \(\alpha=1,\beta=0.33,\lambda=0.1,\eta=0.1,\phi=10^{-4}\) and a learning rate of \(10^{-4}\) following a linear warmup/cooldown and root square decay schedule. Following the prior work [28], SPAE is trained on the ImageNet ILSVRC2012 [10] dataset. We train with a batch size of 256 for 450k steps. Additional implementation details are elaborated upon in the Appendix.
**Tokenization quality.** We compare the image and video reconstruction quality using the tokens produced by SPAE and the VQGAN baselines used in state-of-the-art image [7, 25, 6] and video generation [45]. We use FID [16], Inception Score (IS) [34], and LPIPS [47] to compare with the image VQGAN from MaskGIT [7] on the ImageNet validation set, and FVD [37] to compare with the 3D-VQGAN from MAGVIT [45] on the Kinetics-600 validation set. To quantify the semantics, we compute the CLIP and relative CLIP scores (Eq. (6)), both averaged across all lexical tokens.
The results are presented in Tab. 3. Unlike VQGAN tokens, which lack specific semantic meaning, SPAE tokens demonstrate high semantic CLIP scores, especially in the upper layers. As the number of layers increases, more tokens are utilized, resulting in improved reconstruction quality. This flexibility allows for dynamic adjustment of the token length to accommodate various tasks, such as using fewer tokens for understanding tasks. While SPAE may have lossier reconstruction compared to VQGAN when using a similar number of tokens, this is compensated by going into deeper layers, as shown in the last row of Tab. 3.
**Token pyramid visualization.** We visualize the tokens produced by SPAE in Fig. 4, where we show the raw pyramid or histogram of tokens with top frequencies for the first four layers, along with reconstructed images from layers 5 and 6. We have the following findings.
First, the SPAE tokens are organized in a pyramid structure, with every layer comprising tokens semantically related to the image. The few tokens in the top layers seem to capture the primary theme of the image. For instance, in Fig. 4, the token presso (highlighted in orange) represents the espresso machine and other tokens like blender refer to related regions. Layer 3 and Layer 4 reveal additional details about localized objects. For example, the token Thermo refers to the thermometer in the top-left region, while stove appears in the bottom-right area. In addition to nouns, related verbs also show up, including pouring, refill, spill, and brew.
Second, it is worth noting that the CLIP model has an English-only vocabulary. However, thanks to the multilingual vocabularies and embeddings from the LLM, SPAE's semantic guidance is able to map to similar concepts in other languages, such as koffie in Dutch and kaffe in Danish as corresponding terms to the concept of coffee.
Figure 4: **Examples of pyramid image tokenization and reconstruction** by a 6-layer SPAE. We show the raw pyramid or histogram of most frequent tokens for the first four layers, and reconstructed images from layers 5 and 6. In the pyramid, we use darker cells to show tokens with higher CLIP similarity to the original image. For non-English sub-word tokens, we show automatic translation for reference in italic fonts below the original token. Circled tokens are mentioned in Section 4.2. See full pyramid visualizations in the Appendix.
\begin{table}
\begin{tabular}{l l l l l l l l l l l} \hline \hline \multirow{3}{*}{Method} & \multicolumn{4}{c}{Image (ImageNet ILSVRC2012 [10])} & \multicolumn{4}{c}{Video (Kinetics 600 [5])} \\ & \# Layers & \multirow{2}{*}{FID\(\downarrow\)} & \multirow{2}{*}{IS\(\uparrow\)} & \multirow{2}{*}{LPIPS\(\downarrow\)} & \multirow{2}{*}{CLIP\(\uparrow\)} & Relative & \# Layers & \multirow{2}{*}{FVD\(\downarrow\)} & \multirow{2}{*}{CLIP\(\uparrow\)} & Relative \\ & : \# Tokens & & & & & & CLIP\(\uparrow\) & & \# Tokens & \\ \hline VQGAN & 1: 256 & 5.48 & 119.69 & 0.13 & n/a & n/a & 1: 1024 & 6.79 & n/a & n/a \\ \hline \multirow{6}{*}{_SPAE (ours)_} & 1: 1 & - & - & - & **0.1879** & **0.7196** & 1: 1 & - & **0.2061** & **0.8425** \\ & 2: 5 & - & - & - & 0.1868 & 0.7147 & 2: 5 & - & 0.2056 & 0.8402 \\ \cline{1-1} & 3: 21 & - & - & - & 0.1815 & 0.6901 & 3: 21 & - & 0.2032 & 0.8286 \\ \cline{1-1} & 4: 85 & - & - & - & 0.1711 & 0.6414 & 4: 149 & - & 0.1896 & 0.7620 \\ \cline{1-1} & 5: 341 & 9.49 & 109.46 & 0.17 & 0.1604 & 0.5914 & 5: 1173 & 52.28 & 0.1670 & 0.6531 \\ \cline{1-1} & 6: 597 & **4.41** & **133.03** & **0.12** & 0.1577 & 0.5787 & 6: 2197 & **6.35** & 0.1635 & 0.6367 \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Comparison of reconstruction quality and semantic relevance** of tokens between SPAE and the VQGAN baselines used in state-of-the-art image [7, 25, 6] and video [45] generation models.
Third, similar to RQ tokens [22], SPAE tokens can reconstruct the image with progressively refined details when more layers, and thus tokens, are utilized. Fig. 4 shows Layer 5 begins to produce a reasonable reconstruction while Layer 6 further enhances the level of detail and smoothness.
### Qualitative Studies
This section explores the capability of a frozen PaLM 2, trained solely on language tokens, in performing multimodal tasks using in-context learning. We adopt a two-stage decoding process for image generation. In stage one, we use AR decoding to produce the first 5 SPAE layers with task-specific conditions. Stage two is a task-agnostic NAR decoding process for layer 6 conditioned on the first 5 layers.
**Image to text and VQA.** We examine two tasks involving visual-text reasoning: (1) image captioning on COCO [26] captions; and (2) visual question answering (VQA) on COCO-QA [32]. For both tasks, we provide 10 unique training examples as prompts. In the case of VQA, 10 different answers are presented to form a 10-way 1-shot setup.
We compare SPAE to a baseline model trained with the same frozen language codebook but without the proposed semantic guidance or pyramid SAQ. As shown in Fig. 5, when fed with baseline tokens, the LLM randomly hallucinates a caption or guesses an answer simply based on the question. Similar hallucination can happen if we only use the first two layers of SPAE or five words to represent an image, as it provides insufficient context for captioning. Reasonable captions start to appear with 4
Figure 5: **Qualitative samples of image-to-text generation**: image captioning and VQA. We compare between different layers of SPAE (L1-L6) and a baseline model without semantic guidance or pyramid SAQ.
Figure 6: **Examples of text-to-image generation on MNIST using the frozen PaLM 2 model.** We provide 50 handwritten images in the context and ask PaLM 2, an LLM trained solely on text tokens, to answer complex queries that require generating digit images as the output. To achieve this, we use SPAE to convert the image into lexical tokens and construct prompts. Then, we ask PaLM 2 to generate suitable answers for the prompts. Finally, we convert the answer tokens back into the pixel space and display them in the figure. Note that the generated digit images do not appear identical to any of the samples provided in the context.
layers or 85 words representing an image, while complex scenes may still need the full 6 layers of 597 words.
**LLM generating MNIST images.** Fig. 6 shows a few text-to-image generation examples on MNIST [11]. The frozen LLM learns about handwritten digit images through 50 context samples tokenized by SPAE trained on MNIST. Each sample consists of a preamble "an image of \(k\)" and the lexical tokens representing an image of digit \(k\). Then we can ask the LLM to answer questions with digit images. Specifically, with a query of "an image of 1+7", we can use progressive AR decoding with the LLM to produce a token sequence that can be decoded into an image of 8 by SPAE. We test with complex questions requiring mathematical reasoning or common sense knowledge, and the LLM is able to respond correctly. In addition, the generated digit images appear different from all context samples. This demonstrates the cross-modal reasoning capability enabled by SPAE and a frozen LLM, with images generated over the text-only interface.
**Conditional image generation.** To the best of our knowledge, there have been no successful attempts that demonstrate generic image generation capability using a frozen LLM. To this end, we define a very simple conditional image generation setup to explore the interpolation capability of the LLM, where the conditions are integers from 1 to 9. The target images are created with different image transformations, _e.g._, brightness, contrast, saturation, and color. As shown in Fig. 7, images 1-4 and 6-9 are fed as context to produce image 5, where the model interpolates the variable property.
**Image to image denoising generation.** We transition to a more challenging task to generate an image from a condition image. Fig. 8 demonstrates the conditional image generation tasks, _e.g._, image outpainting, deblurring, inpainting, location translation, and rotation. Note that, in order to generate images for each task, we utilize 10 pairs of in-context denoising examples with corruption rates ranging from 50% to 20%, as discussed in Section 3.2. The full context, which is omitted in Fig. 8, can be found in the Appendix.
The top rows of Fig. 8 compare the generation from different decoding strides with the same set of context examples. Single-step decoding with infinity stride fails to produce a reasonable image, which marks the significance of our proposed progressive generation technique. In the first stage, we observe improved quality with smaller strides down to 4. However, stride 1 is suboptimal due to the
Figure 8: **Examples of image-to-image denoising generation**. We compare different decoding strides for both stages with an outpainting task at the top. Bottom are samples from other tasks. Each generated image uses ten pairs of in-context denoising examples, omitted in the figure; they can be found in the Appendix.
Figure 7: **Examples of conditional image generation**: interpolation of different image transformations.
loss of context. In the second stage, since the task is much simpler, most trials produce reasonable images; we adopt stride 16 to balance between efficiency and quality.
**Generating image and text using a single model.** Fig. 9 shows a task requiring a single LLM to generate both image and text, where it first inpaints the center region of an image using in-context denoising and then creates captions for the generated image. Please refer to the Appendix for the complete context inputs.
**Image to video denoising.** Due to space constraints, we show the resulting videos in the Appendix.
### Ablation Studies
The results in Tab. 4 and Fig. 10 verify the effectiveness of the proposed designs in SPAE, as evaluated by reconstruction quality (FID, IS, LPIPS) and semantic similarity (CLIP and Relative CLIP). We have the following findings. First, simply using a frozen codebook negatively affects the reconstruction results, but with semantic guidance it performs comparably with the original VQGAN while producing meaningful lexical words. Second, RQ hurts reconstruction quality with a frozen codebook, which is different from the standard setup [22] where the codebook is learned. Third, SAQ improves both quality and semantic similarity, where the pyramid adds flexibility for dynamic adjustment of the token length that can balance high-level semantics and low-level appearance details.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Method & \begin{tabular}{c} \# Layers \\ : \# Tokens \\ \end{tabular} & FID\(\downarrow\) & IS\(\uparrow\) & LPIPS\(\downarrow\) & CLIP\(\uparrow\) &
\begin{tabular}{c} Relative \\ CLIP\(\uparrow\) \\ \end{tabular} \\ \hline Baseline VQ & 1: 256 & 5.48 & 119.69 & 0.13 & n/a & n/a \\ + frozen codebook & 1: 256 & 7.44 & 101.39 & 0.17 & 0.1464 & 0.5260 \\ + semantic guidance & 1: 256 & 5.17 & 124.41 & 0.13 & 0.1518 & 0.5510 \\ \hline + 2-layer RQ [22] & 1: 256 & 11.94 & 89.01 & 0.22 & 0.1595 & 0.5875 \\ + 2-layer RQ [22] & 2: 512 & 6.05 & 113.93 & 0.15 & 0.1547 & 0.5646 \\ \hline + 2-layer SAQ & 1: 256 & 12.30 & 93.33 & 0.21 & 0.1613 & 0.5957 \\ + 2-layer SAQ & 2: 512 & 5.08 & 125.27 & 0.14 & 0.1595 & 0.5872 \\ \hline + 6-layer pyramid SAQ & 1: 1 & - & - & - & **0.1879** & **0.7196** \\ (\(c.f.\) layer 2-5 in Tab. 3) & 6: 597 & **4.41** & **133.03** & **0.12** & 0.1577 & 0.5787 \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Ablation studies** on codebook, semantic guidance, quantization method, and token structure.
## 5 Conclusion
Our work unveils the untapped potential of frozen Large Language Models (LLMs) in tackling multimodal understanding and generation tasks involving images and videos, without requiring explicit training on these modalities. This is achieved by a new method, SPAE, which converts between visual content and lexical tokens of variable length, imbued with rich semantic meaning. Our findings show the great potential of harnessing the vast knowledge and reasoning capabilities of LLMs in the field of computer vision, transcending the limitations of language-only tasks.
**Limitations.** The capability of in-context learning is significantly constrained by the acceptable sequence length. Although our results suggest the plausibility of image generation, the quality and diversity are still far from the recent text-to-image models trained on paired image and text data.
**Broader impact.** Our paper showcases the untapped potential of frozen Large Language Models (LLMs) in multimodal understanding and generation tasks involving images and videos, without requiring explicit training on these modalities. As an initial research proof-of-concept, we focus on in-context learning, which has limitations in learning context and constrained capabilities. Consequently, there is still a substantial gap to the recent specialized models for text-to-image (_e.g._, Stable Diffusion) or image-to-text that have been specifically trained using billions of text-image pairs.
The potential impact of our research lies in its influence on future studies, specifically in the area of interacting with pretrained LLMs to enhance their understanding and generation capabilities in the visual modality. For instance, our work can be extended to explore finetuning or adapter tuning of LLMs on large-scale text-image datasets. Future research in these directions may implicate ethical issues around fairness and transparency, which need to be carefully considered beyond the quality measurements employed in our paper. We have found that the generated tokens occasionally include slang terms or words that create inappropriate connotations related to the subject depicted in the image or video. Such concerns must be thoroughly considered and effectively addressed prior to deploying this method in real-world applications.
## References
* [1] Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. In _NeurIPS_, 2022.
* [2] Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. Palm 2 technical report. _arXiv:2305.10403_, 2023.
* [3] Zalan Borsos, Raphael Marinier, Damien Vincent, Eugene Kharitonov, Olivier Pietquin, Matt Sharifi, Dominik Roblek, Olivier Teboul, David Grangier, Marco Tagliasacchi, et al. Audiolm: a language modeling approach to audio generation. _IEEE/ACM Transactions on Audio, Speech, and Language Processing_, 2023.
* [4] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. In _NeurIPS_, 2020.
* [5] Joao Carreira, Eric Noland, Andras Banki-Horvath, Chloe Hillier, and Andrew Zisserman. A short note about Kinetics-600. _arXiv:1808.01340_, 2018.
* [6] Huiwen Chang, Han Zhang, Jarred Barber, AJ Maschinot, Jose Lezama, Lu Jiang, Ming-Hsuan Yang, Kevin Murphy, William T Freeman, Michael Rubinstein, et al. Muse: Text-to-image generation via masked generative transformers. In _ICML_, 2023.
* [7] Huiwen Chang, Han Zhang, Lu Jiang, Ce Liu, and William T Freeman. MaskGIT: Masked generative image transformer. In _CVPR_, 2022.
* [8] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. _arXiv:2204.02311_, 2022.
* [9] Alexandre Defossez, Jade Copet, Gabriel Synnaeve, and Yossi Adi. High fidelity neural audio compression. _arXiv:2210.13438_, 2022.
* [10] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In _CVPR_, 2009.
* [11] Li Deng. The mnist database of handwritten digit images for machine learning research [best of the web]. _IEEE Signal Processing Magazine_, 29(6):141-142, 2012.
* [12] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In _NAACL_, 2019.
* [13] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In _ICLR_, 2020.
* [14] Patrick Esser, Robin Rombach, and Bjorn Ommer. Taming transformers for high-resolution image synthesis. In _CVPR_, 2021.
* [15] Songwei Ge, Thomas Hayes, Harry Yang, Xi Yin, Guan Pang, David Jacobs, Jia-Bin Huang, and Devi Parikh. Long video generation with time-agnostic VQGAN and time-sensitive transformer. In _ECCV_, 2022.
* [16] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In _NeurIPS_, 2017.
* [17] Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. Parameter-efficient transfer learning for nlp. In _ICLR_, 2019.
* [18] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. In _ICLR_, 2021.
* [19] Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In _ECCV_, 2016.
* [20] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. _arXiv:1412.6980_, 2014.
* [21] Jing Yu Koh, Ruslan Salakhutdinov, and Daniel Fried. Grounding language models to images for multimodal generation. _arXiv:2301.13823_, 2023.
* [22] Doyup Lee, Chiheon Kim, Saehoon Kim, Minsu Cho, and Wook-Shin Han. Autoregressive image generation using residual quantization. In _CVPR_, 2022.
* [23] Brian Lester, Rami Al-Rfou, and Noah Constant. The power of scale for parameter-efficient prompt tuning. In _EMNLP_, 2021.
* [24] Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, et al. Solving quantitative reasoning problems with language models. In _NeurIPS_, 2022.
* [25] Jose Lezama, Tim Salimans, Lu Jiang, Huiwen Chang, Jonathan Ho, and Irfan Essa. Discrete predictor-corrector diffusion models for image synthesis. In _ICLR_, 2023.
* [26] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollar, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In _ECCV_, 2014.
* [27] Alex Liu, SouYoung Jin, Cheng-I Lai, Andrew Rouditchenko, Aude Oliva, and James Glass. Cross-modal discrete representation learning. In _ACL_, 2022.
* [28] Hao Liu, Wilson Yan, and Pieter Abbeel. Language quantized autoencoders: Towards unsupervised text-image alignment. _arXiv:2302.00902_, 2023.
* [29] OpenAI. GPT-4 technical report. _arXiv:2303.08774_, 2023.
* [30] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In _ICML_, 2021.
* [31] Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-shot text-to-image generation. In _ICML_, 2021.
* [32] Mengye Ren, Ryan Kiros, and Richard Zemel. Exploring models and data for image question answering. In _NeurIPS_, 2015.
* [33] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bjorn Ommer. High-resolution image synthesis with latent diffusion models. In _CVPR_, 2022.
* [34] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. In _NeurIPS_, 2016.
* [35] Hung-Yu Tseng, Lu Jiang, Ce Liu, Ming-Hsuan Yang, and Weilong Yang. Regularizing generative adversarial networks under limited data. In _CVPR_, 2021.
* [36] Maria Tsimpoukelli, Jacob L Menick, Serkan Cabi, SM Eslami, Oriol Vinyals, and Felix Hill. Multimodal few-shot learning with frozen language models. In _NeurIPS_, 2021.
* [37] Thomas Unterthiner, Sjoerd van Steenkiste, Karol Kurach, Raphael Marinier, Marcin Michalski, and Sylvain Gelly. Towards accurate generative models of video: A new metric & challenges. _arXiv:1812.01717_, 2018.
* [38] Aaron Van Den Oord, Oriol Vinyals, et al. Neural discrete representation learning. In _NeurIPS_, 2017.
* [39] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In _NeurIPS_, 2017.
* [40] Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Daan Wierstra, et al. Matching networks for one shot learning. In _NeurIPS_, 2016.
* [41] Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. Superglue: A stickier benchmark for general-purpose language understanding systems. In _NeurIPS_, 2019.
* [42] Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, et al. Emergent abilities of large language models. _TMLR_, 2022.
* [43] Chenfei Wu, Shengming Yin, Weizhen Qi, Xiaodong Wang, Zecheng Tang, and Nan Duan. Visual ChatGPT: Talking, drawing and editing with visual foundation models. _arXiv:2303.04671_, 2023.
* [44] Pengcheng Yin, Wen-Ding Li, Kefan Xiao, Abhishek Rao, Yeming Wen, Kensen Shi, Joshua Howland, Paige Bailey, Michele Catasta, Henryk Michalewski, et al. Natural language to code generation in interactive data science notebooks. _arXiv:2212.09248_, 2022.
* [45] Lijun Yu, Yong Cheng, Kihyuk Sohn, Jose Lezama, Han Zhang, Huiwen Chang, Alexander G Hauptmann, Ming-Hsuan Yang, Yuan Hao, Irfan Essa, et al. MAGVIT: Masked generative video transformer. In _CVPR_, 2023.
* [46] Neil Zeghidour, Alejandro Luebs, Ahmed Omran, Jan Skoglund, and Marco Tagliasacchi. Soundstream: An end-to-end neural audio codec. _IEEE/ACM Trans. on Audio, Speech, and Language Processing_, 30:495-507, 2021.
* [47] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In _CVPR_, 2018.
* [48] Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. OPT: Open pre-trained transformer language models. _arXiv:2205.01068_, 2022.
SPAE: Semantic Pyramid AutoEncoder for Multimodal Generation with Frozen LLMs
Supplementary Materials
## Appendix Overview
This supplementary document provides additional details to support our main manuscript, organized as follows:
* Appendix A presents more details on the SPAE model architecture design.
* Appendix B provides additional implementation details, including a video SPAE variant.
* Appendix C includes more quantitative evaluation results.
* Appendix D shows more qualitative examples of model generations.
## Appendix A SPAE Model Architecture
We present additional details about the SPAE model in this section.
**Token pyramid.** Fig. 11 shows an example of the dilation subsampler defined by Eq. (1). We select evenly distributed positions in each layer to form the token pyramid with monotonically increasing layer sizes.
**Streaming average quantization.** Fig. 12 compares our proposed Streaming Average Quantization (SAQ) with Residual Quantization (RQ) [6, 10]. At layer 2, the SAQ remainder embedding \(\mathbf{z}_{2}=2\mathbf{z}-\mathbf{e}(k_{1})\) is at a more similar scale to \(\mathbf{z}\), compared to the RQ remainder \(\mathbf{z}-\mathbf{e}(k_{1})\). We find that the scale consistency promotes better utilization of the frozen language codebook despite a large number of layers being used. Due to the pyramid structure, quantization in the first few layers may be skipped for those positions not selected by the dilation subsampler. Considering the scale consistency across quantization layers, the use of SAQ is more appropriate in this case.
## Appendix B Implementation Details
### SPAE Training
**Image SPAE.** An image SPAE encodes a 128\(\times\)128 image into 16\(\times\)16 embeddings. Following the VQGAN [4] architecture, we use 128 base filters with channel multipliers [1, 2, 2, 4] and 2 residual blocks at each scale, which results in 59M parameters in total.
Figure 11: **Dilation subsampler visualization**.
Figure 12: **Comparison between RQ and SAQ.** We show a 2-layer quantization process in a 2-dimensional space as an example. At layer \(l\), we use blue for the current remainder embeddings \(\mathbf{z}_{l}\), green for current post-quantization embeddings \(\mathbf{e}(k_{l})\), and orange for the reconstructed embeddings up to layer \(l\) as \(\hat{\mathbf{z}}_{\leq l}\).
**Image SPAE-8.** In addition to the primary SPAE model with six pyramid layers studied in the main paper, we also train an SPAE-8 model with eight layers to conduct a more in-depth analysis of the coarse-to-fine reconstruction process. The two extra layers each contain 16\(\times\)16 tokens. The semantic loss is still applied to the first 5 layers, as in the primary model.
**MNIST SPAE.** We train another SPAE on the MNIST [3] dataset with the same architecture setup. We pad the handwritten digit images from 28\(\times\)28 to 32\(\times\)32 pixels, which are then encoded into 4\(\times\)4 embeddings. Each image is represented by 37 tokens organized in four layers, with sizes of 1\(\times\)1, 2\(\times\)2, 4\(\times\)4, and 4\(\times\)4. We replace the CLIP image embedding with the CLIP text embedding of the label for the semantic loss. The model is trained for 10k steps with a batch size of 256. For in-context generation, AR decoding with a stride of 4 is used to produce all 37 tokens.
**Video SPAE.** We initialize a video SPAE by VQGAN inflation [9] from a pretrained image SPAE, which encodes 16 frames at 128\(\times\)128 resolution into 4\(\times\)16\(\times\)16 embeddings. A video SPAE consists of 176M parameters. The pyramid layers contain 1\(\times\)1\(\times\)1, 1\(\times\)2\(\times\)2, 1\(\times\)4\(\times\)4, 2\(\times\)8\(\times\)8, 4\(\times\)16\(\times\)16, and 4\(\times\)16\(\times\)16 tokens. The video embedding is obtained as the average CLIP embedding for all frames. The model is trained on the Kinetics-600 [1] dataset which contains 384k videos. We train with a batch size of 512 for 130k steps, which takes 5.8k TPUv4-hours.
### LLM Prompting
To generate prompts, we utilize SPAE to quantize an image, or another non-linguistic modality, into a pyramid of lexical tokens. Subsequently, we flatten the tokens by concatenating them layer by layer in raster-scan order, resulting in a 1-D string. This string, representing the image, is referred to as the _SPAE string_ in the following prompts.
We use task-specific prompt templates to facilitate answer generation with LLMs. The LLM output is always parsed by removing leading and trailing whitespace or newline characters.
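As a rough illustration of the flattening step, consider the sketch below; the toy tokens and the space-joined layout are our assumptions, not the exact string format used with the LLM APIs.

```python
def spae_string(pyramid):
    """Flatten a token pyramid layer by layer in raster-scan order."""
    return " ".join(tok for layer in pyramid for row in layer for tok in row)

pyramid = [[["coffee"]],                               # layer 1: 1x1
           [["espresso", "cup"], ["brew", "pour"]]]    # layer 2: 2x2
print(spae_string(pyramid))  # "coffee espresso cup brew pour"
```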
**Image classification with GPT 3.5.** We use the same prompt template as LQAE [7] to interact with GPT 3.5. For a 2-way 1-shot classification between class _lion_ and _vase_, the prompt is
For each of the following input output pairs, output is one of ['lion', 'vase']

### Input: <SPAE string from a lion image> Output: lion

### Input: <SPAE string from a vase image> Output: vase

Input: <SPAE string from the query image> Output:
We use greedy decoding to get a maximum of 7 tokens from GPT 3.5.
**Image classification with PaLM 2.** We use the original miniImageNet [8] format with PaLM 2. The prompt looks like
Answer with "lion" or "vase".
<SPAE string from a lion image> This is a lion
<SPAE string from a vase image> This is a vase
<SPAE string from the query image>
What is this? # Only used in 5-way 3/5-shot setups
This is a

We use greedy decoding to get a maximum of 4 tokens from PaLM 2.
**Image captioning.** We use greedy decoding to get a maximum of 20 tokens before the first newline character with the following prompt:
Generate a caption sentence based on words describing an image.
Q: <SPAE string from image 1> A: <Caption for image 1>
Q: <SPAE string from image 2> A: <Caption for image 2>
Q: <SPAE string from the query image> A:
**Visual question answering.** We use greedy decoding to get a maximum of 4 tokens before the first newline character with the following prompt template:
Answer with a single word.
C: <SPAE string from image 1> Q: <Question for image 1> A: <Answer for image 1>
C: <SPAE string from image 2> Q: <Question for image 2> A: <Answer for image 2>
C: <SPAE string from the query image> Q: <Question for the query image> A:
**Image/video generation with AR decoding.** For image or video generation tasks, the condition can be a text string or an SPAE string of a condition image. Suppose we use AR decoding with a stride of 4 tokens. At the 4th step, the prompt looks like
Learn a new language and predict the 4 tokens following the examples.
C:<condition for image 1> Q:<SPAE string (token 1-12) for image 1> A:<SPAE string (token 13-16) for image 1>
C:<condition for image 2> Q:<SPAE string (token 1-12) for image 2> A:<SPAE string (token 13-16) for image 2>
C:<condition for the query> Q:<SPAE string (token 1-12) for the generated image from previous steps> A:
We use PaLM 2 to generate 8 predicted sequences for the next 4 tokens, starting with a temperature \(T_{0}=0\). We use the sentence piece [5] tokenizer to tokenize the output string. If all predictions are shorter than 4 tokens, we retry the LLM prediction with a higher temperature. At the \(i\)-th retry, the temperature is given by
\[T_{i}=\psi\sum_{j=1}^{i}2^{j} \tag{13}\]
where \(\psi=0.01\) is used.
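Eq. (13) transcribes directly; the short sketch below lists the first few retry temperatures.

```python
PSI = 0.01  # psi in Eq. (13)

def retry_temperature(i):
    """T_0 = 0 for the first attempt; T_i = psi * sum_{j=1}^{i} 2^j on retries."""
    return PSI * sum(2 ** j for j in range(1, i + 1))

print([retry_temperature(i) for i in range(5)])  # [0.0, 0.02, 0.06, 0.14, 0.3]
```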
**Image/video generation with NAR decoding.** We use NAR decoding to generate SPAE layer 6 conditioned on layers 1-5. With a stride of 16, the prompt at the 3rd step looks like
Predict the outputs following the examples.
Q:<SPAE string from layer 1-5 for image 1> A:<SPAE string from layer 6 (token 33-48) for image 1>
Q:<SPAE string from layer 1-5 for image 2> A:<SPAE string from layer 6 (token 33-48) for image 2>
Q:<SPAE string from layer 1-5 for the generated image from AR decoding> A:
We use PaLM 2 to generate 8 predicted sequences for the next 16 tokens. If the sentence piece parsing fails, we retry with the same temperature schedule as in AR decoding.
### Corruption Functions
**Pixel-space transformation.** We use pixel-space transformations in the text-to-image generation tasks with the following setups:
* Brightness: \([\pm 0.8,\pm 0.6,\pm 0.4,\pm 0.2]\).
* Contrast: \([\pm 0.8,\pm 0.6,\pm 0.4,\pm 0.2]\).
* Saturation: \([\pm 0.4,\pm 0.3,\pm 0.2,\pm 0.1]\).
* Color (RGB): \([(0.6,1.4,1),(0.7,1.3,1),(0.8,1.2,1),(0.9,1.1,1),\\ (1.1,0.9,1),(1.2,0.8,1),(1.3,0.7,1),(1.4,0.6,1)]\)
Overflow pixels are clipped to \([0,255]\).
**Token-space permutation noise.** Random permutation is used in the in-context denoising setup for image-to-image generation tasks. Specifically, we replace a fraction of tokens each with a random token sampled from the entire 65k vocabulary to satisfy a given corruption rate. The corruption rates for the 10 examples are \([0.5,0.47,0.44,0.41,0.38,0.35,0.32,0.29,0.26,0.23]\). The permutation noise presents a context distribution with expectation at the real image, but does not contain the ground truth tokens to prevent information leakage.
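A minimal numpy sketch of this permutation noise follows; note that a uniform replacement may, with small probability, coincide with the original token, which we do not special-case here.

```python
import numpy as np

rng = np.random.default_rng(0)

def permute_tokens(tokens, rate, vocab_size=65536):
    """Replace round(rate * len) token positions with uniform random tokens."""
    out = np.asarray(tokens).copy()
    n = round(rate * len(out))
    idx = rng.choice(len(out), size=n, replace=False)
    out[idx] = rng.integers(vocab_size, size=n)
    return out

rates = [0.5 - 0.03 * i for i in range(10)]   # 0.50, 0.47, ..., 0.23
tokens = rng.integers(65536, size=597)        # a 6-layer SPAE token sequence
corrupted = [permute_tokens(tokens, r) for r in rates]
print([int((tokens != c).sum()) for c in corrupted])
```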
## Appendix C Additional Quantitative Results
**Few-shot image classification with different SPAE layers.** Tabs. 5 and 6 present the few-shot mini-ImageNet classification performance with each SPAE\({}_{\text{PaLM}}\) layer. These detailed quantitative numbers accompany the findings from Fig. 3. As shown, Layer 3 achieves the best overall performance, as well as the best in most of the setups, balancing between the level of detail and the burden on the LLM.
**Few-shot image classification with SPAE\({}^{\text{disjoint}}\).** Following the previous work of LQAE [7], we train our SPAE on the ImageNet training split [2] and present the comparative results in the main paper. There is a possibility of overlap between the training split of ImageNet and the mini-ImageNet dataset used in the few-shot classification task [8]. Since few studies have investigated this before, we present the results of training SPAE on the ImageNet training split after excluding the 20 classes used in the few-shot mini-ImageNet classification task. This creates an even more challenging setting as the visual classes have never been seen during the training of the tokenizer or the LLMs.
As demonstrated in the lower section of Tab. 5, we present the results of training our tokenizer on the _disjoint_ data, referred to as SPAE\({}^{\text{disjoint}}\). As expected, we observe a slight decrease in performance, since both SPAE and LLMs need to generalize to the test classes that are outside the
training data distribution. Despite the fact that the baseline is trained on unlabeled images sampled from the mini-ImageNet test classes, SPAE\({}^{\text{disjoint}}_{\text{PaLM}}\) still demonstrates a significant improvement over the state-of-the-art baseline on the 2-way benchmarks.
**Token quality with more SPAE layers.** Tab. 7 shows the per-layer reconstruction quality and semantic relevance of tokens from the SPAE-8 model in comparison to the default model. With more token layers, the model gains larger capacity for both semantics and appearance, where the appearance gets pushed into deeper layers. At layers 1 to 6, SPAE-8 yields consistently higher CLIP scores than SPAE. At the last three layers, SPAE-8 also has better reconstruction quality than the last two layers of SPAE. These results suggest the potential of better reconstruction quality and semantic relevance from using more token layers.
## Appendix D Additional Qualitative Examples
**Token pyramid visualization.** Fig. 13 shows tokenization and reconstruction samples by a 6-layer SPAE from the ImageNet validation set. Key concepts are captured in the first few layers, whereas the later layers focus on the visual appearance. In the coffee machine example, many keywords are present to describe various aspects from the stove to the thermometer. In the parrot case, a single unified concept is repeatedly highlighted.
**Coarse-to-fine reconstruction.** Fig. 14 shows reconstruction samples by SPAE-8 from the ImageNet validation set. We compare the reconstructed images from layer 5 to layer 8 to demonstrate the coarse-to-fine progression.
**Image to image generation.** We use AR decoding to produce the first 5 token layers with task-specific conditions, followed by task-agnostic NAR decoding to fill in layer 6. Fig. 15 visualizes the input pairs for the image-to-image generation examples in Fig. 7, with more examples in Fig. 16.
\begin{table}
\begin{tabular}{l l c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{Task Induction} & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \\ Method & \# Layers & Inner shots & 1 & 1 & 3 & 5 & 1 & 1 & 1 & Avg \\ & : \# Tokens & Repeats & 0 & 0 & 0 & 0 & 1 & 3 & 5 & \\ \hline \hline \(\textit{SPAE}_{\text{PaLM}}\) & 1: 1 & PaLM 2 & **26.8** & 52.0 & 50.9 & 49.9 & 51.9 & 48.4 & 47.9 & 46.83 \\ \(\textit{SPAE}_{\text{PaLM}}\) & 2: 5 & PaLM 2 & 23.6 & 64.2 & 68.0 & 69.9 & 63.4 & 62.0 & 60.2 & 58.76 \\ \(\textit{SPAE}_{\text{PaLM}}\) & 3: 21 & PaLM 2 & 20.2 & **65.1** & **73.7** & **74.3** & **66.4** & **67.0** & 66.3 & **61.86** \\ \(\textit{SPAE}_{\text{PaLM}}\) & 4: 85 & PaLM 2 & 16.1 & 58.5 & 67.2 & 69.1 & 64.0 & 66.4 & **67.4** & 58.39 \\ \(\textit{SPAE}_{\text{PaLM}}\) & 5: 341 & PaLM 2 & 12.1 & 46.3 & 55.9 & 67.2 & 43.3 & 46.3 & - & - \\ \(\textit{SPAE}_{\text{PaLM}}\) & 6: 597 & PaLM 2 & 12.1 & 35.7 & - & - & - & - & - & - \\ \hline \hline \end{tabular}
\end{table}
Table 6: **Few-shot classification accuracy on the mini-ImageNet 5-way benchmarks. - means value unavailable due to an infeasible sequence length.**
\begin{table}
\begin{tabular}{l l c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{Task Induction} & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \\ Method & \# Layers & Inner shots & 1 & 1 & 3 & 5 & 1 & 1 & 1 & Avg \\ & : \# Tokens & Repeats & 0 & 0 & 0 & 0 & 1 & 3 & 5 & \\ \hline LQAE [7] & 1: 256 & GPT 3.5 & 1.5 & 35.2 & 68.2 & 69.8 & 68.5 & 68.7 & 65.9 & 53.97 \\ \hline \(\textit{SPAE}_{\text{PaLM}}\) & 1: 1 & PaLM 2 & **34.8** & 77.2 & 81.2 & 80.3 & 74.0 & 73.2 & 71.5 & 70.31 \\ \(\textit{SPAE}_{\text{PaLM}}\) & 2: 5 & PaLM 2 & 32.2 & 84.0 & 88.5 & 88.4 & **85.1** & 83.6 & 82.4 & 77.74 \\ \(\textit{SPAE}_{\text{PaLM}}\) & 3: 21 & PaLM 2 & 27.9 & **84.8** & **92.5** & **92.6** & 84.8 & **85.2** & **85.4** & **79.03** \\ \(\textit{SPAE}_{\text{PaLM}}\) & 4: 85 & PaLM 2 & 22.8 & 81.1 & 91.4 & 90.4 & 82.6 & 84.3 & 84.7 & 76.76 \\ \(\textit{SPAE}_{\text{PaLM}}\) & 5: 341 & PaLM 2 & 21.2 & 77.4 & 88.0 & 79.1 & 84.8 & 74.0 & 76.1 & 71.51 \\ \(\textit{SPAE}_{\text{PaLM}}\) & 6: 597 & PaLM 2 & 21.8 & 73.8 & 70.8 & 62.4 & 64.8 & 62.1 & 58.6 & 59.19 \\ \hline \hline \(\textit{SPAE}_{\text{PaLM}}^{\text{disjoint}}\) & 2: 5 & PaLM 2 & 24.8 & 79.8 & 84.5 & 83.7 & 80.8 & 78.5 & 78.4 & 72.93 \\ \(\textit{SPAE}_{\text{PaLM}}^{\text{disjoint}}\) & 3: 21 & PaLM 2 & 21.4 & 81.4 & 89.2 & 87.9 & 82.6 & 81.7 & 80.6 & 74.98 \\ \hline \hline \end{tabular}
\end{table}
Table 5: **Few-shot classification accuracy on the mini-ImageNet 2-way benchmarks.**
Under the in-context denoising setup, the LLM generates novel images based on the provided context, where multiple different generations can be obtained.
**Image and text generation.** Fig. 17 visualizes the input pairs for the image+text generation example in Fig. 8. The LLM generates a novel image with multiple captions based on the provided context.
**Image to video generation.** Fig. 18 shows an image-to-video example with the frame prediction task. The input is one frame tokenized by the image SPAE, while the output is a 16-frame clip tokenized by the video SPAE. We follow the same two-stage procedure as image-to-image generation, with more steps in each stage to account for the longer sequence. Due to the sequence length limit, only four samples can be fit into the context, which limits the generation performance. Nevertheless, this demonstrates, for the first time, video generation capabilities of a frozen LLM.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Model & \begin{tabular}{c} \# Layers \\ : \# Tokens \\ \end{tabular} & FID\(\downarrow\) & IS\(\uparrow\) & LPIPS\(\downarrow\) & CLIP\(\uparrow\) &
\begin{tabular}{c} Relative \\ CLIP\(\uparrow\) \\ \end{tabular} \\ \hline \multirow{6}{*}{SPAE} & 1: 1 & - & - & - & **0.1879** & **0.7196** \\ & 2: 5 & - & - & - & 0.1868 & 0.7147 \\ & 3: 21 & - & - & - & 0.1815 & 0.6901 \\ & 4: 85 & - & - & - & 0.1711 & 0.6414 \\ & 5: 341 & 9.49 & 109.46 & 0.17 & 0.1604 & 0.5914 \\ & 6: 597 & **4.41** & **133.03** & **0.12** & 0.1577 & 0.5787 \\ \hline \multirow{6}{*}{SPAE-8} & 1: 1 & - & - & - & **0.2051** & **0.8018** \\ & 2: 5 & - & - & - & 0.2046 & 0.7994 \\ \cline{1-1} & 3: 21 & - & - & - & 0.2012 & 0.7834 \\ \cline{1-1} & 4: 85 & - & - & - & 0.1896 & 0.7289 \\ \cline{1-1} & 5: 341 & 43.42 & 49.78 & 0.32 & 0.1709 & 0.6412 \\ \cline{1-1} & 6: 597 & 8.93 & 116.12 & 0.18 & 0.1667 & 0.6213 \\ \cline{1-1} & 7: 853 & 4.78 & 135.01 & 0.13 & 0.1647 & 0.6119 \\ \cline{1-1} & 8: 1109 & **3.89** & **140.55** & **0.11** & 0.1634 & 0.6058 \\ \hline \hline \end{tabular}
\end{table}
Table 7: **Reconstruction quality and semantic relevance of SPAE tokens.**
Figure 13: **Examples of multi-layer image tokenization and reconstruction by a 6-layer SPAE. For visualization purposes only, we use darker cells to show tokens with higher CLIP scores regarding the original image. For non-English sub-word tokens, we show automatic translation for reference in italic fonts below the original token. We show tokens in all six layers, along with reconstructed images from the last two layers.**
Figure 14: **Examples of coarse-to-fine image reconstruction** by SPAE-8. The top 5 layers reconstruct a noisy image. The appearance details gradually get refined as more token layers are aggregated by the streaming average quantization process.
Figure 15: **Examples of image-to-image generation** via in-context denoising. All input samples for the in-context learning are presented for the examples in Fig. 7. The LLM generates novel images based on the provided context. Multiple different generations can be obtained from the same set of context samples.
Figure 16: **More examples of image-to-image generation** via in-context denoising. The LLM generates novel images based on the provided context image pairs.
Figure 17: **Examples of image+text generation**. All input samples for the in-context learning are presented for the example in Fig. 8. The LLM generates a novel image with multiple captions based on the provided context.
Figure 18: **Examples of image-to-video generation**: frame prediction. We follow the same two-stage generation procedure as in image-to-image tasks. Due to the sequence length limit, only four samples can be fit into the context. The generated video clip appears visually different from the context samples, especially around the reflections of the bowl. |
2302.14664 | On Vietoris-Rips complexes of Finite Metric Spaces with Scale $2$ | We examine the homotopy types of Vietoris-Rips complexes on certain finite
metric spaces at scale $2$. We consider the collections of subsets of $[m]=\{1,
2, \ldots, m\}$ equipped with symmetric difference metric $d$, specifically,
$\mathcal{F}^m_n$, $\mathcal{F}_n^m\cup \mathcal{F}^m_{n+1}$,
$\mathcal{F}_n^m\cup \mathcal{F}^m_{n+2}$, and $\mathcal{F}_{\preceq A}^m$.
Here $\mathcal{F}^m_n$ is the collection of size $n$ subsets of $[m]$ and
$\mathcal{F}_{\preceq A}^m$ is the collection of subsets $\preceq A$ where
$\preceq$ is a total order on the collections of subsets of $[m]$ and
$A\subseteq [m]$ (see the definition of $\preceq$ in Section~\ref{Intro}). We
prove that the Vietoris-Rips complexes $\mathcal{VR}(\mathcal{F}^m_n, 2)$ and
$\mathcal{VR}(\mathcal{F}_n^m\cup \mathcal{F}^m_{n+1}, 2)$ are either
contractible or homotopy equivalent to a wedge sum of $S^2$'s; also, the
complexes $\mathcal{VR}(\mathcal{F}_n^m\cup \mathcal{F}^m_{n+2}, 2)$ and
$\mathcal{VR}(\mathcal{F}_{\preceq A}^m, 2)$ are either contractible or
homotopy equivalent to a wedge sum of $S^3$'s. We provide inductive formula for
these homotopy types extending the result of Barmak in \cite{Bar13} about the
independence complexes of Kneser graphs \text{KG}$_{2, k}$ and the result of
Adamaszek and Adams in \cite{AA22} about Vietoris-Rips complexes of hypercube
graphs with scale $2$. | Ziqin Feng, Naga Chandra Padmini Nukala | 2023-02-28T15:30:24Z | http://arxiv.org/abs/2302.14664v3 | # On Vietoris-Rips complexes of finite metric spaces with scale \(2\)
###### Abstract.
We examine the homotopy types of Vietoris-Rips complexes on certain finite metric spaces at scale \(2\). We consider the collections of subsets of \([m]=\{1,2,\ldots,m\}\) equipped with symmetric difference metric \(d\), specifically, \(\mathcal{F}_{n}^{m}\), \(\mathcal{F}_{n}^{m}\cup\mathcal{F}_{n+1}^{m}\), \(\mathcal{F}_{n}^{m}\cup\mathcal{F}_{n+2}^{m}\), and \(\mathcal{F}_{\preceq A}^{m}\). Here \(\mathcal{F}_{n}^{m}\) is the collection of size \(n\) subsets of \([m]\) and \(\mathcal{F}_{\preceq A}^{m}\) is the collection of subsets \(\preceq A\) where \(\preceq\) is a total order on the collections of subsets of \([m]\) and \(A\subseteq[m]\) (see the definition of \(\preceq\) in Section 1). We prove that the Vietoris-Rips complexes \(\mathcal{VR}(\mathcal{F}_{n}^{m},2)\) and \(\mathcal{VR}(\mathcal{F}_{n}^{m}\cup\mathcal{F}_{n+1}^{m},2)\) are either contractible or homotopy equivalent to a wedge sum of \(S^{2}\)'s; also, the complexes \(\mathcal{VR}(\mathcal{F}_{n}^{m}\cup\mathcal{F}_{n+2}^{m},2)\) and \(\mathcal{VR}(\mathcal{F}_{\preceq A}^{m},2)\) are either contractible or homotopy equivalent to a wedge sum of \(S^{3}\)'s. We provide inductive formulae for these homotopy types extending the result of Barmak in [4] about the independence complexes of Kneser graphs \(\text{KG}_{2,k}\) and the result of Adamaszek and Adams in [2] about Vietoris-Rips complexes of hypercube graphs with scale \(2\).
Key words and phrases: Vietoris-Rips Complexes, Simplicial Complexes, Homotopy Types, Kneser Graphs, Hypercube Graphs
2020 Mathematics Subject Classification: 05E45, 55P10, 55N31
## 1. Introduction
Along with the development of topological data analysis [10, 6], determining the homotopy types of Vietoris-Rips complexes of finite metric spaces has become crucial in applied topology. In fact, the idea behind persistent homology is to compute the (co)homology of a Vietoris-Rips complex filtration built on data, which is typically a finite metric space with high dimensions ([5]). Vietoris-Rips complexes were introduced by Vietoris in [17] and then by Rips (see [12]) to approximate a metric space at a chosen scale for different purposes. Additionally, these kinds of complexes have been intensively used in computational topology as a simplicial model for sensor networks ([11, 13, 14]) and as a tool for image processing ([15]).
The Vietoris-Rips complex \(\mathcal{VR}(X;r)\) of a metric space \((X,d)\) with scale \(r\geq 0\) is a simplicial complex with vertex set \(X\), where a finite subset \(\sigma\in[X]^{<\infty}\) is a simplex in \(\mathcal{VR}(X;r)\) if and only if its diameter \(\text{diam}(\sigma)\leq r\). Here, \([X]^{<\infty}\) denotes the collection of all finite subsets of \(X\), and for any subset \(S\) of \(X\), \(\text{diam}(S)\) is defined as the supremum of all distances \(d(x,y)\) between pairs of points \(x,y\in S\). Recent work has focused on studying Vietoris-Rips complexes of circles ([1]), metric graphs ([7]), geodesic spaces ([18, 19]), and more.
In this paper, we investigate the homotopy type of the Vietoris-Rips complex \(\mathcal{VR}(\mathcal{F},2)\) of a specific class of finite metric spaces with scale \(2\). Let \(\mathcal{F}\) be a collection of subsets of \([m]\) for some \(m\in\mathbb{N}\), where \([m]=\{1,2,\ldots,m\}\). We define a metric
\(d\) on \(\mathcal{F}\) such that, for any \(A\) and \(B\) in \(\mathcal{F}\), \(d(A,B)=|A\Delta B|\), where \(A\Delta B\) denotes the symmetric difference of \(A\) and \(B\), i.e., \((A\setminus B)\cup(B\setminus A)\). Hence, \((\mathcal{F},d)\) is a finite metric space. The homotopy type of \(\mathcal{VR}(\mathcal{F},r)\) for \(r\geq 0\) is closely related to the study of the independence complex of Kneser graphs and the Vietoris-Rips complexes of hypercube graphs.
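For readers who wish to experiment with these spaces, the metric and the scale-\(2\) Vietoris-Rips graph are easy to generate by brute force. The following Python sketch is ours and not part of the original development; all function names are illustrative.

```python
from itertools import combinations

def d(A, B):
    """Symmetric difference metric: d(A, B) = |A delta B|."""
    return len(A ^ B)

def F(m, n):
    """The collection F_n^m of all size-n subsets of [m] = {1, ..., m}."""
    return [frozenset(c) for c in combinations(range(1, m + 1), n)]

def vr_edges(vertices, r=2):
    """Edges of VR(vertices, r); a Vietoris-Rips complex is the clique
    complex of this graph, so the graph determines the complex."""
    return [(A, B) for A, B in combinations(vertices, 2) if d(A, B) <= r]

# For example, F_2^4 has 6 vertices and 12 scale-2 edges (pairs sharing a point).
print(len(F(4, 2)), len(vr_edges(F(4, 2))))  # 6 12
```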
The independence complex \(\mathrm{I}_{G}\) of a graph \(G=(V(G),E(G))\) is a simplicial complex whose simplices are the independent sets of vertices of \(G\), i.e., sets of vertices no two of which are adjacent. The Kneser graph \(\mathrm{KG}_{n,k}\) has the \(n\)-subsets of \([2n+k]\) as its vertices and its edges are given by pairs of disjoint such subsets. In particular, two vertices in \(\mathrm{KG}_{n,k}\) are non-adjacent, i.e., not disjoint, if and only if their symmetric difference distance is at most \(2n-1\). Therefore, the independence complex of \(\mathrm{KG}_{n,k}\) is identical to the Vietoris-Rips complex \(\mathcal{VR}(\mathcal{F}_{n}^{2n+k},2n-1)\), where \(\mathcal{F}_{n}^{m}\) denotes the collection of all \(n\)-subsets of \([m]\).
Barmak proved in [4] (Theorem 4.11) that the independence complex of \(\mathrm{KG}_{2,k}\) is homotopy equivalent to \(\bigvee\binom{k+3}{3}S^{2}\). For any \(m\geq 4\), note that \(\mathcal{VR}(\mathcal{F}_{2}^{m},2)=\mathcal{VR}(\mathcal{F}_{2}^{m},3)= \mathrm{I}(\mathrm{KG}_{2,m-4})\); thus, the complex \(\mathcal{VR}(\mathcal{F}_{2}^{m},2)\) is homotopy equivalent to a wedge sum of \(\binom{m-1}{3}\) copies of \(S^{2}\). When \(m=2n\), the complex \(\mathcal{VR}(\mathcal{F}_{n}^{m},m-2)\) has \(\binom{m}{n}\) vertices and is the boundary of a cross-polytope, so it is homotopy equivalent to \(S^{\frac{1}{2}\binom{m}{n}-1}\).
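These wedge-sum descriptions can be spot-checked numerically: a wedge of \(N\) copies of \(S^{2}\) has Euler characteristic \(1+N\), and the Euler characteristic of a clique complex is the alternating sum over the cliques of its graph. The sketch below is ours and assumes the networkx package is available.

```python
import networkx as nx
from math import comb
from itertools import combinations

def euler_char(vertices, r=2):
    """Euler characteristic of VR(vertices, r): the alternating sum
    over all cliques of the scale-r graph."""
    G = nx.Graph()
    G.add_nodes_from(vertices)
    G.add_edges_from((A, B) for A, B in combinations(vertices, 2)
                     if len(A ^ B) <= r)
    return sum((-1) ** (len(c) - 1) for c in nx.enumerate_all_cliques(G))

for m in (5, 6, 7):
    verts = [frozenset(c) for c in combinations(range(1, m + 1), 2)]
    # VR(F_2^m, 2) is a wedge of C(m-1, 3) two-spheres, so chi = 1 + C(m-1, 3)
    assert euler_char(verts) == 1 + comb(m - 1, 3)
```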
The hypercube graph is a graph whose vertices are all binary strings of length \(m\), denoted by \(Q_{m}\), and whose edges are given by pairs of such strings with Hamming distance \(1\). The Hamming distance between any two binary strings with the same length is defined as the number of positions in which their entries differ. We can consider \(Q_{m}\) as a metric space equipped with the Hamming distance, and then the hypercube graph can be identified as the complex \(\mathcal{VR}(Q_{m},1)\).
Adamaszek and Adams investigated the Vietoris-Rips complexes \(\mathcal{VR}(Q_{m},r)\) at small scales \(r=0,1,2\) in their recent work [2]. The complex \(\mathcal{VR}(Q_{m},0)\) is a wedge sum of \((2^{m}-1)\)-many \(S^{0}\)'s, and \(\mathcal{VR}(Q_{m},1)\) is a wedge sum of \(((m-2)2^{m-1}+1)\)-many \(S^{1}\)'s. Their main result is that the complex \(\mathcal{VR}(Q_{m},2)\) is homotopy equivalent to a wedge sum of \(c_{m}\) copies of \(S^{3}\)'s, where \(c_{m}\) is given by \(c_{m}=\sum_{0\leq j<i<m}(j+1)(2^{m-2}-2^{i-1})\). The Cech complexes of the metric space \(Q_{m}\) with scales \(2\) and \(3\) are studied in [3].
Each binary string of length \(m\) can also be considered as the characteristic function of a subset of \([m]\). Hence, there is a natural isometric map between the metric spaces \(Q_{m}\) and \(\mathcal{P}([m])\), where \(\mathcal{P}([m])\) is the collection of all subsets of \([m]\) equipped with the symmetric difference metric \(d\). Adamaszek and Adams in [2] used Polymake [8] and Ripser++ [20] to compute the reduced homology groups of \(\mathcal{VR}(\mathcal{P}[m],3)\) for \(m=5,6,\ldots,9\), with coefficients \(\mathbb{Z}\) or \(\mathbb{Z}/2\mathbb{Z}\). They found that these homology groups are nontrivial only in dimensions \(4\) and \(7\), indicating that the complex \(\mathcal{VR}(\mathcal{P}[m],3)\) is a wedge sum of copies of \(S^{4}\)'s and \(S^{7}\)'s. This suggests that the homotopy type of the complex \(\mathcal{VR}(\mathcal{P}[m],3)\) is more complicated than that of the complexes \(\mathcal{VR}(\mathcal{P}[m],r)\) with \(r=0,1,2\). Shukla [16] subsequently proved that for \(m\geq 5\), the reduced homology group \(\tilde{H}_{i}(\mathcal{VR}(\mathcal{P}([m]),3))\) is nontrivial if and only if \(i\in\{4,7\}\).
In this paper, we extend the study of Vietoris-Rips complexes to other collections of subsets in \([m]\) with scale \(2\) beyond \(\mathcal{F}_{2}^{m}\) and \(\mathcal{P}[m]\). To determine the homotopy type of \(\mathcal{VR}(\mathcal{P}[m],2)\), Adamaszek and Adams in [2] used an inductive proof on the
clique complex of the graph \(G_{\ell}^{2}\), whose vertices are sequences of non-negative integers \(\leq\ell-1\), with edges given by pairs of sequences with Hamming distance \(\leq 2\). We adopt a different inductive process to study these complexes, and our approach is also potentially applicable to the investigation of these complexes at larger scales.
We start by introducing notations for certain collections of subsets of \([m]\). For \(n\leq m\), let \(\mathcal{F}_{\leq n}^{m}\) be the collection of all subsets of \([m]\) with cardinality at most \(n\). It is easy to see that the complex \(\mathcal{VR}(\mathcal{F}_{\leq r}^{m},r)\) is contractible since it is a cone. We now proceed to define a total ordering \(\prec\) on \(\mathcal{P}([m])\) so that we can carry out our induction. For each \(A\subseteq[m]\) with \(|A|=n\), we represent \(A=\{i_{1},i_{2},\ldots,i_{n}\}\) as \(i_{1}i_{2}\cdots i_{n}\) with \(i_{1}<i_{2}<\cdots<i_{n}\). For any \(A,B\subseteq[m]\), we say \(A\prec B\) if one of the following holds:
1. \(|A|<|B|\);
2. \(n=|A|=|B|\), \(A=i_{1}i_{2}\cdots i_{n}\), \(B=j_{1}j_{2}\cdots j_{n}\), and there is a \(k\in\mathbb{N}\) such that \(i_{k}<j_{k}\) and \(i_{\ell}=j_{\ell}\) for any \(\ell<k\).
Clearly this is a total order on \(\mathcal{P}([m])\), and for any subcollection \(\mathcal{F}\) of \(\mathcal{P}([m])\), \((\mathcal{F},\prec)\) is also totally ordered. For any \(A\subset[m]\), we denote \(\mathcal{F}_{\prec A}=\{B:B\prec A\}\) and \(\mathcal{F}_{\preceq A}=\mathcal{F}_{\prec A}\cup\{A\}\).
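In computational terms, \(\prec\) is simply the lexicographic order on the pairs (cardinality, increasing enumeration). A sketch of a sort key and of \(\mathcal{F}_{\preceq A}\) in Python (ours; the helper names are hypothetical):

```python
from itertools import chain, combinations

def order_key(A):
    """Sort key realizing the total order: first compare cardinalities,
    then compare the increasing enumerations i_1 i_2 ... i_n lexicographically."""
    return (len(A), tuple(sorted(A)))

def F_preceq(A, m):
    """The collection of subsets B of [m] with B preceding or equal to A."""
    power_set = chain.from_iterable(
        combinations(range(1, m + 1), k) for k in range(m + 1))
    return [frozenset(B) for B in power_set
            if order_key(frozenset(B)) <= order_key(A)]

print([tuple(sorted(B)) for B in F_preceq(frozenset({1, 3}), 3)])
# [(), (1,), (2,), (3,), (1, 2), (1, 3)]
```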
In this paper, we study the homotopy types of the Vietoris-Rips complexes, \(\mathcal{VR}(\mathcal{F}_{n}^{m},2)\) (Section 4), \(\mathcal{VR}(\mathcal{F}_{\preceq A}^{m},2)\) (Section 5), and \(\mathcal{VR}(\mathcal{F}_{p}^{m}\cup\mathcal{F}_{q}^{m},2)\) (Section 6). We'll show that:
1. the complexes \(\mathcal{VR}(\mathcal{F}_{n}^{m},2)\) and \(\mathcal{VR}(\mathcal{F}_{n}^{m}\cup\mathcal{F}_{n+1}^{m},2)\) are either contractible or homotopy equivalent to a wedge sum of \(S^{2}\)'s;
2. the complexes \(\mathcal{VR}(\mathcal{F}_{\preceq A}^{m},2)\) and \(\mathcal{VR}(\mathcal{F}_{n}^{m}\cup\mathcal{F}_{n+2}^{m},2)\) are either contractible or homotopy equivalent to a wedge sum of \(S^{3}\)'s.
Furthermore, we'll identify inductive formulas for determining the homotopy types of these complexes.
We start with some easy observations about the homotopy types of such complexes. For any collection \(\mathcal{F}\) of subsets of \([m]\), \(\mathcal{VR}(\mathcal{F},0)\) is \(|\mathcal{F}|\) disjoint vertices. Also, for any \(1\leq n\leq m-1\), \(\mathcal{VR}(\mathcal{F}_{n}^{m},1)\) is \(\binom{m}{n}\) disjoint vertices since \(d(A,B)\geq 2\) for any two different subsets \(A,B\) with cardinality \(n\). For each \(i=0,1,\ldots,m\), the metric space \(\mathcal{F}_{i}^{m}\) is isometric to \(\mathcal{F}_{m-i}^{m}\); hence \(\mathcal{VR}(\mathcal{F}_{i}^{m},r)\) is homotopy equivalent to \(\mathcal{VR}(\mathcal{F}_{m-i}^{m},r)\) for each \(r\geq 0\). The complexes \(\mathcal{VR}(\mathcal{F}_{1}^{m},2)\) and \(\mathcal{VR}(\mathcal{F}_{m-1}^{m},2)\) are contractible because each pair of their vertices has distance \(2\). Hence the complex \(\mathcal{VR}(\mathcal{F}_{n}^{m},2)\) is contractible when \(n=0,1,m-1\), or \(m\). Similarly, the complex \(\mathcal{VR}(\mathcal{F}_{n}^{m}\cup\mathcal{F}_{n+1}^{m},2)\) is contractible if \(n=0\) or \(n=m-1\).
## 2. Notations and Preliminary Results
**Topological Spaces and Wedge sums.** We write \(X\simeq Y\) when \(X\) and \(Y\) are homotopy equivalent. We denote by \(S^{k}\) the \(k\)-dimensional sphere. For topological spaces \(X\) and \(Y\), their wedge sum \(X\lor Y\) is the space obtained by gluing \(X\) and \(Y\) together at a single point. The homotopy type of \(X\lor Y\) is independent of the choice of points if \(X\) and \(Y\) are connected CW complexes. For \(k\geq 1\), \(\vee_{k}X\) denotes the \(k\)-fold wedge sum of \(X\). We denote by \(\Sigma X\) the suspension of \(X\). For any sphere \(S^{k}\), \(\Sigma S^{k}\) is homeomorphic to \(S^{k+1}\). A function \(f\) from \(X\) to \(Y\) is said to be null-homotopic if it is homotopic to a constant map. It is well-known that any map from \(S^{n}\) to \(S^{m}\) is null-homotopic when \(n<m\).
Any two metric spaces \((X,d_{X})\) and \((Y,d_{Y})\) are said to be isometric if there is a bijective distance-preserving map \(f\) from \(X\) to \(Y\), i.e., \(d_{X}(x_{1},x_{2})=d_{Y}(f(x_{1}),f(x_{2}))\) for any \(x_{1},x_{2}\in X\). Hence if \(X\) and \(Y\) are isometric, then it is straightforward to verify that \(\mathcal{VR}(X,r)\) is homeomorphic to \(\mathcal{VR}(Y,r)\) for any \(r\geq 0\).
**Simplicial complexes.** A simplicial complex \(K\) on a vertex set \(V\) is a collection of subsets of \(V\) such that: i) all singletons are in \(K\); and ii) if \(\sigma\in K\) and \(\tau\subset\sigma\), then \(\tau\in K\). For a complex \(K\), we use \(K^{(k)}\) to represent the \(k\)-skeleton of \(K\), which is a subcomplex of \(K\). For vertices \(v_{1},v_{2},\ldots,v_{k}\) in a complex \(K\), if they span a simplex in \(K\), then we denote the simplex by \(\{v_{1},v_{2},\ldots,v_{k}\}\). If \(\sigma\) and \(\tau\) are simplices in \(K\) with \(\sigma\subset\tau\), we say \(\sigma\) is a face of \(\tau\). We say a simplex is a maximal simplex (or a facet) if it is not a face of any other simplex. We say that \(L\) is a full subcomplex of \(K\) if it contains all the simplices in \(K\) spanned by the vertices in \(L\).
If \(\sigma\) is a \(k\)-simplex and \(K_{\sigma}\) is the complex generated by \(\sigma\), then \(K_{\sigma}^{(n)}\) is homotopy equivalent to a wedge sum of \(\binom{k}{n+1}\)-many copies of \(S^{n}\) for any \(n<k\).
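This count is consistent with a quick Euler characteristic computation: the \(n\)-skeleton of a \(k\)-simplex has \(\sum_{i=0}^{n}(-1)^{i}\binom{k+1}{i+1}\) as its Euler characteristic, which must equal \(1+(-1)^{n}\binom{k}{n+1}\), the Euler characteristic of a wedge of \(\binom{k}{n+1}\) copies of \(S^{n}\). A short check of this identity (our code):

```python
from math import comb

for k in range(1, 12):
    for n in range(k):
        chi_skeleton = sum((-1) ** i * comb(k + 1, i + 1) for i in range(n + 1))
        chi_wedge = 1 + (-1) ** n * comb(k, n + 1)  # wedge of C(k, n+1) n-spheres
        assert chi_skeleton == chi_wedge
```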
A complex \(K\) is _clique_ if \(\sigma\in K\) for each non-empty set of vertices \(\sigma\) such that \(\{v,w\}\in K\) for any \(v,w\in\sigma\). For any graph \(G=(V,E)\), we denote by \(\operatorname{Cl}(G)\) the clique complex of \(G\), whose vertex set is \(V\) and which contains a finite subset \(\sigma\subset V\) as a simplex if each pair of vertices in \(\sigma\) forms an edge in \(G\). By definition, the Vietoris-Rips complex over any metric space is clique.
The following result is proved in [9]. This is an important method to investigate the homotopy type of a complex by splitting it into two or more subcomplexes.
**Lemma 1**.: _Suppose that the simplicial complex \(K=K_{1}\cup K_{2}\) is such that the inclusion maps \(\imath_{1}:K_{1}\cap K_{2}\to K_{1}\) and \(\imath_{2}:K_{1}\cap K_{2}\to K_{2}\) are both null-homotopic. Then_
\[K\simeq K_{1}\lor K_{2}\vee\Sigma(K_{1}\cap K_{2}).\]
The next lemma (see [2], Lemma 1) is an easy corollary of this result. For any vertex \(v\) in a complex \(K\), \(K\setminus v\) denotes the induced complex on the vertex set \(K^{(0)}\setminus\{v\}\). The star of a vertex \(v\) in \(K\) is \(\operatorname{st}_{K}(v)=\{\sigma:\sigma\cup\{v\}\in K\}\). Hence for any \(v\in V\), \(\operatorname{st}_{K}(v)\) is contractible because it is a cone with apex \(v\), namely \(v*\operatorname{lk}_{K}(v)\) where \(\operatorname{lk}_{K}(v)=\{\sigma:\sigma\cup\{v\}\in K\text{ and }v\notin\sigma\}\).
**Lemma 2**.: _If \(v\) is a vertex in \(K\) with the inclusion map \(\imath:\operatorname{lk}_{K}(v)\to K\) being null-homotopic, then \(K\) is homotopy equivalent to \(K\setminus v\vee\Sigma(\operatorname{lk}_{K}(v))\)._
Also in this paper for convenience, we set \(\sum_{i=a}^{b}f(i)=0\) when \(b<a\).
## 3. Star Clusters of a Subcomplex
To investigate the topology of the independence complexes of graphs, Barmak [4] introduced a general tool with which he answered a question arising from the works of Engstrom and Jonsson and investigated many examples appearing in the literature. It turns out this concept is a powerful tool for understanding general simplicial complexes. For any subcomplex \(L\) of \(K\), we define the _star cluster_ of \(L\) in \(K\) as the subcomplex
\[\operatorname{SC}_{K}(L)=\bigcup_{v\in L}\operatorname{st}_{K}(v).\]
If \(\sigma\) is a simplex in \(K\), Barmak in [4] proved that \(\operatorname{SC}_{K}(\sigma)\) is contractible, hence homotopy equivalent to \(\sigma\). In general, given that \(L\) is a subcomplex of \(K\), \(\operatorname{SC}_{K}(L)\) need not be homotopy equivalent to \(L\), as shown in the example below.
**Example 3**.: _Let \(K=\mathcal{VR}(\mathcal{P}([2]),1)\) and \(L\) be the full subcomplex with vertices \(\{\emptyset,\{1\},\{2\}\}\). Then \(L\) is contractible, while on the other hand \(\text{SC}_{K}(L)=K\), which is homotopy equivalent to \(S^{1}\)._
Next, we'll give a sufficient condition under which the star cluster of a subcomplex \(L\) in \(K\) is homotopy equivalent to \(L\). This result is a generalization of Barmak's result about \(\text{SC}_{K}(\sigma)\) being contractible for any simplex \(\sigma\) in \(K\); and it is also heavily used to determine the homotopy type of simplicial complexes in this paper.
**Lemma 4**.: _Let \(K\) be a clique complex and \(L\) a clique subcomplex of \(K\). Suppose that, for any pair of vertices \(v,w\in L\) with \((\text{st}_{K}(v)\cap\text{st}_{K}(w))\setminus L\neq\emptyset\), the edge \(\{v,w\}\) is in \(L\). Then the following hold:_
1. \(L\) _is a full subcomplex of_ \(K\)_;_
2. _for any collection of vertices,_ \(v_{1},v_{2},\ldots,v_{\ell}\) _in_ \(L\)_, the complex_ \(L^{\prime}=L\cup\bigcup_{i=1}^{\ell}\text{st}_{K}(v_{i})\) _is homotopy equivalent to_ \(L\)_._
_In particular, ii) implies that \(\text{SC}_{K}(L)\) is homotopy equivalent to \(L\)._
Proof.: First we prove i). Let \(\sigma=\{w_{0},w_{1},\ldots,w_{k}\}\) be a simplex in \(K\) with \(w_{j}\in L\) for each \(j=0,1,\ldots,k\). Take an arbitrary pair \(w_{j},w_{j^{\prime}}\) of vertices from \(\sigma\) with \(j\neq j^{\prime}\). Suppose, for contradiction, that \(\{w_{j},w_{j^{\prime}}\}\notin L\). Since the \(1\)-simplex \(\{w_{j},w_{j^{\prime}}\}\) is in \(K\), it is in both \(\text{st}_{K}(w_{j})\) and \(\text{st}_{K}(w_{j^{\prime}})\). Hence, \((\text{st}_{K}(w_{j})\cap\text{st}_{K}(w_{j^{\prime}}))\setminus L\neq\emptyset\). Then by the assumption, the edge \(\{w_{j},w_{j^{\prime}}\}\in L\), which is a contradiction. Therefore each pair of vertices in \(\sigma\) forms an edge in \(L\). Since \(L\) is clique, \(\sigma\in L\).
We'll prove ii) by induction. Suppose that the vertices \(v_{1},v_{2},\ldots,v_{k-1}\) in \(L\) satisfy that the complex \(L_{0}=L\cup\bigcup\{\text{st}_{K}(v_{i}):i=1,2,\ldots,k-1\}\simeq L\). This clearly holds when \(k=1\). Let \(v_{k}\) be any other vertex in \(L\) and \(L_{1}=L_{0}\cup\text{st}_{K}(v_{k})\). We'll show that \(L_{1}\simeq L\).
We claim that \(L_{0}\cap\text{st}_{K}(v_{k})=\text{st}_{L_{0}}(v_{k})\). Note that both \(\text{st}_{K}(v_{k})\) and \(\text{st}_{L_{0}}(v_{k})\) are contractible, hence so is \(\Sigma(\text{st}_{L_{0}}(v_{k}))\). Then by Lemma 1 and the inductive assumption,
\[L_{1}=L_{0}\cup\text{st}_{K}(v_{k})\simeq L_{0}\vee\Sigma(\text{st}_{L_{0}}(v _{k}))\vee\text{st}_{K}(v_{k})\simeq L_{0}\simeq L.\]
Next we prove our claim above. The inclusion \(\text{st}_{L_{0}}(v_{k})\subseteq L_{0}\cap\text{st}_{K}(v_{k})\) is clear from definition. Then, take a simplex \(\sigma\in L_{0}\cap\text{st}_{K}(v_{k})\) and we'll prove \(\sigma\in\text{st}_{L_{0}}(v_{k})\) in the following two cases.
1. Suppose that all the vertices of \(\sigma\) are in \(L\). Since \(\sigma\in\text{st}_{K}(v_{k})\), \(\sigma\cup\{v_{k}\}\) is a simplex in \(K\) whose vertices are in \(L\). Then by i), \(\sigma\cup\{v_{k}\}\in L\subseteq L_{0}\); hence \(\sigma\in\text{st}_{L_{0}}(v_{k})\).
2. Suppose that the simplex \(\sigma\) contains at least one vertex not in \(L\). Then clearly \(\sigma\notin L\). Since \(\sigma\in L_{0}\), there exists at least one \(k_{0}\) with \(1\leq k_{0}\leq k-1\) such that \(\sigma\in\operatorname{st}_{K}(v_{k_{0}})\). So \(\sigma\cup\{v_{k_{0}}\}\) is a simplex in \(K\). Since \(\sigma\in\operatorname{st}_{K}(v_{k})\), \(\sigma\cup\{v_{k}\}\) is also a simplex in \(K\). Also note that \(\sigma\in(\operatorname{st}_{K}(v_{k_{0}})\cap\operatorname{st}_{K}(v_{k}))\setminus L\). By the assumption, \(\{v_{k_{0}},v_{k}\}\) is an edge in \(L\subseteq K\). Since \(K\) is clique, \(\sigma\cup\{v_{k_{0}},v_{k}\}\) is a simplex in \(K\); and this simplex is in \(\operatorname{st}_{K}(v_{k_{0}})\subseteq L_{0}\). Hence the simplex \(\sigma\cup\{v_{k_{0}},v_{k}\}\) is in \(\operatorname{st}_{L_{0}}(v_{k})\), which implies that \(\sigma\) is also in \(\operatorname{st}_{L_{0}}(v_{k})\).
Next, we give a way to split a complex \(K\) into a union of two subcomplexes using star clusters. We can then apply Lemma 1 to investigate the homotopy type of the complex \(K\).
**Lemma 5**.: _Let \(K\) be a simplicial complex and \(K_{1},K_{2}\) be subcomplexes of \(K\) such that_
* \(K^{(0)}=K_{1}^{(0)}\cup K_{2}^{(0)}\)_;_
* \(\sigma\in K_{2}\) _if_ \(\sigma\) _is in_ \(K\) _and the vertices of_ \(\sigma\) _are in_ \(K_{2}\)_._
_Then \(K=\text{SC}_{K}(K_{1})\cup K_{2}\)._
Proof.: Let \(\sigma\) be a simplex of \(K\). If one of \(\sigma\)'s vertices, namely \(v\), is in \(K_{1}\), then \(\sigma\in\operatorname{st}_{K}(v)\subseteq\text{SC}_{K}(K_{1})\); otherwise, \(\sigma\in K_{2}\) by the assumption.
## 4. Vietoris-Rips Complex \(\mathcal{VR}(\mathcal{F}_{n}^{m},2)\)
Starting from this section, each vertex of a complex is a subset of \([m]\) and we'll use \(A\), \(B\), \(C\), or \(D\) to represent them. For any subset \(C\) of \([m]\), denote \(N[C]=\{A\in\mathcal{P}([m]):C\subset A\text{ and }|A\backslash C|=1\}\) and \(L[C]=\{A\in\mathcal{P}([m]):A\subset C\text{ and }|C\backslash A|=1\}\).
Fix \(n,m\in\mathbb{N}\) with \(n<m\). For any \(\{i_{1},i_{2},\ldots,i_{n},i_{n+1}\}\subseteq[m]\) with \(i_{1}<i_{2}<\ldots<i_{n}<i_{n+1}\), we get that
\[N[i_{1},i_{2},\ldots,i_{n-1}]=\{A\in\mathcal{F}_{n}^{m}:\{i_{1},i_{2},\ldots,i _{n-1}\}\subset A\},\text{ and }\]
\[L[i_{1},i_{2},\ldots,i_{n+1}]=\{i_{1}i_{2}\cdots\hat{i_{j}}\cdots i_{n+1}:j\in \{1,...,n+1\}\}.\]
Assume that \(m\geq n+2\). We claim that \(N[i_{1},i_{2},\ldots,i_{n-1}]\) and \(L[i_{1},i_{2},\ldots,i_{n+1}]\) are maximal simplices in the complex \(\mathcal{VR}(\mathcal{F}_{n}^{m},2)\). It is clear that \(N[i_{1},i_{2},\ldots,i_{n-1}]\) is an \((m-n)\)-simplex and \(L[i_{1},i_{2},\ldots,i_{n+1}]\) is an \(n\)-simplex in \(\mathcal{VR}(\mathcal{F}_{n}^{m},2)\). Let \(A\) be an \(n\)-subset of \([m]\) such that \(A\notin N[i_{1},i_{2},\ldots,i_{n-1}]\); without loss of generality, we assume that \(i_{1}\notin A\), so that \(|A\cap\{i_{1},\ldots,i_{n-1}\}|\leq n-2\). Note that \(d(A,B)=2n-2|A\cap B|\) for any \(B\in N[i_{1},\ldots,i_{n-1}]\). If \(|A\cap\{i_{1},\ldots,i_{n-1}\}|\leq n-3\), then every \(B\in N[i_{1},\ldots,i_{n-1}]\) satisfies \(|A\cap B|\leq n-2\) and hence \(d(A,B)\geq 4\). If \(|A\cap\{i_{1},\ldots,i_{n-1}\}|=n-2\), then \(A=(A\cap\{i_{1},\ldots,i_{n-1}\})\cup\{i,j\}\) for some \(i,j\notin\{i_{1},\ldots,i_{n-1}\}\); since \(m\geq n+2\), we can pick \(k\in[m]\setminus\{i,j,i_{1},\ldots,i_{n-1}\}\), and then \(k\notin A\), so \(B=\{i_{1},\ldots,i_{n-1},k\}\) satisfies \(d(A,B)=4\). Hence \(N[i_{1},i_{2},\ldots,i_{n-1}]\) is a maximal simplex in \(\mathcal{VR}(\mathcal{F}_{n}^{m},2)\). The proof that \(L[i_{1},i_{2},\ldots,i_{n+1}]\) is a maximal simplex in \(\mathcal{VR}(\mathcal{F}_{n}^{m},2)\) is similar and we skip it.
For convenience in this paper, we will use \(N[i_{1},i_{2},\ldots,i_{n-1}]\) or \(L[i_{1},i_{2},\ldots,i_{n+1}]\) to represent both a simplex and the subcomplex generated by the simplex in \(\mathcal{VR}(\mathcal{F}_{n}^{m},2)\) or any other complexes containing them.
For a complex \(K\), let \(M(K)\) be the collection of maximal simplices in \(K\). Clearly \(K=\bigcup M(K)\). Hence it is important to understand the collection of maximal simplices in a complex. Next, we show that there are only these two types of maximal simplices in \(\mathcal{VR}(\mathcal{F}_{n}^{m},2)\).
**Lemma 6**.: _Fix \(n,m\in\mathbb{N}\) with \(1<n<m\). Let \(K\) be the complex \(\mathcal{VR}(\mathcal{F}_{n}^{m},2)\)._
1. _Any maximal simplex_ \(\sigma\) _in_ \(K\) _is either_ \(N[i_{1},i_{2},..,i_{n-1}]\) _or_ \(L[i_{1},i_{2},..,i_{n+1}]\) _for_ \(i_{1},i_{2},i_{3},...,i_{n+1}\in[m]\) _with_ \(i_{1}<i_{2}<...<i_{n}<i_{n+1}\)_._
2. _For any_ \(k\geq 2\) _and_ \(\{A_{1},A_{2},\ldots,A_{k+1}\}\) _being a_ \(k\)_-simplex in_ \(K\) _such that_ \(|\bigcap_{\ell=1}^{k+1}A_{\ell}|<n-1\)_, the only maximal simplex containing_ \(\{A_{1},A_{2},\ldots,A_{k+1}\}\) _as a face is_ \(L[A_{1}\cup A_{2}]\)_._
Proof.: To prove i), we pick a maximal simplex \(\sigma\) in the complex \(K\).
We consider the set \(\bigcap\sigma\). If \(|\bigcap\sigma|=n-1\), then clearly \(\sigma\) is one of the simplices in the form \(N[i_{1},i_{2},..,i_{n-1}]\).
We claim that the size of the set \(\bigcap\sigma\) cannot be strictly between \(0\) and \(n-1\). For the purpose of contradiction, we suppose that \(0<|\bigcap\sigma|<n-1\). Let \(|\bigcap\sigma|=k\) with \(0<k<n-1\) and list \(\bigcap\sigma\) as \(\{i_{1},i_{2},\cdots,i_{k}\}\). Pick \(A\in\sigma\) such that \(A\setminus\bigcap\sigma=\{j_{1},j_{2},\ldots,j_{n-k}\}\). For each \(\ell=1,2,\ldots,n-k\), pick \(B_{\ell}\in\sigma\) such that \(j_{\ell}\notin B_{\ell}\). Also \(|B_{\ell}\setminus A|=1\) because \(d(B_{\ell},A)=2\) for each \(\ell\). Since \(k<n-1\), \(n-k\geq 2\). Let \(i_{0}\) be the number in \(B_{1}\setminus A\) and \(j_{0}\) be the number in \(B_{2}\setminus A\). If \(i_{0}\neq j_{0}\), then \(B_{1}\Delta B_{2}\supseteq\{j_{1},i_{0},j_{2},j_{0}\}\), which is a contradiction. So \(i_{0}=j_{0}\). Therefore, by induction, \(B_{\ell}\setminus A=\{i_{0}\}\) for each \(\ell=1,2,\ldots,n-k\); moreover, the same argument shows that \(D\setminus A=\{i_{0}\}\) for every \(D\in\sigma\setminus\{A\}\). Now let \(C=\{i_{0},i_{2},\ldots,i_{k},j_{1},\ldots,j_{n-k}\}\). Then \(C\Delta A=\{i_{0},i_{1}\}\), and \(C\Delta D=\{i_{1},j\}\) for each \(D=(A\setminus\{j\})\cup\{i_{0}\}\in\sigma\); hence \(d(C,D)\leq 2\) for every \(D\in\sigma\). If \(C\) is in \(\sigma\), then \(i_{1}\notin\bigcap\sigma\); and if \(C\) is not in \(\sigma\), then \(\sigma\cup\{C\}\) is a simplex and \(\sigma\) is not a maximal simplex. These contradictions show that it is impossible that \(0<|\bigcap\sigma|<n-1\).
Now we suppose that \(\bigcap\sigma=\emptyset\). Pick \(A\in\sigma\) and represent \(A\) as \(i_{1}i_{2}\cdots i_{n}\). For each \(\ell=1,2,\ldots,n\), there exists \(B_{\ell}\in\sigma\) such that \(i_{\ell}\notin B_{\ell}\). Using the argument above, we can show that \(B_{\ell}\setminus A=B_{\ell^{\prime}}\setminus A\) for each \(\ell,\ell^{\prime}=1,2,\ldots,n\). Denote \(B_{1}\setminus A=\{i_{n+1}\}\). Then clearly \(\sigma=L[i_{1},i_{2},\ldots,i_{n+1}]\).
To prove ii), we start with a \(k\)-simplex \(\{A_{1},A_{2},\ldots,A_{k+1}\}\) in \(K\) such that \(|\bigcap_{\ell=1}^{k+1}A_{\ell}|<n-1\) and \(k\geq 2\). If \(\sigma\) is a maximal simplex in \(K\) containing \(\{A_{1},A_{2},\ldots,A_{k+1}\}\), then \(\bigcap\sigma=\emptyset\) by the argument above, and hence \(\sigma\) is in the form \(L[i_{1},i_{2},\ldots,i_{n+1}]\). Clearly \(A_{1}\cup A_{2}\subseteq\{i_{1},i_{2},\ldots,i_{n+1}\}\), which means \(A_{1}\cup A_{2}=\{i_{1},i_{2},\ldots,i_{n+1}\}\) because \(|A_{1}\cup A_{2}|=n+1\). It is clear that no other maximal simplex contains this simplex.
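Part i) of Lemma 6 can also be confirmed mechanically for small parameters by listing the maximal cliques of the scale-\(2\) graph: each should either have an \((n-1)\)-element common core (the \(N\)-type) or consist of all \(n\)-subsets of an \((n+1)\)-set (the \(L\)-type). A sketch of such a check (ours; networkx assumed):

```python
import networkx as nx
from itertools import combinations

def check_lemma6(m, n):
    verts = [frozenset(c) for c in combinations(range(1, m + 1), n)]
    G = nx.Graph()
    G.add_nodes_from(verts)
    G.add_edges_from((A, B) for A, B in combinations(verts, 2)
                     if len(A ^ B) <= 2)
    for clique in nx.find_cliques(G):  # the maximal cliques of the graph
        core = frozenset.intersection(*clique)
        union = frozenset.union(*clique)
        is_N_type = len(core) == n - 1 and len(clique) == m - n + 1
        is_L_type = len(union) == n + 1 and len(clique) == n + 1
        assert is_N_type or is_L_type
    return True

print(all(check_lemma6(m, n) for m in (5, 6) for n in (2, 3)))  # True
```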
We need one more result before the discussion of the homotopy types of the complex \(\mathcal{VR}(\mathcal{F}_{n}^{m},2)\). Assume \(n\geq 1\). Fix a number \(a\in[m]\) and let \(\mathcal{S}_{a}=\{A:A\in\mathcal{F}_{n}^{m}\text{ and }a\in A\}\). There is a natural isometric mapping between the metric spaces \(\mathcal{F}_{n-1}^{m-1}\) and \(\mathcal{S}_{a}\), obtained by removing \(a\) from each member of \(\mathcal{S}_{a}\). Hence \(\mathcal{VR}(\mathcal{F}_{n-1}^{m-1},2)\) is homeomorphic to \(\mathcal{VR}(\mathcal{S}_{a},2)\). Next, we show that the homotopy type of the star cluster of the latter in \(K\) remains the same.
**Lemma 7**.: _Let \(n,m\) be in \(\mathbb{N}\) such that \(n<m\). Define \(\mathcal{S}_{1}=\{A\subset[m]:|A|=n\text{ and }1\in A\}\) and let \(L\) be the complex \(\mathcal{VR}(\mathcal{S}_{1},2)\). Then_
\[\text{SC}_{\mathcal{VR}(\mathcal{F}_{n}^{m},2)}(L)\simeq L.\]
Proof.: Let \(K=\mathcal{VR}(\mathcal{F}_{n}^{m},2)\) and pick \(A\) and \(B\) in \(L\) such that \(\{A,B\}\) is not an edge in \(L\), i.e., \(|A\Delta B|\geq 4\). Hence there exist natural numbers \(i_{1},i_{2},j_{1}\) and \(j_{2}\) such that \(\{i_{1},i_{2}\}\subseteq A\setminus B\) and \(\{j_{1},j_{2}\}\subseteq B\setminus A\).
Suppose, for contradiction, that \((\text{st}_{K}(A)\cap\text{st}_{K}(B))\setminus L\neq\emptyset\). We pick \(C\in(\text{st}_{K}(A)\cap\text{st}_{K}(B))\setminus L\). Clearly \(1\notin C\). We claim that \(A\setminus\{1\}\subset C\); otherwise there exists \(i_{0}\neq 1\) such that \(i_{0}\in A\setminus C\), whence \(|A\setminus C|\geq 2\), which is a contradiction. Similarly, \(B\setminus\{1\}\subset C\). Therefore, \(\{i_{1},i_{2},j_{1},j_{2}\}\subset C\), which means that \(C\) has at least \((n-3)+4=n+1\) elements, which is a contradiction. Then by Lemma 4, the star cluster \(\operatorname{SC}_{K}(L)\) is homotopy equivalent to the complex \(L\).
Now we are ready to give an inductive discussion of the homotopy types of \(\mathcal{VR}(\mathcal{F}_{n}^{m},2)\).
**Theorem 8**.: _Suppose that \(1<n<m-1\). The complex \(\mathcal{VR}(\mathcal{F}_{n}^{m},2)\) is homotopy equivalent to a wedge sum of spheres. Specifically,_
\[\mathcal{VR}(\mathcal{F}_{n}^{m},2)\simeq(\bigvee_{\binom{m-1}{n+1}\cdot\binom{ n}{2}}S^{2})\vee\mathcal{VR}(\mathcal{F}_{n-1}^{m-1},2).\]
Proof.: Notice that the complex \(\mathcal{VR}(\mathcal{F}_{1}^{m-1},2)\) is contractible. Hence the result holds when \(n=2\) by Barmak's result mentioned above.
Assume that \(n>2\) and that \(\mathcal{VR}(\mathcal{F}_{n-1}^{m-1},2)\) is homotopy equivalent to a wedge sum of copies of \(S^{2}\). We denote \(K=\mathcal{VR}(\mathcal{F}_{n}^{m},2)\). As in Lemma 7, let \(\mathcal{S}_{1}=\{A\subset[m]:|A|=n\text{ and }1\in A\}\) and \(L\) be the complex \(\mathcal{VR}(\mathcal{S}_{1},2)\). Then the complex \(L\) is homeomorphic to \(\mathcal{VR}(\mathcal{F}_{n-1}^{m-1},2)\), which is a wedge sum of \(S^{2}\)'s by the assumption. Also by Lemma 7, the star cluster \(\operatorname{SC}_{K}(L)\) is homotopy equivalent to \(L\).
Now we examine the collection of maximal simplices in \(K\) to decide which of them are not in \(\operatorname{SC}_{K}(L)\). Notice that any maximal simplex of the form \(N[i_{1},i_{2},\ldots,i_{n-1}]\) or \(L[1,i_{1},\ldots,i_{n}]\) contains at least one vertex containing \(1\) for any \(i_{1},i_{2},\ldots,i_{n}\in[m]\); hence any such simplex is in \(\operatorname{SC}_{K}(L)\). Therefore the maximal simplices not contained in \(\operatorname{SC}_{K}(L)\) are exactly those of the form \(L[i_{1},i_{2},\ldots,i_{n+1}]\) with \(i_{k}\neq 1\) for all \(k=1,2,\ldots,n+1\); there are \(\binom{m-1}{n+1}\)-many such simplices, and we list them as \(\{\sigma_{1},\sigma_{2},\ldots,\sigma_{\binom{m-1}{n+1}}\}\). Here, let \(K_{\sigma_{\ell}}\) be the complex generated by \(\sigma_{\ell}\) for each \(\ell=1,2,\ldots,\binom{m-1}{n+1}\).
For each \(\ell\) with \(1\leq\ell\leq\binom{m-1}{n+1}\), we denote by \(L_{\ell}\) the complex whose maximal simplices are \(\{\sigma_{j}:j=1,2,\ldots,\ell\}\); the vertex set of \(L_{\binom{m-1}{n+1}}\) is the collection \(\mathcal{S}_{2}\) of \(n\)-subsets of \([m]\) not containing \(1\). Since \(K\) is the union of its maximal simplices, and each of them either lies in \(\operatorname{SC}_{K}(L)\) or is one of the \(\sigma_{j}\)'s, we get \(K=\operatorname{SC}_{K}(L)\cup L_{\binom{m-1}{n+1}}\).
We claim that \(\operatorname{SC}_{K}(L)\cup L_{\ell}\) is homotopy equivalent to \((\bigvee_{\ell\cdot\binom{n}{2}}S^{2})\vee\mathcal{VR}(\mathcal{F}_{n-1}^{m-1},2)\) for each \(\ell=1,2,\ldots,\binom{m-1}{n+1}\). This claim finishes the proof. Next, we prove this claim by induction. For convenience, denote \(L_{0}=\emptyset\).
Suppose, for induction, that \(\operatorname{SC}_{K}(L)\cup L_{\ell-1}\) is homotopy equivalent to \((\bigvee_{(\ell-1)\cdot\binom{n}{2}}S^{2})\vee\mathcal{VR}(\mathcal{F}_{n-1}^{m-1},2)\). This holds when \(\ell=1\) since \(L_{0}=\emptyset\). Then \(\operatorname{SC}_{K}(L)\cup L_{\ell}=\operatorname{SC}_{K}(L)\cup L_{\ell-1}\cup K_{\sigma_{\ell}}\). Denote \(\sigma_{\ell}\) to be \(L[i_{1},i_{2},\ldots,i_{n+1}]\) where \(i_{k}\neq 1\) for each \(k=1,2,\ldots,n+1\). Next we find the homotopy type of \((\operatorname{SC}_{K}(L)\cup L_{\ell-1})\cap K_{\sigma_{\ell}}\).
For any vertex \(B\in K_{\sigma_{\ell}}\), \(B\in L[\{1\}\cup B]\subset\operatorname{SC}_{K}(L)\). Hence the \(0\)-skeleton of \(K_{\sigma_{\ell}}\) is contained in \(\operatorname{SC}_{K}(L)\). Let \(\{B_{1},B_{2}\}\) be a \(1\)-simplex in \(K_{\sigma_{\ell}}\). Then \(|B_{1}\cap B_{2}|=n-1\). Because \(N[B_{1}\cap B_{2}]\) is in \(\operatorname{SC}_{K}(L)\), the edge \(\{B_{1},B_{2}\}\) is in \(\operatorname{SC}_{K}(L)\). So the \(1\)-skeleton \(K_{\sigma_{\ell}}^{(1)}\) of \(K_{\sigma_{\ell}}\) is also contained in \(\operatorname{SC}_{K}(L)\). Moreover, any \(k\)-simplex with \(k\geq 2\) in \(K_{\sigma_{\ell}}\) is not in \(\operatorname{SC}_{K}(L)\), since \(\sigma_{\ell}\) is the only maximal simplex containing such a \(k\)-simplex by ii) in Lemma 6. For any \(\ell^{\prime}=1,2,\ldots,\ell-1\), the intersection of the complexes \(K_{\sigma_{\ell^{\prime}}}\) and \(K_{\sigma_{\ell}}\) contains at most one vertex because of their definitions. Therefore, \((\operatorname{SC}_{K}(L)\cup L_{\ell-1})\cap K_{\sigma_{\ell}}=K_{\sigma_{\ell}}^{(1)}\). Recall that \(\sigma_{\ell}\) is an \(n\)-simplex; hence \(K_{\sigma_{\ell}}^{(1)}\) is homotopy equivalent to a wedge sum of \(\binom{n}{2}\)-many copies of \(S^{1}\).
Notice that the inclusion of \(K_{\sigma_{\ell}}^{(1)}\) into \(K_{\sigma_{\ell}}\) is null-homotopic because \(K_{\sigma_{\ell}}\) is contractible. Also, the inclusion of \(K_{\sigma_{\ell}}^{(1)}\) into \(\operatorname{SC}_{K}(L)\cup L_{\ell-1}\) is null-homotopic because the former is homotopy equivalent to a wedge sum of \(S^{1}\)'s and the latter to a wedge sum of \(S^{2}\)'s, which is simply connected. Therefore by Lemma 1, \(\operatorname{SC}_{K}(L)\cup L_{\ell}\) is homotopy equivalent to \(\Sigma(\bigvee_{\binom{n}{2}}S^{1})\vee(\operatorname{SC}_{K}(L)\cup L_{\ell-1})\), which is, by the inductive assumption, \((\vee_{\ell\binom{n}{2}}S^{2})\vee\operatorname{SC}_{K}(L)\). This finishes the proof because \(\operatorname{SC}_{K}(L)\simeq L\simeq\mathcal{VR}(\mathcal{F}_{n-1}^{m-1},2)\).
By an inductive calculation, we obtain the following corollary.
**Corollary 9**.: _Suppose that \(1<n<m-1\). The complex \(\mathcal{VR}(\mathcal{F}_{n}^{m},2)\) is homotopy equivalent to a wedge sum of \(\sum_{k=2}^{n}\binom{m+k-1-n}{k+1}\cdot\binom{k}{2}\)-many copies of \(S^{2}\)'s._
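As a sanity check, the count in Corollary 9 can be compared against Euler characteristics for small parameters, in the same style as before (our code; networkx assumed):

```python
import networkx as nx
from math import comb
from itertools import combinations

def chi_vr2(vertices):
    """Euler characteristic of the scale-2 Vietoris-Rips complex."""
    G = nx.Graph()
    G.add_nodes_from(vertices)
    G.add_edges_from((A, B) for A, B in combinations(vertices, 2)
                     if len(A ^ B) <= 2)
    return sum((-1) ** (len(c) - 1) for c in nx.enumerate_all_cliques(G))

def corollary9(m, n):
    return sum(comb(m + k - 1 - n, k + 1) * comb(k, 2) for k in range(2, n + 1))

for m, n in [(5, 2), (6, 2), (6, 3), (7, 3)]:
    verts = [frozenset(c) for c in combinations(range(1, m + 1), n)]
    assert chi_vr2(verts) == 1 + corollary9(m, n)  # wedge of corollary9(m, n) S^2's
```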
## 5. Vietoris-Rips Complex \(\mathcal{VR}(\mathcal{F}_{\preceq A}^{m},2)\)
In this section, we'll determine the homotopy type of \(\mathcal{VR}(\mathcal{F}_{\preceq A}^{m},2)\) for \(A\in\mathcal{P}([m])\) with \(|A|=n\).
As in the discussion in Section 1, \(\mathcal{VR}(\mathcal{F}_{\leq r}^{m},r)\) is a cone, hence contractible; similarly, \(\mathcal{VR}(\mathcal{F}_{\geq m-r}^{m},r)\) is also contractible. Hence, for any \(A\subset[m]\) with \(|A|\leq 2\), the complex \(\mathcal{VR}(\mathcal{F}_{\preceq A}^{m},2)\) is contractible. So in this section, we will discuss the homotopy type of \(\mathcal{VR}(\mathcal{F}_{\preceq A}^{m},2)\) with \(|A|\geq 3\).
The following lemma is easy to prove, but heavily used in the discussion of \(\mathcal{VR}(\mathcal{F}_{\preceq A}^{m},2)\).
**Lemma 10**.: _For any \(A,B\in\mathcal{P}[m]\) with \(|A|<|B|\), \(d(A,B)\leq 2\) if and only if \(A\subset B\) and \(|B\setminus A|\leq 2\)._
Proof.: If \(A\subset B\) and \(|B\setminus A|\leq 2\), then \(d(A,B)=|(A\setminus B)\cup(B\setminus A)|\leq 2\).
Now we suppose \(A\setminus B\neq\emptyset\), i.e. \(|A\setminus B|\geq 1\). Since \(|A|<|B|\), \(|B\setminus A|\geq 2\), therefore \(d(A,B)\geq 3\). This finishes the proof.
Next, we'll discuss the homotopy type of \(\mathcal{VR}(\mathcal{F}_{n}^{m}\cup\mathcal{F}_{n+1}^{m},2)\) using a similar approach as in the proof of Theorem 8.
**Theorem 11**.: _Suppose that \(1<n<m-1\). Then the complex \(\mathcal{VR}(\mathcal{F}_{n}^{m}\cup\mathcal{F}_{n+1}^{m},2)\) is homotopy equivalent to a wedge sum of \((\sum_{k=2}^{n}\binom{m+k-1-n}{k+1}\cdot\binom{k}{2}+\binom{m}{n+2}\cdot\binom{ n+1}{2})\)-many copies of \(S^{2}\)._
Proof.: Let \(K=\mathcal{VR}(\mathcal{F}_{n}^{m}\cup\mathcal{F}_{n+1}^{m},2)\) and \(K_{0}=\mathcal{VR}(\mathcal{F}_{n}^{m},2)\). By Corollary 9, the complex \(K_{0}\) is homotopy equivalent to a wedge sum of \(\sum_{k=2}^{n}\binom{m+k-1-n}{k+1}\cdot\binom{k}{2}\)-many copies of \(S^{2}\)'s.
We claim that \(\operatorname{SC}_{K}(K_{0})\simeq K_{0}\). For any \(B\in\mathcal{F}_{n+1}^{m}\), \(B\in\operatorname{st}_{K}(D)\cap\operatorname{st}_{K}(D^{\prime})\) for \(D,D^{\prime}\in\mathcal{F}_{n}^{m}\) if and only if \(d(B,D)=d(B,D^{\prime})=2\); hence by Lemma 10, \(D\) and \(D^{\prime}\) are both subsets of \(B\), which implies that \(d(D,D^{\prime})=2\). Therefore by Lemma 4, the claim holds.
By Lemma 6, there are two types of maximal simplices in \(\mathcal{VR}(\mathcal{F}_{n+1}^{m},2)\). If \(\sigma\) is a maximal simplex of \(\mathcal{VR}(\mathcal{F}_{n+1}^{m},2)\) which can be represented in the form \(N[i_{1},i_{2},\dots,i_{n}]\), then clearly \(\{i_{1}i_{2}\cdots i_{n}\}\cup N[i_{1},i_{2},\dots,i_{n}]\) is a simplex in \(K\); hence \(N[i_{1},i_{2},\dots,i_{n}]\in\operatorname{SC}_{K}(K_{0})\).
Now we look at the second type of maximal simplices in \(\mathcal{VR}(\mathcal{F}_{n+1}^{m},2)\). There are \(\binom{m}{n+2}\)-many maximal simplices in \(\mathcal{VR}(\mathcal{F}_{n+1}^{m},2)\) of the form \(L[i_{1},i_{2},\dots,i_{n+2}]\); we list these \((n+1)\)-simplices as \(\{\sigma_{1},\sigma_{2},\dots,\sigma_{\binom{m}{n+2}}\}\). Denote
\(L_{\ell}=\operatorname{SC}_{K}(K_{0})\cup\bigcup_{j=1}^{\ell}K_{\sigma_{j}}\) for \(\ell=1,2,\ldots,\binom{m}{n+2}\). Recall that the complex \(K_{\sigma_{j}}\) is the complex generated by the simplex \(\sigma_{j}\) for \(j=1,2,\ldots,\binom{m}{n+2}\).
Assume for induction that \(L_{\ell-1}\) is homotopy equivalent to
\[\bigvee_{\sum_{k=2}^{n}\binom{m+k-1-n}{k+1}\cdot\binom{k}{2}+(\ell-1)\cdot \binom{n+1}{2}}S^{2}.\]
This is clearly true when \(\ell=1\). We claim that \(L_{\ell-1}\cap K_{\sigma_{\ell}}=K_{\sigma_{\ell}}^{(1)}\), which is homotopy equivalent to \(\bigvee_{\binom{n+1}{2}}S^{1}\) and hence null-homotopic in both \(L_{\ell-1}\) and \(K_{\sigma_{\ell}}\). By Lemma 1, this implies that \(L_{\ell}\) is homotopy equivalent to a wedge sum of \((\sum_{k=2}^{n}\binom{m+k-1-n}{k+1}\cdot\binom{k}{2}+\ell\cdot\binom{n+1}{2})\)-many copies of \(S^{2}\). This finishes the proof. Next, we prove our claim.
By part ii) of Lemma 6, any \(2\)-simplex in \(K_{\sigma_{\ell}}\) is not in \(L_{\ell-1}\). Let \(\{B_{1},B_{2}\}\) be a \(1\)-simplex in \(K_{\sigma_{\ell}}\). Then \(B_{1}\cap B_{2}\) is an \(n\)-subset, i.e., a vertex in \(K_{0}\); so \(\{B_{1},B_{2},B_{1}\cap B_{2}\}\) is a \(2\)-simplex in \(K\) which means \(\{B_{1},B_{2}\}\in\operatorname{st}_{K}(B_{1}\cap B_{2})\). This shows that \(L_{\ell-1}\cap K_{\sigma_{\ell}}=K_{\sigma_{\ell}}^{(1)}\).
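Theorem 11 admits the same kind of numerical spot check (our code; networkx assumed): one plus the predicted number of \(2\)-spheres should be the Euler characteristic of \(\mathcal{VR}(\mathcal{F}_{n}^{m}\cup\mathcal{F}_{n+1}^{m},2)\).

```python
import networkx as nx
from math import comb
from itertools import combinations

def chi_vr2(vertices):
    G = nx.Graph()
    G.add_nodes_from(vertices)
    G.add_edges_from((A, B) for A, B in combinations(vertices, 2)
                     if len(A ^ B) <= 2)
    return sum((-1) ** (len(c) - 1) for c in nx.enumerate_all_cliques(G))

def theorem11(m, n):
    return (sum(comb(m + k - 1 - n, k + 1) * comb(k, 2) for k in range(2, n + 1))
            + comb(m, n + 2) * comb(n + 1, 2))

for m, n in [(5, 2), (6, 2), (6, 3)]:
    verts = [frozenset(c) for j in (n, n + 1)
             for c in combinations(range(1, m + 1), j)]
    assert chi_vr2(verts) == 1 + theorem11(m, n)
```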
To identify the homotopy type of \(K=\mathcal{VR}(\mathcal{F}_{\preceq A}^{m},2)\) with \(|A|\geq 3\), we'll split \(K\) into \(K\setminus A\) and \(\operatorname{st}_{K}(A)\). So the key is to understand the link of \(A\) in \(K\), \(\operatorname{lk}_{K}(A)\). The next lemma shows that \(\operatorname{lk}_{K}(A)\) is a wedge sum of \(S^{2}\)'s.
Note that when \(n=3\), \(\sum_{k=2}^{n-2}\binom{k}{2}\) is set to be \(0\) as introduced in Section 2.
**Lemma 12**.: _Suppose that \(m\geq n>2\) and \(A=i_{1}i_{2}\cdots i_{n}\in\mathcal{P}([m])\)._
_Denote \(i_{0}=-1\) and define \(d_{\ell}=i_{\ell}-(i_{\ell-1}+1)\) for each \(\ell=1,2,\ldots,n\). Then_
\[\operatorname{lk}_{\mathcal{VR}(\mathcal{F}_{\preceq A}^{m},2)}(A)\simeq \bigvee_{\sum_{k=2}^{n-2}\binom{k}{2}+\sum_{\ell=1}^{n-2}d_{\ell}\cdot\binom{ n-\ell}{2}}S^{2}.\]
Proof.: Let \(K=\mathcal{VR}(\mathcal{F}_{\preceq A}^{m},2)\). Note that for any \(B\) with \(|B|\leq n-3\), \(d(A,B)\geq 3\). Next we divide the vertices in the link of the vertex \(A\) in \(K\), \(\operatorname{lk}_{K}(A)\), into the following pairwise disjoint collections \(\mathcal{G}_{k}\) for \(k=0,1,\ldots,i_{n-1}\). These collections are defined as the following:
* \(\mathcal{G}_{0}=\{B\in\mathcal{P}([m]):|B|<n\text{ and }d(B,A)=2\}\);
* for \(k\in\{1,2,\ldots,i_{n-1}\}\setminus\{i_{1},i_{2},\ldots,i_{n-1}\}\), \(\mathcal{G}_{k}\) contains all the \(B\)'s with \(|B|=n\) such that \(B\) contains \(k\), all \(i_{j}\)'s with \(i_{j}<k\), and all but one of the \(i_{j}\)'s with \(i_{j}>k\);
* \(\mathcal{G}_{i_{n-1}}\) contains all the \(B\)'s with \(|B|=n\) such that \(\{i_{1},i_{2},\ldots,i_{n-1}\}\subset B\) and the remaining element of \(B\) is a number strictly between \(i_{n-1}\) and \(i_{n}\);
* \(\mathcal{G}_{i_{j}}=\emptyset\) for \(j=1,2,\ldots,n-2\) for the purpose of convenience.
By Lemma 10, \(\mathcal{G}_{0}\) contains all the \(B\)'s such that \(B\subset A\) and \(|B|=n-1\) or \(n-2\). Also, it is clear that \(\bigcup_{k=1}^{i_{n-1}}\mathcal{G}_{k}\) contains all the \(B\)'s such that \(B\prec A\), \(d(A,B)=2\), and \(|B|=n\). Hence \(\operatorname{lk}_{K}(A)=\mathcal{VR}(\bigcup_{k=0}^{i_{n-1}}\mathcal{G}_{k},2)\). For each \(k=0,1,\ldots,i_{n-1}\), we define \(K_{k}=\mathcal{VR}(\mathcal{G}_{k},2)\) if \(\mathcal{G}_{k}\neq\emptyset\) and \(K_{\leq k}=\mathcal{VR}(\bigcup_{i=0}^{k}\mathcal{G}_{i},2)\). Hence \(\operatorname{lk}_{K}(A)=K_{\leq i_{n-1}}\).
Since \(\mathcal{G}_{0}\) is the collection of all \((n-2)\)-subsets and \((n-1)\)-subsets of the \(n\)-set \(A\), the complex \(K_{0}\) is homeomorphic to \(\mathcal{VR}(\mathcal{F}_{n-2}^{n}\cup\mathcal{F}_{n-1}^{n},2)\); hence by Theorem 11, the complex \(K_{0}=K_{\leq 0}\) is homotopy equivalent to a wedge sum of \((\sum_{k=2}^{n-2}\binom{k}{2}+\binom{n-1}{2})\)-many copies of \(S^{2}\). Since \(\mathcal{G}_{i_{j}}=\emptyset\) for \(j=1,2,\ldots,n-2\), the complex \(K_{\leq i_{j}}\) is the same as \(K_{\leq i_{j}-1}\) for such \(j\).
Now we investigate the complex \(K_{k}\) with \(k\geq 1\) and the collection \(\mathcal{G}_{k}\neq\emptyset\). Fix \(k\) such that \(1\leq k<i_{n-1}\) and \(\mathcal{G}_{k}\neq\emptyset\). Then there exists an \(\ell\) in the set \(\{1,2,\ldots,n-1\}\) such that \(i_{\ell-1}<k<i_{\ell}\). The complex \(K_{k}\) is generated by a proper face of \(L[i_{1},\ldots,i_{\ell-1},k,i_{\ell},\ldots,i_{n}]\), consisting of all \(B\) that contain \(\{i_{1},\ldots,i_{\ell-1},k\}\) and all but one of \(\{i_{\ell},\ldots,i_{n}\}\); hence it is generated by an \((n-\ell)\)-simplex. The complex \(K_{i_{n-1}}\) is generated by a proper face of \(N[i_{1},i_{2},\ldots,i_{n-1}]\), consisting of all \(B\) that contain \(\{i_{1},i_{2},\ldots,i_{n-1}\}\) and one other number strictly between \(i_{n-1}\) and \(i_{n}\); hence it is a \((d_{n}-1)\)-simplex.
Next we determine the homotopy type of \(K_{\leq i_{n-2}}\). If there is no \(k\) such that \(k\in[i_{n-2}]\setminus\{i_{1},i_{2},\ldots,i_{n-2}\}\), then \(d_{1}=1\) and \(d_{2},\ldots,d_{n-2}\) are all zeroes, and the complex \(K_{\leq i_{n-2}}=K_{0}\), which is clearly homotopy equivalent to \(\bigvee_{\sum_{k=2}^{n-2}\binom{k}{2}+\sum_{\ell=1}^{n-2}d_{\ell}\cdot\binom{n-\ell}{2}}S^{2}\). Now we suppose otherwise and fix \(k\) such that \(1\leq k\leq i_{n-2}\) and \(i_{\ell-1}<k<i_{\ell}\) for some \(\ell=1,2,\ldots,n-2\). Suppose, for induction, that \(K_{\leq(k-1)}\) is homotopy equivalent to a wedge sum of \(S^{2}\)'s. This holds when \(k\) is the minimal natural number different from \(i_{1},i_{2},\ldots,i_{n-2}\), in which case \(K_{\leq(k-1)}=K_{0}\). By Lemma 5, \(K_{\leq k}=\operatorname{SC}_{K_{\leq k}}(K_{\leq(k-1)})\cup K_{k}\). We'll prove the following two claims; together with Lemma 1 and the inductive assumption, they imply that \(K_{\leq k}\simeq K_{\leq k-1}\vee(\bigvee_{\binom{n-\ell}{2}}S^{2})\).
**Claim i):**: \(\operatorname{SC}_{K_{\leq k}}(K_{\leq(k-1)})\simeq K_{\leq(k-1)}\).
**Claim ii):**: \(\operatorname{SC}_{K_{\leq k}}(K_{\leq(k-1)})\cap K_{k}\simeq\bigvee_{{n-\ell \choose 2}}S^{1}\).
**Proof of Claim i):** Let \(D\) be a vertex in \(K_{k}\). Then \(D\) contains \(k\) and an \((n-1)\)-subset of \(A\), denoted by \(C\). For any vertex \(B\in K_{\leq(k-1)}\), \(D\in\operatorname{st}_{K_{\leq k}}(B)\) if and only if \(B\) is one of the following: a) \(B\supset C\) and \(B\) contains one of \(1,2,\ldots,k-1\) not in \(A\); b) \(B=C\); c) \(B\) is an \((n-2)\)-subset of \(C\). Any pair of such vertices has distance at most \(2\); hence any two of them form a \(1\)-simplex in \(K_{\leq(k-1)}\). Therefore by Lemma 4, Claim i) holds.
**Proof of Claim ii):** Since the complex \(K_{k}\) is generated by an \((n-\ell)\)-simplex, \(K_{k}^{(1)}\) is homotopy equivalent to \(\bigvee_{\binom{n-\ell}{2}}S^{1}\). We'll show that \(\operatorname{SC}_{K_{\leq k}}(K_{\leq(k-1)})\cap K_{k}=K_{k}^{(1)}\). Pick any pair of vertices, \(B_{1},B_{2}\), in \(K_{k}\). Then \(B_{1}\cap B_{2}\) contains the number \(k\) and an \((n-2)\)-subset of \(A\), denoted by \(D\). Note that \(D\) is a vertex in the complex \(K_{0}\subseteq K_{\leq k-1}\); therefore, the \(1\)-simplex \(\{B_{1},B_{2}\}\in\operatorname{st}_{K_{\leq k}}(D)\). Hence \(K_{k}^{(1)}\subseteq\operatorname{SC}_{K_{\leq k}}(K_{\leq(k-1)})\cap K_{k}\). It is straightforward to verify that for any \(B\in\mathcal{G}_{i}\) with \(i=1,2,\ldots,k-1\), \(\operatorname{st}_{K_{\leq k}}(B)\cap K_{k}\) is a complex containing only one vertex, because any vertex in this complex must contain \(B\cap A\) and the number \(k\). Similarly, for any \(B\in\mathcal{G}_{0}\) with \(|B|=n-1\), there is at most one vertex in \(K_{k}\) containing \(B\) as a subset, i.e., having distance \(\leq 2\) from \(B\); and if \(B\in\mathcal{G}_{0}\) with \(|B|=n-2\), then there are at most two vertices in \(K_{k}\) which have distance \(2\) from \(B\). Hence, \(\operatorname{st}_{K\leq k}(B)\cap K_{k}\subseteq K_{k}^{(1)}\) for any vertex \(B\) in the complex \(K_{\leq(k-1)}\). This finishes the proof of Claim ii).
By an inductive calculation, we have proved that the complex \(K_{\leq i_{n-2}}\) is homotopy equivalent to a wedge sum of \((\sum_{k=2}^{n-2}\binom{k}{2}+\sum_{\ell=1}^{n-2}d_{\ell}\cdot\binom{n-\ell}{2})\)-many \(S^{2}\)'s. Next, we show that the complex \(K_{\leq i_{n-1}-1}\) is homotopy equivalent to \(K_{\leq i_{n-2}}\). If \(d_{n-1}=0\), then \(K_{\leq i_{n-1}-1}=K_{\leq i_{n-2}}\); otherwise we fix \(k\) with \(i_{n-2}<k<i_{n-1}\) and suppose that \(K_{\leq k-1}\simeq K_{\leq i_{n-2}}\). The collection \(\mathcal{G}_{k}\) contains two vertices \(i_{1}i_{2}\cdots i_{n-2}ki_{n}\) and \(i_{1}i_{2}\cdots i_{n-2}ki_{n-1}\); and the simplex \(\{i_{1}i_{2}\cdots i_{n-2}ki_{n},i_{1}i_{2}\cdots i_{n-2}ki_{n-1}\}\) is in \(\operatorname{st}_{K_{\leq k}}(D)\)
where \(D=i_{1}i_{2}\cdots i_{n-2}\in K_{\leq(k-1)}\). Hence \(\operatorname{SC}_{K_{\leq k}}(K_{\leq k-1})=K_{\leq k}\). Using a discussion similar to the proof of Claim i) and Lemma 4, \(\operatorname{SC}_{K_{\leq k}}(K_{\leq k-1})\simeq K_{\leq k-1}\). Hence the complex \(K_{\leq k}\) is homotopy equivalent to \(K_{\leq i_{n-2}}\). Therefore by induction, the complex \(K_{\leq i_{n-1}-1}\) is homotopy equivalent to \(K_{\leq i_{n-2}}\).
In the last part, we show that the complex \(K_{\leq i_{n-1}}=\operatorname{lk}_{K}(A)\) is also homotopy equivalent to \(K_{\leq i_{n-2}}\). Applying an argument similar to the proof of Claim i), the star cluster \(\operatorname{SC}_{K_{\leq i_{n-1}}}(K_{\leq(i_{n-1}-1)})\) is homotopy equivalent to \(K_{\leq(i_{n-1}-1)}\). Recall that \(K_{i_{n-1}}\) is generated by a proper face of the simplex \(N[i_{1},i_{2},\ldots,i_{n-1}]\). Note that \(i_{1}i_{2}\cdots i_{n-1}\) is a vertex in \(K_{\leq i_{n-1}-1}\), and \(\{i_{1}i_{2}\cdots i_{n-1}\}\cup\mathcal{G}_{i_{n-1}}\) is a simplex in \(K_{\leq i_{n-1}}\). Therefore \(K_{i_{n-1}}\subseteq\operatorname{st}(i_{1}i_{2}\cdots i_{n-1})\), and hence \(\operatorname{SC}_{K_{\leq i_{n-1}}}(K_{\leq(i_{n-1}-1)})=K_{\leq i_{n-1}}\). This finishes the proof.
Motivated by the lemma above, we define a natural number \(r_{A}\) for each \(A\subseteq[m]\) in the following way. For each \(A=i_{1}i_{2}\cdots i_{n}\subseteq[m]\), with \(d_{1}=i_{1}\) and \(d_{\ell}=i_{\ell}-(i_{\ell-1}+1)\) for \(\ell=2,3,\ldots,n\), we define
\[r_{A}=\sum_{k=2}^{n-2}\binom{k}{2}+\sum_{\ell=1}^{n-2}d_{\ell}\cdot\binom{n- \ell}{2}.\]
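In code, \(r_{A}\) reads as follows (our transcription of the displayed formula; recall that the convention \(i_{0}=-1\) makes \(d_{1}=i_{1}\)):

```python
from math import comb

def r(A):
    """r_A for a subset A = {i_1 < i_2 < ... < i_n} of [m]."""
    i = sorted(A)
    n = len(i)
    d = [i[0]] + [i[l] - (i[l - 1] + 1) for l in range(1, n)]  # d_1 = i_1
    return (sum(comb(k, 2) for k in range(2, n - 1))                    # k = 2, ..., n-2
            + sum(d[l - 1] * comb(n - l, 2) for l in range(1, n - 1)))  # l = 1, ..., n-2

print(r({1, 2, 3}))  # 1, matching the base case of Theorem 13 below
```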
**Theorem 13**.: _Suppose that \(m\geq n>2\) and \(A=i_{1}i_{2}\cdots i_{n}\in\mathcal{P}([m])\). Then the complex \(\mathcal{VR}(\mathcal{F}_{\preceq A}^{m},2)\) is homotopy equivalent to a wedge sum of \(S^{3}\)'s._
_More specifically, if \(A\) is the vertex \(\{1,2,3\}\subset[m]\),_
\[\mathcal{VR}(\mathcal{F}_{\preceq A}^{m},2)\simeq S^{3}.\]
_And for any other vertex \(A\) with \(\{1,2,3\}\prec A\),_
\[\mathcal{VR}(\mathcal{F}_{\preceq A}^{m},2)\simeq(\bigvee_{r_{A}}S^{3})\vee \mathcal{VR}(\mathcal{F}_{\prec A}^{m},2).\]
Proof.: Let \(K=\mathcal{VR}(\mathcal{F}_{\preceq A}^{m},2)\) and \(L=\mathcal{VR}(\mathcal{F}_{\prec A}^{m},2)\). Suppose \(A=\{1,2,3\}\). Then \(r_{A}=1\); hence \(\operatorname{lk}_{K}(A)\) is homotopy equivalent to \(S^{2}\) by Lemma 12. Because the complex \(L\) is contractible, the complex \(K\) is homotopy equivalent to \(S^{3}\) by Lemma 2.
Fix \(A\) with \(\{1,2,3\}\prec A\) and suppose for induction that \(L\) is homotopy equivalent to a wedge sum of \(S^{3}\)'s. Again by Lemma 12, \(\operatorname{lk}_{K}(A)\) is homotopy equivalent to a wedge sum of \(r_{A}\)-many \(S^{2}\)'s. Hence the inclusion map from \(\operatorname{lk}_{K}(A)\) to \(L\) is null-homotopic. Therefore, the general result holds, again by Lemma 2.
Then the following result is a direct application of Lemma 1, Lemma 12, and Theorem 13.
**Theorem 14**.: _Suppose that \(m\geq n>2\). Then,_
\[\mathcal{VR}(\mathcal{F}_{\leq n}^{m},2)\simeq(\bigvee_{\sum_{A\subseteq[m] \text{ with }|A|=n}r_{A}}S^{3})\vee\mathcal{VR}(\mathcal{F}_{\leq n-1}^{m},2).\]
_Furthermore, for any \(n=3,4,\ldots,m\), define_
\[t_{n}=\sum_{A\subseteq[m]\text{ with }|A|=n}r_{A}.\]
_Then the complex \(\mathcal{VR}(\mathcal{F}_{\leq m}^{m},2)\) is homotopy equivalent to the wedge sum of \((\sum_{n=3}^{m}t_{n})\)-many copies of \(S^{3}\)._
By Adamaszek and Adams's result in [2], for any \(m>2\),
\[c_{m}=\sum_{0\leq j<i<m}(j+1)(2^{m-2}-2^{i-1})=\sum_{n=3}^{m}t_{n},\]
where \(t_{n}\) is defined as in the statement of Theorem 14.
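This equality is easy to confirm numerically for small \(m\) using the sketch for \(r_{A}\) given above (our code):

```python
from math import comb
from itertools import combinations

def r(A):  # as in the sketch following the definition of r_A
    i = sorted(A)
    n = len(i)
    d = [i[0]] + [i[l] - (i[l - 1] + 1) for l in range(1, n)]
    return (sum(comb(k, 2) for k in range(2, n - 1))
            + sum(d[l - 1] * comb(n - l, 2) for l in range(1, n - 1)))

for m in range(3, 9):
    t_total = sum(r(A) for n in range(3, m + 1)
                  for A in combinations(range(1, m + 1), n))
    c_m = sum((j + 1) * (2 ** (m - 2) - 2 ** (i - 1))
              for i in range(m) for j in range(i))
    assert t_total == c_m
```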
## 6. Vietoris-Rips Complex \(\mathcal{VR}(\mathcal{F}_{p}^{m}\cup\mathcal{F}_{q}^{m},2)\)
In this section, we'll investigate the homotopy types of \(\mathcal{VR}(\mathcal{F}_{p}^{m}\cup\mathcal{F}_{q}^{m},2)\) with \(p,q\in\mathbb{N}\). Clearly, when \(|p-q|\geq 3\), \(\mathcal{VR}(\mathcal{F}_{p}^{m}\cup\mathcal{F}_{q}^{m},2)\) is a disjoint union of \(\mathcal{VR}(\mathcal{F}_{p}^{m},2)\) and \(\mathcal{VR}(\mathcal{F}_{q}^{m},2)\); by the discussion in Section 4, its homotopy type is then clear. The homotopy type of the complex \(\mathcal{VR}(\mathcal{F}_{n}^{m}\cup\mathcal{F}_{n+1}^{m},2)\) was discussed in Section 5 (see Theorem 11).
In the following, we'll find the homotopy types of the Vietoris-Rips complexes \(\mathcal{VR}(\mathcal{F}_{n}^{m}\cup\mathcal{F}_{n+2}^{m},2)\) for \(n+2\leq m\). Clearly for \(m\geq 3\), \(\mathcal{VR}(\mathcal{F}_{0}^{m}\cup\mathcal{F}_{2}^{m},2)\) and \(\mathcal{VR}(\mathcal{F}_{m}^{m}\cup\mathcal{F}_{m-2}^{m},2)\) are contractible because both of them are cones. Next, we'll discuss the complexes \(\mathcal{VR}(\mathcal{F}_{n}^{m}\cup\mathcal{F}_{n+2}^{m},2)\) in a general way.
The next result can be obtained by applying the proof of Lemma 12 with small modifications; so we skip the proof. For each \(A=i_{1}i_{2}\cdots i_{n}\in\mathcal{F}_{n}^{m}\) with \(c_{1}=i_{1}-1\) and \(c_{\ell}=i_{\ell}-(i_{\ell-1}+1)\) for \(\ell=2,3,\ldots,n\), we define
\[s_{A}=\sum_{k=2}^{n-2}\binom{k}{2}+\sum_{\ell=1}^{n}c_{\ell}\binom{n-\ell}{2}.\]
Note that for any \(A\subset[m]\) with \(|A|=n\), \(r_{A}=s_{A}+\binom{n-1}{2}\).
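This relation between \(r_{A}\) and \(s_{A}\) can likewise be checked mechanically (our code, reusing the sketch for \(r_{A}\) above):

```python
from math import comb
from itertools import combinations

def s(A):
    """s_A, with c_1 = i_1 - 1 and c_l = i_l - (i_{l-1} + 1) for l >= 2."""
    i = sorted(A)
    n = len(i)
    c = [i[0] - 1] + [i[l] - (i[l - 1] + 1) for l in range(1, n)]
    return (sum(comb(k, 2) for k in range(2, n - 1))
            + sum(c[l - 1] * comb(n - l, 2) for l in range(1, n + 1)))

def r(A):  # as before
    i = sorted(A)
    n = len(i)
    d = [i[0]] + [i[l] - (i[l - 1] + 1) for l in range(1, n)]
    return (sum(comb(k, 2) for k in range(2, n - 1))
            + sum(d[l - 1] * comb(n - l, 2) for l in range(1, n - 1)))

assert all(r(A) == s(A) + comb(len(A) - 1, 2)
           for m in (6, 7) for n in range(3, m + 1)
           for A in combinations(range(1, m + 1), n))
```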
**Lemma 15**.: _Suppose that \(2\leq n<m-3\) with \(m\geq 4\) and \(A=i_{1}i_{2}\cdots i_{n+2}\subset[m]\) with \(i_{1}\geq 2\). Let \(K=\mathcal{VR}(\mathcal{F}_{n}^{m}\cup\mathcal{F}_{n+2}^{m},2)\cap\mathcal{VR }(\mathcal{F}_{\preceq A}^{m},2)\)._
_Then,_
\[\text{lk}_{K}(A)\simeq\bigvee_{s_{A}}S^{2}.\]
**Theorem 16**.: _Suppose that \(1\leq n<m-3\) with \(m\geq 4\). Then the complex \(\mathcal{VR}(\mathcal{F}_{n}^{m}\cup\mathcal{F}_{n+2}^{m},2)\) is homotopy equivalent to a wedge sum of \(S^{3}\)'s._
_More specifically,_
\[\mathcal{VR}(\mathcal{F}_{1}^{m}\cup\mathcal{F}_{3}^{m},2)\simeq\bigvee_{ \binom{m}{4}}S^{3};\]
_and for \(n\geq 2\),_
\[\mathcal{VR}(\mathcal{F}_{n}^{m}\cup\mathcal{F}_{n+2}^{m},2)\simeq\mathcal{VR}(\mathcal{F}_{n-1}^{m-1}\cup\mathcal{F}_{n+1}^{m-1},2)\vee\bigvee_{\sum_{A\in\mathcal{F}_{n+2}^{m},\ \min A\geq 2}s_{A}}S^{3}.\]
Proof.: We firstly prove that \(K=\mathcal{VR}(\mathcal{F}_{1}^{m}\cup\mathcal{F}_{3}^{m},2)\simeq\bigvee_{\binom{m}{4}}S^{3}\). Let \(L_{0}=\mathcal{VR}(\mathcal{F}_{1}^{m},2)\), which is a complex generated by a simplex because each pair of singleton subsets of \([m]\) has distance \(2\). Hence by Lemma 4, \(\text{SC}_{K}(L_{0})\) is contractible. By Lemma 6, there are two types of maximal simplices in \(\mathcal{VR}(\mathcal{F}_{3}^{m},2)\), namely \(N[i_{1},i_{2}]\) and \(L[i_{1},i_{2},i_{3},i_{4}]\) for some \(i_{1},\ldots,i_{4}\in[m]\); clearly \(\{i_{1}\}\cup N[i_{1},i_{2}]\) is a simplex in \(K\). Hence \(N[i_{1},i_{2}]\in\text{SC}_{K}(L_{0})\) for each \(i_{1},i_{2}\in[m]\). Within \(\mathcal{VR}(\mathcal{F}_{3}^{m},2)\), there are \(\binom{m}{4}\)-many simplices in the form \(L[i_{1},i_{2},i_{3},i_{4}]\) and the intersection of
each pair of such simplices contains at most one vertex. We list such simplices as \(\{\sigma_{\ell}:\ell=1,2,\ldots,\binom{m}{4}\}\) and define \(L_{\ell}=\operatorname{SC}_{K}(L_{0})\cup\bigcup_{i=1}^{\ell}K_{\sigma_{i}}\). We see that \(\sigma_{\ell}\notin\operatorname{SC}_{K}(L_{0})\) for each \(\ell=1,2,\ldots,\binom{m}{4}\), because otherwise there would be a number in \(\cap\sigma_{\ell}\), which is a contradiction; and because each of \(\sigma_{\ell}\)'s proper faces has a nonempty intersection, we get that \(K_{\sigma_{\ell}}^{(2)}\subset\operatorname{SC}_{K}(L_{0})\). Hence \(L_{\ell-1}\cap K_{\sigma_{\ell}}=K_{\sigma_{\ell}}^{(2)}\simeq S^{2}\). Therefore, by Lemma 1, \(L_{1}\simeq S^{3}\) and inductively \(L_{\ell}\simeq\bigvee_{\ell}S^{3}\). This finishes the proof of the first part.
Now we assume that \(n\geq 2\) and that \(\mathcal{VR}(\mathcal{F}_{n-1}^{m-1}\cup\mathcal{F}_{n+1}^{m-1},2)\) is homotopy equivalent to a wedge sum of \(S^{3}\)'s. Let \(\mathcal{G}_{0}=\{B\in\mathcal{F}_{n}^{m}\cup\mathcal{F}_{n+2}^{m}:1\in B\}\) and \(K_{0}=\mathcal{VR}(\mathcal{G}_{0},2)\); by a straightforward isometric mapping, we see that \(K_{0}\) is homeomorphic to \(\mathcal{VR}(\mathcal{F}_{n-1}^{m-1}\cup\mathcal{F}_{n+1}^{m-1},2)\), which is homotopy equivalent to a wedge sum of \(S^{3}\)'s by the assumption. Let \(\mathcal{G}_{1}=\{B\in\mathcal{F}_{n}^{m}\cup\mathcal{F}_{n+2}^{m}:|B|=n\text{ or }1\in B\}\) and \(K_{1}=\mathcal{VR}(\mathcal{G}_{1},2)\).
Next, we show that \(K_{1}=\operatorname{SC}_{K_{1}}(K_{0})\simeq K_{0}\). Let \(\sigma\) be a simplex in \(K_{1}\) consisting of vertices not containing \(1\). Then \(\sigma\) is a face of either \(N[i_{1},i_{2},\ldots,i_{n-1}]\) or \(L[i_{1},i_{2},\ldots,i_{n+1}]\) with all the indices \(>1\). Since \(1i_{1}i_{2}\cdots i_{n-1}\in N[i_{1},i_{2},\ldots,i_{n-1}]\), \(N[i_{1},i_{2},\ldots,i_{n-1}]\in\operatorname{SC}_{K_{1}}(K_{0})\). Also notice that \(\{1i_{1}i_{2}\cdots i_{n+1}\}\cup L[i_{1},i_{2},\ldots,i_{n+1}]\) is a simplex in \(K_{1}\); hence \(L[i_{1},i_{2},\ldots,i_{n+1}]\in\operatorname{SC}_{K_{1}}(K_{0})\). Therefore, \(K_{1}=\operatorname{SC}_{K_{1}}(K_{0})\).
Let \(B=i_{1}i_{2}\cdots i_{n}\) be a vertex in \(\mathcal{F}_{n}^{m}\) not containing \(1\) and \(B\in\operatorname{st}_{K_{1}}(D_{1})\cap\operatorname{st}_{K_{1}}(D_{2})\) with \(D_{1},D_{2}\in\mathcal{G}_{0}\). There are three cases to discuss.
1. Suppose \(|D_{1}|=|D_{2}|=n+2\). Then by Lemma 10, \(B\subset D_{1}\) and \(B\subset D_{2}\). Since both \(D_{1}\) and \(D_{2}\) contain \(1\), \(|D_{1}\cap D_{2}|=n+1\) and therefore \(\{D_{1},D_{2}\}\in K_{0}\).
2. Suppose \(|D_{1}|=n\) and \(|D_{2}|=n+2\). Then \(D_{1}\) contains an \((n-1)\)-subset of \(B\) and \(1\); hence \(D_{1}\subset D_{2}\). By Lemma 10, \(d(D_{1},D_{2})=2\) and therefore \(\{D_{1},D_{2}\}\in K_{0}\).
3. Suppose \(|D_{1}|=|D_{2}|=n\). Then both \(D_{1}\) and \(D_{2}\) contain an \((n-1)\)-subset of \(B\) together with \(1\), and hence \(|D_{1}\cap D_{2}|\geq n-1\), i.e., \(d(D_{1},D_{2})\leq 2\). Therefore \(\{D_{1},D_{2}\}\in K_{0}\).
Then by Lemma 4, \(\operatorname{SC}_{K_{1}}(K_{0})\simeq K_{0}\).
Now fix \(A\in\mathcal{F}_{n+2}^{m}\) with \(\min A\geq 2\) and assume for induction that \(\mathcal{VR}(\{B\in\mathcal{F}_{n+2}^{m}\cup\mathcal{F}_{n}^{m}:B\prec A\},2)\) is homotopy equivalent to
\[\mathcal{VR}(\mathcal{F}_{n-1}^{m-1}\cup\mathcal{F}_{n+1}^{m-1},2)\vee\bigvee_ {\sum_{B\in\mathcal{F}_{n+2}^{m}\text{ with }\min B\geq 2\text{ and }B\prec A}s_{B}}S^{3}\]
which is a wedge sum of \(S^{3}\)'s. This clearly holds if \(A=\min_{\prec}\{C:C\in\mathcal{F}_{n+2}^{m}\text{ and }\min C=2\}\). Let \(L=\mathcal{VR}(\{B\in\mathcal{F}_{n+2}^{m}\cup\mathcal{F}_{n}^{m}:B\preceq A\},2)\). Then by Lemma 15, \(\operatorname{lk}_{L}(A)\) is homotopy equivalent to \(\bigvee_{s_{A}}S^{2}\), whose inclusion into \(L\) is null-homotopic since it factors through \(L\setminus A\), a wedge sum of \(S^{3}\)'s. Hence by Lemma 2, \(L\) is homotopy equivalent to
\[\mathcal{VR}(\mathcal{F}_{n-1}^{m-1}\cup\mathcal{F}_{n+1}^{m-1},2)\vee\bigvee_{\sum_{B\in\mathcal{F}_{n+2}^{m}\text{ with }\min B\geq 2\text{ and }B\prec A}s_{B}}S^{3}\vee\Sigma(\bigvee_{s_{A}}S^{2}),\]
i.e.
\[\mathcal{VR}(\mathcal{F}_{n-1}^{m-1}\cup\mathcal{F}_{n+1}^{m-1},2)\vee\bigvee_{\sum_{B\in\mathcal{F}_{n+2}^{m}\text{ with }\min B\geq 2\text{ and }B\preceq A}s_{B}}S^{3}.\]
This finishes the proof.
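For instance, for \(m=5,6\) the first part of Theorem 16 predicts Euler characteristic \(1-\binom{m}{4}\) for \(\mathcal{VR}(\mathcal{F}_{1}^{m}\cup\mathcal{F}_{3}^{m},2)\), since a wedge of \(N\) copies of \(S^{3}\) has \(\chi=1-N\); a brute-force computation confirms this (our code; networkx assumed):

```python
import networkx as nx
from math import comb
from itertools import combinations

def chi_vr2(vertices):
    G = nx.Graph()
    G.add_nodes_from(vertices)
    G.add_edges_from((A, B) for A, B in combinations(vertices, 2)
                     if len(A ^ B) <= 2)
    return sum((-1) ** (len(c) - 1) for c in nx.enumerate_all_cliques(G))

for m in (5, 6):
    verts = [frozenset(c) for j in (1, 3)
             for c in combinations(range(1, m + 1), j)]
    assert chi_vr2(verts) == 1 - comb(m, 4)  # wedge of C(m, 4) copies of S^3
```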
We conclude this section by showing that the vertices \(\mathcal{F}_{n+1}^{m}\) in the complex \(\mathcal{VR}(\mathcal{F}_{n}^{m}\cup\mathcal{F}_{n+1}^{m}\cup\mathcal{F}_{n+2}^{ m},2)\) don't contribute to its homotopy type, i.e. it is homotopy equivalent to \(\mathcal{VR}(\mathcal{F}_{n}^{m}\cup\mathcal{F}_{n+2}^{m},2)\).
**Theorem 17**.: _Suppose that \(1\leq n<m-3\) with \(m\geq 4\). Then,_
\[\mathcal{VR}(\mathcal{F}_{n}^{m}\cup\mathcal{F}_{n+1}^{m}\cup\mathcal{F}_{n+2} ^{m},2)\simeq\mathcal{VR}(\mathcal{F}_{n}^{m}\cup\mathcal{F}_{n+2}^{m},2).\]
Proof.: Let \(K=\mathcal{VR}(\mathcal{F}_{n}^{m}\cup\mathcal{F}_{n+1}^{m}\cup\mathcal{F}_{n+ 2}^{m},2)\) and \(K_{0}=\mathcal{VR}(\mathcal{F}_{n}^{m}\cup\mathcal{F}_{n+2}^{m},2)\). Then we claim that \(K=\mathrm{SC}_{K}(K_{0})\) and \(\mathrm{SC}_{K}(K_{0})\simeq K_{0}\).
It is clear that \(\mathrm{SC}_{K}(K_{0})\subseteq K\). Take a simplex \(\sigma\) in \(K\) such that none of its vertices is in \(K_{0}\); hence all its vertices are in \(\mathcal{F}_{n+1}^{m}\). By Lemma 6, \(\sigma\) is a face of either \(N[i_{1},i_{2},\ldots,i_{n}]\) or \(L[i_{1},i_{2},\ldots,i_{n+2}]\). Note that \(\{i_{1}i_{2}\cdots i_{n}\}\cup N[i_{1},i_{2},\ldots,i_{n}]\) is a simplex in \(K\) with \(i_{1}i_{2}\cdots i_{n}\in K_{0}\); therefore \(N[i_{1},i_{2},\ldots,i_{n}]\in\mathrm{SC}_{K}(K_{0})\). Also \(\{i_{1}i_{2}\cdots i_{n+2}\}\cup L[i_{1},i_{2},\ldots,i_{n+2}]\) is a simplex in \(K\) with \(i_{1}i_{2}\cdots i_{n+2}\in K_{0}\); hence, \(L[i_{1},i_{2},\ldots,i_{n+2}]\in\mathrm{SC}_{K}(K_{0})\). Therefore, \(\mathrm{SC}_{K}(K_{0})=K\).
Take \(D\in\mathcal{F}_{n+1}^{m}\) with \(D\in\operatorname{st}_{K}(B_{1})\cap\operatorname{st}_{K}(B_{2})\) where \(B_{1},B_{2}\) are vertices in \(K_{0}\). Using a discussion similar to the one in the proof of Theorem 16, \(\{B_{1},B_{2}\}\in K_{0}\). Hence by Lemma 4, \(\mathrm{SC}_{K}(K_{0})\simeq K_{0}\).
Therefore, we conclude that \(K\simeq K_{0}\).
## 7. Open Questions
Little is known about the Vietoris-Rips complexes of these finite metric spaces with large scales. A good number of interesting open questions about the Vietoris-Rips complex on hypercube groups with large scales have been raised in [2, 16]. We end our paper with a couple of questions related to the independence complex of Kneser graphs.
Suppose \(2<n<m-2\). For any pair of subsets \(B_{1},B_{2}\) of \([m]\) with \(|B_{1}|=|B_{2}|=n\), \(d(B_{1},B_{2})\leq 2k+1\) is equivalent to \(d(B_{1},B_{2})\leq 2k\) for any nonnegative integer \(k\), since such distances are always even (see the computation below). Hence the Vietoris-Rips complex \(\mathcal{VR}(\mathcal{F}_{n}^{m},3)\) is identical with \(\mathcal{VR}(\mathcal{F}_{n}^{m},2)\). Little is known for larger scales \(r\geq 4\). The complex \(\mathcal{VR}(\mathcal{F}_{3}^{6},2)\) is the boundary of a polytope with \(20\) vertices; hence it is homotopy equivalent to \(S^{9}\). Using polymake [8], we find that the reduced homology group \(\tilde{H}_{i}(\mathcal{VR}(\mathcal{F}_{3}^{7},4))\) is trivial when \(i\neq 6\) or \(9\); also, \(\tilde{H}_{6}(\mathcal{VR}(\mathcal{F}_{3}^{7},4))=\mathbb{Z}^{29}\) and \(\tilde{H}_{9}(\mathcal{VR}(\mathcal{F}_{3}^{7},4))=\mathbb{Z}^{7}\). This is related to the independence complex of Kneser graphs. Notice that the complex \(\mathcal{VR}(\mathcal{F}_{3}^{m},4)\) is identical with \(\mathcal{VR}(\mathcal{F}_{3}^{m},5)\); therefore both of them are equal to the independence complex of the Kneser graph \(\mathrm{KG}_{3,m-6}\) with \(m\geq 6\). Then the complex \(\mathcal{VR}(\mathcal{F}_{n}^{m},4)\) for general \(2n<m\) is very likely to be homotopy equivalent to a wedge sum of spheres of different dimensions.
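For completeness, the parity fact used at the start of this paragraph is a one-line computation (with \(d\) the symmetric-difference distance used throughout):

\[d(B_{1},B_{2})=|B_{1}\,\triangle\,B_{2}|=|B_{1}|+|B_{2}|-2|B_{1}\cap B_{2}|=2\big(n-|B_{1}\cap B_{2}|\big),\]

which is always even, so raising the scale from \(2k\) to \(2k+1\) adds no new simplices.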
Then, we have the following question.
**Question 1**.: _Assume that \(2n<m\). Are the complexes \(\mathcal{VR}(\mathcal{F}_{n}^{m},4)\) homotopy equivalent to a wedge sum of spheres \(S^{6}\)'s and \(S^{9}\)'s?_
In general, it is worth investigating the following question.
**Question 2**.: _What are the homotopy types of the complex \(\mathcal{VR}(\mathcal{F}_{n}^{m},r)\) for \(r\geq 4\)?_
**Acknowledgements** The authors are grateful to Professor Henry Adams for his valuable comments and suggestions, which improved the paper. |
2307.16685 | Anticipating Responsibility in Multiagent Planning | Responsibility anticipation is the process of determining if the actions of
an individual agent may cause it to be responsible for a particular outcome.
This can be used in a multi-agent planning setting to allow agents to
anticipate responsibility in the plans they consider. The planning setting in
this paper includes partial information regarding the initial state and
considers formulas in linear temporal logic as positive or negative outcomes to
be attained or avoided. We firstly define attribution for notions of active,
passive and contributive responsibility, and consider their agentive variants.
We then use these to define the notion of responsibility anticipation. We prove
that our notions of anticipated responsibility can be used to coordinate agents
in a planning setting and give complexity results for our model, discussing
equivalence with classical planning. We also present an outline for solving
some of our attribution and anticipation problems using PDDL solvers. | Timothy Parker, Umberto Grandi, Emiliano Lorini | 2023-07-31T13:58:49Z | http://arxiv.org/abs/2307.16685v1 | # Anticipating Responsibility in Multiagent Planning
###### Abstract
Responsibility anticipation is the process of determining if the actions of an individual agent may cause it to be responsible for a particular outcome. This can be used in a multi-agent planning setting to allow agents to anticipate responsibility in the plans they consider. The planning setting in this paper includes partial information regarding the initial state and considers formulas in linear temporal logic as positive or negative outcomes to be attained or avoided. We firstly define attribution for notions of active, passive and contributive responsibility, and consider their agentive variants. We then use these to define the notion of responsibility anticipation. We prove that our notions of anticipated responsibility can be used to coordinate agents in a planning setting and give complexity results for our model, discussing equivalence with classical planning. We also present an outline for solving some of our attribution and anticipation problems using PDDL solvers.
## 1 Introduction
In any multi-agent setting, a key concept is that of responsibility. There are two main notions of responsibility, namely forward-looking and backward-looking responsibility [24]. In general, forward-looking responsibility is to have an obligation to bring about or prevent a certain state of affairs, while backward-looking responsibility means to be held accountable for a particular action or state of affairs that occurred. Our paper considers only backward-looking responsibility, which is often used in multi-agent settings to determine appropriate sanctions or rewards for agents. While responsibility attribution is a well-studied problem [1, 2, 4, 12, 19], we focus on the novel concept of responsibility anticipation, which means to determine if a particular plan for a single agent _may_ lead to their responsibility for some outcome, given the possible plans of all other agents. We believe that by anticipating responsibility, agents will be better able to coordinate their actions even if they cannot communicate. We consider responsibility in a multi-agent setting with concurrent actions and where outcomes are described in Linear Temporal Logic over finite traces (LTL\({}_{f}\)) [8]. Following the work of Lorini et al. [17], we recognise two key components to responsibility, namely the causal and agentive components. The causal component requires that the actions of the agent in some way contributed to the outcome in question. Lorini et al. identify two different notions of causal responsibility, active and passive responsibility. We formalise both in our model, as well as a notion of contributive responsibility defined by Braham and van Hees [4]. Roughly speaking, given some state of affairs \(\omega\), active responsibility means to bring about \(\omega\), passive responsibility means to allow \(\omega\) to occur, and contributive responsibility means to be part of a coalition that brings about \(\omega\). The agentive component requires that the agent is aware that their actions will (or in some cases may) contribute to the outcome. In our setting the agents have full knowledge of the action theory (i.e., the capabilities of all agents), but are uncertain regarding the intended actions of other agents and the initial state of the world. This allows us to define agentive notions of active, passive and contributive responsibility.
While our model allows us to attribute responsibility retrospectively (after plan execution), the focus of our work is in anticipating responsibility to aid in plan selection for a single agent. Since agents often cannot be certain about the outcomes of their plans, we introduce a notion of anticipated responsibility, which can be applied to any of our previous notions of responsibility. We show that by minimising their anticipated responsibility for a negative outcome, agents are often capable of guaranteeing that the outcome does not occur, even in some cases where the agents cannot communicate and where no single agent can guarantee avoiding the negative outcome.
We intend for our model to be useful in real-world planning applications. This is why we have taken care to ensure that our planning domain is reasonably compact while still being highly expressive. We also outline how our responsibility attribution and anticipation problems can be reduced to PDDL, both to demonstrate how pre-existing planning solvers can be applied to our problems and to encourage implementation of our model.
Our paper is organised as follows. Section 2 situates our paper with reference to related work in responsibility attribution, and compares our work to several similar papers. Section 3 introduces our multi-agent planning domain and presents an explanatory example. Section 4 formalises our notions of responsibility attribution and anticipation and discusses their application to multi-agent planning. Section 5 gives the complexity results for our setting and an outline of a reduction to PDDL. Finally, section 6 summarises the paper and outlines directions for future work.
## 2 Related Work
This work contributes primarily to the field of formalised responsibility attribution. It also involves planning with temporally extended goals [3, 9, 5], but since we are not aware of any other work in planning that considers responsibility in plan selection, we will focus this section on responsibility. Our planning model builds on a number of previous papers which are discussed in section 3.
Furthermore, responsibility anticipation and its application to planning agents, is, to the best of our knowledge, also novel in the field of responsibility formalisation. Therefore, we will focus on approaches to responsibility attribution in the literature, and discuss how and why they differ from our work.
One approach to formalising responsibility is the work of Alechina et al. [1], which is based on work by Chockler and Halpern [6, 11] on the formalisation of responsibility. Rather than using \(\text{LTL}_{f}\) as in our approach, this work uses structural equation modeling (SEM). Their paper focuses specifically on responsibility attribution for the failure of a previously-arranged joint plan, which is a specific sequence of tasks that all agents are expected to follow (but perhaps will not), making its application much more specific than our work. Unlike our model, the authors focus only on a single notion of responsibility, but they do model varying degrees of responsibility for different agents. Alechina et al. also perform a complexity analysis of their model, showing that responsibility attribution is in general NP-Complete (in line with our notion of passive responsibility, see Theorem 6) and identifying some fragments where responsibility attribution is polynomial.
Halpern and Kleiman-Weiner [12] also use a structural equations model, but focus on defining the intentions of agents given their actions and their epistemic state (given here as a probability distribution). As this paper does not address causal responsibility there is not much overlap with our model, but it does highlight several interesting concepts that we could attempt to incorporate in future work.
A more general but less compact approach is the work of Baier et al. [2]. Their work covers both forward- and backward-looking responsibility attribution, but we will focus on their formalisation of backward-looking responsibility. Whereas our model is based on classical planning, Baier et al. use extensive-form games with strategies instead of plans. This makes their model much less compact and more complex than ours, but also more expressive. In their work, a coalition of agents \(J\) is causally backward responsible for some outcome \(\omega\) if, fixing the strategies of all other agents (and the random choices of Nature), there exists a strategy for \(J\) where \(\omega\) does not occur in any possible execution. They also define strategic backward responsibility, which states that \(\omega\) occurs, and there is some state in the execution where the coalition of agents has a strategy such that \(\omega\) does not occur in any epistemically possible outcome for that strategy (since agents cannot distinguish between some states). Again, this model does not include any other notions of responsibility, but it does model an agent's degree of responsibility, which is determined by the agent's membership of one or more responsible coalitions, meaning it behaves similarly to our notion of contributive responsibility, though defined on strategies instead of plans. Baier et al. also provide a complexity result for their model. They note that the complexity of responsibility attribution is in NP, which is lower than that of contributive responsibility in our model (see Theorem 6), though our model is exponentially more compact.
A similar definition of responsibility exists in the work of Naumov and Tao [19], whose setting of Imperfect Information Strategic Games is very close to our notion of planning domain, but restricted to plans of length 1. Their notion of blameworthiness says that \(i\) is blameworthy for \(\omega\) if \(\omega\) occurs and \(i\) could have performed an action guaranteeing \(\neg\omega\) in all possible states. This is a stronger version of our notion of causal passive responsibility, as we require only that \(i\) could have avoided \(\omega\) if the state and all actions of other agents were fixed. They also present a notion of "seeing to it", which requires that an agent guarantees in all possible worlds that \(\omega\) occurs. This is very close to our notion of agentive active responsibility, the only difference being that in our model there must exist some possible history from the initial state where the outcome does not occur, whereas in their model that history can start at any epistemically possible state for \(i\). Also, unlike us, Naumov and Tao formalise their notions as operators in logic, allowing for the development of a proof system for these operators (they develop a proof system for their notion of blameworthiness in a previous, perfect-information setting [18]).
Our work is heavily inspired by the work of Lorini et al. [17]. This paper formalises the notions of active and passive responsibility that we use in this paper, as well as the agentive variant of responsibility. The model in this paper is based on STIT logic in a multi-agent setting with Kripke possible worlds. We extend this work to the setting of multi-agent planning, though for simplicity we do not model agents having knowledge of the possible actions of other agents: in our setting, all plans of the other agents are considered possible.
Our work is also related to the work of Braham and van Hees [4], who analyse responsibility in a game-theoretic framework. One of the conditions for moral responsibility is that an agent's actions must have "causally contributed" to the outcome in question. We adapt the notion of causal contribution into our setting as a third notion of causal responsibility.
## 3 Model
In this section we introduce the planning framework in which we will define our notions of responsibility. As many of our definitions are drawn from existing literature, in the interests of space we have chosen to omit some of the less informative formal definitions, which can be found in the supplementary material for this paper. We will indicate where we have done this.
### Agents, Actions and Histories
The building blocks of our model are a finite set of agents \(\mathit{Agt}\) and a countable set of propositions \(\mathit{Prop}=\{p,q,\ldots\}\). From \(\mathit{Prop}\) we define a set of states \(S=2^{\mathit{Prop}}\), with elements \(s,s^{\prime},\ldots\) Let \(\mathit{Act}=\{a,b,\ldots\}\) be a finite non-empty set of action names.
To trace the actions of agents and changing states over time we define a \(k\)-history to be a pair \(H=(H_{st},H_{act})\) with \(H_{st}:[0,k]\longrightarrow S\) and \(H_{act}:\mathit{Agt}\times[k]\longrightarrow\mathit{Act}\). The set of \(k\)-histories is noted \(\mathit{Hist}_{k}\). The set of all histories is \(\mathit{Hist}=\bigcup_{k\in\mathbb{N}}\mathit{Hist}_{k}\).
### Multi-Agent Action Theory
Given the actions performed by an agent, we need to be able to determine the effects of those actions. We favour a compact action theory based on situation calculus [20].
We first define \(\mathcal{L}_{\text{PL}+}\) (propositional logic with action descriptions) as follows:
\[\varphi := \mathit{p}\mid\mathit{do}(i,a)\mid\neg\varphi\mid\varphi\wedge\varphi\]
with \(p\) ranging over \(\mathit{Prop}\), \(i\) ranging over \(\mathit{Agt}\) and \(a\) ranging over \(\mathit{Act}\). Atomic formulas in this language are those that consist of a single proposition \(p\) or a single instance of \(do(i,a)\).
Semantic interpretation of formulas in \(\mathcal{L}_{\text{PL}+}\) is performed relative to a \(k\)-history \(H\in\mathit{Hist}\) and a time point \(t\in\{0,\ldots,k\}\) and as follows (we omit boolean cases which are defined as usual):
\[H,t\models p \Longleftrightarrow p\in H_{st}(t),\] \[H,t\models do(i,a) \Longleftrightarrow t<k\text{ and }H_{act}(i,t)=a\]
We define our action theory as a pair of a positive and a negative effect precondition function \(\gamma=(\gamma^{+},\gamma^{-})\), where \(\gamma^{+}:\mathit{Agt}\times\mathit{Act}\times\mathit{Prop}\longrightarrow \mathcal{L}_{\text{PL}+}\) and \(\gamma^{-}:\mathit{Agt}\times\mathit{Act}\times\mathit{Prop}\longrightarrow \mathcal{L}_{\text{PL}+}\). If the formula \(\gamma^{+}(i,a,p)\) holds in a state where action \(a\) is executed by agent \(i\), proposition \(p\) will be _true_ in the next state (provided no other action interferes). Similarly, \(\gamma^{-}(i,a,p)\) guarantees that \(p\) will be _false_ in the next state if action \(a\) is executed by \(i\) (without interference). In case of conflicts between actions, we use an inertial principle: if two or more actions attempt to enforce different truth values for \(p\), then the truth value of \(p\) does not change.
If we want to signal that action \(a\) is not available to agent \(i\) we can simply set \(\gamma^{+}(i,a,p)=\gamma^{-}(i,a,p)=\bot\) for all \(p\in Prop\). We assume the existence of a "do nothing" action \(Skip\), defined such that \(\gamma^{+}(i,Skip,p)=\gamma^{-}(i,Skip,p)=\bot\) for all \(i\) and \(p\).
We say that history \(H\) is a \(\gamma\)-compatible history for action theory \(\gamma=(\gamma^{+},\gamma^{-})\) if each state respects the actions performed in the previous state. The set of \(\gamma\)-compatible histories is noted \(Hist(\gamma)\). A full formal definition can be found in the supplementary material.
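To make the update rule concrete, below is a minimal Python sketch (with names of our own choosing, not the paper's) of a single \(\gamma\)-compatible transition; each effect precondition is represented as a callable over the current state and the joint action, mirroring the fact that formulas of \(\mathcal{L}_{\text{PL}+}\) may mention both propositions and \(do(i,a)\) atoms.

```
# One gamma-compatible transition. gamma_pos[(i, a, p)] and
# gamma_neg[(i, a, p)] are callables (state, joint_action) -> bool;
# a state is a frozenset of true propositions and a joint action maps
# each agent to an action name.
def successor(state, joint_action, gamma_pos, gamma_neg, agents, props):
    nxt = set()
    for p in props:
        make_true = any(gamma_pos[(i, joint_action[i], p)](state, joint_action)
                        for i in agents)
        make_false = any(gamma_neg[(i, joint_action[i], p)](state, joint_action)
                         for i in agents)
        if make_true and not make_false:
            nxt.add(p)          # some action forces p true, unopposed
        elif make_false and not make_true:
            pass                # some action forces p false, unopposed
        elif p in state:
            nxt.add(p)          # conflict or no effect: inertia applies
    return frozenset(nxt)
```

A \(\gamma\)-compatible \(k\)-history is then obtained by iterating `successor` \(k\) times from the initial state.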
### Compactness of our Action Theory
A conceptually simpler equivalent to our notion of action theory is a state transition function [14] \(\tau:S\times Act^{\mathit{Agt}}\longrightarrow S\). This takes as input the current state and the actions of all agents, and outputs the next state. Since there are no limitations on what states the function can output (besides functionality), it is straightforward to see that for any deterministically consistent history \(H\) (meaning the same joint action in the same state always leads to the same outcome), there is some state transition function \(\tau\) that can be used to generate \(H\) given the start state and the actions of all agents.
However, we can show that our action theory is as expressive as any state transition function and strictly more succinct; this is achieved by our use of action descriptions.
**Proposition 1**.: _Given a state transition function \(\tau\), there exists an action theory \(\gamma\) that is equivalent to (generates the same histories as) \(\tau\) and is at worst polynomially larger in size._
**Proposition 2**.: _There exists some state transition function \(\tau_{1}\) such that any action theory \(\gamma\) that is equivalent to \(\tau_{1}\) must contain \(do(i,\,a)\) in its description._
Note that the size of \(\tau\) is always exponential in the size of \(\mathit{Prop}\) and \(\mathit{Agt}\), since the number of entries in \(\tau\) is fixed. On the other hand, entries for \(\gamma\) can be as small as constant size (for example \(\gamma^{\pm}(i,a,p)\in\{\top,\bot\}\)). This means \(\gamma\) can be as small as \(2\times|Act|\times|\mathit{Agt}|\times|Prop|\). We conjecture that in most applications of this planning model, the action theory \(\gamma\) will be polynomial in size in \(Prop\), \(\mathit{Agt}\) and \(Act\).
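To make the size comparison concrete, the entry counts can be computed directly:

\[|\tau|=|S|\cdot|\mathit{Act}|^{|\mathit{Agt}|}=2^{|\mathit{Prop}|}\cdot|\mathit{Act}|^{|\mathit{Agt}|},\qquad|\gamma|=2\cdot|\mathit{Act}|\cdot|\mathit{Agt}|\cdot|\mathit{Prop}|,\]

where each entry of \(\tau\) is a full state, while each entry of \(\gamma\) is a formula whose size depends on the domain being modelled.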
### Planning Domains with Partial Information
We can now define our notion of planning domain. For now, we simply define a space where agents can create and execute plans, and where the outcomes of those plans can be determined. Since our planning domain includes partial information we make use of epistemic equivalence sets. An epistemic equivalence set \(S_{i}\subseteq S\) is the set of possible start states from the perspective of agent \(i\).
**Definition 1** (Partial Information Multi-Agent Planning Domain).: _A Partial information multi-agent Planning Domain (PPD) is a tuple \(\nabla=(\gamma,s_{0},(S_{i})_{i\in\mathit{Agt}})\) where \(\gamma=(\gamma^{+},\gamma^{-})\) is an action theory, \(s_{0}\) is an initial state, and for each \(i\in\mathit{Agt}\), \(S_{i}\) is the epistemic equivalence set for \(i\)._
Our notion of an epistemic equivalence set is straightforward and very general, but not very compact. A more compact alternative would be to give each agent visibility of a certain subset of the propositions in \(\mathit{Prop}\) [22]. However, this would be less general, as not all epistemic equivalence sets can be expressed in terms of visibility. A more complex but much more general approach would be to give each agent a belief base \(\mathsf{Bel}_{i}\), a set of formulas of \(\mathcal{L}_{\text{PL}}\) that describes the beliefs of \(i\) regarding the initial state [16]. We prefer epistemic equivalence sets as this is the simplest notion for defining algorithms. Furthermore, any of the above methods will induce an epistemic equivalence set, meaning our model can easily be adapted to other systems.
**Example 1** (Crossing a Junction).: _The planning domain \(\nabla_{E}\) models an autonomous vehicle (Agent 1) approaching a junction. Agent 1 knows that there is a second vehicle (Agent 2) near the junction, but does not know if Agent 2 has crossed the junction. Each vehicle can either go straight on (Move), or do nothing (Skip)._
_The example is formally defined as follows:_
* \(\mathit{Agt}=\{A1,A2\}\)__
* \(Prop=\{\mathit{crossed}_{1},\mathit{crossed}_{2},\mathit{collision}\}\)__
* \(Act=\{\mathit{Move},\mathit{Skip}\}\)__
* \(s_{0}=\emptyset\)_,_ \(s_{1}=\{\mathit{crossed}_{2}\}\)__
* \(S_{1}=\{s_{0},s_{1}\}\)__
_The action theory for our example is defined as follows, note that we have already defined the preconditions for \(\mathit{Skip}\) in section 3.2:_
\[\gamma^{+}(A1,\mathit{Move},\mathit{crossed}_{1})= \neg(\neg\mathit{crossed}_{2}\wedge\mathit{do}(A2,\mathit{Move}))\] \[\wedge\neg\mathit{collision}\] \[\gamma^{+}(A1,\mathit{Move},\mathit{collision})= \neg\mathit{crossed}_{1}\wedge\neg\mathit{crossed}_{2}\] \[\wedge\mathit{do}(A2,\mathit{Move})\] \[\gamma^{+}(A2,\mathit{Move},\mathit{crossed}_{2})= \neg(\neg\mathit{crossed}_{1}\wedge\mathit{do}(A1,\mathit{Move}))\] \[\wedge\neg\mathit{collision}\] \[\gamma^{\pm}(i,\mathit{Move},p)= \bot\mathit{unless stated otherwise above.}\]
_In words, if exactly one agent attempts to cross the junction (Move) then they will succeed. If both agents perform \(\mathit{Move}\) at the same time then they will collide, which will prevent either from being able to move._
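Continuing the Python sketch given with the action theory above (again, an illustration with our own names rather than part of the model), this example can be encoded and executed directly; running it confirms that a simultaneous \(\mathit{Move}\) from \(s_{0}\) produces a collision.

```
# Encoding Example 1 on top of the `successor` helper sketched earlier.
agents = ["A1", "A2"]
props = ["crossed1", "crossed2", "collision"]
actions = ["Move", "Skip"]
ff = lambda s, ja: False                 # the formula "bottom"

gamma_pos = {(i, a, p): ff for i in agents for a in actions for p in props}
gamma_neg = dict(gamma_pos)              # all negative effects are bottom

gamma_pos[("A1", "Move", "crossed1")] = lambda s, ja: (
    not ("crossed2" not in s and ja["A2"] == "Move") and "collision" not in s)
gamma_pos[("A1", "Move", "collision")] = lambda s, ja: (
    "crossed1" not in s and "crossed2" not in s and ja["A2"] == "Move")
gamma_pos[("A2", "Move", "crossed2")] = lambda s, ja: (
    not ("crossed1" not in s and ja["A1"] == "Move") and "collision" not in s)

s0 = frozenset()
print(successor(s0, {"A1": "Move", "A2": "Move"},
                gamma_pos, gamma_neg, agents, props))
# frozenset({'collision'}): both vehicles entered the junction at once
```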
### Action Sequences and Joint Plans
Now that we have defined a planning domain, we can define the notions of action sequence and plan. Given \(k\in\mathbb{N}\), a \(k\)-action-sequence is a function
\[\pi:\{0,\ldots,k-1\}\longrightarrow Act.\]
Figure 1: A visual representation of \(\nabla_{E}\) (Example 1), showing the start position of Agent 1 and the two possible positions of Agent 2 (crossed or not crossed the junction).
The set of \(k\)-action-sequences is noted \(\mathit{Seq}_{k}\). The set of all action sequences is \(\mathit{Seq}=\bigcup_{k\in\mathbb{N}}\mathit{Seq}_{k}\). For a (non-empty) coalition of agents \(J\in\mathit{2^{Agt}}\setminus\emptyset\) we can define a joint \(k\)-plan as a function \(\Pi:J\longrightarrow\mathit{Seq}_{k}\) (if \(J\) is a singleton coalition we call \(\Pi\) an individual plan). The set of joint \(k\)-plans for a coalition \(J\) is written \(Plan^{J}_{k}\). The set of all joint plans for \(J\) is \(\mathit{Plan}^{J}=\bigcup_{k\in\mathbb{N}}Plan^{J}_{k}\).
Given a joint plan \(\Pi\) for coalition \(J\) and another coalition \(J^{\prime}\subseteq J\), we can write the sub-plan of \(\Pi\) corresponding to \(J^{\prime}\) as \(\Pi^{J^{\prime}}\); we can also write \(\Pi^{-J^{\prime}}\) for the sub-plan corresponding to \(J\setminus J^{\prime}\). Given two \(k\)-plans \(\Pi_{1}\) and \(\Pi_{2}\) for disjoint coalitions \(J_{1},J_{2}\), we write \(\Pi_{1}\cup\Pi_{2}\) for the joint plan for \(J_{1}\cup J_{2}\) such that \((\Pi_{1}\cup\Pi_{2})^{J_{1}}=\Pi_{1}\) and \((\Pi_{1}\cup\Pi_{2})^{J_{2}}=\Pi_{2}\). Finally, given two plans \(\Pi_{1}\) and \(\Pi_{2}\), if there exists some plan \(\Pi_{3}\) such that \(\Pi_{2}=\Pi_{1}\cup\Pi_{3}\) then we say that \(\Pi_{1}\) _is compatible with_ \(\Pi_{2}\).
We can now define the notion of the history generated by a joint \(k\)-plan \(\Pi\) at an initial state \(s_{0}\) under the action theory \(\gamma\). It is the \(\gamma\)-compatible \(k\)-history along which the agents jointly execute the plan \(\Pi\) starting at state \(s_{0}\). We write this as \(H^{\Pi,s_{0},\gamma}\).
### Linear Temporal Logic
In our model, histories are temporal entities that are always finite in length; therefore the most natural choice to describe properties of histories is Linear Temporal Logic over Finite Traces [8, 9]. This allows us to describe temporal properties such as "\(\varphi\) never occurs" or "\(\varphi\) always occurs immediately after \(\psi\)". We write the language as \(\mathcal{L}_{\text{LTL}_{f}}\), defined by the following grammar:
\[\varphi\quad::=\quad p\mid do(i,a)\mid\neg\varphi\mid\varphi\land\varphi \mid\mathsf{X}\varphi\mid\varphi\mathsf{U}\;\varphi,\]
with \(p\) ranging over \(Prop\), \(i\) ranging over \(Agt\) and \(a\) ranging over \(Act\). Atomic formulas in this language are those that consist of a single proposition \(p\) or a single instance of \(do(i,a)\). \(\mathsf{X}\) and \(\mathsf{U}\) are the operators "next" and "until" of \(\text{\sf LTL}_{f}\). Operators "henceforth" (\(\mathsf{G}\)) and "eventually" (\(\mathsf{F}\)) are defined in the usual way: \(\mathsf{G}\varphi\ \stackrel{{\text{def}}}{{=}}\ \neg(\top\mathsf{U}\;\neg\varphi)\) and \(\mathsf{F}\varphi\ \stackrel{{\text{def}}}{{=}}\ \neg\mathsf{G}\neg\varphi\). We define the semantics for \(\mathsf{X}\) and \(\mathsf{U}\) as follows; the rest is the same as for \(\mathcal{L}_{\text{PL}+}\).
\[H,t\models\mathsf{X}\varphi \Longleftrightarrow\ t<k\text{ and }H,t+1\models\varphi,\] \[H,t\models\varphi_{1}\mathsf{U}\;\varphi_{2} \Longleftrightarrow\ \exists t^{\prime}\geq t:t^{\prime}\leq k\text{ and }H,t^{\prime}\models\varphi_{2}\text{ and }\] \[\forall t^{\prime\prime}\geq t:\ \text{if }t^{\prime\prime}<t^{ \prime}\ \ \text{then }\ H,t^{\prime\prime}\models\varphi_{1}.\]
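Since histories are finite, these semantics can be evaluated by direct recursion. The snippet below is a small illustrative evaluator (the tuple encoding of formulas is ours, not part of the formalism); a history is given as the list of its states together with the list of its joint actions.

```
# Recursive evaluator for the finite-trace semantics above. states[t] is a
# set of propositions for t = 0..k; acts[t] maps agents to actions for
# t = 0..k-1. Formulas are nested tuples, e.g. ("prop", "p").
def holds(phi, states, acts, t=0):
    k = len(states) - 1
    op = phi[0]
    if op == "true":
        return True
    if op == "prop":                     # ("prop", p)
        return phi[1] in states[t]
    if op == "do":                       # ("do", i, a)
        return t < k and acts[t][phi[1]] == phi[2]
    if op == "not":
        return not holds(phi[1], states, acts, t)
    if op == "and":
        return holds(phi[1], states, acts, t) and holds(phi[2], states, acts, t)
    if op == "X":
        return t < k and holds(phi[1], states, acts, t + 1)
    if op == "U":                        # ("U", phi1, phi2)
        return any(holds(phi[2], states, acts, t2)
                   and all(holds(phi[1], states, acts, t1)
                           for t1 in range(t, t2))
                   for t2 in range(t, k + 1))
    raise ValueError(f"unknown operator {op}")

def F(phi):                              # eventually: top U phi
    return ("U", ("true",), phi)

def G(phi):                              # henceforth: not (top U not phi)
    return ("not", F(("not", phi)))
```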
## 4 Formalising Responsibility
In order to define responsibility anticipation, we must first define responsibility attribution. Responsibility attribution is a backward-looking notion where, given some fixed history, we seek to determine which agents are responsible for some particular outcome. We distinguish between "agentive" and merely "causal" forms of responsibility. For an agent \(i\) to be causally responsible for some outcome \(\omega\) simply means that the actions of \(i\) were in some way a causal factor in the occurrence of \(\omega\). Agentive responsibility requires the additional condition that \(i\) _knew_ that its actions could or would lead to \(\omega\).
Another common notion of responsibility is that of moral responsibility, which is the kind of responsibility that typically merits praise or blame. We do not attempt to formalise moral responsibility in this paper as it is an extremely complex notion, and there is widespread disagreement in the literature regarding exactly what the criteria for moral responsibility are [21]. That said, we do believe that agentive responsibility is a necessary (but not sufficient) condition for moral responsibility.
### Causal Responsibility
To be causally responsible for an outcome roughly means to have causally contributed to that outcome occurring. Two main notions of causal responsibility are active and passive responsibility. To be actively responsible means to directly cause the outcome, i.e., to act in a way that guarantees the outcome will occur. To be passively responsible means to allow an outcome to occur while having the ability to prevent it. Our definitions of active and passive responsibility are based on the work of Lorini et al. [17], but adapted for a multi-agent planning domain.
**Definition 2** (Active Responsibility).: _Let \(\nabla=(\gamma,s_{0},(S_{i})_{i\in\mathit{Agt}})\) be a PPD, \(i\in Agt\) an agent, and \(\Pi_{1}\) a joint plan. Let \(\omega\in\mathcal{L}_{\text{LTL}_{f}}\). Then, we say that \(i\) bears Causal Active Responsibility (CAR) for \(\omega\) in \((\Pi_{1},s_{0},\gamma)\) if \(H^{\Pi_{2},s_{0},\gamma}\models\omega\) for all \(\Pi_{2}\) compatible with \(\Pi_{1}^{\{i\}}\) and there exists some joint plan \(\Pi_{3}\in Plan^{Agt}\) such that \(H^{\Pi_{3},s_{0},\gamma}\not\models\omega\)._
Where \(s_{0}\) and/or \(\gamma\) are obvious from context, they are omitted from the statement "\(i\) bears CAR for \(\omega\) in \((\Pi_{1},s_{0},\gamma)\)". In words, an agent \(i\) is causally actively responsible for the occurrence of \(\omega\) if, keeping fixed the initial state and the actions of \(i\), the other agents could not have acted differently and prevented the occurrence of \(\omega\). Note that active responsibility requires that there be at least one possible joint plan in which the outcome does not occur. This means that an agent cannot be actively responsible for something that was inevitable, such as the sun rising in the morning. This corresponds to the deliberative STIT operator of Horty and Belnap [13].
**Definition 3** (Passive Responsibility).: _Let \(\nabla=(\gamma,s_{0},(S_{i})_{i\in\mathit{Agt}})\) be a PPD, \(i\in Agt\) an agent, and \(\Pi_{1}\) a joint plan. Let \(\omega\in\mathcal{L}_{\text{LTL}_{f}}\). Then, we say that \(i\) bears Causal Passive Responsibility (CPR) for \(\omega\) in \((\Pi_{1},s_{0},\gamma)\) if \(H^{\Pi_{1},s_{0},\gamma}\models\omega\) and there exists some \(\Pi_{2}\) compatible with \(\Pi_{1}^{-\{i\}}\) such that \(H^{\Pi_{2},s_{0},\gamma}\not\models\omega\)._
An agent \(i\) is passively responsible for some outcome \(\omega\) if, keeping fixed the initial state and the actions of all other agents, it could have acted differently and prevented the occurrence of \(\omega\).
Passive and active responsibility can fail in cases of causal overdetermination. For example: suppose three men push a car off a cliff. Since the car is heavy, two of them are needed to successfully push the car, meaning no agent is actively responsible. Since any one man could have stopped pushing without changing the outcome, no man is passively responsible. Nonetheless it intuitively seems that each man is at least somewhat responsible. Therefore, we introduce the notion of contributive responsibility based on the work of Braham and van Hees [4], which is a more general notion of causal responsibility.
**Definition 4** (Contributive Responsibility).: _Let \(\nabla=(\gamma,s_{0},(S_{i})_{i\in\mathit{Agt}})\) be a PPD, \(i\in Agt\) an agent, and \(\Pi_{1}\) a joint plan. Let \(\omega\in\mathcal{L}_{\text{LTL}_{f}}\). Then, we say that \(i\) bears Causal Contributive Responsibility (CCR) for \(\omega\) in \((\Pi_{1},s_{0},\gamma)\) if \(H^{\Pi_{1},s_{0},\gamma}\models\omega\) and there exists some coalition of agents \(J\) such that \(i\in J\), for all \(\Pi_{2}\) compatible with \(\Pi_{1}^{J}\), \(H^{\Pi_{2},s_{0},\gamma}\models\omega\), and there exists some \(\Pi_{3}\) compatible with \(\Pi_{1}^{J\setminus\{i\}}\) such that \(H^{\Pi_{3},s_{0},\gamma}\not\models\omega\)._
In words, an agent \(i\) is contributively responsible for \(\omega\) if it is part of some coalition of agents \(J\) such that: a) the actions of \(J\) were sufficient to guarantee \(\omega\); and b) the actions of \(J\setminus\{i\}\) were not sufficient to guarantee \(\omega\). In terms of STIT this can be written as "\(\exists J\subseteq Agt:i\in J\wedge STIT_{J}\omega\wedge\neg STIT_{J\setminus\{i\}}\omega\)".
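Definitions 2-4 all quantify over alternative plans, so on small domains they can be checked by brute force. The sketch below (reusing the `successor` and `holds` helpers from our earlier snippets; it is exponential in the horizon, hence purely illustrative) implements the CPR check of Definition 3.

```
from itertools import product

# Execute a joint k-plan: plan[i] is the list of i's k actions.
def history(plan, s0, gamma_pos, gamma_neg, agents, props, k):
    states, acts = [s0], []
    for t in range(k):
        ja = {i: plan[i][t] for i in agents}
        acts.append(ja)
        states.append(successor(states[-1], ja, gamma_pos, gamma_neg,
                                agents, props))
    return states, acts

# Definition 3: omega occurred, and i alone could have prevented it.
def bears_cpr(i, plan, omega, s0, gamma_pos, gamma_neg,
              agents, actions, props, k):
    states, acts = history(plan, s0, gamma_pos, gamma_neg, agents, props, k)
    if not holds(omega, states, acts):
        return False                        # omega did not occur at all
    for alt in product(actions, repeat=k):  # every alternative plan for i
        plan2 = {**plan, i: list(alt)}
        st2, ac2 = history(plan2, s0, gamma_pos, gamma_neg, agents, props, k)
        if not holds(omega, st2, ac2):
            return True                     # i could have avoided omega
    return False
```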
A notable property of Causal Contributive Responsibility is that it is "complete". This means that for any outcome that occurs in a plan, either that outcome was inevitable or there is at least one agent who is responsible (i.e., bears CCR) for that outcome.
**Theorem 1**.: _Let \(\nabla=(\gamma,s_{0},(S_{i})_{i\in Agt})\) be a PPD, let \(\Pi\) be a joint plan and let \(H=H^{\Pi,s_{0},\gamma}\). Let \(\omega\in\mathcal{L}_{\text{LTL}_{f}}\) such that \(H\models\omega\). Then either \(H^{\prime}\models\omega\) for every history \(H^{\prime}\) compatible with \(\nabla\), or there exists some \(i\in Agt\) such that \(i\) bears CCR for \(\omega\) in \(\Pi\)._
Another important property of our notions of responsibility is that no agent can be held causally responsible (for any form of causal responsibility) for an outcome that was inevitable (i.e., occurs in every possible joint plan). This is because all three notions of responsibility require the existence of a joint plan where \(\omega\) does not occur.
### Agentive Responsibility
To bear agentive responsibility for an outcome, an agent must know that their actions will (or in some cases may) be causally responsible for the outcome occurring. Specifically, we consider the epistemic state of the agent where they have decided their own actions, but do not yet know the actions of others.
**Definition 5** (Agentive Active Responsibility).: _Let \(\nabla=(\gamma,s_{0},(S_{i})_{i\in Agt})\) be a PPD, \(i\in Agt\) an agent, and \(\Pi_{1}\) a joint plan. Let \(\omega\in\mathcal{L}_{\text{LTL}_{f}}\). Then, we say that \(i\) bears Agentive Active Responsibility (AAR) for \(\omega\) in \((\Pi_{1},s_{0},\gamma)\) if \(i\) is actively responsible for \(\omega\) in \(\Pi_{1}\) and, for every \(\Pi_{2}\) compatible with \(\Pi_{1}^{\{i\}}\) and every \(s_{1}\in S_{i}\), \(H^{\Pi_{2},s_{1},\gamma}\models\omega\)._
Agent \(i\) bears agentive active responsibility for \(\omega\) if their actions were sufficient to guarantee \(\omega\) in any possible outcome (given the possible start states and possible actions of other agents). Furthermore, as with CAR, there must be some joint plan from \(s_{0}\) where \(\omega\) does not occur.
Since passive and contributive responsibility both include the notion of "allowing" something to happen rather than "forcing" it to happen, the outcome does not need to be guaranteed from the perspective of the agent, but merely possible. This means that the notions of agentive passive and agentive contributive responsibility are both equivalent to their causal definitions, as we assume that agents have full knowledge of the action theory and always consider the true initial state to be epistemically possible, meaning any actual outcome \(\omega\) must have been considered possible from the perspective of every agent. Therefore, note that the acronyms CPR and CCR refer to both the causal _and_ agentive variants of passive and contributive responsibility.
A more intuitive notion of agentive passive and contributive responsibility would be to say that \(\omega\) must be _reasonably likely_ from the perspective of \(i\) rather than merely "possible". However, since our model contains no notion of probability, plausibility, or knowledge of the actions of other agents, this is not currently possible, though it does present a direction for future iterations of this model.
**Example 2** (Crossing a Junction - continued).: _Consider the following joint plan \(\Pi_{1}\) from start state \(s_{0}\):_
\[A1:[1\mapsto\mathit{Move},2\mapsto\mathit{Move}],A2:[1\mapsto\mathit{Move},2 \mapsto\mathit{Move}]\]
_This will result in a collision. Agent 1 bears CPR (and also CCR) for the negation of the goal requiring that the two cars never collide (\(\omega_{1}=\textsc{G}\neg\mathit{collision}\)), since in this case A1 could have avoided a collision by waiting for one step before moving (i.e., \(A1:[1\mapsto\mathit{Skip},2\mapsto\mathit{Move}]\)). However, since Agent 2 also could have waited to avoid a collision, Agent 1 is not actively responsible. Consider an alternative plan \(\Pi_{2}\) where each agent is more cautious:_
\[A1:[1\mapsto\mathit{Skip},2\mapsto\mathit{Skip}],A2:[1\mapsto\mathit{Skip},2\mapsto\mathit{Skip}]\]
_In this case Agent 1 bears CAR and AAR for the negation of the goal that Agent 1 eventually crosses the road (\(\omega_{2}=\textsc{F}\mathit{crossed}_{1}\)), since \(\neg\omega_{2}\) occurs in any history compatible with the actions of Agent 1 in \(\Pi_{2}\) starting from \(s_{0}\) or \(s_{1}\)._
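Using the sketches from the previous sections, the first CPR claim of this example can be reproduced mechanically (again with our own encoding of the formula):

```
# A1 bears CPR for the violation of "never collision" in the joint plan
# where both agents move immediately (the first plan of Example 2).
omega1 = G(("not", ("prop", "collision")))
plan1 = {"A1": ["Move", "Move"], "A2": ["Move", "Move"]}
print(bears_cpr("A1", plan1, ("not", omega1), s0, gamma_pos, gamma_neg,
                agents, actions, props, k=2))
# True: A1 skipping at step 1 would have avoided the collision
```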
### Anticipating Responsibility
Responsibility attribution is defined on known joint plans and known initial states. Therefore it cannot be used in planning for single agents, for whom the actions of the other agents and the initial state are unknown. However, an agent can always know if it is _potentially_ responsible for that outcome, namely if there is some possible history compatible with that plan where they are responsible.1
Footnote 1: We could also consider anticipation with universal instead of existential quantification, but being responsible in _every_ possible history is a very strong notion and we have not found much use for it.
**Definition 6** (Anticipated Responsibility).: _Let \(\nabla=(\gamma,s_{0},(S_{i})_{i\in Agt})\) be a PPD, \(i\in Agt\) an agent, and \(\Pi\) an individual plan. Let \(\omega\in\mathcal{L}_{\text{LTL}_{f}}\) and \(X\) a form of responsibility (CAR, CPR, CCR, AAR). Then, we say that \(i\) anticipates \(X\) for \(\omega\) in \((\Pi,\nabla)\) if there is some \(s_{1}\in S_{i}\) and some joint plan \(\Pi_{1}\) compatible with \(\Pi\) such that \(i\) bears X for \(\omega\) in \((\Pi_{1},s_{1})\)._
We will now show the logical implications between our different forms of responsibility, summarised in Figure 2. The horizontal arrows indicate that in any joint plan \(\Pi\) where \(i\) is attributed some form of responsibility, \(i\) can anticipate that form of responsibility in the individual plan \(\Pi^{\{i\}}\).
**Theorem 2**.: _The implications shown in Figure 2 are correct._
Rather than giving a single modular definition for anticipated responsibility, we could instead give separate definitions for each notion. For example, consider the following equivalent definition of anticipated Agentive Active Responsibility:
Figure 2: A visual representation of the implications between our different forms of responsibility.
**Definition 7**.: _Let \(\nabla=(\gamma,s_{0},(S_{i})_{i\in Agt})\) be a PPD, \(i\in Agt\) an agent, and \(\Pi\) an individual plan. Let \(\omega\in\mathcal{L}_{\text{LTL}_{f}}\). Then, we say that \(i\) anticipates \(AAR\) for \(\omega\) in \((\Pi,\nabla)\) if for all \(s_{1}\in S_{i}\) and all joint plans \(\Pi_{1}\) compatible with \(\Pi\), \(H^{\Pi_{1},s_{1},\gamma}\models\omega\), and there is some \(s_{2}\in S_{i}\) and some joint plan \(\Pi_{2}\) such that \(H^{\Pi_{2},s_{2},\gamma}\models\neg\omega\)._
However, we prefer our modular definition as it emphasises that we have a single notion of anticipated responsibility.
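The modularity is also convenient computationally: any brute-force attribution check lifts to an anticipation check by adding the two existential quantifiers of Definition 6. A sketch in the same illustrative style as before:

```
from itertools import product

# Definition 6: i anticipates X for omega if some epistemically possible
# start state and some completion by the other agents make the attribution
# check succeed. bears_X(i, joint_plan, omega, start_state) -> bool wraps
# any of the attribution checks (e.g. bears_cpr with the domain bound in).
def anticipates(i, my_plan, omega, S_i, agents, actions, k, bears_X):
    others = [j for j in agents if j != i]
    seqs = [list(p) for p in product(actions, repeat=k)]
    for s in S_i:                            # guess a possible start state
        for combo in product(seqs, repeat=len(others)):
            plan = dict(zip(others, combo))  # guess the others' plans
            plan[i] = my_plan
            if bears_X(i, plan, omega, s):
                return True
    return False
```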
### Responsibility Anticipation in Plan Selection
As previously stated, our hypothesis is that anticipating responsibility can help agents to coordinate towards a common goal, even without communication. Given some goal or value \(\varphi\), agents should avoid active responsibility for \(\neg\varphi\). This means performing a plan that does not anticipate AAR for \(\neg\varphi\). Furthermore, we prove that there is always a plan that does not anticipate AAR for \(\neg\varphi\). This means that artificial agents can be formally verified to never be potentially actively responsible for the violation of some value. This could be a useful step in creating provably safe autonomous planning agents.
**Theorem 3**.: _Let \(\nabla=(\gamma,s_{0},(S_{i})_{i\in Agt})\) be a PPD, \(i\in Agt\), and \(\omega\) an \(\text{\sf LTL}_{f}\)-formula. Then there exists some individual plan \(\Pi\) for \(i\) such that \(i\) does not anticipate AAR for \(\omega\) in \(\Pi\)._
Proof.: (sketch) Either there is some compatible plan where \(\omega\) does not occur (meaning \(i\) does not anticipate AAR) or \(\omega\) occurs in every outcome of every plan, so \(i\) is not responsible.
Given some value or goal \(\varphi\), we want agents to avoid responsibility for \(\neg\varphi\), but also to seek responsibility for \(\varphi\) (preferably agentive active responsibility, as this guarantees the occurrence of \(\varphi\)). However, we can show that anticipating agentive active responsibility for \(\varphi\) is effectively equivalent to not anticipating causal passive responsibility for \(\neg\varphi\) (the dual notion of anticipating CPR).
**Theorem 4**.: _Let \(\nabla=(\gamma,s_{0},(S_{i})_{i\in Agt})\) be a PPD, \(i\in Agt\), and \(\omega\) an \(\text{\sf LTL}_{f}\)-formula. If there is some plan \(\Pi\) for \(i\) such that \(i\) anticipates AAR for \(\omega\) in \(\Pi\), then for any plan \(\Pi^{\prime}\) for \(i\), \(i\) does not anticipate CPR for \(\neg\omega\) in \(\Pi^{\prime}\) if and only if \(i\) anticipates AAR for \(\omega\) in \(\Pi^{\prime}\)._
Proof.: (sketch) Given a joint plan \(\Pi\) in a planning domain \(\nabla\), \(i\) is "powerless" with respect to \(\omega\) if no alternative plan for \(i\) changes the truth value of \(\omega\) in \(H^{\Pi,s_{0},\gamma}\). If \(\omega\) occurs in all plans where \(i\) is powerless, then for all plans \(\Pi^{\prime}\) for \(i\), \(i\) does not anticipate CPR for \(\neg\omega\) in \(\Pi^{\prime}\) if and only if \(i\) anticipates AAR for \(\omega\) in \(\Pi^{\prime}\). Otherwise, there is no plan \(\Pi^{\prime}\) where \(i\) anticipates AAR for \(\omega\) in \(\Pi^{\prime}\).
By "effectively equivalent" we mean that if there exists some plan \(\Pi\) for \(i\) that anticipates AAR for \(\varphi\), then the plans that anticipate AAR for \(\varphi\) are exactly the plans that do not anticipate CPR for \(\neg\varphi\). However, the notions are not logically equivalent because it is possible that there are some plans for \(i\) that do not anticipate CPR for \(\neg\varphi\) while there are none that anticipate AAR for \(\varphi\).
This also suggests that anticipated CPR is the most important notion of anticipated responsibility, as it is either equivalent or effectively equivalent to every other notion of anticipated responsibility. Finally, we can show that avoiding CPR for \(\neg\varphi\) is a potentially powerful method for allowing a group of agents to coordinate on a certain goal, even if those agents cannot communicate.
**Theorem 5**.: _Let \(\nabla=(\gamma,s_{0},(S_{i})_{i\in Agt})\) be a PPD and \(\omega\) an \(\text{\sf LTL}_{f}\)-formula. Let \(\Pi\) be a joint plan such that for every agent \(i\in Agt\), \(i\) does not anticipate CPR for \(\neg\omega\) in \(\Pi^{\{i\}}\). Then either \(H^{\prime}\models\neg\omega\) for every history \(H^{\prime}\) compatible with \(\nabla\), or \(H^{\Pi,s_{0},\gamma}\models\omega\)._
Proof.: (sketch) Suppose for contradiction that \(\omega\) occurs in some plan \(\Pi^{\prime}\) and does not occur in \(\Pi\). Then by Theorem 1 there is some agent \(i\) who bears CCR for \(\neg\omega\) in \(\Pi\). Then by Theorem 2 it must be the case that \(i\) anticipates CPR for \(\neg\omega\) in \(\Pi^{\{i\}}\), which is a contradiction.
This shows that even when agents with a shared goal cannot communicate and when no agent can individually guarantee the success of the goal, the application of anticipated responsibility can allow the agents to successfully coordinate their actions and achieve the goal.
## 5 Computing and Implementing Responsibility
### PDDL Implementation
As previously mentioned, our model is designed to be practically useful in real-world planning problems. Therefore we outline how our model can be implemented in the multi-agent extension of PDDL 3.1 proposed by Kovacs [15].
PDDL solvers take two inputs: a domain and a problem. The domain gives the object types, actions and predicates, whereas the problem gives the objects, initial state and goal. Below is some simplified PDDL code, inspired by the example of Kovacs [15], for a multi-agent planning domain involving a number of immobile agents and some tables. The agents can lift tables that they are next to, or do nothing (_Skip_). Our example involves two tables (table1 and table2) and two agents (A1 and A2).
```
(define (domain responsibility-attribution)

  (:requirements :equality :negative-preconditions :typing :multi-agent)
  (:types agent table)
  (:predicates (lifted ?o - object) (at ?a - agent ?o - object))

  (:action lift
    :agent ?a - agent
    :parameters (?o - object)
    :precondition (and (not (lifted ?o)) (at ?a ?o))
    :effect (lifted ?o))

  (:action skip
    :agent ?a - agent
    :parameters ()
    :precondition ()
    :effect ()))
```
Listing 1: Example PDDL Domain
Consider the history where each agent starts next to a separate table, A1 performs the action _Skip_ and A2 performs _Lift_. The following code illustrates how we can use PDDL to check if A1 bears CPR for \(\omega=\neg\text{\sf FG}(\text{lifted table1}\wedge\text{lifted table2})\).
Running the first problem checks if \(\omega\) actually occurs, the second problem fixes the actions of all agents besides A1 and checks if A1 could have acted differently and avoided \(\omega\). If a plan is found, then A1 bears CPR for \(\omega\) (we present just the goal as the rest is the same as the first problem).
```
(define (problem causal-passive-responsibility-1)
  (:domain responsibility-attribution)
  (:objects A1 A2 - agent table1 table2 - table)
  (:init (at A1 table1) (at A2 table2))
  (:goal (and (lifted table1) (lifted table2)
              (do A1 skip 1) (do A2 lift 1))))
```
Listing 2: Checking that the outcome occurs.
```
(:goal (and (lifted table1) (lifted table2) (do A2 lift 1)))
```
Listing 3: Checking for Causal Passive Responsibility
To describe the plans of agents in PDDL goals we use \(do(i,a,t)\) which is true whenever agent \(i\) does action \(a\) at time \(t\).2
Footnote 2: For simplicity, we do not define \(do(i,a,t)\) in the code as its definition is quite complex and uninteresting.
In terms of the outcomes that we can attribute or anticipate responsibility for, PDDL 3 supports any boolean combination of predicates as goals, and also features temporal operators that function as state constraints for writing \(\mathsf{LTL}_{f}\) outcomes [10]. However, since PDDL does not support nesting of temporal operators, we do not have the full expressiveness of \(\mathsf{LTL}_{f}\). That said, the expressiveness of PDDL should be sufficient for the vast majority of outcomes that one realistically might be interested in.
The following problems demonstrate how to check CAR for A1 and \(\omega\). Firstly, we have to check if \(\omega\) is inevitable by attempting to find a joint plan that achieves \(\neg\omega\). Then we have to check if the actions of A1 are sufficient to guarantee \(\omega\).
```
(:goal (and (lifted table1) (lifted table2)))
```
Listing 4: Checking if \(\omega\) is inevitable.
```
(:goal (and (lifted table1) (lifted table2) (do A1 skip 1)))
```
Listing 5: CAR Attribution part 2
For checking AAR we first have to follow the procedure for checking CAR, but then we also have to check that the actions of A1 are sufficient to guarantee \(\omega\) in every epistemically possible world for A1. In this example we will suppose that \(S_{A1}=\{\{\text{at A1 table1, at A2 table2}\},\{\text{at A1 table1, at A2 table1}\}\}\), modelling that A1 does not know where A2 is.
```
(define (problem causal-active-responsibility-2)
  (:domain responsibility-attribution)
  (:objects A1 A2 - agent table1 table2 - table)
  (:init (at A1 table1) (at A2 table1))
  (:goal (and (lifted table1) (lifted table2) (do A1 skip 1))))
```
Listing 6: AAR Attribution
For anticipating CAR or AAR the process is much the same as attribution, since attribution depends only on the actions of A1, meaning the actions of all other agents do not need to be defined. We simply have to repeat the procedure for CAR or AAR attribution once for each epistemically possible start state. If A1 bears CAR/AAR in any start state, then they anticipate CAR/AAR. The process for anticipating CPR is more complex. This is because we need to find a start state and a plan for all agents besides A1 such that the intended plan for A1 leads to \(\omega\) but there exists some other plan for A1 that leads to \(\neg\omega\). This can be solved in a single planning problem (at least, one problem per possible start state) by creating a duplicate copy of each object, allowing us to effectively run two copies of the planning domain in parallel, with the goal enforcing that the actions of all agents besides A1 must be the same in both copies. If a plan is found for any possible start state, then A1 anticipates CPR for \(\omega\).
```
(define (problem cpr-anticipation)
  (:domain responsibility-attribution)
  (:objects A1 A2 A1-1 A2-1 - agent
            table1 table2 table1-1 table2-1 - table)
  (:init (at A1 table1) (at A2 table2)
         (at A1-1 table1-1) (at A2-1 table2-1))
  (:goal (and (lifted table1) (lifted table2) (do A1 skip 1))))
```
Listing 7: CPR Anticipation
The procedure for attributing CCR is more complex, as which agents' actions we have to fix varies depending on which coalition we are testing, and there are exponentially many coalitions to check. Fortunately, since anticipated CCR is equivalent to anticipated CPR, the procedure for checking the latter is relatively straightforward.
### Complexity Results
In this section we will demonstrate the computational complexity of determining the various kinds of responsibility defined above. Full proofs of our results can be found in the supplementary material. We define X-ATTRIBUTION as the problem of determining if \(i\) bears X \(\in\) {CAR, CPR, CCR, AAR} for \(\omega\) in \(\Pi\), and X-ANTICIPATION as the problem of determining if \(i\) _anticipates_ X for \(\omega\) in \(\Pi\).
**Theorem 6**.: _CAR-ATTRIBUTION is a member of \(P^{NP[2]}\), CPR-ATTRIBUTION is NP-Complete, CCR-ATTRIBUTION is a member of \(\Sigma_{2}^{P}\) and AAR-ATTRIBUTION is a member of \(\Delta_{2}^{P}\)._
**Theorem 7**.: _CAR-ANTICIPATION is a member of \(\Delta_{2}^{P}\), CPR-ANTICIPATION is NP-Complete, CCR-ANTICIPATION is NP-Complete, and AAR-ANTICIPATION is a member of \(\Delta_{2}^{P}\)._
These results are only intended to give an introduction to the complexity analysis of this setting. One class of problems that deserves further study is the task of identifying if a plan exists that does/does not anticipate responsibility for some outcome \(\omega\) (the decision problem) and finding such a plan if one exists (the search problem). The problem of identifying if a CPR-anticipating plan exists should be NP-complete given Theorem 7, as NP allows us to simply guess a plan and then check for anticipated responsibility. This puts us in line with the computational complexity of single-agent planning with propositional goals, which is also NP-complete [23].
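The guess-and-check structure of that argument is visible when the search is written out naively: the certificate is a plan for \(i\) together with the witnessing start state and plans of the other agents, and verifying one certificate only requires executing a single joint plan. A sketch in the style of the earlier snippets:

```
from itertools import product

# Exhaustive version of "guess a plan for i, then check anticipated CPR".
def find_cpr_anticipating_plan(i, omega, S_i, agents, actions, k, bears_X):
    for cand in product(actions, repeat=k):
        if anticipates(i, list(cand), omega, S_i, agents, actions, k, bears_X):
            return list(cand)            # a plan anticipating CPR for omega
    return None
```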
## 6 Conclusions and Future Work
In this paper we have presented our model for responsibility attribution and anticipation in a multi-agent planning setting with partial information regarding the initial state. We have presented both causal and agentive versions of active, passive and contributive responsibility. We have demonstrated how our notions of anticipated responsibility could be useful for plan selection in a multi-agent setting, and have given a complexity analysis of our model. Finally, we have outlined a PDDL implementation of our model.
For future work, a full PDDL implementation would allow us to test how useful our concepts of responsibility are when applied to real-world planning problems. Furthermore, we could expand our notions of responsibility to handle additional factors. For example, we could include beliefs about the likely actions of other agents in line with Lorini et al. [17], or consider intentions, probabilities and/or degrees of responsibility in line with Halpern and Kleiman-Weiner [12]. Finally, since agents may have multiple goals or values that they may be held responsible for satisfying or violating, it would be useful to extend our model to allow plan comparison based on anticipated responsibility for multiple different outcomes. |
2309.17037 | Beyond Co-occurrence: Multi-modal Session-based Recommendation | Session-based recommendation is devoted to characterizing preferences of
anonymous users based on short sessions. Existing methods mostly focus on
mining limited item co-occurrence patterns exposed by item ID within sessions,
while ignoring what attracts users to engage with certain items is rich
multi-modal information displayed on pages. Generally, the multi-modal
information can be classified into two categories: descriptive information
(e.g., item images and description text) and numerical information (e.g.,
price). In this paper, we aim to improve session-based recommendation by
modeling the above multi-modal information holistically. There are mainly three
issues to reveal user intent from multi-modal information: (1) How to extract
relevant semantics from heterogeneous descriptive information with different
noise? (2) How to fuse these heterogeneous descriptive information to
comprehensively infer user interests? (3) How to handle probabilistic influence
of numerical information on user behaviors? To solve above issues, we propose a
novel multi-modal session-based recommendation (MMSBR) that models both
descriptive and numerical information under a unified framework. Specifically,
a pseudo-modality contrastive learning is devised to enhance the representation
learning of descriptive information. Afterwards, a hierarchical pivot
transformer is presented to fuse heterogeneous descriptive information.
Moreover, we represent numerical information with Gaussian distribution and
design a Wasserstein self-attention to handle the probabilistic influence mode.
Extensive experiments on three real-world datasets demonstrate the
effectiveness of the proposed MMSBR. Further analysis also proves that our
MMSBR can alleviate the cold-start problem in SBR effectively. | Xiaokun Zhang, Bo Xu, Fenglong Ma, Chenliang Li, Liang Yang, Hongfei Lin | 2023-09-29T07:52:47Z | http://arxiv.org/abs/2309.17037v1 | # Beyond Co-occurrence: Multi-modal Session-based Recommendation
###### Abstract
Session-based recommendation is devoted to characterizing preferences of anonymous users based on short sessions. Existing methods mostly focus on mining limited item co-occurrence patterns exposed by item ID within sessions, while ignoring what attracts users to engage with certain items is rich multi-modal information displayed on pages. Generally, the multi-modal information can be classified into two categories: descriptive information (_e.g._, item images and description text) and numerical information (_e.g._, price). In this paper, we aim to improve session-based recommendation by modeling the above multi-modal information holistically. There are mainly three issues to reveal user intent from multi-modal information: (1) How to extract relevant semantics from heterogeneous descriptive information with different noise? (2) How to fuse these heterogeneous descriptive information to comprehensively infer user interests? (3) How to handle probabilistic influence of numerical information on user behaviors? To solve above issues, we propose a novel multi-modal session-based recommendation (MMSBR) that models both descriptive and numerical information under a unified framework. Specifically, a pseudo-modality contrastive learning is devised to enhance the representation learning of descriptive information. Afterwards, a hierarchical pivot transformer is presented to fuse heterogeneous descriptive information. Moreover, we represent numerical information with Gaussian distribution and design a Wasserstein self-attention to handle the probabilistic influence mode. Extensive experiments on three real-world datasets demonstrate the effectiveness of the proposed MMSBR. Further analysis also proves that our MMSBR can alleviate the cold-start problem in SBR effectively.
Session-based recommendation, Multi-modal learning, Pseudo-modality contrastive learning, Hierarchical pivot transformer, Probabilistic modeling.
## 1 Introduction
As an important tool to combat information overload, the recommender system (RS) plays a vital role in the present information era. Especially in the context of e-commerce, RS facilitates online consumption by offering personalized services to individuals. Assuming that user identity information is accessible, conventional RS [1, 2] relies on user profiles and long-term behaviors to predict user preferences. However, in most real-world scenarios, user identification is not available due to privacy policies or unlogged-in cases, where what RS could use is only the short behavior sequences of anonymous users (_i.e._, sessions). Apparently, conventional RS methods are no longer applicable or satisfactory in this case. To handle this situation, session-based recommendation (SBR) is proposed to predict the next items of interest to anonymous users within short sessions [3]. Nowadays, SBR has drawn significant attention from both academia and industry due to its high practical value [4, 5].
With their capacity for capturing transition patterns among items in a session, various neural networks have been employed to improve SBR, such as recurrent neural networks (RNN) [4, 6], convolutional neural networks (CNN) [8], attention mechanisms [7, 9], and graph neural networks (GNN) [10, 11]. Despite having made impressive progress, most existing methods still rely on mining _co-occurrence patterns_ exposed by item ID within short sessions. This significantly limits their performance since a session usually contains only a few items in the SBR scenario (as shown in Table II, the average session length is no more than 3). In other words, there are not enough co-occurrence patterns for them to exploit for user intent modeling in SBR. Fortunately, the available _multi-modal information_ of items provides a promising antidote to improve SBR.
Intuitively, it is the multi-modal information displayed on pages that drives users to engage with certain items. As shown in Fig. 1, a user usually makes a decision after looking through item _images_, reading description _text_, and checking _price_. Since they rely on different vehicles for conveying particular item features, the multi-modal information can be categorized into two groups: _descriptive_ and _numerical_ information.
Fig. 1: A user makes the decision after evaluating all multi-modal information displayed on pages including item images, description text and price.
The descriptive information portrays an item with image and text, which can intuitively describe item features like style, color and material. The numerical information, _i.e._, price, delivers the abstract value of an item through real numbers. In most cases, as illustrated in Fig. 1, a user would not click an item unless she is satisfied with all of its aspects. Obviously, the above multi-modal information jointly determines a user's choice.
In fact, different from item ID that merely contains item co-occurrence patterns, multi-modal information presents extensive characteristics of items and encodes user fine-grained preferences. For example, a Marvel fan has a high probability of purchasing a T-shirt with the logo of Iron Man. Unfortunately, most existing models take neither images nor text into consideration, which prevents accurate intent understanding. Moreover, co-occurrence based methods usually suffer from the cold-start problem, where there is insufficient data to signify relations among new items [12]. This issue can be smoothly solved if we understand user preferences from multi-modal features instead of dull item IDs. Although some recent models try to incorporate side information to facilitate user preference learning, such as item category [13], description text [12] and price [14], they are still unable to reveal user intent holistically from such fragmentary information. Thus, to fully understand users' fine-grained preferences, we should consider the entire multi-modal information displayed on pages. However, it is nontrivial to utilize multi-modal information in SBR due to the following obstacles:
(1) _Descriptive information representation_. Under the SBR scenario, images and text possess distinct noise. Normally, an item image contains not only the item for sale, such as a cloth, but also extra contents like accessories of the cloth. Similarly, an item description text usually includes redundant words, like exaggerated statements, to attract user attention. The existence of such noise in images and text increases the difficulty of extracting item semantics, hindering precise user preference learning. Therefore, the first challenge is how to obtain relevant semantics from heterogeneous descriptive information with different noise.
(2) _Descriptive information fusion_. For an item, both image and text are utilized to describe its characteristics. Obviously, there exists shared information between them. At the same time, they also hold different purposes and focus on presenting distinct properties of items. To be specific, images are more intuitive than text to describe item colors and styles. Text can clearly express the material, _e.g._, silk or cotton, whereas we can hardly understand it from images. Thus, the image and text complement each other and present an item in a united way. Accordingly, to comprehensively infer user interest, another challenge is how to fuse these heterogeneous descriptive information.
(3) _Numerical information modeling_. In general, a user's taste is _deterministic_ on descriptive information. For instance, a user who prefers crewneck T-shirts may not click suggested ones with V-neck. In contrast, numerical price affects user behaviors in a _probabilistic_ way. More precisely, as long as the item price falls in a user's acceptable range, it does not matter if the price is slightly lower or higher. Thus, the last challenge is how to handle the probabilistic influence of numerical information on user behaviors.
In order to tackle the above challenges, we propose a novel Multi-Modal Session-Based Recommendation (MMSBR) that customizes both _deterministic and probabilistic modelings_ to handle descriptive and numerical information respectively. In the deterministic modeling, we devise a _pseudo-modality contrastive learning_ to refine descriptive information representations. In particular, contrastive learning enhances representation learning by pushing semantically similar (positive) pairs close while pulling dissimilar (negative) pairs apart [15]. Since different modalities of an item refer to similar contents, it is intuitive to view them as positive pairs to tackle the noise issue. However, there are semantic gaps between distinct modalities, making it inappropriate to contrast them directly. To address this issue, we propose to utilize one modality to generate pseudo-information (namely a pseudo-modality) in another modality via data generation techniques. The actual and pseudo modalities, which are aligned in the same semantic space, are then used as positive pairs in contrastive learning to mitigate the noise existing in images and text.
Moreover, to fuse descriptive information, we present a _hierarchical pivot transformer_ in the deterministic modeling. With its ability to model complex relations in sequences, the Transformer structure has been shown to be effective for merging multi-modal signals [17, 19]. Inspired by this, we further create a pivot, which serves as a mixer of valuable information, in each transformer layer to govern the fusion of heterogeneous information. The pivot hierarchically extracts and integrates useful information from images and text under Transformer operations. We then view the pivot as the comprehensive embedding of descriptive information.
In the probabilistic modeling, we first represent item price as a _Gaussian distribution embedding_, which enables MMSBR to perceive the range property of item price. A _Wasserstein self-attention_ is then developed to handle price distribution embeddings and obtain a user's acceptable price range. With its capacity to distinguish differences between Gaussian distributions, the Wasserstein distance [20, 21] is used in the Wasserstein self-attention to determine the relevance among price distribution embeddings. Finally, the proposed MMSBR provides personalized services for users via evaluating the entire multi-modal information displayed on pages. In summary, the main contributions of our work are as follows:
* We propose a novel MMSBR to characterize user preferences based on multi-modal information, which is more in line with the user decision-making process than conventional co-occurrence based methods. To the best of our knowledge, this is the first work to reveal user intent from multi-modal information in SBR.
* We classify multi-modal information into descriptive and numerical types. Accordingly, we customize deterministic and probabilistic modeling that consist of several innovative techniques for comprehensively mining user intent.
* Extensive experiments over three public benchmarks demonstrate the superiority of MMSBR over state-of-the-art methods. Further analysis also justifies the effectiveness of MMSBR under cold-start scenario.
## 2 Related Work
Considering that this work aims to improve session-based recommendation by incorporating multi-modal information, we briefly review the related work from following two aspects: (1) session-based recommendation including co-occurrence based methods and side information enhanced methods; (2) multi-modal recommendation.
### _Session-based Recommendation_
**Co-occurrence based methods**. In recent years, with the tremendous achievements of neural networks in various applications, we have witnessed the transition from traditional methods to neural models in SBR [3]. With the intrinsic ability to handle sequential data, RNN as well as its variants were the first neural networks applied in SBR. For example, GRU4Rec [6] utilizes the gated recurrent unit (GRU) to capture sequential patterns within sessions. NARM [4] enhances GRU4Rec with an attention mechanism to explore the user's main purpose. Afterwards, many neural architectures were employed to model user sequential behaviors, such as CNN [8], attention mechanisms [7, 9, 23], GNN [10, 11], and reinforcement learning [50]. Some approaches further enhance the learning of co-occurrence patterns via exploring extra sessions [22, 24, 25], multiple behaviors [26], multiple user intents [27] and multiple relations among items [5, 28]. Contrastive learning is an emerging technique whose target is to improve embeddings by enlarging the distance between positive and negative pairs. Many recent models utilize this technique to enable robust representation learning for accurate user intent modeling [15, 29]. Also, other methods design new distance functions with metric learning to optimize user preference learning [21]. Although greatly promoting the development of SBR, all of these methods essentially focus on mining co-occurrence patterns reflected by item ID. They fail to perceive user fine-grained preferences concealed in multi-modal information, which becomes a bottleneck limiting their performance.
**Side information enhanced methods**. Considering that side information can help to unveil users' unique tastes, some methods try to incorporate various kinds of information to improve recommendation performance, such as time (_aka_ positions) [30, 32], categories [13, 31], price [14], text [12, 33] and images [34]. There are also a few works [16, 35, 36] taking both text and images into account to handle long sequential behaviors of users. These methods have proved the effectiveness of side information in understanding user interest. However, most of them conduct information fusion with simple concatenation or addition, leading to their failure in effectively merging various information. Moreover, they can neither alleviate the noise in various modalities nor distinguish the influence modes of distinct modalities on user behaviors. In addition, to the best of our knowledge, none of the existing methods collectively considers the entire multi-modal information displayed on websites, _i.e.,_ images, text and price, to simulate user behaviors. Thus, we propose a novel MMSBR to holistically reveal user intent from this multi-modal information, which is consistent with the genuine decision-making process.
### _Multi-modal Recommendation_
Multi-modal recommendation has received increasing attention recently, since we humans perceive the world by concurrently processing and fusing multi-modal information [19, 37]. To name a few, some methods [38, 39] employ Graph Neural Networks (GNN) and incorporate item images and text into the user-item interaction graph to facilitate the learning of user preferences and item characteristics. Besides, another line of research [40, 41] utilizes the pre-training technique to inject rich knowledge from item visual and textual modalities into recommender systems. More recently, BM3 [42] improves user and item representations by optimizing three multi-modal objectives, including replicating the user-item interaction graph and aligning modality features in inter- and intra-modality. Unfortunately, these methods fail to handle the situation of SBR because they require user identification and long-term behaviors to guide the model learning. Furthermore, no prior effort bridges multi-modal information and SBR; hence we are the first to fill this research gap.
## 3 Preliminaries
### _Problem Statement_
Session-based recommendation (SBR) is proposed to provide personalized services for anonymous users based on their short behavior sequences. Let \(\mathcal{I}\) signify the set of all unique items, where \(|\mathcal{I}|=n\) is the total number of items. Normally, as depicted in Fig. 1, an item \(x_{i}\in\mathcal{I}\) (\(1\leqslant i\leqslant n\)) is presented to users in the form of multi-modal information including item image (\(x_{i}^{img}\)), description text (\(x_{i}^{txt}\)) and price (\(x_{i}^{pri}\)), _i.e.,_\(x_{i}=\{x_{i}^{img},\,x_{i}^{txt},\,x_{i}^{pri}\}\). In SBR, an anonymous user has chronologically interacted with \(m\) items in a certain interval, producing a session \(\mathcal{S}=\) [\(x_{1},x_{2},...,x_{m}\)], where \(x_{i}\in\mathcal{I}\). Our goal is to predict the next item the user will prefer based on \(\mathcal{S}\). Note that we rely on the rich multi-modal information users can access, instead of dull item ID, to reveal user intent, which enables our MMSBR to capture user fine-grained preferences and support the cold-start scenario easily. The important notations used in this work are detailed in Table I.
\begin{table}
\begin{tabular}{p{142.3pt} p{142.3pt}} \hline \hline Notation & Description \\ \hline \(\mathcal{I}\), \(n=|\mathcal{I}|\) & item set, the total number of items \\ \(x_{i}\) & an item \\ \(\mathcal{S}=[x_{1},x_{2},...,x_{m}]\) & a session with \(m\) items \\ \(x_{i}^{img}\), \(x_{i}^{txt}\), \(x_{i}^{pri}\) & item image, description text and price \\ \(v_{i}^{pri}\) & price encoding \\ \(\mathbf{e}_{i}^{img}\) & actual image embedding \\ \(\mathbf{e}_{i}^{txt}\) & actual text embedding \\ \(\mathbf{e}_{i}^{pseimg}\) & pseudo image embedding \\ \(\mathbf{e}_{i}^{psetxt}\) & pseudo text embedding \\ \(\mathbf{e}_{i}\) & descriptive information embedding \\ \(\mathbf{e}_{i}^{pri}\) & numerical information embedding \\ \(\mathbf{s}_{d}\) & user deterministic taste \\ \(\mathbf{s}_{p}\) & user acceptable price range \\ \hline \hline \end{tabular}
\end{table} TABLE I: Important Notations.
### _Multi-modal Information Encoding_
Considering that distinct modalities are presented to users in completely different forms, _i.e.,_ images in RGB, text in symbolic words and price in real numbers, we need to handle this information via dedicated methods so that it can serve as input to neural models. In the following, we detail how we encode these modalities, _i.e.,_ image (\(x_{i}^{img}\)), text (\(x_{i}^{txt}\)) and price (\(x_{i}^{pri}\)).
**Image embedding.** The first thing that a user may notice while browsing e-commerce websites is the item image. Due to the strong ability of GoogLeNet [44] in extracting semantics from images [35], we apply it to obtain image embedding \(\mathbf{e}_{i}^{img}\in\mathbb{R}^{d}\) from original image \(x_{i}^{img}\) via,
\[\mathbf{e}_{i}^{img}=\mathrm{imgEmb}(x_{i}^{img}), \tag{1}\]
where the \(\mathrm{imgEmb}(\cdot)\) denotes the GoogLeNet model pre-trained on a large number of images.
**Text embedding.** After watching the image, the user further approaches the item by reading its description text. BERT [43] has been proved to be good at extracting text semantics by many studies [41, 45]. Therefore, we employ it to learn text embedding \(\mathbf{e}_{i}^{txt}\in\mathbb{R}^{d}\) from original description text \(x_{i}^{txt}\) via,
\[\mathbf{e}_{i}^{txt}=\mathrm{textEmb}(x_{i}^{txt}), \tag{2}\]
where the \(\mathrm{textEmb}(\cdot)\) denotes the BERT model pre-trained on a large text corpus.
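To make the two frozen encoders concrete, the following is a minimal sketch. The paper only names GoogLeNet and BERT; the specific checkpoints (`IMAGENET1K_V1`, `bert-base-uncased`), the Identity classification head used to expose GoogLeNet's 1024-d pooled features, and CLS-token pooling for BERT are our assumptions about one reasonable setup, not details stated in the paper.

```python
# Illustrative feature extraction with frozen GoogLeNet/BERT encoders.
import torch
import torchvision.models as models
from transformers import BertTokenizer, BertModel

img_encoder = models.googlenet(weights="IMAGENET1K_V1")
img_encoder.fc = torch.nn.Identity()   # expose the 1024-d pooled features
img_encoder.eval()

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
txt_encoder = BertModel.from_pretrained("bert-base-uncased").eval()

@torch.no_grad()
def img_emb(images):                   # images: (B, 3, 224, 224)
    return img_encoder(images)         # (B, 1024); later reduced to d = 64

@torch.no_grad()
def text_emb(texts):                   # texts: list of strings
    toks = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    return txt_encoder(**toks).last_hidden_state[:, 0]   # CLS token, (B, 768)
```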
**Price encoding.** After evaluating the descriptive image and text of an item, the user would check the item price to determine whether to purchase it. The absolute price cannot accurately indicate whether an item is expensive or not because the price varies greatly across different categories (_e.g.,_ tens of dollars for clothes and hundreds of dollars for electronics). Thus, for an item with price \(x_{i}^{pri}\) in a certain category, we encode its price level via,
\[v_{i}^{pri}=\lfloor\frac{x_{i}^{pri}-\min}{\max-\min}\times\rho\rfloor, \tag{3}\]
where [\(\min,\max\)] is the price range of its category, and \(\rho\) is the total number of price levels. Notably, such an operation enables item prices to be compared across different categories [14].
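A minimal sketch of Eq. (3) follows; the clamp that keeps the most expensive item of a category inside \([0,\rho-1]\) is our assumption about the edge case.

```python
def price_level(price, cat_min, cat_max, rho=100):
    """Eq. (3): map a raw price to one of rho levels within its category."""
    level = int((price - cat_min) / (cat_max - cat_min) * rho)
    return min(level, rho - 1)   # the category maximum would otherwise hit rho

# e.g. an item priced $120 in a category spanning $5-$200:
# price_level(120, 5, 200) -> 58
```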
## 4 Methodology
In this section, we will elaborate on the proposed MMSBR, which is illustrated in Fig. 2. MMSBR is mainly composed of the following interdependent components: (1) _Deterministic modeling_ is devised to handle descriptive information, _i.e.,_ item image and description text, to capture the user's deterministic taste; (2) _Probabilistic modeling_ is developed to cope with numerical information, _i.e.,_ item price, for modeling the user's acceptable price range; (3) _Prediction_ provides personalized services for individuals based on the entire multi-modal information displayed on pages.
### _Deterministic Modeling_
Deterministic modeling employs: (1) _pseudo-modality contrastive learning_ to refine descriptive information representations; (2) _hierarchical pivot transformer_ to fuse heterogeneous descriptive information; (3) _vanilla attention_ to capture user deterministic taste.
#### 4.1.1 **Pseudo-modality Contrastive Learning**
As stated before, there exists noise in item images and text, leading to inaccurate item semantics extraction. Contrastive learning can tackle this issue by maximizing the agreement between semantically similar pairs. However, the image and text embeddings of an item lie in distinct semantic spaces. Thus, directly viewing them as positive pairs would corrupt the original semantics. To obtain effective contrastive signals, we resort to data generation techniques to generate a pseudo modality that is aligned in the same space as the corresponding actual modality. Afterwards, with the generated contrastive signals, contrastive learning is utilized to refine image and text embeddings.
**Pseudo-modality generation.** DALLE [46] is an emerging technique to produce vivid images from short text. Therefore, for a piece of text \(x_{i}^{txt}\), we feed it into DALLE to generate the pseudo image \(x_{i}^{pseimg}\). We then use \(\mathrm{imgEmb}(\cdot)\) to get the pseudo image embedding \(\mathbf{e}_{i}^{pseimg}\in\mathbb{R}^{d}\) via,
\[\mathbf{e}_{i}^{pseimg}=\mathrm{imgEmb}(x_{i}^{pseimg}). \tag{4}\]
As to the image \(x_{i}^{img}\), we obtain its pseudo text by image classification. Concretely, we input \(x_{i}^{img}\) into GoogLeNet to perform image classification with 1,000 categories, where each category label signifies a short text. The predicted top-\(l\) categories, _i.e.,_ a set of short texts, are then concatenated as the pseudo text \(x_{i}^{psetxt}\). Afterwards, we get the pseudo text embedding \(\mathbf{e}_{i}^{psetxt}\in\mathbb{R}^{d}\) via,

\[\mathbf{e}_{i}^{psetxt}=\mathrm{textEmb}(x_{i}^{psetxt}). \tag{5}\]
**Contrastive learning.** The embeddings of actual and corresponding pseudo modalities, _i.e.,_\(\mathbf{e}_{i}^{img}\) to \(\mathbf{e}_{i}^{pseimg}\) (and \(\mathbf{e}_{i}^{txt}\) to \(\mathbf{e}_{i}^{psetxt}\)), describe the same item and locate in the same semantic space. Naturally, we view them as positive pairs in contrastive learning to enhance image and text embeddings via,
\[\begin{split}\mathcal{L}_{con}&=-\log\frac{\exp(\mathrm{sim}(\mathbf{e}_{i}^{img},\mathbf{e}_{i}^{pseimg}))}{\sum_{k=1}^{n}\exp(\mathrm{sim}(\mathbf{e}_{i}^{img},\mathbf{e}_{k}^{pseimg}))}\\ &-\log\frac{\exp(\mathrm{sim}(\mathbf{e}_{i}^{txt},\mathbf{e}_{i}^{psetxt}))}{\sum_{k=1}^{n}\exp(\mathrm{sim}(\mathbf{e}_{i}^{txt},\mathbf{e}_{k}^{psetxt}))},\end{split} \tag{6}\]
where the \(\mathrm{sim}(\cdot)\) is cosine similarity. In the first term, for an item image (\(\mathbf{e}_{i}^{img}\)), we view its pseudo image embedding (\(\mathbf{e}_{i}^{pseimg}\)) referring similar semantics as positives, while regarding other items' pseudo image embeddings (\(\mathbf{e}_{k}^{pseimg}\)) containing different contents as negatives. With pushing the positives close while pulling negatives apart, the MMSBR can enhance image embeddings. The second term does the same for refining text embeddings. With rich knowledge about corresponding modalities, the used data generation models not only align the positive pairs in the
same space but also make the pseudo modality contain the core semantics of the actual modality. As shown in Fig. 2, the pseudo image retains the core content, the cloth, and filters out the redundant pants and shoes. Obviously, this benefits the pseudo-modality contrastive learning in alleviating the noisy information existing in distinct modalities.
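A sketch of Eq. (6) is given below in an in-batch form: the sums over all \(n\) items are replaced by in-batch negatives, as is common in practice, and a temperature could be added; these choices are our assumptions, not details from the paper.

```python
import torch
import torch.nn.functional as F

def pseudo_modality_con_loss(e_img, e_pseimg, e_txt, e_psetxt):
    """In-batch Eq. (6): row i's pseudo embedding is the positive, the other
    rows' pseudo embeddings act as negatives."""
    def info_nce(actual, pseudo):
        sim = F.cosine_similarity(actual.unsqueeze(1), pseudo.unsqueeze(0), dim=-1)
        labels = torch.arange(actual.size(0), device=actual.device)
        return F.cross_entropy(sim, labels)   # -log softmax on the diagonal
    return info_nce(e_img, e_pseimg) + info_nce(e_txt, e_psetxt)
```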
#### 4.1.2 **Hierarchical Pivot Transformer**
As demonstrated earlier, we need to fuse image and text features for comprehensive user interest understanding. The Transformer structure has shown great potential in merging multi-modal signals, given that it can effectively mine complex relations among tokens in a sequence [17, 19]. Inspired by this, we first apply several distinct MLPs to convert the image/text embedding into different item feature embeddings and formulate a feature sequence for the image/text accordingly. Based on the feature sequences, a hierarchical pivot transformer is further proposed for effective descriptive information fusion.
**Image/Text features generation.** We apply MLP to obtain feature embeddings because many studies have demonstrated the effectiveness of MLP in capturing semantics of input data [18, 45]. Formally, an item image/text feature sequence (\(\mathbf{Z}_{img}/\mathbf{Z}_{txt}\)) is formulated via,
\[\mathbf{Z}_{img}=\{\text{MLP}_{1}^{img}(\mathbf{e}_{i}^{img}),..., \text{MLP}_{C}^{img}(\mathbf{e}_{i}^{img})\}, \tag{7}\] \[\mathbf{Z}_{txt}=\{\text{MLP}_{1}^{txt}(\mathbf{e}_{i}^{txt}),..., \text{MLP}_{C}^{txt}(\mathbf{e}_{i}^{txt})\}, \tag{8}\]
where \(\text{MLP}_{k}^{img}\) and \(\text{MLP}_{k}^{txt}\) denote feed-forward neural networks with two hidden layers, and \(C\) is the number of MLPs used for image/text feature extraction. Note that \(\text{MLP}_{k}^{img}(\mathbf{e}_{i}^{img})\) and \(\text{MLP}_{k}^{txt}(\mathbf{e}_{i}^{txt})\in\mathbb{R}^{d}\) are certain feature embeddings of image and text respectively.
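The feature-sequence generation of Eqs. (7)-(8) can be sketched as follows; the hidden sizes and activations of the two-hidden-layer MLPs are unspecified in the paper, so the ones below are illustrative.

```python
import torch
import torch.nn as nn

class FeatureSeq(nn.Module):
    """Eqs. (7)-(8): C distinct MLPs turn one modality embedding into a
    sequence of C feature tokens."""
    def __init__(self, d=64, C=4):
        super().__init__()
        self.mlps = nn.ModuleList(
            nn.Sequential(nn.Linear(d, d), nn.ReLU(),
                          nn.Linear(d, d), nn.ReLU(),
                          nn.Linear(d, d))
            for _ in range(C))

    def forward(self, e):   # e: (B, d) image or text embedding
        return torch.stack([mlp(e) for mlp in self.mlps], dim=1)   # (B, C, d)
```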
**Hierarchical pivot transformer.** A vanilla transformer layer mainly contains three modules: Multi-head Self-Attention (MSA), Layer Normalisation (LN) and Fully Connected Layer (FCL). We can define a transformer layer with input sequence \(\mathbf{F}^{l}=[\mathbf{f}_{1}^{in},\mathbf{f}_{2}^{in},...,\mathbf{f}_{k}^{in}]\) and output sequence \(\mathbf{F}^{l+1}=[\mathbf{f}_{1}^{out},\mathbf{f}_{2}^{out},...,\mathbf{f}_{k}^{out}]\) as \(\mathbf{F}^{l+1}=\text{Trans}(\mathbf{F}^{l})\) via,
\[\mathbf{F}_{*}^{l}=\text{MSA}(\text{LN}(\mathbf{F}^{l}))+\mathbf{F}^{l}, \tag{9}\]
\[\mathbf{F}^{l+1}=\text{FCL}(\text{LN}(\mathbf{F}_{*}^{l}))+\mathbf{F}_{*}^{l}. \tag{10}\]
Based on this, we further create a pivot \(\mathbf{P}=[\mathbf{p}_{1},...,\mathbf{p}_{T}]\) in each transformer layer to govern the fusion of multi-modal information, where \(\mathbf{p}_{i}\in\mathbb{R}^{d}\) is a trainable token embedding used to assist information transmission. The hierarchical pivot transformer integrates the information of image (\(\mathbf{Z}_{img}\)) and text (\(\mathbf{Z}_{txt}\)) via:
\[[\mathbf{Z}_{img}^{l+1},\mathbf{P}_{img}^{l}] =\text{Trans}([\mathbf{Z}_{img}^{l},\mathbf{P}^{l}]), \tag{11}\] \[\mathbf{P}_{*}^{l} =(\mathbf{P}_{img}^{l}+\mathbf{P}^{l})/2,\] (12) \[[\mathbf{Z}_{txt}^{l+1},\mathbf{P}_{txt}^{l}] =\text{Trans}([\mathbf{Z}_{txt}^{l},\mathbf{P}_{*}^{l}]),\] (13) \[\mathbf{P}^{l+1} =(\mathbf{P}_{txt}^{l}+\mathbf{P}_{*}^{l})/2, \tag{14}\]
where \(\mathbf{P}^{0}\) = \(\mathbf{P}\) (random initialization), \(\mathbf{Z}_{img}^{0}\) = \(\mathbf{Z}_{img}\) and \(\mathbf{Z}_{txt}^{0}\) = \(\mathbf{Z}_{txt}\). In each transformer layer, the pivot extracts and fuses important information from different modalities. Taking Eq. (13) as an example, the pivot absorbs text information and transmits image information to the text modality. To fully fuse descriptive information, we stack the hierarchical pivot transformer defined by Eqs. (11)-(14) \(R\) times. Finally, the last-layer pivot, passed through an MLP, is used to represent the descriptive information of an item \(x_{i}\) as \(\mathbf{e}_{i}\in\mathbb{R}^{d}\) via,
\[\mathbf{e}_{i}=\text{MLP}(\mathbf{P}^{R})=\text{MLP}([\mathbf{p}_{1}^{R}; \mathbf{p}_{2}^{R};...;\mathbf{p}_{T}^{R}]), \tag{15}\]
where \([;]\) denotes the concatenation operation, and MLP is a feed-forward neural network with two hidden layers.
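One round of Eqs. (11)-(14) can be sketched as below. We use a single pre-LN `nn.TransformerEncoderLayer` as Trans(·) and share it between the two modality passes; whether the paper shares or separates these layers, and the head count, are our assumptions.

```python
import torch
import torch.nn as nn

class PivotTransformerLayer(nn.Module):
    """Eqs. (11)-(14): the pivot visits the image tokens, is averaged with its
    previous state, then visits the text tokens and is averaged again."""
    def __init__(self, d=64, T=4, nhead=4):
        super().__init__()
        self.pivot = nn.Parameter(torch.randn(T, d))        # P^0
        self.trans = nn.TransformerEncoderLayer(
            d_model=d, nhead=nhead, batch_first=True,
            norm_first=True)                                # pre-LN as in Eqs. (9)-(10)

    def forward(self, z_img, z_txt):                        # (B, C, d) each
        B, T = z_img.size(0), self.pivot.size(0)
        p = self.pivot.unsqueeze(0).expand(B, -1, -1)
        out = self.trans(torch.cat([z_img, p], dim=1))      # Eq. (11)
        z_img, p_img = out[:, :-T], out[:, -T:]
        p = (p_img + p) / 2                                 # Eq. (12)
        out = self.trans(torch.cat([z_txt, p], dim=1))      # Eq. (13)
        z_txt, p_txt = out[:, :-T], out[:, -T:]
        p = (p_txt + p) / 2                                 # Eq. (14)
        return z_img, z_txt, p                              # stack R times
```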
Fig. 2: The proposed MMSBR customizes deterministic and probabilistic modeling to handle descriptive and numerical information respectively. In deterministic modeling, a pseudo-modality contrastive learning is designed to enhance descriptive information representations, a hierarchical pivot transformer is presented to fuse heterogeneous descriptive information, and a vanilla attention is used to capture user deterministic taste. The probabilistic modeling represents item price with Gaussian distribution embedding and devises Wasserstein self-attention to model user acceptable price range. Finally, we predict user behaviors based on the multi-modal information.
#### 4.1.3 **Vanilla Attention**
For an item \(x_{i}\), we have obtained its embedding \(\mathbf{e}_{i}\) for descriptive information involving image and text. Apparently, a user's deterministic taste is hidden in the items she has interacted with. Thus, based on the item sequence with descriptive information \(\mathbf{E}_{d}\) = \([\mathbf{e}_{1},\mathbf{e}_{2},...,\mathbf{e}_{m}]\), we can apply the vanilla attention as used in [10, 14] to obtain the user's deterministic taste \(\mathbf{s}_{d}\in\mathbb{R}^{d}\) via,
\[\mathbf{s}_{d} =\sum_{k=1}^{m}\alpha_{k}\mathbf{e}_{k}, \tag{16}\] \[\alpha_{k} =\mathbf{u}^{T}\sigma(\mathbf{A}_{1}\mathbf{e}_{k}+\mathbf{A}_{2}\mathbf{\bar{e}}+\mathbf{b}), \tag{17}\]

where \(\mathbf{A}_{1}\), \(\mathbf{A}_{2}\in\mathbb{R}^{d\times d}\) and \(\mathbf{b}\) are learnable parameters, \(\mathbf{u}\in\mathbb{R}^{d}\) is a trainable vector used to determine items' importance in the session, and \(\mathbf{\bar{e}}=\frac{1}{m}\sum_{k=1}^{m}\mathbf{e}_{k}\).
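A sketch of Eqs. (16)-(17) follows; we read \(\sigma\) as the sigmoid function, which is an assumption since the paper does not name the non-linearity.

```python
import torch
import torch.nn as nn

class VanillaAttention(nn.Module):
    """Eqs. (16)-(17): score each item against the session mean e-bar, then
    aggregate a weighted sum as the deterministic taste s_d."""
    def __init__(self, d=64):
        super().__init__()
        self.A1 = nn.Linear(d, d, bias=False)
        self.A2 = nn.Linear(d, d, bias=False)
        self.b = nn.Parameter(torch.zeros(d))
        self.u = nn.Linear(d, 1, bias=False)

    def forward(self, E):                         # E: (B, m, d)
        e_bar = E.mean(dim=1, keepdim=True)
        alpha = self.u(torch.sigmoid(self.A1(E) + self.A2(e_bar) + self.b))
        return (alpha * E).sum(dim=1)             # s_d: (B, d)
```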
### _Probabilistic Modeling_
Probabilistic Modeling employs: (1) _Gaussian distribution embedding_ to represent item price; (2) _Wasserstein self-attention_ to model user acceptable price range.
#### 4.2.1 **Gaussian Distribution Embedding**
As discussed before, user preferences on price exhibit a range property instead of a point-wise one. Therefore, we represent the price level \(v_{i}^{pri}\) of an item \(x_{i}\) with a Gaussian distribution via,
\[\hat{\mathbf{e}}_{i}^{pri}=\text{Gaussian}(v_{i}^{pri})\sim\mathcal{N}(\hat{\mu}_{i},\hat{\Sigma}_{i}), \tag{18}\]
where \(\hat{\mu}_{i}\) and \(\hat{\Sigma}_{i}\in\mathbb{R}^{d}\) are mean and covariance vectors respectively. MMSBR learns them with two distinct look-up embedding tables based on the item price level. Note that the mean and covariance vectors collectively signify the price range where the item falls. As indicated in [14], user price preferences are related to item category, so we further incorporate category information to formulate the price embedding \(\mathbf{e}_{i}^{pri}\) for the item \(x_{i}\) via,
\[\mathbf{e}_{i}^{pri}\sim\mathcal{N}(\mu_{i},\Sigma_{i})=\mathcal{N}(\hat{\mu} _{i}+\mathbf{e}_{i}^{c},\hat{\Sigma}_{i}+\mathbf{e}_{i}^{c}), \tag{19}\]
where \(\mathbf{e}_{i}^{c}\in\mathbb{R}^{d}\) is the category embedding of the item. It is noted that an item price is represented by a Gaussian distribution instead of the widely used point-wise vector embedding, which endows MMSBR with the ability to perceive the range property of item price.
#### 4.2.2 **Wasserstein Self-attention**
Self-attention is employed by various approaches [14, 17] to model behavior sequences due to its capacity in capturing item-item transition patterns. However, the conventional self-attention calculates similarity between point-wise vector embeddings with dot product, which is unsuitable for our settings where the price is represented by Gaussian distribution. Therefore, we devise a Wasserstein self-attention which applies Wasserstein distance [20, 21] to obtain attention scores between price distribution embeddings. Formally, the Wasserstein distance between two Gaussian distribution embeddings \(\mathcal{G}_{1}\sim\mathcal{N}(\mu_{1},\Sigma_{1})\) and \(\mathcal{G}_{2}\sim\mathcal{N}(\mu_{2},\Sigma_{2})\) is defined as,
\[\mathcal{W}_{2}(\mathcal{G}_{1},\mathcal{G}_{2})=\sqrt{\left\|\mu_{1}-\mu_{2} \right\|_{2}^{2}+\left\|(\mathbf{\Sigma}_{1})^{\frac{1}{2}}-(\mathbf{\Sigma}_ {2})^{\frac{1}{2}}\right\|_{2}^{2}}. \tag{20}\]
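For diagonal Gaussians, where the covariance vectors hold the diagonal entries, Eq. (20) reduces to the element-wise form sketched below (the small epsilon is ours, for numerical stability).

```python
import torch

def w2_diag_gauss(mu1, sigma1, mu2, sigma2, eps=1e-8):
    """Eq. (20) with diagonal covariances: the matrix square roots become
    element-wise square roots of the (non-negative) diagonal entries."""
    return torch.sqrt(((mu1 - mu2) ** 2).sum(-1)
                      + ((sigma1.clamp(min=0).sqrt()
                          - sigma2.clamp(min=0).sqrt()) ** 2).sum(-1) + eps)
```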
Referring to conventional self-attention, Wasserstein self-attention (WSA) handles price sequence \(\mathbf{E}_{p}\)=\([\mathbf{e}_{1}^{pri},\,\mathbf{e}_{2}^{pri},\,...,\,\mathbf{e}_{m}^{pri}]\) via,
\[\mathbf{H}=\text{WSA}(A^{Q}\mathbf{E}_{p},A^{K}\mathbf{E}_{p},A^{V}\mathbf{E}_ {p}), \tag{21}\]
where \(\mathbf{H}=\{\mathbf{h}_{1},\mathbf{h}_{2},...,\mathbf{h}_{m}\}\) is the output and \(A^{*}=(A^{*}_{\mu},A^{*}_{\Sigma})\) (\(*\in\{Q,K,V\}\)) is used to map each distribution in \(\mathbf{E}_{p}\) into query, key and value spaces respectively. \(A^{*}_{\mu}\) or \(A^{*}_{\Sigma}\in\mathbb{R}^{d\times d}\) converts mean or covariance embeddings to the corresponding spaces. Afterwards, the Wasserstein distance is employed to calculate the attention scores between query \(A^{Q}\mathbf{e}_{i}^{pri}\) and key \(A^{K}\mathbf{e}_{j}^{pri}\) via,
\[\begin{split} a_{ij}&=\mathcal{W}_{2}(A^{Q}\mathbf{e }_{i}^{pri},A^{K}\mathbf{e}_{j}^{pri})\\ &=\mathcal{W}_{2}(\mathcal{N}(A_{\mu}^{Q}\mu_{i},A_{\Sigma}^{Q} \mathbf{\Sigma}_{i}),\mathcal{N}(A_{\mu}^{K}\mu_{j},A_{\Sigma}^{K}\mathbf{ \Sigma}_{j})).\end{split} \tag{22}\]
We then linearly sum up each value \(A^{V}\mathbf{e}_{j}^{pri}\sim\mathcal{N}(A_{\mu}^{V}\mu_{j},A_{\Sigma}^{V}\mathbf{\Sigma}_{j})\) according to its attention score \(a_{ij}\) with respect to the \(i\)-th item price to obtain the \(i\)-th output \(\mathbf{h}_{i}\sim\mathcal{N}(\mathbf{h}_{i}^{\mu},\mathbf{h}_{i}^{\Sigma})\) via,
\[\mathbf{h}_{i}^{\mu}=\sum_{j=1}^{m}a_{ij}A_{\mu}^{V}\mu_{j},\text{ and }\mathbf{h}_{i}^{\Sigma}=\sum_{j=1}^{m}a_{ij}^{2}A_{\Sigma}^{V}\mathbf{\Sigma}_{j}. \tag{23}\]
Finally, the hidden state \(\mathbf{h}_{m}\) is used to represent acceptable price range \(\mathbf{s}_{p}\) for the user via,
\[\mathbf{s}_{p}=\mathbf{h}_{m}\sim\mathcal{N}(\mathbf{h}_{m}^{\mu},\mathbf{h}_{ m}^{\Sigma}). \tag{24}\]
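The attention mechanism of Eqs. (21)-(24) might be sketched as follows. The paper leaves the normalization of the distance-based scores implicit, so we softmax the negative distances (closer price distributions attend more) and take absolute values to keep covariances non-negative; both are our assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WassersteinSelfAttention(nn.Module):
    """Sketch of Eqs. (21)-(24) over a session of m price Gaussians."""
    def __init__(self, d=64):
        super().__init__()
        self.q_mu, self.k_mu, self.v_mu = (nn.Linear(d, d, bias=False) for _ in range(3))
        self.q_sg, self.k_sg, self.v_sg = (nn.Linear(d, d, bias=False) for _ in range(3))

    def forward(self, mu, sigma):                 # both (B, m, d)
        q_m, k_m, v_m = self.q_mu(mu), self.k_mu(mu), self.v_mu(mu)
        q_s, k_s, v_s = (f(sigma).abs() for f in (self.q_sg, self.k_sg, self.v_sg))
        dist = torch.sqrt(                        # pairwise W2 distances, Eq. (22)
            ((q_m.unsqueeze(2) - k_m.unsqueeze(1)) ** 2).sum(-1)
            + ((q_s.sqrt().unsqueeze(2) - k_s.sqrt().unsqueeze(1)) ** 2).sum(-1)
            + 1e-8)
        a = F.softmax(-dist, dim=-1)              # (B, m, m), assumed normalization
        h_mu = a @ v_m                            # Eq. (23), mean part
        h_sg = (a ** 2) @ v_s                     # Eq. (23), covariance part
        return h_mu[:, -1], h_sg[:, -1]           # s_p = h_m, Eq. (24)
```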
### _Prediction_
So far, for an item \(x_{i}\), we have obtained its comprehensive representation (\(\mathbf{e}_{i},\mathbf{e}_{i}^{pri}\)) based on its multi-modal information, where \(\mathbf{e}_{i}\) is derived from descriptive information (image and text) and \(\mathbf{e}_{i}^{pri}\sim\mathcal{N}(\mu_{i},\mathbf{\Sigma}_{i})\) comes from numerical information (price). As to an anonymous user, \(\mathbf{s}_{d}\) represents her deterministic taste on descriptive information, and \(\mathbf{s}_{p}\) indicates her acceptable price range. Based on the entire multi-modal information displayed on pages, thus, we can infer the probability of the user clicking item \(x_{i}\) via,
\[\hat{y}_{i}=softmax(\mathbf{e}_{i}\mathbf{s}_{d}+\mathcal{W}_{2}(\mathbf{e}_{i}^ {pri},\mathbf{s}_{p})), \tag{25}\]
where we evaluate user deterministic behaviors with dot-product and user probabilistic behaviors with Wasserstein distance. As in [14, 10, 4], we employ cross-entropy to improve recommendation performance via:
\[\mathcal{L}_{rec}=-\sum_{i=1}^{n}y_{i}\log(\hat{y}_{i})+(1-y_{i})\log(1-\hat{y} _{i}), \tag{26}\]
where \(y_{i}\) is the ground-truth label and \(\hat{y}_{i}\) is the predicted probability of item \(x_{i}\) being clicked. Finally, we train our MMSBR under the joint supervision of recommendation and contrastive learning via,
\[\mathcal{L}=\mathcal{L}_{rec}+\lambda\mathcal{L}_{con}, \tag{27}\]
where the \(\lambda\) is a constant controlling the strength of contrastive learning task.
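For one session, Eqs. (25)-(27) can be sketched as below; following Eq. (25) literally, the raw W2 term is added to the dot-product score, and we use the standard softmax cross-entropy, which matches Eq. (26) for one-hot labels. `target` is assumed to be a 0-d long tensor holding the clicked item's index.

```python
import torch
import torch.nn.functional as F

def mmsbr_loss(s_d, sp_mu, sp_sg, E, mu, sigma, target, l_con, lam=0.01):
    """Eqs. (25)-(27): E/mu/sigma hold all n items' descriptive embeddings and
    price Gaussians; s_d and (sp_mu, sp_sg) summarize the session."""
    det = E @ s_d                                              # (n,) dot products
    w2 = torch.sqrt(((mu - sp_mu) ** 2).sum(-1)
                    + ((sigma.sqrt() - sp_sg.sqrt()) ** 2).sum(-1) + 1e-8)
    logits = det + w2                                          # Eq. (25)
    l_rec = F.cross_entropy(logits.unsqueeze(0), target.view(1))   # Eq. (26)
    return l_rec + lam * l_con                                 # Eq. (27)
```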
## 5 Experimental Setup
### _Research Questions_
We conduct extensive experiments to validate the effectiveness of MMSBR by answering the following research questions:
* **RQ1** Does the proposed MMSBR achieve state-of-the-art performance? (ref. Section 6.1)
* **RQ2** What is the effect of various novel techniques proposed in MMSBR? (ref. Section 6.2-6.4)
* **RQ3** What is the performance of MMSBR under cold-start scenario? (ref. Section 6.5)
* **RQ4** How does session length influence the performance of SBR? (ref. Section 6.6)
* **RQ5** What is the influence of different modalities on the performance of SBR? (ref. Section 6.7)
* **RQ6** What is the influence of key hyperparameters on MMSBR? (ref. Section 6.8)
### _Datasets and Preprocessing_
We evaluate our MMSBR and all baselines on three datasets covering different characteristics and domains from Amazon1, _i.e._, Cell Phones and Accessories (**Cellphones**), Grocery and Gourmet Food (**Grocery**), as well as Sports and Outdoors (**Sports**). Following [14], we organize user behaviors within one day to imitate the SBR scenario. The last item in a session is taken as the prediction target, and the remaining items are used to model user intent. As in [4, 10], we filter out sessions whose length is 1 and items appearing less than 5 times. Also, we delete items with missing or invalid images/text. We chronologically split each dataset into three parts with the ratio of 7:2:1 for training, validation and testing respectively. Relying on item ID, existing models [4, 13, 14, 47, 9] cannot handle cold-start items which do not appear in training sets, so they simply delete these items from test sets. Following their settings, we also remove the cold-start items; the datasets' statistics are shown in Table II. Besides, we retain cold-start items to investigate the performance of MMSBR under the cold-start scenario in Section 6.5, where the cold-start situation is reported in Table V. Note that, although our setting is ubiquitous in real scenes, available datasets containing images, text and price are very scarce. Thus, we sincerely hope that our work can foster the development of multi-modal datasets for SBR.
Footnote 1: [http://jmcauley.ucsd.edu/data/amazon/](http://jmcauley.ucsd.edu/data/amazon/)
### _Evaluation Metrics_
As in [4, 10, 14], we evaluate the performance of MMSBR and baselines with the following two widely used metrics: **Prec@k** (Precision) calculates the proportion of cases where the target item is within the recommendation list; **MRR@k** (Mean Reciprocal Rank) is the average of the reciprocal ranks of the target item in the recommendation list. Similar to [13, 14, 47, 48], k is set to 10 and 20 in this work.
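Per session, the two metrics reduce to the simple sketch below; results are then averaged over all test sessions.

```python
def prec_mrr_at_k(ranked_items, target, k=20):
    """Prec@k: 1 if the target is in the top-k list, else 0.
    MRR@k: reciprocal rank of the target within the top-k list, 0 if absent."""
    topk = ranked_items[:k]
    if target in topk:
        return 1.0, 1.0 / (topk.index(target) + 1)
    return 0.0, 0.0
```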
### _Baselines_
The following two groups of competitive methods are selected as baselines for performance comparison:
**Co-occurrence based methods** focus on mining item co-occurrence patterns to provide recommendation:
* **S-POP** recommends the most frequent items in the current session;
* **SKNN** predicts next items based on items' co-occurrence frequency in all sessions;
* **NARM**[4] utilizes GRU with attention mechanism to capture user main intent;
* **SASRec**[49] applies Transformer architecture to model transitions among items;
* **BERT4Rec**[9] employs bidirectional self-attention to model user behaviors;
* **SR-GNN**[10] captures complex relations among items via GNN;
* **COTREC**[47] enhances item embeddings by contrastive learning.
* **MSGIFSR**[48] studies fine-grained co-occurrence relations by dividing a session into multiple snippets.
**Side information enhanced methods** utilize extra information to facilitate user preferences learning:
* **MGS**[13] exploits item categories for more accurate preferences estimation;
* **UniSRec**[12] incorporates description text of items to obtain universal sequence representations;
* **CoHHN**[14] emphasizes the significance of price in determining user choices.
We have not included MML [16], which focuses on text and image-based long sequence learning, in our baselines. This decision was made because MML randomly deletes some items within a sequence during model training, which is unsuitable for SBR where a session typically consists of only a few items (as shown in Table II).
### _Implementation Details_
To ensure a fair comparison, we fix the embedding size of all methods at 64. The other hyperparameters of MMSBR and all baselines are determined via grid search according to their performance on Prec@20 on the validation set. For the main hyperparameters of MMSBR, we investigate the number of stacked layers for the hierarchical pivot transformer \(R\) in \(\{1,2,3,4,5\}\), the number of generated features \(C\) in \(\{2,4,6,8,10\}\) and the number of tokens in the pivot \(T\) in \(\{1,2,3,4,5,6\}\). Besides, we fix the balance coefficient \(\lambda\) = 0.01, retain the top-2 (\(l\)=2) categories from image classification as pseudo text, and set the number of price levels as 100 (\(\rho\) = 100) for all datasets. The mini-batch size and initial learning rate are \(100\) and \(0.001\), respectively. Given that the output dimensions of GoogLeNet and BERT are 1024 and 768 respectively, we utilize the PCA algorithm to reduce them to 64. We have released the source code of our work2.
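The dimensionality-reduction step mentioned above amounts to the following sketch (random arrays stand in for the real extracted features).

```python
import numpy as np
from sklearn.decomposition import PCA

img_feats = np.random.randn(8614, 1024)   # placeholder GoogLeNet features
txt_feats = np.random.randn(8614, 768)    # placeholder BERT features

e_img = PCA(n_components=64).fit_transform(img_feats)   # (8614, 64)
e_txt = PCA(n_components=64).fit_transform(txt_feats)   # (8614, 64)
```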
\begin{table}
\begin{tabular}{c c c c} \hline \hline Datasets & Cellphones & Grocery & Sports \\ \hline \#item & 8,614 & 11,638 & 18,796 \\ \#category & 48 & 665 & 1,259 \\ \#interaction & 196,376 & 364,728 & 566,504 \\ \#session & 78,026 & 127,548 & 211,959 \\ avg.length & 2.52 & 2.86 & 2.67 \\ \hline \hline \end{tabular}
\end{table} TABLE II: Statistics of all datasets.
## 6 Results and Analysis
### _Overall Performance (RQ1)_
We report the performance of MMSBR and all baselines in Table III, where the following observations are noted:
(1) Among co-occurrence based methods, COTREC and MSGIFSR achieve competitive performance. We speculate that COTREC's good performance comes from its utilization of contrastive learning to improve session embeddings. As to MSGIFSR, it divides a session into many snippets containing consecutive items, enabling it to capture fine-grained co-occurrence relations among items.
(2) For methods with side information enhancement, CoHHN (price) and UniSRec (text) have obvious advantages over MGS (category). As opposed to category, price and text are what users can immediately observe on item pages. This observation supports our claim that modeling what is displayed on websites benefits capturing user intent.
(3) Compared with co-occurrence based methods, the side information enhanced methods generally perform better. This signifies the validity of extra information in modeling user behaviors. It makes sense since side information enables models to mine various user preferences, leading to effective intent understanding.
(4) Different baselines have varying performance on various datasets. Taking CoHHN as an example, it achieves the best performance on Grocery among all baselines, while its results on Sports leave something to be desired. These methods just focus on a part of the information that users may access, either item ID, category, text or price. In fact, however, a user evaluates all available information before making decisions. Therefore, they are incapable of capturing user preferences holistically, which results in their inferiority in discerning user intent across various contexts (_i.e.,_ datasets).
(5) The proposed MMSBR consistently outperforms all baselines in terms of all evaluation metrics on all datasets, which demonstrates its effectiveness for SBR. In particular, MMSBR surpasses the best baselines in Prec@20 and MRR@20 by 3.14% and 5.33% on Cellphones, 1.56% and 1.35% on Grocery and 1.45% and 2.02% on Sports. Given that a user makes decisions by evaluating item images, text and price, the modeling of the entire multi-modal information in MMSBR is in line with the decision process, contributing to revealing user intent more effectively. Besides, with reference to Table II and Table III, we find that MMSBR obtains the largest improvements on Cellphones, which contains the fewest items among all datasets. We argue that the introduction of multi-modal information enriches the data and enables MMSBR to understand user demands from multiple perspectives. Therefore, the proposed MMSBR achieves impressive performance under the condition of sparse data. It also reminds researchers that multi-modal information is an antidote to the sparsity issue.
### _Effect of Pseudo-modality Contrastive Learning (RQ2)_
To obtain relevant semantics from descriptive information under distinct noise, we propose a pseudo-modality contrastive learning. It refines image and text embeddings via contrastive learning, where generated pseudo modalities are used as contrastive signals. MMSBR-con removes contrastive learning from MMSBR, _i.e.,_ it directly fuses outputs of different modality encoders without handling
Fig. 3: The effect of pseudo-modality contrastive learning.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{4}{c}{Cellphones} & \multicolumn{4}{c}{Grocery} & \multicolumn{4}{c}{Sports} \\ \cline{2-13} & Prec@10 & MRR@10 & Prec@20 & MRR@20 & Prec@10 & MRR@10 & Prec@20 & MRR@20 & Prec@10 & MRR@10 & Prec@20 & MRR@20 \\ \hline S-POP & 5.32 & 2.71 & 7.24 & 2.85 & 20.65 & 17.00 & 23.64 & 17.25 & 15.61 & 14.56 & 17.59 & 14.69 \\ SKNN & 21.07 & 9.95 & 24.71 & 10.21 & 39.83 & 25.15 & 41.88 & 25.29 & 31.79 & 21.31 & 33.86 & 21.46 \\ NARM & 20.59 & 15.32 & 24.12 & 15.56 & 40.39 & 34.53 & 42.41 & 34.62 & 31.64 & 26.94 & 34.17 & 27.12 \\ SASRec & 23.37 & 15.47 & 27.58 & 15.76 & 40.97 & 34.76 & 43.02 & 34.92 & 31.54 & 26.68 & 34.11 & 26.87 \\ BERT4Rec & 22.28 & 14.39 & 27.09 & 14.73 & 40.59 & 34.09 & 42.93 & 34.31 & 31.57 & 26.85 & 34.32 & 27.07 \\ SR-GNN & 21.80 & 15.60 & 25.08 & 15.77 & 40.81 & 34.89 & 42.74 & 35.01 & 31.96 & 27.43 & 34.29 & 27.51 \\ COTREC & 23.78 & 10.82 & 28.33 & 11.13 & 41.28 & 30.60 & 43.24 & 30.75 & 32.16 & 23.28 & 35.13 & 23.46 \\ MSGIFSR & 20.92 & 14.53 & 24.51 & 14.77 & 41.34 & 35.25 & 43.40 & 35.47 & 32.28 & 27.56 & 34.95 & 27.72 \\ MGS & 21.74 & 14.29 & 25.21 & 14.54 & 40.92 & 35.06 & 42.79 & 35.20 & 31.63 & 26.75 & 33.76 & 26.89 \\ UniSRec & 22.73 & 15.36 & 26.65 & 15.63 & 41.40 & 35.12 & 43.44 & 35.24 & 31.90 & 26.91 & 34.41 & 27.04 \\ CoHHN & 23.60 & 15.77 & 27.71 & 15.96 & 41.58 & 35.33 & 43.59 & 35.58 & 32.12 & 27.13 & 35.02 & 27.31 \\ \hline
**MMSBR** & \(\mathbf{24.37^{*}}\) & \(\mathbf{16.47^{*}}\) & \(\mathbf{29.22^{*}}\) & \(\mathbf{16.81^{*}}\) & \(\mathbf{42.10^{*}}\) & \(\mathbf{35.91^{*}}\) & \(\mathbf{44.27^{*}}\) & \(\mathbf{36.06^{*}}\) & \(\mathbf{32.89^{*}}\) & \(\mathbf{28.10^{*}}\) & \(\mathbf{35.64^{*}}\) & \(\mathbf{28.28^{*}}\) \\ \hline \hline \end{tabular}
\end{table} TABLE III: Performance comparison of MMSBR with baselines over three datasets. The results (%) produced by the best baseline and the best performer in each column are underlined and boldfaced respectively. Statistical significance of pairwise differences for MMSBR against the best baseline (*) is determined by the t-test (\(p<0.01\)).
distinct noise. MMSBR-pse projects embeddings of different modalities into a shared space via MLP and conducts contrastive learning accordingly, as in [15, 18, 29], while ignoring the semantic gaps existing between distinct modalities.
As shown in Fig. 3, on Cellphones and Grocery, both MMSBR and MMSBR-pse outperform MMSBR-con, demonstrating that contrastive learning can enhance modality representation. Besides, MMSBR-pse is defeated by MMSBR-con on Sports. This supports our hypothesis that semantic gaps between distinct modalities may impede representation learning. Thus, directly contrasting different modalities of an item can in turn lead to performance degradation. Moreover, MMSBR performs much better than MMSBR-pse on all datasets, which indicates that the generated pseudo modalities can fill such semantic gaps. Additionally, MMSBR achieves the best performance across all variants, which indicates the superiority of pseudo-modality contrastive learning in mitigating the distinct noise existing in different modalities.
### _Effect of Hierarchical Pivot Transformer (RQ2)_
A user usually makes the decision after evaluating shared and distinct information from descriptive information. Therefore, we propose a novel hierarchical pivot transformer for heterogeneous information fusion. Following conventional operations [18, 45], MMSBR\({}_{mlp}\) maps image and text into the same space by MLP and concatenates their embeddings to fuse item descriptive information.
As shown in Table IV, MMSBR\({}_{mlp}\) is defeated by MMSBR by a large margin, which indicates the effectiveness of the hierarchical pivot transformer in capturing complementary information from images and text. We believe that the pivot in each transformer layer is able to extract and integrate meaningful information from distinct modalities, thus facilitating effective information fusion. Furthermore, MMSBR\({}_{mlp}\) achieves competitive performance (especially in MRR@20) compared with the best baselines MSGIFSR and COTREC. This serves as further evidence that modeling multi-modal information, rather than only mining co-occurrence patterns of item ID, can lead to better user intent learning.
### _Effect of Probabilistic Modeling (RQ2)_
As discussed previously, different from descriptive information where a user's taste is deterministic, the numerical information, _i.e._, item price, affects user behaviors in a probabilistic way. Therefore, we propose a probabilistic modeling to handle this situation, where the Gaussian distribution and Wasserstein Self-attention are devised to represent item price and model user acceptable price range respectively. Following [14], the variant MMSBR\({}_{de}\) represents item price with point-wise vector embeddings instead of distribution ones. Specifically, MMSBR\({}_{de}\) first discretizes continuous item price into discrete price-level, and then obtains point-wise embedding for the price via look-up embedding table. In other words, it does not discriminate distinct influence modes of various information on user choices.
As presented in Fig. 4, MMSBR significantly outperforms MMSBR\({}_{de}\) in all cases, confirming the validity of the proposed probabilistic modeling in tackling numerical information. Moreover, it demonstrates that users exhibit different behavioral patterns on different information, _i.e._, a deterministic/probabilistic mode on the descriptive/numerical information. By utilizing Gaussian distribution embeddings and Wasserstein self-attention, MMSBR is able to learn the user's acceptable price range, leading to its good performance on user behavior modeling. In addition, distinguishing the influence modes of different types of information in a fine-grained manner is advantageous to user behavior modeling, which is a valuable reference for future research.
### _Performance in Cold-start Scenario (RQ3)_
Recommendation systems have long struggled with the cold-start problem, where they are required to show users new items that have never appeared in the system before. To evaluate the performance of MMSBR under the cold-start scenario, we retain fresh items which do not appear in the training sets in the test sets, where the statistics are presented in Table V. We can get the following insights from Fig. 5:
(1) When encountering new items, all models show deteriorated performance, indicating that cold-start is truly a tricky issue in SBR. Fortunately, the incorporation of extra information can aid in the portrayal of new items, leading to impressive performance in the cold-start situation. For instance, in Sports, COTREC/MSGIFSR based on co-occurrence patterns defeats CoHHN incorporating price
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{Cellphones} & \multicolumn{2}{c}{Grocery} & \multicolumn{2}{c}{Sports} \\ \cline{2-7} & Prec@20 & MRR@20 & Prec@20 & MRR@20 & Prec@20 & MRR@20 \\ \hline COTREC & 28.33 & 11.13 & 43.24 & 30.75 & 35.13 & 23.46 \\ MSGIFSR & 24.51 & 14.77 & 43.40 & 35.47 & 34.95 & 27.72 \\ MMSBR\({}_{mlp}\) & 26.74 & 15.95 & 42.93 & 35.28 & 34.67 & 27.86 \\
**MMSBR** & **29.22\({}^{*}\)** & **16.81\({}^{*}\)** & **44.27\({}^{*}\)** & **36.06\({}^{*}\)** & **35.64\({}^{*}\)** & **28.28\({}^{*}\)** \\ \hline \hline \end{tabular}
\end{table} TABLE IV: The effect of hierarchical pivot transformer.
Fig. 4: The effect of probabilistic modeling.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Datasets & Cellphones+ & Grocery+ & Sports+ \\ \hline \#item & 10,245(+1631) & 13,493(+1855) & 22,049(+3253) \\ \#category & 48(-) & 678(+13) & 1,312(+53) \\ \#interaction & 199,065(+2689) & 367,674(+2946) & 571,789(+5285) \\ \#session & 78,987(+961) & 128,510(+962) & 213,787(+1828) \\ avg.length & 2.52(-) & 2.86(-) & 2.67(-) \\ \hline \hline \end{tabular}
\end{table} TABLE V: Statistics of datasets with cold-start items.
and category on Prec@20/MRR@20 for non-cold items, whereas CoHHN outperforms them both in the cold-start scenario. (2) The co-occurrence based methods are prone to fail in the cold-start scenario, which is intuitive as there are no co-occurrence patterns for them to learn. Solely relying on co-occurrence patterns exposed by item ID, these methods can do nothing but blindly guess user interest in new items with random embeddings, resulting in their inferior performance. (3) The proposed MMSBR outperforms all methods by a large margin under the cold-start scenario, indicating that MMSBR can effectively alleviate the cold-start problem. Furthermore, our MMSBR has the least performance degradation compared with other methods in the cold-start scenario. We believe that holistically modeling the multi-modal information a user can access enables MMSBR to mine her fine-grained preferences to the maximum, thus achieving impressive results. It also reminds researchers that utilizing multi-modal information is a promising way to cope with the cold-start issue.
### _Impact of Various Session Lengths (RQ4)_
The session length can significantly affect recommendation performance since it signifies how much information we can obtain to model user intent. Therefore, we investigate the performance of MMSBR under different session lengths. As shown in Fig. 6, the following observations are noted:
(1) The proposed MMSBR achieves larger improvements over baselines on short sessions (\(\leqslant 3\)) than on long sessions (\(>3\)). Obviously, it is hard for co-occurrence based methods to accurately predict user behaviors within short sessions, since there is limited information for them to capture user intent. In contrast, our MMSBR can identify fine-grained preferences of users from rich multi-modal information, which alleviates the data sparsity existing in short sessions. (2) Models perform better on short sessions than on long ones on Cellphones and Sports. Instead, they perform well on long sessions but poorly on short ones on Grocery. According to Table II, sessions in Grocery are much longer than those in Cellphones and Sports. We speculate that the much larger number of instances concentrated in long sessions makes models achieve better performance on long sessions on Grocery. (3) MMSBR achieves the best results in all cases, which demonstrates its effectiveness in modeling user behaviors in SBR again.
### _Ablation Study (RQ5)_
In this part, we further zoom into each modality to see its specific influence on MMSBR. We successively remove each modality from MMSBR to conduct ablation study. Notably, in (a)/(b) of Table VI, the item image/text is only used to refine text/image in pseudo-modality contrastive learning while we do not include it to model user interest.
As shown in Table VI, different modalities show various influences on MMSBR's performance in distinct contexts. For instance, without price, (c) is overwhelmed by (a) and (b) on Cellphones, while its performance is better than (a) and (b) on the other datasets. We speculate that, for electronics, users are concerned with the price because there may be a huge price gap between cellphones of different brands. As to Grocery, users tend to care about practicality instead of price. Moreover, MMSBR achieves much better performance than all variants. It supports our motivation that a user's behaviors are determined by the entire multi-modal information displayed on pages. Thus, it is rational and imperative to model user preferences by considering this multi-modal information holistically.
### _Hyperparameter Study (RQ6)_
In this section, we investigate the influence of three main hyperparameters on MMSBR.
**The number of stacked layers for hierarchical pivot transformer \(R\).** From the first row in Fig. 7, we can find that the optimal \(R\) for Cellphones/Grocery and Sports is 3 and 4 respectively. As shown in Table II, Sports contains many more items than the other datasets. We speculate that MMSBR needs to repeat the hierarchical pivot transformer more times to fully integrate heterogeneous information on a larger dataset.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{Cellphones} & \multicolumn{2}{c}{Grocery} & \multicolumn{2}{c}{Sports} \\ \cline{2-7} & Prec@20 & MRR@20 & Prec@20 & MRR@20 & Prec@20 & MRR@20 \\ \hline (a) w/o image & 27.45 & 14.85 & 41.23 & 35.20 & 32.14 & 27.50 \\ (b) w/o text & 27.19 & 14.69 & 41.11 & 35.08 & 32.22 & 27.42 \\ (c) w/o price & 25.10 & 13.35 & 42.98 & 35.57 & 34.78 & 27.68 \\ \hline **MMSBR** & **29.22\({}^{*}\)** & **16.81\({}^{*}\)** & **44.27\({}^{*}\)** & **36.06\({}^{*}\)** & **35.64\({}^{*}\)** & **28.28\({}^{*}\)** \\ \hline \hline \end{tabular}
\end{table} TABLE VI: The influence of different modalities.
Fig. 5: Performance in cold-start scenario.
Fig. 6: Impact of various session lengths.
**The number of generated features \(C\).** As shown in the middle row in Fig. 7, a small value of \(C\), _i.e.,_ 4, can make MMSBR achieve satisfactory results. This is consistent with cognitive anthropology, which holds that humans can only pay attention to a few aspects (features) of a matter (item) simultaneously.
**The number of tokens in pivot \(T\).** Referring to the last row in Fig. 7, if \(T\) is set too small, the pivot cannot effectively extract information from different modalities. In contrast, if it is set too large, the information is sparsely distributed over the tokens, which is also adverse to information fusion. Accordingly, we empirically fix \(T\) at 4 for all datasets.
## 7 Conclusion and Future Work
Existing methods for session-based recommendation mostly concentrate on mining limited item co-occurrence patterns exposed by item ID, while ignoring that it is the rich multi-modal information displayed on pages that attracts users to engage with certain items. Based on this motivation, we propose a novel MMSBR to characterize user preferences by modeling multi-modal information including descriptive information (images and text) and numerical information (price). Specifically, we devise a pseudo-modality contrastive learning to obtain relevant semantics of item images and text. Afterwards, a hierarchical pivot transformer is presented to effectively fuse heterogeneous descriptive information. For numerical information, we first represent item price with a Gaussian distribution and devise a Wasserstein self-attention to model the user's acceptable price range. Comprehensive experiments conducted on three public datasets demonstrate the superiority of MMSBR over state-of-the-art baselines. Additional research also validates the effectiveness of MMSBR under the cold-start scenario.
In the future, we plan to explore user reviews on items to further mine users' fine-grained preferences for SBR. Besides, despite being tailored for SBR, the proposed pseudo-modality contrastive learning and hierarchical pivot transformer can be easily extended to other multi-modal tasks for effective multi-modal learning.
## Acknowledgments
We thank the editors and reviewers for their efforts and constructive comments. This work was supported by Natural Science Foundation of China (No.62076046, No.62006034).
|
2301.13473 | CRC-RL: A Novel Visual Feature Representation Architecture for
Unsupervised Reinforcement Learning | This paper addresses the problem of visual feature representation learning
with an aim to improve the performance of end-to-end reinforcement learning
(RL) models. Specifically, a novel architecture is proposed that uses a
heterogeneous loss function, called CRC loss, to learn improved visual features
which can then be used for policy learning in RL. The CRC-loss function is a
combination of three individual loss functions, namely, contrastive,
reconstruction and consistency loss. The feature representation is learned in
parallel to the policy learning while sharing the weight updates through a
Siamese Twin encoder model. This encoder model is augmented with a decoder
network and a feature projection network to facilitate computation of the above
loss components. Through empirical analysis involving latent feature
visualization, an attempt is made to provide an insight into the role played by
this loss function in learning new action-dependent features and how they are
linked to the complexity of the problems being solved. The proposed
architecture, called CRC-RL, is shown to outperform the existing
state-of-the-art methods on the challenging Deep mind control suite
environments by a significant margin thereby creating a new benchmark in this
field. | Darshita Jain, Anima Majumder, Samrat Dutta, Swagat Kumar | 2023-01-31T08:41:18Z | http://arxiv.org/abs/2301.13473v2 | # CRC-RL: A Novel Visual Feature Representation Architecture for Unsupervised Reinforcement Learning
###### Abstract
This paper addresses the problem of visual feature representation learning with an aim to improve the performance of end-to-end reinforcement learning (RL) models. Specifically, a novel architecture is proposed that uses a heterogeneous loss function, called CRC loss, to learn improved visual features which can then be used for policy learning in RL. The CRC-loss function is a combination of three individual loss functions, namely, contrastive, reconstruction and consistency loss. The feature representation is learned in parallel to the policy learning while sharing the weight updates through a Siamese Twin encoder model. This encoder model is augmented with a decoder network and a feature projection network to facilitate computation of the above loss components. Through empirical analysis involving latent feature visualization, an attempt is made to provide an insight into the role played by this loss function in learning new action-dependent features and how they are linked to the complexity of the problems being solved. The proposed architecture, called CRC-RL, is shown to outperform the existing state-of-the-art methods on the challenging Deep mind control suite environments by a significant margin thereby creating a new benchmark in this field.
In the recent past, deep reinforcement learning (DRL) algorithms have been successfully used to learn action policies directly from visual observations, thereby finding application in several interesting areas such as gaming [1, 2], robotics [3, 4, 5, 6] and autonomous vehicles [7, 8]. This success is mostly driven by the agent's ability to jointly learn feature representations and policy decisions using the long-term credit-assignment capabilities of reinforcement learning algorithms in an end-to-end fashion. In spite of this success, RL algorithms are known to be sample-inefficient and suffer from poor generalization for high-dimensional observations such as images [9, 10]. There are several approaches to address these concerns, including methods such as transfer learning [11, 12], meta-learning [13, 14] and active learning [15]. _Feature representation learning_ [16] is an alternative, and sometimes complementary, approach which aims at learning useful features that can simplify the intended task, e.g., classification or prediction. This paper primarily focuses on this latter approach, as it is now widely accepted that the problem of sample-inefficiency in RL can be solved to a great extent by learning suitable feature representations, which are shown to expedite the policy learning process [17]. The feature representations are learned using self-supervised methods, which are increasingly becoming popular as they obviate the need for manually generated labeled datasets, thereby simplifying the practical deployment of deep learning models [18]. Some of these approaches include auto-encoders [19], GANs [20, 21], contrastive learning [22] and data augmentation techniques [9, 23]. The features thus obtained have been shown to greatly improve the sample efficiency and generalizability of RL methods, as demonstrated in [24, 10, 23].
Rather than decoupling the representation learning from policy learning as done in [25, 23, 17], we continue working with end-to-end models because of their simplicity and aim at improving their performance by performing auxiliary tasks as demonstrated in [10, 26]. Since the feature representations are learned alongside the policy decisions in an end-to-end fashion, the features learned are actually _action-dependent_. This is because the backward gradient flow from the policy-learning algorithm is allowed to update the encoder weights. This makes the learned feature vectors strongly correlated with the actions being taken by the agent [27]. With this insight, we are motivated by two factors. First, it is our belief that the quality of the features
learnt could be improved by using a better loss function leading to improved RL performance. Secondly, we are keen to develop a better understanding of the relationship between the feature and action spaces. Towards fulfilling these objectives, we propose a new heterogeneous loss function called _CRC loss_ for feature representation learning by combining three different loss functions, namely, _contrastive loss_ [10], _reconstruction loss_ and _consistency loss_. The reconstruction loss obtained with an auto-encoder model helps in learning compact features that are sufficient to reconstruct the original observations. On the other hand, the consistency loss [23] helps in learning features that are invariant to image augmentations. In other words, by minimizing the consistency loss, the encoder is encouraged to learn _task-relevant_ features while ignoring irrelevant aspects (such as background color), thereby avoiding observational over-fitting [28]. Similarly, the contrastive loss helps in learning _class-invariant_ features from augmented images by contrasting them against a batch of negative samples. In that sense, these three loss functions contribute complementary information and hence should improve the feature representation learning when combined together. In order to implement feature training with this loss function, a new architecture inspired by CURL [10] is proposed that uses a Siamese Twin encoder model, a decoder network and a feature predictor to generate these losses. Through empirical analysis including feature visualizations, it is shown that the feature representations learnt by the CRC loss function are different from those learnt with the baseline CURL model, indicating the role played by the CRC loss in learning new action-dependent features. In addition, visualizations of the correlation matrices between latent features generated by this model show increasingly complex patterns for complex environments with higher-dimensional action spaces, thereby providing a clue about how features are inherently linked with actions in an end-to-end RL model. Through rigorous experiments on the challenging Deep Mind Control suite environments [29], it is shown that the proposed CRC-RL model outperforms the existing state-of-the-art methods by a significant margin, thereby establishing the efficacy of the approach. The design choices for the proposed model are justified through extensive ablation studies.
In short, the major contributions made in this paper are as follows:
1. A new self-supervised feature representation architecture along with a novel loss function is proposed to improve the performance of RL models in learning policies directly from image observations.
2. Through empirical analysis involving latent feature visualization, an attempt has been made to provide insights into the relationship between the action and feature spaces, thereby providing a better understanding of the role played by the new loss function in learning _action-dependent_ features.
3. The resulting architecture is shown to outperform existing state-of-the-art methods on the challenging DMC environments by a significant margin.
The rest of this paper is organized as follows. Related works are reviewed in the next section. The proposed architecture is described in Section 2. The experimental results are discussed and analyzed in Section 3. The paper ends with the conclusions in Section 4.
## 1 Related Works
This section provides an overview of related literature in the following subsections.
### Deep RL architectures for policy learning
Reinforcement learning algorithms learn the optimal policy for a given task by maximizing a measure of _cumulative discounted future reward_ for the task while balancing between _exploration_ (of new possibilities) and _exploitation_ (of past experiences) [30]. This cumulative discounted reward function, represented as the Q or _value_ function, is not known a priori and is used to evaluate a given action taken by the agent. Depending on how this function is estimated and desirable actions are derived from it, the RL-based methods can be broadly classified into two categories: _value-based_ methods and _policy-based_ methods. The value-based methods aim at estimating the Q-function and then derive the action from it by using a greedy policy. On the other hand, policy-based methods directly estimate the policy function by maximizing a given objective function. The traditional Q-learning algorithm estimates the Q function iteratively by using an approximate dynamic programming formulation based on Bellman's equation, starting from an initial estimate [31]. The original Q-learning algorithm could be applied to problems with discrete state (observation) and action spaces and hence suffered from the _curse-of-dimensionality_ problem as the dimensionality and range of values grew. This limitation is overcome by using a deep network to estimate the Q function that can take arbitrary observation inputs, thereby greatly enhancing the capabilities of RL algorithms. The resulting approach is known as Deep Q Networks (DQN) [32, 33], which has been applied successfully to a wide range of problems while achieving superhuman level performance in a few cases, such as ATARI video games [34] and Go [35]. The success of DQN has spawned a new research field known as deep reinforcement learning (DRL), attracting a large following of researchers. Readers are referred to [36] for a survey of this field. The DQN models were subsequently extended
to continuous action spaces by using policy gradient methods that used a parameterized policy function to maximize the DQN output using gradient ascent methods [37, 38]. This has opened the doors for solving various robotics problems that use continuous values such as joint angles, joint velocities or motor torques as input. Since then, a number of methods have been proposed to improve the performance of RL algorithms and have been applied successfully to different robotic problems such as manipulation [39, 40], grasping [41, 42] and navigation [43]. Some of the notable methods include actor-critic models such as A2C and A3C [44], soft actor-critic (SAC) [45] and proximal policy optimization (PPO) [46]. In spite of the success of these methods, (deep) reinforcement learning algorithms, in general, suffer from limitations such as poor sampling efficiency leading to longer training times, poor generalization and instability. The work presented in this paper aims to address some of these concerns by focusing on learning better task-relevant features.
### Feature Representation Learning in RL
It is now widely accepted that learning policies from low-dimensional feature vectors based on physical states is much more sample efficient compared to learning directly from image pixels [10, 47]. Hence, it is imperative to learn suitable state representations from image observations that will reduce the search space, thereby improving the sample efficiency and stability of RL algorithms. The field of self-supervised representation learning has seen great progress in the last few years. Auto-encoders [19, 48] learn the state representation by compressing the observation into a low-dimensional state that is sufficient to reconstruct the observation. These have been used to improve the performance of RL algorithms as demonstrated in [17, 49, 50, 24]. On the other hand, contrastive learning [22, 51] learns class-relevant feature representations by maximizing the agreement between the augmented versions of the same observation. It has been shown to greatly improve the sample efficiency of RL algorithms as in [10]. Similarly, recent studies have shown that the right kind of data augmentation techniques can improve the sample efficiency and generalization capabilities of RL algorithms by learning _task-relevant_ features which remain unaffected by the distractions introduced by the augmentation [9, 52]. This can be further enforced by making the encoder minimize the consistency loss as suggested in [23]. In short, learning a suitable feature representation plays a significant role in improving the performance of RL algorithms by increasing sample efficiency and improving generalization and stability. The work presented in this paper contributes to this field by proposing a novel loss function that leads to superior learning performance for continuous control tasks, as will be demonstrated later in this paper.
## 2 Method
This section provides details of the proposed CRC-RL model that uses a novel heterogeneous loss function to extract useful information from visual images to be used for learning optimal policy using an end-to-end RL framework. The discussion is organized in the following subsections.
Figure 1: CRC-RL Architecture: It consists of a Siamese Twin encoder along with a decoder and a feature predictor network. The query encoder together with the decoder forms an auto-encoder. The query encoder is used for learning policy using SAC algorithm. Observations are data-augmented to form query and key observations, which are then encoded into latent features by the respective encoders. Only the query encoder weights are updated during the training step. The weights of key encoder are exponential moving average of query encoder weights.
The architecture of the proposed model is described next.
### The Model Architecture
The overall architecture of the proposed model is shown in Figure 1. The observation is available in the form of images which are stacked together to act as input to the model. Stacking of frames is a heuristic approach to incorporate temporal information in the learning process [53]. The observations obtained from the environment are stored in a replay buffer \(\mathcal{D}\) and a batch is sampled from this replay buffer during the training process. A Siamese Twin encoder model is employed for extracting features from the input images. These two encoders, termed the _query_ and _key_ encoders, are used for computing the contrastive and consistency losses. The query encoder with a decoder is used for computing the reconstruction loss. A combination of these three losses, known as the CRC loss, is used for updating the parameters of the query encoder and decoder network. The input images are augmented before being applied to the key encoder. The features obtained from the query encoder are used for policy estimation using the soft actor-critic algorithm [45]. The parameters of the query encoder and decoder networks are updated using error signals obtained from their own outputs as well as from the RL algorithm. Since the encoder networks are influenced by the RL policy algorithm, the features learnt in the process are _action-dependent_. This aspect will be analyzed in more detail in the experiment section presented later in this paper. The weights of the key encoder network are the exponential moving average of the query encoder weights. The proposed CRC loss function used for learning the feature embeddings is discussed in the next subsection.
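A minimal PyTorch sketch of the exponential-moving-average update described above is given below; the function name and the rate `tau` are illustrative assumptions rather than values from the paper, and both encoders are assumed to share the same architecture.

```python
import torch

@torch.no_grad()
def ema_update(query_encoder: torch.nn.Module,
               key_encoder: torch.nn.Module,
               tau: float = 0.05) -> None:
    """Key weights track the query weights: key <- (1 - tau)*key + tau*query."""
    for q_param, k_param in zip(query_encoder.parameters(),
                                key_encoder.parameters()):
        k_param.data.mul_(1.0 - tau).add_(tau * q_param.data)
```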
### The loss function for feature extraction
The query encoder is trained using the proposed CRC loss function which is a combination of the following three loss components as described below.
#### 2.2.1 Contrastive loss
In contrastive learning, we have a query observation \(q\) and a set of key observation samples \(\mathbb{K}=\{k_{0},k_{1},...\}\) consisting of both positive samples \((k_{+})\) and negative samples \((\mathbb{K}\setminus k_{+})\). The positive samples are those that belong to the same class as the query observation and the rest are considered to be the negative samples. The goal is to learn embeddings such that \(q\) is relatively more similar to the positive keys \(k_{+}\) than to the negative keys in the latent space. The query and key observations, generated by applying data augmentation to sampled observations, are encoded using the query and key encoder, respectively. The contrastive loss depends on the outputs of both encoders (Siamese Twin), represented by the symbols \(\mathbf{s}\) and \(\mathbf{s}^{\prime}\) respectively. The idea behind using the contrastive loss is that different augmentations of the same image carry the same underlying information and hence their high-level representations will be mapped together in the latent space. The similarity between the query and the key embeddings is computed using the bilinear inner-product \(q^{T}Wk\), where \(W\) is a symmetric matrix of parameters estimated [54] along with the other parameters during the training process. The objective of training is to increase this similarity for the positive key relative to the negative keys, so that the query embedding is pulled towards its positive key and pushed away from the negative keys in the latent space. This is achieved by minimizing the InfoNCE loss [55] given by:
\[L_{q}=-\log\frac{\exp(q^{T}Wk_{+})}{\sum_{k_{i}\in\mathbb{K}}\exp(q^{T}Wk_{i})} \tag{1}\]
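A PyTorch sketch of this objective is shown below, following the common CURL-style batching in which the positive key for each query is the matching row of the key batch and the remaining rows act as negatives; the function name and the stability trick are illustrative, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def info_nce_bilinear(q: torch.Tensor, k: torch.Tensor,
                      W: torch.Tensor) -> torch.Tensor:
    """InfoNCE loss (Eq. 1) with bilinear similarity q^T W k.

    q, k: (B, D) batches of query/key embeddings; the positive key for q[i]
    is k[i], and the other B - 1 keys in the batch act as negatives.
    W: (D, D) learnable similarity matrix.
    """
    logits = q @ W @ k.t()                     # (B, B) pairwise similarities
    # Subtracting the row-wise max leaves the softmax (and the loss)
    # unchanged while improving numerical stability.
    logits = logits - logits.max(dim=1, keepdim=True).values.detach()
    labels = torch.arange(q.size(0), device=q.device)
    # Cross-entropy with diagonal targets equals -log of the softmax ratio.
    return F.cross_entropy(logits, labels)
```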
#### 2.2.2 Reconstruction loss
A well-trained encoder-decoder network is expected to reconstruct the input image at the output of the decoder network. The reconstruction loss is computed based on the inaccuracy of the reconstructed image. A convolutional encoder \(f_{\theta}\) maps an input observation \(\mathbf{x}\in\mathbb{R}^{m\times n\times 3}\) to a lower-dimensional latent vector \(\mathbf{s}\in\mathbb{R}^{l}\), and a deconvolutional decoder \(g_{\phi}\) then maps \(\mathbf{s}\) back to a reconstruction \(\mathbf{\hat{x}}\in\mathbb{R}^{m\times n\times 3}\) such that
\[f_{\theta}:\mathbf{x}\rightarrow\mathbf{s} \tag{2}\] \[g_{\phi}:\mathbf{s}\rightarrow\mathbf{\hat{x}} \tag{3}\]
Both the encoder and the decoder are trained simultaneously by maximizing the expected log-likelihood. The reconstruction loss checks how well the image has been reconstructed from the latent representation, and it forces the update such that the latent representation preserves the core attributes of the input data. An \(L_{2}\) penalty is imposed on the learned representation \(\mathbf{s}\) and a weight decay is imposed on the decoder parameters to incorporate the regularization effects, as proposed in [56].
\[L_{r}=\mathbb{E}_{\mathbf{x}\sim\mathcal{D}}[-\log p_{\theta}(\mathbf{x}|\mathbf{s})+ \lambda_{s}\|\mathbf{s}\|^{2}+\lambda_{\theta}\|\theta\|^{2}] \tag{4}\]
where \(\lambda_{s}\) and \(\lambda_{\theta}\) are hyper-parameters.
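The sketch below realizes Eq. 4 as a minimization target, with the log-likelihood term implemented as a mean-squared reconstruction error (a Gaussian observation model); the penalty coefficients shown are placeholders, not the paper's settings.

```python
import torch

def reconstruction_loss(x: torch.Tensor, x_hat: torch.Tensor,
                        s: torch.Tensor, decoder_params,
                        lambda_s: float = 1e-6,
                        lambda_theta: float = 1e-7) -> torch.Tensor:
    """Eq. (4): negative log-likelihood (as MSE) plus an L2 penalty on the
    latent vector s and weight decay on the decoder parameters."""
    nll = ((x_hat - x) ** 2).mean()                     # -log p(x|s), up to constants
    latent_pen = lambda_s * (s ** 2).sum(dim=1).mean()  # lambda_s * ||s||^2
    weight_pen = lambda_theta * sum((p ** 2).sum() for p in decoder_params)
    return nll + latent_pen + weight_pen
```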
#### 2.2.3 Consistency loss
The consistency loss depends on the output of both the query and key encoders, \(f_{\theta}\) and \(f_{\theta}^{\prime}\). Here, the query encoder takes the original non-augmented observation \(\mathbf{x}\) and the key encoder uses the augmented observation \(\mathbf{\tilde{x}}\) as input. The output of the key encoder, \(\mathbf{s}^{\prime}\), is then used as input to a feature predictor module, which is simply an MLP, to estimate the non-augmented embedding \(\mathbf{\hat{s}}\).
The consistency loss is designed to minimize the error between the predicted embedding \(\mathbf{\hat{s}}\) and the non-augmented embedding \(\mathbf{s}\), thereby enabling the encoder to learn essential _task-relevant_ features while ignoring irrelevant distractions (such as background clutter or texture). This eliminates the need for negative samples in the computation of the consistency loss. The consistency loss function can, therefore, be mathematically written as:
\[L_{c}(\mathbf{\hat{s}},\mathbf{s},\theta)=\mathbb{E}_{\mathbf{x}\sim\mathcal{D} }[\|\mathbf{\hat{s}}-\mathbf{s}\|^{2}] \tag{5}\]
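A sketch of Eq. 5 is shown next. Holding the key encoder fixed under `no_grad` (its weights are EMA-updated rather than trained) is an assumption consistent with the architecture description; all names are illustrative.

```python
import torch

def consistency_loss(query_encoder, key_encoder, predictor,
                     x: torch.Tensor, x_aug: torch.Tensor) -> torch.Tensor:
    """Eq. (5): the MLP predictor maps the augmented embedding s' to an
    estimate s_hat of the non-augmented embedding s; their squared error
    is penalized."""
    s = query_encoder(x)               # non-augmented embedding s
    with torch.no_grad():              # key encoder is EMA-updated, not trained
        s_prime = key_encoder(x_aug)   # augmented embedding s'
    s_hat = predictor(s_prime)         # predicted embedding s_hat
    return ((s_hat - s) ** 2).sum(dim=1).mean()
```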
### The CRC loss function
It is our conjecture that each of the above three loss functions enables the encoder to extract non-redundant and complementary information from the higher dimensional input. Thus, a combination of these three should improve the overall RL performance in learning optimum policy. The resulting loss function, called CRC loss, has the following mathematical form:
\[L_{CRC}=c_{1}L_{q}+c_{2}L_{r}+c_{3}L_{c} \tag{6}\]
where \(c_{i}>0\), \(\sum_{i}c_{i}=1\), \(i=1,2,3\) are hyper-parameters that control the relative importance of the individual components. The RL model for policy learning takes the query encoder output as its input. The SAC algorithm used for learning the policy is also allowed to affect the query encoder weights \(f_{\theta}\) during the backward gradient update step. At regular intervals, the key encoder \(f^{\prime}_{\theta}\) weights are updated using the exponential moving average (EMA) of the weights of the query encoder \(f_{\theta}\). The feature learning and policy learning take place jointly in parallel. The latent representations learned by the query encoder \(f_{\theta}\) receive gradients from both the CRC loss and the SAC algorithm losses. This makes the feature representations _action-dependent_, an aspect which will be analyzed in some more detail in the next section.
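Putting the pieces together, one auxiliary CRC update could look like the following sketch, which reuses the helper functions sketched above. The pairing of augmented views, the default weights and the EMA rate are illustrative assumptions; the SAC actor/critic updates are assumed to run as separate steps that also back-propagate into the query encoder.

```python
import torch

def crc_training_step(x, x_aug1, x_aug2, enc_q, enc_k, decoder, predictor,
                      W, optimizer, c=(1/3, 1/3, 1/3), tau=0.05) -> float:
    """One auxiliary CRC update implementing Eq. (6)."""
    with torch.no_grad():
        k = enc_k(x_aug2)                           # keys from the EMA encoder
    l_q = info_nce_bilinear(enc_q(x_aug1), k, W)    # contrastive term
    s = enc_q(x)
    l_r = reconstruction_loss(x, decoder(s), s, list(decoder.parameters()))
    l_c = consistency_loss(enc_q, enc_k, predictor, x, x_aug1)
    loss = c[0] * l_q + c[1] * l_r + c[2] * l_c     # convex combination, Eq. (6)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(enc_q, enc_k, tau)                   # key encoder tracks query encoder
    return float(loss.detach())
```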
## 3 Experimental Results and Discussions
The proposed CRC-RL model architecture takes its inspiration from the original CURL implementation by Laskin et al. [10]. The original model is extended by incorporating an additional decoder and a feature predictor to facilitate computing the CRC loss function as described in the previous section. The model is implemented using the PyTorch [57] deep learning framework. The reinforcement learning framework for policy estimation makes use of the publicly released implementation of the SAC algorithm by Yarats et al. [58]. The query encoder and decoder architectures are similar to the ones used in the above work. The query encoder weights are tied between the actor and critic so that they both use the same encoder to embed input image observations. The feature predictor module is an MLP network consisting of cascaded linear layers with ReLU activations. The complete list of hyper-parameters is shown in Table 1. A number of experiments are carried out to establish the efficacy of the proposed model. The design choices are justified through several ablation studies as discussed below.
### Performance Comparison
The performance of the proposed CRC-RL model is compared with the current state-of-the-art methods on the challenging Deep Mind Control suite (DMControl) environments [29]. The outcomes are shown in Tables 2 and 3 after training for 100K and 500K environment steps, respectively. It can be observed that the proposed CRC-RL model outperforms the current state-of-the-art methods, such as CURL [10], SODA [23], PlaNet [59], Dreamer [60], SAC+AE [17] and pixel-based SAC [45], on most of the DMControl environments, thereby establishing the superiority of our approach. The environments shown in Table 3 are more difficult than those shown in Table 2 and hence require longer training times. In this case, our proposed model outperforms the baseline CURL model in only 300K training steps.
\begin{table}
\begin{tabular}{|c|c|} \hline
Hyper-parameter & Value \\ \hline
Pre-transform image size & (100, 100) \\
Image size & (84, 84) \\
Action repeat & 8 \\
Frame stack & 3 \\
Transform & Random crop \\
Replay buffer capacity & 100000 \\
Initial steps & 1000 \\
Batch size & 512 \\
Hidden layers & 1024 \\
Evaluation episodes & 10 \\
Optimizer & Adam \\
Learning rate \((f_{\theta},\pi_{\psi},Q_{\phi})\) & 1e-3 \\
Learning rate \((\alpha)\) & 1e-4 \\
Critic target update frequency & 2 \\
Convolution layers & 4 \\
Number of filters & 32 \\
Latent dimension & 50 \\
Discount \((\gamma)\) & 0.99 \\
Initial temperature & 0.1 \\ \hline
\end{tabular}
\end{table}
Table 1: Hyper-parameters used for DMControl experiments. Most hyper-parameter values are unchanged across environments with the exception of action repeat, learning rate, and batch size.
### t-SNE Visualizations
To better understand the relationship between the learned latent representations and the actions generated by the RL policy, we generate two-dimensional t-SNE plots of the feature embeddings obtained from the query encoder for three different environments, as shown in Figure 2. These features are assigned the corresponding action labels generated by partitioning the action space into five clusters using the k-means clustering algorithm. As one can observe, the proposed CRC-RL model leads to more pristine clusters with fewer outliers compared to the baseline CURL [10] algorithm after the same amount of training. Compared to the Cartpole environment, the other two are comparatively more complex and require a larger amount of training time. This shows that the proposed model leads to a better correlation between the feature embeddings and agent actions. This aspect has not been empirically investigated extensively in the existing literature and thus, the current work makes a novel contribution by filling this void.
### Feature Correlation Heat Maps
Another study is performed to validate our hypothesis that the proposed CRC loss contributes new information, resulting in new feature representations which are distinct from those obtained using the individual losses. In this study, the correlation matrices between the latent features obtained with the baseline CURL algorithm (that uses contrastive loss) and those obtained with the proposed CRC-RL model (that uses CRC loss) are plotted as heat maps, as shown in Figure 4. These matrices are generated by collecting \(200\) sample embeddings from both models trained with 49000 environment steps. Since each image sample is encoded into a \(50\times 1\) feature vector, the features generated by the above two methods are grouped into two feature matrices (say, \(F_{1}\) and \(F_{2}\)) of size \(200\times 50\). The correlation between these two feature matrices results in a \(400\times 400\) matrix, which is visualized as a heat map in the above figure, where the darker regions show higher correlation and the lighter regions show lower correlation. It is observed that the off-diagonal regions have lower correlation (lighter regions), indicating that the two feature embeddings (\(F_{1}\) and \(F_{2}\)) are very distinct from each other. The diagonal regions are highly correlated (darker regions) as they correspond to features from the same method. Another interesting finding of this study is that these heat maps show increasingly complex patterns for difficult environments such as 'Walker-walk' or 'Cheetah-Run' compared to simpler environments such as 'Cartpole-Swingup'. These patterns evolve over time and stabilize as the training performance saturates. This is an interesting insight that may provide a clue to better understanding the relationship between features and action policies learned in an end-to-end RL framework.
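Under the stated shapes, the heat maps can be reproduced with a short NumPy sketch such as the one below; the use of absolute values and the colormap are presentation choices assumed here, not specifics from the paper.

```python
import numpy as np
import matplotlib.pyplot as plt

def feature_correlation_heatmap(F1: np.ndarray, F2: np.ndarray) -> np.ndarray:
    """Correlate 200 CURL embeddings (F1) with 200 CRC-RL embeddings (F2).

    Stacking the two 200 x 50 matrices gives 400 feature vectors; np.corrcoef
    over the rows yields a 400 x 400 matrix whose off-diagonal blocks compare
    features across the two models.
    """
    stacked = np.vstack([F1, F2])      # (400, 50)
    corr = np.corrcoef(stacked)        # (400, 400); each row is one embedding
    plt.imshow(np.abs(corr), cmap="viridis")
    plt.colorbar(label="|correlation|")
    plt.show()
    return corr
```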
### Ablation Study
Three separate ablation studies are carried out to justify the design choices made in this paper as described below.
#### 3.4.1 Usefulness of CRC loss function
The first study validates the usefulness of the proposed CRC loss function comprising contrastive, reconstruction and consistency losses. The outcome is shown in Figure 3. We start with the baseline CURL model [10] that uses the contrastive loss to learn the feature representations. Then this model is trained with a combined loss function of contrastive and reconstruction losses and finally, with the CRC loss function comprising contrastive, reconstruction and consistency losses. The inclusion of these losses requires modifying the existing CURL model, leading to the formation of the CRC-RL model proposed in this paper. The figure shows that CRC-RL performs better than the other two on the benchmark problems from the Deepmind control suite (DMC), namely, 'Cheetah-Run' and 'Walker-walk'.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline
100K Step Scores & Our Method & CURL [10] & SODA [23] & PLANET [59] & DREAMER [60] & SAC+AE [17] & PIXEL SAC [45] & STATE SAC [45] & \% Increase over CURL \\ \hline
FINGER, SPIN & **793\(\pm\)36** & 767\(\pm\)36 & 363\(\pm\)185 & 136\(\pm\)216 & 341\(\pm\)70 & 740\(\pm\)64 & 179\(\pm\)66 & 811\(\pm\)46 & 3.38 \\
CARTPOLE, SWINGUP & **813\(\pm\)45** & 582\(\pm\)146 & 474\(\pm\)143 & 297\(\pm\)39 & 326\(\pm\)27 & 311\(\pm\)11 & 419\(\pm\)40 & 835\(\pm\)22 & 39.6 \\
REACHER, EASY & **636\(\pm\)301** & 538\(\pm\)233 & - & 20\(\pm\)50 & 314\(\pm\)155 & 274\(\pm\)14 & 145\(\pm\)30 & 746\(\pm\)25 & 18.2 \\
CHEETAH, RUN & **355\(\pm\)31** & 299\(\pm\)48 & - & 138\(\pm\)88 & 235\(\pm\)137 & 267\(\pm\)24 & 197\(\pm\)15 & 616\(\pm\)18 & 18.7 \\
WALKER, WALK & 490\(\pm\)52 & 403\(\pm\)24 & **635\(\pm\)48** & 224\(\pm\)48 & 277\(\pm\)12 & 394\(\pm\)22 & 42\(\pm\)12 & 891\(\pm\)82 & 21.5 \\
BALL IN CUP, CATCH & **832\(\pm\)81** & 769\(\pm\)43 & 539\(\pm\)111 & 0\(\pm\)0 & 246\(\pm\)174 & 391\(\pm\)82 & 312\(\pm\)63 & 746\(\pm\)91 & 8.19 \\ \hline
\end{tabular}
\end{table}
Table 2: Mean episodic reward (with standard deviation) over 10 evaluation runs on DMControl environments after training for 100k environment steps. The best scores are shown in bold letters.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
Environment & Our Method & CURL & \% Increase over CURL \\ \hline
QUADRUPED, WALK & **88\(\pm\)51** & 39\(\pm\)22 & 125.6 \\
HOPPER, HOP & **61\(\pm\)33** & 10\(\pm\)17 & 510 \\
WALKER, RUN & **306\(\pm\)5** & 245\(\pm\)32 & 24.8 \\
FINGER TURN, HARD & **423\(\pm\)78** & 207\(\pm\)32 & 104.3 \\ \hline
\end{tabular}
* shows values for 300K training steps
\end{table}
Table 3: Mean episodic score (with standard deviation) for 10 evaluation runs on DMControl environments obtained after training for 500k environment steps. The best scores are shown in bold letters.
Figure 3: Effect of incorporating various loss components. The CRC loss function performs better than other combinations for more difficult environments such as 'Cheetah-Run' and 'Walker-walk'.
Figure 2: t-SNE visualization of latent feature embeddings obtained from the query encoder at 49K training steps. Colors correspond to cluster labels in the action space. One can observe that CRC-RL leads to more pristine clusters with fewer outliers compared to CURL.
These two are comparatively difficult problems to solve compared to simpler problems such as the 'Cartpole-Swingup' problem, which does not benefit from the proposed CRC-RL model. This observation clearly establishes the usefulness of the proposed approach. This becomes more evident in the second ablation study discussed in the next section.
#### 3.4.2 Relative weights of loss components in the CRC loss function
In this study, the relative weights of the three loss components, namely, the contrastive, reconstruction and consistency losses, are varied and their effect on the validation performance is compared, as shown in Figure 5. The weights are varied under the constraint of forming a convex sum. In other words, \(c_{i}>0,\ i\in\{1,2,3\}\) and \(\sum_{i}c_{i}=1\). The figure shows that the individual loss functions do not always provide the best performance. The best performance is obtained by a combination of all three losses. Having equal weights (\(c_{1}=c_{2}=c_{3}=0.33\)) for all three losses has a regularizing effect on model performance, in the sense that performance is traded for better generalizability. This is evident from the fact that the validation performance curve with this combination of weights lies somewhere in the middle of all the curves. The generalization capability of the proposed model is demonstrated in the third ablation study, discussed next.
#### 3.4.3 Generalization Capability of the CRC-RL model
In order to test the generalization capabilities of the proposed CRC-RL model, another experiment is performed where the RL models are trained on images augmented with the _random crop_ effect and then validated on images augmented with _Video-Easy_ and _Color-Hard_ artifacts [23]. The outcome is shown in Figure 6. Comparing with the validation plots in Figure 5, it is observed that the overall performance has come down significantly due to these complex augmentations, which make it difficult for the model to generalize when trained only with _random crop_-augmented images. However, even in this case, the RL model trained with the CRC loss function provides the best or close-to-the-best evaluation performance compared to the models that use the individual loss components for training. This demonstrates the superior generalization capability of the proposed CRC-RL model over the existing models that use one of the above loss functions for training. It also corroborates the earlier finding mentioned in the previous subsection that equal weights for the individual loss components in the CRC loss function have a regularizing effect on the model performance.
## 4 Conclusions
The paper addresses the problem of feature representation learning in end-to-end reinforcement learning models with visual observations. Specifically, a new loss function, called CRC loss, is proposed to learn action-dependent features, leading to superior performance in learning optimal action policies. This loss function is a combination of three different loss functions, namely, the image reconstruction loss, the contrastive loss and the consistency loss. Through empirical analysis including latent feature visualization, an attempt is made to generate new insights that better explain the relationship between the features being learnt and the actions being taken by the RL agent. The resulting architecture is shown to outperform the existing state-of-the-art methods in solving the challenging DMC problems by a significant margin, thereby forming a new benchmark in this field. Future work will involve carrying out a more in-depth analysis and evaluation of the effect of the individual loss components on the overall performance as well as on the quality
Figure 4: Feature correlation heat-maps for three environments showing the correlation between the latent features obtained with CURL and CRC-RL models. The models are trained for 49K environment steps and 200 latent features are used to generate this plot.
Figure 5: Effect of varying the weighting parameters of the different loss functions on the evaluation performance. \(c_{1}\), \(c_{2}\) and \(c_{3}\) are the weights of the contrastive loss, the reconstruction loss and the consistency loss, respectively, in the CRC loss function. The environments used are: (a) Cartpole-Swingup, (b) Cheetah-Run and (c) Walker-walk. Varying these parameters has a regularizing effect on the training performance. A smoothing factor of 0.5 is applied to the plot.
Figure 6: Generalization capabilities of the CRC-RL algorithm. \(c_{1}\), \(c_{2}\) and \(c_{3}\) are the weights of the contrastive loss, the reconstruction loss and the consistency loss, respectively, in the CRC loss function. The environments used are: (a) Cartpole-Swingup, (b) Cheetah-Run, (c) Ball-in-Cup-Catch and (d) Walker-walk. The RL models are trained on images with _random-crop_ augmentation and evaluated on images with _Video-Easy_ and _Color-Hard_ augmentations. Compared to the individual losses, the CRC loss provides the best or second-best evaluation performance for these new augmentations, thereby establishing the superior generalization capabilities of the CRC-RL model.
of features being learned.
|
2309.07224 | Statistical analysis of long GRBs' prompt emission and X-ray flares:
multivariate clustering and correlations | The extensive observations done by the X-ray telescope onboard Neil Gehrels
Swift observatory has revealed the presence of late time flares concurrent with
the decaying afterglow emission. However, the origin of these flares are
elusive. In this work, we made use of the large database of Swift observations
(2005 - 2020) of long GRBs to conduct a systematic statistical study between
the prompt gamma ray emission and X-ray flares by characterising their temporal
and spectral properties in terms of duration, quiescent period, peak flux,
fluence, minimum variability timescale and spectral power-law index. The
multi-dimensional database of parameters, thereby, generated was investigated
by the principal component analysis which revealed there is no evident
correlation between the different parameters of the prompt emission and X-ray
flares. Furthermore, the correlation studies reveal that while there is a trend
of positive correlation between the minimum variability timescale of flare and
its duration, and a strong negative correlation with its peak flux, there are
no such correlations observed in the prompt emission. Similarly, we find a
positive correlation between the quiescent period and flare duration, and a
negative correlation with the flare peak flux, while no such correlations are
observed for the prompt emission of GRBs. Finally, among the X-ray flares, we
find two dominant classes whose variations are driven by the minimum
variability timescale, peak flux and fluences of the flares. A catalog of these
different parameters characterising the prompt and flare emissions is
presented. | Joseph Saji, Shabnam Iyyani, Kratika Mazde | 2023-09-13T18:00:18Z | http://arxiv.org/abs/2309.07224v1 | Statistical analysis of long GRBs' prompt emission and X-ray flares: multivariate clustering and correlations
###### Abstract
The extensive observations done by the X-ray telescope onboard _Neil Gehrels Swift_ observatory has revealed the presence of late time flares concurrent with the decaying afterglow emission. However, the origin of these flares are elusive. In this work, we made use of the large database of _Swift_ observations (2005 - 2020) of long GRBs to conduct a systematic statistical study between the prompt gamma ray emission and X-ray flares by characterising their temporal and spectral properties in terms of duration, quiescent period, peak flux, fluence, minimum variability timescale and spectral power-law index. The multi-dimensional database of parameters, thereby, generated was investigated by the principal component analysis which revealed there is no evident correlation between the different parameters of the prompt emission and X-ray flares. Furthermore, the correlation studies reveal that while there is a trend of positive correlation between the minimum variability timescale of flare and its duration, and a strong negative correlation with its peak flux, there are no such correlations observed in the prompt emission. Similarly, we find a positive correlation between the quiescent period and flare duration, and a negative correlation with the flare peak flux, while no such correlations are observed for the prompt emission of GRBs. Finally, among the X-ray flares, we find two dominant classes whose variations are driven by the minimum variability timescale, peak flux and fluences of the flares. A catalog of these different parameters characterising the prompt and flare emissions is presented.
Keywords: Gamma-ray bursts (629) -- X-ray flares -- Multivariate analysis (1913) -- Astrostatistics techniques (1886) -- Catalogs (205)

Joseph Saji, Shabnam Iyyani, Kratika Mazde
## 1 Introduction
Gamma ray bursts (GRBs) are characterised by two main events: firstly, the prompt emission which composes of the immediate gamma rays and secondly, the afterglow emission which composes of emission ranging from radio till gamma rays (Meszaros, 2006; Kumar & Zhang, 2015; Iyyani, 2018). The quick autonomous re-pointing capability of _Neil Gehrels Swift_(Burrows et al., 2005a) spacecraft to the location of the burst, in particular, has enhanced the observations linking the prompt and X-ray afterglow emissions of GRBs. This is made possible by the onboard detectors, Burst Alert Telescope (BAT; Barthelmy et al., 2005 ) and X-ray Telescope (XRT; Burrows et al., 2005b) which make observations in the energy ranges \(15-150\,\mathrm{keV}\) and \(0.3-10\,\mathrm{keV}\), respectively.
One of the intriguing features of the gamma-ray bursts revealed by _Neil Gehrels Swift_ observations is the presence of flares in the X-ray afterglow lightcurves. Systematic survey studies regarding the X-ray flares observed by _Swift_ XRT were initially carried out by Chincarini et al. 2007a and Falcone et al. 2007. X-ray flares are found to occur in both long and short GRBs. This makes the flares a common feature in GRBs despite the difference in progenitors. The majority of the flares occur within a few thousand seconds (Chincarini et al., 2007a); however, in a few cases, they are found to occur as late as \(\geq 10^{6}\) s (Swenson et al., 2010; Kumar et al., 2022).
Spectral studies such as Morris 2008 using wider energy range combining data from BAT, XRT and UVOT1 simultaneously available in a few cases have shown that the flare spectrum requires spectral models more complex than
a simple power-law model of the afterglow radiation. This affirmed that the flares are distinctly different from the observed underlying afterglow emission. Several studies have been done to analyse the various spectral and temporal properties of these flares observed in the X-ray and optical bands in comparison with the prompt emission properties (Sonbas et al., 2013; Peng et al., 2015; Yi et al., 2016, 2017; Liu and Mao, 2019; Chang et al., 2021; Lu et al., 2022; Shi et al., 2022). The spectral studies have noted that the flares possess an energy fluence comparable to that of the prompt emission (Burrows et al., 2005; Falcone et al., 2007). In addition, multiple flaring episodes are found in a certain fraction of GRBs (Chincarini et al., 2007). While the flare durations tend to increase with time, the intensity of the flares is found to decrease with time.
Despite several studies, a self-consistent model explaining the origin, energetics and evolution of the X-ray flares is yet to be developed. The various proposed models consider the following scenarios: (a) the late time flares are due to delayed central engine activity (MacFadyen et al., 2001; King et al., 2005; Margutti et al., 2011; Dall'Osso et al., 2017; Gibson et al., 2018); (b) they are an extended phase of the prompt emission episode (Beniamini and Kumar, 2016; Mu et al., 2016); (c) they arise from late time energy injection into the blast wave by refreshed shocks (Rees and Meszaros, 1998; Laskar et al., 2015); or (d) they are produced by external shocks propagating into a dense and uneven circumburst medium (Hascoet et al., 2017; Ayache et al., 2020).
In this work, we present a detailed multivariate statistical study of the temporal and spectral properties of the prompt and X-ray flare emissions. Furthermore, a catalog of the studied properties is presented. Firstly, a reasonably large dataset based on the _Swift_ observations is selected. Following this, the parameters chosen to characterise the prompt and X-ray flare emissions are defined, and the methodology used to extract those features is described in Section 2. In Section 3, the multivariate analysis, including the distributions of the various parameters, principal component analysis, detailed correlation analysis cum modelling, and clustering of X-ray flares, is presented. Finally, in Section 4, we discuss and summarise the main results of the analysis and present our conclusions.
## 2 Sample selection and methodology
The statistical analysis of the different parameters of the prompt emission and X-ray flares of GRBs requires a reasonably large dataset. The _Swift_ observatory has been discovering approximately 100 bursts per year since 2004, and thereby the large database, including the prompt emission and late time X-ray flares observed by BAT and XRT respectively, is appropriate for conducting the statistical studies. The afterglow lightcurves of all the long bursts (lGRBs)2 detected by _Swift_ XRT until May 2020 were manually examined to look for the presence of flares, which were also confirmed by the flare detection algorithm employed by the _Swift_ XRT team (Evans et al., 2009). After the primary inspection, 340 lGRBs (about 30% of the total _Swift_ lGRBs observed during this period) were found to possess flaring activity during which the flux evolved in a manner different from the otherwise continuously decaying afterglow behaviour. Among such identified flares, we further sorted and removed certain GRBs with the following light curve behaviours:
Footnote 2: \(T_{90}\) represents the time duration within which 90% of the burst emission counts are detected by BAT. Long GRBs are those with \(T_{90}\geq 2\,s\).
* incomplete flaring episodes such as either the rising or decaying phases of the flare light curve is completely absent,
* not enough observational data points available in the flare light curve such that it is insufficient for light curve characterisation,
* the immediate background emission in the regions pre or post the flare, that is the underlying afterglow emission, is absent or not enough observational data is available,
* small peaks in the lightcurve that are of very low signal-to-noise ratio3 (\(\ll 0.1\)) which are consistent with the statistical variation in the afterglow observations were strictly avoided4. Footnote 3: Refer section 2.1 for definition. Footnote 4: Note: A very small number of bursts (less than 10) with \(0.1<\) SN ratio \(<1\) are included in the sample. In these cases, the parameters were measurable and we have included them in the database, so as to minimise selection biases.
The above sorting resulted in the removal of about 35% of the initial sample and the final sample consisted of 220 lGRBs. Examples of the afterglow lightcurves of a few excluded GRBs using the above criteria, and selected GRBs with single, double and triple flaring episodes are shown in Appendix A.
### Parameters of study
A schematic representation of the prompt emission and afterglow lightscurves consisting of late time X-ray flares is shown in the Figure 1a. The following parameters characterising the prompt and X-ray flare emissions of lGRBs are chosen for the study and reported in the catalog:
* T\({}_{P,100}\) & T\({}_{F,100}\) : The total duration of the gamma-ray burst prompt emission and X-ray flare emission respectively.
* Fluence\({}_{P}\) & Fluence\({}_{F}\): The total energy per cm\({}^{2}\) obtained after integrating the energy flux over the duration of the burst and flare respectively.
* Peak Flux, F\({}_{P,peak}\) & F\({}_{F,peak}\) : The maximum energy flux that is recorded during the burst duration of prompt emission and X-ray flare (subtracting the underlying afterglow flux) respectively.
* Peak count time, T\({}_{P,peak}\) & T\({}_{F,peak}\): It marks the time when the maximum count rate is observed in the prompt emission and X-ray flare light curves, respectively.
* \(\alpha_{P}\) & \(\alpha_{F}\): The low energy spectral index of the power-law spectra of the prompt emission and X-ray flare, respectively.
* Minimum variability timescale, T\({}_{P,var}\) & T\({}_{F,var}\): It describes the shortest time scale over which a significant change in the count rate of prompt emission and X-ray flare happens, respectively.
* Signal-to-noise (SN) ratio: It is the ratio of the net flare count rate at the peak time (i.e., the total count rate at the peak time minus the afterglow count rate at the peak time) to the square root of the total count rate at the peak time. This measure helps to check how distinct and statistically significant the flare is from the underlying afterglow emission. In addition, the distribution of statistical significance allows one to verify whether the sample is biased or not.
* Quiescent period, T\({}_{q}\): It is the difference between the start time of the T\({}_{F,100}\) of the X-ray flare and the end time of the T\({}_{P,100}\) of the prompt emission. During this period, there are no source photon counts other than background noises in BAT and the underlying afterglow emission in the XRT.
### Methodology of Extracting the Parameters
In the following subsections, the methodology of how the above defined parameters are systematically extracted from the data is presented.
Figure 1: (a) A schematic representation of the GRB prompt emission and afterglow lightcurves including the late time X-ray flares. The temporal parameters T\({}_{P,100}\), T\({}_{F,100}\) and the quiescent period [T\({}_{q}\)] are marked. (b) Residue plot of GRB170705A showing the polynomial fits in the rise and decay phases of the X-ray flare. The points at which the fits coincide with the residue = 0 line are marked with blue squares. The afterglow (black) and its corresponding model (blue solid line) of the same GRB are shown in the embedded plot.
#### 2.2.1 Prompt Emission Data Analysis
The prompt BAT light curves were analysed using the HEASARC Bayesian Block binning script named battblocks, with ncp_prior = 6 and a uniform time binning of 0.01 s, to estimate the total duration, \(T_{P,100}\), and the peak time, \(T_{P,peak}\), of the prompt emission of the burst. The time integrated and peak count BAT spectra of the burst were created for the duration of \(T_{P,100}\) and for a 1 second duration around \(T_{P,peak}\) of the prompt emission in the energy range \(15-150\) keV, respectively. The spectral data were then analysed using the Multi-Mission Maximum Likelihood Framework (3ML, V.2.2.1, Vianello et al., 2015) software package. Using 3ML, the spectra were analysed using different phenomenological spectral functions such as the simple power-law, cutoff power-law and Band functions (Poolakkil et al., 2021). By comparing the Akaike Information Criterion (AIC)5, the spectral function fit with the minimum AIC was considered the best fit spectral model. From the best fit model, the low energy power-law index \(\alpha\) of the time integrated spectrum and the total energy fluence were noted. The peak energy flux was estimated from the analysis of the peak spectrum generated around the peak time of the prompt emission.
Footnote 5: AIC is a statistical estimator calculated using the formula \(2k-2\ln(L)\), where \(k\) is the number of estimated parameters and \(L\) is the maximum value of the likelihood function.
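As an illustration of this model-selection step, the sketch below computes the AIC for a set of candidate fits and selects the minimum; the log-likelihood values and parameter counts are made-up placeholders, not results from this catalog.

```python
def aic(log_likelihood: float, k: int) -> float:
    """Akaike Information Criterion: AIC = 2k - 2 ln(L)."""
    return 2 * k - 2 * log_likelihood

# Hypothetical maximum log-likelihoods and free-parameter counts per model:
fits = {
    "power-law": (-1520.3, 2),
    "cutoff power-law": (-1498.7, 3),
    "Band": (-1497.9, 4),
}
best_model = min(fits, key=lambda name: aic(*fits[name]))
print(best_model)  # the fit with the minimum AIC is taken as the best model
```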
#### 2.2.2 X-ray Flare Data Analysis
The pre-processed XRT count rate light curves for the afterglow analysis were obtained from the _Swift_ XRT repository6 in the energy range \(0.3-10\) keV. The binned GRB XRT afterglow light curves are ensured to contain at least 15 counts per bin and the errors are calculated using Gaussian statistics (Evans et al., 2009). The temporal structure of the afterglow emission consists of several power-laws of decaying count rate, which is entirely different from the highly varying, erratic pulses of emission in the flares (Figure 1a). Therefore, the first step in processing the data was to separate the two emissions. This was achieved by removing the rough data region that included the flares from the entire afterglow light curve and modelling the remaining smoothly decaying afterglow emission using a power-law or a combination of power-laws (Evans et al., 2007, 2009). The best-fit models were determined using 3ML and the AIC statistic. A multi-broken power-law model was found to best represent the underlying afterglow emission. The resulting best-fit model of the afterglow light curve was subtracted from the total afterglow emission, which produced a residual light curve including the propagated errors (Figure 1b). The peak count time of the flare was used to identify the rise and decay phases of the flare. Using the technique of fitting a polynomial function to the residual counts light curve, the respective last and first intersection points with the background curve in the rise and decay phases are selected as the start and stop times of the flare episode, respectively. The best fit polynomials for the rise and decay phases of the flare were selected by iterating over polynomial fits of different orders and choosing the one which gave a reduced chi-square closest to 1.
Footnote 6: Data products are available in the link: [https://www.swift.ac.uk/xrt_live_cat/](https://www.swift.ac.uk/xrt_live_cat/)
Footnote 7: The assumption of Gaussian distribution is applied to each data point of count rate in the residual light curve, by taking into account that, if in case the count rate takes a negative value during the simulation, it would still have a physical meaning implying that the afterglow emission is more dominant at that instance. In addition, count rate allows for fractional values.
To ascertain the variation in the estimated start and stop times of the flare episode, 5000 Monte Carlo simulated residual light curves were generated by assuming a Gaussian distribution7 for each data point, where the original residual count is the mean and its associated error is the standard deviation. The rise and decay phases in each residual light curve were analyzed using polynomial fits with the same best-fit order as the original residual light curve. The means and standard deviations of the distributions of the start and stop times of the flare episode are referred to as T\({}_{F,start}\) and T\({}_{F,stop}\), and their associated errors, respectively. The distribution of the duration of the flare episode was generated by taking the difference between the start and stop times of the flares of each simulated residual count curve. The mean and standard deviation of the distribution of the flare duration are reported as T\({}_{F,100}\) and its associated error.
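A simplified NumPy sketch of this Monte Carlo procedure is given below. For brevity it fits a single polynomial across the whole residual episode rather than separate rise and decay fits, and takes the outermost zero crossings of the fit as the start and stop times; these simplifications, and all names, are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def simulate_flare_duration(t, resid, resid_err, order, n_sim=5000):
    """Monte Carlo distributions of the flare start/stop times and duration.

    Each simulated residual light curve redraws every point from a Gaussian
    with the observed residual as mean and its error as standard deviation,
    reusing the best-fit polynomial order found for the real data.
    """
    starts, stops = [], []
    for _ in range(n_sim):
        sample = rng.normal(resid, resid_err)
        roots = np.roots(np.polyfit(t, sample, order))
        real = np.sort(roots[np.abs(roots.imag) < 1e-8].real)
        real = real[(real > t.min()) & (real < t.max())]
        if len(real) >= 2:
            starts.append(real[0])
            stops.append(real[-1])
    durations = np.asarray(stops) - np.asarray(starts)
    return durations.mean(), durations.std()   # reported as T_F,100 and its error
```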
Footnote 8: [https://www.swift.ac.uk/analysis/nhtot/](https://www.swift.ac.uk/analysis/nhtot/)
The X-ray flare spectrum was generated for the whole duration, T\({}_{F,100}\) of the flare in the energy range \(0.3-10\) keV using the _Swift_-XRT repository and analysed in 3ML. In order to separate the underlying afterglow emission, the X-ray spectra was firstly generated in regions selected pre and post of the X-ray flare or either one of them, depending on if enough data were available. The underlying afterglow emission spectra was first analysed using a power-law function multiplied by two absorption terms: galactic, \(n_{H}\) and source, \(n_{H}\). The galactic, \(n_{H}\), was chosen based on the right-ascension (RA) and declination (Dec) of each burst whereas, the source, \(n_{H}\) was left as a free parameter in the fit. The Galactic \(n_{H}\) values were obtained from the Galactic \(n_{H}\) calculator available at _Swift_ science data centre8, which is based on Willingale et al. (2013). Thereafter, the X-ray spectra generated for the flare duration is analysed using two power-law functions multiplied by the absorption coefficients (source, \(n_{H}\) was fixed at the value obtained
during the afteglow spectral fit). One of the power-law functions was frozen to the afterglow spectral fit values and the other power-law function was left free to model the flare spectrum. Thereby, using the flare power-law model, the respective energy flux and fluence in the energy range, \(0.3-10\) keV, were estimated. In addition, the spectral index, \(\alpha_{F}\), along with the above mentioned parameters were added to the database. The peak flux of flare corresponding to the peak time is calculated from the flux lightcurve obtained from the _Swift_-XRT repository after subtracting the average afterglow flux
In the current sample, there are GRBs with multiple episodes of X-ray flare emissions. The end of a flaring episode is defined when the decay phase of the light curve touches the background. The further rise after this instance or after certain period of quiescence is considered as the onset of the consecutive flaring episode (Figure 1a). Using this criteria, we identify multiple episodes of the flare. Out of the 220 GRBs in the sample, there are 44 GRBs which possess two flaring episodes and 4 GRBs with three flaring episodes. In 96 GRBs, late time X-ray flares were observed concurrently by the BAT and XRT detectors. In such cases, the emission observed in BAT exempting the X-ray flare detected in XRT, is considered as the prompt emission in our study. Therefore, the \(T_{P,100}\) estimate in such cases considers only the burst duration of the first episode of radiation that is observed in BAT. The quiescent period, \(T_{q}\) for the various flaring episodes are respectively measured as the difference between the onset time of the flaring episode and the end time of the \(T_{P,100}\) of the burst (Figure 1a).
#### 2.2.3 Variability Measurement
One of the defining characteristics of gamma-ray burst emission is the highly variable light curve observed during the prompt emission (MacLachlan et al., 2013). Interestingly, the X-ray flares also exhibit strong variability and erratic light curves (Sonbas et al., 2013). The minimum variability timescale is the shortest timescale over which there is a significant change in the count rate of the observed light curve. The study of the variability pattern can provide information regarding the size and location of the source region (Kobayashi et al., 1997).
In this work, the Bayesian block (BB) technique (Scargle et al., 2013) of binning the counts light curve was employed to identify the minimum variability timescale of both the prompt and the X-ray flare emissions. BB bins the light curve such that each time interval possesses a constant Poisson rate. This methodology divides the light curve into time intervals of dynamically chosen widths and signal-to-noise ratios, and closely follows the true underlying variation in the emission (Burgess, 2014; Vianello et al., 2018). The shortest such time interval is taken as the minimum variability timescale (\(T_{var}\)) of the light curve. The light curves of the prompt and flare emissions are analysed using the Bayesian block binning method available in the Astropy package (Astropy Collaboration et al., 2022) in Python. In order to compare the estimated variability timescales of the prompt and flare emissions within a GRB, as well as between different GRBs in the sample, a common false-alarm probability of \(p_{0}=0.01\) is adopted throughout the analysis. The value \(p_{0}=0.01\) is found to be an optimal choice, allowing us to effectively capture the variability across different emissions while providing a confidence level of 99%. This approach thereby ensures a systematic analysis, avoiding the use of multiple configurations specific to different GRBs.
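A minimal sketch using the Astropy implementation cited above (the 'measures' fitness is one reasonable choice for binned rate light curves with errors; unbinned event data would instead use fitness='events'; the wrapper name is ours):

```python
import numpy as np
from astropy.stats import bayesian_blocks

def min_variability(t, rate, rate_err, p0=0.01):
    # Bayesian-blocks change points for a binned count-rate light curve;
    # T_var is the width of the narrowest block.
    edges = bayesian_blocks(t, rate, sigma=rate_err,
                            fitness='measures', p0=p0)
    return np.min(np.diff(edges))
```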
## 3 Multivariate Analysis
In this section, we analyse the multi-dimensional database that is generated via the methodology described in the previous section to assess the probability distributions, correlation and clustering of the various studied parameters.
### Distributions of prompt and X-ray flare parameters
The histogrammed probability density plots of the temporal and spectral parameters of both the prompt and X-ray flare emissions studied for the GRBs in this work are presented in Figure 2. In addition, smoothed versions of the distributions, obtained via kernel density estimation (KDE) using Gaussian kernels, are also shown.
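The smoothed curves can be reproduced with a standard Gaussian KDE, e.g. (a sketch; the helper name is ours, and the default scipy bandwidth is not necessarily the one used for Figure 2):

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_curve(samples, n_grid=200):
    # Gaussian-kernel density estimate of a parameter sample,
    # e.g. log10 durations; bandwidth follows Scott's rule by default.
    samples = np.asarray(samples, dtype=float)
    kde = gaussian_kde(samples)
    grid = np.linspace(samples.min(), samples.max(), n_grid)
    return grid, kde(grid)
```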
The X-ray flare episodes have durations on average greater than those of the prompt emission, and the consecutive flaring episodes tend to be even longer. The mean (median) of the prompt duration is around \(51^{+117}_{-35}\) s (59 s), whereas the first and second X-ray flaring episodes, on average (median), have longer durations of \(188^{+449}_{-133}\) s (169 s) and \(344^{+1182}_{-267}\) s (230 s), respectively (Figures 2a and 2b). The average (median) quiescent periods of the first and second flaring episodes are found to be around \(69^{+368}_{-58}\) s (91 s) and \(303^{+593}_{-200}\) s (224 s), respectively (Figure 2c). One of the immediate similarities between the prompt emission and the X-ray flares that can be noted via visual inspection is the erratic nature of both light curves, which is quantified using variability timescale measurements. The prompt emission light curves exhibit minimum variability over timescales of, on average (median), \(2.7^{+13.4}_{-2.2}\) s (2.4 s),
Figure 2: The distributions obtained for the prompt (red) and flare (orange for the first flare episode and blue for the second flare episode) temporal and spectral parameters are shown. The vertical bars represent the histograms and the respective KDE curves are shown as color-filled shaded curves. The distributions of a) T\({}_{P,100}\), b) T\({}_{F,100}\), c) T\({}_{q}\), d) T\({}_{P,var}\), e) T\({}_{F,var}\), f) \(\alpha_{P}\), g) \(\alpha_{F}\), h) \(Fluence_{P}\), i) \(Fluence_{F}\), j) \(F_{P,peak}\), and k) \(F_{F,peak}\) are shown.
see Figure 2d, while the minimum variability timescales of the first and second X-ray flare episodes are of relatively longer duration, on average (median) \(15^{+56}_{-12}\) s (11 s) and \(25^{+144}_{-22}\) s (16 s), respectively (Figure 2e). We note that, depending on the exposure times, an average time resolution of \(1\,s\) was available for the majority of first-episode X-ray flare light curves, but in some cases the time resolution was longer.
The low-energy power-law spectral indices, \(\alpha_{P}\), for the time-integrated spectrum over the total prompt emission duration (energy band: 15 - 150 keV) are found to be on average (median) around \(-1.5\pm 0.40\) (\(-1.5\)), see Figure 2f. However, the low-energy power-law spectral indices, \(\alpha_{F1}\) and \(\alpha_{F2}\), of the time-integrated spectra over the total first and second X-ray flare episodes in the energy range 0.3 - 10 keV are on average (median) much softer, around \(-2.1\pm 0.83\) (\(-2.0\)) and \(-2.25\pm 0.65\) (\(-2.17\)), respectively (Figure 2g).
The fluence and the peak flux of the prompt episode are on average (median) around \(1.6^{+3.8}_{-1.1}\times 10^{-6}\) (\(1.7\times 10^{-6}\)) erg/cm\({}^{2}\) and \(1.2^{+2.1}_{-0.78}\times 10^{-7}\) (\(1.1\times 10^{-7}\)) erg/cm\({}^{2}\)/s, respectively, in the energy range 15 - 150 keV (Figures 2h and 2j). The average (median) fluence of the first and second X-ray flaring episodes in 0.3 - 10 keV is around \(1.5^{+5.2}_{-1.1}\times 10^{-7}\) (\(1.5\times 10^{-7}\)) erg/cm\({}^{2}\) and \(8.7^{+24.7}_{-6.5}\times 10^{-8}\) (\(7.2\times 10^{-8}\)) erg/cm\({}^{2}\), respectively (Figure 2i). The first and second X-ray flare episodes have peak fluxes on average (median) around \(2.9^{+14}_{-2.4}\times 10^{-9}\) (\(3.2\times 10^{-9}\)) erg/cm\({}^{2}\)/s and \(9.3^{+47}_{-7.8}\times 10^{-10}\) (\(1.2\times 10^{-9}\)) erg/cm\({}^{2}\)/s, respectively (Figure 2k).
### Principal component analysis
Principal Component Analysis (PCA) allows one to search for trends or correlations between parameters and to rank them by their contribution to the observed variance in the data. The PCA of the entire multi-dimensional dataset, including the parameters of both the prompt and the X-ray flares, thereby gives us some insight into the relations between different parameters and also shows which parameters mainly drive the variability in clustering. PCA reduces the multi-dimensional dataset to a few principal components, each a linear combination of the original parameters. We find that principal component 1 (PC1) and principal component 2 (PC2) together account for 46% of the total variability in the dataset. Using PC1 and PC2, the PCA circle of correlation is created as shown in Figure 3a. Here the correlation refers to the Pearson correlation coefficient between a parameter and the principal components. Thereby, parameters that are positively correlated with one another are grouped within the same quadrant, while negatively correlated parameters are positioned in opposite quadrants. Parameters that are weakly correlated with each other are positioned in adjacent quadrants, at nearly right angles to each other. It is also key to note that the length of each variable vector from the origin (centre of the circle) represents the quality of representation of that variable, i.e., the percentage of its contribution in defining PCA axes 1 and 2 combined, which is given by the contribution color bar shown in Figure 3a.
With the above understanding, from the PCA analysis9 of the dataset we find that there exists no explicit correlation between the temporal and spectral parameters of the prompt emission and the X-ray flares, as evident from Figure 3a. In the case of \(\alpha\) and \(T_{var}\), it is important to note that the quality of representation of these parameters on PC1 and PC2 is lowest, and therefore their degrees of correlation may not be strong (see Section 3.3 for more details). Using these initial PCA results as a guide, we look into the correlations between the various parameters and the clustering in detail in the subsequent sections.
Footnote 9: For the PC analysis, the R package FACTOMINER (Le et al., 2008) was used.
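The PC analysis here was done with the R package FACTOMINER; an equivalent sketch of the circle-of-correlation ingredients in Python (the function name is ours) could read:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def correlation_circle(X):
    # Pearson correlations of each standardised parameter with PC1/PC2
    # (the arrow coordinates in Figure 3a) and the explained variance
    # of the first two components.
    Xs = StandardScaler().fit_transform(X)
    pca = PCA(n_components=2).fit(Xs)
    scores = pca.transform(Xs)
    corr = np.array([[np.corrcoef(Xs[:, j], scores[:, k])[0, 1]
                      for k in range(2)]
                     for j in range(Xs.shape[1])])
    return corr, pca.explained_variance_ratio_
```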
### Correlation Analysis
The correlation analysis explores and identifies trends, patterns or relationships between different parameters under consideration, including positive, negative or no correlation. This can be quickly assessed by visualizing data in a scatter plot. Most importantly, correlation does not imply direct causation, as the relationship between parameters may be complex. Nonetheless, a correlation does indicate interdependence, allowing for further exploration of the underlying physical scenario.
A Pearson correlation matrix of the various parameters of the prompt emission and of the first and second X-ray flare episodes is shown in Figure 3b. The correlation matrix features markers indicating the levels of statistical significance: \(p<0.001\), \(p<0.01\), \(p<0.05\), and \(p>0.05\). The p-value represents the probability under the null hypothesis, which assumes there is no correlation between the variables, and thereby serves as a measure of the statistical significance of accepting the alternative model, i.e., that a correlation exists between the two variables. The
parameters of the third flare episodes are not included in the correlation matrix because only a small fraction of the sample contains such multiple flaring episodes. However, they are included in the detailed correlation plots and modelling. In this work, we have modelled only those correlations between parameters that exhibited a Pearson correlation coefficient with absolute value \(>0.5\) in both the first and second flaring episodes. All the modelled correlations in our investigation demonstrate strong statistical significance, as indicated by p-values below 0.001.
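The entries of such a correlation matrix and their p-values can be computed as sketched below (assuming a rows-are-GRBs parameter array; the helper name is ours):

```python
import numpy as np
from scipy.stats import pearsonr

def correlation_matrix(X):
    # Pairwise Pearson r and p-values for the columns of X
    # (rows = GRBs, columns = log-scaled parameters).
    n = X.shape[1]
    r, p = np.eye(n), np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            r[i, j], p[i, j] = pearsonr(X[:, i], X[:, j])
            r[j, i], p[j, i] = r[i, j], p[i, j]
    return r, p
```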
In Figure 4a, the duration of the X-ray flare, \(T_{F,100}\), is plotted against the duration of the respective prompt emission, \(T_{P,100}\), of the GRB. No particular correlation is observed. For comparison, the identity line \(T_{P,100}=T_{F,100}\) is marked. The durations of the flares, including the first, second and third episodes of emission, largely lie above this identity line, suggesting that the flaring episodes tend to last longer than their respective prompt emissions. In addition, in the case of multiple flaring episodes, each consecutive episode is longer than the preceding one.
As evident from Figure 4b, the flare minimum variability does not show any explicit correlation with the temporal properties of the prompt emission, such as its minimum variability and duration (see Figure 3b). With the quiescent period, there is only a trend of positive correlation between \(T_{F,var}\) and \(T_{q}\), with a weak Pearson correlation coefficient of \(+0.4\) (Figure 3b). The minimum variability timescale of the flares is compared with the total duration of the flare in Figure 4c. We note a positive correlation between \(T_{F,var}\) and \(T_{F,100}\) as follows
\[log_{10}T_{F,var}=\left(+0.71\pm 0.06\right)log_{10}(T_{F,100})-0.42\pm 0.15 \tag{1}\]
We note that this positive correlation is a weak trend, as we are limited by the small number of flares with very long durations (\(>1000\) s). This correlation can thus be affirmed in the future with increased detections of long-duration flares and light curves with finer time resolution. Interestingly, in contrast to this behaviour of the flares, there exists no significant correlation between the prompt emission minimum variability and its duration (Figure 4d), as evident from the Pearson correlation matrix shown in Figure 3b. For a small number of GRBs, the BB binning of the prompt emission resulted in a minimum timescale equal to the whole burst duration (Figure 4d).
A positive correlation is found between the quiescent period, \(T_{q}\), and the X-ray flare duration, \(T_{F,100}\) (Figure 4e). The positive correlation is modelled using a linear fit in the log-space as follows
\[log_{10}(T_{q}+1)=\left(+0.97\pm 0.05\right)log_{10}(T_{F,100})-0.12\pm 0.10 \tag{2}\]
Figure 3: (a) PCA circle of correlation showing the relationship between all the prompt and X-ray flare parameters is shown. The percentage of the total contribution of each parameter to the PCA axes is represented by the color coding. (b) The Pearson Correlation Matrix of the various parameters of the prompt emission and the first (blue), and second (green) X-ray flare episodes are shown. Markers representing the p-values of the obtained correlations are shown in each cell representing the statistical significance of the correlation.
and is shown as a solid red line in Figure 4e. In comparison, only a weak trend of negative correlation is observed between the quiescent period and the prompt emission duration of the bursts, as shown in Figure 4f.
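The linear fits in log space quoted in Equations (1)-(4), with their \(1\sigma\) parameter errors, can be sketched as follows (an unweighted least-squares version; the published fits may additionally weight by measurement errors, and the helper name is ours):

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_line(logx, logy):
    # Least-squares line through already-logged data; returns
    # (slope, intercept) and their 1-sigma errors.
    popt, pcov = curve_fit(lambda u, m, c: m * u + c, logx, logy)
    return popt, np.sqrt(np.diag(pcov))

# Equation (2), for example:
# (m, c), (dm, dc) = fit_line(np.log10(t_f100), np.log10(t_q + 1.0))
```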
The prompt and flare emission spectra were modelled using the BAT and XRT data, obtained in the energy ranges 15-150 keV and 0.3-10 keV, respectively. For the purpose of comparing spectral behaviours, we assumed that the prompt spectral model remains valid when extrapolated to the lower energy range of 0.3 - 10 keV, and thereby studied the correlations between the spectral properties of the prompt and flare emissions.
Firstly, we studied the correlation between the low-energy power-law index (\(\alpha_{P}\)) of either the power-law or the cutoff power-law model that best fits the time-integrated prompt spectra in the energy range \(15-150\) keV and the low-energy power-law index (\(\alpha_{F}\)) of the power-law model used to fit the time-integrated flare spectra in the energy range \(0.3-10\) keV, as shown in Figure 5a. There is no explicit correlation observed between \(\alpha_{F}\) and \(\alpha_{P}\). For the majority of the sample, \(\alpha_{F}\) tends to be much softer than \(\alpha_{P}\).
The high-energy (\(>10\) keV) part of the flare spectrum is not available for analysis and therefore the peak of the flare spectrum is not known. It has been observed in the prompt emission that the spectral peaks typically lie around a few hundred keV. Therefore, for the purpose of comparing the energy fluxes between the prompt and flare emissions, we estimate the prompt flux expected in the energy range 0.3 - 10 keV by extrapolating the spectrum obtained from the analysis in 15 - 150 keV down to those energies. In Figure 5b, we compare the peak fluxes of the prompt and X-ray flare emissions estimated in the energy range 0.3-10 keV. Again, no significant correlation between these parameters is found. The flares are found to be less bright than the prompt emission (Figure 5c). We further note that among the multiple flaring episodes of a GRB, the subsequent flaring episodes are found to be less intense, as evident in Figure 5d. This is consistent with the findings of Chincarini et al. (2007b) and Falcone et al. (2007).
The prompt emission does not show any correlation between its peak flux and its duration (Figure 5e). However, it is interesting to note that the flare peak flux shows a weak trend of negative correlation (Pearson correlation coefficient = \(-0.42\)) with the flare duration (Figure 5f). Similarly, the prompt emission does not show any significant correlation between its peak flux and either its minimum variability or the quiescent period (Figures 5g and 5i). However, the peak flux of the X-ray flare shows a significant negative correlation with both its minimum variability and the quiescent period (Figures 5h and 5j), where the modelling is as follows:
\[log_{10}(F_{F,peak})=(-0.89\pm 0.05)\,log_{10}(T_{F,var})-7.44\pm 0.06 \tag{3}\]
\[log_{10}(F_{F,peak})=(-1.05\pm 0.07)\,log_{10}(T_{q}+1)-6.39\pm 0.17 \tag{4}\]
Certain GRBs in the sample show the occurrence of flaring partly or fully during the plateau phase of the afterglow emission (Appendix A). The plateau phase is taken to be the region where the power-law slope of the afterglow ranges between \(\pm 0.3\). We have identified 13 such GRBs, of which 12 have either the first or all episodes of flares occurring partly or fully during the plateau phase, while in only one case (GRB 110414A) the second flaring episode alone occurs during the plateau phase. These GRBs are highlighted in all the correlation plots. We note that these GRBs do not exhibit any unusual trend in the correlation studies in comparison to the other GRBs in the sample.
### Clustering
It is observed that among the GRB population only certain GRBs possess X-ray flaring episodes, and among them some are found to have multiple episodes of flaring, some with and some without quiescent periods, etc. Such variability among the different GRBs suggests the possible existence of subgroups among the GRBs with X-ray flaring.
No major dissimilarity is observable between the distributions of the prompt emission properties of GRBs with and without flares; see Appendix B. In addition, the Kolmogorov-Smirnov (KS) test shows that it is highly likely that GRBs with and without X-ray flares belong to the same population. The principal component analysis carried out in Section 3.2 shows that there is no explicit correlation between the studied prompt and flare properties. Therefore, we incorporated only the X-ray flare properties to investigate the possibility of clusters among the flaring GRBs. This included the following X-ray flare parameters: T\({}_{F,100}\), T\({}_{F,var}\), T\({}_{q}\), Fluence\({}_{F}\), F\({}_{F,Peak}\) and \(\alpha_{F}\).
Figure 4: (a) T\({}_{F,100}\) is plotted against \(T_{P,100}\). The black circles, blue squares and red triangles represent the first, second and third X-ray flaring episodes, respectively; the same color code and markers are used hereafter. The green dashed line represents T\({}_{P,100}\) = T\({}_{F,100}\). (b) \(T_{F,var}\) is plotted against the corresponding \(T_{P,var}\). (c) \(T_{F,var}\) is plotted against \(T_{F,100}\); the linear fit in log space is shown by the red solid line, and the shaded red and blue regions correspond to 1\(\sigma\) and 2\(\sigma\) deviations of the data from the fit (the same color code is used for these contours hereafter). (d) T\({}_{P,100}\) is plotted against \(T_{P,var}\). (e) T\({}_{q}\) is plotted against T\({}_{F,100}\) in the left-hand panel and (f) against T\({}_{P,100}\) in the right-hand panel, respectively. Note that the grey diamonds represent GRBs with a quiescent period equal to zero, which are not included in the modelling of the correlation. The parameters of flares that occur during the plateau phase of the afterglow are marked with green unfilled circle and square markers for the first and second flare episodes, respectively.
Figure 5: (a) \(\alpha_{P}\) is plotted against \(\alpha_{F}\). (b) F\({}_{P,peak}\) is plotted against F\({}_{F,peak}\). (c) The ratio of F\({}_{P,peak}\) to F\({}_{F1,peak}\) and F\({}_{F2,peak}\) is plotted against T\({}_{P,100}\) in black circles and blue squares, respectively. (d) The ratio of F\({}_{F1,peak}\) to F\({}_{F2,peak}\) is plotted against T\({}_{P,100}\) in blue circles, and the ratio of F\({}_{F2,peak}\) to F\({}_{F3,peak}\) in red circles. (e) F\({}_{P,peak}\) is plotted against T\({}_{P,100}\). (f) F\({}_{F,peak}\) is compared against T\({}_{F,100}\). (g) F\({}_{P,peak}\) is plotted against T\({}_{P,var}\). (h) F\({}_{F,peak}\) is compared against T\({}_{F,var}\). (i) F\({}_{P,peak}\) is plotted against T\({}_{q}\). (j) F\({}_{F,peak}\) is compared against T\({}_{q}\). The linear fits modelling the correlations in (h) and (j) are shown as red solid lines. The parameters of flares occurring during the plateau phase of the afterglow are highlighted with green unfilled circle and square markers for the first and second flare episodes, respectively.
Unsupervised clustering was carried out using the Gaussian Mixture Model (GMM), hierarchical and K-means algorithms10. To account for the large differences between the numerical values of the parameters in the dataset, the data were standardised. All the chosen parameters except \(\alpha_{F}\) were converted to log scale, and the complete dataset was then scaled and centered by subtracting from each value the mean of its parameter and dividing by the standard deviation. The standardised dataset was then fed into the different clustering algorithms.
Footnote 10: The clustering packages available in scikit-learn (Pedregosa et al., 2011) were used.
Based on various evaluation metrics, such as the Silhouette score, the Davies-Bouldin index and the Calinski-Harabasz index, K-means was found to be the best clustering algorithm for the dataset. The number of clusters in K-means was determined by two methods: the Silhouette score (Rousseeuw, 1987) and the elbow method. The elbow was found at 4, signifying 4 possible subgroups within the GRBs with X-ray flares, and the Silhouette score also peaked at 4 clusters. The obtained clusters are visualised in the principal component space as shown in Figure 6a. For understanding the origin of the clusters, we have marked the cases with \(T_{q}=0\) as squares, GRBs with multiple flaring episodes with star markers, and GRBs with a flare in the plateau region of the afterglow with yellow triangle markers. Since \(T_{q}\)'s contribution in the PCA analysis is largest in the fourth principal component, for better visualisation of the clusters we chose to plot them using PC4 versus PC1 in Figure 6a. The average values of the parameters of each cluster are reported in Table 1. The GRBs with flares during the plateau are found across the clusters and do not display any discernible peculiar behaviour.
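A sketch of this pipeline with the scikit-learn tools cited above (the wrapper name and the range of \(k\) values scanned are ours):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

def cluster_flares(X, k_range=range(2, 9), seed=0):
    # Standardise the (log-scaled) flare parameters, run K-means for a
    # range of k, and keep the model with the highest silhouette score
    # (which peaks at k = 4 for this dataset).
    Xs = StandardScaler().fit_transform(X)
    fits = [KMeans(n_clusters=k, n_init=10, random_state=seed).fit(Xs)
            for k in k_range]
    best = max(fits, key=lambda km: silhouette_score(Xs, km.labels_))
    return best.labels_, best.n_clusters
```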
## 4 Discussions and Conclusions
The origin of the late-time X-ray flares is largely a mystery. Owing to the similarly erratic behaviour of their light curves, the prompt emission and the X-ray flares are both believed to be powered by the central engine. Therefore, in this work, using the extensive database of _Swift_ observations of both prompt and X-ray flare emissions via the onboard BAT and XRT instruments respectively, we performed a systematic statistical study of the various temporal and spectral parameters characterising these two emissions. The sample consisted of 220 long GRBs. Among them, 172 GRBs possessed single episodes of flare emission, whereas 44 and 4 GRBs had double and triple flaring episodes, respectively. Among all the GRBs in the sample, 96 had their flaring episode observed synchronously in BAT as well. The temporal properties of the prompt and X-ray flare emissions were characterised by parameters such as the total duration, the minimum variability and the quiescent period between the episodes; their spectral properties were characterised by parameters such as the low-energy spectral power-law index, the peak energy flux and the energy fluence.
Figure 6: (a) The K-means clustering results are visualized in a PC plot. Black cross and red square markers represent the GRBs which possess multiple flares and the GRBs with zero quiescent period, respectively. The yellow triangle markers represent GRBs with flares in the plateau region of the afterglow. (b) The KDE distributions of the various flare properties of clusters B and D are shown.
The distributions of the total durations of the prompt and flaring episodes show that the flares largely have longer durations than the prompt emission, with the median of the T\({}_{F,100}\) distribution being nearly 3 times that of the T\({}_{P,100}\) distribution. Consecutive flaring episodes are found to be longer than the preceding ones: the median duration of the second flaring episode is around 1.3 times that of the first. This is consistent with the earlier studies by Chincarini et al. (2007b).
The distribution of the minimum variability timescales of the prompt emission has a median around 2 s. On the contrary, the X-ray flare emissions tend to have larger variability timescales, with a median of 12 s. The quiescent periods between the prompt emission and the flaring episodes are found to range from none to around a day. The average quiescence for the first flaring episode is around 100 s. Among the spectral parameters, the low-energy power-law index of the flares is found to be much softer than that of the prompt emission. The study of the energy fluence and the peak energy flux shows that the prompt emission is more intense and brighter than the X-ray flaring episodes when compared in the same energy range of \(0.3-10\) keV.
The principal component analysis and the correlation studies carried out on the dataset reveal that, in general, there is no explicit correlation between the prompt and X-ray flare properties. The durations of the prompt and flare episodes are uncorrelated. Interestingly, a positive correlation is observed between the quiescent period and the flare duration, while there is no significant correlation with the prompt emission duration. This implies that whatever drives the quiescent period and the onset time of the flare emission after the prompt has ended is intrinsically connected with the origin of the flare, while being less dependent on the prompt emission.
The minimum variability timescale observed in GRBs can point towards the size of the emitting region, and it also allows one to probe the central engine activity, the impact of the circumburst medium as the jet propagates through the stellar core, etc. There is no explicit correlation observed between the minimum variability timescales of the flares and those of the prompt emission of the GRB. While there is no correlation between the prompt emission minimum variability and its duration, we find a trend of positive correlation between the flare variability and its duration.
We compared the spectral properties of the prompt and flare emissions in the energy range 0.3 - 10 keV. This was done by extrapolating the spectrum obtained for the prompt emission in the energy range 15-150 keV into the X-ray flare energy range of 0.3-10 keV. We find that the low-energy power-law spectral indices, \(\alpha\), obtained for the spectra over the total durations of the prompt and flare emissions are largely uncorrelated. The flares largely tend to have relatively much softer spectra than the prompt emission. This may be attributed to the relatively longer duration of the flares and may also point towards more rapidly evolving spectral behaviour in the flares compared to the prompt emission. We also find that the peak fluxes of the prompt and flare emissions in the energy range 0.3-10 keV are uncorrelated. Consistent with previous studies (Falcone et al., 2007), the peak fluxes of the flare emission are found to be less bright than those of the prompt emission and also decrease over consecutive flaring episodes. There are no explicit correlations observed between the peak fluxes of either the prompt or the flare emissions and their durations. Interestingly, we also note that although there is no explicit correlation between the prompt emission's minimum variability timescale and its peak flux, there exists a strong negative correlation between the flare's minimum variability timescale and its peak flux.
Further, using the flare properties alone, we explored the variety of classes among the current dataset of flares defined in this paper. Using the K-means clustering algorithm, four clusters are found to exist. Among the clusters, we note that there is no preferential clustering for GRBs with multiple episodes of flares, which are found spread across the clusters. We note that cluster A contains all 25 cases of flares with zero quiescent period (Figure 6a). Thus, we conclude that the segregation of these X-ray flare events from the rest of the flare sample points out that
| | Cluster A | Cluster B | Cluster C | Cluster D |
| --- | --- | --- | --- | --- |
| log\({}_{10}\)(\(T_{F,100}\)) | 2.08 \(\pm\) 0.46 | 2.26 \(\pm\) 0.32 | 4.26 \(\pm\) 0.49 | 2.18 \(\pm\) 0.35 |
| log\({}_{10}\)(\(T_{F,var}\)) | 0.97 \(\pm\) 0.42 | 0.76 \(\pm\) 0.31 | 3.14 \(\pm\) 0.91 | 1.49 \(\pm\) 0.49 |
| \(\alpha_{F}\) | \(-1.87\pm 0.59\) | \(-1.81\pm 0.56\) | \(-1.64\pm 0.56\) | \(-2.54\pm 0.95\) |
| log\({}_{10}\)(F\({}_{F,peak}\)) | \(-8.18\pm 0.43\) | \(-7.97\pm 0.44\) | \(-10.39\pm 0.84\) | \(-9.05\pm 0.44\) |
| log\({}_{10}\)(Fluence\({}_{F}\)) | \(-6.59\pm 0.49\) | \(-6.33\pm 0.41\) | \(-6.70\pm 0.82\) | \(-7.42\pm 0.39\) |
| log\({}_{10}\)(\(T_{q}\)) | 0.07 \(\pm\) 0.26 | 1.95 \(\pm\) 0.35 | 3.54 \(\pm\) 0.70 | 2.11 \(\pm\) 0.31 |
| Cluster size (% of population) | 12.3 % | 42.3 % | 3.6 % | 41.8 % |

Table 1: Cluster Statistics
these observed flaring episodes are actually the continuation of the prompt emission. Among the remaining classes, we note that cluster C contains only 3.6% of the total sample and largely represents the flares with extreme values of duration, minimum variability timescale, etc. This reduces the clustering to two main clusters, B and D, each comprising \(\sim 42\%\) of the sample. Inspecting the distributions of the various properties of these two clusters, we find that the segregation is largely driven by the minimum variability timescale, the peak flux and the fluence, as evident from Figure 6b. Certain GRBs in the sample exhibit flaring during the plateau phase of the afterglow, but they do not display any unusual trends in the correlation studies and are distributed across the clusters without any discernible peculiar behaviour.
Thus, to summarise, in order to understand the origin of the prompt emission and the X-ray flares, we find the following results from the correlation study: (a) there is no explicit correlation between the temporal and spectral properties of the prompt and flare emissions; (b) there exists a trend of positive correlation between the minimum variability timescale of the flare and its duration, while there is a strong negative correlation between the minimum variability timescale of the flare and its peak flux; however, no such trends are found in the prompt emission properties; and (c) there also exists a positive correlation between the quiescent period and the flare duration, and a negative correlation between the quiescent period and the flare peak flux, whereas no such correlations are observed for the prompt emission of GRBs.
Therefore, using the parameter characterisation of the prompt and X-ray flares adopted in this work, we do not find any significant evidence for a common origin of the prompt emission of GRBs and the X-ray flares observed during the afterglow emission. In addition, we find that the X-ray flares dominantly fall into two main classes whose differences are primarily driven by the variability timescale, peak flux and fluence. Finally, we present the catalog of the estimated parameters to the community for further use in Appendix C.
One of the major future prospects of this work is to develop an understanding of the physical scenario leading to the observed correlations of the late-time X-ray flares and the prompt emission. Furthermore, for a more comprehensive picture of the origin of flares, an extensive comparative study needs to be carried out between the flares observed in X-rays and those in the optical as well as in high-energy emission (\(>100\) MeV). The upcoming Cherenkov Telescope Array (CTA, Kakuwa et al., 2012) holds promise for further insights. We further note that a significant fraction of the initial sample had to be excluded due to incomplete data. An enhanced sample could be studied by adopting a light-curve reconstruction method similar to the one proposed in Dainotti et al. (2023), as an increased sample size can lead to more confident assessments of the correlations presented in this study.
We thank the anonymous referee for valuable comments and suggestions. J.S. and S.I. are supported by DST INSPIRE Faculty Scheme (IFA19-PH245). S.I. is also supported by SERB SRG Grant (SRG/2022/000211). This research has made use of data obtained from the High Energy Astrophysics Science Archive Research Center (HEASARC), provided by NASA's Goddard Space Flight Center and from the UK Swift Science Data Centre at the University of Leicester.
## Appendix A Additional Plots
## Appendix B Comparing the Prompt Emission of GRBs with and without X-ray flares
Not all GRBs detected by _Swift_ possess X-ray flares in their afterglow emission. Thus, in order to understand whether there exists any difference in the type of GRBs giving rise to flares, we made comparison plots of the distributions of different prompt emission properties, such as the isotropic burst energy \(E_{iso}\) (estimated for known-redshift cases), \(T_{90}\), the low-energy power-law spectral index \(\alpha\), and the energy fluence, for GRBs with and without X-ray flares, as shown in Figure 8. Furthermore, to assess whether both sample distributions originate from the same population, we conducted a Kolmogorov-Smirnov (KS) test11 on the distributions of the different parameters. The null hypothesis assumes that the properties of both the flare and non-flare GRBs are sampled from the same underlying population, and the alternative is that they are not. If the resultant p-value is lower than the chosen threshold of \(p=0.01\), we can reject the null hypothesis and adopt the alternative at a statistical significance greater than 99%. However, for the distributions of \(E_{iso}\), \(T_{90}\), \(\alpha\) and the energy fluence in \(15-150\,\)keV, we find the p-values from the KS test to be 0.17, 0.07, 0.69 and 0.03, respectively. Since these p-values lie above the threshold, we conclude that there is no particular distinction between the populations of GRBs that do and do not produce late-time X-ray flares.
Footnote 11: The scipy package ks_2samp was used.
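The test itself is a one-liner around the cited scipy routine (the wrapper name is ours):

```python
from scipy.stats import ks_2samp

def same_population(vals_flare, vals_noflare, threshold=0.01):
    # Two-sample KS test between GRBs with and without X-ray flares;
    # the null hypothesis (same parent population) is rejected at >99%
    # confidence only when the p-value falls below the threshold.
    stat, pval = ks_2samp(vals_flare, vals_noflare)
    return pval, pval < threshold
```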
## Appendix C Table of Prompt & X-ray Flare Properties
The complete catalog of the studied parameters of the sample is made available online in the machine-readable format. A sample of the catalog is shown in Table 2 for guidance regarding its form and content.
|
2309.05355 | Parallel transport on a Lie 2-group bundle over a Lie groupoid along
Haefliger paths | We prove a Lie 2-group torsor version of the well-known one-one
correspondence between fibered categories and pseudofunctors. Consequently, we
obtain a weak version of the principal Lie group bundle over a Lie groupoid.
The correspondence also enables us to extend a particular class of principal
2-bundles to be defined over differentiable stacks. We show that the
differential geometric connection structures introduced in the authors'
previous work, combine nicely with the underlying fibration structure of a
principal 2-bundle over a Lie groupoid. This interrelation allows us to derive
a notion of parallel transport in the framework of principal 2-bundles over Lie
groupoids along a particular class of Haefliger paths. The corresponding
parallel transport functor is shown to be smooth. We apply our results to
examine the parallel transport on an associated VB-groupoid. | Saikat Chatterjee, Adittya Chaudhuri | 2023-09-11T09:55:38Z | http://arxiv.org/abs/2309.05355v1 | # Parallel transport on a Lie 2-group bundle over a Lie groupoid along Haefliger paths
###### Abstract
We prove a Lie 2-group torsor version of the well-known one-one correspondence between fibered categories and pseudofunctors. Consequently, we obtain a weak version of the principal Lie group bundle over a Lie groupoid. The correspondence also enables us to extend a particular class of principal 2-bundles to be defined over differentiable stacks. We show that the differential geometric connection structures introduced in the authors' previous work, combine nicely with the underlying fibration structure of a principal 2-bundle over a Lie groupoid. This interrelation allows us to derive a notion of parallel transport in the framework of principal 2-bundles over Lie groupoids along a particular class of Haefliger paths. The corresponding parallel transport functor is shown to be smooth. We apply our results to examine the parallel transport on an associated VB-groupoid.
Key words and phrases:Lie groupoid fibrations, principal 2-bundles, Haefliger paths, parallel transport, thin homotopy 2020 Mathematics Subject Classification: Primary 53C08, Secondary 22A22, 58H05
###### Contents
* 1 Introduction
* 2 Some background materials
* 3 Quasi-principal 2-bundles and their characterizations
* 4 Lazy Haefliger paths and thin fundamental groupoid of a Lie groupoid
* 5 Parallel Transport on quasi-principal 2-bundles
* 6 Induced parallel transport on VB-groupoids along lazy Haefliger paths
## 1. Introduction
For the last few decades or so, higher gauge theories have provided frameworks for describing the dynamics of string-like extended "higher dimensional objects." Typically, they involve an appropriately categorified version of a smooth fiber bundle equipped with a connection structure that induces a notion of parallel transport consistent with this categorification. The precise description of the categorified objects depends largely on the framework. Our object of interest in this paper is, in particular, a categorified principal bundle that lives in the realm of Lie groupoids. In our earlier paper, we introduced such an object as a principal 2-bundle over a Lie groupoid [Definition 3.6, [11]].
Such a principal 2-bundle, for a particular class of examples, can be defined over a differentiable stack represented by its base Lie groupoid. In other words, we provide a Lie 2-group torsor version of the classic Grothendieck correspondence between general fibered categories and pseudofunctors, which, to the best of our knowledge, is a new addition to the existing literature. This forms the first part of this paper.
The rest of the paper is devoted to the development of a theory of parallel transport on a quasi-principal 2-bundle (a principal 2-bundle over a Lie groupoid equipped with a quasi connection), induced from the differential geometric connection structures developed by us in [11], along a certain class of Haefliger paths in Lie groupoids ([24, 9, 27, 39, 16, 25]). This also involves the construction of a parallel transport functor between appropriate source and target categories. Before we summarize our results and main achievements, we provide a brief overview of some works in the development of parallel transport theory on categorified frameworks of bundles and connections below:
Before delving into the existing works on principal bundles over appropriately categorified bases, we mention some works concerning higher principal bundles over manifolds.
Baez [4], Baez-Schreiber [7], Picken-Martins [45], Mackaay-Picken [40], and Schreiber-Waldorf [51, 52, 53], along with some other papers cited therein, comprise some of the earliest work in this area. In particular, Schreiber and Waldorf developed a general model-independent axiomatic approach to the theory of parallel transport. An interesting aspect of their approach is the axiomatic characterization of the smoothness condition for parallel transport functors/2-functors of geometric objects such as connections on principal bundles and non-abelian gerbes over a manifold. In [15], Collier, Lerman, and Wolbert introduced an alternative notion of smoothness for transport functors [Definition 3.5, [15]] and argued its equivalence with the one mentioned in [51]. They also showed its validity by proving the smoothness (in their sense) of the parallel transport functor induced by a connection on a classical principal bundle. More recently, in [58] and [59], Waldorf introduced a notion of parallel transport on a model of principal Lie 2-group bundles over a manifold, induced from global connection data. Also, a few years back, again in a framework of principal 2-bundles over a manifold, Kim and Saemann, via the adjusted Weil algebra in [36], introduced a notion of a generalized higher holonomy functor that has been successfully kept free from the usual fake-flatness condition. One can find an approach to parallel transport in terms of Lie crossed module cocycles over a manifold in [54], and [62] explores its relation with knot theory. A gluing algorithm for local 2-holonomies has been provided by [60]. Also, double-category-theoretic approaches to higher gauge theories can be found in both Morton-Picken's [50] and Zucchini-Soncini's [54]. Although our overview is far from complete, in all the approaches mentioned so far, the base space of the categorified geometric object in question is a manifold. However, since our current framework admits a categorified base space, i.e., a Lie groupoid, we must mention at least some works in this direction.
Gengoux, Tu, and Xu in [38] introduced a notion of holonomy map along a generalized pointed loop for a principal Lie group bundle over a Lie groupoid equipped with a flat connection. More recently, in [15], Collier, Lerman, and Wolbert studied parallel transport on a principal Lie group bundle over a differentiable stack. In particular, they defined their principal bundles, connection structures, and parallel transports as 1-morphisms of stacks. There are also other approaches, such as [12] and [56], where authors considered
the base as a path groupoid and as an affine 2-space, respectively. To contrast with our approach and motivation, we emphasize here that we consider parallel transport on a Lie 2-group bundle over a Lie groupoid along a certain class of Haefliger paths, which, despite its interesting properties, seems to be unexplored so far.
Now, let us come back to the current paper!
There is a notion of a path in a Lie groupoid due to Haefliger ([26, 28, 24]), along with a notion of homotopy between such paths, which yields a notion of the fundamental groupoid of a Lie groupoid (see, for example, [16, 46, 25, 47]). In the existing literature, these paths are usually known as _Haefliger paths_ or _\(\mathcal{G}\)-paths in a Lie groupoid \(\mathcal{G}\)_. Consistent with our notations, we call them \(\mathbb{X}\)_-paths_ in a Lie groupoid \(\mathbb{X}=[X_{1}\rightrightarrows X_{0}]\). Loosely speaking, an \(\mathbb{X}\)-path is a sequence of the form \((\gamma_{0},\alpha_{1},\gamma_{1},\cdots,\alpha_{n},\gamma_{n})\) for some \(n\in\mathbb{N}\), where each \(\gamma_{i}\in X_{1}\) and each \(\alpha_{i}\) is a path in \(X_{0}\), such that they are compatible with each other's sources, targets and initial and final points in an appropriate way. In this paper, we will introduce a notion of _thin homotopy between "lazy Haefliger paths" or lazy \(\mathbb{X}\)-paths_, a suitable generalization, in the setting of Lie groupoids, of the classical notion of thin homotopy between paths with sitting instants, i.e., smooth paths in a manifold that are constant near their endpoints. Our notion of thin homotopy between lazy Haefliger paths is a minor variant of the existing notion of homotopy between Haefliger paths ([16, 46, 25, 47]). We show that our notion of thin homotopy naturally yields a notion of the _thin fundamental groupoid of a Lie groupoid_, which possesses a diffeological structure. Interestingly, we found that the multiplicative nature of a connection 1-form (introduced by us in [11]) and a quasi connection structure combine nicely to produce a reasonable notion of parallel transport on a quasi-principal 2-bundle along such lazy Haefliger paths. This parallel transport turns out to be invariant under our notion of thin homotopy and, as a consequence, produces a functor from the thin fundamental groupoid of the base Lie groupoid to a quotient category of \(\mathbb{G}\)-torsors, where \(\mathbb{G}\) is the structure Lie 2-group of the quasi-principal 2-bundle. The parallel transport functor behaves naturally with respect to pullbacks and connection-preserving quasi-principal 2-bundle morphisms. Moreover, we show that this parallel transport functor is "smooth" in a way that generalizes the notion of smoothness introduced by Collier, Lerman, and Wolbert for traditional principal bundles in [Definition 3.5, [15]]. Fixing a Lie 2-group \(\mathbb{G}\) and a Lie groupoid \(\mathbb{X}\), we then extend this parallel transport functor to define a functor
\[\mathcal{F}\colon\operatorname{Bun}^{\nabla}_{\operatorname{quasi}}( \mathbb{X},\mathbb{G})\to\operatorname{Trans}(\mathbb{X},\mathbb{G}). \tag{1.1}\]
In Equation (1.1), \(\operatorname{Bun}^{\nabla}_{\operatorname{quasi}}(\mathbb{X},\mathbb{G})\) is the category of quasi-principal \(\mathbb{G}\)-bundles equipped with strict connections (introduced in [11]) over the Lie groupoid \(\mathbb{X}\). On the other side, \(\operatorname{Trans}(\mathbb{X},\mathbb{G})\) is the category of functors from the thin fundamental groupoid of \(\mathbb{X}\) to the previously mentioned quotient category of \(\mathbb{G}\)-torsors. Finally, we develop a notion of induced parallel transport on VB-groupoids through the associated bundle construction.
Except for a cursory mention of an idea in [Subsection 4.1.3, [24]], where the setup and context are very different, to the best of our knowledge this approach to parallel transport (especially along Haefliger paths in Lie groupoids) is new in the existing higher gauge theory literature. We hope our approach will bring new insight to the theory of parallel transport (induced from suitable connection data) on any geometric object that has an underlying Lie groupoid fibration structure with an appropriate cleavage. Nonetheless, we must also mention certain worthy topics
which we have not covered in this paper, such as the construction of a quasi-principal 2-bundle with connection from the data of a parallel transport functor, a notion of parallel transport along higher-dimensional objects like surfaces, etc. We will include these topics in a forthcoming paper that will hopefully make the relation with other existing notions of higher parallel transport theories more transparent.
### The outline and organization of the paper
Section 2 covers some standard results related to Lie \(2\)-groups, fibered categories and diffeological spaces. We also recall some necessary results from our earlier work [11].
The notions of quasi connection and a quasi-principal \(2\)-bundle are introduced in Section 3. In Section 3.2 we construct some examples of the same. Section 3.3 establishes one of the key results where we realize a quasi-principal \(2\)-bundle as a Grothendieck construction, and provide a Lie \(2\)-group torsor version of the correspondence between fibered categories and pseudofunctors. In the following subsection, a characterization for quasi connections has been given in terms of retractions. We end the section by extending a class of principal \(2\)-bundles over a differentiable stack.
In Section 4, we introduce the definitions of a lazy Haefliger path in a Lie groupoid and a thin homotopy between them, resulting in a thin fundamental groupoid of a Lie groupoid. Section 4.1 provides a diffeological structure on the thin fundamental groupoid.
Section 5 develops a notion of thin-homotopy-invariant parallel transport on a quasi-principal \(\mathbb{G}\)-bundle over a Lie groupoid \(\mathbb{X}\) along a lazy Haefliger path, which results in a functor, namely the _parallel transport functor_, from the thin fundamental groupoid of \(\mathbb{X}\) to a quotient category of \(\mathbb{G}\)-torsors. The main result of this section is Theorem 5.13, where we establish Equation (1.1). We observe the naturality of the parallel transport functor with respect to pullback constructions and connection-preserving bundle morphisms in Section 5.2. It is crucial to note that the parallel transport functor enjoys appropriate smoothness properties, discussed in Section 5.3.
In Section 6, we produce a VB-groupoid associated to a quasi-principal 2-bundle, and study the parallel transport on the associated VB-groupoid along a lazy Haefliger path.
### Notations and conventions
Here, we fix some conventions and notations which we will follow throughout this paper.
We assume all our manifolds to be smooth, second countable and Hausdorff. We will denote the category of such manifolds by Man. For any smooth map \(f\colon M\to N,\) the differential at \(m\in M\) will be denoted as \(f_{*,m}\colon T_{m}M\to T_{f(m)}N.\) A smooth right (resp. left) action of a Lie group \(G\) on a smooth manifold \(P\) is denoted by \((p,g)\mapsto pg\) (resp. \((g,p)\mapsto gp\)), for \(g\in G\) and \(p\in P\). If the Lie group \(G\) acts on a manifold \(M\) such that the action is free and transitive, then we will call \(M\) a \(G\)-torsor, and the groupoid of \(G\)-torsors will be denoted by \(G\)-Tor. To make the concatenation of smooth paths in a manifold smooth, we will restrict ourselves to _paths with sitting instants_, i.e., smooth maps \(\alpha:[0,1]\to M\) to a manifold \(M\) such that there exists an \(\epsilon\in(0,1/2)\) satisfying \(\alpha(t)=\alpha(0)\) for \(t\in[0,\epsilon)\) and \(\alpha(t)=\alpha(1)\) for \(t\in(1-\epsilon,1]\), see [Definition 2.1, [51]]. We will use the notation \(PM\) to denote the set of smooth paths with sitting instants in the manifold \(M\). For any path \(\alpha\), the notation \(\alpha^{-1}\) denotes the path defined by \(t\mapsto\alpha(1-t)\) for all \(t\in[0,1]\).
Unless otherwise stated, for any category, we denote the source, target and unit maps by \(s,t\) and \(u\) respectively. For any object \(p\) in a category, \(1_{p}\) denotes the element \(u(p)\). We write the composition of a pair of morphisms \(f_{2},f_{1}\) as \(f_{2}\circ f_{1}\) when \(t(f_{1})=s(f_{2})\). In a groupoid, we denote the inverse map by \(\mathfrak{i}\), and for any morphism \(\gamma\), we shorten the notation by writing \(\gamma^{-1}\) instead of \(\mathfrak{i}(\gamma)\).
_Lie groupoids_ are groupoid objects in Man such that the source and target maps are surjective submersions. We use blackboard bold notation to denote a Lie groupoid; that is, \(\mathbb{E}\) for \([E_{1}\rightrightarrows E_{0}]\) and so on. A _morphism of Lie groupoids_ \(F:\mathbb{X}\rightarrow\mathbb{Y}\) is a functor such that both the object-level and morphism-level maps are smooth, and we denote them by \(F_{0}\colon X_{0}\to Y_{0}\) and \(F_{1}\colon X_{1}\to Y_{1}\), respectively. A _smooth natural transformation_ from a morphism of Lie groupoids \(F\colon\mathbb{X}\rightarrow\mathbb{Y}\) to another \(F^{\prime}\colon\mathbb{X}\rightarrow\mathbb{Y}\) is a natural transformation \(\eta\) between the underlying functors such that the associated map \(X_{0}\to Y_{1}\) is smooth; we denote it by \(\eta\colon F\Longrightarrow F^{\prime}\). For a Lie groupoid \(\mathbb{X}\), we denote the associated _tangent Lie groupoid_ by \(T\mathbb{X}=[TX_{1}\rightrightarrows TX_{0}]\), whose structure maps are the differentials of the respective structure maps of \(\mathbb{X}\). For a composable pair of morphisms \((\gamma_{2},X_{2}),(\gamma_{1},X_{1})\) in \(T\mathbb{X}\), the composition \((\gamma_{2},X_{2})\circ(\gamma_{1},X_{1})=\big{(}m(\gamma_{2},\gamma_{1}),m_{\ast(\gamma_{2},\gamma_{1})}(X_{2},X_{1})\big{)}\) will be denoted by \((\gamma_{2}\circ\gamma_{1},X_{2}\circ X_{1})\), where \(m\) is the composition map of the Lie groupoid \(\mathbb{X}\).
## 2. Some background materials
To make this paper self-readable, in this section, we briefly recall some notions that are either already well established in the existing literature or we have already introduced such notions in our earlier paper [11].
### Lie 2-groups and Lie crossed modules
Here, we recall the basics of Lie 2-groups, Lie crossed modules, and related notions. The material in this subsection is standard, and we refer to the following papers [7, 6, 5, 12, 14, 61] for further reading in these topics.
A _(strict) Lie 2-group_ is a Lie groupoid \(\mathbb{G}\), along with a morphism of Lie groupoids
\[\otimes:\mathbb{G}\times\mathbb{G}\rightarrow\mathbb{G}\]
inducing Lie group structures on both objects and morphisms. A _Lie crossed module_ is a 4-tuple \((G,H,\tau,\alpha)\), where \(G,H\) are Lie groups, \(\alpha:G\times H\to H\) is a smooth action of \(G\) on \(H\) by Lie group homomorphisms (that is, \(\alpha(g,-):H\to H\) is a Lie group homomorphism for each \(g\in G\)), and \(\tau:H\to G\) is a morphism of Lie groups, such that
\[\begin{split}&\tau(\alpha(g,h))=g\tau(h)g^{-1}\,\text{for all}\,(g,h)\in G\times H,\\ &\alpha(\tau(h),h^{\prime})=hh^{\prime}h^{-1}\,\text{for all}\,h,h^{ \prime}\in H.\end{split} \tag{2.1}\]
It is well known that one can identify a Lie 2-group with a Lie crossed module. In particular, given a Lie crossed module \((G,H,\tau,\alpha)\) the associated Lie 2-group is given by the Lie groupoid \(\mathbb{G}=[H\rtimes_{\alpha}G\rightrightarrows G]\), where \(H\rtimes_{\alpha}G\) denotes the semidirect product of Lie groups \(H\) and \(G\) with respect to the action \(\alpha\) of \(G\) on \(H\). We list the structure maps of this Lie 2-group below:
* the source map is given by \(s(h,g)=g\),
* the target map is given by \(t(h,g)=\tau(h)g\),
* the composition map is given by \(m((h_{2},g_{2}),(h_{1},g_{1}))=(h_{2}h_{1},g_{1})\),
* the identity map is given by \(1_{g}=(e,g)\),
* the inverse map is given by \(\mathfrak{i}(h,g)=(h^{-1},\tau(h)g)\),
* the group properties of \(H\rtimes_{\alpha}G\) are those of the standard semidirect product of the groups, that is the bi-functor \(\otimes:\mathbb{G}\times\mathbb{G}\to\mathbb{G}\) is defined as \[\begin{split}&\otimes_{0}:(g_{1},g_{2})\mapsto g_{1}g_{2},\\ &\otimes_{1}:((h_{2},g_{2}),(h_{1},g_{1}))\mapsto(h_{2}\alpha(g_{ 2},h_{1}),g_{2}g_{1}),\end{split}\] (2.2)
whereas the group inverse and the identity elements are respectively given as \((h,g)^{-1}=(\alpha(g^{-1},h^{-1}),g^{-1})\) and \((e,e)\). Note that we use the same notation \(e\) for identity elements in both \(H\) and \(G\), and the distinction should be made according to the context. On the other hand, given a Lie 2-group \(\mathbb{G}=[G_{1}\rightrightarrows G_{0}]\), the associated Lie crossed module is given by the 4-tuple \((G_{0},\ker(s),t|_{\ker(s)}:\ker(s)\to G_{0},\alpha:G_{0}\times\ker(s) \to\ker(s))\) where \(\ker(s)=\{\gamma\in G_{1}:s(\gamma)=1_{G_{0}}\}\) and the morphism \(\alpha:G_{0}\times\ker(s)\to\ker(s)\) is defined as \((a,\gamma)\mapsto 1_{a}\cdot\gamma\cdot 1_{a^{-1}}\). One can show that the above association defines a one-one correspondence between Lie 2-groups and Lie crossed modules.
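As a quick consistency check of these structure maps (a short verification we include for convenience), note that a pair \((h_{2},g_{2}),(h_{1},g_{1})\) is composable precisely when \(g_{2}=s(h_{2},g_{2})=t(h_{1},g_{1})=\tau(h_{1})g_{1}\), in which case

\[t\big{(}(h_{2},g_{2})\circ(h_{1},g_{1})\big{)}=\tau(h_{2}h_{1})\,g_{1}=\tau(h_{2})\,\tau(h_{1})g_{1}=\tau(h_{2})\,g_{2}=t(h_{2},g_{2}),\]

and the first relation in (2.1) makes \(\otimes_{1}\) compatible with the target map:

\[t\big{(}(h_{2},g_{2})\otimes(h_{1},g_{1})\big{)}=\tau\big{(}h_{2}\,\alpha(g_{2},h_{1})\big{)}\,g_{2}g_{1}=\tau(h_{2})\,g_{2}\tau(h_{1})g_{2}^{-1}\,g_{2}g_{1}=t(h_{2},g_{2})\,t(h_{1},g_{1}).\]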
We will denote the Lie 2-group associated to a Lie crossed module \((G,H,\tau,\alpha)\) as \([H\rtimes_{\alpha}G\rightrightarrows G]\).
Given a Lie 2-group \(\mathbb{G}=[G_{1}\rightrightarrows G_{0}]\), we can associate the Lie groupoid \(L(\mathbb{G})=[L(G_{1})\rightrightarrows L(G_{0})]\) whose structure maps are obtained by taking differentials of the structure maps of \(\mathbb{G}\) at the identity.
A _right action of a Lie 2-group \(\mathbb{G}=[G_{1}\rightrightarrows G_{0}]\) on a Lie groupoid \(\mathbb{X}=[X_{1}\rightrightarrows X_{0}]\)_ is given by a morphism of Lie groupoids \(\rho:\mathbb{X}\times\mathbb{G}\to\mathbb{X}\), such that it induces classical Lie group actions on both objects and morphisms. Suppose \(\mathbb{G}\) acts on a pair of Lie groupoids \(\mathbb{X}\) and \(\mathbb{Y}\). Then a morphism of Lie groupoids \(F:=(F_{1},F_{0}):\mathbb{X}\to\mathbb{Y}\) is said to be \(\mathbb{G}\)_-equivariant_ if \(F_{0}\) is \(G_{0}\)-equivariant and \(F_{1}\) is \(G_{1}\)-equivariant. A smooth natural transformation between two such functors \(\eta\colon F\Longrightarrow F^{\prime}\) is said to be a \(\mathbb{G}\)_-equivariant natural transformation_ if \(\eta(xg)=\eta(x)1_{g}\) for all \(x\in X_{0}\) and \(g\in G_{0}\).
### Fibered categories and pseudofunctors
We suggest [57] for readers interested in further reading on this topic.
A category \(\mathcal{E}\) equipped with a functor \(\pi:\mathcal{E}\to\mathcal{X}\) is called a _category over \(\mathcal{X}\)_. A morphism \(f:x\to y\) in \(\mathcal{E}\) is called _cartesian_ if for any morphism \(h:z\to y\) in \(\mathcal{E}\) and any morphism \(u:\pi(z)\to\pi(x)\) in \(\mathcal{X}\) with \(\pi(h)=\pi(f)\circ u\), there exists a unique morphism \(\tilde{u}:z\to x\) with \(\pi(\tilde{u})=u\) and \(f\circ\tilde{u}=h\). Observe that if both \(\mathcal{E}\) and \(\mathcal{X}\) are groupoids, then every morphism in \(\mathcal{E}\) is cartesian.
If \(\pi:\mathcal{E}\to\mathcal{X}\) has the property that for any morphism \(\gamma\) in \(\mathcal{X}\) and an element \(p\in\mathcal{E}\) such that \(\pi(p)=t(\gamma)\), there is a cartesian morphism \(\tilde{\gamma}\in\mathcal{E}\) satisfying \(\pi(\tilde{\gamma})=\gamma\) and \(t(\tilde{\gamma})=p\), then it is called a _fibered category over \(\mathcal{X}\)_. For each object \(x\in\mathcal{X}\), the _fibre \(\pi^{-1}(x)\) of \(\pi\) over \(x\)_ is defined as the subcategory of \(\mathcal{E}\) whose objects are the objects \(p\) of \(\mathcal{E}\) such that \(\pi(p)=x\) and whose morphisms are the morphisms \(\delta\) of \(\mathcal{E}\) such that \(\pi(\delta)=1_{x}\).
A _cleavage_ on the fibered category \(\pi:\mathcal{E}\to\mathcal{X}\) consists of a class \(\kappa\) of cartesian morphisms in \(\mathcal{E}\) such that for each morphism \(\gamma:x\to y\) in \(\mathcal{X}\) and each object \(p\in\pi^{-1}(y)\), there exists a unique morphism in \(\kappa\) with target \(p\), mapping to \(\gamma\) in \(\mathcal{X}\). If the cleavage contains all the identities and is closed under composition, then it is called a _splitting cleavage_. It is well known that fibered categories over \(\mathcal{X}\) equipped with a splitting cleavage are in one-one correspondence with category-valued contravariant functors over \(\mathcal{X}\).
The correspondence naturally extends to a one-one correspondence (due to Grothendieck) between fibered categories over \(\mathcal{X}\) equipped with cleavage and _pseudofunctors over \(\mathcal{X}\)_ (see **sections 3.1.2** and **3.1.3** of [57] for the proof). We define a pseudofunctor below:
**Definition 2.1**.: A _pseudofunctor \(\mathcal{F}\colon\mathcal{X}^{\mathrm{op}}\to\mathrm{Cat}\) over a category \(\mathcal{X}\)_ consists of the following data:
1. a category \(\mathcal{F}(x)\) for each \(x\in\mathcal{X}\),
2. a functor \(\gamma^{*}\colon\mathcal{F}(y)\to\mathcal{F}(x)\) for each morphism \(x\xrightarrow{\gamma}y\) in \(\mathcal{X}\),
3. for each object \(x\) in \(\mathcal{X}\), we have a natural isomorphism \(I_{x}\colon\operatorname{id}_{x}^{*}\Longrightarrow\operatorname{id}_{\mathcal{F}(x)}\),
4. for each pair of composable morphisms \(x\xrightarrow{\gamma_{1}}y\xrightarrow{\gamma_{2}}z\), we have a natural isomorphism \(\alpha_{\gamma_{1},\gamma_{2}}:\gamma_{1}^{*}\gamma_{2}^{*}\to(\gamma_{2} \gamma_{1})^{*}\), where the adjacency of \(\gamma_{2},\gamma_{1}\) denotes the composition.
These data satisfy the following coherence laws:
1. If \(x\xrightarrow{\gamma}y\) is a morphism in \(\mathcal{X}\), and \(p\) is an object in \(\mathcal{F}(y)\), then we have \[\begin{split}\alpha_{\operatorname{id}_{x},\gamma}(p)&=I_{x}(\gamma^{*}(p))\colon\operatorname{id}_{x}^{*}(\gamma^{*}(p))\Longrightarrow\gamma^{*}(p)\\ \alpha_{\gamma,\operatorname{id}_{y}}(p)&=\gamma^{*}(I_{y}(p))\colon\gamma^{*}\operatorname{id}_{y}^{*}(p)\Longrightarrow\gamma^{*}(p).\end{split}\] (2.3)
2. For composable morphisms \(x\xrightarrow{\gamma_{3}}y\xrightarrow{\gamma_{2}}z\xrightarrow{\gamma_{1}}w\) and any object \(p\) in \(\mathcal{F}(w)\), the following diagram is commutative: \[\begin{split}\gamma_{3}^{*}\gamma_{2}^{*}\gamma_{1}^{*}(p)&\xrightarrow{\ \alpha_{\gamma_{3},\gamma_{2}}(\gamma_{1}^{*}(p))\ }(\gamma_{2}\gamma_{3})^{*}\gamma_{1}^{*}(p)\\ \gamma_{3}^{*}\big{(}\alpha_{\gamma_{2},\gamma_{1}}(p)\big{)}\Big{\downarrow}&\qquad\qquad\Big{\downarrow}\alpha_{\gamma_{2}\gamma_{3},\gamma_{1}}(p)\\ \gamma_{3}^{*}(\gamma_{1}\gamma_{2})^{*}(p)&\xrightarrow{\ \alpha_{\gamma_{3},\gamma_{1}\gamma_{2}}(p)\ }(\gamma_{1}\gamma_{2}\gamma_{3})^{*}(p)\end{split}\] (2.4)
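When every \(I_{x}\) and every \(\alpha_{\gamma_{1},\gamma_{2}}\) is an identity, the above data reduce to the strict equalities
\[\operatorname{id}_{x}^{*}=\operatorname{id}_{\mathcal{F}(x)}\qquad\text{and}\qquad\gamma_{1}^{*}\gamma_{2}^{*}=(\gamma_{2}\gamma_{1})^{*},\]
i.e., to an ordinary contravariant functor \(\mathcal{X}^{\mathrm{op}}\to\mathrm{Cat}\); this degenerate case corresponds to fibered categories equipped with a splitting cleavage.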
The construction of a fibered category from a pseudofunctor is usually known as the _Grothendieck construction_ in literature (see **Chapter 10** of [34]).
### Diffeological spaces and some of their properties
This subsection will briefly review the notion of a diffeological space and some of its standard properties. Interested readers can find more details on diffeology in [33] and [3].
**Definition 2.2**.: A _diffeology_ on a set \(S\) is a collection of functions \(D_{S}\subseteq\{p\colon U\to S:U\subseteq\mathbb{R}^{n}\), where \(U\) is an open subset of \(\mathbb{R}^{n},n\in\mathbb{N}\}\) which satisfy the following conditions:
1. Every constant function is in \(D_{S}\);
2. If \(V\subseteq\mathbb{R}^{n}\) is open, \(p\colon U\to S\) is in \(D_{S}\) and \(f\colon V\to U\) is a smooth map, then \(p\circ f\colon V\to S\) is in \(D_{S}\);
3. If \(\{U_{i}\}_{i\in I}\) is an open cover of \(U\subseteq\mathbb{R}^{n}\) and \(p\colon U\to S\) is a function such that \(p|_{U_{i}}\colon U_{i}\to S\) is in \(D_{S}\) for all \(i\in I\), then \(p\colon U\to S\) is in \(D_{S}\).
The pair \((S,D_{S})\) is called a _diffeological space_ and the elements of \(D_{S}\) are called _plots_.
**Definition 2.3**.: A _map of diffeological spaces_ from a diffeological space \((X,D_{X})\) to a diffeological space \((Y,D_{Y})\) is a map of sets \(f\colon X\to Y\), such that for any \(p\in D_{X}\), \(f\circ p\) is an element of \(D_{Y}\).
The collection of diffeological spaces and the maps of diffeological spaces between them naturally forms a category, which we denote by **Diffeol**.
**Example 2.1** (Smooth manifold).: Any smooth manifold \(M\) has a natural diffeological space structure, given by \(D_{M}:=\{p\colon U\to M:U\) is an open subset of \(\sqcup_{n=0}^{\infty}\mathbb{R}^{n}\) and \(p\) is smooth.}. Any smooth map \(f\colon M\to N\) between manifolds is a map of diffeological spaces \(f\colon(M,D_{M})\to(N,D_{N})\).
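It is instructive to verify the three axioms of Definition 2.2 in this basic example: every constant map into \(M\) is smooth; the composite \(p\circ f\) of a smooth plot \(p\colon U\to M\) with a smooth map \(f\colon V\to U\) is smooth; and a function \(p\colon U\to M\) which is smooth on every member of an open cover of \(U\) is smooth, since smoothness is a local condition. Hence \(D_{M}\) is indeed a diffeology on \(M\).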
**Example 2.2**.: Given a diffeological space \((X,D_{X})\) and a subset \(S\subseteq X\), the _subspace diffeology_\(D_{S}\) on \(S\) is defined as the set \(D_{S}:=\{(p\colon U\to X)\in D_{X}:p(U)\subseteq S\}\).
**Example 2.3** (Path space diffeology).: For any smooth manifold \(M\), the set of paths with sitting instants \(PM\) is a diffeological space, equipped with diffeology \(D_{PM}:=\{p\colon U\to PM:\bar{p}\colon U\times[0,1]\to M,(u,x)\mapsto p(u)(x)\), is smooth.}. \(D_{PM}\) is called the _path space diffeology_. The evaluation maps \(ev_{0},ev_{1}\colon PM\to M\) at \(0\) and \(1\) respectively, are maps of diffeological spaces.
**Example 2.4** (Fibre product diffeology).: Given maps of diffeological spaces \(f\colon(Y,D_{Y})\to(X,D_{X})\) and \(g\colon(Z,D_{Z})\to(X,D_{X})\), the fibre product \(Y\times_{f,X,g}Z\) is a diffeological space with the _fibre product diffeology_ \(D_{Y\times_{f,X,g}Z}:=\{(p_{Y},p_{Z})\in D_{Y}\times D_{Z}:f\circ p_{Y}=g\circ p_{Z}\}\). Note that the projection maps are maps of diffeological spaces.
**Example 2.5** (Quotient diffeology ).: Given a diffeological space \((X,D_{X})\) and an equivalence relation \(\sim\) on \(X\), the quotient \(q\colon X\to\frac{X}{\sim}\) induces a diffeological structure with the diffeology as given below **[Construction A.15, [15]]**:
\(D_{\frac{X}{\sim}}:=\{p\colon U\to\frac{X}{\sim}\colon\,U\subseteq\mathbb{R}^{n} \text{ is open},n\in\mathbb{N},p\text{ is a function such that for every }u\in U\), there is an open neighbourhood \(V\) of \(u\) in \(U\) and a plot \(\bar{p}\colon V\to X\) with \(q\circ\bar{p}=p|_{V}\}\). \(D_{\frac{X}{\sim}}\) is called the _quotient diffeology_. Then, the quotient map becomes a map of diffeological spaces.
**Example 2.6**.: Let \((S_{i},D_{S_{i}})_{i\in I}\) be an arbitrary family of diffeological spaces. Then the disjoint union \(S=\sqcup_{i\in I}S_{i}\) is a diffeological space with the diffeology \(D:=\{p\colon U\to S:U\subseteq\mathbb{R}^{n}\text{ is open},n\in\mathbb{N},p\text{ is a function such that for any }x\in U\) there exists an open neighbourhood \(U_{x}\) of \(x\) and an index \(i\in I\), with \(p|_{U_{x}}\in D_{S_{i}}.\}\). The diffeology \(D\) is called the _sum diffeology_ on the family \(\{S_{i}\}_{i\in I}\), (see **Section 1.39** of [33]).
**Lemma 2.7**.: Let \(q\colon(A,D_{A})\to(B,D_{B})\) be a quotient map between two diffeological spaces and \((C,D_{C})\) another diffeological space. Then a map \(f\colon(B,D_{B})\to(C,D_{C})\) is a map of diffeological spaces if and only if for any plot \(p\colon U\to A\), the composite \(f\circ q\circ p\in D_{C}\).
Proof.: See **Lemma A.16** in [15].
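For the reader's convenience, we sketch the argument, assuming (as in Example 2.5) that \(D_{B}\) is the quotient diffeology induced by \(q\). If \(f\) is a map of diffeological spaces, then \(q\circ p\in D_{B}\) for every \(p\in D_{A}\), and hence \(f\circ q\circ p\in D_{C}\). Conversely, let \(r\colon U\to B\) be any plot of \(B\). By the definition of the quotient diffeology, every \(u\in U\) admits an open neighbourhood \(V\subseteq U\) and a plot \(\bar{p}\colon V\to A\) with \(q\circ\bar{p}=r|_{V}\), so that \(f\circ r|_{V}=f\circ q\circ\bar{p}\in D_{C}\); the locality axiom of Definition 2.2 then gives \(f\circ r\in D_{C}\).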
### A principal Lie 2-group bundle over a Lie groupoid
In [11], we introduced the notion of a principal Lie 2-group bundle over a Lie groupoid and characterized a particular subfamily of them. In this subsection, we give a quick review of these notions.
**Definition 2.4** (Definition 3.6, Definition 3.8, [11]).: Let \(\mathbb{G}\) be a Lie 2-group.
1. A _principal \(\mathbb{G}\)-bundle over a Lie groupoid_\(\mathbb{X}\) is given by a morphism of Lie groupoids \(\pi:\mathbb{E}\to\mathbb{X}\) along with a right action \(\rho:\mathbb{E}\times\mathbb{G}\to\mathbb{E}\) of Lie 2-group \(\mathbb{G}\) on \(\mathbb{E}\) such that
\(\pi_{0}:E_{0}\to X_{0}\) and \(\pi_{1}:E_{1}\to X_{1}\) are a principal \(G_{0}\)-bundle and a principal \(G_{1}\)-bundle, respectively.
2. A _morphism of principal \(\mathbb{G}\)-bundles from \(\pi:\mathbb{E}\to\mathbb{X}\) to \(\pi^{\prime}:\mathbb{E}^{\prime}\to\mathbb{X}\)_ is given by a smooth \(\mathbb{G}\)-equivariant morphism \(F:\mathbb{E}\to\mathbb{E}^{\prime}\) such that \(\pi^{\prime}\circ F=\pi\), i.e., the evident triangle of morphisms of Lie groupoids commutes on the nose.
The Lie 2-group \(\mathbb{G}\) above is said to be the _structure 2-group of the principal \(\mathbb{G}\)-bundle._ Several examples of such principal 2-bundles have been given in [11].
In particular, when the structure 2-group is a discrete 2-group \([G\rightrightarrows G]\), a principal \([G\rightrightarrows G]\)-bundle over a Lie groupoid \(\mathbb{X}\) coincides with a principal \(G\)-bundle over \(\mathbb{X}\) (as in [38, 55, 48]), as we showed in **Example 3.14** of [11].
**Definition 2.5** (Definition 2.2,[38]).: For a Lie group \(G\), a _principal \(G\)-bundle over a Lie groupoid \(\mathbb{X}\)_ is given by a principal \(G\)-bundle \(\pi\colon E_{G}\to X_{0}\) along with a smooth map \(\mu\colon s^{*}E_{G}:=(X_{1}\times_{s,X_{0},\pi}E_{G})\to E_{G}\), which satisfy the following conditions:
(i) \(\mu(1_{\pi(p)},p)=p\) for all \(p\in E_{G}\),
(ii) for each \((\gamma,p)\in s^{*}E_{G}\), we have \(\big{(}\gamma,\mu(\gamma,p)\big{)}\in t^{*}E_{G}\),
(iii) if \(\gamma_{2},\gamma_{1}\in X_{1}\) such that \(t(\gamma_{1})=s(\gamma_{2})\) and \((\gamma_{1},p)\in s^{*}E_{G}\), then \(\mu(\gamma_{2}\circ\gamma_{1},p)=\mu(\gamma_{2},\mu(\gamma_{1},p))\),
(iv) for all \(p\in E_{G},g\in G\) and \(\gamma\in X_{1}\) we have \(\mu(\gamma,p)g=\mu(\gamma,pg)\).
In particular, conditions (i)-(iii) say that \(\mu\) is an _action_ of \(\mathbb{X}\) on \(E_{G}\), and condition (iv) says that this action commutes with the right \(G\)-action on \(E_{G}\). We denote a principal \(G\)-bundle over the Lie groupoid \(\mathbb{X}=[X_{1}\rightrightarrows X_{0}]\) by \(\big{(}\pi\colon E_{G}\to X_{0},\mu,\mathbb{X}\big{)}\). Given a Lie group \(G\) and a Lie groupoid \(\mathbb{X}\), the collection of principal \(G\)-bundles over \(\mathbb{X}\) forms a groupoid, which we denote by \(\operatorname{Bun}(\mathbb{X},G)\). An element in \(\operatorname{Hom}\bigl{(}(\pi\colon E_{G}\to X_{0},\mu,\mathbb{X}),(\pi^{\prime}\colon E_{G}^{\prime}\to X_{0},\mu^{\prime},\mathbb{X})\bigr{)}\) is a morphism of principal \(G\)-bundles \(f\colon E_{G}\to E_{G}^{\prime}\) over \(X_{0}\) such that \(f(\mu(\gamma,p))=\mu^{\prime}(\gamma,f(p))\) for all \((\gamma,p)\in s^{*}E_{G}\).
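As an immediate consequence of conditions (i) and (iii), every arrow of \(\mathbb{X}\) acts invertibly: for \((\gamma,p)\in s^{*}E_{G}\),
\[\mu\big{(}\gamma^{-1},\mu(\gamma,p)\big{)}=\mu(\gamma^{-1}\circ\gamma,p)=\mu(1_{s(\gamma)},p)=p,\]
so that, together with (iv), \(\mu(\gamma,-)\colon\pi^{-1}(s(\gamma))\to\pi^{-1}(t(\gamma))\) is a \(G\)-equivariant diffeomorphism.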
For a Lie crossed module \((G,H,\tau,\alpha)\) and a principal \(G\)-bundle over a Lie groupoid \(\mathbb{X}\), we constructed in [Proposition 3.18, [11]] a particular class of principal \([H\rtimes_{\alpha}G\rightrightarrows G]\)-bundles over \(\mathbb{X}\), called _decorated principal 2-bundles_, and studied them extensively. Below, we briefly recall the construction:
**Proposition 2.8** (Proposition 3.18,[11]).: Given a Lie crossed module \((G,H,\tau,\alpha)\) and a principal \(G\)-bundle \(\big{(}\pi\colon E_{G}\to X_{0},\mu,\mathbb{X}\big{)}\) over \(\mathbb{X}\) we have the following:
1. the manifolds \((s^{*}E_{G})^{\mathrm{dec}}:=s^{*}E_{G}\times H\) and \(E_{G}\) determines a Lie groupoid \([(s^{*}E_{G})^{\mathrm{dec}}\rightrightarrows E_{G}]\) whose structure maps are given as 1. source map \(s\colon\,(\gamma,p,h)\mapsto p\), 2. target map \(t\colon\,(\gamma,p,h)\mapsto\mu(\gamma,p)\tau(h^{-1})\), 3. composition map \(m\colon\big{(}(\gamma_{2},p_{2},h_{2}),(\gamma_{1},p_{1},h_{1})\big{)} \mapsto(\gamma_{2}\circ\gamma_{1},p_{1},h_{2}h_{1})\), 4. unit map \(u:p\mapsto(1_{\pi(p)},p,e)\), 5. inverse map \(\mathfrak{i}\colon\,\big{(}\gamma,p,h\big{)}\mapsto(\gamma^{-1},\mu(\gamma,p) \tau(h^{-1}),h^{-1}\big{)}\).
2. The Lie groupoid \(\mathbb{E}^{\rm dec}:=[(s^{*}E_{G})^{\rm dec}\rightrightarrows E_{G}]\) forms a principal \(\mathbb{G}\)-bundle over \(\mathbb{X}\) such that the action of the Lie \(2\)-group \([H\rtimes_{\alpha}G\rightrightarrows G]\) and bundle projection are respectively defined as \[\rho\colon\mathbb{E}^{\rm dec}\times\mathbb{G} \rightarrow\mathbb{E}^{\rm dec}\] (2.5) \[(p,g)\mapsto p\,g\] \[\big{(}(\gamma,p,h),(h^{\prime},g)\big{)} \mapsto\big{(}\gamma,pg,\alpha_{g^{-1}}(h^{\prime-1}\,h)\big{)},\] and \[\pi^{\rm dec}\colon\mathbb{E}^{\rm dec} \rightarrow\mathbb{X}\] (2.6) \[p \mapsto\pi(p),\] \[\big{(}\gamma, p,h\big{)} \mapsto\gamma.\]
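Before proceeding, it is worth recording the small computation which makes the target map in (1) compatible with the composition (a check using only Definition 2.5): if \((\gamma_{2},p_{2},h_{2})\) and \((\gamma_{1},p_{1},h_{1})\) are composable, then \(p_{2}=\mu(\gamma_{1},p_{1})\tau(h_{1}^{-1})\), and hence
\[t(\gamma_{2},p_{2},h_{2})=\mu(\gamma_{2},p_{2})\tau(h_{2}^{-1})=\mu\big{(}\gamma_{2},\mu(\gamma_{1},p_{1})\big{)}\tau(h_{1}^{-1})\tau(h_{2}^{-1})=\mu(\gamma_{2}\circ\gamma_{1},p_{1})\tau\big{(}(h_{2}h_{1})^{-1}\big{)}=t\big{(}\gamma_{2}\circ\gamma_{1},p_{1},h_{2}h_{1}\big{)},\]
where the second equality uses condition (iv) of Definition 2.5 and the third uses condition (iii).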
The principal \([H\rtimes_{\alpha}G\rightrightarrows G]\)-bundle \(\pi^{\rm dec}\colon\mathbb{E}^{\rm dec}\rightarrow\mathbb{X}\) is said to be the _decorated principal \(\mathbb{G}\)-bundle associated to \((\pi\colon E_{G}\to X_{0},\mu,\mathbb{X})\) and the Lie crossed module \((G,H,\tau,\alpha)\)_. Such bundles can be characterized as principal \(2\)-bundles admitting _categorical connections_ ([11], **Proposition 3.25**), a notion we recall below:
**Definition 2.6** (Definition 3.21,[11]).: Let \(\mathbb{G}\) be a Lie \(2\)-group and \(\pi\colon\mathbb{E}\rightarrow\mathbb{X}\) a principal \(\mathbb{G}\)-bundle over \(\mathbb{X}\). A _categorical connection_\(\mathcal{C}\) on \(\pi:\mathbb{E}\rightarrow\mathbb{X}\) is defined as a smooth map \(\mathcal{C}\colon s^{*}E_{0}\to E_{1}\) which satisfies the following conditions:
(i) \(s(\mathcal{C}(\gamma,p))=p\) for all \((\gamma,p)\in s^{*}E_{0}\);
(ii) \(\pi_{1}(\mathcal{C}(\gamma,p))=\gamma\) for all \((\gamma,p)\in s^{*}E_{0}\);
(iii) \(\mathcal{C}(\gamma,pg)=\mathcal{C}(\gamma,p)1_{g}\) for all \((\gamma,p)\in s^{*}E_{0}\) and \(g\in G_{0}\);
(iv) \(\mathcal{C}(1_{x},p)=1_{p}\) for any \(x\in X_{0}\) and \(p\in\pi^{-1}(x)\);
(v) if \((\gamma_{2},p_{2}),(\gamma_{1},p_{1})\in s^{*}E_{0}\) such that \(s(\gamma_{2})=t(\gamma_{1})\) and \(p_{2}=t\big{(}\mathcal{C}(\gamma_{1},p_{1})\big{)}\), then \(\mathcal{C}(\gamma_{2}\circ\gamma_{1},p_{1})=\mathcal{C}(\gamma_{2},p_{2})\circ\mathcal{C}(\gamma_{1},p_{1})\).
**Example 2.9** (Corollary 3.26, [11]).: Let \(\mathbb{G}\) be a Lie \(2\)-group. Any principal \(\mathbb{G}\)-bundle \(\pi\colon\mathbb{E}\rightarrow[M\rightrightarrows M]\) over a discrete Lie groupoid \([M\rightrightarrows M]\) admits a unique categorical connection given by \((1_{x},p)\mapsto 1_{p}\) for \(p\in E_{0},x=\pi(p)\).
### Connection structures on a principal \(2\)-bundle over a Lie groupoid
In this subsection, we briefly recall the connection structures on a principal \(2\)-bundle over a Lie groupoid (Definition 2.4), that we introduced in [11].
**Proposition 2.10** (Proposition 3.2,[11]).: For a Lie \(2\)-group \(\mathbb{G}\), we have the following:
1. There is an action of \(\mathbb{G}\) on the Lie groupoid \(L(\mathbb{G})=[L(G_{1})\rightrightarrows L(G_{0})]\) given by the adjoint action.
2. Suppose there is an action of \(\mathbb{G}\) on a Lie groupoid \(\mathbb{X}\). Then there is an action of \(\mathbb{G}\) on the tangent Lie groupoid \(T\mathbb{X}:=[TX_{1}\rightrightarrows TX_{0}]\), given by the differential of the action.
**Definition 2.7** (Definition 5.4,[11] ).: Let \(\mathbb{G}=[G_{1}\rightrightarrows G_{0}]\) be a Lie \(2\)-group, and \(\mathbb{E}=[E_{1}\rightrightarrows E_{0}]\) a Lie groupoid. An \(L(\mathbb{G})\)_-valued \(1\)-form on the Lie groupoid \(\mathbb{E}\)_ is a morphism of Lie groupoids \(\omega:=(\omega_{1},\omega_{0})\colon T\mathbb{E}\to L(\mathbb{G})\) such that \(\omega_{i}\) is an \(L(G_{i})\)-valued differential \(1\)-form on \(E_{i}\), for \(i\in\{0,1\}.\) If \(\mathbb{G}\) acts on \(\mathbb{E}\) and \(\omega\colon T\mathbb{E}\to L(\mathbb{G})\) is \(\mathbb{G}\)-equivariant with respect to the actions defined in Proposition 2.10, then \(\omega\) is called a \(\mathbb{G}\)_-equivariant \(1\)-form_.
**Definition 2.8** (Proposition 5.11, [11]).: For a Lie 2-group \(\mathbb{G}\), let \(\pi:\mathbb{E}\to\mathbb{X}\) be a principal \(\mathbb{G}\)-bundle over a Lie groupoid \(\mathbb{X}\). A \(\mathbb{G}\)-equivariant 1-form \(\omega:T\mathbb{E}\to L(\mathbb{G})\) on \(\mathbb{E}\) is defined as a _strict connection_ (resp. _semistrict connection_) if the diagram of morphisms of Lie groupoids expressing \(\omega\circ\delta=\mathrm{pr}_{2}\)
commutes on the nose (resp. up to a \(\mathbb{G}\)-equivariant, fibre-wise linear natural isomorphism). Here, \(\delta\colon\mathbb{E}\times L(\mathbb{G})\to T\mathbb{E}\) and \(\mathrm{pr}_{2}\colon\mathbb{E}\times L(\mathbb{G})\to L(\mathbb{G})\) are, respectively, the functor induced by the vertical vector fields and the second projection functor.
As we see next, these connection structures behave well with the pullback along a morphism of principal 2-bundles over a Lie groupoid.
**Example 2.11** (Lemma 5.27, [11]).: Given a Lie 2-group \(\mathbb{G}\) and a Lie groupoid \(\mathbb{X}\), let \(\pi\colon\mathbb{E}\to\mathbb{X}\) and \(\pi^{\prime}\colon\mathbb{E}^{\prime}\to\mathbb{X}\) be a pair of principal \(\mathbb{G}\)-bundles over \(\mathbb{X}\), and let \(F:=(F_{1},F_{0}):\mathbb{E}\to\mathbb{E}^{\prime}\) be a morphism of principal \(\mathbb{G}\)-bundles over \(\mathbb{X}\). If \(\omega:=(\omega_{1},\omega_{0}):T\mathbb{E}^{\prime}\to L(\mathbb{G})\) is a strict connection on \(\mathbb{E}^{\prime}\), then \(F^{*}\omega:=(F_{1}^{*}\omega_{1},F_{0}^{*}\omega_{0}):T\mathbb{E}\to L(\mathbb{G})\) is a strict connection on the principal \(\mathbb{G}\)-bundle \(\pi\colon\mathbb{E}\to\mathbb{X}\).
We call \(F^{*}\omega\) above as the _pullback connection of \(\omega\) along \(F\)_.
**Example 2.12** (**Example 5.16**, [11]).: For a Lie group \(G\), let \(P\to M\) be a classical principal \(G\)-bundle over a manifold \(M.\) Then a connection 1-form \(\omega\) on \(P\to M\) defines a strict connection 1-form \((\omega,\omega)\) on the principal Lie 2-group \([G\rightrightarrows G]\)-bundle \([P\rightrightarrows P]\to[M\rightrightarrows M]\) over the Lie groupoid \([M\rightrightarrows M]\).
**Example 2.13** (**Proposition 5.24**, [11]).: Let \(\big{(}\pi\colon E_{G}\to X_{0},\mu,\mathbb{X}\big{)}\) be a principal \(G\)-bundle over the Lie groupoid \(\mathbb{X}\) and \(\omega\) a connection on the principal \(G\)-bundle \(\pi\colon E_{G}\to X_{0}\) such that \(s^{*}\omega=t^{*}\omega\). Then, for a Lie crossed module \((G,H,\tau,\alpha)\), the pair \((\omega^{\mathrm{dec}},\omega)\) is a strict connection 1-form on principal \([H\rtimes_{\alpha}G\rightrightarrows G]\)-bundle \(\pi^{\mathrm{dec}}\colon\mathbb{E}^{\mathrm{dec}}\to\mathbb{X}\) (Proposition 2.8), where \(\omega^{\mathrm{dec}}\) is defined as \(\omega^{\mathrm{dec}}(\gamma,\,p,h)=\mathrm{ad}_{(h,e)}\big{(}(s^{*}\omega)( \gamma,p,h)\big{)}-\Theta_{h}\), and \(\Theta_{h}\) is the Maurer-Cartan form on \(H\).
We refer to our previous paper [11] for several other examples of principal Lie 2-group bundles over Lie groupoids and connections.
**Remark 2.14**.: Originally, in [**Definition 5.1**, [11]], strict and semistrict connections on a principal 2-bundle over a Lie groupoid were described in terms of splittings of a short exact sequence of VB-groupoids called the _Atiyah sequence associated to the principal 2-bundle_; this description is equivalent to Definition 2.8 [**Proposition 5.11**, [11]].
## 3. Quasi-principal 2-bundles and their characterizations
In this section, we introduce a _quasi-principal 2-bundle over a Lie groupoid_ (Definition 3.1) and a _pseudo-principal Lie crossed module-bundle over a Lie groupoid_ (Definition 3.4). We show that the respective categories are equivalent (Theorem 3.12) via a Lie 2-group torsor version of the classical Grothendieck construction (Section 2.2). Consequently, we also extend a class of principal 2-bundles over a Lie groupoid to bundles defined over a differentiable stack (Section 3.5).
### Quasi-principal 2-bundles
Let \(\mathbb{G}:=[G_{1}\rightrightarrows G_{0}]\) be a Lie 2-group. Given a principal \(\mathbb{G}\)-bundle \(\pi\colon\mathbb{E}\to\mathbb{X}\) over a Lie groupoid \(\mathbb{X}\), there is a canonical morphism \(P\colon E_{1}\to s^{*}E_{0}\) of principal bundles, from the principal \(G_{1}\)-bundle \(\pi_{1}\colon E_{1}\to X_{1}\) to the pull-back principal \(G_{0}\)-bundle \(\pi_{0}^{*}\colon s^{*}E_{0}\to X_{1}\), given as \(\delta\mapsto(\pi_{1}(\delta),s(\delta)).\) Adopting the same notation as above, we define the following:
**Definition 3.1**.: A _quasi connection_ on a principal \(\mathbb{G}\)-bundle \(\pi:\mathbb{E}\to\mathbb{X}\) is defined as a smooth section \(\mathcal{C}:s^{*}E_{0}\to E_{1}\) of the morphism of principal bundles \(P:E_{1}\to s^{*}E_{0}\), such that \(\mathcal{C}\) is itself a morphism of principal bundles over \(X_{1}\) along the unit map \(u\colon G_{0}\to G_{1}\). We call the pair \((\pi:\mathbb{E}\to\mathbb{X},\mathcal{C})\) a _quasi-principal \(\mathbb{G}\)-bundle over \(\mathbb{X}\)_.
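Unwinding the definition: the section condition \(P\circ\mathcal{C}=\operatorname{id}_{s^{*}E_{0}}\) amounts to
\[\pi_{1}\big{(}\mathcal{C}(\gamma,p)\big{)}=\gamma\quad\text{and}\quad s\big{(}\mathcal{C}(\gamma,p)\big{)}=p\qquad\text{for all }(\gamma,p)\in s^{*}E_{0},\]
while being a morphism of principal bundles along \(u\colon G_{0}\to G_{1}\) amounts to \(\mathcal{C}(\gamma,pg)=\mathcal{C}(\gamma,p)1_{g}\) for all \(g\in G_{0}\). These are precisely conditions (i)-(iii) of Definition 2.6, so a quasi connection is a categorical connection stripped of the unitality and compositionality conditions (iv) and (v).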
An analogous notion to a quasi connection in the setup of VB-groupoids can be found in [23] and [18]. One main distinguishing feature of our setup is the Lie 2-group equivariance. In fact, both of these definitions are special cases of a "cleavage" (a smooth version of the one mentioned in Section 2.2) in the setup of Lie groupoid fibrations [17].
The following proposition is obvious:
**Proposition 3.1**.: Let \(\pi\colon\mathbb{E}\to\mathbb{X}\) be a principal \(\mathbb{G}\)-bundle over a Lie groupoid \(\mathbb{X}\). Every categorical connection \(\mathcal{C}\colon s^{*}E_{0}\to E_{1}\) is a quasi connection. Conversely, any quasi connection \(\mathcal{C}\colon s^{*}E_{0}\to E_{1}\), which satisfies the following two properties:
(i) \(\mathcal{C}(1_{x},p)=1_{p}\) for any \(x\in X_{0}\) and \(p\in\pi^{-1}(x)\),
(ii) if \((\gamma_{2},p_{2}),(\gamma_{1},p_{1})\in s^{*}E_{0}\) such that \(s(\gamma_{2})=t(\gamma_{1})\) and \(p_{2}=t\big{(}\mathcal{C}(\gamma_{1},p_{1})\big{)}\), then \(\mathcal{C}(\gamma_{2}\circ\gamma_{1},p_{1})=\mathcal{C}(\gamma_{2},p_{2})\circ\mathcal{C}(\gamma_{1},p_{1})\),
is a categorical connection (Definition 2.6).
**Definition 3.2**.: A quasi connection \(\mathcal{C}\colon s^{*}E_{0}\to E_{1}\) satisfying (i) of Proposition 3.1 will be called a _unital connection_, and a principal 2-bundle equipped with a unital connection will be called a _unital-principal 2-bundle_. In the same way, a principal 2-bundle equipped with a categorical connection will be called a _categorical-principal 2-bundle_.
We do not notationally distinguish between quasi-, unital-, or categorical-principal 2-bundles.
It is evident that quasi-principal 2-bundles over a Lie groupoid form a groupoid:
**Proposition 3.2**.: Given a Lie 2-group \(\mathbb{G}\) and a Lie groupoid \(\mathbb{X}\), the category \(\operatorname{Bun}_{\operatorname{quasi}}(\mathbb{X},\mathbb{G})\), whose objects are quasi-principal \(\mathbb{G}\)-bundles \((\pi\colon\mathbb{E}\to\mathbb{X},\mathcal{C})\) over \(\mathbb{X}\) and whose arrows from \((\pi\colon\mathbb{E}\to\mathbb{X},\mathcal{C})\) to \((\pi^{\prime}\colon\mathbb{E}^{\prime}\to\mathbb{X},\mathcal{C}^{\prime})\) are morphisms of principal \(\mathbb{G}\)-bundles \(F\colon\mathbb{E}\to\mathbb{E}^{\prime}\) satisfying \(F_{1}(\mathcal{C}(\gamma,p))=\mathcal{C}^{\prime}(\gamma,F_{0}(p))\) for all \((\gamma,p)\in s^{*}E_{0}\), forms a groupoid. Similarly, unital-principal \(\mathbb{G}\)-bundles and categorical-principal \(\mathbb{G}\)-bundles over \(\mathbb{X}\) form the respective groupoids \(\operatorname{Bun}_{\operatorname{unital}}(\mathbb{X},\mathbb{G})\) and \(\operatorname{Bun}_{\operatorname{Cat}}(\mathbb{X},\mathbb{G})\).
In the same spirit, we propose a weaker version of Definition 2.5:
**Definition 3.3**.: For a Lie group \(G\), a _quasi-principal \(G\)-bundle over a Lie groupoid \(\mathbb{X}\)_ is given by a principal \(G\)-bundle \(\pi\colon E_{G}\to X_{0}\) along with a smooth map \(\mu\colon s^{*}E_{G}\to E_{G}\) which satisfy the following conditions:
1. for each \((\gamma,p)\in s^{*}E_{G}\), we have \(\big{(}\gamma,\mu(\gamma,p)\big{)}\in X_{1}\times_{t,X_{0},\pi}E_{G}\),
2. for all \(p\in E_{G},g\in G\) and \(\gamma\in X_{1}\) we have \(\mu(\gamma,p)g=\mu(\gamma,pg)\).
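Thus, in comparison with Definition 2.5, only conditions (ii) and (iv) there are retained; the unit and composition conditions
\[\mu(1_{\pi(p)},p)=p\qquad\text{and}\qquad\mu(\gamma_{2}\circ\gamma_{1},p)=\mu\big{(}\gamma_{2},\mu(\gamma_{1},p)\big{)}\]
are no longer demanded, and their failure will be measured by the deviations introduced in Definition 3.4 below.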
Depending on the context, \(\big{(}\pi\colon E_{G}\to X_{0},\mu,\mathbb{X}\big{)}\) may denote either a quasi-principal \(G\)-bundle or a principal \(G\)-bundle (Definition 2.5).
A similar notion in the linear framework appeared in [1] and [23] by the name _quasi-action of a Lie groupoid on a vector bundle_.
**Example 3.3**.: Given a quasi-principal \(\mathbb{G}\)-bundle \((\pi\colon\mathbb{E}\to\mathbb{X},\mathcal{C})\) over a Lie groupoid \(\mathbb{X}\), \((\pi_{0}\colon E_{0}\to X_{0},\mu_{\mathcal{C}}:=t\circ\mathcal{C},\mathbb{X})\) is a quasi-principal \(G_{0}\)-bundle over \(\mathbb{X}\), which we call the _underlying quasi-principal \(G_{0}\)-bundle of the quasi-principal \(\mathbb{G}\)-bundle \(\pi\colon\mathbb{E}\to\mathbb{X}\)_.
### Examples of quasi-principal 2-bundles
By Proposition 3.1, any categorical-principal 2-bundle is a quasi-principal 2-bundle. Here, we construct, under certain conditions, non-trivial examples of quasi-principal 2-bundles which fail to be categorical-principal 2-bundles.
**Lemma 3.4**.: Let \((\pi\colon\mathbb{E}\to\mathbb{X},\mathcal{C})\) be a categorical-principal \([H\rtimes_{\alpha}G\rightrightarrows G]\)-bundle over a Lie groupoid \(\mathbb{X}\). If there exists a smooth map \(\mathcal{H}\colon s^{*}E_{0}\to H\) satisfying \(\alpha_{g}(\mathcal{H}(\gamma,pg))=\mathcal{H}(\gamma,p)\) for all \((\gamma,p)\in s^{*}E_{0}\) and \(g\in G\), then for \(\mathcal{C}_{\mathcal{H}}(\gamma,p):=\mathcal{C}(\gamma,p)\big{(}\mathcal{H}(\gamma,p),e\big{)}\), the pair \((\pi\colon\mathbb{E}\to\mathbb{X},\mathcal{C}_{\mathcal{H}})\) is a quasi-principal \([H\rtimes_{\alpha}G\rightrightarrows G]\)-bundle over \(\mathbb{X}\). Moreover, \(\mathcal{C}_{\mathcal{H}}\) is a categorical connection if and only if we have
(i) \(\mathcal{H}\big{(}1_{\pi(p)},p\big{)}=e\) for all \(p\in E_{0}\) and
(ii) \(\mathcal{H}(\gamma_{2}\circ\gamma_{1},p)=\mathcal{H}(\gamma_{2},t(\mathcal{C}(\gamma_{1},p)))\mathcal{H}(\gamma_{1},p)\) for all \(\gamma_{2},\gamma_{1}\in X_{1}\) such that \(s(\gamma_{2})=t(\gamma_{1})\) and \((\gamma_{1},p)\in s^{*}E_{0}\).
Proof.: Since, for \((\gamma,p)\in s^{*}E_{0}\) and \(g\in G\), we have \(P\big{(}\mathcal{C}_{\mathcal{H}}(\gamma,p)\big{)}=\Big{(}\pi_{1}\big{(}\mathcal{C}_{\mathcal{H}}(\gamma,p)\big{)},s\big{(}\mathcal{C}_{\mathcal{H}}(\gamma,p)\big{)}\Big{)}=(\gamma,p)\) and \(\mathcal{C}_{\mathcal{H}}(\gamma,pg)=\mathcal{C}(\gamma,p)\Big{(}\alpha_{g}\big{(}\mathcal{H}(\gamma,pg)\big{)},g\Big{)}\), it follows immediately that \((\pi\colon\mathbb{E}\to\mathbb{X},\mathcal{C}_{\mathcal{H}})\) is a quasi-principal \([H\rtimes_{\alpha}G\rightrightarrows G]\)-bundle over \(\mathbb{X}\). It is easy to verify that \(\mathcal{C}_{\mathcal{H}}\) is a categorical connection if and only if (i) and (ii) hold.
Using Lemma 3.4, next, we will construct some concrete examples of quasi-principal 2-bundles, which are not categorical-principal 2-bundles.
**Example 3.5**.: Let \(\pi^{\rm dec}\colon\mathbb{E}^{\rm dec}\to\mathbb{X}\) be the decorated principal \([H\rtimes_{\alpha}G\rightrightarrows G]\)-bundle over a Lie groupoid \(\mathbb{X}\), obtained from a Lie crossed module \((G,H,\tau,\alpha)\) and a principal \(G\)-bundle \((\pi_{G}\colon E_{G}\to X_{0},\mu,\mathbb{X})\), such that there is a non-identity element \(h\) in \(H\) satisfying \(\alpha(g)(h)=h\) for all \(g\in G\). Define a map \(\mathcal{H}\colon s^{*}E_{G}\to H\) by \((\gamma,p)\mapsto h\) for all \((\gamma,p)\in s^{*}E_{G}\). Since the assignment \((\gamma,p)\mapsto(\gamma,p,e)\) for all \((\gamma,p)\in s^{*}E_{G}\) defines a categorical connection on \(\pi^{\rm dec}\colon\mathbb{E}^{\rm dec}\to\mathbb{X}\), it follows immediately from Lemma 3.4 that \(\mathcal{C}_{h}\colon s^{*}E_{G}\to s^{*}E_{G}\times H\), defined by \((\gamma,p)\mapsto(\gamma,p,e)(h,e)\), is a quasi connection on \(\pi^{\rm dec}\colon\mathbb{E}^{\rm dec}\to\mathbb{X}\). As \(h\neq e\), \(\mathcal{C}_{h}\) is not a categorical connection.
As a special case of Example 3.5, we obtain the following example:
**Example 3.6**.: Let \(\mathbb{X}\) be a Lie groupoid. Observe that the identity map \(\operatorname{id}\colon X_{0}\to X_{0}\) is a principal \(\{e\}\)-bundle over \(X_{0}\) under the natural action of the trivial Lie group \(\{e\}\). Now define
\[\mu\colon s^{*}X_{0}\to X_{0}\] \[(\gamma,p)\mapsto t(\gamma).\]
Clearly, \((\operatorname{id}\colon X_{0}\to X_{0},\mu,\mathbb{X})\) defines a principal \(\{e\}\)-bundle over \(\mathbb{X}\). For an abelian Lie group \(H\neq\{e\}\), consider the decorated principal \([H\rightrightarrows\{e\}]\)-bundle over \(\mathbb{X}\), obtained from the Lie crossed module \((\{e\},H,\tau,\alpha)\) (where \(\tau\) is trivial and \(\alpha\) is \(\operatorname{id}_{H}\)) and the principal \(\{e\}\)-bundle \((\operatorname{id}\colon X_{0}\to X_{0},\mu,\mathbb{X})\). Since \(H\) is not trivial and \(\alpha\) is \(\operatorname{id}_{H}\), it follows from Example 3.5 that for any non-identity \(h\) in \(H\), the map \(\mathcal{C}_{h}\colon s^{*}X_{0}\to s^{*}X_{0}\times H\) given by \((\gamma,p)\mapsto(\gamma,p,e)(h,e)\) is a quasi connection which is not a categorical connection.
**Example 3.7**.: Consider a principal \([H\rtimes_{\alpha}G\rightrightarrows G]\)-bundle \(\pi\colon\mathbb{E}\to[M\rightrightarrows M]\) over a discrete Lie groupoid \([M\rightrightarrows M]\), such that there exists \(h\in H\) with \(h\neq e\) and \(\alpha(g)(h)=h\) for all \(g\in G\). Then it follows from Lemma 3.4 that the map \(\mathcal{C}_{h}\colon s^{*}E_{0}\to E_{1},(1_{x},p)\mapsto 1_{p}(h,e)\) defines a quasi connection, which is not a categorical connection. Hence, in contrast to the existence of a unique categorical connection (see Example 2.9), such a bundle may admit many quasi connections.
### A quasi-principal 2-bundle as a Grothendieck Construction
In this subsection, we will obtain the paper's first main result (Theorem 3.12). We begin by observing some properties of the underlying quasi-principal Lie group bundle of a quasi-principal Lie 2-group bundle (Example 3.3).
**Proposition 3.8**.: Let \((\pi\colon\mathbb{E}\to\mathbb{X},\mathcal{C})\) be a quasi-principal \([H\rtimes_{\alpha}G\rightrightarrows G]\)-bundle over a Lie groupoid \(\mathbb{X}\). Consider the underlying quasi-principal \(G\)-bundle \((\pi_{0}\colon E_{0}\to X_{0},\mu_{\mathcal{C}}:=t\circ\mathcal{C}, \mathbb{X})\) over \(\mathbb{X}\). Then there exist smooth maps \(\mathcal{H}_{u,\mathcal{C}}\colon E_{0}\to H\) and \(\mathcal{H}_{m,\mathcal{C}}\colon X_{1}\times_{s,X_{0},t}X_{1}\to H\) satisfying the following properties:
(a) \(\mu_{\mathcal{C}}(1_{\pi(p)},p)=p\tau(\mathcal{H}_{u,\mathcal{C}}(p))\) for all \(p\in E_{0}\).
(b) \(\mu_{\mathcal{C}}(\gamma_{2},\mu_{\mathcal{C}}(\gamma_{1},p))=\mu_{\mathcal{C}}(\gamma_{2}\circ\gamma_{1},p)\tau(\mathcal{H}_{m,\mathcal{C}}(\gamma_{2},\gamma_{1}))\) for all composable \(\gamma_{2},\gamma_{1}\in X_{1}\) and \(p\in E_{0}\) with \((\gamma_{1},p)\in s^{*}E_{0}\).
(c) [_Right unitor_] \(\mathcal{H}_{m,\mathcal{C}}(\gamma,1_{\pi(p)})=\mathcal{H}_{u,\mathcal{C}}(p)\) for all \(\gamma\in X_{1}\) such that \(s(\gamma)=\pi(p)\).
(d) [_Left unitor_] \(\mathcal{H}_{m,\mathcal{C}}(1_{\pi(\mu_{\mathcal{C}}(\gamma,p))},\gamma)=\mathcal{H}_{u,\mathcal{C}}(\mu_{\mathcal{C}}(\gamma,p))\) for \((\gamma,p)\in s^{*}E_{0}\).
(e) \(\mathcal{H}_{u,\mathcal{C}}\) is \(G\)-invariant.
(f) \(\alpha_{g^{-1}}(\mathcal{H}_{u,\mathcal{C}}(p))=\mathcal{H}_{u,\mathcal{C}}(p)\) for all \(g\in G\) and \(p\in E_{0}\).
(g) \(\mathcal{H}_{u,\mathcal{C}}(p)\in Z(H)\) for all \(p\in E_{0}\), where \(Z(H)\) is the centre of \(H\).
(h) \(\alpha_{g^{-1}}(\mathcal{H}_{m,\mathcal{C}}^{-1}(\gamma_{2},\gamma_{1}))=\mathcal{H}_{m,\mathcal{C}}^{-1}(\gamma_{2},\gamma_{1})\) for all composable \(\gamma_{2},\gamma_{1}\in X_{1}\) and all \(g\in G\).
(i) \(\mathcal{H}_{m,\mathcal{C}}(\gamma_{2},\gamma_{1})\in Z(H)\) for all \((\gamma_{2},\gamma_{1})\in X_{1}\times_{s,X_{0},t}X_{1}\).
(j) [_Associator_] For \(\gamma_{3},\gamma_{2},\gamma_{1}\in X_{1}\) such that \(s(\gamma_{3})=t(\gamma_{2})\) and \(s(\gamma_{2})=t(\gamma_{1})\), we have \[\mathcal{H}_{m,\mathcal{C}}^{-1}(\gamma_{3},\gamma_{2})\mathcal{H}_{m,\mathcal{C}}^{-1}(\gamma_{3}\circ\gamma_{2},\gamma_{1})=\mathcal{H}_{m,\mathcal{C}}^{-1}(\gamma_{2},\gamma_{1})\mathcal{H}_{m,\mathcal{C}}^{-1}(\gamma_{3},\gamma_{2}\circ\gamma_{1}).\]
(k) [_Invertor_] If \((\gamma,p)\in s^{*}E_{0}\), then we have \[\mathcal{H}_{m,\mathcal{C}}(\gamma^{-1},\gamma)\mathcal{H}_{m,\mathcal{C}}(\gamma,\gamma^{-1})^{-1}=\mathcal{H}_{u,\mathcal{C}}(p)^{-1}\mathcal{H}_{u,\mathcal{C}}(\mu_{\mathcal{C}}(\gamma,p)).\]
Proof.: Define \(\mathcal{H}_{u,\mathcal{C}}\colon E_{0}\to H\) by \(p\mapsto h_{p}\) and \(\mathcal{H}_{m,\mathcal{C}}\colon X_{1}\times_{s,X_{0},t}X_{1}\to H\) by \((\gamma_{2},\gamma_{1})\mapsto h_{\gamma_{2},\gamma_{1}}\), where \(h_{p}\) and \(h_{\gamma_{2},\gamma_{1}}\) are, respectively, the unique elements in \(H\) satisfying
\[\mathcal{C}(1_{\pi(p)},p)=1_{p}(h_{p},e) \tag{3.1}\]
for \(p\in E_{0}\),
\[\mathcal{C}(\gamma_{2},\mu_{\mathcal{C}}(\gamma_{1},p))\circ\mathcal{C}( \gamma_{1},p)=\mathcal{C}(\gamma_{2}\circ\gamma_{1},p)(h_{\gamma_{2},\gamma_{1 }},e) \tag{3.2}\]
for composable \(\gamma_{2},\gamma_{1}\).
**Proof of (a) and (b):** (a) and (b) follow directly by applying the target map \(t\) to both sides of Equation (3.1) and Equation (3.2), respectively.
**Proof of (c) and (d):** To prove (c), note that from Equation (3.2) we immediately get \(\mathcal{C}(\gamma,p)\big{(}h_{\gamma,1_{\pi(p)}},e\big{)}=\mathcal{C}\big{(}\gamma,\mu_{\mathcal{C}}(1_{\pi(p)},p)\big{)}\circ\mathcal{C}\big{(}1_{\pi(p)},p\big{)}\) for all \((\gamma,p)\in s^{*}E_{0}\). Then (c) follows easily from (a) and the freeness of the action of \(H\rtimes_{\alpha}G\) on \(E_{1}\). (d) can be proved using similar techniques as in (c).
**Proof of (e):** A straightforward consequence of (c).
**Proof of (f):** (f) follows by observing that, using (e), we can rewrite the identity \(\mathcal{C}(1_{\pi(pg^{-1})},pg^{-1})=1_{pg^{-1}}(h_{pg^{-1}},e)\) (from Equation (3.1)) as \(1_{p}(h_{p},e)(e,g^{-1})=1_{p}(e,g^{-1})(h_{p},e)\) for \(p\in E_{0}\) and \(g\in G\).
**Proof of (g):** (g) follows from (f) using the Peiffer identity Equation (2.1).
**Proof of (h):** For composable \(\gamma_{2},\gamma_{1}\in X_{1}\), \(g\in G\) and \((\gamma_{1},p)\in s^{*}E_{0}\), we have \(\mathcal{C}(\gamma_{2},\mu_{\mathcal{C}}(\gamma_{1},pg^{-1}))\circ\mathcal{C} (\gamma_{1},pg^{-1})=\mathcal{C}(\gamma_{2}\circ\gamma_{1},pg^{-1})(h_{\gamma_ {2},\gamma_{1}},e)\). Then (h) follows straightforwardly from Equation (3.2).
**Proof of (i):** (i) follows from (h) and the Peiffer identity Equation (2.1).
**Proof of (j):** To prove (j), consider \(\gamma_{3},\gamma_{2},\gamma_{1}\in X_{1}\), a sequence of composable morphisms such that \((\gamma_{1},p)\in s^{*}E_{0}\). Then we have
\[\begin{split}&\mathcal{C}(\gamma_{3}\circ\gamma_{2}\circ\gamma_{1},p)\,(h_{\gamma_{3}\circ\gamma_{2},\gamma_{1}},e)\\ &=\mathcal{C}\big{(}\gamma_{3}\circ\gamma_{2},\mu_{\mathcal{C}}(\gamma_{1},p)\big{)}\circ\mathcal{C}(\gamma_{1},p)\quad[\text{by Equation (3.2)}]\\ &=\Big{(}\big{(}\mathcal{C}(\gamma_{3},\mu_{\mathcal{C}}(\gamma_{2},\mu_{\mathcal{C}}(\gamma_{1},p)))\circ\mathcal{C}(\gamma_{2},\mu_{\mathcal{C}}(\gamma_{1},p))\big{)}(h_{\gamma_{3},\gamma_{2}}^{-1},e)\Big{)}\circ\mathcal{C}(\gamma_{1},p)\quad[\text{by Equation (3.2)}]\\ &=\Big{(}\mathcal{C}(\gamma_{3},\mu_{\mathcal{C}}(\gamma_{2},\mu_{\mathcal{C}}(\gamma_{1},p)))\circ\big{(}\mathcal{C}(\gamma_{2}\circ\gamma_{1},p)(h_{\gamma_{2},\gamma_{1}},e)\big{)}\Big{)}(h_{\gamma_{3},\gamma_{2}}^{-1},e)\quad[\text{by the functoriality of the action and Equation (3.2)}]\\ &=\Big{(}\big{(}\mathcal{C}(\gamma_{3},\mu_{\mathcal{C}}(\gamma_{2}\circ\gamma_{1},p))(e,\tau(h_{\gamma_{2},\gamma_{1}}))\big{)}\circ\big{(}\mathcal{C}(\gamma_{2}\circ\gamma_{1},p)(h_{\gamma_{2},\gamma_{1}},e)\big{)}\Big{)}(h_{\gamma_{3},\gamma_{2}}^{-1},e)\quad[\text{by (b) and Definition 3.1}]\\ &=\mathcal{C}(\gamma_{3}\circ\gamma_{2}\circ\gamma_{1},p)\,(h_{\gamma_{3},\gamma_{2}\circ\gamma_{1}}h_{\gamma_{2},\gamma_{1}}h_{\gamma_{3},\gamma_{2}}^{-1},e)\quad[\text{by the functoriality of the action and Equation (3.2)}].\end{split}\]
By the freeness of the action of \(H\rtimes_{\alpha}G\) on \(E_{1}\), we obtain \(h_{\gamma_{3}\circ\gamma_{2},\gamma_{1}}h_{\gamma_{3},\gamma_{2}}=h_{\gamma_{3},\gamma_{2}\circ\gamma_{1}}h_{\gamma_{2},\gamma_{1}}\), which, after taking inverses, is precisely (j).
**Proof of (k):** Applying Equation (3.2) to the composable pair \((\gamma^{-1},\gamma)\) at \(p\) and to the pair \((\gamma,\gamma^{-1})\) at \(\mu_{\mathcal{C}}(\gamma,p)\), and comparing the resulting expressions with Equation (3.1) via \(\gamma^{-1}\circ\gamma=1_{\pi(p)}\) and \(\gamma\circ\gamma^{-1}=1_{\pi(\mu_{\mathcal{C}}(\gamma,p))}\), a computation along the same lines as above yields the invertor identity.
**Proposition 3.9**.: For a Lie group \(G\), let \((\pi\colon E_{G}\to X_{0},\mu,\mathbb{X})\) be a quasi-principal \(G\)-bundle over a Lie groupoid \(\mathbb{X}\). Let \((G,H,\tau,\alpha)\) be a Lie crossed module, and let \(\mathcal{H}_{u}\colon E_{G}\to H\) and \(\mathcal{H}_{m}\colon X_{1}\times_{s,X_{0},t}X_{1}\to H\) be a pair of smooth maps satisfying the coherence properties (a)-(k) in Proposition 3.8. Then we have the following:
1. The manifolds \((s^{*}E_{G})^{q-\mathrm{dec}}:=s^{*}E_{G}\times H\) and \(E_{G}\) define a Lie groupoid \([(s^{*}E_{G})^{q-\mathrm{dec}}\rightrightarrows E_{G}]\) whose structure maps are given as 1. source map \(s\colon\,(\gamma,p,h)\mapsto p\), 2. target map \(t\colon\,(\gamma,p,h)\mapsto\mu(\gamma,p)\tau(h^{-1})\), 3. composition map \(m\colon\bigl{(}(\gamma_{2},p_{2},h_{2}),(\gamma_{1},p_{1},h_{1})\bigr{)}\mapsto\bigl{(}\gamma_{2}\circ\gamma_{1},p_{1},h_{2}h_{1}\mathcal{H}_{m}^{-1}(\gamma_{2},\gamma_{1})\bigr{)}\), 4. unit map \(u:p\mapsto(1_{\pi(p)},p,\mathcal{H}_{u}(p))\), 5. inverse map \(\mathfrak{i}\colon\,\bigl{(}\gamma,p,h\bigr{)}\mapsto\bigl{(}\gamma^{-1},\mu(\gamma,p)\tau(h^{-1}),\mathcal{H}_{u}(p)\mathcal{H}_{m}(\gamma^{-1},\gamma)h^{-1}\bigr{)}\).
2. The Lie groupoid \(\mathbb{E}^{q-\mathrm{dec}}:=[(s^{*}E_{G})^{q-\mathrm{dec}}\rightrightarrows E _{G}]\) forms a quasi-principal \([H\rtimes_{\alpha}G\rightrightarrows G]\)-bundle \(\pi^{q-\mathrm{dec}}\colon\mathbb{E}^{q-\mathrm{dec}}\to\mathbb{X}\) over \(\mathbb{X}\) equipped with the quasi connection \(\mathcal{C}^{q-\mathrm{dec}}\colon s^{*}E_{G}\to(s^{*}E_{G})^{q-\mathrm{dec}}\), \((\gamma,p)\mapsto(\gamma,p,e)\). The action of \([H\rtimes_{\alpha}G\rightrightarrows G]\) on \(\mathbb{E}^{q-\mathrm{dec}}\) and the bundle projection coincide with that of the decorated case (See Proposition 2.8).
3. The quasi connection \(\mathcal{C}^{q-\mathrm{dec}}\) is a categorical connection if and only if the maps \(\mathcal{H}_{u}\) and \(\mathcal{H}_{m}\) are constant maps to \(e\).
Proof.:
**Proof of (1):** From Proposition 2.8, it follows that the source and target maps are surjective submersions. Let \((\gamma_{2},p_{2},h_{2}),(\gamma_{1},p_{1},h_{1})\) be a composable pair of morphisms. To show that the source is compatible with the composition, observe that \(s((\gamma_{2},p_{2},h_{2})\circ(\gamma_{1},p_{1},h_{1}))=s\bigl{(}\gamma_{2}\circ\gamma_{1},p_{1},h_{2}h_{1}\mathcal{H}_{m}^{-1}(\gamma_{2},\gamma_{1})\bigr{)}=p_{1}=s(\gamma_{1},p_{1},h_{1})\), whereas the target consistency follows easily from condition (b) in Proposition 3.8. For the unit map to make sense, note that we have \(t(u(p))=t(1_{\pi(p)},p,\mathcal{H}_{u}(p))=\mu(1_{\pi(p)},p)\tau(\mathcal{H}_{u}(p)^{-1})=p\tau(\mathcal{H}_{u}(p))\tau(\mathcal{H}_{u}(p)^{-1})=p.\) The fact that \(u\) is indeed a unit map follows from the right and left unitors (conditions (c) and (d) in Proposition 3.8). More precisely, the right unitor implies \(\bigl{(}\gamma,p,h\bigr{)}\circ(1_{\pi(p)},p,\mathcal{H}_{u}(p))=(\gamma,p,h\mathcal{H}_{u}(p)\mathcal{H}_{m}^{-1}(\gamma,1_{\pi(p)}))=\bigl{(}\gamma,p,h\bigr{)}.\) On the other hand, the \(G\)-invariance of \(\mathcal{H}_{u}\) (condition (e) in Proposition 3.8) and the left unitor property imply \(\bigl{(}1_{\pi(\mu(\gamma,p)\tau(h^{-1}))},\mu(\gamma,p)\tau(h^{-1}),\mathcal{H}_{u}(\mu(\gamma,p)\tau(h^{-1}))\bigr{)}\circ\bigl{(}\gamma,p,h\bigr{)}=\bigl{(}\gamma,p,\mathcal{H}_{u}(\mu(\gamma,p))h\mathcal{H}_{m}^{-1}(1_{\pi(\mu(\gamma,p))},\gamma)\bigr{)}=\bigl{(}\gamma,p,h\bigr{)}\).
To check the associativity of the composition, consider a sequence of composable morphisms \((\gamma_{3},p_{3},h_{3}),(\gamma_{2},p_{2},h_{2}),(\gamma_{1},p_{1},h_{1})\in(s^{*}E_{G})^{q-\mathrm{dec}}\). Now, \(\bigl{(}(\gamma_{3},p_{3},h_{3})\circ(\gamma_{2},p_{2},h_{2})\bigr{)}\circ(\gamma_{1},p_{1},h_{1})=\bigl{(}\gamma_{3}\circ\gamma_{2},p_{2},h_{3}h_{2}\mathcal{H}_{m}(\gamma_{3},\gamma_{2})^{-1}\bigr{)}\circ(\gamma_{1},p_{1},h_{1})\), which is the same as \(\bigl{(}\gamma_{3}\circ\gamma_{2}\circ\gamma_{1},p_{1},h_{3}h_{2}\mathcal{H}_{m}(\gamma_{3},\gamma_{2})^{-1}h_{1}\mathcal{H}_{m}^{-1}(\gamma_{3}\circ\gamma_{2},\gamma_{1})\bigr{)}\). Whereas, \((\gamma_{3},p_{3},h_{3})\circ\bigl{(}(\gamma_{2},p_{2},h_{2})\circ(\gamma_{1},p_{1},h_{1})\bigr{)}=(\gamma_{3},p_{3},h_{3})\circ\bigl{(}\gamma_{2}\circ\gamma_{1},p_{1},h_{2}h_{1}\mathcal{H}_{m}^{-1}(\gamma_{2},\gamma_{1})\bigr{)}\), which is equal to \(\bigl{(}\gamma_{3}\circ\gamma_{2}\circ\gamma_{1},p_{1},h_{3}h_{2}h_{1}\mathcal{H}_{m}^{-1}(\gamma_{2},\gamma_{1})\mathcal{H}_{m}^{-1}(\gamma_{3},\gamma_{2}\circ\gamma_{1})\bigr{)}\). The associativity of the composition then follows from conditions (i) and (j) in Proposition 3.8. The compatibility of the inverse with the target is clear. For verifying that \(\mathfrak{i}\) is indeed the inverse, first observe
\[\bigl{(}\gamma^{-1},\mu(\gamma,p)\tau(h^{-1}),\mathcal{H}_{u}(p)\mathcal{H}_{m} (\gamma^{-1},\gamma)h^{-1}\bigr{)}\circ\bigl{(}\gamma,p,h\bigr{)}=(1_{\pi(p)}, p,\mathcal{H}_{u}(p)).\]
Then, from the invertor (condition (k) in Proposition 3.8), it follows that \(\bigl{(}\gamma,p,h\bigr{)}\circ\mathfrak{i}(\gamma,p,h)=u\big{(}\mu(\gamma,p)\tau(h^{-1})\big{)}\). Finally, since its structure maps are smooth by definition, it follows that \(\mathbb{E}^{q-\mathrm{dec}}\) is a Lie groupoid.
**Proof of (2) and (3):** Since the action
\[\rho\colon\mathbb{E}^{\mathrm{q-dec}}\times[H\rtimes_{\alpha}G\rightrightarrows G]\to\mathbb{E}^{\mathrm{q-dec}}\]
\[(p,g)\mapsto pg,\]
\[\big{(}(\gamma,p,h),(h^{\prime},g)\big{)}\mapsto\big{(}\gamma,pg,\alpha_{g^{-1 }}(h^{\prime-1}\,h)\big{)},\]
coincides with the decorated case as in Equation (2.5), to prove that \(\rho\) defines a right action of \([H\rtimes_{\alpha}G\rightrightarrows G]\) on \(\mathbb{E}^{\mathrm{q-dec}}\) we just need to check the compatibility of \(\rho\) with the unit maps and the composition. But these are direct consequences of condition (f) and condition (h) in Proposition 3.8, respectively, combined with the functoriality of the action in the decorated case. Also, since the bundle projection functor coincides with the decorated case as in Equation (2.6), it follows that \(\pi^{\mathrm{q-dec}}\colon\mathbb{E}^{\mathrm{q-dec}}\to\mathbb{X}\) is a principal \([H\rtimes_{\alpha}G\rightrightarrows G]\)-bundle over \(\mathbb{X}\). Since \(\mathcal{C}^{\mathrm{q-dec}}\) is by construction a section of \(P\) and is equivariant along the unit map, it is a quasi connection. Moreover, for each \(p\in E_{G}\) we have \(\mathcal{C}^{\mathrm{q-dec}}(1_{\pi(p)},p)=u(p)(\mathcal{H}_{u}(p),e)\), and for any composable sequence of morphisms \(\gamma_{2},\gamma_{1}\in X_{1}\) such that \((\gamma_{1},p)\in s^{*}E_{G}\) we have \(\mathcal{C}^{\mathrm{q-dec}}(\gamma_{2},t(\mathcal{C}^{\mathrm{q-dec}}(\gamma_{1},p)))\circ\mathcal{C}^{\mathrm{q-dec}}(\gamma_{1},p)=\mathcal{C}^{\mathrm{q-dec}}(\gamma_{2}\circ\gamma_{1},p)(\mathcal{H}_{m}(\gamma_{2},\gamma_{1}),e)\). Observe that **(3)** is an immediate consequence of these two identities.
**Definition 3.4**.: Given a Lie crossed module \((G,H,\tau,\alpha)\), a _pseudo-principal \((G,H,\tau,\alpha)\)-bundle over a Lie groupoid \(\mathbb{X}\)_ is defined as a quasi-principal \(G\)-bundle \((\pi_{G}\colon E_{G}\to X_{0},\mu,\mathbb{X})\) over the Lie groupoid \(\mathbb{X}\) (Definition 3.3), equipped with a pair of smooth maps \(\mathcal{H}_{u}\colon E_{G}\to H\) and \(\mathcal{H}_{m}\colon X_{1}\times_{s,X_{0},t}X_{1}\to H\), satisfying the coherence properties (a)-(k) in Proposition 3.8.
We denote a pseudo-principal \((G,H,\tau,\alpha)\)-bundle by \((\pi\colon E_{G}\to X_{0},\mu,\mathcal{H}_{u},\mathcal{H}_{m},\mathbb{X})\), and call the smooth maps \(\mathcal{H}_{u}\) and \(\mathcal{H}_{m}\) the _unital deviation_ and the _compositional deviation_, respectively.
**Example 3.10**.: It follows directly from Proposition 3.8 that the underlying quasi-principal \(G\)-bundle \((\pi_{0}\colon E_{0}\to X_{0},\mu_{\mathcal{C}},\mathbb{X})\) (Example 3.3) of a quasi-principal \([H\rtimes_{\alpha}G\rightrightarrows G]\)-bundle \((\pi\colon\mathbb{E}\to\mathbb{X},\mathcal{C})\), equipped with the pair of smooth maps \(\mathcal{H}_{u,\mathcal{C}}\) and \(\mathcal{H}_{m,\mathcal{C}}\) (as defined in Proposition 3.8), is a pseudo-principal \((G,H,\tau,\alpha)\)-bundle over the Lie groupoid \(\mathbb{X}\). We call \((\pi_{0}\colon E_{0}\to X_{0},\mu_{\mathcal{C}},\mathcal{H}_{u,\mathcal{C}},\mathcal{H}_{m,\mathcal{C}},\mathbb{X})\) the _underlying pseudo-principal \((G,H,\tau,\alpha)\)-bundle of the quasi-principal \([H\rtimes_{\alpha}G\rightrightarrows G]\)-bundle_ \((\pi\colon\mathbb{E}\to\mathbb{X},\mathcal{C})\). The unital deviation \(\mathcal{H}_{u,\mathcal{C}}\) and the compositional deviation \(\mathcal{H}_{m,\mathcal{C}}\) together precisely measure the amount by which the quasi connection \(\mathcal{C}\) differs from being a categorical connection.
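In particular, Equations (3.1) and (3.2), together with the freeness of the \(H\rtimes_{\alpha}G\)-action on \(E_{1}\), give
\[\mathcal{C}\text{ is a categorical connection}\iff\mathcal{H}_{u,\mathcal{C}}\equiv e\ \text{ and }\ \mathcal{H}_{m,\mathcal{C}}\equiv e,\]
in agreement with statement (3) of Proposition 3.9.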
The quasi-principal \([H\rtimes_{\alpha}G\rightrightarrows G]\)-bundle \((\pi^{\mathrm{q-dec}}\colon\mathbb{E}^{\mathrm{q-dec}}\to\mathbb{X},\mathcal{C}^{\mathrm{q-dec}})\) constructed in Proposition 3.9 from a pseudo-principal \((G,H,\tau,\alpha)\)-bundle \((\pi\colon E_{G}\to X_{0},\mu,\mathcal{H}_{u},\mathcal{H}_{m},\mathbb{X})\) will be called a _quasi-decorated \([H\rtimes_{\alpha}G\rightrightarrows G]\)-bundle over \(\mathbb{X}\)_.
The following is evident:
**Proposition 3.11**.: Given a Lie groupoid \(\mathbb{X}\) and a Lie crossed module \((G,H,\tau,\alpha)\), there is a groupoid \(\mathrm{Pseudo}(\mathbb{X},(G,H,\tau,\alpha))\), whose objects are pseudo-principal \((G,H,\tau,\alpha)\)-bundles over \(\mathbb{X}\) and morphisms are defined in the following way:
For any pair of pseudo-principal \((G,H,\tau,\alpha)\)-bundles \((\pi\colon E_{G}\to X_{0},\mu,\mathcal{H}_{u},\mathcal{H}_{m},\mathbb{X})\) and \((\pi^{\prime}\colon E_{G}^{\prime}\to X_{0},\mu^{\prime},\mathcal{H}_{u}^{\prime},\mathcal{H}_{m}^{\prime},\mathbb{X})\),
* if \(\mathcal{H}_{m}\neq\mathcal{H}_{m}^{\prime}\), then \[\mathrm{Hom}\Big{(}(\pi\colon E_{G}\to X_{0},\mu,\mathcal{H}_{u}, \mathcal{H}_{m},\mathbb{X}),(\pi^{\prime}\colon E_{G}^{\prime}\to X_{0},\mu^{ \prime},\mathcal{H}_{u}^{\prime},\mathcal{H}_{m}^{\prime},\mathbb{X})\Big{)}=\emptyset,\]
* if \(\mathcal{H}_{m}=\mathcal{H}^{\prime}_{m}\), then an element of \(\mathrm{Hom}\Big{(}(\pi\colon E_{G}\to X_{0},\mu,\mathcal{H}_{u},\mathcal{H}_{m}, \mathbb{X}),(\pi^{\prime}\colon E^{\prime}_{G}\to X_{0},\mu^{\prime},\mathcal{H} ^{\prime}_{u},\mathcal{H}^{\prime}_{m},\mathbb{X})\Big{)}\) is defined as a morphism of principal \(G\)-bundles \(f\colon E_{G}\to E^{\prime}_{G}\), satisfying the following conditions: 1. \(f(\mu(\gamma,p))=\mu^{\prime}(\gamma,f(p))\) and 2. \(\mathcal{H}_{u}=\mathcal{H}^{\prime}_{u}\circ f\).
Now, we are ready to state and prove our first main result of the paper.
**Theorem 3.12**.: For a Lie crossed module \((G,H,\tau,\alpha)\) and a Lie groupoid \(\mathbb{X}\), the groupoid \(\mathrm{Bun}_{\mathrm{quasi}}(\mathbb{X},\,[H\rtimes_{\alpha}G\rightrightarrows G])\) is equivalent to the groupoid \(\mathrm{Pseudo}(\mathbb{X},(G,H,\tau,\alpha))\).
Proof.: Define
\[\mathcal{F} \colon\mathrm{Bun}_{\mathrm{quasi}}(\mathbb{X},[H\rtimes_{ \alpha}G\rightrightarrows G])\to\mathrm{Pseudo}\Big{(}\mathbb{X},(G,H,\tau, \alpha)\Big{)},\] \[(\pi\colon\mathbb{E}\to\mathbb{X},\mathcal{C})\mapsto(\pi_{0} \colon E_{0}\to X_{0},\mu_{\mathcal{C}},\mathcal{H}_{u,\mathcal{C}}, \mathcal{H}_{m,\mathcal{C}},\mathbb{X})\] \[(F\colon\mathbb{E}\to\mathbb{E}^{\prime})\mapsto(F_{0}\colon E_{0 }\to E^{\prime}_{0}).\]
We claim \(\mathcal{F}\) is an essentially surjective, faithful, and full functor.
Suppose \((\pi\colon\mathbb{E}\to\mathbb{X},\mathcal{C})\) is a quasi-principal \([H\rtimes_{\alpha}G\rightrightarrows G]\)-bundle over \(\mathbb{X}\). Then from Example 3.10, it directly follows that \((\pi_{0}\colon E_{0}\to X_{0},\mu_{\mathcal{C}},\mathcal{H}_{u,\mathcal{C}},\mathcal{H}_{m,\mathcal{C}},\mathbb{X})\) is indeed a pseudo-principal \((G,H,\tau,\alpha)\)-bundle over \(\mathbb{X}\). Now, let \(F\in\mathrm{Hom}\Big{(}(\pi\colon\mathbb{E}\to\mathbb{X},\mathcal{C}),(\pi^{\prime}\colon\mathbb{E}^{\prime}\to\mathbb{X},\mathcal{C}^{\prime})\Big{)}\); then, as a direct consequence of the identities \(F_{1}(\mathcal{C}(\gamma,p))=\mathcal{C}^{\prime}(\gamma,F_{0}(p))\) and \(F_{1}\big{(}\mathcal{C}(1_{\pi(p)},p)\big{)}=1_{F_{0}(p)}\Big{(}\mathcal{H}_{u,\mathcal{C}^{\prime}}\big{(}F_{0}(p)\big{)},e\Big{)}\), we have a well-defined \(\mathcal{F}(F)\). Functoriality of \(\mathcal{F}\) is a straightforward verification. By Proposition 3.9, \(\mathcal{F}\) is essentially surjective. Now, consider \(F,\bar{F}\in\mathrm{Hom}\Big{(}(\pi\colon\mathbb{E}\to\mathbb{X},\mathcal{C}),(\pi^{\prime}\colon\mathbb{E}^{\prime}\to\mathbb{X},\mathcal{C}^{\prime})\Big{)}\) and suppose \(\mathcal{F}(F)=\mathcal{F}(\bar{F})\), that is, \(F_{0}=\bar{F}_{0}\). Now, for any \(\delta\in E_{1}\), there exists a unique \(h_{\delta}\in H\) such that \(\delta=\mathcal{C}\big{(}\pi_{1}(\delta),s(\delta)\big{)}(h_{\delta},e)\). Then the equality \(F_{1}(\delta)=\bar{F}_{1}(\delta)\) follows from the compatibility condition of \(F\) and \(\bar{F}\) with the quasi connections \(\mathcal{C}\) and \(\mathcal{C}^{\prime}\) in Proposition 3.2. Hence, \(\mathcal{F}\) is faithful. To show that \(\mathcal{F}\) is full, consider an element \(f\) in \(\mathrm{Hom}\Big{(}(\pi_{0}\colon E_{0}\to X_{0},\mu_{\mathcal{C}},\mathcal{H}_{u,\mathcal{C}},\mathcal{H}_{m,\mathcal{C}},\mathbb{X}),(\pi^{\prime}_{0}\colon E^{\prime}_{0}\to X_{0},\mu_{\mathcal{C}^{\prime}},\mathcal{H}_{u,\mathcal{C}^{\prime}},\mathcal{H}_{m,\mathcal{C}^{\prime}},\mathbb{X})\Big{)}\) for a pair of quasi-principal \([H\rtimes_{\alpha}G\rightrightarrows G]\)-bundles \((\pi\colon\mathbb{E}\to\mathbb{X},\mathcal{C})\) and \((\pi^{\prime}\colon\mathbb{E}^{\prime}\to\mathbb{X},\mathcal{C}^{\prime})\) over \(\mathbb{X}\). Now, define
\[F \colon\mathbb{E}\to\mathbb{E}^{\prime}\] \[p \mapsto f(p),p\in E_{0},\] \[\delta \mapsto\mathcal{C}^{\prime}\big{(}\pi_{1}(\delta),f(s(\delta)) \big{)}(h_{\delta},e),\delta\in E_{1},\]
where \(h_{\delta}\) is the unique element in \(H\), such that \(\delta=\mathcal{C}\big{(}\pi_{1}(\delta),s(\delta)\big{)}(h_{\delta},e)\). We need to show \(F:=(F_{1},F_{0})\) is a morphism of quasi-principal \([H\rtimes_{\alpha}G\rightrightarrows G])\)-bundles over \(\mathbb{X}\). Note that \(\pi^{\prime}_{0}\circ F_{0}=\pi^{\prime}_{0}\circ f=\pi_{0}\) and for any \(\delta\in E_{1}\), \(\pi^{\prime}_{1}\circ F_{1}(\delta)=\pi^{\prime}_{1}\Big{(}\mathcal{C}^{\prime} \big{(}\pi_{1}(\delta),f(s(\delta))\big{)}(h_{\delta},e)\Big{)}=\pi_{1}(\delta)\). \(G\)-equivariancy of \(f\) implies the same for \(F_{0}\). Now, observe that there exists a unique element \(\bar{h}\in H\) such that \(\mathcal{C}\big{(}\pi_{1}(\delta),s(\delta)\big{)}(h_{\delta},e)(h,g)=\mathcal{C }\big{(}\pi_{1}(\delta),s(\delta)\big{)}(e,g)(\bar{h},e)\) for \(\delta\in E_{1}\) and \((h,g)\in H\rtimes_{\alpha}G\). Then, by a little calculation, we arrive at
\[F_{1}\big{(}\delta(h,g)\big{)}=\mathcal{C}^{\prime}\big{(}\pi_{1}(\delta),f(s( \delta))\big{)}(\bar{h},g).\]
From the observation \(\bar{h}=h_{\delta}h\), we get \(F_{1}\big(\delta(h,g)\big)=\mathcal{C}^{\prime}\big(\pi_{1}(\delta),f(s(\delta))\big)(h_{\delta},e)(h,g)=F_{1}(\delta)(h,g)\). Hence, \(F_{1}\) is \(H\rtimes_{\alpha}G\)-equivariant. Now, note that from the definition itself, it is evident that \(F_{1}(\mathcal{C}(\gamma,p))=\mathcal{C}^{\prime}(\gamma,F_{0}(p))\) for all \((\gamma,p)\in s^{*}E_{0}\). Since both \(F_{0}\) and \(F_{1}\) are smooth by definition, in order to prove \(\mathcal{F}\) is full, it is sufficient to prove that \(F\colon\mathbb{E}\to\mathbb{E}^{\prime}\) is a functor. Source map consistency is trivial, whereas the target consistency follows from the compatibility condition of \(f\) with \(\mu_{\mathcal{C}}\) and \(\mu_{\mathcal{C}^{\prime}}\), as mentioned in Proposition 3.11. Now, for any \(p\in E_{0}\), we have \(F_{1}(1_{p})=\mathcal{C}^{\prime}\big(1_{\pi_{0}^{\prime}\circ f(p)},f(p)\big)\big((\mathcal{H}_{u,\mathcal{C}}(p))^{-1},e\big)\). By appropriately using Equation (3.1), we obtain the compatibility of \(F\) with the unit maps. For compositional compatibility, consider \(\delta_{2},\delta_{1}\in E_{1}\) such that \(s(\delta_{2})=t(\delta_{1})\). It is not difficult to observe that, to show the required compatibility, it is sufficient to prove the following identity:
\[\mathcal{H}_{m,\mathcal{C}}(\gamma_{2},\gamma_{1})h_{\delta_{1}}h_{\delta_{2 }}=h_{\delta_{2}\circ\delta_{1}}.\]
To prove the above identity, consider
\[\begin{aligned}\delta_{2}\circ\delta_{1}&=\underbrace{\mathcal{C}\big(\pi_{1}(\delta_{2}),s(\delta_{2})\big)(h_{\delta_{2}},e)}_{\delta_{2}}\circ\underbrace{\mathcal{C}\big(\pi_{1}(\delta_{1}),s(\delta_{1})\big)(h_{\delta_{1}},e)}_{\delta_{1}}\\ &=\Big(\mathcal{C}\Big(\pi_{1}(\delta_{2}),\underbrace{\mu_{\mathcal{C}}\big(\pi_{1}(\delta_{1}),s(\delta_{1})\big)}_{s(\delta_{2})=t(\delta_{1})}\Big)\circ\mathcal{C}\big(\pi_{1}(\delta_{1}),s(\delta_{1})\big)\Big)\Big(\big(\alpha_{\tau(h_{\delta_{1}})}(h_{\delta_{2}}),\tau(h_{\delta_{1}})\big)\circ(h_{\delta_{1}},e)\Big),\end{aligned}\] where the second equality uses (iii) of Definition 2.6 and the functoriality of the action. Rewriting the composite \(\mathcal{C}\Big(\pi_{1}(\delta_{2}),\mu_{\mathcal{C}}\big(\pi_{1}(\delta_{1}),s(\delta_{1})\big)\Big)\circ\mathcal{C}\big(\pi_{1}(\delta_{1}),s(\delta_{1})\big)\) through \(\mathcal{C}\big(\pi_{1}(\delta_{2})\circ\pi_{1}(\delta_{1}),s(\delta_{1})\big)\) via the definition of \(\mathcal{H}_{m,\mathcal{C}}\), and comparing the result with \(\delta_{2}\circ\delta_{1}=\mathcal{C}\big(\pi_{1}(\delta_{2}\circ\delta_{1}),s(\delta_{1})\big)(h_{\delta_{2}\circ\delta_{1}},e)\), yields the claimed identity.
**Corollary 3.13**.: Moreover, the functor \(\mathcal{F}\) in Theorem 3.12 restricts to an essentially surjective, full and faithful functor to the subcategory \(\operatorname{Bun}_{\operatorname{Cat}}(\mathbb{X},[H\rtimes_{\alpha}G\rightrightarrows G])\) of \(\operatorname{Bun}_{\operatorname{quasi}}(\mathbb{X},[H\rtimes_{\alpha}G\rightrightarrows G])\), and hence yields an equivalence of categories between \(\operatorname{Bun}_{\operatorname{Cat}}(\mathbb{X},[H\rtimes_{\alpha}G\rightrightarrows G])\) and \(\operatorname{Bun}(\mathbb{X},G)\).
### Quasi connections as retractions
Recall that, given morphisms of Lie groupoids \(\phi\colon\mathbb{Y}\to\mathbb{X}\) and \(\psi\colon\mathbb{Z}\to\mathbb{X}\), there is a topological groupoid \(\mathbb{Y}\times_{\phi,\mathbb{X},\psi}^{h}\mathbb{Z}\). An object of this groupoid is a triple \((y,\gamma\colon\psi(z)\to\phi(y),z)\) for \(y\in Y_{0},z\in Z_{0}\), whereas a morphism from \((y,\gamma\colon\psi(z)\to\phi(y),z)\) to \((y^{\prime},\gamma^{\prime}\colon\psi(z^{\prime})\to\phi(y^{\prime}),z^{\prime})\) is given by a pair \((\Gamma\colon y\to y^{\prime},\delta\colon z\to z^{\prime})\) such that \(\phi(\Gamma)\circ\gamma=\gamma^{\prime}\circ\psi(\delta)\). The groupoid \(\mathbb{Y}\times_{\phi,\mathbb{X},\psi}^{h}\mathbb{Z}\) satisfies the usual universal property of a fiber product (but up to an isomorphism), and its structure maps are the natural ones. If either \(\phi_{0}\colon Y_{0}\to X_{0}\) or \(\psi_{0}\colon Z_{0}\to X_{0}\) is a submersion, then \(\mathbb{Y}\times_{\phi,\mathbb{X},\psi}^{h}\mathbb{Z}\) has a natural Lie groupoid structure and is known as the _weak pull-back_ or _weak fibered product_. Now, a morphism of Lie groupoids \(F\colon\mathbb{X}\to\mathbb{Y}\) has a canonical factorization through \(\mathbb{Y}\times_{\mathbb{Y},F}^{h}\mathbb{X}:=\mathbb{Y}\times_{\operatorname{id}_{\mathbb{Y}},\mathbb{Y},F}^{h}\mathbb{X}\) as \(F=F_{\mathbb{Y}}\circ F_{\mathbb{X}}\), where at the object level the maps \(F_{\mathbb{X}}\colon\mathbb{X}\to\mathbb{Y}\times_{\mathbb{Y},F}^{h}\mathbb{X}\) and \(F_{\mathbb{Y}}\colon\mathbb{Y}\times_{\mathbb{Y},F}^{h}\mathbb{X}\to\mathbb{Y}\) are given by \(x\mapsto(F(x),\operatorname{id}_{F(x)},x)\) and \((y,\gamma,x)\mapsto y\) respectively; at the morphism level, \(F_{\mathbb{Y}}\) is given by \((\eta\colon y\to y^{\prime},\delta\colon x\to x^{\prime})\mapsto\eta\), and both maps extend naturally. For a detailed discussion on these, readers can look at [49]. As a consequence of this factorization, we will characterize unital (Definition 3.2) and categorical connections (Definition 2.6) in terms of certain retractions. The proof is a suitable adaptation of the one given in **Proposition 2.2.3** and **Corollary 2.2.4** of [19] in the set-theoretic setup, and we skip it here.
**Proposition 3.14**.: For a Lie \(2\)-group \(\mathbb{G}\), let \(\pi\colon\mathbb{E}\to\mathbb{X}\) be a principal \(\mathbb{G}\)-bundle over a Lie groupoid \(\mathbb{X}\). Then
1. there is an induced right action of \(\mathbb{G}\) on the Lie groupoid \(\mathbb{X}\times_{\mathbb{X},\pi}^{h}\mathbb{E}\), given by \[\rho\colon (\mathbb{X}\times_{\mathbb{X},\pi}^{h}\mathbb{E})\times\mathbb{G }\to\mathbb{X}\times_{\mathbb{X},\pi}^{h}\mathbb{E}\] \[\big{(}(x,\gamma,p),g\big{)}\mapsto\big{(}x,\gamma,pg\big{)},\] \[\big{(}(\Gamma\colon x\to x^{\prime},\delta\colon p\to p^{ \prime}),\phi\big{)}\mapsto(\Gamma,\delta\phi),\]
2. the set of \(\mathbb{G}\)-equivariant morphisms of Lie groupoids \(r\colon\mathbb{X}\times_{\mathbb{X},\pi}^{h}\mathbb{E}\to\mathbb{E}\) satisfying \(\pi\circ r=\pi_{\mathbb{X}}\), \(r\circ\pi_{\mathbb{E}}=\operatorname{id}_{\mathbb{E}}\), is in one-to-one correspondence with the set of unital connections \(\mathcal{C}\) on \(\pi\colon\mathbb{E}\to\mathbb{X}\),
3. a unital connection \(\mathcal{C}\) on \(\pi\colon\mathbb{E}\to\mathbb{X}\) is a categorical connection if and only if the image of morphisms of the form \((\Gamma,1_{p})\) under the associated map \(r_{\mathcal{C}}\colon\mathbb{X}\times_{\mathbb{X},\pi}^{h}\mathbb{E}\to \mathbb{E}\), lies in the image of \(\mathcal{C}\).
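To indicate how the correspondence in (2) goes (a sketch only, adapted from the set-theoretic argument of [19]): a unital connection \(\mathcal{C}\) induces the retraction \(r_{\mathcal{C}}\) acting at the object level by
\[r_{\mathcal{C}}\colon(x,\gamma,p)\mapsto\mu_{\mathcal{C}}(\gamma,p)=t\big(\mathcal{C}(\gamma,p)\big),\]
which lies over \(t(\gamma)=x\), so that \(\pi\circ r_{\mathcal{C}}=\pi_{\mathbb{X}}\) at the object level, while unitality \(\mathcal{C}(1_{\pi_{0}(p)},p)=1_{p}\) gives \(r_{\mathcal{C}}\circ\pi_{\mathbb{E}}=\operatorname{id}_{\mathbb{E}}\) at the object level; conversely, from a retraction \(r\) one recovers the object part of a unital connection by \(\mu(\gamma,p):=r_{0}\big(t(\gamma),\gamma,p\big)\), the morphism-level data of \(r\) supplying \(\mathcal{C}(\gamma,p)\) itself. The full verification proceeds as in [19].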
### Towards a principal \(2\)-bundle over a differentiable stack
According to **Corollary 2.12** of [38], if the Lie groupoids \(\mathbb{X}\) and \(\mathbb{Y}\) are Morita equivalent, then for any Lie group \(G\), the categories \(\operatorname{Bun}(\mathbb{X},G)\) and \(\operatorname{Bun}(\mathbb{Y},G)\) are equivalent. Then, Corollary 3.13 yields the following result:
**Proposition 3.15**.: Given a Lie \(2\)-group \(\mathbb{G}\), if Lie groupoids \(\mathbb{X}\) and \(\mathbb{Y}\) are Morita equivalent, then the category \(\operatorname{Bun}_{\operatorname{Cat}}(\mathbb{X},\mathbb{G})\) is equivalent to the category \(\operatorname{Bun}_{\operatorname{Cat}}(\mathbb{Y},\mathbb{G})\).
Proposition 3.15 implies that the definition of a principal \(2\)-bundle over a Lie groupoid, equipped with a categorical connection, can be extended over a differentiable stack represented by the base Lie groupoid. One can consult [8] for a detailed discussion on
differentiable stacks and Morita equivalent Lie groupoids. Our upcoming paper will deal with quasi-principal \(2\)-bundles over differentiable stacks.
## 4. Lazy Haefliger paths and thin fundamental groupoid of a Lie groupoid
In this section, we introduce a notion of the thin fundamental groupoid of a Lie groupoid, generalizing the classical one, and impose a diffeological structure on it.
We start by defining the notion of a lazy Haefliger path.
**Definition 4.1**.: Let \(\mathbb{X}\) be a Lie groupoid and let \(x,y\in X_{0}\). A _lazy \(\mathbb{X}\)-path_ or a _lazy Haefliger path \(\Gamma\) from \(x\) to \(y\)_ is a sequence \(\Gamma:=(\gamma_{0},\alpha_{1},\gamma_{1},\cdots,\alpha_{n},\gamma_{n})\) for some \(n\in\mathbb{N}\) where
1. \(\alpha_{i}:[0,1]\to X_{0}\) is a path with sitting instants for all \(1\leq i\leq n\) and
2. \(\gamma_{i}\in X_{1}\) for all \(0\leq i\leq n\),
such that the following conditions hold:
1. \(s(\gamma_{0})=x\) and \(t(\gamma_{n})=y\);
2. \(s(\gamma_{i})=\alpha_{i}(1)\) for all \(0<i\leq n\);
3. \(t(\gamma_{i})=\alpha_{i+1}(0)\) for all \(0\leq i<n\).
We will say that \(\Gamma\) is a _lazy \(\mathbb{X}\)-path of order \(n\)_. We define the _source of \(\Gamma\)_ as \(s(\gamma_{0})=x\) and the _target of \(\Gamma\)_ as \(t(\gamma_{n})=y\). For a given Lie groupoid \(\mathbb{X}\), we shall denote by \(P\mathbb{X}\) the set of all lazy \(\mathbb{X}\)-paths of all orders. Observe that if we remove the sitting instants condition from (i), then we recover the existing notion of a Haefliger path as given in [24, 27, 16, 25].
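Schematically, a lazy \(\mathbb{X}\)-path of order \(1\) from \(x\) to \(y\) may be pictured as
\[x\xrightarrow{\ \gamma_{0}\ }\alpha_{1}(0)\rightsquigarrow\alpha_{1}(1)\xrightarrow{\ \gamma_{1}\ }y,\]
where the squiggly arrow denotes the path \(\alpha_{1}\) in \(X_{0}\) and \(\gamma_{0},\gamma_{1}\in X_{1}\); in general, a lazy \(\mathbb{X}\)-path of order \(n\) alternates the \(n+1\) morphisms \(\gamma_{0},\dots,\gamma_{n}\) of \(X_{1}\) with the \(n\) paths \(\alpha_{1},\dots,\alpha_{n}\) in \(X_{0}\).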
We are interested in certain equivalence classes of lazy Haefliger paths. Such equivalences, or minor variations thereof, have already been studied, for example in [24, 25, 16].
**Definition 4.2**.: A lazy \(\mathbb{X}\)-path \(\Gamma:=(\gamma_{0},\alpha_{1},\gamma_{1},\cdots,\alpha_{n},\gamma_{n})\) is said to be _equivalent_ to another lazy \(\mathbb{X}\)-path \(\bar{\Gamma}\), if one is obtained from the other by a finite sequence of all or some of the following operations:
1. _Removing/adding a constant path_, that is, if \(\alpha_{i+1}\) is a constant path in the lazy \(\mathbb{X}\)-path \(\Gamma\), then by removing it we obtain the lazy \(\mathbb{X}\)-path \((\gamma_{0},\alpha_{1},\gamma_{1},\dots,\gamma_{i+1}\circ\gamma_{i},\dots,\alpha_{n},\gamma_{n})\), where \(i\in\{1,2,\dots,n-1\}\). Replacing the word "removing" by "adding", one obtains the condition for "adding a constant path". \[\cdot\xrightarrow{\ \gamma_{i}\ }\cdot\xrightarrow{\ \alpha_{i+1}=\mathrm{constant}\ }\cdot\xrightarrow{\ \gamma_{i+1}\ }\cdot\ \rightsquigarrow\ \cdot\xrightarrow{\ \gamma_{i+1}\circ\gamma_{i}\ }\cdot\]
2. _Removing/adding an identity morphism_, that is, if \(\gamma_{i}\) is an identity morphism in the lazy \(\mathbb{X}\)-path \(\Gamma\), then by removing it we obtain a lazy \(\mathbb{X}\)-path \((\gamma_{0},\alpha_{1},\gamma_{1},\dots,\alpha_{i+1}*\alpha_{i},\dots,\alpha_{n},\gamma_{n})\), where \(*\) is the concatenation of paths and \(i\in\{1,2,\dots,n-1\}\). Replacing the word "removing" by "adding", one obtains the condition for "adding an identity morphism". \[\cdot\xrightarrow{\ \alpha_{i}\ }\cdot\xrightarrow{\ \gamma_{i}=\mathrm{identity}\ }\cdot\xrightarrow{\ \alpha_{i+1}\ }\cdot\ \rightsquigarrow\ \cdot\xrightarrow{\ \alpha_{i+1}*\alpha_{i}\ }\cdot\]
3. _Replacing \(\alpha_{i}\) by \(t\circ\zeta_{i}\), \(\gamma_{i-1}\) by \(\zeta_{i}(0)\circ\gamma_{i-1}\) and \(\gamma_{i}\) by \(\gamma_{i}\circ(\zeta_{i}(1))^{-1}\)_, for a given path \(\zeta_{i}\colon[0,1]\to X_{1}\) with sitting instants such that \(s\circ\zeta_{i}=\alpha_{i}\) and \(i\in\{1,2,\dots,n\}\); that is, the portion \((\gamma_{i-1},\alpha_{i},\gamma_{i})\) of the lazy \(\mathbb{X}\)-path \(\Gamma\) is replaced by \(\big(\zeta_{i}(0)\circ\gamma_{i-1},\,t\circ\zeta_{i},\,\gamma_{i}\circ(\zeta_{i}(1))^{-1}\big)\) to obtain the lazy \(\mathbb{X}\)-path \(\big(\gamma_{0},\alpha_{1},\gamma_{1},\dots,\alpha_{i-1},\zeta_{i}(0)\circ\gamma_{i-1},t\circ\zeta_{i},\gamma_{i}\circ(\zeta_{i}(1))^{-1},\alpha_{i+1},\dots,\alpha_{n},\gamma_{n}\big)\).
from \(\Gamma\) to another lazy \(\mathbb{X}\)-path \(\Gamma^{\prime}\), then the identities \(s\circ\zeta_{i}^{-1}=(s\circ\zeta_{i})^{-1}\) and \(t\circ\zeta_{i}^{-1}=(t\circ\zeta_{i})^{-1}\) imply that \(\{\zeta_{i}^{-1}\colon[0,1]\to X_{1}\}_{i=0,1,\dots,n}\), defined by \(r\mapsto\zeta_{i}(1-r)\), is a thin deformation from \(\Gamma^{\prime}\) to \(\Gamma\). To show the transitivity, suppose \(\{\delta_{i}\colon[0,1]\to X_{1}\}_{i}\) is a thin deformation from \(\Gamma^{\prime}\) to \(\Gamma^{\prime\prime}\); then one verifies that \(\{\delta_{i}\ast\zeta_{i}\colon[0,1]\to X_{1}\}_{i}\) is a thin deformation from \(\Gamma\) to \(\Gamma^{\prime\prime}\), where \(\ast\) is the concatenation of paths.
**Definition 4.5**.: A _lazy \(\mathbb{X}\)-path thin homotopy_ is defined as the equivalence relation on \(P\mathbb{X}\) generated by the equivalence relations in Definition 4.2 and Proposition 4.1.
We denote the corresponding quotient set by \(\frac{P\mathbb{X}}{\sim}\). Explicitly, a pair of lazy \(\mathbb{X}\)-paths (with fixed endpoints) is related by a lazy \(\mathbb{X}\)-path thin homotopy if one is obtained from the other by a finite sequence of equivalences and thin deformations.
**Proposition 4.2**.: Given a Lie groupoid \(\mathbb{X}=[X_{1}\rightrightarrows X_{0}]\), there is a groupoid \(\Pi_{\mathrm{thin}}(\mathbb{X})\), whose object set is \(X_{0}\) and whose morphism set is \(\frac{P\mathbb{X}}{\sim}\). The structure maps are given as follows:
1. Source: \(s\colon\frac{P\mathbb{X}}{\sim}\to X_{0}\) is defined by \([\Gamma=(\gamma_{0},\alpha_{1},\gamma_{1},\cdots,\alpha_{n},\gamma_{n})]\mapsto s(\gamma_{0})\);
2. Target: \(t\colon\frac{P\mathbb{X}}{\sim}\to X_{0}\) is defined by \([\Gamma=(\gamma_{0},\alpha_{1},\gamma_{1},\cdots,\alpha_{n},\gamma_{n})]\mapsto t(\gamma_{n})\);
3. Composition: if \(s([\Gamma^{\prime}=(\gamma_{0}^{\prime},\alpha_{1}^{\prime},\gamma_{1}^{ \prime},\cdots,\alpha_{n}^{\prime},\gamma_{n}^{\prime})])=t([\Gamma=(\gamma_{0 },\alpha_{1},\gamma_{1},\cdots,\alpha_{m},\gamma_{m})])\), then define \[[(\gamma_{0}^{\prime},\alpha_{1}^{\prime},\gamma_{1}^{\prime},\cdots,\alpha_ {n}^{\prime},\gamma_{n}^{\prime})]\circ[(\gamma_{0},\alpha_{1},\gamma_{1}, \cdots,\alpha_{m},\gamma_{m})]:=[(\gamma_{0},\alpha_{1},\gamma_{1},\cdots, \alpha_{m},\gamma_{0}^{\prime}\circ\gamma_{m},\alpha_{1}^{\prime},\gamma_{1}^{ \prime},\cdots,\alpha_{n}^{\prime},\gamma_{n}^{\prime})];\]
4. Unit: \(u:X_{0}\rightarrow\frac{P\mathbb{X}}{\sim}\) is given by \(x\mapsto[(1_{x},c_{x},1_{x})]\) where \(c_{x}\colon[0,1]\to X_{0}\) is the constant path at \(x\in X_{0}\);
5. Inverse: \(\mathrm{i}\colon\frac{P\mathbb{X}}{\sim}\to\frac{P\mathbb{X}}{\sim}\) is given by \[[(\gamma_{0},\alpha_{1},\gamma_{1},\cdots,\gamma_{n-1},\alpha_{n},\gamma_{n})]\mapsto[(\gamma_{n}^{-1},\alpha_{n}^{-1},\gamma_{n-1}^{-1},\cdots,\gamma_{1}^{-1},\alpha_{1}^{-1},\gamma_{0}^{-1})].\]
Proof.: A direct consequence of the definition is that \(s\) and \(t\) are well-defined. One can verify the well-definedness of the composition by considering the following four cases separately.
1. If \(\tilde{\Gamma}^{\prime}\) is obtained from \(\Gamma^{\prime}\) by an equivalence and if \(\tilde{\Gamma}\) is obtained from \(\Gamma\) by an equivalence, then \(\Gamma^{\prime}\circ\Gamma\) is lazy \(\mathbb{X}\)-path thin homotopic to \(\tilde{\Gamma}^{\prime}\circ\tilde{\Gamma}\).
2. If \(\tilde{\Gamma}^{\prime}\) is obtained from \(\Gamma^{\prime}\) by a thin deformation and if \(\tilde{\Gamma}\) is obtained from \(\Gamma\) by a thin deformation, then \(\Gamma^{\prime}\circ\Gamma\) is lazy \(\mathbb{X}\)-path thin homotopic to \(\tilde{\Gamma}^{\prime}\circ\tilde{\Gamma}\).
3. If \(\tilde{\Gamma}^{\prime}\) is obtained from \(\Gamma^{\prime}\) by an equivalence and if \(\tilde{\Gamma}\) is obtained from \(\Gamma\) by a thin deformation, then \(\Gamma^{\prime}\circ\Gamma\) is lazy \(\mathbb{X}\)-path thin homotopic to \(\tilde{\Gamma}^{\prime}\circ\tilde{\Gamma}\).
4. If \(\tilde{\Gamma}^{\prime}\) is obtained from \(\Gamma^{\prime}\) by a thin deformation and if \(\tilde{\Gamma}\) is obtained from \(\Gamma\) by an equivalence, then \(\Gamma^{\prime}\circ\Gamma\) is lazy \(\mathbb{X}\)-path thin homotopic to \(\tilde{\Gamma}^{\prime}\circ\tilde{\Gamma}\).
**Case (i):** Straightforward: successively execute on \(\Gamma^{\prime}\circ\Gamma\) the operations that produce \(\tilde{\Gamma}^{\prime}\) and \(\tilde{\Gamma}\) from \(\Gamma^{\prime}\) and \(\Gamma\), respectively.
**Case (ii):** Observe that if \(\{\zeta_{i}\colon[0,1]\to X_{1}\}_{i=0,1,\dots,m}\) and \(\{\zeta_{i}^{\prime}\colon[0,1]\to X_{1}\}_{i=0,1,\dots,n}\) are thin deformations from \(\Gamma\) to \(\tilde{\Gamma}\) and from \(\Gamma^{\prime}\) to \(\tilde{\Gamma}^{\prime}\) respectively, then \(\{\zeta_{0},\zeta_{1},\cdots,\zeta_{m-1},\zeta_{0}^{\prime}\circ\zeta_{m},\zeta_{1}^{\prime},\zeta_{2}^{\prime},\cdots,\zeta_{n}^{\prime}\}\) is a thin deformation from \(\Gamma^{\prime}\circ\Gamma\) to \(\tilde{\Gamma}^{\prime}\circ\tilde{\Gamma}\).
**Case (iii):** Let \(\{\zeta_{i}\colon[0,1]\to X_{1}\}_{i=0,1\cdots,n}\) be a thin deformation from \(\Gamma\) to \(\tilde{\Gamma}\) and let \(\epsilon\) be an equivalence operation on \(\Gamma^{\prime}\) to obtain \(\tilde{\Gamma}^{\prime}\). Then, \(d:=\{\zeta_{0},\zeta_{1},\cdots,\zeta_{m},c_{\gamma^{\prime}_{0}},c_{\gamma^{ \prime}_{1}},\cdots,c_{\gamma^{\prime}_{n}}\}\) is a thin deformation from \(\Gamma^{\prime}\circ\Gamma\) to \(\Gamma^{\prime}\circ\tilde{\Gamma}\), where \(\{c_{\gamma^{\prime}_{i}}\}_{i=0,1\cdots n}\) are constant paths in \(X_{1}\) defined by \(c_{\gamma^{\prime}_{i}}(r)=\gamma^{\prime}_{i}\) for all \(r\in[0,1]\). Applying the equivalence operation \(\epsilon\) on \(\Gamma^{\prime}\circ\tilde{\Gamma}\) we will obtain \(\tilde{\Gamma}^{\prime}\circ\tilde{\Gamma}\).
**Case (iv):** It can be verified in the same way as Case (iii).
The verification of the associativity of the composition and of its compatibility with the unit and inverse maps is straightforward.
**Definition 4.6**.: Given a Lie groupoid \(\mathbb{X}=[X_{1}\rightrightarrows X_{0}]\), the groupoid \(\Pi_{\mathrm{thin}}(\mathbb{X})\) is called the _thin fundamental groupoid of the Lie groupoid \(\mathbb{X}\)_.
Since an element \(x\) and a path \(\alpha\) in a manifold \(M\) can respectively be identified with the lazy \([M\rightrightarrows M]\)-paths \((1_{x},c_{x},1_{x})\) and \((1_{\alpha(0)},\alpha,1_{\alpha(1)})\), for \(c_{x}\) the constant path at \(x\), the groupoid \(\Pi_{\mathrm{thin}}(M\rightrightarrows M)\) reduces to \(\Pi_{\mathrm{thin}}(M)\).
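For instance, in this discrete case, the composition of Proposition 4.2 reduces to concatenation: for paths \(\alpha,\beta\) in \(M\) with \(\alpha(1)=\beta(0)\),
\[[(1_{\beta(0)},\beta,1_{\beta(1)})]\circ[(1_{\alpha(0)},\alpha,1_{\alpha(1)})]=[(1_{\alpha(0)},\alpha,1_{\beta(0)}\circ 1_{\alpha(1)},\beta,1_{\beta(1)})]=[(1_{\alpha(0)},\beta*\alpha,1_{\beta(1)})],\]
where the last equality removes the identity morphism \(1_{\alpha(1)}\) by operation (2) of Definition 4.2.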
In the following subsection, we will show that \(\Pi_{\mathrm{thin}}(\mathbb{X})\) is a diffeological groupoid for any Lie groupoid \(\mathbb{X}\).
### Smoothness of the thin fundamental groupoid of a Lie groupoid
Given a Lie groupoid \(\mathbb{X}\), define an infinite sequence of sets \(\{P\mathbb{X}_{n}\}_{n\in\mathbb{N}\cup\{0\}}\), where \(P\mathbb{X}_{0}:=X_{1}\) and \(P\mathbb{X}_{n}:=X_{1}\times_{t,X_{0},ev_{0}}PX_{0}\times_{ev_{1},X_{0},s}X_{1}\times_{t,X_{0},ev_{0}}\cdots\times_{t,X_{0},ev_{0}}PX_{0}\times_{ev_{1},X_{0},s}X_{1}\) for \(n\in\mathbb{N}\). It is clear from the definition itself that \(P\mathbb{X}_{n}\) has a natural identification with the set of lazy \(\mathbb{X}\)-paths of order \(n\) for each \(n\in\mathbb{N}\). Hence, as a set, \(P\mathbb{X}=\cup_{i\in\mathbb{N}\cup\{0\}}P\mathbb{X}_{i}=\sqcup_{i\in\mathbb{N}\cup\{0\}}P\mathbb{X}_{i}\).
**Proposition 4.3**.: For any Lie groupoid \(\mathbb{X}\), the set of lazy \(\mathbb{X}\)-paths \(P\mathbb{X}\) is a diffeological space.
Proof.: Since by Example 2.1 and Example 2.3 respectively, source-target and evaluation maps are maps of diffeological spaces, the fiber product diffeology (Example 2.4) ensures that for each \(n\in\mathbb{N}\), \(P\mathbb{X}_{n}\) is a diffeological space with diffeology given by \(D_{P\mathbb{X}_{n}}:=\left\{(p^{0}_{X_{1}},p^{1}_{PX_{0}},p^{1}_{X_{1}},\cdots,p^{n}_{PX_{0}},p^{n}_{X_{1}})\in D_{X_{1}}\times D_{PX_{0}}\times D_{X_{1}}\times\cdots\times D_{PX_{0}}\times D_{X_{1}}:t\circ p^{0}_{X_{1}}=ev_{0}\circ p^{1}_{PX_{0}},ev_{1}\circ p^{1}_{PX_{0}}=s\circ p^{1}_{X_{1}},\cdots,ev_{1}\circ p^{n}_{PX_{0}}=s\circ p^{n}_{X_{1}}\right\}\). Then Example 2.6 induces the sum diffeology on \(P\mathbb{X}\).
**Corollary 4.4**.: \(\frac{P\mathbb{X}}{\sim}\) is a diffeological space.
Proof.: A direct consequence of Proposition 4.3 and Example 2.5.
**Lemma 4.5**.: For any Lie groupoid \(\mathbb{X}\),
1. the multiplication map \(\tilde{m}\colon P\mathbb{X}\times_{s,X_{0},t}P\mathbb{X}\to P\mathbb{X}\) \[\left((\gamma^{\prime}_{0},\alpha^{\prime}_{1},\cdots,\alpha^{\prime}_{n},\gamma^{\prime}_{n}),(\gamma_{0},\alpha_{1},\cdots,\alpha_{m},\gamma_{m})\right)\mapsto(\gamma_{0},\alpha_{1},\cdots,\alpha_{m},\gamma^{\prime}_{0}\circ\gamma_{m},\alpha^{\prime}_{1},\cdots,\alpha^{\prime}_{n},\gamma^{\prime}_{n}),\]
2. the unit map \[\tilde{u}\colon X_{0}\to P\mathbb{X}\] \[x\mapsto(1_{x},c_{x},1_{x}),\] where \(c_{x}\) is the constant path at \(x\),
3. the inverse map \[\tilde{\text{i}}\colon P\mathbb{X}\to P\mathbb{X}\] \[(\gamma_{0},\alpha_{1},\gamma_{1},\cdots,\gamma_{n-1},\alpha_{n},\gamma_{n})\mapsto(\gamma_{n}^{-1},\alpha_{n}^{-1},\gamma_{n-1}^{-1},\cdots, \gamma_{1}^{-1},\alpha_{1}^{-1},\gamma_{0}^{-1}),\]
are maps of diffeological spaces.
Proof.: Let \((p,p^{\prime})\colon U\to P\mathbb{X}\times_{s,X_{0},t}P\mathbb{X}\) be a plot in \(P\mathbb{X}\times_{s,X_{0},t}P\mathbb{X}\) and \(x\in U\). By the definition of the sum diffeology (Example 2.6), there exist open neighbourhoods \(U_{x}^{n}\) and \(U_{x}^{n^{\prime}}\) of \(x\) and indices \(n,n^{\prime}\in\mathbb{N}\cup\{0\}\) such that \(p|_{U_{x}^{n}}\in D_{P\mathbb{X}_{n}}\) and \(p^{\prime}|_{U_{x}^{n^{\prime}}}\in D_{P\mathbb{X}_{n^{\prime}}}\). Hence, by the definition of \(\tilde{m}\), it is clear that \(\big(\tilde{m}\circ(p,p^{\prime})\big)|_{U_{x}}\in D_{P\mathbb{X}_{n+n^{\prime}}}\), where \(U_{x}=U_{x}^{n^{\prime}}\cap U_{x}^{n}\); hence \(\tilde{m}\) is smooth, and this proves (a).
(b) and (c) can be proved using techniques similar to those in the proof of (a).
We show next that the thin fundamental groupoid of a Lie groupoid (Definition 4.6) is a _diffeological groupoid_, i.e., a groupoid object in the category of diffeological spaces (**8.3**, [33]).
**Theorem 4.6**.: The thin fundamental groupoid \(\Pi_{\text{thin}}(\mathbb{X})\) of a Lie groupoid \(\mathbb{X}\) is a diffeological groupoid.
Proof.: Since we have already shown that \(\frac{P\mathbb{X}}{\sim}\) is a diffeological space (Corollary 4.4), it remains to be shown that the structure maps descend to maps of diffeological spaces.
Lemma 2.7 guarantees that the source and target maps are maps of diffeological spaces. Now, suppose \((p_{1},p_{2})\colon U\to\frac{P\mathbb{X}}{\sim}\times_{s,X_{0},t}\frac{P\mathbb{X}}{\sim}\) is a plot of \(\frac{P\mathbb{X}}{\sim}\times_{s,X_{0},t}\frac{P\mathbb{X}}{\sim}\). Hence, by the definition of the quotient diffeology, there exists a cover \(\{U_{i}\}\) of \(U\) such that for each \(i\), we have
* a plot \(\bar{p}_{1}^{i}\colon U_{i}\to P\mathbb{X}\) and \(q\circ\bar{p}_{1}^{i}=p_{1}|_{U_{i}}\) and
* a plot \(\bar{p}_{2}^{i}\colon U_{i}\to P\mathbb{X}\) and \(q\circ\bar{p}_{2}^{i}=p_{2}|_{U_{i}}\),
where \(q\) is the quotient map. Hence, clearly \((\bar{p}_{1}^{i},\bar{p}_{2}^{i})\colon U_{i}\to P\mathbb{X}\times_{s,X_{0},t}P\mathbb{X}\) is a plot of \(P\mathbb{X}\times_{s,X_{0},t}P\mathbb{X}\). The commutativity of the evident square relating \(\tilde{m}\), the quotient map \(q\), and the composition on \(\frac{P\mathbb{X}}{\sim}\) then ensures that the composition is a map of diffeological spaces, where \(\tilde{m}\) is the multiplication map defined in Lemma 4.5.
Smoothness of the unit and the inverse maps can be verified in a similar fashion.
For a morphism of Lie groupoids \(F\colon\mathbb{Y}\to\mathbb{X}\), one has the induced morphism of diffeological groupoids (see **8.3**, [33] for the definition of a morphism of diffeological groupoids) between the respective thin fundamental groupoids, \(F_{\text{thin}}\colon\Pi_{\text{thin}}(\mathbb{Y})\to\Pi_{\text{thin}}(\mathbb{X})\).
**Lemma 4.7**.: \[F_{\text{thin}}\colon\Pi_{\text{thin}}(\mathbb{Y})\to\Pi_{\text{thin}}( \mathbb{X})\]
is defined as \(y\mapsto F(y)\) for each \(y\in Y_{0}\), and a class of lazy \(\mathbb{Y}\)-path \([\Gamma]:=[(\gamma_{0},\alpha_{1},\gamma_{1},\cdots\alpha_{n},\gamma_{n})]\) goes to the class of lazy \(\mathbb{X}\)-path \([F(\Gamma)]:=[\big{(}F(\gamma_{0}),F\circ\alpha_{1},F(\gamma_{1}),\cdots,F \circ\alpha_{n},F(\gamma_{n})\big{)}]\).
A similar result exists for fundamental groupoids of Lie groupoids (see [16]).
## 5. Parallel Transport on quasi-principal 2-bundles
In this section, we investigate the parallel transport along lazy Haefliger paths in the framework of quasi-principal \(2\)-bundles. More interestingly, this construction leads to a parallel transport functor defined on the thin homotopy classes of lazy Haefliger paths. Moreover, this parallel transport functor enjoys certain smoothness properties (Theorem 5.18), and it is well behaved with respect to the pullback (Proposition 5.15) and the connection-preserving morphisms of quasi-principal \(2\)-bundles (Proposition 5.12).
We will develop the notion of parallel transport in three steps.
Consider a strict connection \(\omega\) on a quasi-principal \(\mathbb{G}\)-bundle \((\pi\colon\mathbb{E}\to\mathbb{X},\mathcal{C})\) and a lazy \(\mathbb{X}\)-path \(\Gamma=(\gamma_{0},\alpha_{1},\gamma_{1},\cdots,\alpha_{n},\gamma_{n})\).
1. For every element \(\gamma_{i}\colon x_{i}\to y_{i}\) in \(X_{1}\), we will define a \(\mathbb{G}\)-equivariant isomorphism of Lie groupoids \(T_{\mathcal{C},\pi}(\gamma_{i})\colon\pi^{-1}(y_{i})\to\pi^{-1}(x_{i})\) induced by the quasi connection \(\mathcal{C}\).
2. For every path \(\alpha_{i}\colon x_{i}^{\prime}\to y_{i}^{\prime}\), we define a \(\mathbb{G}\)-equivariant isomorphism of Lie groupoids \(T_{\omega}^{\alpha_{i}}\colon\pi^{-1}(x_{i}^{\prime})\to\pi^{-1}(y_{i}^{\prime})\) induced from the strict connection \(\omega\).
3. We will compose the above \(\mathbb{G}\)-equivariant isomorphisms of Lie groupoids successively to get \[T_{(\Gamma,\mathcal{C},\omega)}:=T_{\mathcal{C},\pi}(\gamma_{n}^{-1})\circ T _{\omega}^{\alpha_{n}}\circ\cdots\circ T_{\omega}^{\alpha_{1}}\circ T_{ \mathcal{C},\pi}(\gamma_{0}^{-1}).\]
The novelty of this approach lies in showing how the horizontal path-lifting property induced by the differential geometric notion of a connection combines with the purely categorical notion of cartesian lifts in a fibered category to produce a higher differential geometric version of the classical parallel transport functor (Theorem 5.10). The non-trivial aspect of the construction lies in proving its lazy \(\mathbb{X}\)-path thin homotopy invariance (Proposition 5.8 and Proposition 5.9).
**Step 1:**
We start with the following straightforward observation.
**Proposition 5.1**.: The underlying projection functor of a quasi-principal \(\mathbb{G}\)-bundle \((\pi\colon\mathbb{E}\to\mathbb{X},\mathcal{C})\) is a fibered category over \(\mathbb{X}\), equipped with a cleavage \(\mathcal{K}_{\mathcal{C}}:=\{\mathcal{C}(\gamma^{-1},p)^{-1}\colon(\gamma,p)\in t^{*}E_{0}\}\) (see Section 2.2).
**Definition 5.1**.: For a Lie 2-group \(\mathbb{G}\), a \(\mathbb{G}\)_-torsor_ is defined as a Lie groupoid \(\mathbb{X}\) with an action of \(\mathbb{G}\) such that the manifolds \(X_{0}\) and \(X_{1}\) are a \(G_{0}\)-torsor and a \(G_{1}\)-torsor, respectively. The collection of \(\mathbb{G}\)-torsors, \(\mathbb{G}\)-equivariant morphisms of Lie groupoids, and \(\mathbb{G}\)-equivariant natural transformations forms a 2-groupoid, which we denote by \(\mathbb{G}\)-Tor.
**Example 5.2**.: If \(\pi\colon\mathbb{E}\to\mathbb{X}\) is a principal \(\mathbb{G}\)-bundle over a Lie groupoid \(\mathbb{X}\), then for any \(x\in X_{0}\), the _fibre_ \(\pi^{-1}(x):=[\pi_{1}^{-1}(1_{x})\rightrightarrows\pi_{0}^{-1}(x)]\) is a \(\mathbb{G}\)-torsor.
Now, as a direct consequence of Proposition 5.1, we get the following:
**Proposition 5.3**.: For a quasi-principal \(\mathbb{G}\)-bundle \((\pi\colon\mathbb{E}\to\mathbb{X},\mathcal{C})\) over a Lie groupoid \(\mathbb{X}\), there is an associated \(\mathbb{G}\)-Tor-_valued pseudofunctor_ \(T_{\mathcal{C}}\colon\mathbb{X}^{\mathrm{op}}\to\mathbb{G}\)-Tor. Precisely,
1. each \(x\in X_{0}\) is assigned to the \(\mathbb{G}\)-torsor \(T_{\mathcal{C}}(x):=\pi^{-1}(x)\),
2. each morphism \(x\xrightarrow{\gamma}y\) is assigned to an isomorphism of \(\mathbb{G}\)-torsors \[\begin{split} T_{\mathcal{C}}(\gamma)&\colon\pi^{-1}(y)\to\pi^{-1}(x)\\ p&\mapsto\mu_{\mathcal{C}}(\gamma^{-1},p);\\ (p\xrightarrow{\zeta}q)&\mapsto\mathcal{C}(\gamma^{-1},q)\circ\zeta\circ\big(\mathcal{C}(\gamma^{-1},p)\big)^{-1},\end{split}\] (5.1)
3. for each \(x\in X_{0}\), we have a smooth \(\mathbb{G}\)-equivariant natural isomorphism \[\begin{split} I_{x}&\colon T_{\mathcal{C}}(1_{x}) \Longrightarrow 1_{\pi^{-1}(x)}\\ p&\mapsto\bigg{(}\mu_{\mathcal{C}}(1_{x},p) \xrightarrow{\mathcal{C}(1_{x},p)^{-1}}p\bigg{)},\end{split}\] (5.2)
4. for each pair of composable arrows, we have a smooth \(\mathbb{G}\)-equivariant natural isomorphism \[\begin{split}\alpha_{\gamma_{1},\gamma_{2}}& \colon T_{\mathcal{C}}(\gamma_{1})\circ T_{\mathcal{C}}(\gamma_{2}) \Longrightarrow T_{\mathcal{C}}(\gamma_{2}\circ\gamma_{1})\\ p&\mapsto\mathcal{C}(\gamma_{1}^{-1}\circ\gamma_ {2}^{-1},p)\circ\mathcal{C}(\gamma_{2}^{-1},p)^{-1}\circ\mathcal{C}\big{(} \gamma_{1}^{-1},t(\mathcal{C}(\gamma_{2}^{-1},p))\big{)}^{-1},\end{split}\] (5.3)
such that \(\alpha_{\gamma_{1},\gamma_{2}}\) and \(I_{x}\) satisfy the necessary coherence laws of Equation (2.4) and Equation (2.3) respectively.
**Step 2:**
Let \(A\) be a connection on an ordinary principal \(G\)-bundle \(\pi\colon E\to M\) over a smooth manifold \(M\). Given a smooth path \(\alpha\colon[0,1]\to M\) and a point \(p\in\pi^{-1}(\alpha(0))\), the unique horizontal lift of \(\alpha\) starting from \(p\) will be denoted by \(\tilde{\alpha}_{A}^{p}\). In turn, we have the so-called _parallel transport map_ \(\mathrm{Tr}_{A}^{\alpha}\colon\pi^{-1}(\alpha(0))\to\pi^{-1}(\alpha(1))\), defined by \(\mathrm{Tr}_{A}^{\alpha}(p):=\tilde{\alpha}_{A}^{p}(1)\).
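Explicitly, \(\tilde{\alpha}_{A}^{p}\) is the unique path in \(E\) satisfying
\[\tilde{\alpha}_{A}^{p}(0)=p,\qquad\pi\circ\tilde{\alpha}_{A}^{p}=\alpha,\qquad A\Big(\tfrac{d}{dt}\tilde{\alpha}_{A}^{p}(t)\Big)=0\ \text{for all}\ t\in[0,1],\]
and, since the horizontal distribution \(\ker A\) is \(G\)-invariant, \(\mathrm{Tr}_{A}^{\alpha}(pg)=\mathrm{Tr}_{A}^{\alpha}(p)g\) for all \(g\in G\); in particular, \(\mathrm{Tr}_{A}^{\alpha}\) is an isomorphism of \(G\)-torsors with inverse \(\mathrm{Tr}_{A}^{\alpha^{-1}}\), where \(\alpha^{-1}(t):=\alpha(1-t)\).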
Next, we observe a consequence of the functoriality of strict connections (Definition 2.8).
**Lemma 5.4**.: For a Lie \(2\)-group \(\mathbb{G}\), let \(\pi\colon\mathbb{E}\to\mathbb{X}\) be a principal \(\mathbb{G}\)-bundle over a Lie groupoid \(\mathbb{X}\). Any strict connection \(\omega\colon T\mathbb{E}\to L(\mathbb{G})\) induces the following:
for any paths \(\zeta\colon[0,1]\to X_{1}\) and \(\alpha\colon[0,1]\to X_{0}\), we have the identities
1. \(\mathrm{Tr}_{\omega_{0}}^{s\circ\zeta}(s(\delta))=s(\mathrm{Tr}_{\omega_{1}}^{\zeta}(\delta))\) for each \(\delta\in\pi_{1}^{-1}(\zeta(0))\);
2. \(\mathrm{Tr}_{\omega_{0}}^{t\circ\zeta}(t(\delta))=t(\mathrm{Tr}_{\omega_{1}}^{\zeta}(\delta))\) for each \(\delta\in\pi_{1}^{-1}(\zeta(0))\);
3. \(\mathrm{Tr}_{\omega_{1}}^{u\circ\alpha}(u(p))=u(\mathrm{Tr}_{\omega_{0}}^{\alpha}(p))\) for each \(p\in\pi_{0}^{-1}(\alpha(0))\).
Proof.: Observe that to prove the above identities, it is sufficient to show the following:
1. \(\widetilde{(s\circ\zeta)}_{\omega_{0}}^{\,s(\delta)}=s\circ\tilde{\zeta}_{\omega_{1}}^{\delta}\),
2. \(\widetilde{(t\circ\zeta)}_{\omega_{0}}^{\,t(\delta)}=t\circ\tilde{\zeta}_{\omega_{1}}^{\delta}\),
3. \(\widetilde{(u\circ\alpha)}_{\omega_{1}}^{\,1_{p}}=u\circ\tilde{\alpha}_{\omega_{0}}^{p}\),
which one can verify from the functoriality of \(\omega\) and \(\pi\).
We obtain the following consequence of the above lemma.
**Proposition 5.5**.: For a Lie \(2\)-group \(\mathbb{G}\), let \(\pi\colon\mathbb{E}\to\mathbb{X}\) be a principal \(\mathbb{G}\)-bundle over a Lie groupoid \(\mathbb{X}\) with a strict connection \(\omega\). Then for any given path \(\alpha\colon[0,1]\to X_{0}\), there is a \(\mathbb{G}\)-equivariant isomorphism of Lie groupoids
\[\begin{split} T_{\omega}^{\alpha}&\colon\pi^{-1}(x)\to\pi^{-1}(y)\\ p&\mapsto\operatorname{Tr}_{\omega_{0}}^{\alpha}(p)\\ \gamma&\mapsto\operatorname{Tr}_{\omega_{1}}^{u\circ\alpha}(\gamma)\end{split}\]
for all \(p\in\pi_{0}^{-1}(x)\) and \(\gamma\in\pi_{1}^{-1}(1_{x})\), where \(\alpha(0)=x\) and \(\alpha(1)=y\).
**Step 3:** Combining the results of step 1 and step 2, we arrive at the following definition:
**Definition 5.2**.: Given a quasi-principal \(\mathbb{G}\)-bundle \((\pi:\mathbb{E}\to\mathbb{X},\mathcal{C})\), a strict connection \(\omega\) and a lazy \(\mathbb{X}\)-path \(\Gamma:=(\gamma_{0},\alpha_{1},\gamma_{1},\cdots,\alpha_{n},\gamma_{n})\) from \(x\) to \(y\), the \(\mathbb{G}\)-equivariant isomorphism of Lie groupoids \(T_{(\Gamma,\mathcal{C},\omega)}:=T_{\mathcal{C},\pi}(\gamma_{n}^{-1})\circ T_ {\omega}^{\alpha_{n}}\circ\cdots\circ T_{\omega}^{\alpha_{1}}\circ T_{\mathcal{ C},\pi}(\gamma_{0}^{-1})\) is defined as the \((\mathcal{C},\omega)\)_-parallel transport along the lazy \(\mathbb{X}\)-path \(\Gamma\)._
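For example, for a lazy \(\mathbb{X}\)-path \(\Gamma=(\gamma_{0},\alpha_{1},\gamma_{1})\) of order \(1\), with \(\gamma_{0}\colon x\to\alpha_{1}(0)\) and \(\gamma_{1}\colon\alpha_{1}(1)\to y\), the composite traverses the fibres as
\[\pi^{-1}(x)\xrightarrow{\ T_{\mathcal{C},\pi}(\gamma_{0}^{-1})\ }\pi^{-1}\big(\alpha_{1}(0)\big)\xrightarrow{\ T_{\omega}^{\alpha_{1}}\ }\pi^{-1}\big(\alpha_{1}(1)\big)\xrightarrow{\ T_{\mathcal{C},\pi}(\gamma_{1}^{-1})\ }\pi^{-1}(y);\]
note that, by Equation (5.1), \(T_{\mathcal{C}}(\gamma^{-1})\colon\pi^{-1}(s(\gamma))\to\pi^{-1}(t(\gamma))\), which explains the appearance of the inverses in the formula above.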
This parallel transport enjoys the crucial properties of functoriality and of invariance under lazy \(\mathbb{X}\)-path thin homotopy, which we establish in the following subsection.
**Example 5.6** (Classical principal bundle).: Let \(\pi\colon[E\rightrightarrows E]\to[M\rightrightarrows M]\) be a principal \([G\rightrightarrows G]\)-bundle over a discrete Lie groupoid \([M\rightrightarrows M]\), equipped with the strict connection \(\omega:=(\omega,\omega)\) (Example 2.12) and the unique categorical connection \(\mathcal{C}\) (Example 2.9). Then \(T_{(\Gamma,\mathcal{C},\omega)}=T_{\omega}^{\alpha}\) for the lazy \([M\rightrightarrows M]\)-path \(\Gamma=(1_{\alpha(0)},\alpha,1_{\alpha(1)})\).
**Example 5.7** (Principal \(2\)-bundle over a manifold).: One can show that the principal \(2\)-bundle defined in Definition 2.4 coincides with the notion of a principal \(2\)-bundle over a manifold \(M\) as defined in **Definition 3.1.1** of [58] when our base Lie groupoid is of the form \([M\rightrightarrows M]\). Further, in [59], a notion of parallel transport along a path \(\alpha\) on a manifold \(M\) was introduced in terms of a Lie \(2\)-group equivariant anafunctor. In our setup, the Lie \(2\)-group equivariant anafunctor corresponding to the parallel transport on a quasi-principal \(\mathbb{G}\)-bundle (Definition 5.2) for a lazy \([M\rightrightarrows M]\)-path \((1_{\alpha(0)},\alpha,1_{\alpha(1)})\) relates to that of **Proposition 3.26** of [59].
### Parallel transport functor on a quasi-principal \(2\)-bundle
We start by establishing the lazy \(\mathbb{X}\)-path thin homotopy invariance of the parallel transport defined in Definition 5.2.
**Proposition 5.8**.: Let \((\pi:\mathbb{E}\to\mathbb{X},\mathcal{C})\) be a quasi-principal \(\mathbb{G}:=[H\rtimes_{\alpha}G\rightrightarrows G]\)-bundle with a strict connection \(\omega:T\mathbb{E}\to L(\mathbb{G})\). If a lazy \(\mathbb{X}\)-path \(\Gamma:=(\gamma_{0},\alpha_{1},\gamma_{1},\cdots,\alpha_{n},\gamma_{n})\) is equivalent (see Definition 4.2) to a lazy \(\mathbb{X}\)-path \(\Gamma^{\prime}\), then there is a smooth \(\mathbb{G}\)-equivariant natural isomorphism between \(T_{(\Gamma,\mathcal{C},\omega)}\) and \(T_{(\Gamma^{\prime},\mathcal{C},\omega)}\).
Proof.: For our purpose, it is sufficient to verify the following three cases: (A) if \(\Gamma^{\prime}\) is obtained from \(\Gamma\) by operation (1) of Definition 4.2, then there is a smooth \(\mathbb{G}\)-equivariant natural isomorphism between \(T_{(\Gamma,\mathcal{C},\omega)}\) and \(T_{(\Gamma^{\prime},\mathcal{C},\omega)}\); (B) and (C) are the analogous statements for operations (2) and (3) of Definition 4.2, respectively.
(A) and (B) follow directly from Equation (5.3) and Equation (5.2), respectively; (C) demands a little more work.
**Proof of (C).** Let \(\Gamma^{\prime}\) be obtained from \(\Gamma:=(\gamma_{0},\alpha_{1},\gamma_{1},\cdots,\alpha_{n},\gamma_{n})\) by operation (3) of Definition 4.2. That is, given a path \(\zeta_{i}\colon[0,1]\to X_{1}\) with sitting instants such that \(s\circ\zeta_{i}=\alpha_{i}\), we replace \(\alpha_{i}\) by \(t\circ\zeta_{i}\), \(\gamma_{i-1}\) by \(\zeta_{i}(0)\circ\gamma_{i-1}\) and \(\gamma_{i}\) by \(\gamma_{i}\circ(\zeta_{i}(1))^{-1}\), \(i\in\{1,2,\cdots,n\}\), to obtain \(\Gamma^{\prime}\). We have to show that
\[T_{\omega}^{\alpha_{i}}\cong T_{\mathcal{C}}(\gamma^{\prime})\circ T_{\omega }^{t\circ\zeta_{i}}\circ T_{\mathcal{C}}(\gamma^{-1}), \tag{5.4}\]
where \(\cong\) denotes a smooth \(\mathbb{G}\)-equivariant natural isomorphism, and \(\gamma:=x\xrightarrow{\zeta_{i}(0)}y\), \(\gamma^{\prime}:=x^{\prime}\xrightarrow{\zeta_{i}(1)}y^{\prime}\) are elements of \(X_{1}\). By repeated use of Proposition 5.3, this is equivalent to showing that, for the square in \(\mathbb{X}\) whose vertical sides are \(\gamma\) and \(\gamma^{\prime}\) and whose horizontal (dotted) sides are the paths \(s\circ\zeta_{i},t\circ\zeta_{i}\colon[0,1]\to X_{0}\), the induced square of fibre transports commutes up to a smooth \(\mathbb{G}\)-equivariant natural isomorphism. We claim that the desired smooth \(\mathbb{G}\)-equivariant natural isomorphism \(\eta\colon T_{\mathcal{C}}(\gamma^{\prime})\circ T_{\omega}^{t\circ\zeta_{i}}\Longrightarrow T_{\omega}^{s\circ\zeta_{i}}\circ T_{\mathcal{C}}(\gamma)\) is given by
\[p\mapsto\eta_{p}:=1_{\mu_{\mathcal{C}}\big{(}\gamma^{\prime-1},\mathrm{Tr}_{ \omega_{0}}^{t\circ\zeta_{i}}(p)\big{)}}(h_{p},e), \tag{5.5}\]
where \(h_{p}\) is the unique element in \(H\) such that
\[\mathcal{C}\big{(}\gamma^{\prime-1},\mathrm{Tr}_{\omega_{0}}^{t\circ\zeta_{i }}(p)\big{)}(h_{p},e)=\mathrm{Tr}_{\omega_{1}}^{\mathrm{i}\circ\zeta_{i}}( \mathcal{C}(\gamma^{-1},p)), \tag{5.6}\]
where \(\mathrm{i}\colon X_{1}\to X_{1}\) is the inverse map. Smoothness of the map \(p\mapsto\eta_{p}\) and the source consistency are obvious.
We verify the target consistency by observing
\[t(\eta_{p})=t\Big(\mathcal{C}\big(\gamma^{\prime-1},\mathrm{Tr}_{\omega_{0}}^{t\circ\zeta_{i}}(p)\big)(h_{p},e)\Big)=t\Big(\mathrm{Tr}_{\omega_{1}}^{\mathrm{i}\circ\zeta_{i}}\big(\mathcal{C}(\gamma^{-1},p)\big)\Big)\quad[\text{by Equation (5.6)}].\]
Hence, using Lemma 5.4, we get
\[t(\eta_{p})=t(\mathrm{Tr}_{\omega_{1}}^{\mathrm{i}\circ\zeta_{i}}(\mathcal{C} (\gamma^{-1},p)))=\mathrm{Tr}_{\omega_{0}}^{s\circ\zeta_{i}}(\mu_{\mathcal{C}} (\gamma^{-1},p)). \tag{5.7}\]
Since
\[(h_{p},e)=1_{g}(h_{pg},e)1_{g}^{-1} \tag{5.8}\]
by Equation (5.6) and \(\eta_{pg}=1_{\mu_{\mathcal{C}}(\gamma^{\prime-1},\mathrm{Tr}_{\omega_{0}}^{t\circ\zeta_{i}}(p))}1_{g}(h_{pg},e)\) by Equation (5.5), we have \(\eta_{pg}=\eta_{p}1_{g}\) by Equation (5.8).
Next, we ensure that \(\eta\) satisfies the naturality square, that is for every \(p\xrightarrow{\delta}q\in\pi^{-1}(y)\),
\[\begin{array}{ccc}T_{\mathcal{C}}(\gamma^{\prime})\circ T_{\omega}^{t\circ\zeta_{i}}(p)&\xrightarrow{\ \eta_{p}\ }&T_{\omega}^{s\circ\zeta_{i}}\circ T_{\mathcal{C}}(\gamma)(p)\\ \big\downarrow{\scriptstyle T_{\mathcal{C}}(\gamma^{\prime})\circ T_{\omega}^{t\circ\zeta_{i}}(\delta)}&&\big\downarrow{\scriptstyle T_{\omega}^{s\circ\zeta_{i}}\circ T_{\mathcal{C}}(\gamma)(\delta)}\\ T_{\mathcal{C}}(\gamma^{\prime})\circ T_{\omega}^{t\circ\zeta_{i}}(q)&\xrightarrow{\ \eta_{q}\ }&T_{\omega}^{s\circ\zeta_{i}}\circ T_{\mathcal{C}}(\gamma)(q)\end{array}\] commutes, that is, \[\eta_{q}\circ\big(T_{\mathcal{C}}(\gamma^{\prime})\circ T_{\omega}^{t\circ\zeta_{i}}(\delta)\big)=\big(T_{\omega}^{s\circ\zeta_{i}}\circ T_{\mathcal{C}}(\gamma)(\delta)\big)\circ\eta_{p}. \tag{5.9}\]
Since \(\delta=1_{p}(h,e)\) for a unique \(h\in H\), we have
\[q=p\tau(h), \tag{5.10}\]
\[T_{\mathcal{C}}(\gamma^{\prime})\circ T_{\omega}^{t\circ\zeta_{i}}(\delta)=1_{\mu_{\mathcal{C}}(\gamma^{\prime-1},\mathrm{Tr}_{\omega_{0}}^{t\circ\zeta_{i}}(p))}(h,e), \tag{5.11}\]
and
\[T_{\omega}^{s\circ\zeta_{i}}\circ T_{\mathcal{C}}(\gamma)(\delta)=1_{\mathrm{Tr}_{\omega_{0}}^{s\circ\zeta_{i}}(\mu_{\mathcal{C}}(\gamma^{-1},p))}(h,e). \tag{5.12}\]
Comparing the two sides of Equation (5.9): by Equation (5.10) and the equivariance \(\eta_{pg}=\eta_{p}1_{g}\) established above, we have \(\eta_{q}=\eta_{p}1_{\tau(h)}\); substituting this together with Equations (5.11) and (5.12) into Equation (5.9) and using the interchange law in \(H\rtimes_{\alpha}G\), both sides reduce to \(\eta_{p}(h,e)\). This proves Equation (5.9) and completes the proof.
**Proposition 5.9**.: Let \((\pi:\mathbb{E}\to\mathbb{X},\mathcal{C})\) be a quasi-principal \(\mathbb{G}:=[H\rtimes_{\alpha}G\rightrightarrows G]\)-bundle with a strict connection \(\omega:T\mathbb{E}\to L(\mathbb{G})\). If a lazy \(\mathbb{X}\)-path \(\Gamma^{\prime}:=(\gamma_{0},\alpha_{1},\gamma_{1},\cdots,\alpha_{n},\gamma_{n})\) is obtained from a lazy \(\mathbb{X}\)-path \(\Gamma\) via thin deformation, then there is a smooth \(\mathbb{G}\)-equivariant natural isomorphism between \(T_{(\Gamma,\mathcal{C},\omega)}\) and \(T_{(\Gamma^{\prime},\mathcal{C},\omega)}\).
Proof.: Consider the case when \(\Gamma^{\prime}\) is obtained from \(\Gamma\) by a thin deformation. Let \(\{\zeta_{i}\colon I\to X_{1}\}_{i=0,1,\cdots,n}\) be a thin deformation from the lazy \(\mathbb{X}\)-path \(\Gamma=(\gamma_{0},\alpha_{1},\gamma_{1},\cdots,\alpha_{n},\gamma_{n})\) to the lazy \(\mathbb{X}\)-path \(\Gamma^{\prime}=(\gamma_{0}^{\prime},\alpha_{1}^{\prime},\gamma_{1}^{\prime},\cdots,\alpha_{n}^{\prime},\gamma_{n}^{\prime})\) such that \(s(\Gamma)=s(\Gamma^{\prime})=x\) and \(t(\Gamma)=t(\Gamma^{\prime})=y\); in the corresponding diagram, the solid arrows are elements of \(X_{1}\) and the dotted arrows are paths in \(X_{0}\).
Let \(H_{i}:I\times I\to X_{0}\) be thin homotopies from \(\alpha_{i}\) to \((s\circ\zeta_{i})^{-1}*\alpha_{i}^{\prime}*(t\circ\zeta_{i-1})\) for all \(i=1,\cdots,n\). Consider,
\[u\circ H_{i}:I\times I\to X_{1},\]
which is a thin homotopy from \(u\circ\alpha_{i}\) to \(u\circ\big((s\circ\zeta_{i})^{-1}*\alpha_{i}^{\prime}*(t\circ\zeta_{i-1})\big)\) in \(X_{1}\) for each \(i\), since the rank of \(u\circ H_{i}\) is at most the rank of \(H_{i}\) at every point. From the thin homotopy invariance of parallel transport in classical principal bundles, and by the same argument as in the derivation of Equation (5.4) in Proposition 5.8, we immediately obtain the following family of equations and smooth \(\mathbb{G}\)-equivariant natural isomorphisms:
\[T_{\omega}^{\alpha_{i}^{\prime}}=T_{\omega}^{(s\circ\zeta_{i})}\circ T_{ \omega}^{\alpha_{i}}\circ T_{\omega}^{(t\circ\zeta_{i-1})^{-1}} \tag{5.13}\]
for all \(i=1,2,\dots,n\), and
\[T_{\mathcal{C}}(\gamma_{i}^{\prime-1})\cong T_{\omega}^{t\circ\zeta_{i}}\circ T _{\mathcal{C}}(\gamma_{i}^{-1})\circ T_{\omega}^{(s\circ\zeta_{i})^{-1}}, \tag{5.14}\]
for all \(i=1,\dots,n-1\), respectively. As a consequence of Equation (5.13) and Equation (5.14), we conclude \(T_{(\Gamma,\mathcal{C},\omega)}\cong T_{(\Gamma^{\prime},\mathcal{C},\omega)}\).
To define our desired parallel transport functor for a quasi-principal 2-bundle, we need to introduce a quotient category \(\overline{\mathbb{G}-\mathrm{Tor}}\) of \(\mathbb{G}\)-Tor.
**Definition 5.3**.: For a Lie 2-group \(\mathbb{G}\), we define the category \(\overline{\mathbb{G}-\mathrm{Tor}}\) as the quotient category of \(\mathbb{G}\)-Tor obtained from the congruence relation given as follows: For each pair of \(\mathbb{G}\)-torsors \(\mathbb{X},\mathbb{Y}\), the equivalence relation on \(\mathrm{Hom}_{\mathbb{G}-\mathrm{Tor}}(\mathbb{X},\mathbb{Y})\) is given by the existence of a smooth \(\mathbb{G}\)-equivariant natural isomorphism.
With the aid of Proposition 5.8 and Proposition 5.9, we derive the parallel transport functor.
**Theorem 5.10**.: Given a quasi-principal \(\mathbb{G}:=[H\rtimes_{\alpha}G\rightrightarrows G]\)-bundle \((\pi:\mathbb{E}\to\mathbb{X},\mathcal{C})\) with a strict connection \(\omega:T\mathbb{E}\to L(\mathbb{G})\), there is a functor
\[\mathcal{T}_{\mathcal{C},\omega} \colon\Pi_{\mathrm{thin}}(\mathbb{X})\to\overline{\mathbb{G}- \mathrm{Tor}}\] \[x\mapsto\pi^{-1}(x),\] \[[\Gamma]\mapsto[T_{(\Gamma,\mathcal{C},\omega)}].\]
Proof.: Well-definedness of \(\mathcal{T}_{\mathcal{C},\omega}\) follows from Proposition 5.8 and Proposition 5.9. Source and target consistencies of \(\mathcal{T}_{\mathcal{C},\omega}\) are obvious. Compatibility with the unit map and the composition follow from Equation (5.2) and Equation (5.3), respectively.
**Definition 5.4**.: Given a quasi-principal \(\mathbb{G}:=[H\rtimes_{\alpha}G\rightrightarrows G]\)-bundle \((\pi:\mathbb{E}\to\mathbb{X},\mathcal{C})\) with a strict connection \(\omega:T\mathbb{E}\to L(\mathbb{G})\), the functor \(\mathcal{T}_{\mathcal{C},\omega}\) is defined as the \((\mathcal{C},\omega)\)-_parallel transport functor of \((\pi\colon\mathbb{E}\to\mathbb{X},\mathcal{C})\)_.
**Remark 5.11**.: For a principal \([G\rightrightarrows G]\)-bundle \(\pi\colon[E\rightrightarrows E]\to[M\rightrightarrows M]\) over a discrete Lie groupoid \([M\rightrightarrows M]\), equipped with the strict connection of the form \(\omega:=(\omega,\omega)\) (Example 2.12) and the unique categorical connection \(\mathcal{C}\) (Example 2.9), the functor \(\mathcal{T}_{\mathcal{C},\omega}\) coincides with the classical one.
### Naturality of the parallel transport functor on a quasi-principal 2-bundle
We start by showing the naturality of Definition 5.4 with respect to the connection preserving morphisms.
**Proposition 5.12**.: For any morphism of quasi-principal \(\mathbb{G}\)-bundles
\[F\colon(\pi^{\prime}\colon\mathbb{E}^{\prime}\to\mathbb{X},\mathcal{C}^{\prime})\to(\pi\colon\mathbb{E}\to\mathbb{X},\mathcal{C})\]
over a Lie groupoid \(\mathbb{X}\), equipped with the strict connection \(\omega\) and the pullback connection \(F^{*}\omega\) (see Example 2.11) respectively, the functors \(\mathcal{T}_{\mathcal{C},\omega}\) and \(\mathcal{T}_{\mathcal{C}^{\prime},F^{*}\omega}\) are naturally isomorphic.
Proof.: Follows from the observation that, for every \(x\xrightarrow{\gamma}y\in X_{1}\) and for every path \(\alpha\) in \(X_{0}\) (with sitting instants) from \(p\) to \(q\) respectively, two squares commute in the category of \(\mathbb{G}\)-torsors: one comparing \(T_{\mathcal{C}}(\gamma)\) with \(T_{\mathcal{C}^{\prime}}(\gamma)\) through \(F\), and one comparing \(T_{\omega}^{\alpha}\) with \(T_{F^{*}\omega}^{\alpha}\) through \(F\). The latter square commutes as classical parallel transports are well behaved with respect to pullback (for instance, see **Lemma 3.11**, [15]), whereas the compatibility of \(F\) with \(\mathcal{C}\) and \(\mathcal{C}^{\prime}\) (see Proposition 3.2) ensures that the former commutes.
For a Lie 2-group \(\mathbb{G}\) and a Lie groupoid \(\mathbb{X}\), let \(\mathrm{Bun}^{\nabla}_{\mathrm{quasi}}(\mathbb{X},\mathbb{G})\) be the category whose objects are quasi-principal \(\mathbb{G}\)-bundles equipped with strict connections over the Lie groupoid \(\mathbb{X}\), and arrows are connection preserving morphisms. Let \(\mathrm{Trans}(\mathbb{X},\mathbb{G})\) be the category whose objects are functors \(T\colon\Pi_{\mathrm{thin}}(\mathbb{X})\to\overline{\mathbb{G}-\mathrm{Tor}}\) and arrows are natural transformations. Then the following is a direct consequence of Proposition 5.12.
**Theorem 5.13**.: The map \(\big{(}(\pi\colon\mathbb{E}\to\mathbb{X},\mathcal{C}),\omega)\mapsto\mathcal{T}_{ \mathcal{C},\omega}\) defines a functor
\[\mathcal{F}\colon\operatorname{Bun}_{\operatorname{quasi}}^{\nabla}(\mathbb{X },\mathbb{G})\to\operatorname{Trans}(\mathbb{X},\mathbb{G}),\]
where \(\omega\) is the strict connection on \(\pi\colon\mathbb{E}\to\mathbb{X}\).
For any principal \(\mathbb{G}\)-bundle \(\pi\colon\mathbb{E}\to\mathbb{X}\) and a morphism of Lie groupoids \(F\colon\mathbb{Y}\to\mathbb{X}\), the pair of pullback projections defines a principal \(\mathbb{G}\)-bundle \(F^{*}\pi\colon F^{*}\mathbb{E}:=\mathbb{Y}\times_{F,\mathbb{X},\pi}\mathbb{E}\to\mathbb{Y}\) over \(\mathbb{Y}\), where \(\mathbb{Y}\times_{F,\mathbb{X},\pi}\mathbb{E}\) is the usual strong fibered product of Lie groupoids along the morphisms \(F\) and \(\pi\) (see **Section 5.3**, [49] for details on strong fibered products of Lie groupoids). Now, the following observation is obvious.
**Lemma 5.14**.: If \((\pi\colon\mathbb{E}\to\mathbb{X},\mathcal{C})\) is a quasi-principal \(\mathbb{G}\)-bundle equipped with a strict connection \(\omega\) and \(F\colon\mathbb{Y}\to\mathbb{X}\) is any morphism of Lie groupoids, then \((F^{*}\pi\colon F^{*}\mathbb{E}\to\mathbb{Y},F^{*}\mathcal{C})\) is a quasi-principal \(\mathbb{G}\)-bundle with strict connection \(\operatorname{pr}_{2}^{*}\omega\), where \(F^{*}\mathcal{C}\colon s^{*}(F_{0}^{*}E_{0})\to F_{1}^{*}E_{1}\) is given by \((\gamma,(x,p))\mapsto\big(\gamma,\mathcal{C}\big(F_{1}(\gamma),p\big)\big)\) for \(\gamma\in Y_{1}\) and \(p\in E_{0}\) such that \(F_{0}(s(\gamma))=\pi_{0}(p)\).
The result below establishes the naturality of Definition 5.4 with respect to the pullback.
**Proposition 5.15**.: Given a quasi-principal \(\mathbb{G}\)-bundle \((\pi\colon\mathbb{E}\to\mathbb{X},\mathcal{C})\) equipped with a strict connection \(\omega\) and a morphism of Lie groupoids \(F\colon\mathbb{Y}\to\mathbb{X}\), the functors \(\mathcal{T}_{F^{*}\mathcal{C},\operatorname{pr}_{2}^{*}\omega}\) and \(\mathcal{T}_{\mathcal{C},\omega}\circ F_{\operatorname{thin}}\) (see Lemma 4.7) are naturally isomorphic.
Proof.: We claim that \(\eta\colon Y_{0}\to(\overline{\mathbb{G}{-}\mathrm{Tor}})_{1}\), defined by \(y\mapsto\eta_{y}:=[\operatorname{pr}_{2}|_{(F^{*}\pi)^{-1}(y)}]\), is the required natural isomorphism, where \(\operatorname{pr}_{2}\colon F^{*}\mathbb{E}\to\mathbb{E}\) is the second projection functor from the pullback Lie groupoid. Our claim is a consequence of the following two easy observations:
1. For every \(x\xrightarrow{\gamma}y\in Y_{1}\), we have \[[T_{\mathcal{C}}(F(\gamma))]\circ\eta_{y}=\eta_{x}\circ[T_{F^{*}\mathcal{C}}( \gamma)],\]
2. for every path (with sitting instants) \(\alpha\colon[0,1]\to Y_{0}\) such that \(\alpha(0)=a\) and \(\alpha(1)=b\), we have \[[T_{\omega}^{F(\alpha)^{-1}}]\circ\eta_{b}=\eta_{a}\circ[T_{\operatorname{pr}_ {2}^{*}\omega}^{\alpha^{-1}}].\]
### Smoothness of the Parallel transport functor on a quasi-principal 2-bundle
We begin with the following observation.
**Lemma 5.16**.: For any \(\mathbb{G}:=[H\rtimes_{\alpha}G\rightrightarrows G]\)-torsor \(\mathbb{E}\), the group of automorphisms \(\operatorname{Aut}(\mathbb{E}):=\operatorname{Hom}_{\mathbb{G}{-}\mathrm{Tor}}(\mathbb{E},\mathbb{E})\) is canonically isomorphic to the Lie group \(G\).
Proof.: For any Lie group \(G\), any \(G\)-torsor \(E\) (Section 1.1) and any point \(z\in E\), we have a group isomorphism given by
\[\begin{split}\psi_{z}&\colon\operatorname{Aut}(E):= \operatorname{Hom}_{G{-}\mathrm{Tor}}(E,E)\to G\\ f&\mapsto\delta(z,f(z)),\end{split} \tag{5.15}\]
where \(\delta\colon E\times E\to G\) is a smooth map defined implicitly as \(x\cdot\delta(x,y)=y\). The isomorphism does not depend on the choice of \(z\), and thus \(\operatorname{Aut}(E)\) can be canonically identified as a Lie group (see **Lemma 3.4** in [15]). Hence, it is sufficient to show that the map
\[\begin{split}\theta\colon&\operatorname{Aut}( \mathbb{E})\to\operatorname{Aut}(E_{0})\\ F&:=(F_{1},F_{0})\mapsto F_{0}\end{split} \tag{5.16}\]
is an isomorphism of groups. \(\theta\) is obviously a group homomorphism. Now, let \(\theta(F)=\theta(F^{\prime})\) for \(F,F^{\prime}\in\operatorname{Aut}(\mathbb{E})\), and let \(\delta\in E_{1}\). Then there exists a unique \(h_{\delta}\in H\) such that \(\delta=1_{s(\delta)}(h_{\delta},e)\). Hence, \(F_{1}(\delta)=F_{1}(1_{s(\delta)}(h_{\delta},e))=1_{F^{\prime}_{0}(s(\delta))}(h_{\delta},e)=F^{\prime}_{1}(\delta)\). So \(\theta\) is injective. Now, suppose \(f\in\operatorname{Aut}(E_{0})\). For \(\delta\in E_{1}\), define \(F_{1}(\delta):=1_{f(s(\delta))}(h_{\delta},e)\). Observe that, for any \((h,g)\in H\rtimes_{\alpha}G\), the following identity holds
\[(h_{\delta(h,g)},e)=(\alpha_{g^{-1}}(h_{\delta}h),e).\]
Then it follows that \(F_{1}\) is a morphism of \(H\rtimes_{\alpha}G\)-torsors. Hence, to show \(\theta\) is onto, it is enough to prove that \((F_{1},f)\) is a functor. Consistencies with the source, target and unit maps are obvious.
Since for any composable \(\delta_{2},\delta_{1}\in E_{1}\), we have
\[h_{\delta_{2}\circ\delta_{1}}=h_{\delta_{1}}h_{\delta_{2}},\]
it follows that \((F_{1},f)\) is consistent with the composition, and hence \(\theta\) is onto.
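For completeness, we record the (standard) check, omitted above, that \(\psi_{z}\) of Equation (5.15) is a group homomorphism: for \(f,f^{\prime}\in\operatorname{Aut}(E)\), using the \(G\)-equivariance of \(f\),
\[f\big(f^{\prime}(z)\big)=f\big(z\cdot\delta(z,f^{\prime}(z))\big)=f(z)\cdot\delta\big(z,f^{\prime}(z)\big)=z\cdot\delta\big(z,f(z)\big)\,\delta\big(z,f^{\prime}(z)\big),\]
so that \(\psi_{z}(f\circ f^{\prime})=\delta\big(z,f(f^{\prime}(z))\big)=\psi_{z}(f)\,\psi_{z}(f^{\prime})\).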
Let \(\overline{\operatorname{Aut}(\mathbb{E})}\) denote the automorphism group of the \(\mathbb{G}\)-torsor \(\mathbb{E}\) in the groupoid \(\overline{\mathbb{G}-\operatorname{Tor}}\) (Definition 5.3). Observe that the quotient functor \(\mathbb{G}-\operatorname{Tor}\to\overline{\mathbb{G}-\operatorname{Tor}}\) descends to a quotient map \(q\colon\operatorname{Aut}(\mathbb{E})\to\overline{\operatorname{Aut}(\mathbb{ E})}\).
**Proposition 5.17**.: For any \(\mathbb{G}:=[H\rtimes_{\alpha}G\rightrightarrows G]\)-torsor \(\mathbb{E}\), the group \(\overline{\operatorname{Aut}(\mathbb{E})}\) is isomorphic to the quotient group \(G/\tau(H)\). Hence, \(\overline{\operatorname{Aut}(\mathbb{E})}\) is a diffeological group.
Proof.: Consider the quotient map \(q\colon\operatorname{Aut}(\mathbb{E})\to\overline{\operatorname{Aut}( \mathbb{E})}\). Note that to show \(\overline{\operatorname{Aut}(\mathbb{E})}\cong G/\tau(H)\), by the first isomorphism theorem it is sufficient to prove
\[\psi_{z}\circ\theta(\ker(q))=\tau(H),\]
for some \(z\in E_{0}\), where \(\psi_{z}\) and \(\theta\) are the maps defined in Lemma 5.16. The inclusion \(\psi_{z}\circ\theta(\ker(q))\subseteq\tau(H)\) follows, since for any \(F\in\ker(q)\), there is a smooth \(\mathbb{G}\)-equivariant natural isomorphism \(\eta\colon\operatorname{id}_{\mathbb{E}}\Longrightarrow F\), and in turn we get the unique element \(h_{z}\in H\) such that \(\eta(z)=1_{z}(h_{z},e)\), for which \(\psi_{z}\circ\theta(F)=\tau(h_{z})\). On the other hand, for any \(h\in H\), one can define \(f\colon E_{0}\to E_{0}\) as \(z\cdot g\mapsto z\tau(h)g\) for each \(g\in G\), and thus we get an element \((F_{1},f)\in\operatorname{Aut}(\mathbb{E})\) (as in Equation (5.16)). Then one sees that \((F_{1},f)\in\ker(q)\), as the prescription \(z\cdot g\mapsto 1_{z}(h,e)(e,g)\) for each \(g\in G\) defines a smooth \(\mathbb{G}\)-equivariant natural isomorphism \(\eta\colon\operatorname{id}_{\mathbb{E}}\Longrightarrow(F_{1},f)\). Finally, as \(G\) is a Lie group, \(\overline{\operatorname{Aut}(\mathbb{E})}\) is a diffeological group (see 7.3, [33]) equipped with the quotient diffeology (Example 2.5).
Now, we are ready to show that the parallel transport functor of a quasi-principal \(2\)-bundle (Definition 5.4) is smooth in an appropriate sense.
**Theorem 5.18**.: Let \((\pi\colon\mathbb{E}\to\mathbb{X},\mathcal{C})\) be a quasi-principal \(\mathbb{G}:=[H\rtimes_{\alpha}G\rightrightarrows G]\)-bundle with a strict connection \(\omega\colon T\mathbb{E}\to L(\mathbb{G})\). Then for each \(x\in X_{0}\), the restriction map \(\mathcal{T}_{\mathcal{C},\omega}|_{\Pi_{\operatorname{thin}}(\mathbb{X},x)}\colon\Pi_{\operatorname{thin}}(\mathbb{X},x)\to\overline{\operatorname{Aut}(\pi^{-1}(x))}\) is a map of diffeological spaces, where \(\Pi_{\operatorname{thin}}(\mathbb{X},x)\) is the automorphism group of \(x\) in the diffeological groupoid \(\Pi_{\operatorname{thin}}(\mathbb{X})\).
Proof.: Let \(P\mathbb{X}_{x}\) denote the set of lazy \(\mathbb{X}\)-paths which start and end at \(x\in X_{0}\). \(P\mathbb{X}_{x}\), being a subset of \(P\mathbb{X}\), is a diffeological space (Example 2.2). Similarly, \(\Pi_{\operatorname{thin}}(\mathbb{X},x)\) is also equipped with the subspace diffeology induced from the diffeology on \(\frac{P\mathbb{X}}{\sim}\) (see Example 2.5). Let \(q^{P\mathbb{X}_{x}}\colon P\mathbb{X}_{x}\to\Pi_{\operatorname{thin}}(\mathbb{X},x)\) be the quotient map. Note that, from Lemma 2.7, it suffices to show that for any plot \(p\colon U\to P\mathbb{X}_{x}\), we have \(\mathcal{T}_{\mathcal{C},\omega}|_{\Pi_{\mathrm{thin}}(\mathbb{X},x)}\circ q^{P\mathbb{X}_{x}}\circ p\in D_{\overline{\mathrm{Aut}(\pi^{-1}(x))}}\). Given a point of \(U\), by Example 2.6, there exists an open neighbourhood \(U_{x}\subseteq U\) of it such that \(p|_{U_{x}}\) is of the form
\[p|_{U_{x}}=(p_{X_{1}}^{0},p_{PX_{0}}^{1},p_{X_{1}}^{1},\cdots,p_{PX_{0}}^{n},p_{X_{1}}^{n})\colon U_{x}\to P\mathbb{X}_{n}\]
for some \(n\in\mathbb{N}\cup\{0\}\). Observe that the smoothness of the map
\[\theta\colon U_{x}\to\mathrm{Aut}(\pi^{-1}(x)),\qquad u\mapsto T_{\left(p|_{U_{x}}(u),\mathcal{C},\omega\right)}\quad\text{[see Definition 5.2]}\]
will imply \(\mathcal{T}_{\mathcal{C},\omega}|_{\Pi_{\mathrm{thin}}(\mathbb{X},x)}\circ q ^{P\mathbb{X}_{x}}\circ p\in D_{\overline{\mathrm{Aut}(\pi^{-1}(x))}}\).
Due to the smooth structure on \(\mathrm{Aut}(\pi^{-1}(x))\) (Lemma 5.16), \(\theta\) is smooth if and only if the map
\[\bar{\theta} \colon U_{x}\to\pi_{0}^{-1}(x)\] \[u\mapsto\left(T_{\left(p|_{U_{x}}(u),\mathcal{C},\omega\right) }\right)_{0}(z)\]
is smooth for some choice of \(z\in\pi^{-1}(x)\). But, the smoothness of \(\bar{\theta}\) follows from the following sequence of facts:
\[U_{x}\to\pi_{0}^{-1}\big{(}t(p_{X_{1}}^{0}(u))\big{)},\quad u\mapsto t\big{(}\mathcal{C}(p_{X_{1}}^{0}(u),z)\big{)}\quad\text{is smooth, and}\] \[U_{x}\to\pi_{0}^{-1}\big{(}ev_{0}(p_{PX_{0}}^{1})\big{)},\quad u\mapsto\mathrm{Tr}_{\omega}^{p_{PX_{0}}^{1}(u)}\Big{(}t\big{(}\mathcal{C}(p_{X_{1}}^{0}(u),z)\big{)}\Big{)}\]
is smooth due to **Lemma 3.13** of [15]. Proceeding in this fashion along the sequence of maps in \(p|_{U_{x}}=(p_{X_{1}}^{0},p_{PX_{0}}^{1},p_{X_{1}}^{1},\cdots,p_{PX_{0}}^{n},p_{X_{1}}^{n})\colon U_{x}\to P\mathbb{X}_{n}\), we complete the proof.
**Remark 5.19**.: The smoothness of \(\mathcal{T}_{\mathcal{C},\omega}\) in Remark 5.11 obtained from Theorem 5.18 coincides with that of **Theorem 3.9** of [15] for the parallel transport functor of the classical principal \(G\)-bundle \(\pi\colon E\to M\) over the manifold \(M\). Recall that in Theorem 5.13, we defined a functor \(\mathcal{F}\colon\mathrm{Bun}_{\mathrm{quasi}}^{\nabla}(\mathbb{X},\mathbb{G})\to\mathrm{Trans}(\mathbb{X},\mathbb{G})\). At the moment, it is not conclusive whether or not \(\mathcal{F}\) provides a higher analog of **Theorem 4.1** of [15]. In ongoing work, we are investigating this direction.
## 6. Induced parallel transport on VB-groupoids along lazy Haefliger paths
As an application of the theory developed in the preceding sections, we investigate parallel transports on VB-groupoids along lazy Haefliger paths. For that, we consider the associated VB-groupoid of a quasi-principal \(2\)-bundle with respect to an action of the Lie \(2\)-group on a Baez-Crans \(2\)-vector space. For a detailed account of VB-groupoids we refer to [10, 41, 23], and for \(2\)-vector spaces to [5].
**Definition 6.1** (Definition 3.1, [23]).: A _VB-groupoid_ over a Lie groupoid \(\mathbb{X}\) is given by a morphism of Lie groupoids \(\pi\colon\mathbb{D}\to\mathbb{X}\)
such that the following conditions are satisfied:
1. the maps \(\pi_{1}\colon D_{1}\to X_{1}\) and \(\pi_{0}\colon D_{0}\to X_{0}\) are vector bundles,
2. the maps \((s_{D},s_{X}),(t_{D},t_{X})\) are morphisms of vector bundles,
3. for appropriate \(\gamma_{1},\gamma_{2},\gamma_{3},\gamma_{4}\in D_{1}\), we have \((\gamma_{3}\circ\gamma_{1})+(\gamma_{4}\circ\gamma_{2})=(\gamma_{3}+\gamma_{4})\circ(\gamma_{1}+\gamma_{2})\).
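A standard example to keep in mind, which we include here only for orientation (see, e.g., [23]), is the tangent VB-groupoid: applying the tangent functor to all the structure maps of a Lie groupoid \(\mathbb{X}=[X_{1}\rightrightarrows X_{0}]\) yields a VB-groupoid

\[T\mathbb{X}:=[TX_{1}\rightrightarrows TX_{0}]\longrightarrow\mathbb{X},\]

whose bundle maps are the tangent bundle projections \(TX_{i}\to X_{i}\).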
A _(linear) cleavage_ (see [18]) on a VB-groupoid \(\pi\colon\mathbb{D}\to\mathbb{X}\) is a smooth section \(\mathcal{C}\) of the map \(P^{\mathbb{D}}\colon D_{1}\to X_{1}\times_{s,X_{0},\pi_{0}}D_{0}\), defined by \(\delta\mapsto\big{(}\pi_{1}(\delta),s(\delta)\big{)}\), such that \(\mathcal{C}\) is also a morphism of vector bundles. A linear cleavage that satisfies the condition \(\mathcal{C}(1_{\pi(p)},p)=1_{p}\) for all \(p\in D_{0}\) is called either _unital_ [18] or a _right-horizontal lift_ [23]. A (linear) cleavage is called _flat_ if it satisfies the condition that for any pair \((\gamma_{2},p_{2}),(\gamma_{1},p_{1})\in X_{1}\times_{s,X_{0},\pi_{0}}D_{0}\) satisfying \(s(\gamma_{2})=t(\gamma_{1})\) and \(p_{2}=t\big{(}\mathcal{C}(\gamma_{1},p_{1})\big{)}\), we have \(\mathcal{C}(\gamma_{2}\circ\gamma_{1},p_{1})=\mathcal{C}(\gamma_{2},p_{2})\circ\mathcal{C}(\gamma_{1},p_{1})\).
**Definition 6.2** (Definition 3.1, [5]).: A _2-vector space_ is defined as a category \(\mathbb{V}:=[V_{1}\rightrightarrows V_{0}]\) internal to Vect, the category of finite-dimensional vector spaces over the field \(\mathbb{R}\).
In other words, a 2-vector space \(\mathbb{V}\) is a category such that both \(V_{1}\) and \(V_{0}\) are vector spaces and all structure maps are linear. Likewise, we have the notion of a functor internal to Vect between a pair of 2-vector spaces, and a natural transformation internal to Vect between a pair of such functors. These data form a strict 2-category in the usual way, denoted 2Vect; see **Section 3** of [5]. In the literature, a different notion of a 2-vector space also exists, namely the _Kapranov-Voevodsky 2-vector space_ [35].
**Example 6.1**.: Given a VB-groupoid \(\pi\colon\mathbb{D}\to\mathbb{X}\), the groupoid \(\pi^{-1}(x):=[\pi_{1}^{-1}(1_{x})\rightrightarrows\pi_{0}^{-1}(x)]\) is a 2-vector space for every \(x\in X_{0}\).
**Example 6.2**.: Given a Lie 2-group \(\mathbb{G}\), the Lie groupoid \(L(\mathbb{G}):=[L(G_{1})\rightrightarrows L(G_{0})]\) is a 2-vector space.
Next, we prescribe a construction of a VB-groupoid from a principal 2-bundle over a Lie groupoid. For this, we first define a notion of a left action of a Lie 2-group on a 2-vector space.
**Definition 6.3** (Section 11, [12]).: A _left action of a Lie 2-group \(\mathbb{G}:=[G_{1}\rightrightarrows G_{0}]\) on a 2-vector space \(\mathbb{V}:=[V_{1}\rightrightarrows V_{0}]\)_ is defined as a functor \(\rho\colon\mathbb{G}\times\mathbb{V}\to\mathbb{V}\), such that the maps \(\rho_{1}\colon G_{1}\times V_{1}\to V_{1}\) and \(\rho_{0}\colon G_{0}\times V_{0}\to V_{0}\) are left Lie group actions which induce linear representations of \(G_{1}\) and \(G_{0}\) on \(V_{1}\) and \(V_{0}\), respectively. \(\rho_{i}(g,v)\) will be denoted as \(gv\) for all \(g\in G_{i}\), \(v\in V_{i}\), and \(i=0,1\).
A weaker version of this action was studied in [29, 2], and for the representation theory of 2-groups we refer to [20, 13, 32].
### Construction of a VB-groupoid associated to a principal 2-bundle over a Lie groupoid
For a Lie 2-group \(\mathbb{G}:=[G_{1}\rightrightarrows G_{0}]\), let \(\pi\colon\mathbb{E}\to\mathbb{X}\) be a principal \(\mathbb{G}\)-bundle over a Lie groupoid \(\mathbb{X}\). Suppose there is a left action of \(\mathbb{G}\) on a 2-vector space \(\mathbb{V}=[V_{1}\rightrightarrows V_{0}]\) as in Definition 6.3. Then by the usual associated vector bundle construction (see **Chapter 1**, **Section 5** of [37]), we get a pair of vector bundles \(\{\pi_{i}^{\mathbb{V}}\colon\frac{E_{i}\times V_{i}}{G_{i}}\to X_{i}\}_{i=0,1}\), defined by \([p_{i},v_{i}]\mapsto\pi_{i}(p_{i})\) respectively. It is a straightforward but tedious verification that the pair of manifolds \(\left\{\frac{E_{i}\times V_{i}}{G_{i}}\right\}_{i=0,1}\) defines a Lie groupoid \(\frac{\mathbb{E}\times\mathbb{V}}{\mathbb{G}}:=\big{[}\frac{E_{1}\times V_{1}}{G_{1}}\rightrightarrows\frac{E_{0}\times V_{0}}{G_{0}}\big{]}\) with the obvious structure maps. We call it an _associated VB-groupoid of \(\pi\colon\mathbb{E}\to\mathbb{X}\)_.
**Remark 6.3**.: One can consider the above construction as a special case of the associated groupoid bundle construction mentioned in **Remark 3.13** of [31], where, instead of a 2-vector space, the authors considered an ordinary Lie groupoid.
**Example 6.4** (Adjoint VB-groupoid).: It is easy to verify that the adjoint VB-groupoid \(\operatorname{Ad}(\mathbb{E})\) of a principal \(\mathbb{G}\)-bundle \(\pi\colon\mathbb{E}\to\mathbb{X}\), as defined in **Section 4** of [11], can be realized as the associated VB-groupoid \(\pi^{L(\mathbb{G})}\colon\frac{\mathbb{E}\times L(\mathbb{G})}{\mathbb{G}}\to\mathbb{X}\) of \(\pi\colon\mathbb{E}\to\mathbb{X}\), with respect to the usual adjoint action of \(\mathbb{G}\) on \(L(\mathbb{G})\) (Example 6.2).
The following observation is immediate.
**Proposition 6.5**.: For a Lie 2-group \(\mathbb{G}\), let \((\pi\colon\mathbb{E}\to\mathbb{X},\mathcal{C})\) be a quasi-principal \(\mathbb{G}\)-bundle over a Lie groupoid \(\mathbb{X}\). Suppose there is a left action of \(\mathbb{G}\) on a 2-vector space \(\mathbb{V}\). Then the associated VB-groupoid \(\pi^{\mathbb{V}}\colon\frac{\mathbb{E}\times\mathbb{V}}{\mathbb{G}}\to\mathbb{ X}\) over \(\mathbb{X}\) admits a linear cleavage,
\[\mathcal{C}^{\mathbb{V}}\colon\left(X_{1}\times_{s,X_{0},\pi^{ \mathbb{V}}_{0}}\frac{E_{0}\times V_{0}}{G_{0}}\right) \to\frac{E_{1}\times V_{1}}{G_{1}}\] \[\left(\gamma,[p,v]\right) \mapsto[\mathcal{C}(\gamma,p),1_{v}].\]
Moreover, if \(\mathcal{C}\) is unital, then so is \(\mathcal{C}^{\mathbb{V}}\), and likewise if \(\mathcal{C}\) is a categorical connection then \(\mathcal{C}^{\mathbb{V}}\) is flat.
As a straightforward consequence of Proposition 6.5 and Proposition 5.3, we obtain a _2Vect-valued pseudofunctor_ corresponding to an associated VB-groupoid of a quasi-principal 2-bundle.
**Proposition 6.6**.: For a Lie 2-group \(\mathbb{G}\), let \((\pi\colon\mathbb{E}\to\mathbb{X},\mathcal{C})\) be a quasi-principal \(\mathbb{G}\)-bundle over a Lie groupoid \(\mathbb{X}\) with a left action of \(\mathbb{G}\) on a 2-vector space \(\mathbb{V}\). Then there is a _2Vect-valued pseudofunctor_
\[T_{\mathcal{C}^{\mathbb{V}}}\colon\mathbb{X}^{\mathrm{op}}\to 2\mathrm{Vect}.\]
As a direct consequence of Proposition 5.5 and the traditional notion of induced parallel transport on associated fibre bundles (see **Chapter (iii)**, [37]), we get the following:
**Proposition 6.7**.: For a Lie 2-group \(\mathbb{G}\), let \((\pi\colon\mathbb{E}\to\mathbb{X},\mathcal{C})\) be a quasi-principal \(\mathbb{G}\)-bundle over a Lie groupoid \(\mathbb{X}\) with a strict connection \(\omega\colon T\mathbb{E}\to L\mathbb{G}\). Suppose there is a left action of \(\mathbb{G}\) on a 2-vector space \(\mathbb{V}\). Then, given a path \(\alpha\colon x\to y\) in \(X_{0}\) there is an isomorphism of 2-vector spaces \(T^{\alpha}_{\omega,\mathbb{V}}\colon(\pi^{\mathbb{V}})^{-1}(x)\to(\pi^{\mathbb{ V}})^{-1}(y)\) defined as
\[T^{\alpha}_{\omega,\mathbb{V}}\colon(\pi^{\mathbb{V}})^{-1}(x) \to(\pi^{\mathbb{V}})^{-1}(y)\] \[[p,v] \mapsto[\mathrm{Tr}^{\alpha}_{\omega_{0}}(p),v],\] \[[\delta,\zeta] \mapsto[\mathrm{Tr}^{u\circ\alpha}_{\omega_{1}}(\delta),\zeta].\]
Combining Proposition 6.6 and Proposition 6.7, we obtain a notion of parallel transport on an associated VB-groupoid of a quasi-principal 2-bundle equipped with a strict connection along a lazy Haefliger path.
**Definition 6.4**.: Let a Lie 2-group \(\mathbb{G}\) act on a 2-vector space \(\mathbb{V}\), and let \((\pi\colon\mathbb{E}\to\mathbb{X},\mathcal{C})\) be a quasi-principal \(\mathbb{G}\)-bundle over a Lie groupoid \(\mathbb{X}\), with a strict connection \(\omega\colon T\mathbb{E}\to L(\mathbb{G})\). Then the isomorphism of 2-vector spaces \(T^{\mathbb{V}}_{(\Gamma,\mathcal{C},\omega)}:=T_{\mathcal{C}^{\mathbb{V}}}(\gamma_{n}^{-1})\circ T^{\alpha_{n}}_{\omega,\mathbb{V}}\circ\cdots\circ T^{\alpha_{1}}_{\omega,\mathbb{V}}\circ T_{\mathcal{C}^{\mathbb{V}}}(\gamma_{0}^{-1})\) will be called the \((\mathcal{C},\omega)\)_-parallel transport on the associated VB-groupoid \(\pi^{\mathbb{V}}\) along the lazy \(\mathbb{X}\)-path \(\Gamma=(\gamma_{0},\alpha_{1},\gamma_{1},\cdots,\alpha_{n},\gamma_{n})\)._
**Remark 6.8**.: Using Definition 5.2, \(T^{\mathbb{V}}_{(\Gamma,\mathcal{C},\omega)}\) in the above definition can be expressed in terms of \(T_{\Gamma,\mathcal{C},\omega}\) as follows:
\[T^{\mathbb{V}}_{(\Gamma,\mathcal{C},\omega)} \colon(\pi^{\mathbb{V}})^{-1}(x)\to(\pi^{\mathbb{V}})^{-1}(y)\] \[[p,v] \mapsto[T_{(\Gamma,\mathcal{C},\omega)}(p),v],\] \[[\delta,\zeta] \mapsto[T_{(\Gamma,\mathcal{C},\omega)}(\delta),\zeta].\]
Suitably adapting Theorem 5.10 to Definition 6.4, one derives the corresponding parallel transport functor.
**Remark 6.9**.: Although we have confined our attention to the notion of parallel transport on an associated VB-groupoid of a quasi-principal 2-bundle equipped with a strict connection, it is not difficult to generalize the results obtained in this section to the associated groupoid bundles mentioned in Remark 6.3.
|
2309.15146 | Gravitational Production of Spin-3/2 Particles During Reheating | We compute the density of a spin-$\frac32$ particle, the raritron, produced
at the end of inflation due to gravitational interactions. We consider a
background inflaton condensate as the source of this production, mediated by
the exchange of a graviton. This production greatly exceeds the gravitational
production from the emergent thermal bath during reheating. The relic abundance
limit sets an absolute minimum mass for a stable raritron, though there are
also model dependent constraints imposed by unitarity. We also examine the case
of gravitational production of a gravitino, taking into account the goldstino
evolution during reheating. We compare these results with conventional
gravitino production mechanisms. | Kunio Kaneta, Wenqi Ke, Yann Mambrini, Keith A. Olive, Sarunas Verner | 2023-09-26T18:00:01Z | http://arxiv.org/abs/2309.15146v1 | # Gravitational Production of Spin-3/2 Particles During Reheating
###### Abstract
We compute the density of a spin-\(\frac{3}{2}\) particle, the raritron, produced at the end of inflation due to gravitational interactions. We consider a background inflaton condensate as the source of this production, mediated by the exchange of a graviton. This production greatly exceeds the gravitational production from the emergent thermal bath during reheating. The relic abundance limit sets an absolute minimum mass for a stable raritron, though there are also model dependent constraints imposed by unitarity. We also examine the case of gravitational production of a gravitino, taking into account the goldstino evolution during reheating. We compare these results with conventional gravitino production mechanisms.
Preprint numbers: UMN–TH–4225/23, FTPI–MINN–23/17, OU–HET–1204
## I Introduction
Any inflationary theory consists of three key components [1]. First, it must have a prolonged period of exponential expansion to account for the observed flatness of the Universe. Second, the produced density fluctuations should agree with the CMB measurements of the anisotropy spectrum and tensor-to-scalar ratio [2]. Finally, the theory should incorporate a reheating phase, resulting in a hot thermal universe. This universe should have a minimum temperature of a few MeV to allow Big Bang Nucleosynthesis (BBN), and potentially even a higher temperature nearing the TeV scale or higher, which is necessary for baryogenesis.
Reheating is most efficient when a direct decay channel exists for the inflaton to Standard Model (SM) fields [3; 4]. Assuming that the decay products thermalize instantly and with an inflaton potential \(V(\phi)\) which is quadratic about its minimum, the reheating temperature is directly related to the inflaton decay rate, \(T_{\rm RH}\propto(\Gamma_{\phi}M_{P})^{\frac{1}{2}}\), where \(\Gamma_{\phi}\) is the decay rate for the inflaton, \(\phi\), and \(M_{P}=1/\sqrt{8\pi G_{N}}\simeq 2.4\times 10^{18}\) GeV is the reduced Planck mass. The reheating process is not instantaneous; at the end of inflation, the inflaton decays producing a bath of relativistic particles [5; 6; 7]. When an inflaton potential is predominantly characterized by a quadratic term near its minimum, the inflaton energy density scales as \(\rho_{\phi}\sim a^{-3}\), where \(a\) is the cosmological scale factor. The radiation density rapidly increases, reaching a peak temperature, \(T_{\rm max}\), which then falls until the energy density in radiation becomes equal to that stored in the inflaton condensate, thus defining the reheating temperature. The reheating temperature and the scaling of the radiation density, in principle, depend on the spin of the final state particle and the shape of the potential near the minimum that governs inflaton oscillations [8; 9; 10].
Once produced, the thermal bath can generate very weakly coupled non-SM particles that do not achieve thermal equilibrium [11; 12]. Importantly, these might include a dark matter component. The gravitino is a classic example of such a feebly interacting massive particle, or FIMP [4; 13; 14; 15; 16]. For a review and related studies, see [17; 18; 19; 20; 21; 22]. The relic density of a FIMP is determined by its thermally averaged production cross section from the thermal bath. Consequently, the relic abundance is sensitive to \(T_{\rm RH}\) or \(T_{\rm max}\), which depends on the form of its coupling to the SM.
In addition to the production of matter and dark matter from the thermal bath, it is also possible to produce matter directly from inflaton decays or scatterings in which case the relic density depends on the coupling of the matter to the inflaton [23; 6; 24; 8]. In the absence of a direct coupling between the inflaton and dark matter, radiative decays of the inflaton may produce a significant relic density [25; 26], provided the dark matter has a coupling to the SM particles.
When there is no direct coupling between the dark matter and either the inflaton or SM particles, production through gravitational interactions is always
present [26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48]. These gravitational interactions include processes that produce dark matter either from gravitational scattering within the thermal bath or directly from the inflaton condensate. Both scenarios have been explored for the production of either spin-0 or spin-\(\frac{1}{2}\) particles [38], and thermal production with a massive spin-2 mediator was considered in [30]. The dependence on spin in gravitational production is not immediately intuitive. However, when we represent the gravitational interaction through the exchange of a massless spin-2 graviton, the relationship between spin and gravity becomes evident. The source of production is also important. In fact, inflaton scattering can be interpreted as the scattering of spin-0 particles at rest in the case of a quadratic potential. Using a simple helicity argument, we then expect the amplitude to be proportional to the mass of a final-state fermion. On the other hand, the conformal nature of massless spin-1 particles leads to the conclusion that they cannot be produced by gravitational interactions. In conclusion, for massless final states, only scalars can be gravitationally produced by inflaton scattering.
In this work, we demonstrate that the production of spin-\(\frac{3}{2}\) particles is more intricate than the previously mentioned cases. The production of a spin-\(\frac{3}{2}\) dark matter candidate \(\psi_{\mu}\) from the thermal bath was considered in [49], where this particle was called the _raritron_. However, to produce such a raritron, it was necessary to introduce a coupling \(\psi_{\nu}A_{\mu}\nu\) between the raritron, the photon, and a neutrino, implying its metastability. It has been well known since the work of [50] that coupling a spin-\(\frac{3}{2}\) particle to the electromagnetic field leads to pathologies, though these can be addressed within the supergravity framework [51; 52]. It is therefore important to determine whether raritrons can be produced in a generic framework solely through gravitational interactions, driven by the oscillation of the inflaton. If they are stable, this would correspond to the minimum amount of spin-\(\frac{3}{2}\) fields still present in the Universe and contributing to the dark sector.
The structure of this paper is as follows: In Section II.1, we provide a brief review of the properties of a fundamental spin-\(\frac{3}{2}\) particle. Its coupling to the graviton is discussed in Section II.2, while its production rate is explored in Section II.3. In Section II.4, we compute the relic density generated by the oscillations of the inflaton at the end of the inflationary phase, mediated by graviton exchange. The gravitational production from the thermal bath is discussed in Section II.5. Finally, we apply our results to one of the best-motivated raritron models, the gravitino, in Section III, and discuss a specific supergravity model in Section III.3. In Section III.4, we compare our results with the standard thermal production of gravitinos in both low-scale and high-scale supersymmetric models. We conclude in Section IV, and provide some additional details of the calculations in Appendices A, B, and C.
## II Gravitational spin-\(\frac{3}{2}\) production
In this section, we compute the gravitational production of a spin-\(\frac{3}{2}\) particle directly from the inflaton condensate as well as from scatterings among Standard Model (SM) particles in the thermal bath. In both scenarios, the interaction is mediated by the canonical gravitational perturbation \(h_{\mu\nu}\), and the only distinction between the two processes is the source fueling the production. This perturbation arises when the space-time metric is expanded around the flat Minkowski metric, with \(g_{\mu\nu}\simeq\eta_{\mu\nu}+2h_{\mu\nu}/M_{P}\). This approximation is valid during the reheating phase after the end of inflation. Importantly, such gravitational interactions are universal and invariably exist between the inflaton, the thermal bath, and the spin-\(\frac{3}{2}\) raritron, as depicted in Fig. 1.
### The Rarita-Schwinger field
Naively, while looking at the table of known fundamental particles, we observe spin-0, spin-\(\frac{1}{2}\), spin-1 and spin-2 fields. Naturally, one might question the absence of a spin-\(\frac{3}{2}\) fundamental particle. However, it is often claimed that fields with spin higher than 1 exhibit pathologies. After the 1939 paper by Fierz and Pauli [53], where they constructed the Lagrangians for spin-\(\frac{3}{2}\) and spin-2 particles, Rarita and Schwinger proposed a more compact formulation for spin-\(\frac{3}{2}\) particles [54], leading to the equations of motion
\[(i\gamma^{\mu}\partial_{\mu}-m_{3/2})\psi_{\mu}=0\,,\quad\gamma^{\mu}\psi_{\mu }=0\,. \tag{1}\]
These equations can be obtained from the Lagrangian1
Footnote 1: For a derivation of (2) from Eq. (1), see for example [55].
\[\mathcal{L}_{3/2} = \bar{\psi}_{\mu}(i\gamma^{\mu\rho\nu}\partial_{\rho}+m_{3/2}\gamma^{\mu\nu})\psi_{\nu}\,, \tag{2}\]
with
\[\gamma^{\mu\nu} = \frac{1}{2}\left[\gamma^{\mu},\gamma^{\nu}\right]=\gamma^{\mu}\gamma^{\nu}-\eta^{\mu\nu}\,, \tag{3}\]

\[\gamma^{\mu\nu\rho} = \gamma^{\mu}\gamma^{\nu}\gamma^{\rho}-\eta^{\mu\rho}\gamma^{\nu}-\eta^{\nu\rho}\gamma^{\mu}+\eta^{\mu\nu}\gamma^{\rho}\,. \tag{4}\]

Figure 1: _Feynman diagram for the production of spin-\(\frac{3}{2}\) particles through the gravitational scattering of the inflaton condensate or the Standard Model particle bath._
A stable \(\psi_{\mu}\) is called a raritron and can constitute the majority of the dark matter component of the Universe.
### Gravitational Couplings
To compute the gravitational interactions between the raritron and the graviton, the space-time metric is expanded around Minkowski spacetime using \(g_{\mu\nu}\simeq\eta_{\mu\nu}+\frac{2b_{\mu\nu}}{M_{P}}\). The Lagrangian can then be written as (see e.g., [38; 41; 56])
\[\sqrt{-g}{\cal L}_{\rm int} = -\frac{1}{M_{P}}h_{\mu\nu}\left(T^{\mu\nu}_{\rm SM}+T^{\mu\nu}_{ \phi}+T^{\mu\nu}_{\psi_{\mu}}\right)\,, \tag{5}\]
where SM represents Standard Model fields and \(\phi\) is the inflaton. The form of the canonical stress-energy tensor \(T^{\mu\nu}_{i}\) depends on the spin of the field, with \(i=0,\,1/2,\,1,\,3/2\). For the inflaton and SM fields, we take
\[T^{\mu\nu}_{0} = \partial^{\mu}S\partial^{\nu}S-g^{\mu\nu}\left[\frac{1}{2} \partial^{\alpha}S\partial_{\alpha}S-V(S)\right]\,, \tag{6}\] \[T^{\mu\nu}_{1/2} = \frac{i}{4}\left[\bar{\chi}\gamma^{\mu}\overset{\leftrightarrow} {\partial^{\nu}}\chi+\bar{\chi}\gamma^{\nu}\overset{\leftrightarrow}{\partial ^{\mu}}\chi\right]\] (7) \[-g^{\mu\nu}\left[\frac{i}{2}\bar{\chi}\gamma^{\alpha}\overset{ \leftrightarrow}{\partial_{\alpha}}\chi-m_{\chi}\bar{\chi}\chi\right]\,,\] \[T^{\mu\nu}_{1} = \frac{1}{2}\left[F^{\mu}_{\alpha}F^{\nu\alpha}+F^{\nu}_{\alpha}F^ {\mu\alpha}-\frac{1}{2}g^{\mu\nu}F^{\alpha\beta}F_{\alpha\beta}\right]\,, \tag{8}\]
where \(V(S)\) is the scalar potential for either the inflaton or the SM Higgs boson2, with \(S=\phi,H\), and \(A\overset{\leftrightarrow}{\partial}_{\mu}B\equiv A\partial_{\mu}B-(\partial _{\mu}A)B\). Here \(F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\) is the field strength for a vector field, \(A_{\mu}\). The energy-momentum tensor for a spin-\(\frac{3}{2}\) Majorana field is given by3[57]
Footnote 2: In our calculations, we considered real scalar fields with \(H\) corresponding to 4 degrees of freedom.
Footnote 3: See Appendix A for a detailed derivation and discussion of \(T^{\mu\nu}_{3/2}\).
\[T^{\mu\nu}_{3/2} =-\frac{i}{4}\overline{\psi}_{\rho}\gamma^{(\mu}\overset{\leftrightarrow}{\partial^{\nu)}}\psi^{\rho}+\frac{i}{2}\overline{\psi}^{(\nu}\gamma^{\mu)}\overset{\leftrightarrow}{\partial_{\rho}}\psi^{\rho}+\frac{i}{2}\overline{\psi}^{\rho}\gamma^{(\mu}\overset{\leftrightarrow}{\partial_{\rho}}\psi^{\nu)}\,, \tag{9}\]
where parentheses surrounding indices indicate symmetrization, defined by \(A^{(\mu}B^{\nu)}\equiv(A^{\mu}B^{\nu}+A^{\nu}B^{\mu})/2\). For a Dirac spin-3/2 field instead, the right-hand side of Eq. (9) should be multiplied by a factor of 2.
The gravitational scattering amplitudes related to the production rate of the processes
\[\phi/{\rm SM}^{i}(p_{1})+\phi/{\rm SM}^{i}(p_{2})\rightarrow\psi(p_{3})+\psi( p_{4}) \tag{10}\]
can be parametrized by
\[{\cal M}^{i\frac{3}{2}}\propto M^{\frac{3}{2}}_{\mu\nu}\Pi^{\mu\nu\rho\sigma}M ^{i}_{\rho\sigma}\;, \tag{11}\]
where \(i=0,1/2,1\) denotes the spin of the initial state involved in the scattering process. Note that we are summing over all polarizations, justifying the absence of Lorentz indices in Eq. (10) for the raritron. Here, \(\Pi^{\mu\nu\rho\sigma}\) is the graviton propagator for the canonical field \(h_{\mu\nu}\) with momentum \(k=p_{1}+p_{2}\),
\[\Pi^{\mu\nu\rho\sigma}(k)=\frac{\eta^{\mu\rho}\eta^{\nu\sigma}+\eta^{\mu\sigma }\eta^{\nu\rho}-\eta^{\mu\nu}\eta^{\rho\sigma}}{2k^{2}}\;. \tag{12}\]
The partial amplitudes, \(M^{i}_{\mu\nu}\), can be expressed by [38]
\[M^{0}_{\mu\nu} = \frac{1}{2}\left[p_{1\mu}p_{2\nu}+p_{1\nu}p_{2\mu}-\eta_{\mu\nu} p_{1}\cdot p_{2}-\eta_{\mu\nu}V^{\prime\prime}(S)\right]\,, \tag{13}\] \[M^{\frac{1}{2}}_{\mu\nu} = \frac{1}{4}\bar{v}(p_{2})\left[\gamma_{\mu}(p_{1}-p_{2})_{\nu}+ \gamma_{\nu}(p_{1}-p_{2})_{\mu}\right]u(p_{1})\,,\] (14) \[M^{1}_{\mu\nu} = \frac{1}{2}\bigg{[}\epsilon_{2}^{*}\cdot\epsilon_{1}\left(p_{1\mu }p_{2\nu}+p_{1\nu}p_{2\mu}\right)\] (15) \[- \epsilon_{2}^{*}\cdot p_{1}\left(p_{2\mu}\epsilon_{1\nu}+\epsilon _{1\mu}p_{2\nu}\right)-\epsilon_{1}\cdot p_{2}\left(p_{1\nu}\epsilon_{2\mu}^{* }+p_{1\mu}\epsilon_{2\nu}^{*}\right)\] \[+ p_{1}\cdot p_{2}\left(\epsilon_{1\mu}\epsilon_{2\nu}^{*}+ \epsilon_{1\nu}\epsilon_{2\mu}^{*}\right)\] \[+ \eta_{\mu\nu}\left(\epsilon_{2}^{*}\cdot p_{1}\epsilon_{1}\cdot p_{ 2}-p_{1}\cdot p_{2}\,\epsilon_{2}^{*}\cdot\epsilon_{1}\right)\bigg{]}\,,\]
where the masses of the SM fermions and vector fields have been neglected. The partial amplitude for the Majorana spin-\(\frac{3}{2}\) field \(\psi_{\mu}\) is given by
\[M^{\frac{3}{2}}_{\mu\nu} = \frac{1}{4}\left[\bar{v}^{\alpha}(p_{4})\gamma_{(\mu}(p_{3}-p_{4 )\nu)}u_{\alpha}(p_{3})\right. \tag{16}\] \[- 2\left.\bar{v}^{\alpha}(p_{4})\gamma_{(\mu}(p_{3}-p_{4})_{\alpha} u_{\nu)}(p_{3})\right.\] \[- 2\left.\bar{v}_{(\nu}(p_{4})\gamma_{\mu)}(p_{3}-p_{4})_{\alpha} u^{\alpha}(p_{3})\right]\,,\]
where we defined \(\psi_{\mu}(p)=u_{\mu}(p)e^{-ipx}\). In this section, we do not rely on any specific model of inflation and keep our discussion as general as possible. Any model satisfying the constraints on the slow-roll parameters as imposed by _Planck_ data [2] will suffice, provided there is a well-defined minimum and the potential can be expanded as \(V(\phi)\simeq\lambda\phi^{k}/M^{k-4}_{P}\) around this minimum. For example, both the Starobinsky model [58] and \(\alpha\)-attractor type models [59] are sufficient.
We consider two distinct processes illustrated by the Feynman diagram in Fig. 1:
i) The production of raritrons from the inflaton \(\phi+\phi\rightarrow\psi+\psi\). In the case of a quadratic potential, the inflaton behaves like a massive particle at rest,4 with four-momentum \(p_{1,2}\). Its partial amplitude is then directly
given by Eq. (13). However, for a generic potential \(V(\phi)\), we need to use the zero mode of the inflaton condensate that is valid for any arbitrary minimum (see below and [41] for a detailed discussion).
ii) The production from the thermal background, \(\mathrm{SM}+\mathrm{SM}\to\psi+\psi\), which uses Eqs. (6-8) for SM particles on the right-hand side of Eq. (11). In the following subsection, we compute the full scattering amplitudes for both channels and determine the gravitational production rate of the raritron.
### Gravitational Production from the Inflaton Condensate
We begin by examining the gravitational production of the raritron from the inflaton condensate. Although particle production occurs throughout the reheating process, the dominant source of energy density emerges at the onset of oscillations after inflation, when the oscillation amplitude peaks. Notably, despite the gravitational production process being Planck-suppressed, inflaton condensate scattering continues to be a substantial source of particle production, particularly at the beginning of the reheating process, when its energy density is very large.
#### ii.3.1 Quadratic potential minimum
We consider the case of a quadratic minimum first, with \(V(\phi)\simeq\frac{1}{2}m_{\phi}^{2}\phi^{2}\). In this case, the rate computation is straightforward. We evaluate the square of the matrix element in Eq. (11) using \(M_{\rho\sigma}^{0}\) for the incoming inflaton state. For the inflaton condensate, we assume that the incoming inflaton three-momenta vanish, \(\mathbf{p}_{1,2}=0\), and compute \(|\mathcal{M}^{0\frac{3}{2}}|^{2}\) using the spinor sums [60; 61]
\[P_{ab} = \sum_{s=-3/2}^{+3/2}u_{a}(\mathbf{p},s)\bar{u}_{b}(\mathbf{p},s) \leavevmode\nobreak\ =\leavevmode\nobreak\ \left(\not{p}+m_{3/2}\right)\times \tag{17}\] \[\left(\eta_{ab}-\frac{1}{3}\gamma_{a}\gamma_{b}-\frac{2}{3}\frac {p_{a}p_{b}}{m_{3/2}^{2}}+\frac{p_{a}\gamma_{b}-p_{b}\gamma_{a}}{3m_{3/2}} \right)\,,\]
and
\[Q_{ab} = \sum_{s=-3/2}^{+3/2}v_{a}(\mathbf{p},s)\bar{v}_{b}(\mathbf{p},s) \leavevmode\nobreak\ =\leavevmode\nobreak\ \left(\not{p}-m_{3/2}\right)\times \tag{18}\] \[\left(\eta_{ab}-\frac{1}{3}\gamma_{a}\gamma_{b}-\frac{2}{3}\frac {p_{a}p_{b}}{m_{3/2}^{2}}-\frac{p_{a}\gamma_{b}-p_{b}\gamma_{a}}{3m_{3/2}} \right)\,.\]
Using the above expressions, we find that the total matrix element squared is given by Eq. (33) shown in Appendix B. For the inflaton condensate, this expression can be simplified significantly by writing \(t=m_{3/2}^{2}-m_{\phi}^{2}\) and \(s=4m_{\phi}^{2}\), and the matrix element squared (33) becomes
\[|\overline{\mathcal{M}}|^{2} = \frac{m_{\phi}^{4}s}{18M_{P}^{4}m_{3/2}^{2}}\left(1-\frac{4m_{3/2 }^{2}}{s}\right)\left(1-\frac{6m_{3/2}^{2}}{s}+\frac{18m_{3/2}^{4}}{s^{2}} \right) \tag{19}\] \[= \frac{2}{9}\frac{m_{\phi}^{6}}{m_{3/2}^{2}M_{P}^{4}}\left(1-\frac {m_{3/2}^{2}}{m_{\phi}^{2}}\right)\left(1-\frac{3}{2}\frac{m_{3/2}^{2}}{m_{ \phi}^{2}}+\frac{9}{8}\frac{m_{3/2}^{4}}{m_{\phi}^{4}}\right)\,.\]
The production rate, \(R^{\phi^{k}}\), for a quadratic minimum with \(k=2\), can be written as [55]
\[R^{\phi^{2}} = n_{\phi}^{2}\langle\sigma v\rangle=\frac{\rho_{\phi}^{2}}{m_{ \phi}^{2}}\frac{|\mathcal{M}|^{2}}{32\pi m_{\phi}^{2}}\frac{p_{3}}{m_{\phi}}\,, \tag{20}\]
where \(p_{3}=\sqrt{m_{\phi}^{2}-m_{3/2}^{2}}\), and if we use the matrix element squared (19), we find
\[R^{\phi^{2}} = \frac{2\times\rho_{\phi}^{2}}{288\pi M_{P}^{4}}\left(\tau^{-1}- \frac{3}{2}+\frac{9}{8}\tau\right)(1-\tau)^{3/2}\leavevmode\nobreak\, \tag{21}\]
with \(\tau=m_{3/2}^{2}/m_{\phi}^{2}\). The factor of 2 explicitly accounts for the fact that two raritrons are produced by each annihilation.5
Footnote 5: We note that in Appendix B, we provide the amplitude in the case of scalar scattering, which yields a rate which is larger by a factor of 2 compared to inflaton scattering when considering a condensate \(\phi\).
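As a rough numerical illustration, and purely as a minimal sketch of our own (the function below is not part of the original analysis; the fiducial masses and densities are the \(\alpha\)-attractor values used later in the text), the rate of Eq. (21) is simple to evaluate:

```python
import numpy as np

M_P = 2.435e18  # reduced Planck mass [GeV]

def rate_condensate_k2(rho_phi, m_phi, m32):
    """Raritron production rate R^{phi^2} of Eq. (21), in GeV^4."""
    tau = (m32 / m_phi) ** 2
    return (2.0 * rho_phi**2 / (288.0 * np.pi * M_P**4)
            * (1.0 / tau - 1.5 + 9.0 * tau / 8.0) * (1.0 - tau) ** 1.5)

# Rate at the end of inflation for a light raritron (m_3/2 << m_phi):
print(rate_condensate_k2((5.2e15) ** 4, 1.7e13, 1.0e9))
```

Note the \(\tau^{-1}\) growth of the rate as the raritron becomes lighter, which is the enhancement of the longitudinal mode discussed next.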
Before extending our result to a more general potential \(V(\phi)\), we would like to make a few comments regarding Eq. (21). The massive raritron could have been considered, naively, as a Clebsch-Gordan decomposition of a massive spin-1 boson and a spin-\(\frac{1}{2}\) fermion, which is manifestly not the case when we look at the limit \(\tau\to 0\) of Eq. (21). Indeed, we would expect no production of massless fermions, due to helicity conservation [33], and no divergences are expected for the production of a massless vector field [34]. This reflects the inherent pathology of theories with spin \(>1\), implying that we should treat the unitarity constraints with care when we analyze the bounds on \(m_{3/2}\).
#### ii.3.2 General potentials
Gravitational particle production from the inflaton condensate naturally depends on the shape of the potential. We extend our discussion and consider a more general potential which, about its minimum, takes the form
\[V(\phi) = \lambda\frac{\phi^{k}}{M_{P}^{k-4}}\,,\qquad\phi\ll M_{P}\,. \tag{22}\]
We parameterize the time-dependent oscillating inflaton field as
\[\phi(t)\ =\ \phi_{0}(t)\cdot\mathcal{P}(t)\,, \tag{23}\]
where \(\phi_{0}(t)\) is the time-dependent envelope that includes the effects of redshift and \(\mathcal{P}(t)\) describes the periodicity of the oscillation. Then for a potential of the form (22), we can write \(V(\phi)=V(\phi_{0})\cdot\mathcal{P}(t)^{k}\) and expand the potential energy in terms of its Fourier modes [9; 62; 63]
\[V(\phi)=V(\phi_{0})\sum_{n=-\infty}^{\infty}\mathcal{P}_{k,n}e^{-in\omega t}= \langle\rho_{\phi}\rangle\sum_{n=-\infty}^{\infty}\mathcal{P}_{k,n}e^{-in\omega t }\,, \tag{24}\]
where \(\omega\) is the frequency of oscillation of \(\phi\), given by [9]
\[\omega=m_{\phi}\sqrt{\frac{\pi k}{2(k-1)}}\frac{\Gamma(\frac{1}{2}+\frac{1}{k} )}{\Gamma(\frac{1}{k})}\,, \tag{25}\]
with \(m_{\phi}^{2}=\partial^{2}V/\partial\phi^{2}|_{\phi_{0}}\),
\[\mathcal{P}(t)^{k}\ =\ \sum_{n=-\infty}^{\infty}\mathcal{P}_{k,n}e^{-in\omega t }\,, \tag{26}\]
and \(\langle\rho_{\phi}\rangle\) is the mean energy density averaged over the oscillations.
To compute the inflaton condensate scattering rate, we follow the treatment presented in Appendix C. We find that the raritron production rate is given by
\[R^{\phi^{k}}\ =\ \frac{2\times\rho_{\phi}^{2}}{72\pi M_{P}^{4}}\Sigma_{3/2}^{k}\,, \tag{27}\]
where
\[\Sigma_{3/2}^{k}\ =\ \sum_{n=1}^{+\infty}|\mathcal{P}_{k,n}|^{2} \frac{E_{n}^{2}}{m_{3/2}^{2}}\left(1-6\frac{m_{3/2}^{2}}{E_{n}^{2}}+18\frac{m_ {3/2}^{4}}{E_{n}^{4}}\right)\times\\ \times\left[1-\frac{4m_{3/2}^{2}}{E_{n}^{2}}\right]^{3/2}\,. \tag{28}\]
Here the superscript \(k\) corresponds to the type of potential minimum \(V(\phi)\sim\phi^{k}\sim\mathcal{P}^{k}\), \(E_{n}=n\omega\) is the energy of the \(n\)-th mode of the inflaton oscillation, and \(m_{3/2}\) is the produced raritron mass. In the quadratic case, where \(\omega=m_{\phi}\) (see Eq. (25)) and \(\mathcal{P}(t)^{2}=\cos^{2}(m_{\phi}t)=\frac{1}{2}+\frac{1}{4}(e^{-2im_{\phi}t}+e^{2im_{\phi}t})\), since \(\sum|\mathcal{P}_{2,n}|^{2}=|\mathcal{P}_{2,2}|^{2}=\frac{1}{16}\), only the second mode in the Fourier expansion contributes to the sum. Taking \(E_{2}=2m_{\phi}\), we find that the rate (27) reduces to Eq. (20).
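The Fourier data entering Eq. (28) can also be checked numerically; the snippet below (a minimal sketch of our own) recovers the coefficients of \(\mathcal{P}(t)^{2}\) for \(k=2\) with an FFT and confirms \(|\mathcal{P}_{2,2}|^{2}=1/16\):

```python
import numpy as np

# Fourier coefficients of P(t)^k, Eq. (26), for the quadratic case k = 2,
# where P(t) = cos(m_phi t). We work in units m_phi = 1 over one period.
N = 4096
t = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
P_squared = np.cos(t) ** 2

coeffs = np.fft.fft(P_squared) / N  # c_n in P^2 = sum_n c_n e^{-i n w t}
print(abs(coeffs[2]) ** 2)          # -> 0.0625 = 1/16: only E_2 = 2 m_phi contributes
```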
We also consider separately the production of the \(\pm\frac{1}{2}\) and \(\pm\frac{3}{2}\) helicity components. One can express the spin-\(\frac{3}{2}\) polarization vector as a direct product of spin-1 and spin-\(\frac{1}{2}\) polarization vectors. We introduce the following spin-\(\frac{3}{2}\) Clebsch-Gordan decomposition for the spinor6
Footnote 6: As a side comment, from this decomposition, one can also derive the spinor-helicity formalism for massive spin-\(3/2\) fields, and compute helicity amplitudes. See for example [64].
\[u_{\pm 3/2}^{\mu}(p) =\ \epsilon_{\pm}^{\mu}(p)u_{\pm 1/2}(p)\,, \tag{29}\] \[u_{\pm 1/2}^{\mu}(p) =\ \sqrt{\frac{2}{3}}\epsilon_{0}^{\mu}(p)u_{\pm 1/2}(p)\] \[+\frac{1}{\sqrt{3}}\epsilon_{\pm}^{\mu}(p)u_{\mp 1/2}(p)\,,\] (30) \[v_{\pm 3/2}^{\mu}(p) =\ \epsilon_{\pm}^{\mu\ast}(p)v_{\pm 1/2}(p)\,,\] (31) \[v_{\pm 1/2}^{\mu}(p) =\ \sqrt{\frac{2}{3}}\epsilon_{0}^{\mu\ast}(p)v_{\pm 1/2}(p)\] \[+\frac{1}{\sqrt{3}}\epsilon_{\pm}^{\mu\ast}(p)v_{\mp 1/2}(p)\,. \tag{32}\]
We find that the raritron production rate (27) can be decomposed as
\[R^{\phi^{k}}\ =\ \frac{2\times\rho_{\phi}^{2}}{72\pi M_{P}^{4}}\left(\Sigma_{3/2,3/2}^{k}+\Sigma_{3/2,1/2}^{k}\right)\,, \tag{33}\]
where the transverse spin \(\pm\frac{3}{2}\) contribution is given by
\[\Sigma_{3/2,3/2}^{k}\ =\ \sum_{n=1}^{+\infty}|\mathcal{P}_{k,n}|^{2} \frac{E_{n}^{2}}{m_{3/2}^{2}}\times\left(9\frac{m_{3/2}^{4}}{E_{n}^{4}}\right)\times\\ \times\left[1-\frac{4m_{3/2}^{2}}{E_{n}^{2}}\right]^{3/2}\,, \tag{34}\]
and the longitudinal spin \(\pm\frac{1}{2}\) contribution is
\[\Sigma_{3/2,1/2}^{k}\ =\ \sum_{n=1}^{+\infty}|\mathcal{P}_{k,n}|^{2} \frac{E_{n}^{2}}{m_{3/2}^{2}}\times\left(1-3\frac{m_{3/2}^{2}}{E_{n}^{2}}\right) ^{2}\times\\ \times\left[1-\frac{4m_{3/2}^{2}}{E_{n}^{2}}\right]^{3/2}\,. \tag{35}\]
We note that the sum of transverse and longitudinal components satisfy the expression (28), with \(\Sigma_{3/2}^{k}=\Sigma_{3/2,3/2}^{k}+\Sigma_{3/2,1/2}^{k}\).
Returning to the pathology of the limit \(m_{3/2}\to 0\) (\(\tau\to 0\)) discussed above, we observe that the transverse components \(\pm\frac{3}{2}\) are not produced for \(m_{3/2}=0\). These components correspond to a direct composition between a spin-\(\frac{1}{2}\) fermion and the transverse components of a spin-1, as we can see in Eqs. (29) and (31). As these transverse components are not gravitationally produced for massless particles [33], it stands to reason that their production rate vanishes in the massless limit for a spin-\(\frac{3}{2}\) particle as
well. In other words, the transverse modes are expected to be highly suppressed for light raritron, relative to the longitudinal mode which is enhanced and could be considered as the _goldstino_ in a gauged framework.
In Fig. 2, we plot separately the longitudinal and transverse components for \(k=2\). We clearly see the effect we have just described: the transverse mode is always produced in negligible quantities compared with the longitudinal mode, except in the limit where the mass of the raritron is of the order of the fundamental mode, \(m_{3/2}\simeq m_{\phi}\). The slopes for masses \(m_{3/2}\lesssim m_{\phi}\), keeping only the first contributing Fourier mode as an approximation, give \(R_{\pm 3/2}\propto m_{3/2}^{2}\) and \(R_{\pm 1/2}\propto m_{3/2}^{-2}\), and do not depend on \(k\). The absolute value of the rates depends on the Fourier coefficients \(\mathcal{P}_{k,n}\), which themselves become very similar for large values of \(k\); hence we do not expect large differences for larger values of \(k\).
### Relic abundance calculation
Given the production rate, we next compute the abundance of raritrons from the Boltzmann equation,
\[\frac{dn}{dt}+3Hn=R^{\phi^{k}}\,, \tag{36}\]
where \(H=\frac{\dot{a}}{a}\) is the Hubble parameter. It is convenient to rewrite the Boltzmann equation in terms of the scale factor,
\[\frac{dY}{da}=\frac{a^{2}R^{\phi^{k}}}{H}\,, \tag{37}\]
where \(Y\equiv a^{3}n\). To integrate this expression we need to include the dependence \(H(a)\) with
\[H(a)=\frac{\rho_{\phi}^{\frac{1}{2}}(a)}{\sqrt{3}M_{P}}\,. \tag{38}\]
The conservation of energy for the inflaton field imposes
\[\frac{d\rho_{\phi}}{dt}+3(1+w)H\rho_{\phi} = Ha\left[\frac{d\rho_{\phi}}{da}+3(1+w)\frac{\rho_{\phi}}{a}\right] \tag{39}\] \[= (1+w)\Gamma_{\phi}\rho_{\phi}\,,\]
whose solution is, for \(\Gamma_{\phi}\ll H\)
\[\rho_{\phi}(a)=\rho_{\rm end}\left(\frac{a_{\rm end}}{a}\right)^{\frac{6k}{k+ 2}}=\rho_{\rm RH}\left(\frac{a_{\rm RH}}{a}\right)^{\frac{6k}{k+2}}\,. \tag{40}\]
In these expressions, \(a_{\rm end}\) is the value of the scale factor when accelerated expansion (inflation) ends, \(\rho_{\rm end}=\rho_{\phi}(a_{\rm end})\), \(a_{\rm RH}\) is the scale factor when \(\rho_{R}(a_{\rm RH})=\rho_{\phi}(a_{\rm RH})\), defining the moment of reheating. The Boltzmann equation (37) then becomes
\[\frac{dY}{da}=\frac{\sqrt{3}M_{P}}{\sqrt{\rho_{\rm RH}}}a^{2}\left(\frac{a}{a _{\rm RH}}\right)^{\frac{3k}{k+2}}R^{\phi^{k}}(a)\,. \tag{41}\]
Restricting our attention to the case \(k=2\), we have \(\rho_{\phi}\sim a^{-3}\), \(\rho_{R}\sim T^{4}\sim a^{-3/2}\), with \(m_{\phi}^{2}=2\lambda M_{P}^{2}\). The Boltzmann equation becomes
\[\frac{dY}{da}=\frac{\sqrt{3}M_{P}}{\sqrt{\rho_{\rm RH}}}a^{2}\left(\frac{a}{a _{\rm RH}}\right)^{\frac{3}{2}}R^{\phi^{2}}(a)\,, \tag{42}\]
where \(R^{\phi^{2}}(a)\) is given by Eq. (21). Eq. (42) is easily integrated to give
\[n(a_{\rm RH}) = \frac{1}{72\sqrt{3}\pi M_{P}}\left(\frac{\rho_{\rm end}}{M_{P}^{ 4}}\right)^{\frac{1}{2}}\alpha T_{\rm RH}^{4} \tag{43}\] \[\times\left(\tau^{-1}-\frac{3}{2}+\frac{9}{8}\tau\right)(1-\tau)^ {3/2}\,,\]
where we assumed that \(a_{\rm RH}\gg a_{\rm end}\) and \(\alpha\) is defined by
\[\rho_{R} = \frac{g_{T}\pi^{2}}{30}T^{4}\equiv\alpha T^{4}\,. \tag{44}\]
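Before converting to a relic abundance, it is straightforward to confirm Eq. (43) by direct numerical integration of Eq. (42); the script below is a minimal sketch of our own, assuming the fiducial values \(m_{\phi}=1.7\times 10^{13}\) GeV, \(\rho_{\rm end}=(5.2\times 10^{15}\,{\rm GeV})^{4}\), and \(g_{\rm RH}=427/4\) used in the text:

```python
import numpy as np
from scipy.integrate import solve_ivp

M_P = 2.435e18                            # reduced Planck mass [GeV]
m_phi, m32 = 1.7e13, 1.0e9                # inflaton and raritron masses [GeV]
rho_end = (5.2e15) ** 4                   # rho_phi at the end of inflation [GeV^4]
T_RH = 1.0e10                             # reheating temperature [GeV]
alpha = (427.0 / 4.0) * np.pi**2 / 30.0   # Eq. (44)
rho_RH = alpha * T_RH**4
tau = (m32 / m_phi) ** 2
F = (1.0 / tau - 1.5 + 9.0 * tau / 8.0) * (1.0 - tau) ** 1.5

a_end, a_RH = 1.0, (rho_end / rho_RH) ** (1.0 / 3.0)  # rho_phi ~ a^-3 for k = 2

def dY_da(a, Y):
    rho_phi = rho_RH * (a_RH / a) ** 3
    R = 2.0 * rho_phi**2 / (288.0 * np.pi * M_P**4) * F   # Eq. (21)
    return np.sqrt(3.0) * M_P / np.sqrt(rho_RH) * a**2 * (a / a_RH) ** 1.5 * R

sol = solve_ivp(dY_da, (a_end, a_RH), [0.0], rtol=1e-8)
n_numeric = sol.y[0, -1] / a_RH**3
n_analytic = (np.sqrt(rho_end) / M_P**2) * alpha * T_RH**4 * F \
             / (72.0 * np.sqrt(3.0) * np.pi * M_P)        # Eq. (43)
print(n_numeric / n_analytic)  # -> ~1, up to the small (a_end/a_RH)^{3/2} correction
```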
Using [55]
\[\Omega h^{2}\simeq 1.6\times 10^{8}\frac{g_{0}}{g_{\rm RH}}\frac{n(T_{\rm RH})}{T _{\rm RH}^{3}}\frac{m_{3/2}}{1\ {\rm GeV}}\,, \tag{45}\]
we then obtain
\[\Omega h^{2} \simeq 3\times 10^{9}\left(\frac{T_{\rm RH}}{10^{10}{\rm GeV}}\right) \left(\frac{\rho_{\rm end}}{(5.2\times 10^{15}{\rm GeV})^{4}}\right)^{\frac{1}{2}} \tag{46}\] \[\left(\frac{m_{\phi}}{1.7\times 10^{13}{\rm GeV}}\right)^{2} \left(\frac{{\rm EeV}}{m_{3/2}}\right)\,,\]
where we take \(g_{0}=43/11\) and \(g_{\rm RH}=427/4\), and assume \(m_{3/2}\ll m_{\phi}\); the values of \(m_{\phi}\) and \(\rho_{\rm end}\) are normalized to an \(\alpha\)-attractor model of inflation with \(k=2\), though there is some additional dependence on \(T_{\rm RH}\) for these quantities [8; 38; 65].

Figure 2: _Longitudinal and transverse raritron production rates for \(k=2\) in units of \(R\times M_{P}^{4}/\rho_{\phi}^{2}\) as a function of \(\tau=m_{3/2}^{2}/m_{\phi}^{2}\). As can be seen from the figure, raritron production is completely dominated by the longitudinal component, which contains a factor \(\tau^{-1}\)._
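To make the numbers explicit, Eq. (46) can be inverted for the mass that saturates the observed abundance; this is an illustrative sketch of our own, with the fiducial factors of Eq. (46) set to unity:

```python
def m32_for_relic(T_RH_GeV, omega_h2=0.12):
    """Raritron mass [GeV] saturating Eq. (46); valid only for m32 << m_phi."""
    EeV = 1.0e9  # GeV
    return 3.0e9 * (T_RH_GeV / 1.0e10) / omega_h2 * EeV

print(m32_for_relic(2.0e-3))  # T_RH ~ 2 MeV -> ~5e6 GeV, i.e. a few PeV
print(m32_for_relic(1.0e2))   # T_RH ~ 100 GeV -> ~2.5e11 GeV
```

The BBN floor indeed returns a few PeV, consistent with the minimal mass quoted below.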
As one can see, the gravitational production of raritrons is extremely efficient, much more efficient than the production of scalars, spin-\(\frac{1}{2}\) fermions, or vectors [33]. As a consequence, stable raritrons are only possible if either \(T_{\rm RH}\) is quite low (of order the weak scale or below) or \(m_{3/2}\simeq m_{\phi}\). This can be seen in Fig. 3, where the red curve shows the values of \(m_{3/2}\) and \(T_{\rm RH}\) such that \(\Omega h^{2}=0.12\) from Eq. (46). To remain consistent with big bang nucleosynthesis, \(T_{\rm RH}\gtrsim 2\) MeV, which implies that raritron dark matter must be heavier than \(\sim 6\) PeV. This minimal mass, _unavoidable_ because the raritron is produced gravitationally, is one of the main results of our work.
Using Eqs. (33)-(35), it is possible to separate out the contributions of the transverse and longitudinal contributions to \(\Omega_{\frac{3}{2}}\). For the transverse contribution for \(k=2\) we have
\[n_{\frac{3}{2}}(a_{\rm RH})=\frac{1}{128\sqrt{3}\pi M_{P}}\left(\frac{\rho_{ \rm end}}{M_{P}^{4}}\right)^{\frac{1}{2}}\alpha T_{\rm RH}^{4}\tau(1-\tau)^{ \frac{3}{2}}\,, \tag{47}\]
and
\[\Omega_{\frac{3}{2}}h^{2} \simeq 2\times 10^{-8}\left(\frac{T_{\rm RH}}{10^{10}~{}{\rm GeV}} \right)\left(\frac{\rho_{\rm end}}{(5.2\times 10^{15}{\rm GeV})^{4}}\right)^{ \frac{1}{2}} \tag{48}\] \[\left(\frac{1.7\times 10^{13}{\rm GeV}}{m_{\phi}}\right)^{2} \left(\frac{m_{3/2}}{{\rm GeV}}\right)^{3}\,.\]
As expected and discussed previously, the gravitational production of the transverse mode is completely negligible.
Similarly, we can compute the longitudinal contribution
\[n_{\frac{1}{2}}(a_{\rm RH}) = \frac{1}{72\pi\sqrt{3}M_{P}}\left(\frac{\rho_{\rm end}}{M_{P}^{4 }}\right)^{\frac{1}{2}}\alpha T_{\rm RH}^{4} \tag{49}\] \[\times\left(\tau^{-1}-\frac{3}{2}+\frac{9}{16}\tau\right)\left(1 -\tau\right)^{3/2}\,,\]
which, for \(m_{3/2}\ll m_{\phi}\), gives the result in Eq. (46) for \(\Omega_{\frac{1}{2}}h^{2}\), since the production of raritrons is completely dominated by the longitudinal component, which carries the factor of \(\tau^{-1}\).
At this point it is important to note that for "low" values of \(\tau\), we may run into a problem with unitarity. The amplitude in Eq. (19) becomes of order unity when \(m_{3/2}\lesssim 1\) TeV. However, raritron scattering \(\psi_{\mu}\psi_{\mu}\to h_{\mu\nu}\to\psi_{\mu}\psi_{\mu}\) is further enhanced, and we estimate that its amplitude scales as \(|{\cal M}|^{2}\propto m_{\phi}^{4}/(M_{P}^{4}\tau^{4})\), which would exceed unity when \(m_{3/2}\lesssim 40\) EeV!7 Combined with Eq. (46), masses in this allowed range correspond to reheating temperatures \(T_{\rm RH}\gtrsim 10\) GeV.
Footnote 7: We consider here a non-supersymmetric theory where the spin-\(\frac{3}{2}\) Lagrangian is given by (2). When supersymmetry is introduced, an additional contribution to raritron scattering arises from the four-Fermi coupling, which cancels the most divergent term in the amplitude, leading to \(|{\cal M}|^{2}\propto m_{\phi}^{4}/(M_{P}^{4}\tau^{2})\)[66]. In this case, unitarity is violated when \(m_{3/2}\lesssim 0.1\) EeV.
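The quoted numbers follow from simple arithmetic; as a back-of-envelope check of our own, setting \(|{\cal M}|^{2}\sim 1\) in the two scalings above gives \(m_{3/2}\sim(m_{\phi}^{3}/M_{P})^{1/2}\) and, in the supersymmetric case, \(m_{3/2}\sim m_{\phi}^{2}/M_{P}\):

```python
import numpy as np

M_P, m_phi = 2.435e18, 1.7e13   # GeV

# |M|^2 ~ m_phi^4/(M_P^4 tau^4) = 1 -> tau = m_phi/M_P -> m32 = (m_phi^3/M_P)^(1/2)
print(np.sqrt(m_phi**3 / M_P) / 1e9)   # ~45 EeV (text: ~40 EeV)

# Supersymmetric case: |M|^2 ~ m_phi^4/(M_P^4 tau^2) = 1 -> m32 = m_phi^2/M_P
print(m_phi**2 / M_P / 1e9)            # ~0.12 EeV (footnote: ~0.1 EeV)
```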
Finally, we can also generalize this result to cases when \(k\neq 2\). We find that the number density can be expressed as
\[n(a_{\rm RH}) = \frac{(k+2)}{(k-1)}\frac{\rho_{\rm RH}^{3/2}}{72\sqrt{3}\pi M_{P }^{3}}\left(\frac{\rho_{\rm end}}{\rho_{\rm RH}}\right)^{1-\frac{1}{k}}\Sigma _{3/2}^{k}\,, \tag{50}\]
and the dark matter abundance becomes
\[\Omega h^{2}\simeq 2.2\times 10^{5}\frac{(k+2)}{(k-1)}\left(\frac{\rho_{ \rm end}}{\rho_{\rm RH}}\right)^{1-\frac{1}{k}}\frac{\rho_{\rm RH}^{3/4}}{M_{P }^{3}}\frac{m_{3/2}}{1~{}{\rm GeV}}\Sigma_{3/2}^{k}\,. \tag{51}\]
As expected, for \(k=2\) this expression reduces to Eq. (46).
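This reduction can be checked numerically; the sketch below (our own, using the fiducial values above and the \(k=2\) Fourier data \(|\mathcal{P}_{2,2}|^{2}=1/16\), \(E_{2}=2m_{\phi}\)) evaluates Eq. (51) and recovers the normalization of Eq. (46):

```python
import numpy as np

M_P, m_phi, m32 = 2.435e18, 1.7e13, 1.0e9   # GeV; m32 = 1 EeV
T_RH, rho_end = 1.0e10, (5.2e15) ** 4
alpha = (427.0 / 4.0) * np.pi**2 / 30.0
rho_RH = alpha * T_RH**4

# Sigma^{k=2}_{3/2} from Eq. (28): only n = 2 contributes, with E_2 = 2 m_phi
tau = (m32 / m_phi) ** 2
Sigma = 0.25 * (1.0 / tau - 1.5 + 9.0 * tau / 8.0) * (1.0 - tau) ** 1.5

k = 2
omega_h2 = (2.2e5 * (k + 2) / (k - 1) * (rho_end / rho_RH) ** (1 - 1 / k)
            * rho_RH**0.75 / M_P**3 * m32 * Sigma)
print(omega_h2)   # ~3e9, in agreement with Eq. (46) for m_3/2 = 1 EeV
```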
### Gravitational Production of Raritrons from the Thermal Bath
The production of raritron dark matter from the thermal bath is also possible. The scattering of SM particles includes the Higgs scalars, gauge bosons, and fermions in the initial state. Since the initial particle momenta \(p_{1}\) and \(p_{2}\) are large (of order \(m_{\phi}\)) at the beginning of reheating and dominate over electroweak scale quantities, we assume that the initial particle states are massless.
For the Higgs initial state we can use Eq. (70) with the association \(\phi\to h\) and set \(m_{\phi}=0\) (i.e., we neglect the Higgs mass, and all SM masses relative to the reheating temperature in the thermal bath). In this case, Eq. (70) reduces to
\[|\mathcal{M}^{0}|^{2}=\frac{1}{72m_{3/2}^{4}M_{P}^{4}s^{2}}\left[-s^ {2}t(s+t)(s+2t)^{2}-72m_{3/2}^{12}\right.\] \[+\left.24m_{3/2}^{10}(7s+12t)-2m_{3/2}^{8}\left(47s^{2}+264st+216t ^{2}\right)\right.\] \[+\left.m_{3/2}^{6}\left(-2s^{3}+244s^{2}t+576st^{2}+288t^{3}\right)\right.\] \[+\left.m_{3/2}^{4}\left(s^{4}-34s^{3}t-210s^{2}t^{2}-240st^{3}-72 t^{4}\right)\right.\] \[\left.+m_{3/2}^{2}s\left(s^{4}+6s^{3}t+44s^{2}t^{2}+64st^{3}+24t^ {4}\right)\right]\,. \tag{52}\]
In addition to the production of raritrons from a Higgs initial state, other SM particles in thermal bath will also lead to raritron production. The amplitudes for massless fermion and gauge boson initial states are given in Eqs. (71) and (72) respectively.
The dark matter production rate \(R(T)\) for the SM+SM \(\rightarrow\psi+\psi\) process with amplitude \(\mathcal{M}\) is given by8[24; 55; 67]
Footnote 8: We note that we include the symmetry factors associated with identical initial and final states in the squared amplitude, \(|\overline{\mathcal{M}}|^{2}\).
\[R(T) = \frac{2}{1024\pi^{6}}\times\int f_{1}f_{2}E_{1}\,\mathrm{d}E_{1}E _{2}\,\mathrm{d}E_{2}\,\mathrm{d}\cos\theta_{12} \tag{53}\] \[\times\int|\overline{\mathcal{M}}|^{2}\,\mathrm{d}\Omega_{13}\,,\]
where we assumed that \(s\gg 4m_{3/2}^{2}\), and the factor of two accounts for the two raritrons produced per scattering. Here \(E_{i}\) denotes the energy of particle \(i=1,2,3,4\), while \(\theta_{13}\) and \(\theta_{12}\) are the angles formed by momenta \(\mathbf{p}_{1,3}\) (in the center-of-mass frame) and \(\mathbf{p}_{1,2}\) (in the laboratory frame), respectively. The infinitesimal solid angle in the above integral is then \(\mathrm{d}\Omega_{13}=2\pi\,\mathrm{d}\cos\theta_{13}\). In addition,
\[f_{i}=\frac{1}{e^{E_{i}/T}\pm 1}\,, \tag{54}\]
represents the assumed thermal distributions of the incoming SM particles.
The total amplitude squared for the gravitational scattering process SM+SM \(\rightarrow\psi+\psi\) is given by a sum of the three amplitudes associated with three different SM initial state spins,
\[|\overline{\mathcal{M}}|^{2}=4|\overline{\mathcal{M}}^{0}|^{2}+45|\overline{ \mathcal{M}}^{1/2}|^{2}+12|\overline{\mathcal{M}}^{1}|^{2}\,. \tag{55}\]
Using this amplitude and performing the thermal integration in Eq. (53), we find that the raritron production rate can be parameterized by
\[R_{\frac{3}{2}}^{T}= R_{\frac{3}{2}}(T)=\beta_{1}\frac{T^{12}}{m_{3/2}^{4}M_{P}^{4}}+ \beta_{2}\frac{T^{10}}{m_{3/2}^{2}M_{P}^{4}} \tag{56}\] \[+\beta_{3}\frac{T^{8}}{M_{P}^{4}}+\beta_{4}\frac{m_{3/2}^{2}T^{6 }}{M_{P}^{4}}+\beta_{5}\frac{m_{3/2}^{4}T^{4}}{M_{P}^{4}}\,,\]
where the numerical coefficients together with the details of the computation are given in Appendix B.
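For reference, Eq. (56) is simple to encode; in this sketch of our own, only \(\beta_{1}\simeq 3.6\) is quoted in the text below, so the remaining coefficients are placeholders to be taken from Appendix B:

```python
def raritron_thermal_rate(T, m32, betas, M_P=2.435e18):
    """Thermal production rate of Eq. (56) [GeV^4].

    betas = (b1, ..., b5): coefficients from Appendix B; only
    beta_1 ~ 3.6 is quoted in the text, the rest are placeholders.
    """
    b1, b2, b3, b4, b5 = betas
    return (b1 * T**12 / (m32**4 * M_P**4)
            + b2 * T**10 / (m32**2 * M_P**4)
            + b3 * T**8 / M_P**4
            + b4 * m32**2 * T**6 / M_P**4
            + b5 * m32**4 * T**4 / M_P**4)

# For T >> m32 the first term dominates:
print(raritron_thermal_rate(1.0e10, 1.0e9, (3.6, 0.0, 0.0, 0.0, 0.0)))
```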
The gravitational scattering within the thermal plasma produces the raritrons. We focus on the case \(k=2\) and show that the thermal production rate is strongly sub-dominant compared to the production from the inflaton condensate. Following the same steps as in the previous subsection, we replace the rate in Eq. (41) by the thermal raritron production rate (56). After expressing the temperature as function of the scale factor by solving
\[\frac{d\rho_{R}}{da}+4\frac{\rho_{R}}{a}=\frac{\Gamma_{\phi}\rho_{\phi}}{Ha}\,, \tag{57}\]
we find that the thermally-produced number density is given by
\[n^{T}(T_{\mathrm{RH}})=\frac{2\beta_{1}}{\alpha^{3}}\frac{\sqrt {3}\rho_{\mathrm{RH}}^{5/2}}{m_{3/2}^{4}M_{P}^{2}}\ln\left(\frac{\sqrt{\rho_{ \mathrm{end}}}}{\sqrt{\alpha}T_{\mathrm{RH}}^{2}}\right)\] \[+\frac{4\beta_{2}}{\sqrt{3}\alpha^{5/2}}\frac{\rho_{\mathrm{RH}} ^{2}}{m_{3/2}^{2}M_{P}^{3}}+\frac{2\beta_{3}}{\sqrt{3}}\frac{\rho_{\mathrm{RH} }^{3/2}}{M_{P}^{3}}\] \[+\frac{4\beta_{4}}{3\sqrt{3}\alpha^{3/2}}\frac{\rho_{\mathrm{RH} }m_{3/2}^{2}}{M_{P}^{3}}+\frac{\beta_{5}}{\sqrt{3}\alpha}\frac{\sqrt{\rho_{ \mathrm{RH}}}m_{3/2}^{4}}{M_{P}^{3}}\,. \tag{58}\]
We note that in our computation, we assumed that \(4m_{3/2}^{2}\ll s\), where \(s=(p_{1}+p_{2})^{2}\), which would approximately correspond to \(m_{3/2}\lesssim T_{\mathrm{RH}}\), and we integrated Eq. (41) between \(a_{\mathrm{end}}\) and \(a_{\mathrm{RH}}\). 9 Moreover, since \(\beta_{1}\simeq 3.6\) is greater than \(\beta_{i=2..5}\), the first term dominates the production process for \(m_{\frac{3}{2}}<T_{\mathrm{RH}}\).
Footnote 9: As discussed below, when \(T_{\mathrm{RH}}<m_{3/2}\), the integration is limited between \(a_{\mathrm{end}}\) and \(a_{3/2}\), where the latter corresponds to the scale factor when \(T=m_{3/2}\).
Using Eq. (45) for the relic abundance, we obtain
\[\Omega^{T}h^{2} = 5.9\times 10^{6}\,\frac{n^{T}(T_{\mathrm{RH}})}{T_{\mathrm{RH}}^{3 }}\frac{m_{3/2}}{1\ \mathrm{GeV}} \tag{59}\] \[\simeq 3.6\times 10^{-5}\left(\frac{T_{\mathrm{RH}}}{10^{10}\ \mathrm{GeV}} \right)^{7}\left(12+\ln\left(\frac{10^{10}\mathrm{GeV}}{T_{\mathrm{RH}}} \right)\right)\] \[\times\left(\frac{\mathrm{EeV}}{m_{3/2}}\right)^{3}\,,\]
where, in the approximation, we considered only the first term in Eq. (58). In Fig. 3, we show in blue the constraint on the relic abundance in the \((m_{3/2},T_{\mathrm{RH}})\) plane
from raritrons produced gravitationally from the thermal plasma. As expected, the relic density generated by the thermal source is negligible compared with that generated by inflaton oscillations in all of the parameter space. This result is also valid for \(k>2\).
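The hierarchy between the two sources is easy to quantify; the following sketch of our own compares the leading term of Eq. (59) with Eq. (46), with the fiducial factors set to unity:

```python
import numpy as np

def omega_thermal(T_RH, m32_EeV):      # leading term of Eq. (59)
    x = T_RH / 1.0e10
    return 3.6e-5 * x**7 * (12.0 + np.log(1.0 / x)) / m32_EeV**3

def omega_condensate(T_RH, m32_EeV):   # Eq. (46)
    return 3.0e9 * (T_RH / 1.0e10) / m32_EeV

print(omega_thermal(1.0e10, 1.0) / omega_condensate(1.0e10, 1.0))  # ~1e-13
```

Even at \(T_{\rm RH}=10^{10}\) GeV the thermal contribution lies some thirteen orders of magnitude below the condensate one.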
## III Gravitino dark matter
The cosmological production of gravitinos has been a constant source of potential cosmological problems due to their over-production. Standard thermal production (discussed briefly in Section III.4 below) sets upper limits on the reheating temperature after inflation [68; 13; 14; 6; 15; 23; 16]. There is also a non-thermal contribution to gravitino production when supersymmetry is broken during inflation [69; 70; 71; 72; 73; 74; 75]. In these cases, as we will also see below, it is the goldstino which is produced (i.e., the longitudinal component, rather than the transverse component), and this is typically the inflatino (the superpartner of the inflaton) during and immediately after inflation. Indeed, it was argued that the longitudinal component of the gravitino at low energy may be unrelated to the inflatino produced after inflation [73; 74]. Whether or not the production of inflatinos is problematic is a model-dependent question, and inflatino production may even be kinematically suppressed [76]. Non-thermal production may also occur if the gravitino sound speed vanishes [77; 78; 79]. However, in this case, unless the models are constrained by eliminating the pseudoscalar, fermionic, and auxiliary components of the inflaton, no catastrophic production occurs [80; 81; 82].
In the remainder of this section, we first consider toy models involving two Majorana fermions coupled to the inflaton. This is a (highly) simplified example of the inflaton coupling to the gravitino and inflatino. In this case, as in the non-thermal production of the raritron, the longitudinal component of the gravitino may be easily overproduced. However, as we have just alluded to, the produced state may not be the longitudinal component of the gravitino at low energy. Finally, we consider a specific model of inflation and supersymmetry breaking. If reheating is prolonged, the gravitino may be produced, though with a suppressed abundance.
### Toy models
In the previous section, we considered the gravitational production of raritrons from the inflaton condensate and thermal bath. As we have seen in Fig. 3, the production from the condensate is dominant, and from Fig. 2, we see that the production of the longitudinal (spin-\(\frac{1}{2}\)) component dominates over the transverse (spin-\(\frac{3}{2}\)) component, particularly at low masses. In addition, as we will be interested in the production of gravitinos from the inflaton condensate as a concrete example in the next subsection, we would like to consider a toy model (not based on supergravity) which couples two spin-\(\frac{1}{2}\) Majorana fields, \(\psi\) and \(\chi\), to the inflaton. We will consider the production of \(\psi\) through \(\chi\)-exchange having in mind the production of a goldstino through inflatino exchange when considering the supersymmetric analogue.
The toy model assumes a Yukawa coupling of the form
\[\mathcal{L}_{\rm int} = -y\phi\bar{\chi}\psi+h.c.\,, \tag{60}\]
and a direct coupling of the inflaton to a pair of \(\psi\)'s is absent. We further assume \(m_{\chi}>m_{\phi}>m_{\psi}\) in our setup, so a direct decay of \(\phi\to\chi\psi\) is not allowed kinematically. The coherent oscillation of \(\phi\) during reheating can however still produce \(\psi\) through \(\phi\phi\to\psi\psi\) by exchanging \(\chi\), whose diagrams are shown in Fig. 4.
As in the case of raritron production in the previous section, the abundance of \(\psi\) can be obtained by integrating the Boltzmann equation given the production rate \(\Gamma_{\phi\phi\to\psi\psi}\rho_{\phi}/m_{\phi}\)
\[R(t) = \frac{2\times y^{4}}{\pi}\frac{\rho_{\phi}^{2}}{m_{\phi}^{4}} \frac{\tau_{\psi}(1-\tau_{\psi})^{3/2}}{(1+\tau_{\chi}-\tau_{\psi})^{2}}\,, \tag{61}\]
where \(\tau_{i}\equiv m_{i}^{2}/m_{\phi}^{2}\). As for previous rates, the factor of 2 in the numerator explicitly accounts for the fact that two \(\psi\)'s are produced per reaction. Using this rate, the Boltzmann equation (41) can be integrated to give
\[n(a_{\rm RH}) = \frac{4y^{4}M_{P}^{3}}{\sqrt{3}\pi m_{\phi}^{4}}\left(\frac{ \rho_{\rm end}}{M_{P}^{4}}\right)^{\frac{1}{2}}\alpha T_{\rm RH}^{4} \tag{62}\] \[\times\frac{\tau_{\psi}(1-\tau_{\psi})^{3/2}}{(1+\tau_{\chi}- \tau_{\psi})^{2}}\,,\]
and
\[\Omega_{\psi}h^{2} \simeq 0.12y^{4}\left(\frac{T_{\rm RH}}{10^{10}\ {\rm GeV}}\right)\left(\frac{1.7\times 10^{13}\ {\rm GeV}}{m_{\phi}}\right)^{2}\left(\frac{\rho_{\rm end}}{(5.2\times 10^{15}{\rm GeV})^{4}}\right)^{\frac{1}{2}}\left(\frac{10^{14}\ {\rm GeV}}{m_{\chi}}\right)^{4}\left(\frac{m_{\psi}}{33\ {\rm TeV}}\right)^{3}\,, \tag{63}\]
where we have taken the limit that \(m_{\chi}\gg m_{\phi}\gg m_{\psi}\).
Given the above normalizations, the spin-\(\frac{1}{2}\) fermion will
provide the correct relic density when \(m_{\psi}\simeq 33\) TeV. One further sees that, rather than a divergence at small \(m_{\psi}\), the relic density goes to \(0\) in this limit. This can be easily understood on the basis of helicity conservation. Fig. 5 shows the parameter space satisfying \(\Omega_{\psi}h^{2}=0.12\) with \(y=1\), \(m_{\phi}=1.7\times 10^{13}\) GeV, and \(\rho_{\rm end}=(5.2\times 10^{15}\) GeV\()^{4}\). Indeed, because of the helicity suppression, the relic density of \(\psi\) scales as \(m_{\psi}^{3}\) as opposed to \(m_{3/2}^{-1}\), and hence we see very different behaviors when comparing the results in Fig. 3 and Fig. 5.
We can also consider a similar toy model which matches more closely the supergravity couplings of the gravitino longitudinal mode. This simple Lagrangian can be written as
\[\mathcal{L}_{\rm int} = -\frac{y}{M_{P}}\partial_{\mu}\phi\bar{\chi}\gamma^{\mu}\psi+h.c.\,. \tag{64}\]
Repeating the above exercise to calculate the production rate of \(\psi\), we find
\[R(t) = \frac{2\times y^{4}}{\pi}\frac{\rho_{\phi}^{2}}{M_{P}^{4}}\frac{ \tau_{\psi}(1-\tau_{\psi})^{3/2}}{(1+\tau_{\chi}-\tau_{\psi})^{2}}\,, \tag{65}\]
which is suppressed relative to the rate in Eq. (61) by a factor of \((m_{\phi}/M_{P})^{4}\). The integration of the rate will be identical and the number density of \(\psi\)'s in Eq. (62) will be suppressed by the same factor. As a result, the mass needed to achieve \(\Omega_{\psi}h^{2}=0.12\) is significantly larger, \(m_{\psi}\simeq 2.4\times 10^{11}\) GeV. The relation between \(T_{\rm RH}\) and \(m_{\psi}\) is also shown in Fig. 5. As one can clearly see, the derivative coupling leading to the suppression requires a significantly larger mass \(m_{\psi}\) for a given reheating temperature in order to achieve the same relic density.
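As a rough numerical check (assuming the reduced Planck mass \(M_{P}\simeq 2.4\times 10^{18}\) GeV, which is not restated here), the \((m_{\phi}/M_{P})^{4}\) suppression of the rate must be compensated by the \(m_{\psi}^{3}\) scaling of the relic density, so that

\[m_{\psi}\simeq 33~{\rm TeV}\times\left(\frac{M_{P}}{m_{\phi}}\right)^{4/3}\simeq 33~{\rm TeV}\times\left(1.4\times 10^{5}\right)^{4/3}\simeq 2.4\times 10^{11}~{\rm GeV}\,,\]

in agreement with the value quoted above.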
### Inflatino exchange
We next turn to the example of the gravitino in supergravity models. When supergravity models of inflation are considered, the gravitino (\(\psi_{\mu}\)) generally couples to the inflaton (\(\Phi\)) and the inflatino (\(\chi\)) through the following terms:
\[\mathcal{L}_{\rm int} = -\frac{i}{\sqrt{2}M_{P}}\left[(\partial_{\mu}\Phi)^{*}\bar{\psi}_{\nu}\gamma^{\mu}\gamma^{\nu}P_{L}\chi-(\partial_{\mu}\Phi)\bar{\chi}P_{R}\gamma^{\nu}\gamma^{\mu}\psi_{\nu}\right]\,. \tag{66}\]
In the following argument, we assume that the imaginary part of \(\Phi\) is strongly stabilized, and the canonically-normalized real part \(\phi\equiv\sqrt{2}{\rm Re}\Phi\) is oscillating with an inflaton potential \(V(\phi)\) after the end of inflation.
In addition to the gravitational production of gravitinos discussed previously, a pair of gravitinos can also be produced from the inflaton condensate via inflatino exchange, through both the t- and u-channel diagrams shown in Fig. 4 with the replacement \(\psi\to\psi_{\mu}\). Here, we are making the (naive) assumption that the supersymmetry breaking sector is distinct from the inflationary sector and that the inflaton does not break supersymmetry. In this case, the goldstino (or spin-\(\frac{1}{2}\) component of the gravitino) is distinct from the inflatino. We return to a more realistic example in the next subsection.

Figure 4: Feynman diagrams of the dark matter production processes.

In the Boltzmann equation, the production rate \(R(t)\) is computed as
\[R(t) = \frac{2\rho_{\phi}^{2}}{9\pi M_{P}^{4}}\frac{(1-\tau_{3/2})^{7/2}}{ \tau_{3/2}(1+\tau_{\chi}-\tau_{3/2})^{2}}\,, \tag{67}\]
where we have assumed \(k=2\). Note that as discussed in Appendix C, only the spin-\(\frac{1}{2}\) component of \(\psi_{\mu}\) is produced. Details of the computation of the production rate are also given in Appendix C. It is also interesting to see, by comparing Eqs. (61) and (67), that the production of the longitudinal component of \(\psi_{\mu}\) (the spin-\(\frac{1}{2}\) part) is enhanced \(\propto m_{3/2}^{-4}\) for a light gravitino compared to the production of a spin-\(\frac{1}{2}\) fermion (61), for reasons similar to those invoked when discussing the gravitational production of the raritrons.10
Footnote 10: Arguments based on the equivalence theorem can also be used to understand this.
With this rate, the number density of gravitinos is obtained by solving Eq. (41),
\[n(a_{\rm RH}) = \frac{4}{9\sqrt{3}\pi M_{P}}\left(\frac{\rho_{\rm end}}{M_{P}^{4}}\right)^{\frac{1}{2}}\alpha T_{\rm RH}^{4}\times\frac{(1-\tau_{3/2})^{7/2}}{\tau_{3/2}(1+\tau_{\chi}-\tau_{3/2})^{2}}\,. \tag{68}\]
When \(\tau_{\chi}\simeq 1\) and \(\tau_{3/2}\ll 1\), this number density is roughly 8 times larger than that from graviton exchange given in Eq. (43). It should not be surprising that the two results (43) and (68) are so similar, since the exchange of a graviton involves couplings of order \(\partial_{\mu}/M_{P}\) generated by terms of the type \(T_{\mu\nu}/M_{P}\), which have exactly the same form as the couplings between the inflaton, the inflatino and the gravitino given in Eq. (66). The graviton propagator differs from the inflatino propagator only in its structure, not in its order of magnitude.
The relic gravitino abundance can then be estimated using Eq. (45)
\[\Omega h^{2} \simeq 2.4\times 10^{10}\left(\frac{T_{\rm RH}}{10^{10}{\rm GeV}}\right)\left(\frac{\rho_{\rm end}}{(5.2\times 10^{15}{\rm GeV})^{4}}\right)^{\frac{1}{2}}\left(\frac{m_{\phi}}{1.7\times 10^{13}{\rm GeV}}\right)^{2}\left(\frac{{\rm EeV}}{m_{3/2}}\right)\,, \tag{69}\]
which is, as expected, about 8 times larger than Eq. (46). Furthermore, as we saw previously, this abundance is highly dominated by the spin-\(\frac{1}{2}\) component. However, as we stressed earlier, this result ignores any contribution to supersymmetry breaking from the inflaton sector and any possible mixing between the spin-\(\frac{1}{2}\) partner of the inflaton, the inflatino, and the partner of the scalar associated with supersymmetry breaking in the vacuum.
### Specific Supergravity Model
Let us now consider a more realistic example, in which the identity of the goldstino evolves during the reheating process [80; 89]. To be more specific, we consider a model based on no-scale supergravity [90]. The Kähler potential can be written as
\[K=-3\ln\left[\Phi+\overline{\Phi}-\frac{1}{3}(|S|^{2}+|z|^{2})+g(S,\overline{S})+h(z,\overline{z})\right]\,. \tag{70}\]
The inflaton, \(\phi\), is the real part of the canonically-normalized field, \(\Phi\simeq\frac{1}{2}e^{\sqrt{2/3}\phi}\) (up to a small correction of order \(\mu^{2}\) (defined below)). The matter-like field \(S\) and Polonyi field \(z\) are stabilized by \(g(S,\overline{S})=|S|^{4}/\Lambda_{S}^{2}\) and \(h(z,\overline{z})=|z|^{4}/\Lambda_{z}^{2}\) [83; 84; 85; 86; 87; 88; 89; 91; 92; 93; 94]. The inflaton can decay into gauge bosons and gauginos if the gauge kinetic function depends on the inflaton field value [92; 94; 95; 96; 97]. Barring a direct superpotential coupling of the inflaton to SM fields, this is the dominant decay mechanism in low-scale supersymmetric models. In models of high-scale supersymmetry, the inflaton decays predominantly into a pair of SM Higgs bosons [88; 89; 25; 98]. The choice of the inflaton sector superpotential [99]
\[W_{\rm inf} = \sqrt{3}m_{\phi}S(\Phi-1/2) \tag{71}\]
gives the Starobinsky-like inflaton potential [91; 100; 101], and the inflaton energy density at the end of inflation becomes \(\rho_{\rm end}=0.175m_{\phi}^{2}M_{P}^{2}\) with \(m_{\phi}=3\times 10^{13}\) GeV [23; 102]. The inflatino is nearly degenerate in mass with the inflaton. The scalar and fermionic components of \(S\) are also nearly degenerate with the inflaton [89] (note that \(\Lambda_{S}\) does not affect the spectrum at leading order).
One should, however, be careful about the time dependence of the mixing in the goldstino mode. The goldstino in a non-static background is given by [74] \(\nu=G_{I}\chi^{I}+\not{\partial}\phi_{I}\chi^{J}G_{J}^{I}\), where \(G\equiv K+\ln|W|^{2}\), \(G_{I}\equiv\partial G/\partial\phi^{I}\), and \(G_{J}^{I}\equiv\partial^{2}G/\partial\phi^{I}\partial\phi^{*J}\), with \(\phi^{I}\) a superfield that participates in the super-Higgs mechanism and \(\chi^{I}\) the fermionic component of \(\phi^{I}\). We consider the Polonyi sector superpotential, given by [103]
\[W_{P} = \widetilde{m}(z+b)\,, \tag{72}\]
with \(b\simeq 1/\sqrt{3}\). When the reheating phase begins, the various contributions to the goldstino are given by
\[G_{\Phi}\simeq-\sqrt{\frac{3}{2}}\phi,\ G_{S}\simeq 2\mu+ \sqrt{\frac{3}{2}}\frac{\phi}{\mu},\] \[G_{z}\simeq\sqrt{3}-\frac{3}{\sqrt{2}}\phi,\ \not{\partial}\Phi G_{\Phi}^{\Phi}\simeq m_{\phi}\phi\,, \tag{73}\]
where \(\mu\equiv\widetilde{m}/m_{\phi}\ll 1\).
Initially, when \(\phi\gg\mu M_{P}\), supersymmetry is broken by the F-term of \(S\). As discussed earlier, for the quadratic
case considered here (\(k=2\)), the energy density of the inflaton scales as \(\frac{1}{2}m_{\phi}^{2}\phi^{2}\propto a^{-3}\) and \(\phi\propto a^{-\frac{3}{2}}\) during reheating. But when \(\phi/\mu\lesssim 1\), the primary component of the goldstino becomes the fermionic partner of the Polonyi field, \(z\). We can estimate the corresponding scale factor \(a_{\rm p}\), when \(\phi/\mu=1\) using \(\phi_{\rm p}=\phi_{\rm end}(a_{\rm end}/a_{\rm p})^{3/2}\), and \(\phi_{\rm end}=\sqrt{2\rho_{\rm end}}/m_{\phi}\), then
\[\frac{a_{\rm p}}{a_{\rm end}}=\left(\frac{\sqrt{2\rho_{\rm end}}}{M_{P}\tilde{m}}\right)^{\frac{2}{3}}\,, \tag{74}\]
and
\[\frac{a_{\rm p}}{a_{\rm RH}}=\left(\frac{2\rho_{\rm RH}}{M_{P}^{2}\tilde{m}^{ 2}}\right)^{\frac{1}{3}}\,. \tag{75}\]
It would be tempting to deduce that for \(a>a_{\rm p}\), the Polonyi field dominates in the goldstino and is produced by the inflaton condensate. However, this ignores the mixing between the states. Furthermore, the degree of mixing [74], \(\Delta\), gives rise to the gravitino sound speed, \(c_{s}^{2}=1-\Delta^{2}\), which can be expressed as [79]
\[\Delta^{2}=\frac{4}{\left(|\dot{\varphi}|^{2}+|F|^{2}\right)^{2}}\,\left\{| \dot{\varphi}|^{2}|F|^{2}-|\dot{\varphi}\cdot F^{*}|^{2}\right\}\,, \tag{76}\]
where \(F^{i}\equiv{\rm e}^{K/2}K^{ij^{*}}(W_{j}+K_{j}W)\), and the \(D\)-term is absent in our analysis. The dot operator in Eq. (76) denotes a scalar product with the Kähler metric \(K_{ij}\), namely \(|\dot{\varphi}|^{2}=\dot{\varphi}^{i}K_{ij^{*}}\,\dot{\varphi}^{j*}\), and analogously for the other terms. As noted earlier, if the gravitino sound speed vanishes [77; 78; 79], catastrophic production of gravitinos ensues. Likewise, in the absence of mixing, divergent (as \(m_{3/2}\to 0\)) production of gravitinos ensues, as seen in Eqs. (68) and (69). In the absence of constraints (for example, the imposition of nilpotent fields), the mixing is sufficiently large so as to suppress this non-thermal source of gravitino production [80; 81; 82]. Indeed, the three-field model considered here was also considered in [80]. There, it was found that although the leading contribution to the sound speed may be small, the mixing parameter, \(\Delta\), in this case is large. The detailed numerical analysis in a two-field model [74] showed that the primary consequence of the mixing is that even though supersymmetry is initially broken (\(a<a_{\rm p}\)) by the inflationary sector and later (\(a>a_{\rm p}\)) through the Polonyi sector, the eigenstates rotate and the heavy mass eigenstate associated with the inflatino is always the field which is predominantly produced. Though a full numerical analysis of the three-field model was not performed, it was concluded that due to the large mixing, there is no catastrophic production of gravitinos in this model.
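To make Eq. (76) concrete, the following is a minimal numerical sketch of the mixing parameter for a toy field configuration (a flat Kähler metric and placeholder field values are assumptions made purely for illustration):

```python
import numpy as np

def mixing_delta_sq(phi_dot, F, K=None):
    """Delta^2 of Eq. (76) for complex field vectors phi_dot and F.

    K is the Kahler metric (identity if None); all contractions use K,
    e.g. |phi_dot|^2 = phi_dot^i K_{ij*} phi_dot^{j*}.
    """
    phi_dot = np.asarray(phi_dot, dtype=complex)
    F = np.asarray(F, dtype=complex)
    K = np.eye(len(phi_dot)) if K is None else np.asarray(K, dtype=complex)
    pd2 = np.real(phi_dot @ K @ phi_dot.conj())
    F2 = np.real(F @ K @ F.conj())
    pdF = phi_dot @ K @ F.conj()
    return float(4.0 * (pd2 * F2 - abs(pdF) ** 2) / (pd2 + F2) ** 2)

# Aligned phi_dot and F: no mixing, Delta^2 = 0, sound speed c_s = 1.
print(mixing_delta_sq([1.0, 0.0], [2.0, 0.0]))  # 0.0
# Orthogonal phi_dot and F: maximal mixing, Delta^2 = 1, i.e. c_s = 0.
print(mixing_delta_sq([1.0, 0.0], [0.0, 1.0]))  # 1.0
```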
It is interesting to note that the Lagrangian (66), suitably extended to include the coupling of the Polonyi field to the gravitino, can be separated into the parts providing the couplings of the transverse and longitudinal components. The latter will contain the mixing between the "flavor" eigenstates. This Lagrangian, written in [74], contains the basic elements found in our toy Lagrangian in Eq. (64). In agreement with the numerical results found in [74], our calculation of the production of \(\psi\) is suppressed for low \(m_{\psi}\).
In the remainder of this section, we briefly review the standard thermal production of gravitinos.
### Thermal production
Before concluding this section, we can compare the production mechanisms above with the well-known thermal production of gravitinos [6; 68; 23]. So long as the scale of supersymmetry breaking is below the inflationary scale, gravitinos can be singly produced, for example, by the scattering of two gluons producing a gluino and a gravitino. Here, we use the parametrization in [6; 23] and consider only the gauge boson contribution to the production cross section, which we approximate as
\[\langle\sigma v\rangle\simeq\frac{26.24}{M_{P}^{2}}\left(1+0.56\,\frac{m_{1/2 }^{2}}{m_{3/2}^{2}}\right)\,. \tag{77}\]
For \(m_{3/2}\) significantly less than an assumed universal (at the GUT scale) gaugino mass, \(m_{1/2}\), the second term, corresponding to the production of the longitudinal mode, dominates. The production rate can then be written as
\[R_{1}\simeq 0.4\frac{T^{6}}{M_{P}^{2}}\left(1+0.56\,\frac{m_{1/2}^{2}}{m_{3/2} ^{2}}\right)\,. \tag{78}\]
Integrating this rate (for \(k=2\)) we arrive at
\[n(a_{\rm RH})=\frac{4\sqrt{3}}{9\sqrt{\alpha}}\frac{0.4}{M_{P}}T_{\rm RH}^{4} \left(1+0.56\,\frac{m_{1/2}^{2}}{m_{3/2}^{2}}\right)\,, \tag{79}\]
and
\[\Omega h^{2}\simeq 0.04\left(\frac{T_{\rm RH}}{10^{10}{\rm GeV}}\right)\left( \frac{m_{3/2}}{100\,\,{\rm GeV}}\right)\left(1+0.56\,\frac{m_{1/2}^{2}}{m_{3/2 }^{2}}\right)\,, \tag{80}\]
which gives the typical upper limit on the reheating temperature in supersymmetric models of \(T_{\rm RH}\lesssim 2\times 10^{10}\) GeV, for \(m_{1/2}\simeq m_{3/2}=100\) GeV, and the limit becomes stronger when \(m_{1/2}>m_{3/2}\). This limit results from the fact that the gravitino mass is related to the gaugino mass in a specific framework of SUSY breaking. We have neglected a kinematic factor which roughly requires \(T_{\rm RH}\gtrsim m_{3/2}\). Note that for these models we are using \(g_{\rm RH}=915/4\).
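For example, setting \(\Omega h^{2}=0.12\) in Eq. (80) with \(m_{1/2}=m_{3/2}=100\) GeV gives

\[T_{\rm RH}\simeq\frac{0.12}{0.04\,(1+0.56)}\times 10^{10}~{\rm GeV}\simeq 1.9\times 10^{10}~{\rm GeV}\,,\]

reproducing the quoted bound of \(\sim 2\times 10^{10}\) GeV.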
In the case of high-scale supersymmetry breaking, gravitinos can only be pair produced if the masses of all other supersymmetric partners are greater than the inflationary scale. Nevertheless, gravitinos can be produced
from SM particle annihilations with a rate
\[R_{2}=\frac{T^{12}}{\Lambda^{8}}\,, \tag{81}\]
where \(\Lambda^{8}\equiv(9/21.65)m_{3/2}^{4}M_{P}^{4}\)[24; 25; 67]. Integrating this rate gives
\[n(a_{\rm RH})=\frac{21.65}{9\sqrt{3}\sqrt{\alpha}}\frac{T_{\rm RH}^{10}}{m_{3/2 }^{4}M_{P}^{3}}\ln\frac{\rho_{\rm end}}{\rho_{\rm RH}}\,, \tag{82}\]
and
\[\Omega_{3/2}h^{2} \simeq 4\times 10^{-6}\left(\frac{\rm{EeV}}{m_{3/2}}\right)^{3}\left(\frac{T_{\rm RH}}{10^{10}~{\rm GeV}}\right)^{7}\times\left(12+\ln\left(\frac{10^{10}\rm{GeV}}{T_{\rm RH}}\right)\right)\,. \tag{83}\]
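A small numerical sketch (in Python) of solving Eq. (83) for the \(T_{\rm RH}\) that saturates \(\Omega_{3/2}h^{2}=0.12\) at fixed \(m_{3/2}\), using a plain log-space bisection with no external dependencies; the limits quoted in the Summary provide a cross-check:

```python
import math

def omega_h2(t_rh_gev, m32_gev):
    """Omega_{3/2} h^2 from Eq. (83) (gravitino pair production, high-scale SUSY)."""
    x = t_rh_gev / 1e10
    return 4e-6 * (1e9 / m32_gev) ** 3 * x ** 7 * (12.0 + math.log(1e10 / t_rh_gev))

def t_rh_limit(m32_gev, target=0.12):
    """Log-space bisection for the T_RH where Omega h^2 crosses the target."""
    lo, hi = 1e2, 1e15  # GeV; Omega is far below/above the target at the endpoints
    for _ in range(100):
        mid = math.sqrt(lo * hi)
        if omega_h2(mid, m32_gev) < target:
            lo = mid
        else:
            hi = mid
    return hi

for m32 in (1e3, 1e9, 1e12):  # 1 TeV, 1 EeV, 1 ZeV
    print(f"m_3/2 = {m32:.0e} GeV -> T_RH limit ~ {t_rh_limit(m32):.1e} GeV")
```

This returns \(T_{\rm RH}\lesssim 8\times 10^{7}\), \(3.1\times 10^{10}\), and \(6.3\times 10^{11}\) GeV for \(m_{3/2}=1\) TeV, EeV, and ZeV, respectively.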
In Fig. 6, we show the regions of the parameter space allowed by the relic abundance constraint in the \((m_{3/2},T_{\rm RH})\) plane for four of the processes we discussed in this paper. Specifically, we compare the thermal production of gravitinos in both low- and high-scale supersymmetric models from Eqs. (80) and (83) with the non-thermal raritron production given in Eq. (46) and the thermal production from Eq. (59). The solid red line shows the "classic" production of gravitinos in weak-scale supersymmetry, whose source is predominantly the gluons of the thermal bath. This is given by Eq. (80), for which \(\Omega h^{2}\propto T_{\rm RH}m_{3/2}\). This provides the well-known bound on the reheating temperature for stable gravitinos11 and is applicable when \(T_{\rm RH}>m_{3/2}\) [9]. For large gravitino masses, we must cut off the integration of the Boltzmann equation at \(a_{3/2}\), corresponding to the scale factor when \(T=m_{3/2}\), rather than integrating down to \(a_{\rm RH}\). For large masses, this leads to a suppression by a factor of \((a_{3/2}/a_{\rm RH})^{9/4}=(T_{\rm RH}/m_{3/2})^{6}\). Thus for parameters with \(T_{\rm RH}<m_{3/2}\), \(\Omega h^{2}\propto T_{\rm RH}^{7}m_{3/2}^{-5}\). This effect accounts for the change in the slope when \(m_{3/2}\gtrsim T_{\rm RH}\) seen in the figure.
Footnote 11: A similar bound applies when gravitinos are unstable if R-parity is conserved as the produced gravitino abundance is transferred to the lightest supersymmetric particle.
This thermal bound on \(T_{\rm RH}\) is greatly relaxed in models of high-scale supersymmetry (shown here by the red dotted line) as single production of gravitinos becomes kinematically forbidden [24; 67]. In this case, from Eq. (83), we see that \(\Omega h^{2}\propto T_{\rm RH}^{7}m_{3/2}^{-3}\). In contrast, the gravitational production of a stable spin-\(\frac{3}{2}\) raritron, whose source is the inflaton, provides a significantly stronger constraint, particularly at low masses. This constraint is shown by the blue dot-dashed line and given by Eq. (46), where \(\Omega h^{2}\propto T_{\rm RH}m_{3/2}^{-1}\). As discussed earlier, the gravitational production of raritrons from the thermal bath is always sub-dominant. It is shown by the blue dashed line from Eq. (59) and is found extremely close to the line corresponding to the thermal production in high-scale supersymmetry.
This figure is one of the most important results of our studies, and admirably reflects the dominance of gravitational effects over classical thermal gravitino-raritron production, within the parameter space allowed by the unitarity limit, i.e., \(m_{3/2}\gtrsim 40\) EeV (non-supersymmetric). However, we caution the reader that the thermal constraints shown here reflect the production of the gravitino in supersymmetric models. The gravitationally produced raritron is definitely not the gravitino of supersymmetry. As we have seen, the gravitational production from the inflaton condensate produces primarily the longitudinal component and occurs just after inflation, when supersymmetry is broken by the inflationary sector. As a result, the longitudinal component is the inflatino and the resulting particles produced are not related to the gravitino at low energies.
Figure 6: The \((m_{3/2},T_{\rm RH})\) plane showing the contours of \(\Omega_{3/2}h^{2}=0.12\). The blue dot-dashed line is derived from the inflaton condensate via single graviton exchange, Eq. (46). The blue dashed line is the thermal contribution from graviton exchange, given by Eq. (59). The red solid line corresponds to single gravitino production when the scale of supersymmetry breaking is below the inflationary scale, Eq. (80), where \(m_{1/2}=m_{3/2}\) is assumed. The red dotted line corresponds to the case of high-scale supersymmetry, Eq. (83), where gravitinos must be pair produced.
## Summary
Gravitational particle production after inflation is inevitable. All particles couple to gravity through their energy-momentum tensor and can be produced directly from the inflaton condensate during reheating. While it is difficult to create the thermal bath directly from minimal gravitational interactions [38; 39; 42; 44], the production of stable particles making up all or some of the dark matter is feasible [33].
While all particles couple to the inflaton through gravity, they do not couple equally. The production rate, \(R\), for particles produced from the condensate is generally proportional to \(\rho_{\phi}^{2}\). Since \(\rho_{\phi}\) redshifts with the expansion of the Universe as in Eq. (40), and the production rate redshifts faster than the Hubble rate, production occurs predominantly at the start of the reheating process. The production of scalars, \(S\), is to a good approximation independent of the mass of the scalars when \(m_{S}\ll m_{\phi}\) [33; 38] and absent for massless vectors. However, the production rate for fermions is suppressed due to the necessity of a spin flip for the final state fermion [38]. In this work, we considered the gravitational production of a massive spin-\(\frac{3}{2}\) particle dubbed the raritron. We do not, however, necessarily associate this particle with the gravitino. As in the case of scalars and spin-\(\frac{1}{2}\) fermions, the inflaton couples to raritrons through their respective energy-momentum tensors, \(T_{0}^{\mu\nu}\) and \(T_{3/2}^{\mu\nu}\).
Figure 7: The relic abundance of raritrons/gravitinos, \(\Omega h^{2}\), as a function of the reheating temperature for fixed \(m_{3/2}=1\) ZeV (upper left panel), \(m_{3/2}=1\) EeV (upper right panel), and \(m_{3/2}=1\) TeV (lower panel). The blue solid line is derived from Eq. (46) and does not appear in the lower panel due to concerns over unitarity violations. The thermal production of raritrons mediated by gravity is shown as the blue dotted line from Eq. (59). Also shown is the thermal production of gravitinos in both the case of weak-scale (solid red) and high-scale (dotted red) supersymmetry (with \(m_{1/2}=m_{3/2}\)) taken from Eqs. (80) and (83) respectively. The horizontal black line at \(\Omega h^{2}=0.12\) is shown for reference.
As we saw in Fig. 2, the production rate of raritrons is largely dominated by the production of the raritron longitudinal modes, particularly at low raritron masses, as the rate is proportional to \(m_{3/2}^{-2}\). This yields a very large abundance of raritrons and unless \(m_{3/2}\) is relatively large, the reheating temperature is strongly constrained as we showed in Fig. 3. This result is summarized in Fig. 7 where we show the relic abundance \(\Omega h^{2}\) as a function of the reheating temperature for three choices of \(m_{3/2}\). The solid blue line is derived from Eq. (46). It is not shown in the lower panel with \(m_{3/2}=1\) TeV, as we expect unitarity violations at low masses. We also see in the upper right panel, that even \(m_{3/2}=1\) EeV (\(10^{9}\) GeV) would require \(T_{\rm RH}\lesssim 0.1\) GeV to avoid overproduction. When \(m_{3/2}=1\) ZeV (\(10^{12}\) GeV), as in the upper left panel, the reheating temperature may be as large as \(\sim 300\) GeV. The horizontal black line is set at \(\Omega h^{2}=0.12\) to guide the eye.
Other mechanisms for raritron/gravitino production are also shown in Fig. 7. Note the huge variation in the relic abundance obtained from the different mechanisms. As discussed above, the thermal production of raritrons mediated by gravity is always sub-dominant when compared to the direct production from the inflaton condensate. This source of production is shown by the blue dotted curve in Fig. 7, taken from Eq. (59). Extrapolating to larger masses and reheating temperatures, we can see, however, that because of the steep dependence (\(\Omega h^{2}\propto T_{\rm RH}^{7}\)), there are regions where thermal production dominates; in those regions, however, \(\Omega h^{2}\) is orders of magnitude too large.
We also show in Fig. 7 the thermal production of gravitinos in both the case of weak-scale (solid red) and high-scale (dotted red) supersymmetry taken from Eqs. (80) and (83) respectively. The abundance of thermally-produced gravitinos in weak-scale supersymmetry is proportional to \(T_{\rm RH}\) so long as \(T_{\rm RH}>m_{3/2}\). As we have already seen in Fig. 6, for large gravitino masses, we must cut off the integration of the Boltzmann equation at \(a_{3/2}\), corresponding to the scale factor when \(T=m_{3/2}\), rather than integrating down to \(a_{\rm RH}\). This leads to a suppression by a factor of \((T_{\rm RH}/m_{3/2})^{6}\). For \(m_{3/2}=1\) ZeV and the parameter range shown in Fig. 7, \(T_{\rm RH}<m_{3/2}\) and we see only the steeper slope. For the other two values of \(m_{3/2}\) shown, we see the change in slope when \(T_{\rm RH}=m_{3/2}\). For \(m_{3/2}=1\) TeV, EeV, and ZeV, we have limits of \(T_{\rm RH}\lesssim 2\times 10^{9}\) GeV, \(4\times 10^{7}\) GeV, and \(5\times 10^{9}\) GeV, respectively, to avoid overproduction of gravitinos in weak-scale supersymmetry.
For the case of high-scale supersymmetry, cutting off the integration in the Boltzmann equation only results in a change in the log term in Eq. (83), resulting in a replacement of \(T_{\rm RH}\) with \(m_{3/2}\) for \(T_{\rm RH}<m_{3/2}\). This change is unobservable on the scale shown in the figure. The same is true for the thermal production via gravity in Eq. (59). Since the relic abundance in both cases is \(\propto 1/m_{3/2}^{3}\), the heavier the dark matter, the larger the permitted range of \(T_{\rm RH}\). The limits in the high-scale supersymmetry cases for \(m_{3/2}=1\) TeV, EeV, and ZeV are \(T_{\rm RH}\lesssim 8\times 10^{7}\) GeV, \(3.1\times 10^{10}\) GeV, and \(6.3\times 10^{11}\) GeV, respectively.
We have not included the production of gravitinos from inflatino exchange as that production is suppressed due to mixing with inflatinos. A quantitative measure of the abundance in that case would require an analysis similar to what is done in [74].
Of course we do not know how dark the dark sector is. At its darkest, gravitational interactions may play a leading role in the production of dark matter. A generic Rarita-Schwinger field is easily overproduced in the early Universe through its (minimal) gravitational coupling to the inflaton. We have derived strong limits on the raritron mass in this case, though depending on the detailed model, unitarity limits may be even stronger. The gravitino in models of broken supersymmetry can also be produced gravitationally; however, only the transverse components are produced, as the longitudinal states are primarily composed of the inflatino. In this case the standard thermal production of gravitinos still provides limits on its mass and the inflationary reheating temperature.
###### Acknowledgements.
The authors thank K. Benakli, E. Dudas, and G. Casagrande for extremely valuable discussions during the completion of our work. This project has received support from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 860881-HIDDeN, and the IN2P3 Master Projet UCMN. The work of K.K. was supported in part by JSPS KAKENHI No. 20H00160. The work of K.A.O. was supported in part by DOE grant DE-SC0011842 at the University of Minnesota. The work of S.V. was supported in part by DOE grant DE-SC0022148.
## Appendix A Energy-momentum tensor of spin-3/2 field
In this appendix, we provide a brief review of the computation for the energy-momentum tensor of a spin-\(\frac{3}{2}\) particle. We begin with a theory that closely resembles \(\mathcal{N}=1\) pure supergravity, where the spin-\(\frac{3}{2}\) particle is a Majorana fermion known as the gravitino.
We begin by introducing the full action, which is the sum of the Einstein-Hilbert action and the Rarita-Schwinger action for the massive gravitino,
\[S=\int d^{4}x(\mathcal{L}_{2}+\mathcal{L}_{3/2})\,, \tag{104}\]
where
\[\mathcal{L}_{2} =-\frac{M_{P}^{2}}{2}eR\,, \tag{105}\] \[\mathcal{L}_{3/2} =-\frac{1}{4}\epsilon^{\mu\nu\rho\sigma}\overline{\psi}_{\mu}\gamma_{5}\gamma_{\nu}\overleftrightarrow{\nabla}_{\rho}\psi_{\sigma}-\frac{1}{4}em_{3/2}\overline{\psi}_{\mu}[\gamma^{\mu},\gamma^{\nu}]\psi_{\nu}=\frac{i}{4}e\overline{\psi}_{\mu}\gamma^{\mu\nu\rho}\overleftrightarrow{\nabla}_{\rho}\psi_{\nu}-\frac{1}{4}em_{3/2}\overline{\psi}_{\mu}[\gamma^{\mu},\gamma^{\nu}]\psi_{\nu}\,, \tag{106}\]
with the determinant of the frame field given by \(\det e^{a}_{\mu}\equiv e\) and \(A\overleftrightarrow{\nabla}_{\mu}B\equiv A\nabla_{\mu}B-A\overleftarrow{\nabla}_{\mu}B\). The covariant derivative acting on the spin-\(\frac{3}{2}\) field is defined as
\[\nabla_{\mu}\psi_{\nu} \equiv\left(\partial_{\mu}+\frac{1}{4}\omega_{\mu ab}\gamma^{ab}\right)\psi_{\nu}\,, \tag{107}\] \[\overline{\psi}_{\nu}\overleftarrow{\nabla}_{\mu} =\overline{\psi}_{\nu}\left(\overleftarrow{\partial}_{\mu}-\frac{1}{4}\omega_{\mu ab}\gamma^{ab}\right)\,, \tag{108}\] \[\gamma^{ab} =\gamma^{[a}\gamma^{b]}=\frac{1}{2}[\gamma^{a},\gamma^{b}]\,, \tag{109}\]
and \(\overline{\psi}_{\nu}\overleftarrow{\partial}_{\mu}\equiv\partial_{\mu} \overline{\psi}_{\nu}\). The frame field \(e^{a}_{\mu}\) is related to the flat Minkowski metric as
\[g_{\mu\nu} =e^{a}_{\mu}e^{b}_{\nu}\eta_{ab}\,, \tag{110}\] \[\eta_{ab} =\text{diag}(+1,-1,-1,-1)\,. \tag{111}\]
The curvature tensor is given by
\[R_{\mu\nu}{}^{ab} =\partial_{\mu}\omega_{\nu}{}^{ab}-\partial_{\nu}\omega_{\mu}{}^{ab}+\omega_{\mu}{}^{ac}\omega_{\nu c}{}^{b}-\omega_{\nu}{}^{ac}\omega_{\mu c}{}^{b}\,, \tag{112}\] \[R_{\mu\nu\rho\sigma} =e_{a\rho}e_{b\sigma}R_{\mu\nu}{}^{ab}\,, \tag{113}\] \[R_{\mu\nu} =R^{\rho}{}_{\mu\rho\nu}\,, \tag{114}\] \[R =e^{\mu}_{a}e^{\nu}_{b}R_{\mu\nu}{}^{ab}=g^{\mu\nu}R_{\mu\nu}\,, \tag{115}\]
where \(\omega_{\mu ab}\) is the spin connection, given by
\[\omega_{\mu ab}=\omega_{\mu ab}(e)+K_{\mu ab}\,. \tag{116}\]
Here \(K_{\mu\nu\rho}\) is the contorsion tensor and
\[\omega^{ab}_{\mu}(e)=2e^{\nu[a}\partial_{[\mu}e^{b]}_{\nu]}-e^{\nu[a}e^{b] \sigma}e_{\mu c}\partial_{\nu}e^{c}_{\sigma} \tag{117}\]
is the torsionless contribution. In the non-supersymmetric case, the contorsion can be set to zero. However, in supergravity, \(K_{\mu\nu\rho}\) is expressed as a combination of terms bilinear in \(\psi_{\mu}\). We derive the energy-momentum tensor \(T^{\mu\nu}_{3/2}\) by varying the _total_ Lagrangian \(\mathcal{L}=\mathcal{L}_{2}+\mathcal{L}_{3/2}\) with respect to the frame field \(e\), and then expressing the result in the Minkowski limit \(g_{\mu\nu}\rightarrow\eta_{\mu\nu}\).
The energy-momentum tensor can be derived from the Einstein equation, where the terms other than the pure spin-2 contribution are grouped to define \(T^{\mu\nu}_{3/2}\). In this context, two main approaches exist: the Palatini and the metric formalism, or the first- and second-order formalism in the context of supergravity. Since it is a quite involved task to compute the energy-momentum tensor using either method, we briefly discuss the distinctions between the two.
In the first-order formalism, the parameters \(e\), \(\omega\), and \(\psi\) (with Lorentz indices suppressed) are treated as independent variables when varying the action. The spin connection \(\omega\) is subsequently expressed as a function of \(e\) and \(\psi\) by requiring \(\delta S/\delta\omega=0\)[104]. The second-order formalism treats only \(e\) and \(\psi\) as independent variables, with \(\omega\) chosen to ensure that supersymmetry [105] is preserved. This approach assumes Eq. (116) at the starting point. For a more detailed discussion on the first- and second-order formalisms, see Ref. [106].
In the first-order formalism, the total Lagrangian is treated as a function of \(e\), \(\omega\), and \(\psi\). The solution to \(\delta S/\delta\omega=0\) is given by Eq. (116), with \(\omega=\omega(e,\psi)\). We then solve the condition \(\delta S/\delta e|_{\omega=\omega(e,\psi)}=0\) and find that the Einstein equation is given by
\[G_{\mu\nu}(e,\omega)|_{\omega=\omega(e,\psi)}= \frac{e^{-1}e_{\mu a}}{M_{P}^{2}}\left.\frac{\delta\mathcal{L}_{3/2}}{\delta e^{a}_{\nu}}\right|_{\omega=\omega(e,\psi)}\,, \tag{118}\] \[G_{\mu\nu}= R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R\,. \tag{119}\]
We note that the Einstein tensor \(G_{\mu\nu}\) on the left-hand side of Eq. (118) includes both \(e\) and \(\psi\), the latter due to the contorsion term in the spin connection. Therefore, to derive the correct energy-momentum tensor, the \(\psi\)-dependent terms must be moved to the right-hand side.
On the other hand, in the second-order formalism, deriving the energy-momentum tensor is more straightforward. As the Lagrangian is treated as a function of \(e\) and \(\psi\), with Eq. (116) already applied to eliminate the explicit \(\omega\) dependence, the Einstein equation is simply derived from \(\delta\mathcal{L}/\delta e=0\), and \(G_{\mu\nu}\) does not depend on \(\psi\). Consequently, the symmetrized energy-momentum tensor is defined as
\[T_{3/2,\mu\nu}=e^{-1}e_{a(\mu}\frac{\delta\mathcal{L}_{3/2}}{\delta e^{a}_{\nu)}}\,, \tag{120}\]
where, as before, \(\mathcal{L}_{3/2}\) is given by eliminating the \(\omega\) dependence in the second-order formalism. This approach allows for a direct computation of the gravitino energy-momentum tensor, given by
\[T_{3/2,\mu\nu}=-\frac{i}{4}\overline{\psi}_{\rho}\gamma_{(\mu}\overleftrightarrow{\nabla}_{\nu)}\psi^{\rho}+\frac{i}{2}\overline{\psi}_{(\nu}\gamma_{\mu)}\overleftrightarrow{\nabla}_{\rho}\psi^{\rho}+\frac{i}{2}\overline{\psi}^{\rho}\gamma_{(\mu}\overleftrightarrow{\nabla}_{\rho}\psi_{\nu)}\,, \tag{121}\]
where we have used the equation of motion and the gravitino constraints. We note that the above result does not depend on the gravitino mass. In the flat Minkowski
limit, we can replace \(\overleftrightarrow{\nabla}\rightarrow\overleftrightarrow{\partial}\) and neglect the four-Fermi terms originating from the torsion contribution, as they are not relevant for our analysis, which leads to Eq. (9).
If we do not assume supersymmetry, the spin-\(\frac{3}{2}\) particle is not necessarily a Majorana fermion. For a Dirac spin-\(\frac{3}{2}\) particle, the Lagrangian is given by
\[\mathcal{L}_{3/2}=-\frac{1}{2}\epsilon^{\mu\nu\rho\sigma}\overline{\psi}_{\mu}\gamma_{5}\gamma_{\nu}\overleftrightarrow{\nabla}_{\rho}\psi_{\sigma}-\frac{1}{2}em_{3/2}\overline{\psi}_{\mu}[\gamma^{\mu},\gamma^{\nu}]\psi_{\nu}\,, \tag{122}\]
and the energy-momentum tensor is given by
\[T_{3/2,\mu\nu}^{\rm(Dirac)}=2T_{3/2,\mu\nu}^{\rm(Majorana)}\,, \tag{123}\]
where \(T_{3/2,\mu\nu}^{\rm(Majorana)}\) is given by Eq. (121).
## Appendix B Amplitudes and Thermal Rates
In this appendix, we compute the thermal production rate of raritrons, \(R_{\frac{3}{2}}^{T}\). We consider only the massless Standard Model particles in the initial state, which include scalars, fermions, and gauge bosons. The dark matter production rate for the process \(\text{SM}+\text{SM}\rightarrow\psi+\psi\) is given by the general expression Eq. (53), where we assumed that \(4m_{3/2}^{2}\ll s\) and included a factor of two in the numerator to account for the fact that two dark matter particles are produced per scattering event.
We express the squared amplitudes in terms of the Mandelstam variables \(s\) and \(t\), which are given by
\[t = \frac{s}{2}\left(\sqrt{1-\frac{4m_{3/2}^{2}}{s}}\cos\theta_{13}-1 \right)+m_{3/2}^{2}\,, \tag{107}\]
\[s = 2E_{1}E_{2}(1-\cos\theta_{12})\,. \tag{108}\]
The general squared amplitude for the thermal processes involving SM initial states is given by Eq. (55), where we include 4 degrees for 1 complex Higgs doublet, 12 degrees for 8 gluons and 4 electroweak bosons, and 45 degrees for 6 (anti)quarks with 3 colors, 3 (anti)charged leptons and 3 neutrinos. We note that the squared amplitudes include the symmetry factors of both the initial and final states, and this is indicated with an overbar.
When summing over all polarizations, the total squared amplitude for the gravity-mediated production of raritrons from scalars is given by
\[|\overline{\mathcal{M}}^{0\frac{3}{2}}|^{2}=\frac{1}{72m_{3/2}^{4}M_{P}^{4}s^{2}}\left\{-s^{2}\left(s+2t-2m_{\phi}^{2}\right)^{2}\left[m_{\phi}^{4}-2m_{\phi}^{2}t+t(s+t)\right]-72m_{3/2}^{12}+24m_{3/2}^{10}(7s+12t)-2m_{3/2}^{8}\left[47s^{2}+264st+216t^{2}+72m_{\phi}^{4}-12m_{\phi}^{2}(s+12t)\right]-2m_{3/2}^{6}\left[s^{3}-122s^{2}t-288st^{2}-144t^{3}+12m_{\phi}^{4}(7s-12t)+m_{\phi}^{2}\left(288t^{2}+216st-62s^{2}\right)\right]+m_{3/2}^{4}\left[s^{4}-34s^{3}t-210s^{2}t^{2}-240st^{3}-72t^{4}-72m_{\phi}^{8}+24m_{\phi}^{6}(s+12t)-18m_{\phi}^{4}\left(s^{2}+16st+24t^{2}\right)-4m_{\phi}^{2}\left(s^{3}-35s^{2}t-126st^{2}-27t^{3}\right)\right]+m_{3/2}^{2}s\left[24m_{\phi}^{8}+s^{4}+6s^{3}t+44s^{2}t^{2}+64st^{3}+24t^{4}-32m_{\phi}^{6}(s+3t)+16m_{\phi}^{4}\left(2s^{2}+8st+9t^{2}\right)-2m_{\phi}^{2}\left(5s^{3}+24s^{2}t+80st^{2}+48t^{3}\right)\right]\right\}\,, \tag{109}\]
where \(m_{\phi}\) is the scalar mass and \(m_{3/2}\) is the raritron mass. For the incoming SM Higgs bosons, we set \(m_{\phi}=0\), and this expression simplifies to
\[|\overline{\mathcal{M}}^{0\frac{3}{2}}|^{2}=\frac{1}{72m_{3/2}^{4}M_{P}^{4}s^{2}}\left[-s^{2}t(s+t)(s+2t)^{2}-72m_{3/2}^{12}+24m_{3/2}^{10}(7s+12t)-2m_{3/2}^{8}\left(47s^{2}+264st+216t^{2}\right)+m_{3/2}^{6}\left(-2s^{3}+244s^{2}t+576st^{2}+288t^{3}\right)+m_{3/2}^{4}\left(s^{4}-34s^{3}t-210s^{2}t^{2}-240st^{3}-72t^{4}\right)+m_{3/2}^{2}s\left(s^{4}+6s^{3}t+44s^{2}t^{2}+64st^{3}+24t^{4}\right)\right]\,.\]
Similarly, the matrix element squared for the gravity-mediated raritron production from _massless_ fermions is given by
\[|\overline{\mathcal{M}}^{\frac{1}{2}\frac{3}{2}}|^{2}=\frac{1}{144m_{3/2}^{4}M_{P}^{4}s^{2}}\left\{576m_{3/2}^{12}-768m_{3/2}^{10}(s+3t)+4m_{3/2}^{8}\left(137s^{2}+768st+864t^{2}\right)-8m_{3/2}^{6}\left(58s^{3}+265s^{2}t+504st^{2}+288t^{3}\right)+4m_{3/2}^{4}\left(13s^{4}+197s^{3}t+513s^{2}t^{2}+480st^{3}+144t^{4}\right)+4m_{3/2}^{2}s\left(2s^{4}-31s^{3}t-106s^{2}t^{2}-128st^{3}-48t^{4}\right)+s^{2}\left(s^{4}+10s^{3}t+42s^{2}t^{2}+64st^{3}+32t^{4}\right)\right\}\,, \tag{110}\]
and the production from _massless_ gauge bosons is given by
\[|\overline{\mathcal{M}}^{1\frac{3}{2}}|^{2}=\frac{1}{18m_{3/2}^{4}M_{P}^{4}s^{2}}\left\{-36m_{3/2}^{12}+12m_{3/2}^{10}(s+12t)-4m_{3/2}^{8}\left(23s^{2}+30st+54t^{2}\right)+4m_{3/2}^{6}\left(3s^{3}+53s^{2}t+54st^{2}+36t^{3}\right)-m_{3/2}^{4}\left(25s^{4}+118s^{3}t+186s^{2}t^{2}+120st^{3}+36t^{4}\right)+2m_{3/2}^{2}s\left(3s^{4}+7s^{3}t+16s^{2}t^{2}+16st^{3}+6t^{4}\right)-s^{2}t(s+t)\left(s^{2}+2st+2t^{2}\right)\right\}\,. \tag{111}\]
By evaluating the integral, we find that the thermal production rate of raritrons can be written as
\[R_{3/2}^{T} = \beta_{1}\frac{T^{12}}{m_{3/2}^{4}M_{P}^{4}}+\beta_{2}\frac{T^{10}}{m_{3/2}^{2}M_{P}^{4}}+\beta_{3}\frac{T^{8}}{M_{P}^{4}}+\beta_{4}\frac{m_{3/2}^{2}T^{6}}{M_{P}^{4}}+\beta_{5}\frac{m_{3/2}^{4}T^{4}}{M_{P}^{4}}\,, \tag{101}\]
where
\[\beta_{1} = \frac{205511\pi^{7}}{85730400}\,, \tag{102}\]
\[\beta_{2} = \frac{16453\,\zeta(5)^{2}}{15\pi^{5}}\,, \tag{103}\]
\[\beta_{3} = -\frac{369149\pi^{3}}{93312000}\,, \tag{104}\]
\[\beta_{4} = -\frac{8759\zeta(3)^{2}}{1152\pi^{5}}\,, \tag{105}\]
\[\beta_{5} = -\frac{49}{5760\pi}\,. \tag{106}\]
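Numerically, these coefficients evaluate as follows (a quick sketch; we read the \(T^{10}\) coefficient as involving \(\zeta(5)^{2}\), by analogy with the \(\zeta(3)^{2}\) appearing in \(\beta_{4}\)):

```python
from math import pi

zeta3, zeta5 = 1.2020569, 1.0369278  # Riemann zeta(3) and zeta(5)

betas = {
    1: 205511 * pi**7 / 85730400,
    2: 16453 * zeta5**2 / (15 * pi**5),
    3: -369149 * pi**3 / 93312000,
    4: -8759 * zeta3**2 / (1152 * pi**5),
    5: -49 / (5760 * pi),
}
for i, value in betas.items():
    print(f"beta_{i} = {value:+.3e}")
# beta_1 ~ +7.2 dominates: the T^12 / m_{3/2}^4 term controls the rate at high T.
```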
## Appendix C Computation of the gravitino production rate
The production rate of the gravitino can be derived from the energy transfer rate from the inflaton energy density \(\rho_{\phi}\) to the gravitino sector. Using the equation of state parameter \(w_{\phi}=p_{\phi}/\rho_{\phi}\) for the inflaton, the evolution of \(\rho_{\phi}\) follows
\[\frac{d\rho_{\phi}}{dt}+3H(1+w_{\phi})\rho_{\phi} = -(1+w_{\phi})\Gamma_{\phi}\rho_{\phi}\,, \tag{107}\]
where the right-hand side is given by the energy transfer per space-time volume (\(\text{Vol}_{4}\)) due to the inflaton decay or scattering processes to particles \(A\) and \(B\), defined as
\[(1+w_{\phi})\Gamma_{\phi}\rho_{\phi} \equiv \frac{\Delta E}{\text{Vol}_{4}}\,, \tag{108}\]
where
\[\Delta E \equiv \int\frac{d^{3}p_{A}}{(2\pi)^{3}2p_{A}^{0}}\frac{d^{3}p_{B}}{(2 \pi)^{3}2p_{B}^{0}}(p_{A}^{0}+p_{B}^{0})\] \[\times \left|\frac{1}{n!}\Big{\langle}\text{f}\left|\left(i\int d^{4}x_ {1}\mathcal{L}_{\text{int}}\right)\cdots\left(i\int d^{4}x_{n}\mathcal{L}_{ \text{int}}\right)\right|0\right\rangle\right|^{2},\]
and \(\mathcal{L}_{\text{int}}\) is the interaction Lagrangian (see [9] for more details).
We decompose the oscillating inflaton as \(\phi(t)\simeq\phi_{0}(t)\mathcal{P}(t)\), where \(\mathcal{P}\) represents the rapidly oscillating component and \(\phi_{0}\) is its envelope that slowly evolves (redshifts) with time. In practice, \(\phi_{0}\) can be taken as a constant quantity when computing a reaction that occurs over time scales much shorter than the change in \(\phi_{0}\). Therefore, the fast oscillating component can be decomposed as
\[\mathcal{P}(t) = \sum_{n=-\infty}^{\infty}\mathcal{P}_{n}e^{-in\omega t}\,, \tag{109}\]
where \(\omega\) is the frequency of the inflaton oscillation.
Using the interaction Lagrangian \(\mathcal{L}_{\text{int}}\), given by Eq. (66), the amplitudes for the \(t\)- and \(u\)-channels are given by
\[\mathcal{M}_{t}^{(n,m)} = \frac{1}{4M_{P}^{2}}\frac{nm\mathcal{P}_{n}\mathcal{P}_{m}}{t_{n}-m_{\chi}^{2}}(\omega\phi_{0})^{2}\bar{u}_{\mu}(p_{A})\not{\delta}\gamma^{\mu}(\not{p}_{\phi,n}-\not{p}_{A})\gamma^{\nu}\not{\delta}P_{R}u_{\nu}^{c}(p_{B})\,, \tag{110}\] \[\mathcal{M}_{u}^{(n,m)} = \frac{1}{4M_{P}^{2}}\frac{nm\mathcal{P}_{n}\mathcal{P}_{m}}{u_{n}-m_{\chi}^{2}}(\omega\phi_{0})^{2}\bar{u}_{\mu}(p_{A})\not{\delta}\gamma^{\mu}(\not{p}_{B}-\not{p}_{\phi,n})\gamma^{\nu}\not{\delta}P_{L}u_{\nu}^{c}(p_{B})\,, \tag{111}\]
where \(p_{\phi,n}^{\mu}=(n\omega,\vec{0})^{\mu}\), \(t_{n}\equiv(p_{\phi,n}-p_{A})^{2}\), \(u_{n}\equiv(p_{B}-p_{\phi,n})^{2}\), and \(\not{\delta}\equiv\delta_{\mu}^{0}\gamma^{\mu}\) is introduced to account for \(\partial_{\mu}\phi=(\dot{\phi},\vec{0})_{\mu}\). Using these amplitudes, the energy transfer rate can be written as
\[\frac{\Delta E}{\text{Vol}_{4}} = \int\frac{d^{3}p_{A}}{(2\pi)^{3}2p_{A}^{0}}\frac{d^{3}p_{B}}{(2 \pi)^{3}2p_{B}^{0}}(p_{A}^{0}+p_{B}^{0})\sum_{n+m>0}\sum_{\text{spin}}| \mathcal{M}_{t}^{(n,m)}+\mathcal{M}_{u}^{(n,m)}|^{2}(2\pi)^{4}\delta^{4}(p_{ \phi,n}+p_{\phi,m}-p_{A}-p_{B})\,. \tag{112}\]
If we use the equation of motion for \(\psi_{\mu}\), the sum of the amplitudes greatly simplifies and becomes
\[\mathcal{M}_{t}^{(n,m)}+\mathcal{M}_{u}^{(n,m)} =\frac{m_{3/2}}{M_{P}^{2}}\frac{nm\mathcal{P}_{n}\mathcal{P}_{m}}{nm\omega^{2}+m_{\chi}^{2}-m_{3/2}^{2}}(\omega\phi_{0})^{2}\delta_{0}^{\mu}\delta_{0}^{\nu}\bar{u}_{\mu}(p_{A})u_{\nu}^{c}(p_{B})\,, \tag{100}\]
where we used \(t_{n}=u_{n}=m_{3/2}^{2}-nm\omega^{2}\) and \(p_{A}^{0}=p_{B}^{0}=(n+m)\omega/2\). We emphasize that only the \(\mu=0\) contribution of \(\psi_{\mu}\) is produced. The gravitino wave function may be written as \(\psi_{\mu}\sim\psi\epsilon_{\mu}\), where \(\psi\) and \(\epsilon_{\mu}\) denote the spin-\(\frac{1}{2}\) and spin-\(1\) components, respectively. It is important to note that the spin-\(\pm\frac{3}{2}\) component is proportional to the transverse polarization of \(\epsilon_{\mu}\), which has no \(\mu=0\) component. As a result, only the spin-\(\pm\frac{1}{2}\) mode of the gravitino may have a nonzero amplitude.
Thus, the amplitude given by Eq. (100) can be further simplified by substituting \(u_{0}(p)=\sqrt{2/3}\epsilon_{0}(p)u(p)=\sqrt{2/3}(|\vec{p}|/m_{3/2})u(p)\), where \(u(p)\) is the spin-\(\frac{1}{2}\) component that satisfies \((\not{p}-m_{3/2})u(p)=0\). Without specifying the oscillatory solution of the inflaton, we obtain the energy transfer rate:
\[(1+w_{\phi})\Gamma_{\phi\phi\rightarrow\psi_{\mu}\psi_{\mu}} \rho_{\phi}=\sum_{n+m\geq 1}\frac{(nm)^{2}(n+m)^{7}(\mathcal{P}_{n}\mathcal{ P}_{m})^{2}\omega^{11}\phi_{0}^{4}}{144\pi m_{3/2}^{2}M_{P}^{4}(nm\omega^{2}+m_{ \chi}^{2}-m_{3/2}^{2})^{2}}\left(1-\frac{4m_{3/2}^{2}}{(n+m)^{2}\omega^{2}} \right)^{7/2}\,. \tag{101}\]
For concreteness, we consider \(V(\phi)=(m_{\phi}^{2}/2)\phi^{2}\), which implies that \(w_{\phi}=0\). The solution for \(\phi\) can be expressed as \(\phi(t)\simeq\phi_{0}(t)\cos(\omega t)\), where \(\omega=m_{\phi}\) and \(\rho_{\phi}\simeq(m_{\phi}^{2}/2)\phi_{0}^{2}\). Consequently, \(\mathcal{P}_{n=\pm 1}=1/2\), and \(\mathcal{P}_{n}=0\) otherwise. Therefore, only the \(n=m=1\) modes contribute, and we obtain
\[\Gamma_{\phi\phi\rightarrow\psi_{\mu}\psi_{\mu}} = \frac{2m_{\phi}\rho_{\phi}}{9\pi M_{P}^{4}}\frac{(1-\tau_{3/2})^{7/2}}{\tau_{3/2}(1+\tau_{\chi}-\tau_{3/2})^{2}}\,, \tag{102}\]
where \(\tau_{i}\equiv m_{i}^{2}/m_{\phi}^{2}\).
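As a quick consistency check, substituting \(n=m=1\), \(\omega=m_{\phi}\), \(\mathcal{P}_{\pm 1}=1/2\), \(\phi_{0}^{4}=4\rho_{\phi}^{2}/m_{\phi}^{4}\), and \(m_{3/2}^{2}=\tau_{3/2}m_{\phi}^{2}\) into Eq. (101) gives

\[\Gamma_{\phi\phi\rightarrow\psi_{\mu}\psi_{\mu}}\rho_{\phi}=\frac{2^{7}\cdot\tfrac{1}{16}\cdot 4\,m_{\phi}^{7}\rho_{\phi}^{2}\,(1-\tau_{3/2})^{7/2}}{144\pi\,m_{3/2}^{2}M_{P}^{4}\,m_{\phi}^{4}\,(1+\tau_{\chi}-\tau_{3/2})^{2}}=\frac{2m_{\phi}\rho_{\phi}^{2}}{9\pi M_{P}^{4}}\frac{(1-\tau_{3/2})^{7/2}}{\tau_{3/2}(1+\tau_{\chi}-\tau_{3/2})^{2}}\,,\]

which, upon dividing by \(\rho_{\phi}\) (with \(w_{\phi}=0\)), reproduces Eq. (102).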
|
2309.14258 | OmniEvent: A Comprehensive, Fair, and Easy-to-Use Toolkit for Event
Understanding | Event understanding aims at understanding the content and relationship of
events within texts, which covers multiple complicated information extraction
tasks: event detection, event argument extraction, and event relation
extraction. To facilitate related research and application, we present an event
understanding toolkit OmniEvent, which features three desiderata: (1)
Comprehensive. OmniEvent supports mainstream modeling paradigms of all the
event understanding tasks and the processing of 15 widely-used English and
Chinese datasets. (2) Fair. OmniEvent carefully handles the inconspicuous
evaluation pitfalls reported in Peng et al. (2023), which ensures fair
comparisons between different models. (3) Easy-to-use. OmniEvent is designed to
be easily used by users with varying needs. We provide off-the-shelf models
that can be directly deployed as web services. The modular framework also
enables users to easily implement and evaluate new event understanding models
with OmniEvent. The toolkit (https://github.com/THU-KEG/OmniEvent) is publicly
released along with the demonstration website and video
(https://omnievent.xlore.cn/). | Hao Peng, Xiaozhi Wang, Feng Yao, Zimu Wang, Chuzhao Zhu, Kaisheng Zeng, Lei Hou, Juanzi Li | 2023-09-25T16:15:09Z | http://arxiv.org/abs/2309.14258v1 | # OmniEvent: A Comprehensive, Fair, and Easy-to-Use Toolkit for Event Understanding
###### Abstract
Event understanding aims at understanding the content and relationship of events within texts, which covers multiple complicated information extraction tasks: event detection, event argument extraction, and event relation extraction. To facilitate related research and application, we present an event understanding toolkit OmniEvent, which features three desiderata: (1) **Comprehensive.** OmniEvent supports mainstream modeling paradigms of all the event understanding tasks and the processing of \(15\) widely-used English and Chinese datasets. (2) **Fair.** OmniEvent carefully handles the inconspicuous evaluation pitfalls reported in Peng et al. (2023), which ensures fair comparisons between different models. (3) **Easy-to-use**. OmniEvent is designed to be easily used by users with varying needs. We provide off-the-shelf models that can be directly deployed as web services. The modular framework also enables users to easily implement and evaluate new event understanding models with OmniEvent. The toolkit1 is publicly released along with the demonstration website and video2.
Footnote 1: [https://github.com/THU-KEG/OmniEvent](https://github.com/THU-KEG/OmniEvent)
Footnote 2: [https://omnievent.xlore.cn/](https://omnievent.xlore.cn/)
## 1 Introduction
Correctly understanding events is fundamental for humans to understand the world. Event understanding requires identifying real-world events mentioned in texts and analyzing their relationships, which naturally benefits various downstream applications, such as stock prediction Ding et al. (2015), adverse drug event detection Wunnava et al. (2019), narrative event prediction Wang et al. (2021), and legal case analysis Yao et al. (2022).
As illustrated in Figure 1, event understanding covers three complicated information extraction tasks: (1) event detection (ED), which is to detect the event triggers (keywords or phrases evoking events in texts) and classify their event types, (2) event argument extraction (EAE), which is to extract the event arguments for each trigger and classify their argument roles, and (3) event relation extraction (ERE), which is to identify the complex relationships between events, typically including temporal, causal, coreference, and subevent relations. ED and EAE together constitute the conventional event extraction (EE) task.
In recent years, event understanding research has grown rapidly Ma et al. (2022); Wang et al. (2022); Yue et al. (2023); Huang et al. (2023), and multiple practical systems Wadden et al. (2019); Lin et al. (2020); Zhang et al. (2020); Du et al. (2022); Zhang et al. (2022) have been developed. However, as shown in Table 1, existing systems exhibit several non-negligible issues: (1) **Incomprehensive Tasks**. Existing systems mainly focus on the two EE subtasks and rarely cover the whole event understanding pipeline with ERE tasks. The notable exception EventPlus Ma et al. (2021) merely covers the temporal relations. (2) **Limited Support
Figure 1: An illustration for the event understanding tasks, including event detection (ED), event argument extraction (EAE), and event relation extraction (ERE).
for Redevelopment and Evaluation**. Most of the existing event understanding systems are highly integrated and not extensible, which means users cannot easily develop new models within their frameworks. Especially considering the recent rise of large language models (LLMs)3, adequate support for LLMs is urgent but often missing. Moreover, the complicated data processing and evaluation details often lead to inconsistent and unfair evaluation results Peng et al. (2023), but existing systems do not pay much attention to evaluations.
Footnote 3: The definition of LLM is vague. Here we use βLLMβ to refer to models with more than 10 billion parameters.
To address these issues, we develop OmniEvent, a comprehensive, fair, and easy-to-use toolkit for event understanding, which has three main features: (1) **Comprehensive Support for Task, Model, and Dataset.** OmniEvent supports end-to-end event understanding from plain texts, i.e., all the ED, EAE, and ERE tasks. For ED and EAE, we classify the mainstream methods into four paradigms, including classification, sequence labeling, span prediction, and conditional generation. We implement various representative methods for each paradigm. For ERE, we provide a unified modeling framework and implement a basic pairwise classification method Wang et al. (2022). We also cover the preprocessing of \(15\) widely-used English and Chinese datasets. (2) **Fair Evaluation.** As found in Peng et al. (2023), there are three major pitfalls hidden in EE evaluation, including data processing discrepancy, output space discrepancy, and absence of pipeline evaluation. OmniEvent implements all the proposed remedies to help users avoid them. Specifically, we implement unified pre-processing for all the datasets and a method to convert the predictions of different paradigms into a unified space. OmniEvent also provides a unified set of predicted triggers for the supported datasets to enable fair pipeline comparisons. (3) **Easy-to-Use for Various Needs.** We design a modular and extensible framework for OmniEvent, which appeals to users with various needs. We provide several off-the-shelf models that can be easily deployed and used by users interested in applications. Model developers and researchers can train implemented methods within several lines of code or customize their own models and evaluate them. By integrating Transformers Wolf et al. (2020) and DeepSpeed Rasley et al. (2020), OmniEvent also supports efficiently fine-tuning LLMs as backbones.
To demonstrate the effectiveness of OmniEvent, we present the results of several implemented methods on widely-used benchmarks. We also conduct experiments with models at different scales and show that fine-tuning LLMs helps achieve better event understanding results. We hope OmniEvent could facilitate the research and applications of event understanding.
## 2 Related Work
With the advancement of research in NLP, various toolkits or systems for event understanding have been developed. They tend to focus on developing advanced EE systems to achieve improved results on public benchmarks Wadden et al. (2019); Lin et al. (2020); Nguyen et al. (2021) or perform robustly in real-world scenarios Vossen et al. (2016); Du et al. (2022). However, these toolkits or systems, designed based on a specific EE model, do not support comprehensive implementations of EE models and are inconvenient for secondary development. There is also some work that has meticulously designed user-friendly algorithmic frameworks Zhang et al. (2020, 2022), which are convenient for usage and secondary development. However, they are not specifically designed for event understanding, hence the corresponding support is limited. EventPlus Ma et al. (2021) is the only work supporting the entire event understanding pipeline but it only supports temporal relation extraction and does not provide comprehensive implementations of event understanding models. Moreover, existing work also neglects the discrepancies in EE evaluation as mentioned in Peng et al. (2023), which may result in unfair comparison. Finally, in the era of LLMs, existing work (except for DeepKE) also lacks support for LLMs.
\begin{table}
\begin{tabular}{l|c c c c c} \hline \hline \multirow{2}{*}{System} & \multirow{2}{*}{EE} & \multirow{2}{*}{ERE} & \begin{tabular}{c} \#Supported \\ Models \\ \end{tabular} & \begin{tabular}{c} \#Supported \\ Datasets \\ \end{tabular} &
\begin{tabular}{c} LLM \\ Support \\ \end{tabular} \\ \hline DYGIE & ✓ & ✗ & \(1\) & \(1\) & ✗ \\ OneIE & ✓ & ✗ & \(1\) & \(4\) & ✗ \\ OpenUE & ✓ & ✗ & \(1\) & \(2\) & ✗ \\ EventPlus & ✓ & ✓ & \(1\) & N/A & ✗ \\ FourIE & ✓ & ✗ & \(1\) & N/A & ✗ \\ RESIN-11 & ✓ & ✗ & \(1\) & N/A & ✗ \\ DeepKE & ✓ & ✗ & \(2\) & \(1\) & ✓ \\ \hline OmniEvent & ✓ & ✓ & \(>\)\(20\) & \(15\) & ✓ \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparisons between OmniEvent and other event understanding systems. The number of supported models and datasets only includes those of event understanding tasks. N/A denotes that the system is an integrated service and does not process benchmark datasets. For OmniEvent, the module combination enables many possible models and \(20\) is the number of models we have tested for usability.
Considering the mentioned issues, we present OmniEvent, a comprehensive, fair, and easy-to-use toolkit for event understanding. Compared to other systems in Table 1, OmniEvent supports the entire event understanding pipeline and comprehensively implements various models. OmniEvent also supports efficient fine-tuning and inference of LLMs. Meanwhile, OmniEvent provides respective remedies for eliminating the discrepancies as mentioned in Peng et al. (2023). With a modular implementation and several released off-the-shelf models, OmniEvent is user-friendly and easy to use.
## 3 The OmniEvent Toolkit
We introduce the overview (§ 3.1) and main features of OmniEvent (§§ 3.2 to 3.4), as well as an online demonstration (§ 3.5) powered by OmniEvent.
### Overview
The overall architecture of OmniEvent is illustrated in Figure 2. OmniEvent provides a data pre-processing module for unified pre-processing. Users can either use the supported datasets or customize their own datasets. After pre-processing, OmniEvent provides a flexible modular framework for model implementation. OmniEvent abstracts and disassembles the mainstream models into three basic modules and implements the basic modules in a highly encapsulated way. By combining our provided modules or implementing their own modules, users can easily assemble a model. OmniEvent reproduces several widely-used models in this way. Finally, OmniEvent provides a fair evaluation protocol to convert predictions of different models into a unified and comparable output space.
### Comprehensive Support
OmniEvent implements the entire event understanding pipeline, i.e., all the ED, EAE, and ERE tasks, and can serve as a one-stop event understanding platform. Furthermore, OmniEvent provides comprehensive coverage of models and datasets.
**Models** OmniEvent comprehensively implements representative models for ED, EAE, and ERE. For ED and EAE, OmniEvent covers four mainstream method paradigms, which contain: (1) classification methods, including DMCNN (Chen et al., 2015), DMBERT (Wang et al., 2019), and CLEVE (Wang et al., 2021), which classify event or argument candidates into appropriate types, (2) sequence labeling methods, including BiLSTM+CRF (Wang et al., 2020) and BERT+CRF (Wang et al., 2020), which label the sequences with the BIO format, (3) the span prediction method, including EEQA (Du and Cardie, 2020), which predicts the boundaries of event and argument spans, (4) the conditional generation method, including Text2Event (Lu et al., 2021), which directly generates the answers. Moreover, as shown in Figure 2, OmniEvent implements various basic modules and the users can easily combine different modules to build new models, e.g., combining GPT-2 (Radford et al., 2019) and CRF (Lafferty et al., 2001) (GPT-2+CRF). For event relation extraction, OmniEvent implements a unified pairwise relation extraction framework. Especially for the event coreference resolution task, OmniEvent develops an antecedent ranking method. As extracting different relations (causal, temporal) may benefit each other [22], we develop a joint event relation extraction model in OmniEvent.

Figure 2: Overview of the OmniEvent toolkit. OmniEvent can serve as a system offering event understanding services to users, while also serving as a toolkit for researchers in model development and evaluation. OmniEvent provides pre-processing scripts for widely-used datasets and converts the datasets into a unified data format. OmniEvent provides modular components and users can easily develop a new model based on the components. OmniEvent also supports large language models (T5-XXL (Raffel et al., 2020) and FLAN-UL2 (Tay et al., 2023)).
**Datasets** As shown in Table 2, OmniEvent includes various widely-used Chinese and English event understanding datasets, covering general, legal, and financial domains. For each included dataset, we provide a pre-processing script to convert the dataset into a unified format, as shown in Appendix A. For datasets with different pre-processing scripts, e.g., ACE 2005, OmniEvent provides all the mainstream scripts for users.
### Fair Evaluation
As discussed in Peng et al. (2023), there exist several pitfalls in EE evaluation that significantly influence the fair comparison of different models. They lie in three aspects: data pre-processing discrepancy, output space discrepancy, and absence of pipeline evaluation. OmniEvent proposes remedies for eliminating them.
**Specify data pre-processing** As the data pre-processing discrepancy mainly comes from using different processing options, OmniEvent provides all the widely-used data pre-processing scripts. Users only need to specify the pre-processing script for comparable results with previous studies.
**Standardize output space** As suggested in Peng et al. (2023), OmniEvent provides several easy-to-use functions to convert the predictions of different models into a unified output space. Code 1 shows the conversion code for sequence labeling, span prediction, and conditional generation predictions for event detection. Users can easily utilize these functions to obtain fair and comparable results.
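Code 1 is not reproduced here; the sketch below (our illustration, not OmniEvent's actual conversion functions) shows the idea for sequence-labeling predictions, collapsing BIO tags into the unified span-level output space in which all paradigms are scored.

```python
from typing import List, Tuple

def bio_to_spans(tokens: List[str], tags: List[str]) -> List[Tuple[str, str]]:
    """Collapse BIO tags into (trigger text, event type) pairs.

    Hypothetical helper: OmniEvent's released conversion functions may differ.
    """
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags + ["O"]):           # sentinel flushes the last span
        if tag.startswith("B-") or tag == "O":
            if start is not None:                     # close the currently open span
                spans.append((" ".join(tokens[start:i]), etype))
                start, etype = None, None
        if tag.startswith("B-"):
            start, etype = i, tag[2:]
        elif tag.startswith("I-") and start is None:  # tolerate ill-formed I- starts
            start, etype = i, tag[2:]
    return spans

# Predictions from any paradigm can be reduced to the same span list,
# so span-level F1 becomes directly comparable across models.
print(bio_to_spans(["He", "was", "shot", "dead"],
                   ["O", "O", "B-Attack", "O"]))      # [('shot', 'Attack')]
```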
**Pipeline evaluation** The pipeline evaluation requires conducting EAE based on predicted triggers. Therefore, the results of EAE models are comparable only when using the same predicted triggers. OmniEvent provides a unified set of predicted triggers for widely-used datasets. Specifically, OmniEvent leverages CLEVE [22], an advanced ED model, to predict triggers for the widely-used EE datasets: ACE 2005, KBP 2016, KBP 2017, and RichERE.
### Easy-to-Use
OmniEvent is designed to be user-friendly and easy to use. Specifically, OmniEvent incorporates the following designs.
**Easy start with off-the-shelf models** OmniEvent provides several off-the-shelf models for event understanding. Specifically, we train a multilingual T5 [23] for ED and EAE on the collection of included EE datasets, respectively, and we train a joint ERE model based on RoBERTa (Liu et al., 2019) on the training set of MAVEN-ERE. As shown in Code 2, OmniEvent provides an interface for inference, and users can easily use these models in their applications with a few lines of code.
\begin{table}
\begin{tabular}{l l} \hline \hline EE & ACE 2005 [Walker et al., 2006], TAC KBP (Ellis et al., 2014, 2015, 2016; Getman et al., 2017), RichERE (Song et al., 2015), MAVEN (Wang et al., 2020), _ACE 2005 (zh)_ [Walker et al., 2006], _LEVEN_ (Yao et al., 2022), _DuEE_ (Li et al., 2020), _FewFC_ (Zhou et al., 2021) \\ \hline ERE & MAVEN-ERE (Wang et al., 2022), ACE 2005 [Walker et al., 2006], TB-Dense (Chambers et al., 2014), MATRES (Ning et al., 2018), TCR (Ning et al., 2018), CausalTB (Mirza et al., 2014), EventStoryLine (Caselli and Vossen, 2017), HiEve (Glavas et al., 2014) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Currently supported datasets in OmniEvent. _Italics_ represent Chinese datasets.
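Code 2 itself is not shown above; a minimal usage sketch is given below. The `infer` entry point and its argument names are assumptions modeled on the toolkit's design and may differ slightly from the released API.

```python
# Sketch only: the entry point and argument names are assumptions.
from OmniEvent.infer import infer

text = ("U.S. and British troops were moving on the strategic southern port "
        "city of Basra Saturday after a massive aerial assault pounded Baghdad.")

# End-to-end event extraction: detect triggers, then extract their arguments.
results = infer(text=text, task="EE")
print(results[0]["events"])
```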
**Modular implementation** As shown in Figure 2, OmniEvent abstracts and disassembles the mainstream models into basic modules. The backbone module implements various text encoders, such as CNN (Krizhevsky et al., 2012) and BERT (Devlin et al., 2019), to encode plain texts into low-dimensional dense vectors. The backbone module also supports LLMs such as T5-XXL (Raffel et al., 2020) and FLAN-UL2 (Tay et al., 2023). The aggregation module includes various aggregation operations, which aggregate and convert the dense vectors into representations of events, arguments, and relations. The classification module projects the representations into distributions over classification candidates. With this highly modular implementation, users can easily combine the basic modular components to develop new models.
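As an illustration of this modularity, the sketch below assembles a toy event detection model from three simplified stand-in modules. The class names and interfaces are ours, simplified for exposition, and do not correspond one-to-one to OmniEvent's actual components.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class Backbone(nn.Module):
    """Text encoder: tokenized text -> dense token vectors."""
    def __init__(self, name: str = "bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)

    def forward(self, input_ids, attention_mask):
        return self.encoder(input_ids=input_ids,
                            attention_mask=attention_mask).last_hidden_state

class SelectMarker(nn.Module):
    """Aggregation: take the hidden state at each trigger position."""
    def forward(self, hidden, positions):
        return hidden[torch.arange(hidden.size(0)), positions]

class LinearHead(nn.Module):
    """Classification: mention representation -> event-type logits."""
    def __init__(self, hidden_size: int, num_types: int):
        super().__init__()
        self.proj = nn.Linear(hidden_size, num_types)

    def forward(self, reprs):
        return self.proj(reprs)

class EventDetector(nn.Module):
    """Backbone + aggregation + classification assembled into an ED model."""
    def __init__(self, num_types: int):
        super().__init__()
        self.backbone = Backbone()
        self.aggregation = SelectMarker()
        self.head = LinearHead(self.backbone.encoder.config.hidden_size, num_types)

    def forward(self, input_ids, attention_mask, trigger_positions):
        hidden = self.backbone(input_ids, attention_mask)
        return self.head(self.aggregation(hidden, trigger_positions))
```

Swapping the backbone for another encoder, or the aggregation for, e.g., dynamic pooling, changes one line each, which is the spirit of the modular design.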
**Efficient support for LLMs** OmniEvent is built upon Huggingface's Transformers (Wolf et al., 2020) and DeepSpeed (Rasley et al., 2020), an efficient deep learning optimization library. With the built-in DeepSpeed support, OmniEvent can train and run inference with LLMs efficiently, requiring only modifications to the startup shell scripts.
### Online Demonstration
Besides the OmniEvent toolkit, we also develop an online demonstration system4 powered by OmniEvent. We train and deploy a multilingual T5-base model for EE and a RoBERTa-base model for event relation extraction. The website example is shown in Figure 3. The online system supports EE based on various English and Chinese classification schemata and ERE based on the MAVEN-ERE schema. The website mainly contains three parts. The input part includes a text entry field and several options. Users can choose the language, task, and ontology (i.e., classification schema) for event understanding. The results of EE are shown in the output field with extracted triggers and arguments highlighted. The results of ERE are shown as an event knowledge graph, where a node is an event and an edge is an identified relation between events. The example in Figure 3 shows the results of end-to-end event understanding (ED, EAE, and ERE) from the input plain text.
Footnote 4: [https://omnievent.xlore.cn/](https://omnievent.xlore.cn/)
## 4 Evaluation
In this section, we conduct empirical experiments to evaluate the effectiveness of the OmniEvent toolkit on widely-used datasets.
### Event Extraction
We evaluate the performance of representative EE models implemented in OmniEvent on various widely-used datasets. All the models are evaluated using the unified evaluation protocol, i.e., the output space is standardized and the results of EAE are from pipeline evaluation. The pre-processing script for ACE 2005 is the same as in Wadden et al. (2019). For EEQA, we utilize the same prompts as in the original paper for ACE 2005 and manually curate prompts for all the other datasets. The results of event detection and event argument extraction are shown in Table 3. The results demonstrate the effectiveness of OmniEvent: the implemented models achieve performance similar to their original implementations. OmniEvent provides all the experimental configuration files in the YAML format, which record all the hyper-parameters. Users can easily reproduce the results using the corresponding configuration files.

Figure 3: Example of the online demonstration. We re-arrange the layout of the website for a compact presentation. Best viewed in color.

\begin{table}
\begin{tabular}{l l c c c c} \hline \hline Task & Dataset & CLS & SL & SP & CG \\ \hline \multirow{6}{*}{ED} & ACE 2005 & \(68.6\) & \(68.6\) & \(71.0\) & \(66.0\) \\ & RichERE & \(51.4\) & \(50.1\) & \(50.4\) & \(51.4\) \\ & MAVEN & \(68.6\) & \(68.6\) & \(68.1\) & \(61.9\) \\ & ACE 2005 (ZH) & \(75.8\) & \(75.9\) & \(73.5\) & \(71.6\) \\ & LEVEN & \(85.2\) & \(84.7\) & \(84.3\) & \(81.4\) \\ & FewFC & \(67.2\) & \(62.3\) & \(59.0\) & \(71.3\) \\ \hline \multirow{4}{*}{EAE} & ACE 2005 & \(58.7\) & \(49.4\) & \(40.1\) & \(45.7\) \\ & RichERE & \(68.3\) & \(59.7\) & \(24.3\) & \(24.9\) \\ \cline{1-1} & ACE 2005 (ZH) & \(73.1\) & \(67.9\) & \(35.4\) & \(49.0\) \\ \cline{1-1} & FewFC & \(68.7\) & \(59.8\) & \(46.7\) & \(53.7\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Experimental results (F1, %) of implemented EE models in OmniEvent on various EE datasets. CLS: Classification; SL: Sequence labeling; SP: Span prediction; CG: Conditional generation. We evaluate the representative models DMBERT, BERT+CRF, EEQA, and Text2Event for CLS, SL, SP, and CG, respectively.
### Event Relation Extraction
We also conduct empirical experiments to evaluate the performance of the ERE models developed in OmniEvent on various widely-used datasets. As shown in Table 4, the results are on par with or slightly better than the originally reported results in Wang et al. (2022), which demonstrates the validity of the ERE models in OmniEvent. We also provide configuration files containing all the hyper-parameter settings for reproduction.
### Experiments using LLMs
OmniEvent supports efficient fine-tuning and inference for LLMs. To examine the effectiveness and validity of the LLM support in OmniEvent and to investigate the performance of models at different scales, we train a series of models on several datasets. Specifically, for ED and EAE, we fine-tune FLAN-T5 (Wei et al., 2022) (from Small to XXL) and FLAN-UL2 (Tay et al., 2023), an LLM with 20 billion parameters, on ACE 2005 and RichERE. For ERE, due to the lack of encoder-only LLMs, we use the same models as for ED and EAE and convert the ERE task into a sequence generation task. All the experiments are run on NVIDIA A100 GPUs. Fine-tuning FLAN-UL2 on ACE 2005 consumes only about 25 GPU hours, which demonstrates the efficiency of the LLM support in OmniEvent. The results are shown in Figure 4. We observe that larger models perform better, and FLAN-UL2 achieves remarkable performance on the ACE 2005 and RichERE datasets, which demonstrates the validity of the LLM support in OmniEvent. We also notice that the ERE results are much worse than those in Table 4, which may be due to the extremely long contexts and complex output space of the ERE task. We hope the findings based on OmniEvent can inspire future research on how to better leverage LLMs for event understanding.
## 5 Conclusion and Future Work
In this paper, we present OmniEvent, a comprehensive, fair, and easy-to-use toolkit for event understanding. With its comprehensive and modular implementation, OmniEvent can help researchers and developers conveniently develop and deploy models. OmniEvent also releases several off-the-shelf models and deploys an online system to facilitate applications of event understanding models. In the future, we will continually maintain OmniEvent to support more models and datasets and to release more effective models.
\begin{table}
\begin{tabular}{l l c c c} \hline \hline Relation Type & Dataset & P & R & F1 \\ \hline \multirow{2}{*}{Coreference} & ACE 2005 & \(94.5\) & \(81.7\) & \(87.7\) \\ & MAVEN-ERE & \(97.9\) & \(98.5\) & \(98.2\) \\ \hline \multirow{3}{*}{Temporal} & TB-Dense & \(67.9\) & \(54.0\) & \(60.2\) \\ & MATRES & \(87.2\) & \(93.8\) & \(90.4\) \\ & TCR & \(78.3\) & \(78.3\) & \(78.3\) \\ & MAVEN-ERE & \(53.3\) & \(61.4\) & \(57.1\) \\ \hline \multirow{3}{*}{Causal} & CausalTB & \(100.0\) & \(50.0\) & \(66.7\) \\ & EventStoryLine & \(19.5\) & \(25.8\) & \(22.2\) \\ & MAVEN-ERE & \(36.0\) & \(26.4\) & \(30.5\) \\ \hline \multirow{2}{*}{Subevent} & HiEve & \(21.4\) & \(13.4\) & \(16.5\) \\ & MAVEN-ERE & \(30.8\) & \(24.3\) & \(27.1\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Experimental results (%) of the implemented pairwise-based ERE model in OmniEvent on various ERE datasets. The backbone is RoBERTa-base. The evaluation metric for coreference is B-cubed (Bagga and Baldwin, 1998).
Figure 4: Experimental results of models at different scales on all event understanding tasks.
### Limitations
The major limitations of OmniEvent are two-fold: (1) OmniEvent currently does not support document-level event extraction models and datasets, such as RAMS (Ebner et al., 2020) and WikiEvents (Li et al., 2021). OmniEvent also lacks support for a wider range of ERE models, such as constrained loss (Wang et al., 2020) and ILP inference (Han et al., 2019). In the future, we will continue to maintain OmniEvent to support a broader range of models and datasets. (2) OmniEvent currently only supports two languages, Chinese and English, and does not yet support event relation extraction in Chinese. This might constrain the widespread usage of the OmniEvent toolkit. In the future, OmniEvent will support more languages.
## Ethical Considerations
We discuss the ethical considerations and broader impact of this work here: (1) **Intellectual property.** OmniEvent is open-sourced and released under the MIT license5. We adhere to the original licenses for all datasets and models used. Regarding data copyright, we do not provide the original data; we only provide processing scripts for it. (2) **Environmental impact.** The experiments are conducted on NVIDIA A100 GPUs and consume approximately 350 GPU hours. This results in a substantial amount of carbon emissions, which has a negative impact on the environment (Strubell et al., 2019). (3) **Intended use.** OmniEvent can be utilized to provide event understanding services for users, and it can also serve as a toolkit to assist researchers in developing and evaluating models. (4) **Misuse risks.** OmniEvent **should not** be utilized for processing and analyzing sensitive or uncopyrighted data. The output of OmniEvent is determined by the input text and **should not** be used to support financial or political claims.
Footnote 5: [https://opensource.org/license/mit](https://opensource.org/license/mit)
|
2309.03724 | HSTF-Model: an HTTP-based Trojan Detection Model via the Hierarchical
Spatio-Temporal Features of Traffics | HTTP-based Trojan is extremely threatening, and it is difficult to be
effectively detected because of its concealment and confusion. Previous
detection methods usually are with poor generalization ability due to outdated
datasets and reliance on manual feature extraction, which makes these methods
always perform well under their private dataset, but poorly or even fail to
work in real network environment. In this paper, we propose an HTTP-based
Trojan detection model via the Hierarchical Spatio-Temporal Features of
traffics (HSTF-Model) based on the formalized description of traffic
spatio-temporal behavior from both packet level and flow level. In this model,
we employ Convolutional Neural Network (CNN) to extract spatial information and
Long Short-Term Memory (LSTM) to extract temporal information. In addition, we
present a dataset consisting of Benign and Trojan HTTP Traffic (BTHT-2018).
Experimental results show that our model can guarantee high accuracy (the F1 of
98.62%-99.81% and the FPR of 0.34%-0.02% in BTHT-2018). More importantly, our
model has a huge advantage over other related methods in generalization
ability. HSTF-Model trained with BTHT-2018 can reach the F1 of 93.51% on the
public dataset ISCX-2012, which is 20+% better than the best of related machine
learning methods. | Jiang Xie, Shuhao Lia, Xiaochun Yun, Yongzheng Zhang, Peng Chang | 2023-09-07T14:06:15Z | http://arxiv.org/abs/2309.03724v1 | HSTF-Model: an HTTP-based Trojan Detection Model via the Hierarchical Spatio-Temporal Features of Traffics
###### Abstract
HTTP-based Trojans are extremely threatening, and they are difficult to detect effectively because of their concealment and confusion. Previous detection methods usually have poor generalization ability due to outdated datasets and reliance on manual feature extraction, which makes them perform well on their private datasets but poorly, or even fail to work, in real network environments. In this paper, we propose an HTTP-based Trojan detection model via the Hierarchical Spatio-Temporal Features of traffic (HSTF-Model), based on a formalized description of traffic spatio-temporal behavior at both the packet level and the flow level. In this model, we employ a Convolutional Neural Network (CNN) to extract spatial information and Long Short-Term Memory (LSTM) to extract temporal information. In addition, we present a dataset consisting of Benign and Trojan HTTP Traffic (BTHT-2018). Experimental results show that our model can guarantee high accuracy (an F1 of 98.62%\(\sim\)99.81% and an FPR of 0.34%\(\sim\)0.02% on BTHT-2018). More importantly, our model has a huge advantage over other related methods in generalization ability. HSTF-Model trained with BTHT-2018 can reach an F1 of 93.51% on the public dataset ISCX-2012, which is 20+% better than the best of the related machine learning methods.
keywords: HTTP-based Trojan Detection, Spatio-Temporal Features, Deep Learning
Footnote †: journal: Computers & Security
## 1 Introduction
Trojans, especially HTTP-based Trojans, are used by attackers to pass control instructions and perform malicious actions. Nowadays, on average, nearly 1.74 million host IP addresses on the global Internet are infected with the "Flying" worm Trojan every month[1]. With HTTP traffic as the carrier, attackers use Trojan scripts to transmit malicious information in the network, which poses great threats to the security of network equipment and data. Moreover, it is difficult to distinguish HTTP-based Trojan traffic from benign traffic because of its concealment and confusion.
The network equipment and data security issues caused by HTTP-based Trojans have attracted increasing attention. HTTP-based Trojan traffic detection belongs to intrusion detection. Generally, people deploy intrusion detection systems (IDSs) to prevent network attacks. Anomaly detection is one of the main methods in the intrusion detection field; it can detect unknown (0-day) attacks by constructing effective feature engineering of malicious and benign behaviors[2]. At present, researchers have proposed many anomaly detection methods based on machine learning (ML-based) and deep learning (DL-based) for the detection of specific attacks (XSS[3], DDoS[4], _etc._), but there is less research on HTTP-based Trojan traffic detection. Moreover, these methods rely heavily on expert knowledge for the design of feature engineering and are limited by their experimental scenarios, so it is difficult for the original detection method or feature engineering to maintain excellent detection performance under new conditions[5]. Designing effective feature engineering for a specific application scenario requires expert knowledge and is hard work[6]; in some cases, it is not even possible.
Over the past several years, deep learning (DL) has been widely applied and researched because of its powerful feature learning capabilities. In intrusion detection, researchers have proposed DL-based methods to detect various specific network attacks (malicious traffic[7; 8], malware[9], _etc._). Given low-dimensional input features, DL can automatically abstract them into high-dimensional features through hierarchical transfer.
According to our survey, there is currently no dedicated HTTP-based Trojan traffic dataset. Although related malicious traffic exists in some well-known datasets[10; 11; 12], the HTTP-based Trojan traffic in them is relatively small and outdated, making it difficult to cover the full picture of HTTP-based Trojan attack patterns. Therefore, we provide a real dataset consisting of Benign and Trojan HTTP Traffic (BTHT-2018) to support related research.
In this paper, we build a detection model based on DL techniques. Through feature analysis of a large amount of real traffic data, our model can better learn the essential characteristics and obtain stronger interpretability and generalization. In general, the contributions of this paper are as follows.
* For HTTP-based Trojan detection, we present a formalized description of traffic spatio-temporal behavior at both the packet level and the flow level. Based on the temporal and spatial dimensions, we take the Trojan's sending behavior (on-line, heartbeat, _etc._) and receiving behavior (control instructions, _etc._) as key monitoring steps to monitor traffic at the gateway. Traffic statistical characteristics are extracted at the packet level and flow level, and the data is then preprocessed and encoded by feature encoders.
* HSTF-Model (a model based on Hierarchical Spatio-Temporal traffic Features), a DL-based hierarchical hybrid-structure neural network, is proposed based on the temporal and spatial characteristics of the data. In the temporal domain, a flow usually consists of multiple packets, and there is a timing relationship between packets; we use LSTM to extract these temporal characteristics. In the spatial domain, most flow payloads are large-scale, with both structural and textual features; therefore, we use CNN to accelerate convergence while extracting features. We also use a fully-connected Multilayer Perceptron (MLP)[13] to process the statistics and maximize their feature attributes. Experiments show that the model has excellent robustness and generalization: HSTF-Model obtains an F1 of 99.47% when detecting HTTP-based Trojan traffic.
* We built a prototype system based on HSTF-Model. Experimental verification was performed on the dataset BTHT-2018, which we collected and cleaned, and on the public dataset ISCX-2012. In terms of robustness, the model reaches an F1 of 98.62% on an imbalanced dataset with a ratio of 1 : 100 (malicious : benign). In terms of generalization, models trained with BTHT-2018 achieve 91.14% precision, 95.72% recall, and 93.51% F1 on ISCX-2012, while other methods reach at most 73.5% F1. In addition, there is currently no dedicated HTTP-based Trojan traffic dataset; therefore, we provide BTHT-2018 to help researchers further investigate such attacks 1. Footnote 1: The dataset can be found at [https://github.com/ComputersSecurity/BTHT-2018](https://github.com/ComputersSecurity/BTHT-2018) and [https://drive.google.com/open?id=1d_SVIOzzgw2kYPIC5dKjgO151YXTDZUi](https://drive.google.com/open?id=1d_SVIOzzgw2kYPIC5dKjgO151YXTDZUi). Researchers who are going to use the dataset should indicate the original source of the data by citing this paper.
The remainder of this paper is organized as follows. The HTTP-based Trojan scenario is analyzed in Section 2, and the traffic data structure and features are introduced in Section 3. Section 4 introduces the methodology of the model. We conduct experiments in Section 5. Related works are described in Section 6. Subsequently, we discuss the model and experiments in Section 7. Section 8 draws our conclusion and outlines future research.
## 2 HTTP-based Trojan scene analysis
This section describes the general process of HTTP-based Trojan deployment and attack implementation, as illustrated in Fig. 1. It can be divided into 4 phases: the implantation phase, incubation phase, on-line phase, and attack phase.
During the implantation phase, the attacker uploads the HTTP-based Trojan script to the controlled web server. The script is then downloaded to the host when the victim accesses the server, and the HTTP-based Trojan script enters the incubation phase. After successfully lurking, it will contact the controlled server from time to time, for example by sending heartbeat packets; at this point, the script enters the on-line phase. After receiving the HTTP-based Trojan script's contact information, the attacker sends instructions to remotely control the HTTP-based Trojan to perform malicious actions (stealing private information, further intrusions, deploying springboards, _etc._). Once the HTTP-based Trojan script successfully receives an instruction, it enters the attack phase and begins to carry out the attack.
The operation of the victim is unpredictable due to human factors, so the HTTP-based Trojan is difficult to prevent during the implantation phase. During the incubation phase, the device and information are still safe, and the HTTP-based Trojan does not show malicious actions, which makes it difficult to discover the insecure factors. If the Trojan is discovered during the attack phase, it is too late; at that point, it has already started executing malicious instructions on the victim's host.
During the on-line phase, the HTTP-based Trojan starts to communicate with the outside world, sending on-line data packets and receiving instructions, both of which have spatio-temporal characteristics. Spatially, the HTTP-based Trojan generates packet data that differs from benign network behavior: the character distribution in the packet and the relative positions between the characters (like the pixels of a picture) are distinguishable. Temporally, the multiple packets of a flow have a natural temporal relationship, and the flows generated by HTTP-based Trojans and by benign network behavior have different packet sequence characteristics (size, number, _etc._), which is also an effective discriminator between HTTP-based Trojan traffic and benign traffic. The corresponding formal description is given below.
During the on-line phase, an HTTP-based Trojan generates a data stream \(bflow=(bpkt_{1},bpkt_{2},...,bpkt_{N})\), while benign network behavior produces a data stream \(wflow=(wpkt_{1},wpkt_{2},...,wpkt_{M})\).
Let there be a spatial information discrimination function \(f_{spatial}(pkt1,pkt2)=\lambda\ (0\leq\lambda\leq 1)\) over packets. When \(pkt1=pkt2\), \(\lambda=0\); when \(pkt1\neq pkt2\), \(\lambda>0\). Then for \(bflow\) and \(wflow\), there is:
\[f_{spatial}(bpkt_{i},wpkt_{j})>0\;\;(i=1,2,...,N;\ j=1,2,...,M)\]
Let there be a temporal information discrimination function \(f_{temporal}(flow1,flow2)=\lambda\ (0\leq\lambda\leq 1)\) over flows. When \(flow1=flow2\), \(\lambda=0\); when \(flow1\neq flow2\), \(\lambda>0\). Then for \(bflow\) and \(wflow\), there is:
\[f_{temporal}(bflow,wflow)>0\]
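For intuition, the sketch below gives one loose, concrete instantiation of \(f_{spatial}\) and \(f_{temporal}\) (our illustration; the formal description above leaves both functions abstract): byte-histogram distance for spatial information and packet-size-sequence distance for temporal information. Note that distinct packets with identical byte histograms would map to 0, so this only approximates the formal requirement.

```python
import numpy as np

def f_spatial(pkt1: bytes, pkt2: bytes) -> float:
    """Half the L1 distance between normalized byte histograms; in [0, 1]."""
    h1 = np.bincount(np.frombuffer(pkt1, np.uint8), minlength=256) / max(len(pkt1), 1)
    h2 = np.bincount(np.frombuffer(pkt2, np.uint8), minlength=256) / max(len(pkt2), 1)
    return 0.5 * float(np.abs(h1 - h2).sum())

def f_temporal(flow1, flow2) -> float:
    """Normalized distance between zero-padded packet-size sequences; in [0, 1]."""
    n = max(len(flow1), len(flow2))
    s1 = np.array([len(p) for p in flow1] + [0] * (n - len(flow1)), dtype=float)
    s2 = np.array([len(p) for p in flow2] + [0] * (n - len(flow2)), dtype=float)
    return float(np.abs(s1 - s2).sum() / (s1.sum() + s2.sum() + 1e-9))
```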
This paper focuses on the malicious behavior of HTTP-based Trojans. When the Trojan enters the on-line phase to communicate with the outside world, the data packets it sends and receives usually have spatio-temporal characteristics that differ from those of benign data. Under the premise of ensuring the security of equipment and data, we can effectively detect the malicious behavior of the HTTP-based Trojan.

Figure 1: General attack process of HTTP-based Trojan.
## 3 Traffic data structure and feature analysis
In this paper, a sample is a flow. A flow is defined as the packets traveling between two computer addresses using a particular protocol on a particular pair of ports[14]. Packets with the same tuple of information (\(host_{src}\), \(port_{src}\), \(host_{dst}\), \(port_{dst}\), \(HTTP\)) belong to the same flow. We define flows as full-duplex; that is, the request and response data belong to the same flow.
### Traffic data structure
We collected data from an actual network environment and generated the dataset BTHT-2018 (Benign and Trojan HTTP Traffic) after data cleaning and statistical analysis. We also extracted HTTP-based-Trojan-related traffic from ISCX-2012 and processed it into the same data format as BTHT-2018. For details, see Section 5.2.
BTHT-2018 consists of BTHT-R (Raw) and BTHT-S (Statistical), which represent the raw traffic data and the corresponding statistical features, respectively. The data types include HTTP-based Trojan traffic and benign traffic: the former comes from network operators, and the latter is collected at a laboratory gateway.
Fig. 2 shows the on-line packet of an HTTP-based Trojan. It can be seen that the packet has distinguishing features in key fields, both structural and textual. For data security protection, the sensitive data (Host, IP, _etc._) of these flows are irreversibly hashed. To ensure data consistency, benign traffic is processed with the same technique.
### Statistical feature analysis
Raw traffic data has valuable statistical features. We use expert knowledge to combine statistics and raw information, and then extract comprehensive features based on deep learning. The URL of malicious traffic, for instance, is usually longer than that of benign traffic. Such statistical features are useful in identifying the type of traffic. In this paper, each raw flow sample generates a corresponding statistical feature sample: the raw samples form the dataset BTHT-R, and the statistical feature samples constitute the corresponding dataset BTHT-S.
Figure 2: On-line packet generated by an HTTP-based Trojan script.
Traffic is just like the written content produced by people communicating with language: it consists of paragraphs, sentences, phrases, and words. Similarly, an HTTP flow is also hierarchical; Fig. 3 illustrates this feature. An HTTP flow consists of multiple packets, and a packet consists of a header line, multiple field lines, and a payload. Therefore, we also use this property to build hierarchical statistical features.
We extract features from two levels to form two feature vectors, packet-level (PL) and flow-level (FL), which take advantage of the hierarchical temporal nature of HTTP traffic data. Neural networks can further improve detection performance based on these statistical features.
#### 3.2.1 **Packet-level statistical feature**
The packets of HTTP traffic can be divided into requests and responses. The statistical features of the request packets and response packets constructed in this paper are shown in Tab. 1. For a request packet, we mainly count the request method, the URL length, the field-name and field-value lengths of each field line, and the payload length. For a response packet, we mainly count the response type and the length of the status description. RFC1998[15] recommends using 47 field lines for HTTP-based web services, but most web services do not use that many; in our dataset, 99.9% of the data uses only a few field features. Therefore, we choose to count the data of 18 fields. Of course, this value can be adjusted for different application scenarios. We keep the data consistent by dropping superfluous fields and filling deficient fields with 0. In this paper, a packet generates a \(1\times 41\) vector.
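As an illustration, the sketch below builds the \(1\times 41\) request PL-vector with the layout of Tab. 1; the numeric coding of request types is our assumption, and real header parsing is omitted.

```python
import numpy as np

REQ_TYPES = {"GET": 1, "POST": 2, "HEAD": 3, "OPTIONS": 4}  # illustrative coding
N_FIELDS = 18  # field lines kept, as specified above

def request_pl_vector(method, url, version, fields, payload):
    """Build the 1x41 request PL-vector laid out as in Tab. 1 (sketch)."""
    v = np.zeros(41)
    v[0] = REQ_TYPES.get(method, 0)              # request type
    v[1] = len(url)                              # URL length
    v[2] = version                               # protocol version, e.g., 1.1
    v[3] = len(fields)                           # number of field lines
    for i, (name, value) in enumerate(fields[:N_FIELDS]):
        v[4 + i] = len(name)                     # field-name lengths, pos 4-21
        v[22 + i] = len(value)                   # field-value lengths, pos 22-39
    v[40] = len(payload)                         # payload length
    return v
```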
#### 3.2.2 **Flow-level statistical feature**
A flow consists of multiple request packets and response packets, just as a paragraph consists of multiple sentences, and we usually analyze a paragraph through its sentences. Similarly, in the network, characteristics of traffic at the flow level can reflect more behavioral information and show the attacker's intention more comprehensively. In this paper, we divide the flow into two flow-level sequences, a request sequence and a response sequence, and then extract an FL-vector from each.

Figure 3: Hierarchical structure of HTTP traffic data.
The first is the request sequence: we count the number of packets, the counts of each request type, and the byte-size sequence. The second is the response sequence: we count the number of packets, the counts of each response type, and the byte-size sequence. Combining this with our analysis of the dataset, we specify that a sequence contains no more than 50 packets; if a sequence exceeds this, it is discarded, and if packets are missing, the sequence is padded with 0. Of course, this can also be adjusted flexibly. The composition of the statistical features of the request and response sequences is shown in Tab. 2. In this paper, a request sequence constitutes a \(1\times 57\) feature vector, and a response sequence generates a \(1\times 58\) feature vector.
A raw flow generates a PL-vector and an FL-vector through feature analysis. Together they become a sample in BTHT-S, which is used as a supplement to the raw data in subsequent experiments for traffic behavior analysis. The feature analysis uses empirical expert knowledge, which can provide more comprehensive feature information for model discrimination.
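Analogously, the sketch below builds the \(1\times 57\) request FL-vector with the layout of Tab. 2 (the response version is identical except for the response-type counters and the extra slot). Flows longer than 50 packets are discarded in our pipeline; the sketch simply truncates for illustration.

```python
import numpy as np

MAX_PKTS = 50  # maximum sequence length, as specified above

def request_fl_vector(req_pkts):
    """Build the 1x57 request FL-vector laid out as in Tab. 2 (sketch).

    `req_pkts` is a list of (method, size_in_bytes) tuples for one flow.
    """
    v = np.zeros(57)
    v[0] = len(req_pkts)                                      # packet count
    counters = {"GET": 1, "POST": 2, "HEAD": 3, "OPTIONS": 4}  # positions 1-4
    for method, _ in req_pkts:
        v[counters.get(method, 5)] += 1                       # pos 5: other types
    sizes = [size for _, size in req_pkts][:MAX_PKTS]
    v[6] = float(np.mean(sizes)) if sizes else 0.0            # mean packet bytes
    v[7:7 + len(sizes)] = sizes                               # zero-padded size seq
    return v
```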
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
\multicolumn{2}{|c|}{**Request**} & \multicolumn{2}{c|}{**Response**} \\ \hline Item Type & Position & Item Type & Position \\ \hline \hline Req type & 0 & Res type & 0 \\ Length of url & 1 & Length of state description & 1 \\ Protocol version & 2 & Protocol version & 2 \\ Lines of fields & 3 & Lines of fields & 3 \\ Length of fields name & 4-21 & Length of fields name & 4-21 \\ Length of fields value & 22-39 & Length of fields value & 22-39 \\ Length of payload & 40 & Length of payload & 40 \\ \hline \end{tabular}
\end{table}
Table 1: Composition of the packet-level vector (PL-vector).
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
\multicolumn{2}{|c|}{**Request**} & \multicolumn{2}{c|}{**Response**} \\ \hline Item Type & Position & Item Type & Position \\ \hline \hline Count of req pkts & 0 & Count of res pkts & 0 \\ Count of βgetβ & 1 & Count of β1XXβ & 1 \\ Count of βpostβ & 2 & Count of β2XXβ & 2 \\ Count of βheadβ & 3 & Count of β3XXβ & 3 \\ Count of βoptionsβ & 4 & Count of β4XXβ & 4 \\ Count of other requests & 5 & Count of β5XXβ & 5 \\ Mean of pkt bytes & 6 & Count of other responses & 6 \\ Seq of pkts bytes & 7-56 & Mean of pkt bytes & 7 \\ & & Seq of pkts bytes & 8-57 \\ \hline \end{tabular}
\end{table}
Table 2: Composition of the flow-level vector (FL-vector).
## 4 Model methodology
In this section, we build a hybrid structure neural network model based on deep learning, called HSTF-Model. We introduce the structure of the model and the main network components, and analyze the complexity of the model.
### Overview
The structure of HSTF-Model is shown in Fig. 4. A flow is divided into a request and a response. In addition to the raw data, there is a series of spatio-temporal sequence statistical features. Feature encoders based on MLP are used for feature encoding. Subsequently, a CNN is designed to process the raw packet features to extract spatial information. The raw packet features are combined with the corresponding statistical features to form feature vectors at the packet level. Multiple feature vectors form a temporal feature sequence, and the aggregated features are extracted by the LSTM. Finally, the processing results are combined with statistical features at the flow level, and the discrimination result is output through the fully connected network. Experiments show that this combination of raw information and statistical features can effectively improve the performance of the model. We can also change the parameters of the model in the combination stage, such as the CNN output size and the corresponding packet-level statistical feature size, to reflect the relative credibility of the raw traffic and the statistical features.
Figure 4: Architecture of HSTF-Model. (**F**: FL-vector; **P**: PL-vector; **R**: Raw-data; ***\({}^{\prime}\)**: Intermediate variables (\(R^{\prime}\), _etc._); **EF**: Feature Encoder of statistical data at flow-level; **EP**: Feature Encoder of statistical data at packet-level; **ER**: Feature Encoder of raw data).
### Encoder
The encoder converts the input data into feature codes suitable for neural network processing. We use MLP to build multiple encoders. MLP is a kind of feedforward artificial neural network model that maps each data input to a single output. Its structure includes an input layer, hidden layers, and an output layer. Non-linear activation functions are used between layers, which allows MLP to handle non-linear problems (XOR problems, _etc._). DNN inherits and develops MLP, adding more hidden layers and richer activation functions to improve the model's ability to fit complex functions. Therefore, an MLP can be considered a simple DNN, and a DNN a complex, upgraded MLP.
For the raw packet data, the model extracts features from the traffic in the preprocessing stage and converts them into a standard format that can be processed by the neural network. The basic preprocessing transforms a packet into two-dimensional data in the form of an image, where each pixel is a text-character tensor. The packets that compose a flow form a matrix sequence, which enters the subsequent neural network through the feature encoder. The raw data feature encoder is shown in Fig. 5: the preliminarily processed raw text is transformed into a richer and more suitable tensor representation through the MLP.
Before the data enters the encoder, we normalize it to unify the sample distribution and speed up the convergence of the model. Since most HTTP data is ASCII encoded, we directly use the mod (%) 128 operation to fold each character's value into the ASCII range. In this way, the data is transformed as a whole, and the original relative relationships between values are maintained while the values are constrained to a specific interval. The preprocessing algorithm for text data is described in Algorithm 1.
For the spatio-temporal sequence statistical features, at the packet level, a request packet and a response packet each generate a \(1\times 41\) statistical vector. At the flow level, the request sequence and response sequence generate \(1\times 57\) and \(1\times 58\) feature vectors, respectively. A flow sample thus generates four statistical vectors of \(n\times 41\), \(m\times 41\), \(1\times 57\), and \(1\times 58\), where \(n\) and \(m\) are the numbers of request and response packets in the flow. There is no clear correlation between the statistical features, the amount of input data is small, and the feature relationships are relatively easy to extract. Therefore, we build encoders based on MLP to process the statistical features, learn them, derive richer data expressions, and further transform the feature vectors into multiple feature combination representations. As shown in Fig. 6, the statistical feature encoders are divided into PL feature encoders and FL feature encoders; they differ only in the number of neurons and share the same preprocessing process.

Figure 5: Feature encoder of raw data.
Before the statistical features enter the feature encoder, we also apply normalization to scale them to \(0\sim 1\) while maintaining the relative relationships of the statistical data, which makes them easier for the encoder to learn. As a rule of thumb, we propose a sigmoid-like function as the normalization method for statistical data[16]. The corresponding statistical feature processing algorithm is described in Algorithm 2.
### Cnn
CNN is a deep feed-forward neural network that includes convolution and pooling operations. It has representation learning capability and can perform translation-invariant classification of the input data according to its hierarchical structure. The iterative upgrades of hardware computing power have enabled CNNs to train faster. In addition, the hidden layers of a CNN share convolution kernel parameters and have sparse inter-layer connections, which make it possible to effectively extract the spatial structure features of the data with a small amount of computation and without feature engineering.
The core of a CNN is the convolution and pooling layers, and the corresponding operations used in this paper are shown in Eq (1) and Eq (2). The cross-correlation calculation is performed between the output feature map \(Z^{l}\) of layer \(l\) and the convolution kernel \(w^{l+1}\) of layer \(l+1\); this forms the output of the convolutional layer, and then the pooling operation (if it exists) is performed to form the overall convolutional output of layer \(l+1\). \(K\) is the number of channels of the feature map, \(f\) is the corresponding convolution kernel and pooling size, \(s_{0}\) is the stride, and \(p\) is the amount of padding. For structured information, each convolution kernel traverses the input features regularly, performs matrix multiplication, and superimposes the bias onto the features. Multiple convolution kernels can extract high-order global features of the data from multiple angles. The function of the pooling operation is feature selection and information filtering, downsampling the output of the convolution kernels. According to the preset pooling function, the pooling layer converts point features into regional features, further aggregating features and reducing model overfitting.

Figure 6: Feature encoder of statistical data at packet level and flow level.
```
1:\(raw\_pkt\): a raw packet. \(m\): matrix size. \(s1,s2,...,sn\): Number of neurons in each layer of encoder.
2:\(Image\_pkt\): Feature image matrix of raw packet.
3:Step 1: Convert the packet into standard matrix of numerical form
4:\(raw\_pkt=(int)raw\_pkt\) % 128
5:\(mat=a\ matrix\ of\ m[0]\times m[1]\)
6:\(mat[0]=the\ first\ m[1]\ elements\ of\ raw\_pkt[0]\)
7:for\(i=1\) to \(m[0]-2\)do
8:\(mat[i]=the\ first\ m[1]\ elements\ of\ raw\_pkt[i]\)
9:endfor
10:\(mat[m[0]-1]=the\ first\ m[1]\ elements\ of\ raw\_pktpayload\)
11:Step 2: Build n-layer feature encoder
12:for\(i=1\) to \(n\)do
13: Layer-i = The full connectivity layer of \(si\) neurons
14:endfor
15: raw_encoder = Layer-1 + Layer-2 +... + Layer-n
16:Step 3: Compute feature image
17:\(Image\_pkt=raw\_encoder(mat)\)
```
**Algorithm 1** Feature coding algorithm of raw data
```
1:\(stat\_seq\): a statistical sequence. \(s1,s2,...,sn\): Number of neurons in each layer of encoder.
2:\(Feature\_seq\): Statistical feature coding sequence.
3:Step 1: Normalize each element of statistical sequence
4:\(stat\_seq=\frac{1-e^{-stat\_seq}}{1+e^{-stat\_seq}}\)
5:Step 2: Build n-layer feature encoder
6:for\(i=1\) to \(n\)do
7: Layer-i = The full connectivity layer of \(si\) neurons
8:endfor
9:\(stat\_encoder=Layer-1\) + Layer-2 +... + Layer-n
10:Step 3: Compute feature sequence
11:\(Feature\_seq=stat\_encoder(stat\_seq)\)
```
**Algorithm 2** Feature coding algorithm of statistical data
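For reference, a compact Python rendering of the preprocessing steps in Algorithms 1 and 2 is sketched below; the matrix size \(m\) is an illustrative assumption, and the MLP encoder stages are omitted.

```python
import numpy as np

def encode_raw_packet(raw_pkt_lines, m=(16, 128)):
    """Step 1 of Algorithm 1: packet text -> m[0] x m[1] numeric matrix.

    Characters are folded with % 128 (ASCII range); rows are truncated or
    zero-padded to m[1] columns. The matrix size m is an assumed example.
    """
    mat = np.zeros(m)
    for i, line in enumerate(raw_pkt_lines[:m[0]]):
        codes = [ord(c) % 128 for c in line[:m[1]]]
        mat[i, :len(codes)] = codes
    return mat

def normalize_stats(stat_seq):
    """Step 1 of Algorithm 2: sigmoid-like squashing of statistics into [0, 1)."""
    x = np.asarray(stat_seq, dtype=float)
    return (1.0 - np.exp(-x)) / (1.0 + np.exp(-x))
```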
\[Z^{l+1}(i,j)=[Z^{l}\otimes w^{l+1}](i,j)+b=\sum_{k=1}^{K_{l}}\sum_{x=1,y=1}^{f}\left[Z_{k}^{l}(s_{0}i+x,s_{0}j+y)\,w_{k}^{l+1}(x,y)\right]+b \tag{1}\]

\[Z_{k}^{l+1}(i,j)=\left[\sum_{x=1,y=1}^{f}Z_{k}^{l}(s_{0}i+x,s_{0}j+y)^{p}\right]^{\frac{1}{p}} \tag{2}\]
Based on the structural characteristics of the packets and the advantages of CNNs, we use a CNN to process the raw packet data. A packet is preprocessed to form a two-dimensional 'image' with one channel. We extract pixel features with convolution kernels: a single convolution kernel extracts local features, and multiple convolution kernels form a convolution layer that extracts the overall features. Then, we use max pooling (\(p\rightarrow+\infty\) in Eq (2)) to filter and process the features in the convolution layer output. The \(ReLU\) activation function is applied between network layers to inject nonlinearity into the model.
### **Lstm**
LSTM is an improvement of the Recurrent Neural Network (RNN). Traditional RNNs suffer from memory loss and cannot model the nonlinear relationships of long-span sequence information. When the sequence is too long, traditional RNNs experience gradient vanishing or explosion, which limits them to using only neighboring data features. LSTM uses gate structures to selectively remember and forget past information. A gate consists of an activation layer and a pointwise operation. By learning from previous data and using it in the current task, the gate structures allow the LSTM to selectively store information for subsequent processing, which allows information to be passed further along the timing chain.
Sundermeyer _et al._[17] show that the most important structure in the LSTM is the forget gate, followed by the input gate, and finally the output gate. The three gate structures constitute an LSTM cell, an intelligent network unit. Sequence data is fed into this cell in a loop, which allows it to record long-term dependent data characteristics.
Before storing new information, the cell chooses to discard part of the information passed in from the previous step. The forget gate, \(f_{t}\), gives a value between 0 and 1 according to the stored information \(h_{t-1}\) and the current input \(x_{t}\), and selects the proportion of information to retain.
\[f_{t}=\delta(W_{f}\cdot[h_{t-1},x_{t}]+b_{f}) \tag{3}\]
After discarding the information, the input gate, \(i_{t}\), decides how much new information to add.
\[i_{t}=\delta(W_{i}\cdot[h_{t-1},x_{t}]+b_{i}) \tag{4}\]
Based on the current task, the cell computes the candidate information \(\tilde{C}_{t}\) to be added. The forget and input gates combine to determine the latest cell state \(C_{t}\).
\[\begin{split}\tilde{C}_{t}&=\tanh(W_{C}\cdot[h_{t -1},x_{t}]+b_{C})\\ C_{t}&=f_{t}\times C_{t-1}+i_{t}\times\tilde{C}_{t }\end{split} \tag{5}\]
After the new information is stored, we need to determine the output state \(h_{t}\) of the cell, which is controlled by the output gate, \(o_{t}\).
\[\begin{split} o_{t}&=\delta(W_{o}\cdot[h_{t-1},x_{t }]+b_{o})\\ h_{t}&=o_{t}\times\tanh(C_{t})\end{split} \tag{6}\]
Through these three gate structures, the LSTM can automatically select which information to process and transmit, effectively avoiding the long-term dependence problem.
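The sketch below implements one cell step of Eqs. (3)-(6) directly in NumPy, with \(\delta\) as the sigmoid function; weight shapes and initialization are left to the caller.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W_f, b_f, W_i, b_i, W_C, b_C, W_o, b_o):
    """One LSTM cell step following Eqs. (3)-(6); weights act on [h_prev, x_t]."""
    z = np.concatenate([h_prev, x_t])
    f_t = sigmoid(W_f @ z + b_f)            # forget gate, Eq. (3)
    i_t = sigmoid(W_i @ z + b_i)            # input gate,  Eq. (4)
    c_tilde = np.tanh(W_C @ z + b_C)        # candidate state, Eq. (5)
    c_t = f_t * c_prev + i_t * c_tilde      # new cell state,  Eq. (5)
    o_t = sigmoid(W_o @ z + b_o)            # output gate, Eq. (6)
    h_t = o_t * np.tanh(c_t)                # hidden output, Eq. (6)
    return h_t, c_t
```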
In a flow, each packet is processed by the CNN and MLP to output feature data of the same dimension. These data naturally form a temporal sequence because of the conversational nature of the traffic: later data depends on earlier data, and the dependencies in the sequence may have a long span due to factors such as network delay and retransmission. Therefore, we use LSTM to process these feature data. After each feature vector is processed by the LSTM cell, the state is passed on to subsequent steps, and the cell outputs the processing result of the whole sequence at the last time step. These features are combined with the flow-level statistical feature encoding and enter the subsequent network layers.
### **Complexity**
We analyze the complexity of HSTF-Model. Once the data features and model parameters are determined, the training time complexity of the model is linear in the sample size, and the space complexity is constant.
#### 4.5.1 **Time complexity**
Time complexity is the number of statement executions in an algorithm, also called statement frequency or time frequency. In this paper, the time complexity of a neural network is the number of operations the model performs during training.
For the CNN, the time complexity is the sum of the computing time of each layer. A single convolution kernel needs to traverse all the inputs, and its time complexity is the product of the size of the convolution kernel and the size of the input data. The overall time complexity of the CNN is shown in Eq (7), where \(D_{CNN}\) is the depth of the CNN, \(M\) is the size of a convolution kernel's output feature map, \(K\) is the convolution kernel size, \(C_{l-1}\) is the number of output channels of the previous layer, and \(C_{l}\) is the number of output channels of the current layer, i.e., the number of convolution kernels (\(C_{0}=P\cdot n_{req}\cdot n_{res}\), where \(n_{req}\) and \(n_{res}\) are the numbers of request and response packets in a flow, and \(P\) is the packet size).
\[Time_{CNN}\sim O\left(\sum_{l=1}^{D_{CNN}}M_{l}^{2}\cdot K_{l}^{2}\cdot C_{l-1 }\cdot C_{l}\right) \tag{7}\]
For the LSTM, time is mainly consumed in proportion to the length of the input sequence, as shown in Eq (8), where \(L\) is the sequence length (\(L=n_{req}+n_{res}\)) and \(C_{LSTM}\) is the cell state dimension.
\[Time_{LSTM}\sim O\left(C_{LSTM}\cdot L\right) \tag{8}\]
For fully connected layers, the neurons of adjacent layers are connected to each other, and the operations between layers are multiplicative. Therefore, the time complexity is shown in Eq (9), the product of the neuron dimensions of each layer, where \(D_{FC}\) is the number of layers and \(S_{l}\) is the number of neurons in layer \(l\).
\[Time_{FC}\sim O\left(\prod_{l=1}^{D_{FC}}S_{l}\right) \tag{9}\]
The time complexity of our proposed HSTF-Model is mainly composed of sample size, CNN processing, LSTM processing, and fully connected layer, as shown in Eq (10), where \(N\) is the sample size.
\[Time_{HSTF-Model}=N\cdot\left(Time_{CNN}+Time_{LSTM}+Time_{FC}\right)\sim O\left(\alpha\cdot N\right)\sim O\left(N\right) \tag{10}\]
In Eq (10), \(\alpha\) is a variable determined by the data feature size and the model parameters; once these are fixed, it is a constant. In general, the time complexity is linear in the sample size.
#### 4.5.2 **Space complexity**
Space complexity is a measure of the storage space required for an algorithm to execute within a computer. The spatial complexity of the neural network is the sum of its total parameters and the output feature maps of each layer. The total parameter amount is the sum of the weight and bias parameters of each neuron in the network. The output feature map of each layer is the temporary space occupied by the output of the neuron when processing data.
The space complexity of the parameters of HSTF-Model is shown in Eq (11), which is mainly composed of the CNN, the LSTM, and the fully connected layers, where \(D_{CNN}\) is the depth of the CNN, \(K\) is the convolution kernel size, \(C_{l-1}\) is the number of output channels of the previous layer, \(C_{l}\) is the number of output channels of the current layer, \(C_{LSTM}\) is the state dimension of the LSTM, \(D_{FC}\) is the depth of the fully connected layers, and \(S_{l}\) is the number of neurons in layer \(l\).
\[Space_{parameter}\sim O\left(\left(\sum_{l=1}^{D_{CNN}}K_{l}^{2}\cdot C_{l-1} \cdot C_{l}\right)+C_{LSTM}+\left(\sum_{l=1}^{D_{FC}}S_{l}\right)\right) \tag{11}\]
The output feature map complexity is the size of the feature map calculated by each network layer in the actual operation. After the depth parameter is fixed, space is mainly related to the amount of data. As shown in Eq (12), \(M\) is the output space size of CNN.
\[Space_{feature}\sim O\left(\left(\sum_{l=1}^{D_{CNN}}M_{l}^{2}\cdot C_{l} \right)+C_{LSTM}\cdot n_{req}\cdot n_{res}+\left(\sum_{l=1}^{D_{FC}}S_{l} \right)\right) \tag{12}\]
The overall spatial complexity of HSTF-Model is shown in Eq (13), \(b\) is the sample size for a batch.
\[Space_{HSTF-Model}=Space_{parameter}+b\cdot Space_{feature}\sim O\left(\beta\right)\sim O\left(1\right) \tag{13}\]
In Eq (13), \(\beta\) is determined by the batch size and the model size; once these are fixed, it is a constant. In general, the space complexity is \(O(1)\).
#### 4.5.3 **Comparison with other methods in complexity**
We analyze machine learning methods that are widely used in anomaly detection (SVM (Sundararajan et al., 2017), _etc._) and derive the relationship between complexity and training sample size on the premise that the sample features and model parameters are determined. The results are shown in Tab. 3, where \(N\) is the training sample size. The time and space complexity of HSTF-Model matches the optimal complexity among these methods. In addition, as the experiments in Section 5.7 show, HSTF-Model has better detection and generalization performance.
## 5 Experiment and evaluation
In this section, we build a prototype system and test the proposed detection method on real and public datasets using different evaluation indicators under the same evaluation criteria. The optimal structure of the model is determined, its robustness and generalization are tested, and the results are analyzed and discussed.
\begin{table}
\begin{tabular}{|c|c|c|} \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c|}{**Complexity**} \\ \cline{2-3} & _Time_ & _Space_ \\ \hline \hline Naive Bayes & \(O(N)\) & \(O(1)\) \\ \hline Decision Tree & \(O(NlogN)\) & \(O(1)\) \\ \hline SVM & \(O(N^{2})\) & \(O(1)\) \\ \hline HSTF-Model & \(O(N)\) & \(O(1)\) \\ \hline \end{tabular}
\end{table}
Table 3: Complexity comparison of HSTF-Model with other methods.
### Prototype system
We propose a prototype system based on HSTF-Model, combined with traffic spatio-temporal behavior modeling, to effectively detect HTTP-based Trojan traffic. The algorithm is shown in Algorithm 3. First, the data is preprocessed (normalized, _etc._) according to the sequence model and transformed into tensors suitable for neural network processing. Subsequently, the low-dimensional features are input into HSTF-Model, and the feature encoders aggregate and unify the data features. The model extracts packet information through the CNN and sequence information through the LSTM to further learn and abstract features, and finally outputs the detection result.
```
1:\(raw\_flow\) : a raw flow. \(\lambda\) : threshold to determine whether data is malicious. \(P\) : Relevant parameters of building model.
2:\(class\): "Malicious" or "Benign".
3:Step 1: Preliminarily process the data and extract statistical features
4:\(req\_pkt\_seq,req\_pl\_seq,req\_fl,res\_pkt\_seq,res\_pl\_seq,res\_fl=raw\_flow\)
5:Step 2: Build HSTF-Model
6:\(\text{model = HSTF-Model}(P)\)
7:Step 3: Compute results
8:\(\text{p = model}(req\_pkt\_seq,req\_pl\_seq,req\_fl,res\_pkt\_seq,res\_pl\_seq,res\_fl)\)
9:if\(p[1]>\lambda\)then
10: class = "Malicious"
11:else
12: class = "Benign"
13:endif
```
**Algorithm 3** Detection algorithm against malicious traffic
### Data collection
We collected traffic from an actual network and generated the dataset BTHT-2018 to verify the detection method proposed in this paper. The data includes HTTP-based Trojan traffic and benign traffic, which are filtered and extracted through data cleaning techniques (deletion of irrelevant data, flow reassembly, _etc._). At the same time, we analyze and screen the public dataset ISCX-2012 to support the generalization verification of the model.
#### 5.2.1 **Btht-2018**
BTHT-2018 consists of BTHT-R and BTHT-S, which represent the raw traffic data and the corresponding spatio-temporal statistical features, respectively. Benign traffic, derived from benign behavior, accounts for 99% of the dataset, and HTTP-based Trojan traffic, derived from malicious Trojans, accounts for approximately 1%. The specific dataset details are shown in Tab. 4.
Benign traffic comes from the gateway exit of our network lab. After obtaining authorization, we deployed a traffic collection device at the gateway under the premise of ensuring security and data privacy. Finally, we collected about 300GB of traffic data in a month, mainly comprising news browsing, social activities, web traffic, data downloads, and other types. Benign traffic is labeled based on trusted applications and a whitelist of trusted access objects (IPs, domain names, etc.). We filter the data to ensure that it is benign traffic to the greatest extent. After removing erroneous and irrelevant information through data cleaning, about 4 million benign flows are obtained.
HTTP-based Trojan traffic is generated by various Trojan attacks. The data in this paper is provided by network operators. There are two sources of malicious traffic data: one is captured from the actual network by a malicious traffic detection system based on characteristic rules such as malicious domain names and IPs; the other is obtained by breeding malicious scripts under controlled and harmless conditions. In addition, we manually checked part of the labeled malicious data to further ensure its correctness. After processing the data collected through these two methods, a total of about 37,000 flows were obtained. The data types include malicious downloads, stealing attacks, malicious promotion, and secondary implants of Trojans.
We performed irreversible desensitization on the published dataset to protect user privacy, while also eliminating the threat of malicious data and preventing it from damaging the network environment.
#### 5.2.2 **Iscx-2012**
ISCX-2012 is a supervised network dataset in PCAP format published by the Canadian Institute for Cybersecurity at UNB in 2012, including actual traffic generated by HTTP, SMTP, SSH, IMAP, POP3, and FTP. By constructing various intrusion scenarios, the institute collected and tracked comprehensive interactive traffic for 7 consecutive days and injected network attacks, such as infiltrating the network from the inside and Distributed Denial of Service using an IRC botnet, on 4 of those days.
Based on a statistical analysis of ISCX-2012, we selected the traffic related to Trojans and extracted benign and malicious HTTP flow data from it. Through the same processing method as for BTHT-2018, 22,854 malicious samples and 222,494 benign samples were obtained. The specific dataset details are shown in Tab. 5. These data are combined with BTHT-2018 to verify the generalization performance of the model, that is, the detection performance on one dataset after training on another.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Statistics**} & \multicolumn{2}{c|}{**Malicious**} & \multicolumn{2}{c|}{**Benign**} & \multicolumn{2}{c|}{**Total**} \\ \cline{2-7} & \(Packet\) & \(Flow\) & \(Packet\) & \(Flow\) & \(Packet\) & \(Flow\) \\ \hline \hline Count & 2,842,054 & 37,847 & 7,280,541 & 4,044,741 & 10,122,595 & 4,082,588 \\ \hline Size & 1,013,098,849 & 2,842,054 & 4,319,581,422 & 7,280,541 & 5,332,680,271 & 10,122,595 \\ \hline Min & 93 & 1 & 12 & 1 & 12 & 1 \\ \hline Max & 8,444 & 99 & 10,427 & 2,441 & 10,427 & 2,441 \\ \hline Mean & 356.47 & 75.09 & 593.31 & 1.8 & 526.81 & 2.48 \\ \hline Var & 32.23 & 39.06 & 512.39 & 7.53 & 447.72 & 10.94 \\ \hline \end{tabular}
\end{table}
Table 4: Statistics on packet size (in bytes) and flow size (in packets) in BTHT-2018.
### **Evaluation metrics and environment configuration**
#### 5.3.1 **Evaluation metrics**
Precision and recall are used as the primary evaluation indicators to verify the detection performance of the model, as shown in Eq (14). \(F_{\beta}\) is also calculated as a comprehensive evaluation index, where \(\beta\) represents the relative weight of P and R: a larger \(\beta\ (>1)\) means R is more important, and a smaller \(\beta\ (<1)\) means P is more important. We set \(\beta=1\) to indicate that both are equally important.
\[Precision(P)=\frac{TP}{TP+FP},\qquad Recall(R)=\frac{TP}{TP+FN},\qquad F_{\beta}=\frac{(1+\beta^{2})\times P\times R}{(\beta^{2}\times P)+R} \tag{14}\]
In addition, the false positive rate (FPR) and the true positive rate (TPR) (Eq. (15)) are used as the horizontal and vertical axes for drawing ROC curves to show the expected generalization of the model. In all experiments in this paper, the ratio of malicious : benign in the test set is always 1 : 1; therefore, we can infer the value of FPR from P and R.
\[FPR=\frac{FP}{TN+FP}=\frac{R\times(1-P)}{P},\qquad TPR=\frac{TP}{TP+FN}=R \tag{15}\]
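Assuming confusion-matrix counts are available, these metrics reduce to a few lines of Python. The following is a minimal sketch; the function names are ours, and the FPR shortcut is valid only under the 1 : 1 test-set ratio used in this paper:

```python
def precision_recall_fbeta(tp, fp, fn, beta=1.0):
    """P, R and F_beta from raw counts, following Eq. (14)."""
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    f = (1 + beta ** 2) * p * r / (beta ** 2 * p + r)
    return p, r, f

def fpr_from_p_r(p, r):
    """FPR inferred from P and R, as in Eq. (15); assumes malicious : benign = 1 : 1."""
    return r * (1 - p) / p
```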
#### 5.3.2 **Environment configuration**
HSTF-Model is implemented in Python 3.5 using the Keras and TensorFlow libraries. All experiments run on Ubuntu 16.04 LTS, on a server with 64 CPU cores and 128 GB of memory. To further accelerate matrix computation, the server is equipped with 3 NVIDIA TITAN Xp GPUs.
Based on experience and preliminary experiments, we fix some basic parameters of the model. The two local networks handling requests and responses have the same structure. The convolutional layer of the CNN contains two \(2\times 8\) convolution kernels with strides = 2. The max pooling layer size is \(2\times 2\), with strides = 1. The hidden state dimension of the LSTM cell is 16. The model output dimension is 2, which represents the
| **Statistics** | **Malicious (Packet)** | **Malicious (Flow)** | **Benign (Packet)** | **Benign (Flow)** | **Total (Packet)** | **Total (Flow)** |
| --- | --- | --- | --- | --- | --- | --- |
| Count | 7,310,763 | 22,854 | 2,224,730 | 222,494 | 9,535,493 | 245,348 |
| Size | 4,229,399,657 | 7,310,763 | 978,717,447 | 2,224,730 | 5,208,117,104 | 9,535,493 |
| Min | 16 | 1 | 29 | 1 | 16 | 1 |
| Max | 184,748 | 813 | 259,200 | 3,531 | 259,200 | 3,531 |
| Mean | 578.52 | 319.89 | 439.93 | 10 | 546.18 | 38.87 |
| Var | 454.76 | 119.94 | 662.57 | 43.9 | 514.21 | 105.83 |

Table 5: Statistics on packet size (in bytes) and flow size (in packets) in ISCX-2012.
probabilities that the input is malicious or benign. Moreover, we apply dropout with a ratio of 0.3 after the LSTM to avoid overfitting, and the model is trained with the \(Adam\) optimizer with learning rate = 0.0001.
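For concreteness, the following is a minimal Keras sketch of one local branch (the request or the response side) wired with the hyper-parameters above. The reshaping of each 800-byte packet into a 25×32 grid, and the omission of the statistical-feature path, are our simplifications for illustration, not the authors' released code:

```python
from tensorflow.keras import layers, models, optimizers

FLOW_SIZE = 3          # packets per flow (Sec. 5.4.2)
H, W = 25, 32          # 800 bytes per packet laid out as a 2-D grid (assumption)

inp = layers.Input(shape=(FLOW_SIZE, H, W, 1))
# Convolution: two 2x8 kernels with strides = 2, then 2x2 max pooling, strides = 1.
x = layers.TimeDistributed(layers.Conv2D(2, (2, 8), strides=2, activation="relu"))(inp)
x = layers.TimeDistributed(layers.MaxPooling2D((2, 2), strides=1))(x)
x = layers.TimeDistributed(layers.Flatten())(x)
# The per-packet feature sequence is summarized by an LSTM with 16 hidden units.
x = layers.LSTM(16)(x)
x = layers.Dropout(0.3)(x)                      # dropout ratio 0.3 after the LSTM
out = layers.Dense(2, activation="softmax")(x)  # P(malicious), P(benign)

model = models.Model(inp, out)
model.compile(optimizer=optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
```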
There are two sources of experimental data in this paper: the dataset BTHT-2018, collected and cleaned from the actual network, and the public dataset ISCX-2012, obtained through processing. In the experiments, HSTF-Model is not only trained and tested on traffic from the same dataset; to verify its generalization ability, we also train the model on BTHT-2018 and apply it to ISCX-2012 to observe its detection ability. We further design a variety of experimental scenarios with different proportions of malicious and benign traffic to test the performance of the model on imbalanced datasets, that is, its robustness. For each experiment, we randomly select data and report the average over 10 repetitions. There is no intersection between the training set and the test set, and the ratio of malicious : benign in the test set is always 1 : 1.
### **Determination of key parameters**
The feature distributions of different datasets differ, and so do individual flows: one flow may contain many large interactive packets, while another may contain only a single small packet. For model detection, we need to determine the number and size of packets taken from each flow, which must be done experimentally. Since the combinations of packet number and size are numerous, we adopt a control-variable strategy. Because requests and responses are equally important, we set the request and response packet sizes to the same value, and likewise their packet counts.
#### 5.4.1 **Determination of optimal packet size**
Combined with the statistical analysis of the dataset, we fix the number of packets (flow size = 3) and set the packet size to 100, 200, 300, 400, 500, 600, 800, 1000, 1200, 1600, 2400, 4000 and 5000 bytes respectively. The optimal packet size is then chosen according to the experimental results.
After HSTF-Model is trained on BTHT-2018, the detection results on the test set are shown in Fig. 7. When the packet size is small, a sample provides little feature information and the detection performance of the model is relatively poor. Increasing the packet size provides more information, and all indicators of the model exceed 99%. Detection remains strong for packet sizes within 300-5000 bytes, which indicates that the model can obtain good results with relatively few features; it also shows that performance does not keep increasing with more feature information. This is because we mainly study Trojan traffic during the on-line phase, whose packets are usually small and contain only simple communication and command transmission, so a flow can be judged effectively from the leading part of its packets. We select packet size = 800 as the optimal value because the model achieves good detection results and generalization at the same time.
#### 5.4.2 **Determination of optimal flow size**
The sequence of packets in HSTF-Model is processed using an LSTM, so we need to determine the flow size (number of packets). In the experiment, we set flow size = 1, 2, 3, 4, 5, 6, 8, 16, 24, 32 and 50 respectively and choose the optimal value through experiments.
Fig. 8 shows the experimental results for determining the flow size. We choose flow size = 3 as the optimal value because the model achieves the best balance between accuracy and generalization. During the on-line phase, the connection between the attacker and the HTTP-based Trojan rarely involves the transmission of large amounts of data, so the characteristics of the traffic are usually reflected in the first few packets of a flow. As the flow size increases, the effective information contained in the newly added packets gradually decreases. With too much data, detection performance drops, because the invalid data acts as noise and interferes with the essential characteristics that the model extracts.

Figure 7: Effect of packet size on HSTF-Model (flow size = 3 packets).

Figure 8: Effect of flow size on HSTF-Model (packet size = 800 bytes).
### Efficiency of statistical features
To evaluate the contribution of the spatio-temporal sequence statistical features, we construct a contrast model (C-Model), a simplified HSTF-Model without statistical feature extraction. Its input is only the raw data: a CNN processes the raw packets, an LSTM extracts sequence features from the CNN outputs, and a fully connected layer produces the discrimination result.
We conducted experiments with different proportions of malicious traffic in the training set, different packet sizes, and different flow sizes. The results are shown in Fig. 9. HSTF-Model is superior to the model without statistical feature sequences under all conditions. The combination of statistical features and raw traffic provides a richer representation of the data samples, so the model can analyze and extract feature information with relatively fewer neurons and learns the essential features of the data more easily.
### Robustness analysis of HSTF-Model
In actual networks, the proportions of malicious and benign traffic are extremely unbalanced, and most of the traffic is benign. Therefore, whether the model retains good detection performance in a data-imbalanced experimental environment is one of the keys to its application in real network environments. We design a variety of malicious : benign training scenarios with different proportions to test the robustness of the model.

Figure 9: The detection performance of C-Model (the simplified HSTF-Model without extracting statistical features) and HSTF-Model in different scenarios. "1 : 1, 400, 3", for instance, means that the ratio of malicious : benign = 1 : 1 in training, packet size = 400 bytes and flow size = 3 packets.
The training process of the model is shown in Fig. 10, where subfigure (a) shows the change in accuracy during convergence and subfigure (b) shows the change in recall on the validation set during convergence. As the number of malicious samples in the training set decreases, the model converges more slowly and the recall on malicious samples drops. Nevertheless, HSTF-Model converges in all of these experimental scenarios, maintaining a detection accuracy above 97%.
The test results of the model are shown in Tab. 6. As the number of malicious samples in the dataset decreases, the neural network fits more benign data and mainly extracts the characteristics of benign traffic. This raises the model's detection threshold for malicious traffic, so the model becomes increasingly inclined to judge network behavior as benign.
| **malicious : benign** | **P (%)** | **R (%)** | **F1 (%)** |
| --- | --- | --- | --- |
| 1 : 1 | 99.66 | 99.28 | 99.47 |
| 1 : 3 | 99.76 | **99.74** | 99.75 |
| 1 : 6 | 99.96 | 99.66 | **99.81** |
| 1 : 10 | 99.76 | 99.42 | 99.59 |
| 1 : 16 | 99.84 | 99.00 | 99.42 |
| 1 : 24 | 99.78 | 99.34 | 99.56 |
| 1 : 50 | 99.90 | 98.34 | 99.11 |
| 1 : 100 | **99.98** | 97.30 | 98.62 |

Table 6: Test results of HSTF-Model in different scenarios.
Figure 10: Training process of the model in different scenarios (malicious : benign).
This biases the model towards precision: only when the malicious characteristics of a flow are very obvious will the model judge it as malicious. If the number of malicious samples is reduced too far, the model loses its ability to discriminate malicious traffic. However, HSTF-Model shows excellent robustness: from 1 : 1 to 1 : 100, the F1 of the model ranges from 98.62% to 99.81% and the FPR from 0.34% down to 0.02%. Even at a ratio of 1 : 100, where malicious samples account for only about 0.99% of the training data, the model still achieves a precision of 99.98%, a recall of 97.30%, and an F1 of 98.62%.
### **Generalization analysis of HSTF-Model**
#### 5.7.1 **Generalization of HSTF-Model in different scenarios**
A method is considered to generalize well when, after learning on one dataset, it can detect other datasets with different distributions. In this paper, we train HSTF-Model on the dataset BTHT-2018 and use the dataset ISCX-2012 to verify its generalization.
The test results of the experiment are shown in Tab. 7. When HSTF-Model is used to detect a dataset with a different distribution, its performance degrades. With flow size = 3 and packet size = 800, the model generalizes best, obtaining precision = 91.41%, recall = 95.72%, and F1 = 93.51% on ISCX-2012. To better show the expected generalization performance of the model, Fig. 11 presents the ROC curves of the model in different scenarios.
#### 5.7.2 **Comparison with other methods**
Generalization is usually a key concern for a model, which is expected to obtain good detection performance across different experimental scenarios. We therefore compare HSTF-Model with other methods in terms of generalization. Since existing research contains no closely matched high-quality work on HTTP-based Trojan malicious traffic detection during the on-line phase, we select several machine learning methods that are widely used in the intrusion detection field for comparison. Tian _et al._[19] used decision trees for Android repackaged malware detection based on code heterogeneity analysis, and Senavirathne _et al._[20] used decision trees for privacy attack detection based on the
| **malicious : benign** | **packet size** | **flow size** | **P (%)** | **R (%)** | **F1 (%)** |
| --- | --- | --- | --- | --- | --- |
| 1 : 1 | 400 | 3 | 85.76 | 95.64 | 90.43 |
| 1 : 1 | 400 | 6 | 77.19 | 96.02 | 85.58 |
| 1 : 1 | 800 | 3 | **91.41** | 95.72 | **93.51** |
| 1 : 1 | 800 | 6 | 80.69 | 96.44 | 87.86 |
| 1 : 10 | 400 | 3 | 83.52 | 94.98 | 88.88 |
| 1 : 10 | 400 | 6 | 80.54 | 96.43 | 87.73 |
| 1 : 10 | 800 | 3 | 87.03 | 95.72 | 91.17 |
| 1 : 10 | 800 | 6 | 80.70 | **96.52** | 87.91 |

Table 7: The generalization detection results of HSTF-Model on ISCX-2012 under different scenarios.
intruder's uncertainty. Gu _et al._[21] proposed a support vector machine (SVM) ensemble framework for intrusion detection with good robustness. Vijayanand _et al._[22] proposed an SVM-based intrusion detection system for wireless mesh networks with high detection accuracy. Bost _et al._[23] used hyperplane decisions (SVM, _etc._), Naive Bayes, and decision trees to construct a comprehensive classifier for the classification of encrypted data.
The experimental results are shown in Tab. 8. HSTF-Model clearly has the best comprehensive performance. Although Naive Bayes reaches a recall of 98.22%, its precision is only 58.78%, which means the method would produce a large number of false positives during actual detection; this is unacceptable. The decision tree and SVM methods trained on BTHT-2018 can obtain detection rates above 90% on BTHT-2018, but both show severe over-fitting on ISCX-2012, where SVM (linear) completely loses its detection ability. In general, the other methods cannot detect the attacks effectively.
In addition, we compared HSTF-Model with two non-hybrid neural networks (individual CNN and LSTM models) to verify the effectiveness of our hybrid structure. The results are shown in Tab. 9. Although CNN and LSTM are basically on par with HSTF-Model in terms of recall, their precision is far lower. The hybrid structure therefore improves the model's ability to distinguish malicious traffic, which effectively reduces false positives.
## 6 Related work
The work of this paper is HTTP-based Trojan detection, which belongs to the field of intrusion detection. The method used is anomaly detection based on deep learning. In this section, we introduce related research work and detection methods.

Figure 11: ROC curves of the model on ISCX-2012 under different scenarios. "1 : 1, 400, 3", for instance, means that the ratio of malicious : benign = 1 : 1 in training, packet size = 400 bytes and flow size = 3 packets.
### **Intrusion detection systems**
Intrusion detection is an auxiliary method for network firewalls, which uses various technical means to detect and raise alerts on network attacks. An intrusion detection system (IDS) is a network security product that uses intrusion detection methods to monitor the traffic flowing through a device, and issues an alert or takes proactive measures when suspicious transmission is found. In an increasingly complex network environment, IDSs are often deployed to protect equipment and information, and effective intrusion detection technology is the core of an IDS. Trojan attack detection is an important supplement to IDSs.
The methods used for intrusion detection can generally be divided into two categories: signature detection and anomaly detection[24]. Signature detection, also called misuse detection, establishes a characteristic behavior model for known attacks; when a similar pattern is detected in the network, the corresponding behavior is judged malicious. Signature detection maintains high accuracy on known attacks, but it cannot detect new, unknown attacks (0-day attacks), so its practical application is limited and related studies are few. Anomaly detection, also called behavior detection, is one of the mainstream intrusion detection methods[25]. It can detect 0-day attacks through feature analysis, which is very important for network security, since the data in the network is constantly updated and new types of attacks are constantly generated. In this paper, therefore, we focus on anomaly detection. Based on our survey, we divide it into classic anomaly detection, traditional machine learning-based (TML-based) anomaly detection, and deep learning-based (DL-based) anomaly detection.
| **Method** | **P (%)** | **R (%)** | **F1 (%)** |
| --- | --- | --- | --- |
| Naive Bayes | 58.78 | **98.22** | 73.50 |
| Decision Tree (CART) | 1.25 | 0.02 | 0.04 |
| Decision Tree (C4.5) | 11.36 | 0.10 | 0.20 |
| SVM (linear) | 0 | 0 | 0 |
| SVM (rbf) | 0.11 | 0.06 | 0.10 |
| HSTF-Model | **91.41** | 95.72 | **93.51** |

Table 8: Generalization comparison of HSTF-Model with other methods on dataset ISCX-2012.
| **Method** | **P (%)** | **R (%)** | **F1 (%)** |
| --- | --- | --- | --- |
| CNN | 78.14 | **95.72** | 86.04 |
| LSTM | 79.37 | 95.66 | 86.76 |
| HSTF-Model | **91.41** | **95.72** | **93.51** |

Table 9: Generalization comparison of HSTF-Model with individual CNN and LSTM on dataset ISCX-2012.
### **Classical anomaly detection**
Classical anomaly detection is the approach used in early anomaly detection work. A behavior-pattern feature library is established for benign network behavior[24]; when the traffic is detected to deviate from this profile of benign behavior, the system judges it to be abnormal[26].
Classical anomaly detection methods can detect unknown attacks to a certain extent, but they cannot match the detection capability of signature detection on known attacks[27]. Furthermore, abandoning the modeling of malicious behaviors and learning only current benign behaviors makes the model tend to judge all new network behaviors as abnormal, which leads to a high false-positive rate and causes the model to lose its discriminating ability[24]. The completeness of the feature engineering also greatly affects the detection performance of the model. Therefore, researchers have begun to pay more attention to combining signature detection and anomaly detection, learning normal and abnormal network behaviors simultaneously and extracting features comprehensively to discriminate behaviors. Because this hybrid detection mainly focuses on anomaly detection and can detect unknown attacks, it is also considered an anomaly detection method[25].
### **TML-based anomaly detection**
Many anomaly detection methods use traditional machine learning to detect malicious traffic[28], for instance Bayes[29], Markov[30], and so on. In general, these methods learn both normal and abnormal characteristics and can effectively detect known attacks as well as new types of attacks.
Mishra _et al._[31] conducted a detailed investigation and analysis of various machine learning techniques, compared their ability to detect attacks, and looked for the causes of the problems machine learning encounters when detecting intrusions. Aljawarneh _et al._[32] proposed a hybrid machine learning method for intrusion detection that uses a voting algorithm to filter the data; the hybrid algorithm consists of J48, Meta Pagging, RandomTree, REPTree, AdaBoostM1, and so on, and achieves accuracies of 99.81% and 98.56% on the binary and multi-class NSL-KDD datasets[12], respectively. Chen _et al._[33] proposed an imbalanced data gravitation-based classification (IDGC) algorithm to classify unbalanced data for detecting malicious mobile applications. Gezer _et al._[34] studied the use of machine learning to monitor banking Trojans; in their experiments a random forest classifier achieved 99.95% accuracy. Al-Yaseen _et al._[35] proposed an improved K-means algorithm for building high-quality training datasets and a multi-level hybrid intrusion detection model using support vector machines and extreme learning machines, achieving 95.75% accuracy on the KDD-Cup'99 dataset[36]. Wang _et al._[37] proposed BotMark for botnet detection based on flow-based and graph-based network traffic behaviors; BotMark uses k-means to measure the similarity and stability of flows, and uses the least-squares technique and the Local Outlier Factor (LOF) to compute anomaly scores that measure the differences of their neighborhoods. Experimental results show that BotMark's detection accuracy reaches 99.94%.
TML-based anomaly detection also has some shortcomings. Most such methods require expert knowledge to design feature engineering: features are designed first, and supervised or unsupervised algorithms then build detection models on top of them. However, designing high-performance feature engineering that reflects the essential characteristics of the data remains an ongoing research issue, namely representation learning[6]. Moreover, because this kind of detection relies on hand-crafted features, the uncertainty of human factors and the limitations of specific scenarios prevent the detection models from achieving good robustness and generalization.
### **DL-based anomaly detection**
Deep learning has also attracted widespread attention in the field of intrusion detection[38], and many researchers have proposed intrusion detection methods based on it. After simple preprocessing of the network behavior, the neural network can automatically extract features from the data to update its parameters and can perform incremental learning. DL-based anomaly detection does not require researchers to invest additional resources in establishing feature engineering, which reduces the difficulty of experimental research. Because the model can automatically abstract features, it can maintain good generalization as long as sufficient data is available[39]. This is an advantage over other anomaly detection methods.
Kwon _et al.[40]_ summarized deep learning methods, introduced the latest research on deep learning technology with network anomaly detection as the core, and proved the feasibility of deep learning methods in network traffic analysis. Javaid _et al.[41]_ studied the use of deep learning techniques to help system administrators detect network security vulnerabilities in organizations. In the high-dimensional problem domains of anomaly detection, Erfani _et al.[42]_ proposed a hybrid model that trains unsupervised DBN to extract general underlying features, which can effectively improve the detection speed. Zhou _et al.[43]_ extended a deep autoencoder to eliminate outliers and noise. The superior performance of this method is proved on a series of benchmark problems. Shone _et al.[44]_ proposed an asymmetric deep autoencoder (NDAE) for unsupervised feature learning, which was evaluated on KDD-Cup'99 and NSL-KDD datasets. Li _et al.[45]_ proposed an image conversion method for NSL-KDD data, using convolutional neural networks to automatically learn the features of the graph NSL-KDD transform. The results show that CNN is sensitive to image transformation of attack data and can be used for intrusion detection.
In general, based on existing research, the following points can be improved in the field of Trojan attack detection: 1) The benchmark datasets (KDD-Cup'99, _etc._) are becoming obsolete; as the network environment becomes more complex, new datasets are needed. 2) At present, there is no specific work on HTTP-based Trojan detection in the on-line phase. 3) Anomaly detection methods based on deep learning have advantages over other detection methods, but most current methods learn directly from the raw traffic and do not effectively use the experience embodied in artificial feature engineering. In this paper, we propose a method for malicious traffic detection based on deep learning. In addition to the raw data, we also use expert knowledge to extract statistical features of
spatio-temporal sequences for model training and detection, which makes the model have excellent detection performance in the actual network environment.
## 7 Discussion
### Limitations of HSTF-Model
HSTF-Model learns the characteristics of traffic data well and supports incremental learning: new data can be fed to the model iteratively, and its decision function is continuously updated, so that the model can resist concept drift to a certain extent.
However, the model still has some shortcomings. First, if a sample contains little flow data and thus few extractable features, the model is prone to misjudgment, although this is understandable. Second, although the generalization of HSTF-Model is good compared with other methods, detection performance still drops by 5%-6% across datasets, which is a problem we need to improve in the future. In addition, we have not fully verified the performance of the model on encrypted traffic; however, since HSTF-Model also extracts information from the statistical characteristics of the traffic, we believe the model can obtain good results on encrypted traffic given sufficient training.
### Limitations of experimental design and datasets
In this paper, we determine some optimal parameters through experiments. Because there are many parameter combinations, we cannot implement and evaluate all of them and only select some local optima. As a rule of thumb, we use control variables and distributed values to cover the optimal solution as far as possible.
Regarding the amount of malicious traffic: in the actual network environment, the proportion of HTTP-based Trojan malicious traffic is very low. We captured about 37,000 HTTP-based Trojan flows from the actual network environment. Although this is a small fraction of the BTHT-2018 dataset, we believe this amount of data meets the experimental needs; the experiments show that the captured HTTP-based Trojan data roughly covers the behavioral characteristics of such attacks.
Regarding the content of the experimental data: many flows contain few or even single packets, and the behavior cycle is short. This is because: 1) the attacker's server cannot receive the request (the domain name has expired or been disabled, the server is shut down, _etc._); 2) the response may be blocked by other detection systems; 3) the flow itself is a one-way message transfer. Experiments show that HSTF-Model still has excellent detection ability for such data.
## 8 Conclusion
In this paper, we propose a spatio-temporal sequence feature model to describe HTTP-based Trojan attacks, and build a prototype detection method HSTF-Model to detect HTTP-based Trojan traffic based on deep learning. Also, we collected the traffic from the actual network to generate the dataset BTHT-2018. Experiments show that the combination
of raw data and statistical features can more fully display the inherent characteristics of the traffic, and that neural networks can learn the data more fully. Fed with the dataset BTHT-2018, HSTF-Model reaches an F1 of 99.47% on the same dataset and 93.51% on the public dataset ISCX-2012 (a 20+% improvement compared with other methods), which proves that the generalization performance of the model is excellent, whereas the other traditional methods do not generalize. In addition, HSTF-Model obtains an F1 of 98.62% on the 1 : 100 dataset, proving that the model has excellent robustness.
In the future, we will take the improvement and expansion of the BTHT-2018 dataset as one of our main tasks and provide more complete data scenarios. Simultaneously, fine-grained multi-class classification is planned in order to detect more types of network attacks.
|
2309.13537 | Speech enhancement with frequency domain auto-regressive modeling | Speech applications in far-field real world settings often deal with signals
that are corrupted by reverberation. The task of dereverberation constitutes an
important step to improve the audible quality and to reduce the error rates in
applications like automatic speech recognition (ASR). We propose a unified
framework of speech dereverberation for improving the speech quality and the
ASR performance using the approach of envelope-carrier decomposition provided
by an autoregressive (AR) model. The AR model is applied in the frequency
domain of the sub-band speech signals to separate the envelope and carrier
parts. A novel neural architecture based on dual path long short term memory
(DPLSTM) model is proposed, which jointly enhances the sub-band envelope and
carrier components. The dereverberated envelope-carrier signals are modulated
and the sub-band signals are synthesized to reconstruct the audio signal back.
The DPLSTM model for dereverberation of envelope and carrier components also
allows the joint learning of the network weights for the down stream ASR task.
In the ASR tasks on the REVERB challenge dataset as well as on the VOiCES
dataset, we illustrate that the joint learning of speech dereverberation
network and the E2E ASR model yields significant performance improvements over
the baseline ASR system trained on log-mel spectrogram as well as other
benchmarks for dereverberation (average relative improvements of 10-24% over
the baseline system). The speech quality improvements, evaluated using
subjective listening tests, further highlight the improved quality of the
reconstructed audio. | Anurenjan Purushothaman, Debottam Dutta, Rohit Kumar, Sriram Ganapathy | 2023-09-24T03:25:51Z | http://arxiv.org/abs/2309.13537v1 | # Speech Dereverberation with Frequency Domain Autoregressive Modeling
###### Abstract
Speech applications in far-field real world settings often deal with signals that are corrupted by reverberation. The task of dereverberation constitutes an important step to improve the audible quality and to reduce the error rates in applications like automatic speech recognition (ASR). We propose a unified framework of speech dereverberation for improving the speech quality and the ASR performance using the approach of envelope-carrier decomposition provided by an autoregressive (AR) model. The AR model is applied in the frequency domain of the sub-band speech signals to separate the envelope and carrier parts. A novel neural architecture based on the dual path long short term memory (DPLSTM) model is proposed, which jointly enhances the sub-band envelope and carrier components. The dereverberated envelope-carrier signals are modulated and the sub-band signals are synthesized to reconstruct the audio signal. The DPLSTM model for dereverberation of the envelope and carrier components also allows the joint learning of the network weights for the downstream ASR task. In the ASR tasks on the REVERB challenge dataset as well as on the VOiCES dataset, we illustrate that the joint learning of the speech dereverberation network and the E2E ASR model yields significant performance improvements over the baseline ASR system trained on log-mel spectrograms, as well as over other benchmarks for dereverberation (average relative improvements of \(10\)-\(24\)% over the baseline system). The speech quality improvements, evaluated using subjective listening tests, further highlight the improved quality of the reconstructed audio.
Frequency domain auto-regressive modeling, Dereverberation, end-to-end ASR, Joint modeling.
## I Introduction
The widespread adoption of voice technologies like meeting assistants, smart speakers, in-car entertainment systems, and virtual assistants implies that the audio signal at the input of these systems is impacted by reverberation and noise artifacts [1]. The performance of downstream applications like automatic speech recognition, speaker/language recognition, emotion recognition or voice activity detection is shown to degrade significantly in reverberant conditions [2, 3, 4, 5, 6]. The performance deterioration is primarily attributed to the smearing of the temporal envelopes caused by reverberation [7]. The temporal smearing is caused by the superposition of the direct-path signal with reflected copies, resulting in a weighted summation of delayed components [8].
One of the approaches to deal with adverse far-field conditions is to develop a front-end which performs signal enhancement. Several techniques for dereverberation, such as signal processing based (e.g., weighted prediction error (WPE) [9]), mask estimation based (e.g., time-frequency mask estimation [10]) and multi-channel beamforming based (e.g., time-delay estimation [11], generalized eigen-value [12, 13]) methods, have been explored to improve the signal quality. On the other hand, another effective approach for system development in reverberant conditions is multi-condition training [14]. However, even with these pre-processing and multi-condition training methods, the beamformed signal contains a significant amount of temporal smearing, which adversely impacts the ASR performance [15].
In the traditional setting, the first step in the analysis of a signal is the short-time Fourier transform (STFT). The key assumption of the convolutional model of reverberation artifacts holds for a long analysis window in the time domain, or when using a convolutive transfer function with cross-band filters in the STFT domain [16, 17]. In our case, we use the former approach of a long analysis window and explore dereverberation in the sub-band envelope domain. As reverberation is a long-term convolutive effect, we highlight that a room impulse response (typically with a \(\mathrm{T}60>400ms\)) can be absorbed as a multiplication in the frequency domain, as well as a convolution in the sub-band envelope domain.
In this paper, we investigate the effect of reverberation on the long-term sub-band signals of speech using an envelope-carrier decomposition. The extraction of the sub-band envelope is achieved using an autoregressive (AR) modeling approach in the spectral domain, termed frequency domain linear prediction (FDLP). Our previous work showed that feature-level enhancement with the FDLP envelope improves speech recognition performance [18, 19]. However, the prior works did not allow the reconstruction of the audio signal for quality improvement. Further, the enhancement of the carrier signal was not addressed in the previous work, due to the challenges in handling the impulsive nature of the carrier signal.
In this paper, we propose a novel approach for the joint dereverberation of the envelope and carrier signals using a neural modeling framework. If the sub-band signals were used directly, sample-level de-convolution with a suitable loss function would be a difficult design choice for neural models to learn. Hence, we propose using an envelope-carrier decomposition of the sub-band signals. Our rationale for this decomposition-based setup is that the envelope information alone is used in the ASR experiments.
Thus, the ASR loss has to impact only the envelope dereverberation branch. However, the carrier and the envelope components are part of the signal reconstruction branch.
We develop a dual path long short term memory (DPLSTM) architecture for the dereverberation of the temporal envelope and carrier signals. In our case, the goal of the neural model is to perform a dereverberation of the envelope and the carrier components of the sub-band signal. These signals have a time profile, with varying dynamic range and properties. Further, merging all the sub-band signals in the decomposition also brings in a frequency profile. Thus, the design choice of the neural model, for enhancing the sub-band envelope-carrier signals, has to learn the sequence level patterns in both the time and frequency domains. The DPLSTM [20] is a suitable choice, as the model is able to integrate information effectively in both the time and frequency domains.
Following the dereverberation step, the sub-band modulation and synthesis step generates the reconstructed audio signal. The neural enhancement and sub-band synthesis can also be implemented as a part of the larger neural pipeline for downstream tasks like ASR, thereby enabling the joint learning of the ASR and dereverberation model parameters. We refer to the proposed approach as Dual path dereverberation using Frequency domain Auto-Regressive modeling (DFAR) and the joint end-to-end model as E2E-DFAR.
Various ASR experiments are performed on the REVERB challenge dataset [21] as well as the VOiCES dataset [22, 23]. The key contributions from this work, over the prior work [18], can be summarized as follows,
* Proposing an analysis for dereverberation with a sub-band decomposition and envelope-carrier demodulation.
* Proposing a dual-path long short time memory model named, DPLSTM for the dereverberation of sub-band envelope and carrier signals. This approach is termed as DFAR.
* Developing a joint learning scheme, where the ASR model and the DFAR model are optimized in a single end-to-end framework. This model is referred to as the E2E-DFAR.
* Benchmarking the proposed framework on two datasets - the REVERB challenge dataset and the VOiCES dataset.
## II Related prior work
### _Enhancement and dereverberation_
For speech enhancement, Xu et al. [24] devised a mapping from noisy speech to clean speech using a supervised neural network. In a similar manner, ideal-ratio-mask based neural mappings [25] have been explored for speech separation tasks. On the dereverberation front, Zhao et al. proposed an LSTM model for late-reflection prediction in the spectrogram domain [26]. Han et al. [27] developed a spectral mapping approach using log-magnitude inputs, and Williamson et al. [10] proposed a mask-based approach for dereverberation operating on the complex short-time Fourier transform. In a different line of work, speech enhancement in the time domain was pursued by Pandey et al. [28].
The application of speech dereverberation as a pre-processing step for downstream applications like ASR has been explored in several works (for example, [29, 30, 31]). Recent years have seen the use of recurrent neural network architectures for dereverberation. For example, Maas et al. [32] utilized a recurrent neural network (RNN) to establish a mapping between noise-corrupted input features and their corresponding clean targets. The use of a context-aware recurrent neural network-based convolutional encoder-decoder architecture was investigated by Santos et al. [33].
### _Robust multi-channel ASR_
In the design of robust ASR, the generalized sidelobe canceller (GSC) [34, 35] is a common approach; Li et al. [36] proposed a neural network-based generalized sidelobe canceller. To combine spectral and spatial information from multiple channels using attention layers, an end-to-end multi-channel transformer was investigated in [37]. In another attention-modeling approach, Kim et al. [38] proposed a streaming ASR model based on monotonic chunk-wise attention. Ganapathy et al. [4] proposed a 3-D CNN model for far-field ASR.
### _Joint modeling of enhancement and ASR_
The approach proposed by Wang et al. [39] incorporates a DNN-based speech separation model coupled with a DNN-based acoustic model. Wu et al. [40] explored the unification of a separately trained speech enhancement neural model and the acoustic model, where the joint model is fine-tuned to improve the ASR performance; here, the DNN-based dereverberation front-end leverages knowledge about the reverberation time. While the traditional GSC is optimized for signal-level criteria, the neural network-based GSC proposed by Li et al. [36] was optimized for the ASR cost function.
## III Proposed DFAR approach
### _Quadrature Mirror Filter (QMF)_
For the sub-band decomposition, we had the following design considerations:
* The decomposition approach should allow the long-term artifacts of reverberation to be captured in the sub-band domain as a convolution,
* The analysis method should allow a perfect reconstruction back to the audio using the synthesis part, and
* The sub-band components should be critically sampled for efficient computation of the dereverberated components in a deep neural model.
The quadrature mirror filter (QMF) meets all the above requirements, and hence this work uses QMF analysis and synthesis for the speech dereverberation task.
A quadrature mirror filter (QMF) is a filter whose magnitude response is a mirror reflection at quadrature frequency (\(\frac{\pi}{2}\)) of another filter [41]. In signal processing, the QMF filter-pairs are used for the design of perfect reconstruction filter
banks. Let \(H_{0}(e^{j\Omega})\) and \(H_{1}(e^{j\Omega})\) denote the frequency responses of the low-pass and high-pass filters, where \(\Omega\) is the digital frequency. In addition to the quadrature property (\(H_{1}(e^{j\Omega})=H_{0}(e^{j(\Omega-\pi)})\)), the filters used in QMF filter-banks also satisfy the complementary property,
\[|H_{0}(e^{j\Omega})|^{2}+|H_{1}(e^{j\Omega})|^{2}=1. \tag{1}\]
The design of a sub-band decomposition scheme with QMF involves a series of filtering and down-sampling operations for the analysis [42]; the synthesis is achieved by up-sampling and filtering operations. A tree-like structure can be formed by recursive decomposition. The down-sampling enables processing at the critical rate, where the sum of the number of samples across the sub-bands equals the number of samples in the full-band signal.
In this work, we use a uniform \(64\)-band quadrature mirror filter bank (QMF) for decomposing the input signal into \(64\) uniformly spaced frequency bands. Inspired by the audio decomposition scheme outlined by Motlicek et al. [43], we use a 6-level binary tree structure. The schematic of the sub-band decomposition is shown in Fig. 1. For the implementation in a neural pipeline, the down-sampling operation is equivalent to a stride, while the up-sampling operation is that of un-pooling.
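A minimal numpy sketch of the recursive analysis tree is given below. For clarity it uses the 2-tap Haar pair, which satisfies both the mirror and the complementary property; the prototype filter of the actual system is longer, and the leaf ordering here follows the tree rather than increasing frequency:

```python
import numpy as np

h0 = np.array([1.0, 1.0]) / np.sqrt(2.0)   # low-pass prototype
h1 = h0 * np.array([1.0, -1.0])            # mirror filter: h1[n] = (-1)^n h0[n]

def analysis(x):
    """One 2-channel QMF stage: filter, then down-sample by 2."""
    return np.convolve(x, h0)[::2], np.convolve(x, h1)[::2]

def qmf_tree(x, depth=6):
    """Recursive binary tree; depth 6 yields 2**6 = 64 critically sampled bands."""
    if depth == 0:
        return [x]
    lo, hi = analysis(x)
    return qmf_tree(lo, depth - 1) + qmf_tree(hi, depth - 1)

bands = qmf_tree(np.random.randn(16000))   # 1 s of 16 kHz audio -> 64 sub-bands
assert len(bands) == 64
```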
### _Autoregressive modeling of temporal envelopes_
The application of the linear prediction model in the frequency domain, an approach called frequency domain linear prediction (FDLP), enables the modeling of the temporal envelope of a signal with an autoregressive (AR) model [8, 44]. The sub-band signal is transformed to the spectral domain using a discrete cosine transform (DCT) [8], where a linear prediction model is applied.
Let the sub-band signal be denoted as \(x_{q}[n]\), where \(q=1,...,Q\) denotes the sub-band index. In signal processing theory, the analytic signal is a complex-valued function whose real part is the original signal and whose imaginary part is the Hilbert transform of the signal. It finds application in single side-band amplitude modulation and quadrature filtering. Let the analytic version of the sub-band signal \(x_{q}[n]\) be denoted as \(x_{q}^{a}[n]\). The corresponding analytic signal in the frequency domain, \(X_{q}^{a}[k]\), can be shown to be the one-sided discrete Fourier transform (DFT) [8] of the even-symmetric version of \(x_{q}[n]\).
We apply linear prediction (LP) on the frequency domain signal \(X_{q}^{a}[k]\). The corresponding LP coefficients are denoted by \(\{b_{p}\}_{p=0}^{m}\), where \(m\) is the order of the LP. The temporal envelope estimate of \(x_{q}^{a}[n]\) is given by,
\[e_{q}[n]=\frac{\alpha}{|\sum_{p=0}^{m}b_{p}e^{-2\pi ipn}|^{2}} \tag{2}\]
where \(\alpha\) denotes the LP gain. The envelope represents the autoregressive model of the Hilbert envelope. In this paper, we use the Burg method [45] for estimating the AR envelope.
The corresponding carrier (remaining residual signal), \(c_{q}[n]\) is found as,
\[c_{q}[n]=\frac{x_{q}[n]}{\sqrt{e_{q}[n]}} \tag{3}\]
The division operation in the expression above is well defined, as the envelope given in Eq. (2) is always positive. Further, the modeling of the temporal envelopes using the AR model ensures that the peaks of the sub-band signal in the time domain are well represented [46, 47].
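The envelope-carrier decomposition of Eqs. (2)-(3) can be sketched in a few lines of numpy. The Burg recursion below is a textbook implementation; the model order (80 poles for one sub-band signal) and the evaluation of the AR polynomial over half the unit circle, matching the even-symmetric DCT extension, are our assumptions rather than values quoted in the paper:

```python
import numpy as np
from scipy.fft import dct

def arburg(x, order):
    """Burg-method AR fit; returns coefficients a (a[0] = 1) and gain E."""
    f = np.asarray(x, dtype=float).copy()   # forward prediction error
    b = f.copy()                            # backward prediction error
    a = np.array([1.0])
    E = np.dot(f, f) / len(f)
    for _ in range(order):
        f, b = f[1:], b[:-1]
        k = -2.0 * np.dot(f, b) / (np.dot(f, f) + np.dot(b, b))
        a = np.concatenate([a, [0.0]]) + k * np.concatenate([[0.0], a[::-1]])
        f, b = f + k * b, b + k * f         # tuple form: both use the old f, b
        E *= 1.0 - k * k
    return a, E

def fdlp_envelope_carrier(x, order=80):
    """AR envelope (Eq. 2) and carrier (Eq. 3) of one sub-band signal x."""
    X = dct(x, type=2, norm="ortho")        # linear prediction in the DCT domain
    a, E = arburg(X, order)
    A = np.fft.fft(a, 2 * len(x))[: len(x)]  # half-circle evaluation of A(z)
    env = E / np.abs(A) ** 2
    return env, x / np.sqrt(env)
```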
### _Effect of reverberation on envelope and carrier signals_
The effect of reverberation on the time-domain speech signal can be expressed in the form of a convolution operation,
\[y[n]=x[n]*r[n], \tag{4}\]
where \(x[n]\) denotes the clean speech signal, \(r[n]\) is the impulse response of the room and \(y[n]\), is the reverberant speech signal. The room response function can be further split into two parts, \(r[n]=r_{e}[n]+r_{l}[n]\), where \(r_{e}[n]\) and \(r_{l}[n]\) are the early and late reflection components, respectively.
Let \(x_{q}[n]\), \(r_{q}[n]\) and \(y_{q}[n]\) denote the sub-band versions of the clean speech, room-response function and the reverberant speech signal respectively. Assuming ideal band-pass filtering, it can be shown that the analytic signal of the reverberant sub-band, \(y_{q}^{a}[n]\), is given by [8, 48],
\[y_{q}^{a}[n]=\frac{1}{2}[x_{q}^{a}[n]*r_{q}^{a}[n]], \tag{5}\]
For band-pass filters with narrow band-width, the envelopes of the reverberant speech can be approximated as [18],
\[e_{yq}[n]\simeq\frac{1}{2}e_{xq}[n]*e_{rq}[n], \tag{6}\]
where \(e_{yq}[n]\), \(e_{xq}[n]\), \(e_{rq}[n]\) denote the sub-band envelopes of reverberant speech, clean speech and the room response respectively.

Fig. 1: Illustration of a 4-channel uniform QMF decomposition using a 2-stage binary QMF tree. In our work, we use a \(64\)-channel decomposition, using a 6-stage binary tree.

Prior efforts in envelope normalization focus on suppressing the linear effects of reverberation by setting the gain of the reconstructed envelopes to unity [49]. However, in this work, we develop neural models that can remove the non-linear effects of reverberation. The reverberant sub-band envelope can also be viewed as an additive model [50, 18],
\[e_{yq}[n]=e_{yqe}[n]+e_{yql}[n], \tag{7}\]
where \(e_{yqe}[n]\) is the early reflection component (which includes the direct path and the early reflections), while \(e_{yql}[n]\) is the late reflection part of the sub-band envelope \(e_{yq}[n]\).
The key assumption in the reverberation model of Eqs. (4)-(6) is a long analysis window in the time domain. Since reverberation is a long-term convolutive effect, the room impulse response (typically with a \(\mathrm{T}60>400ms\)) can be absorbed as a multiplication in the frequency domain, and as a convolution in the sub-band envelope domain, only with a long analysis window. The widely used short-time Fourier transform (STFT) does not capture the room impulse response function directly, and hence does not allow a convolutive modeling of the artifacts; further, the phase effects in the STFT domain are cumbersome to model. These issues with the STFT are also verified experimentally in Sec. V.
**Envelope enhancement:** A neural model can be used to estimate the late reflection component \(e_{yql}[n]\) from the reverberant sub-band temporal envelope \(e_{yq}[n]\). The predicted late reflection component can then be subtracted from the sub-band envelope to suppress the artifacts of reverberation.
We pose the problem in the log domain to reduce the dynamic range of the envelope magnitude. The neural model is trained with the reverberant sub-band envelopes (\(\log e_{yq}[n]\)) as input and outputs the gain in the log domain, i.e., \(\log\frac{e_{xq}[n]}{e_{yq}[n]}\). This gain is added in the log domain to generate the dereverberated envelope estimate \(\log\hat{e}_{xq}[n]\).
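In code, applying the predicted log-gain is a one-liner; a minimal sketch, assuming `log_gain` is the network output for one sub-band:

```python
import numpy as np

def apply_log_gain(env_rev, log_gain):
    """Dereverberated envelope: log(e_hat) = log(e_rev) + g, i.e. e_hat = e_rev * exp(g)."""
    return env_rev * np.exp(log_gain)
```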
**Envelope-carrier dereverberation model**: In a similar manner, the non-linear mapping between the reverberant carrier \(c_{yq}[n]\) and the clean carrier \(c_{xq}[n]\) can be learned using a neural network. A neural model is trained with the reverberant sub-band carrier \(c_{yq}[n]\) as input and outputs a residual (an estimate of the late reflection component), which, when added to the reverberant carrier, generates the estimate of the source-signal carrier \(\hat{c}_{xq}[n]\). Instead of independent dereverberation of the envelope and the carrier, we propose to learn the mapping between the clean and reverberant versions of both the envelope and the carrier in a joint model. The input to the neural model is the sub-band reverberant envelope spliced with the corresponding carrier signal, and the network is trained to output the late reflection components of both the envelope and carrier. With this approach, the model also learns the non-linear relationships between the envelope and carrier signals for the dereverberation task. From the model output, the estimate of the clean sub-band signal \(\hat{x}_{q}[n]\) is generated. In our implementation, the audio signal is divided into non-overlapping segments of \(1\) sec. length and passed through the envelope-carrier dereverberation model. The model is outlined in Fig. 2.
### _DFAR model architecture using DPLSTM_
We propose the dual path long short term memory (DPLSTM) model for the dereverberation of the envelope-carrier components of the sub-band signal. Our proposed model is inspired by the dual path RNN proposed by Luo et al. [20]. The block schematic of the DPLSTM model architecture is shown in Fig. 3. For \(1\) sec. of audio sampled at \(16\) kHz, the envelope (\(\mathbf{E}^{y}\)) and carrier (\(\mathbf{C}^{y}\)) components of the critically sampled sub-band signals (\(64\)-channel QMF decomposition) are of length \(250\). The envelope/carrier signals of all the sub-bands for the reverberant signal (\(\mathbf{Y}\)) are of size \(64\times 250\). The combined envelope-carrier input is therefore of size \(128\times 250\), which forms the input to the DPLSTM model. The DPLSTM model outputs have the same size as the input, and the model is trained using the mean squared error (MSE) loss.
The proposed DPLSTM has two paths, one LSTM path models the recurrence along the time dimension, while the other models the recurrence along the frequency dimension. We use two separate \(3\)-layer LSTM architectures for these paths. The output dimensions are kept the same as the input dimension for each of these paths. The frequency recurrence LSTM output is transposed and these are concatenated in the frequency dimension. This combined output is fed to a multi layer bi-directional LSTM, which performs recurrence over time. The final output is split into sub-band specific envelope and carrier components. The modulation of the envelope with the respective carrier components generates the sub-band signals, which are passed through the QMF synthesis to generate the full-band dereverberated signal.
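A PyTorch sketch of this dual-path wiring is given below. The (batch, 128, 250) layout and the three-layer time and frequency paths follow the text, while the hidden sizes and the two-layer fusion depth are our assumptions:

```python
import torch
import torch.nn as nn

class DPLSTM(nn.Module):
    """Dual-path LSTM over the stacked envelope-carrier input (Fig. 3)."""

    def __init__(self, n_feat=128, n_time=250):
        super().__init__()
        # recurrence along time: sequences of length 250 with 128 features
        self.time_lstm = nn.LSTM(n_feat, n_feat // 2, num_layers=3,
                                 batch_first=True, bidirectional=True)
        # recurrence along frequency: sequences of length 128 with 250 features
        self.freq_lstm = nn.LSTM(n_time, n_time // 2, num_layers=3,
                                 batch_first=True, bidirectional=True)
        # fusion LSTM over time after concatenating the two paths
        self.fusion = nn.LSTM(2 * n_feat, n_feat // 2, num_layers=2,
                              batch_first=True, bidirectional=True)

    def forward(self, x):                                  # x: (B, 128, 250)
        t, _ = self.time_lstm(x.transpose(1, 2))           # (B, 250, 128)
        f, _ = self.freq_lstm(x)                           # (B, 128, 250)
        y, _ = self.fusion(torch.cat([t, f.transpose(1, 2)], dim=-1))
        y = y.transpose(1, 2)                              # back to (B, 128, 250)
        return y[:, :64], y[:, 64:]                        # envelope half, carrier half

env, car = DPLSTM()(torch.randn(4, 128, 250))              # smoke test
```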
Fig. 2: Block schematic of speech dereverberation model, the feature extraction module and the E2E ASR model. The red arrows denote the envelopes, \(e[n]\), and the green arrows represent the carrier, \(c[n]\). The entire model can be constructed as an end-to-end neural framework.
### _Joint learning of dereverberation model for ASR_
The joint learning of the envelope-carrier dereverberation module with the E2E ASR architecture is achieved by combining the two separate models into a single joint neural model, as shown in Fig. 2. We initialize the modules with weights obtained from the independent training of each component. Specifically, the envelope-carrier dereverberation model is trained using the MSE loss, followed by the sub-band synthesis (right half of Fig. 1). The QMF synthesis is implemented using a 1-D CNN layer to generate the dereverberated speech signal. Further, the E2E ASR architecture is separately trained on the log-mel filter bank features obtained from the dereverberated speech. The mel filter bank feature generation can also be implemented in a neural framework. Thus, the final model, composed of neural components for envelope-carrier dereverberation, sub-band synthesis, feature extraction and ASR, can now be jointly optimized using the E2E ASR loss function. This model is referred to as the E2E-DFAR model1. The trainable components are the DPLSTM model and the ASR model parameters, while the sub-band synthesis and feature extraction parameters are not learnable.
Footnote 1: The implementation of the work can be found in [https://github.com/anurejian/DFAR](https://github.com/anurejian/DFAR)
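The wiring of the joint model can be sketched conceptually as follows; `dplstm`, `synthesis`, `logmel` and `asr` are placeholders of our naming for the four stages of Fig. 2, and the re-modulation step assumes the carrier convention of Eq. (3). This is a sketch of the data flow, not the released implementation:

```python
import torch
import torch.nn as nn

class E2EDFAR(nn.Module):
    """Conceptual E2E-DFAR pipeline: dereverberation -> sub-band synthesis
    -> feature extraction -> ASR loss. Only `dplstm` and `asr` hold
    trainable parameters; `synthesis` and `logmel` are frozen."""

    def __init__(self, dplstm, synthesis, logmel, asr):
        super().__init__()
        self.dplstm, self.synthesis = dplstm, synthesis
        self.logmel, self.asr = logmel, asr
        for module in (self.synthesis, self.logmel):
            for p in module.parameters():
                p.requires_grad_(False)

    def forward(self, env_car, tokens):
        env, car = self.dplstm(env_car)                   # dereverberate both parts
        subbands = torch.sqrt(env.clamp_min(1e-8)) * car  # re-modulate (Eq. 3)
        audio = self.synthesis(subbands)                  # 1-D CNN QMF synthesis
        feats = self.logmel(audio)                        # log-mel features
        return self.asr(feats, tokens)                    # E2E ASR loss
```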
## IV Experimental setup
### _Datasets_
#### IV-A1 REVERB Challenge ASR
The audio samples in the REVERB challenge dataset [51] are \(8\)-channel recordings with both real and simulated reverberant conditions. The real samples are utterances from the MC-WSJ-AV corpus [52], spoken by human speakers in a noisy reverberant room. The simulated samples are generated by convolving six different room impulse responses with clean WSJCAM0 recordings, followed by the addition of noise at a signal-to-noise ratio (SNR) of \(20\) dB. The training data consists of \(7861\) (\(\sim 17.5\) hours) utterances, obtained by convolving the WSJCAM0 training data with \(24\) measured RIRs whose reverberation times range from \(0.2\) to \(0.8\) sec. The training, development and evaluation sets consist of \(92\), \(15\) and \(38\) speakers respectively; the development data contains \(1663\) (\(3.3\) hours) utterances and the evaluation data \(2548\) (\(5.4\) hours) utterances.
#### IV-A2 VOiCES Dataset
The VOiCES training set is a subset (\(80\) hours) of the LibriSpeech dataset. This set has utterances from \(427\) speakers recorded in clean environments with close-talking microphones. The development and evaluation sets are far-field microphone recordings from diverse room dimensions, environments and noise conditions, containing \(20\) and \(19\) hours of speech, respectively. The three sets, namely training, development and evaluation, have no overlap in terms of speakers. The robustness of the developed models is challenged by the mismatch between the training and development/evaluation sets. We artificially added reverberation and noise to the \(80\)-hour training set, which served as the training set for all E2E ASR experiments on the VOiCES dataset. The development set contains \(20\) hours of distant recordings from \(200\) speakers, and the evaluation set of \(19\) hours consists of recordings from \(100\) speakers. The training, development and evaluation sets contain \(22741\), \(4318\) and \(4600\) utterances, respectively.
### _E2E ASR baseline system_
For all the ASR experiments, we use weighted prediction error based pre-processing [9] and unsupervised generalized eigenvalue (GEV) beamforming [13]. The baseline features are \(36\)-dimensional log-mel filter bank features covering the frequency range from \(200\) Hz to \(6500\) Hz. The ESPnet toolkit [57] is used to perform all the end-to-end ASR experiments, with a PyTorch backend [58]. The model architecture uses \(12\) conformer encoder layers with \(2048\) units in the projection layer. A \(6\)-layer transformer architecture with \(2048\) units in the projection layer serves as the decoder. Both the connectionist temporal classification (CTC) loss and the attention based cross entropy (CE) loss are used in training, with the CTC weight set at \(0.3\)[59]. A single-layer recurrent neural network with \(1000\) LSTM cells is used for language modeling (RNN-LM). For training the model, we use the stochastic gradient descent (SGD) optimizer with a batch size of \(32\). For language model training, data is augmented from the Wall Street Journal (WSJ) corpus.
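The multi-task objective follows the standard ESPnet hybrid CTC/attention interpolation; a minimal sketch with the CTC weight stated above:

```python
def hybrid_ctc_attention_loss(ctc_loss, att_loss, ctc_weight=0.3):
    """Hybrid E2E objective: L = w * L_ctc + (1 - w) * L_att, with w = 0.3."""
    return ctc_weight * ctc_loss + (1.0 - ctc_weight) * att_loss
```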
Fig. 3: The dual path LSTM model architecture for envelope-carrier dereverberation. The top LSTM path models the recurrence along the time dimension while the one on the bottom models the recurrence along the frequency dimension.
### _Performance metrics_
#### Iv-C1 ASR performance metrics
* **WER/CER** (Word/Character Error Rate): The word/character error rate is given by the ratio of the number of word/character insertions, deletions and substitutions in the system output to the total number of words/characters in the reference (a minimal reference implementation is sketched below).
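The metric reduces to the standard edit-distance recursion; an illustrative sketch:

```python
def wer(ref_words, hyp_words):
    """Word error rate: (substitutions + insertions + deletions) / len(ref)."""
    n, m = len(ref_words), len(hyp_words)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i  # deleting all reference words
    for j in range(m + 1):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = d[i - 1][j - 1] + (ref_words[i - 1] != hyp_words[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[n][m] / max(n, 1)

# CER is obtained the same way on character sequences, e.g. wer(list(ref), list(hyp)).
```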
#### Iv-C2 Speech quality metrics
* **SRMR**: The speech-to-reverberation modulation energy ratio (SRMR) is a non-intrusive measure. Here, a representation is obtained using an auditory-inspired filter bank analysis of critical band temporal envelopes of the signal. The modulation spectral information is used to derive an adaptive measure termed the speech-to-reverberation modulation energy ratio [60, 61]. A higher value indicates improved quality of the given speech signal.
* **MOS** (Mean Opinion Score): To evaluate the performance of dereverberation algorithms, subjective quality and intelligibility measurement methods are needed. The most widely used subjective method is the ITU-T standard [62], where a panel of listeners are asked to rate the quality/intelligibility of the audio.
## V Experiments and results
The baseline features are the beamformed log-mel filter-bank energy features (denoted as BF-FBANK).
### _REVERB Challenge ASR_
The word error rates (WER) for the dereverberation experiments are shown in Table I. Note that all the experiments use the same input features (log-mel filter bank features) along with the same E2E ASR architecture (conformer encoder and transformer decoder). The only difference between the various rows reported in Table I is the dereverberation pre-processing applied to the raw audio waveform. All the dereverberation experiments use the DPLSTM architecture described in Sec. III.
#### V-A1 Various dereverberation configurations
In Table I, the first row is the baseline result with the beamformed audio (unsupervised GEV beamforming [13] and weighted prediction error (WPE) processing [9]).
The next set of rows compares several prior works.
* A full-band and sub-band fusion model for speech enhancement [54].
* Deep complex convolution recurrent neural network model for speech enhancement [53].
* Deep non-linear filter for multi-channel audio [55].
* Reverberation time shortening [56]
The prior works are trained on the same data settings as used in the DFAR framework. All the prior works, except DCCRN (which is not designed for ASR), improve over the baseline system in the range of \(8\)-\(11\)% relative WER. However, the proposed DFAR/E2E-DFAR approach is observed to provide the best WER, with relative improvements of \(19/34\)% on the evaluation data.
In Table I, we have also performed two ASR experiments - i) using STFT inputs (log magnitude), and ii) using the sub-band signal directly without the envelope-carrier decomposition. Both these experiments use the DPLSTM dereverberation model proposed in this work. As seen in Table I, the dereverberation on the STFT magnitude component improves the ASR system significantly over the baseline, while the dereverberation on the sub-band signal directly is not effective. However, the STFT approach is also seen to be inferior to the DFAR approach, where the envelope-carrier dereverberation is performed.
The fourth set of rows corresponds to the WER results with envelope/carrier based dereverberation alone. Relative improvements of \(2\)-\(9\)% are seen here compared to the baseline BF-FBANK. Separately, with dereverberation based on the carrier signal alone, a similar improvement is achieved. Further, the dereverberation of the temporal envelope and carrier components in a combined fashion using the DPLSTM model improves the ASR results over the separate dereverberation of the envelope/carrier components. Here, average relative improvements of \(16\)% and \(19\)% are seen on the development set and evaluation set respectively, over the BF-FBANK baseline system for the DFAR approach.
The final row in Table I reports the results using the joint learning of the dereverberation network and the E2E ASR model. The E2E-DFAR is initialized using the dereverberation model and the E2E model trained separately. The proposed E2E-DFAR model yields average relative improvements of \(27\)% and \(34\)% on the development set and evaluation set respectively, over the baseline system. The joint training is also shown to improve over the setup of having separate networks for dereverberation and E2E ASR. While the DFAR model is trained only on simulated reverberation conditions, the WER improvement in the real conditions is seen to be more pronounced than that observed on the simulated data. This indicates that the model can generalize well to unseen reverberation conditions in the real world.
#### Iv-A2 Comparison with prior works
The comparison with results from prior works reported on the REVERB challenge dataset is given in Table II. The table includes results from end-to-end ASR systems [63, 65, 66] as well as the joint enhancement and ASR modeling work reported in [64]. We also compare with our prior work reported in [18]. Specifically, many of the prior works compared in Table II are based on STFT based enhancement. The work reported in Subramanian et al. [63] used a neural beamforming approach in the STFT domain, while the effort described in Heymann et al. [64] used a long short-term memory network for mask estimation in the power spectral density (PSD) domain. The dynamic convolution method proposed in Fujita et al. [65] used deconvolution of log-mel spectrogram features. Similar to the proposed work, all these efforts have also used E2E ASR model training. As seen in Table II, the proposed work improves over the prior works considered here, further highlighting the benefits of dereverberation in the sub-band time domain using long-term envelope-carrier based DPLSTM models.
#### Iv-A3 Dereverberation model architecture
The ASR experiments on the REVERB challenge dataset, pertaining to the choice of different model architectures used in the dereverberation model, are listed in Table III. We have experimented with the convolutional LSTM (CLSTM) [50] and a time-domain LSTM (4-layer LSTM) architecture [67], in addition to the DPLSTM approach. As seen here, the dual-path recurrence based DPLSTM gives the best word error rate in comparison with the other LSTM neural architectures considered. This may be attributed to the joint time-frequency recurrence in the DPLSTM, as opposed to the other approaches, which perform only time-domain recurrence.
#### Iv-A4 Dereverberation loss function
The MSE loss function used in the DPLSTM model training consists of a combination of loss values from the envelope and the carrier components. We experimented with the hyper-parameter \(\lambda\), which controls the proportion of the envelope based loss and the carrier based loss in the total loss (\(Total\ loss=\lambda\times env.\ loss+(1-\lambda)\times carr.\ loss\)). The ASR results for various choices of the hyper-parameter \(\lambda\) are shown in Table IV. Empirically, the value of \(\lambda=0.6\) gives the best WER on the REVERB challenge dataset. Further, the choices of \(\lambda=1\) and \(\lambda=0\), corresponding to envelope-only or carrier-only dereverberation, are inferior to the other choices of \(\lambda\), indicating that the joint dereverberation of the envelope and carrier components is beneficial.
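A minimal sketch of this combined objective (tensor names are illustrative):

```python
import torch

def dereverb_loss(env_hat, env_clean, carr_hat, carr_clean, lam=0.6):
    """Combined MSE training loss for the DPLSTM:
    total = lam * env_loss + (1 - lam) * carr_loss.
    lam = 0.6 gave the best WER on the REVERB challenge data (Table IV)."""
    mse = torch.nn.functional.mse_loss
    return lam * mse(env_hat, env_clean) + (1.0 - lam) * mse(carr_hat, carr_clean)
```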
### _VOiCES ASR_
The ASR setup used for the VOiCES dataset followed the ESPnet recipe with the conformer encoder and a transformer decoder. The rest of the model parameters and hyper-parameters are kept similar to the ones in the REVERB challenge dataset. The WER results on the VOiCES dataset are given in Table V. The dereverberation of the envelope alone provides an absolute improvement of \(1.9\)% and \(2.2\)% on the development and evaluation data respectively, compared to the FBANK baseline system. The dereverberation based on envelope-carrier modeling further improves the results. An absolute improvement of \(3.3\)%/\(5.4\)% on the development/evaluation data is achieved, compared to the FBANK baseline. Further, the joint training of the envelope-carrier dereverberation network with the ASR model improves the WER results. We observe relative improvements of \(10\)% and \(12\)% on the development and evaluation data respectively.
### _Speech quality evaluation_
A comparison of the SRMR values for different dereverberation approaches is reported in Table VI. Here, we compare the baseline unsupervised GEV beamforming [13] and weighted prediction error (WPE) [9] with various dereverberation strategies. The deep complex convolution recurrent network (DCCRN) based speech enhancement [53] is also implemented on the REVERB dataset, and these results are reported in Table VI. While the envelope based dereverberation did not improve the SRMR values, the carrier based dereverberation is shown to improve the SRMR results. Further, the DFAR model also achieves similar improvements in SRMR for all the conditions over the baseline approach (GEV+WPE) and the DCCRN approach.
We conducted a subjective evaluation to further assess the performance of the dereverberation method. The subjects were asked to rate the quality of the audio on a scale of \(1\) to \(5\), \(1\) being poor and \(5\) being excellent. The subjects listened to the audio in a relatively quiet room with a high quality Sennheiser headset. We perform the A-B listening test, where the two versions of the same audio file were played, the first one with GEV + WPE dereverberation and the second one with the proposed dereverberation approach. We chose \(20\) audio samples, from four different conditions (real and simulated data and from near and far rooms) for this evaluation and recruited \(20\) subjects.
The subjective results are shown in Table VII. As seen, the proposed speech dereverberation scheme shows improvement in subjective MOS scores for all the conditions considered. The subjective results validate the signal quality improvements observed in the SRMR values (Table VI).
## VI Conclusion
In this paper, we propose a speech dereverberation model using frequency domain linear prediction based sub-band envelope-carrier decomposition. The sub-band envelope and carrier components are processed through a dereverberation network. A novel neural architecture, based on dual path recurrence, is proposed for dereverberation. Using the joint learning of the neural speech dereverberation module and the E2E ASR model, we perform several speech recognition experiments on the REVERB challenge dataset as well as on the VOiCES dataset. These results show that the proposed approach improves over state-of-the-art E2E ASR systems based on mel filterbank features.
The dereverberation approach proposed in this paper also reconstructs the audio signal, which makes it useful for audio quality improvement applications as well as for other speech processing systems in addition to ASR. We have further evaluated the reconstruction quality subjectively and objectively on the REVERB challenge dataset. The quality measurements show that the proposed speech dereverberation method improves speech quality over the baseline framework of weighted prediction error. The ablation studies on various architecture choices provide justification for the choice of the DPLSTM network architecture. Given that the proposed model allows the reconstruction of the audio signal, it can also be used in conjunction with self-supervised neural approaches for representation learning of speech. This will form part of our future investigation.
2309.08671 | Designing MoirΓ© Patterns by Strain | Experiments conducted on two-dimensional twisted materials have revealed a
plethora of moir\'e patterns with different forms and shapes. The formation of
these patterns is usually attributed to the presence of small strains in the
samples, which typically arise during their fabrication. In this work we find
that the superlattice structure of such systems actually depends crucially on
the interplay between twist and strain. For systems composed of honeycomb
lattices, we show that this can lead to the formation of practically any
moir\'e geometry, even if each lattice is only slightly distorted. As a result,
we show that under strain the moir\'e Brillouin zone is not a stretched
irregular hexagon, but rather a primitive cell that changes according to the
geometry of the strained moir\'e vectors. We identify the conditions for the
formation of hexagonal moir\'e patterns arising solely due to shear or biaxial
strain, thus opening the possibility of engineering moir\'e patterns solely by
strain. Moreover, we study the electronic properties in such moir\'e patterns
and find that the strain tends to suppress the formation of the flat moir\'e
bands, even in the strain-induced hexagonal patterns analogous to those
obtained by the twist only. Our work explains the plethora of moir\'e patterns
observed in experiments, and provides a solid theoretical foundation from which
one can design moir\'e patterns by strain. | Federico Escudero, Andreas Sinner, Zhen Zhan, Pierre A. PantaleΓ³n, Francisco Guinea | 2023-09-15T18:00:11Z | http://arxiv.org/abs/2309.08671v2 | # Designing Moire Patterns by Strain
###### Abstract
Experiments conducted on two-dimensional twisted materials have revealed a plethora of moire patterns with different forms and shapes. The formation of these patterns is usually attributed to the presence of small strains in the samples, which typically arise during their fabrication. In this work we find that the superlattice structure of such systems actually depends crucially on the interplay between twist and strain. For systems composed of honeycomb lattices, we show that this can lead to the formation of practically any moire geometry, even if each lattice is only slightly distorted. As a result, we show that under strain the moire Brillouin zone is not a stretched irregular hexagon, but rather a primitive cell that changes according to the geometry of the strained moire vectors. We identify the conditions for the formation of hexagonal moire patterns arising solely due to shear or biaxial strain, thus opening the possibility of engineering moire patterns solely by strain. Moreover, we study the electronic properties in such moire patterns and find that the strain tends to suppress the formation of the flat moire bands, even in the strain-induced hexagonal patterns analogous to those obtained by the twist only.
## I Introduction
The recent discovery of correlated electronic states and superconductivity in twisted bilayer graphene (TBG) [1; 2; 3] has sparked a great interest in twisted moire systems. Theoretical works on TBG [4; 5; 6; 7; 8; 9; 10], and transition metal dichalcogenides (TMDs) [11; 12; 13; 14; 15; 16; 17; 18; 19; 20], have demonstrated that the moire patterns in these systems can give rise to narrow bands that are largely responsible for the correlated effects [2; 3; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40]. The form of these moire patterns, however, can be highly sensitive to the presence of strain in the system [41]. This can have significant effects on the electronic properties, e.g., by preventing the bands from becoming flat around the magic angle, or by splitting the van Hove singularities [42; 43; 44; 45]. Although the strain in superlattice configurations typically arises randomly during the fabrication of the samples [46; 47], recent experimental advances have opened the possibility of inducing and controlling, in a precise way, different types of strain fields [48]. This provides a promising platform for designing moire patterns, and tuning the electronic properties, through the interplay between twist and strain [49].
In superlattice configurations, the effect of the strain is usually magnified in the resulting moire pattern [50]. Local variations of strain in the samples can, indeed, lead to large changes in the moire pattern [51]. This is consistent with several recent experimental studies where the creation of different types of moire lattice defects has been reported. Examples include domain walls between different stacking domains in TBG [52], hexagonal boron nitride [53], or TMDs [54]. On the other hand, the effect of strain in monolayer graphene and other non-twisted two-dimensional materials has been extensively studied [50; 55; 56; 57; 58], and important insights on the role of strain in twisted bilayer graphene have been described in [42; 43; 47; 51; 59; 60]. Interestingly, highly anisotropic moire patterns in strained twisted bilayer graphene have been reported in many experiments [61; 62; 63; 64; 65]. In addition to anisotropies, almost every experiment in multilayer graphene [61; 64; 65; 66; 67; 68] and TMDs [54; 65; 69] has shown the existence of moire patterns with different geometries. In particular, recent experiments have shown the existence of unconventional rectangular moire patterns in TMDs [65] and multilayer graphene [68].
Inspired by these findings, in this work we study how the interplay between twist and strain can modify the geometrical properties of the moire patterns. We find that by selectively applying strain to the system one can change the moire patterns to practically any geometry, even at very small strain magnitudes that only slightly distort each lattice. Exploiting a unique transformation that determines the relative angle and length between the moire vectors, we develop a general theoretical scheme which allows one to describe any strained moire geometry. We discuss different experimentally relevant types of strain, such as uniaxial heterostrain, shear strain and biaxial strain. We obtain and discuss the formation of special moire geometries, such as the square moire patterns. We also show that hexagonal moire patterns, analogous to those obtained with only a twist, can be formed solely by the application of shear or biaxial strain, thus opening the possibility of engineering moire patterns only by strain. Finally, we observe that the typical irregular hexagonal cell, commonly used to describe strained honeycomb lattices, is no longer the moire Brillouin zone (mBZ) of the strained superlattice. Instead, we identify a family of mBZs, with distinct geometries, that reflect the symmetries of the superlattice.
Our geometrical analysis of strained moire patterns overlaps with that recently presented in Ref. [20], where various types of strain have also been examined. However, despite the similarities, our theoretical scheme is built upon finding a unique transformation that directly determines the geometrical properties of the moire vectors. This allows us to analytically study, in greater detail, which combinations of twist and strain result in any particular moire pattern, thus providing a firm platform from which one can actually design moire patterns. In addition, we develop a comprehensive account of the strain effects in both real and reciprocal space, and in particular discuss how these can strongly reshape the moire BZ, which has not been addressed before in the literature. We thus believe that our work complements previous theoretical studies by providing a detailed account of how moire patterns can actually be designed by strain.
Furthermore, our geometrical analysis is complemented by a study of the electronic properties. We find that the modification of the moire patterns by the strain plays a crucial role in the formation of flat bands around a magic angle. We attribute this to an interplay between the shift of the Dirac points in each deformed lattice, due to geometric and energetic effects, and the moire potential that couples them. The strain influences both by breaking almost all the symmetries in the system, effectively preventing the lowest moire bands from flattening across the BZ. We find that this occurs even in the hexagonal moire patterns that arise due to strain only, and that on the moire scale look practically identical to those obtained with the twist only. In these cases the strain reorganizes the charge density in the system, and leads to the splitting and appearance of multiple high-order van Hove singularities.
The rest of the paper is organized as follows: In Section II we discuss geometrical properties of strained moire patterns. We describe in details how different patterns can be achieved through the interplay between twist and different types of strain. We also obtain how the first moire Brillouin zone changes under strain. In Section III we discuss the electronic properties of the strained moire patterns, using an extension of the continuum model in the presence of strain. We calculate the band structure, the density of states, and the charge density profile under different types of strain, and compare them to the case of TBG without strain. Finally, our conclusions follow in Section IV.
## II Geometrical properties of strained moire patterns
### General considerations
We choose the lattice vectors of a honeycomb lattice as \(\mathbf{a}_{1}=a\left(1,0\right)\) and \(\mathbf{a}_{2}=a\left(1/2,\sqrt{3}/2\right)\), where \(a\) is the lattice constant (\(a\simeq 2.46\) Å in graphene). In a twisted bilayer honeycomb configuration, the usual rotation by \(\pm\theta/2\), and a further application of strain, yields \(\tilde{\mathbf{a}}_{i,\pm}=\left(\mathbf{1}+\mathcal{E}_{\pm}\right)\mathrm{R}\left(\pm\theta/2\right)\mathbf{a}_{i}\), where \(\mathrm{R}\left(\theta\right)\) is the rotation matrix and \(\mathcal{E}_{\pm}\) is the strain tensor. At small deformations (that is, to leading order in \(\mathcal{E}_{\pm}\)), the reciprocal vectors can be obtained as \(\tilde{\mathbf{b}}_{i,\pm}\simeq\left(\mathbf{1}-\mathcal{E}_{\pm}\right)\mathrm{R}\left(\pm\theta/2\right)\mathbf{b}_{i}\). In what follows, we restrict our discussion to small twist angles and to the practical case in which the forces act oppositely in each layer, \(\mathcal{E}_{+}=-\mathcal{E}_{-}=\mathcal{E}/2\). Then, for a general strain tensor of the form \(\mathcal{E}=\sum_{ij}\epsilon_{ij}\left(\mathbf{e}_{i}\otimes\mathbf{e}_{j}\right)\) (where \(i,j=x,y\)), the moire reciprocal lattice vectors can be obtained as \(\mathbf{g}_{i}=\tilde{\mathbf{b}}_{-,i}-\tilde{\mathbf{b}}_{+,i}\), which implies \(\mathbf{g}_{i}=\mathbf{T}\mathbf{b}_{i}\), where
\[\mathbf{T}=\left(\mathbf{1}+\mathcal{E}/2\right)\mathrm{R}\left(-\theta/2 \right)-\left(\mathbf{1}-\mathcal{E}/2\right)\mathrm{R}\left(\theta/2\right). \tag{1}\]
We are interested in how the combination of rotation and strain changes the geometry of the moire patterns. The angle \(\beta\) between the moire vectors can be determined from the symmetric transformation \(\mathbf{F}=\mathbf{T}^{\mathrm{T}}\mathbf{T}\) acting on the reciprocal vectors \(\mathbf{b}_{i}\),
\[\cos\beta=\frac{\mathbf{F}\mathbf{b}_{1}\cdot\mathbf{b}_{2}}{\sqrt{\left( \mathbf{F}\mathbf{b}_{1}\cdot\mathbf{b}_{1}\right)\left(\mathbf{F}\mathbf{b}_ {2}\cdot\mathbf{b}_{2}\right)}}. \tag{2}\]
We can separate \(\mathbf{F}=\mathbf{F}_{0}+\mathbf{F}_{\epsilon}\), where \(\mathbf{F}_{0}\) is the contribution due to pure rotations, and \(\mathbf{F}_{\epsilon}\) is the contribution due to the combination of rotation and strain:
\[\mathbf{F}_{0} =4\sin^{2}\left(\theta/2\right)\mathbf{1}, \tag{3}\] \[\mathbf{F}_{\epsilon} =\sin\theta\left(\begin{array}{cc}-2\epsilon_{xy}&\epsilon_{xx} -\epsilon_{yy}\\ \epsilon_{xx}-\epsilon_{yy}&2\epsilon_{xy}\end{array}\right)+\cos^{2}\left( \theta/2\right)\mathcal{E}^{2}. \tag{4}\]
Since \(\mathbf{F}_{0}\) is a spherical tensor, a transformation by \(\mathbf{F}_{0}\) alone does not change \(\beta\). This is, of course, the situation without strain, where the honeycomb layers are only rotated and the moire vectors always have the same angle \(\beta=2\pi/3\). However, under strain the vectors are also transformed by the non-spherical tensor \(\mathbf{F}_{\epsilon}\), which changes the angle of \(\mathbf{b}_{i}\) and hence modifies the geometrical properties of the moire pattern. Note that the second term in Eq. (4) describes the possibility of obtaining moire patterns without rotations, i.e., purely by strain [70; 71].
Equations (1) to (4) constitute the central results of the geometrical part of our study. They possess the versatility to describe a wide range of moire structures, relying solely on the transformation matrix \(\mathbf{F}\). This matrix can be constructed using an arbitrary strain tensor, rotation matrix, and even lattice geometries with appropriately chosen lattice vectors. These equations provide a concise and straightforward representation, which can also be employed to reproduce the results presented in Ref. [20].
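To make this concrete, the following numerical sketch evaluates Eqs. (1) and (2) with NumPy, using the lattice conventions chosen above; the zero-strain check recovers the unstrained angle \(\beta=120^{\circ}\).

```python
import numpy as np

def rot(t):
    """2D rotation matrix R(t)."""
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def moire_T(theta, E):
    """Transformation of Eq. (1): g_i = T b_i, with strain acting as +/- E/2."""
    I = np.eye(2)
    return (I + E / 2) @ rot(-theta / 2) - (I - E / 2) @ rot(theta / 2)

def beta_angle(theta, E, a=1.0):
    """Angle between the moire vectors from Eq. (2), with F = T^T T."""
    b1 = (2 * np.pi / a) * np.array([1.0, -1.0 / np.sqrt(3.0)])  # dual to a1, a2
    b2 = (2 * np.pi / a) * np.array([0.0, 2.0 / np.sqrt(3.0)])
    T = moire_T(theta, E)
    F = T.T @ T
    c = (b1 @ F @ b2) / np.sqrt((b1 @ F @ b1) * (b2 @ F @ b2))
    return np.degrees(np.arccos(c))

# Without strain the moire vectors keep the honeycomb angle:
print(beta_angle(np.radians(2.0), np.zeros((2, 2))))  # ~ 120.0 (degrees)
```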
One crucial aspect of the modification of moire patterns under strain is that it requires significantly smaller strain magnitudes compared to the strain needed to modify a monolayer honeycomb lattice. This can be observed by examining the strain required to change the angle between the corresponding lattice vectors. Consider, for instance, the case of uniaxial heterostrain along the \(\phi=0\) direction, i.e., \(\epsilon_{xx}=\epsilon\), \(\epsilon_{yy}=-\nu\epsilon\) and \(\epsilon_{xy}=0\), where \(\nu\) is the Poisson ratio. As in Eq. (2), we can obtain the angle \(\alpha\) between the strained reciprocal vectors \(\tilde{\mathbf{b}}_{\pm}\) through the symmetrical transformation \(\mathbf{T}_{\pm}^{\mathrm{T}}\mathbf{T}_{\pm}\), where \(\mathbf{T}_{\pm}=\left(\mathbf{1}\mp\mathcal{E}/2\right)\mathrm{R}\left(\pm \theta/2\right)\). Then, at low twist angle and to leading order in \(\epsilon\) we get
\[\cos\alpha \simeq-\frac{1}{2}\mp\frac{3}{16}\epsilon\left(\nu+1\right), \tag{5}\] \[\cos\beta \simeq-\frac{1}{2}+\frac{3}{16}\epsilon\left(\nu+1\right)\frac{2 \sqrt{3}}{\theta}. \tag{6}\]
Thus at small values of \(\theta\) one needs much smaller strain magnitudes to modify \(\beta\) than to modify \(\alpha\). In fact, for experimentally relevant values \(\epsilon\lesssim 10\%\), and sufficiently low twist angles [41, 48, 69], one can, in principle, vary the angle \(\beta\) to any value between \(0\) and \(\pi\). In comparison, for the same strain range, the actual angle between the lattice vectors in the monolayer varies by only a few degrees [72] (see also [58] and references therein). This means that, under the right strain parameters, the moire patterns can be changed to practically any desired geometry, even if the underlying honeycomb lattices are only slightly distorted [20, 68]. Such behavior is possible because the moire pattern arises from the twist angle or the lattice mismatch between the two monolayers, and any small distortion is enhanced by the moire [50].
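A quick numerical check of Eqs. (5) and (6) illustrates this amplification (a sketch, assuming the commonly used Poisson ratio \(\nu\approx 0.165\) for graphene):

```python
import numpy as np

nu = 0.165  # Poisson ratio of graphene (assumed value)

def d_cos_alpha(eps):
    """Strain-induced shift of cos(alpha) in each monolayer, Eq. (5)."""
    return (3.0 / 16.0) * eps * (nu + 1.0)

def d_cos_beta(eps, theta):
    """Strain-induced shift of cos(beta) of the moire vectors, Eq. (6)."""
    return d_cos_alpha(eps) * 2.0 * np.sqrt(3.0) / theta

theta = np.radians(1.0)
eps = 0.01  # 1% uniaxial heterostrain
# The moire angle responds ~ 2*sqrt(3)/theta times more strongly:
print(d_cos_beta(eps, theta) / d_cos_alpha(eps))  # ~ 198 at theta = 1 deg
```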
It is important to note that, under strain, the moire vectors obtained by using a unique construction (e.g., by the difference between \(\tilde{\mathbf{b}}_{-,i}\) and \(\tilde{\mathbf{b}}_{+,i}\)) may not be the smallest ones (the so-called _primitive_ vectors). Consequently, for arbitrary strain parameters the moire vectors may not reflect the symmetries of the corresponding moire geometry. For example, a square moire pattern can result from equal length moire vectors with an angle \(\beta=\pi/2\) between them, but also from moire vectors with an angle \(\beta=\pi/4\) and relative length \(\left|\mathbf{g}_{1}\right|/\left|\mathbf{g}_{2}\right|=\sqrt{2}\). In general, for arbitrary strain parameters, the primitive moire vectors are obtained by appropriately changing the set of reciprocal lattice vectors from which they are constructed (e.g., by taking the difference between the deformed vectors \(\mathbf{b}_{1}\) and \(\mathbf{b}_{1}+\mathbf{b}_{2}\), rather than \(\mathbf{b}_{1}\) and \(\mathbf{b}_{2}\); see Fig. 1). These different constructions of \(\mathbf{g}_{i}\) eventually reflect the underlying symmetries of the honeycomb lattices.
Furthermore, although the form of the moire patterns between unstrained honeycomb lattices is uniquely determined by the _periodicity_ of the superlattice, the same is not true under strain. Two sets of symmetric moire vectors, which technically describe the same superlattice, may actually correspond to different forms of the moire pattern. The reason is that the _stretch_ of the AA stacking, which is _periodically_ repeated by the moire vectors, increases under strain [49, 61, 68]. The effect of the strain on the moire patterns thus acts not only on the modification of the moire periodicity, but also on how the form of the stacking shape is repeated. This behavior is, in a way, similar to the usual description of crystal structures through a basis within a primitive cell and a set of Bravais vectors that repeat such a basis.
Given the reciprocal vectors \(\mathbf{g}_{i}\), the primitive moire lattice vectors \(\mathbf{g}_{i}^{R}\) are most easily obtained from the relation \(\mathbf{g}_{i}\cdot\mathbf{g}_{j}^{R}=2\pi\delta_{ij}\), which implies \(\mathbf{g}_{i}^{R}=\mathbf{T}^{-\mathrm{T}}\mathbf{a}_{i}\), where \(\mathbf{T}\) is given by Eq. (1). Thus, the geometrical properties of the primitive moire vectors are determined by the inverse transformation \(\mathbf{F}^{-\mathrm{T}}=\mathbf{F}^{-1}\). In particular, the angle between the primitive vectors is \(\beta_{R}=\pi-\beta\), where \(\beta\) is the angle in reciprocal space given by Eq. (2).
In what follows we discuss in detail the geometrical properties of the moire patterns under three important kinds of strain: uniaxial heterostrain, biaxial strain, and shear strain. It is worth mentioning that our formalism, applied here to moire heterostructures arising from honeycomb lattices, can be directly extended to other geometries by appropriately modifying the lattice vectors and the strain tensor [73].
### Uniaxial heterostrain
Uniaxial heterostrain refers to a type of strain that is applied along a unique axis and acts oppositely in each honeycomb lattice. From the experimental point of view it is widely regarded as the most relevant kind of strain in TBG. It was first introduced both theoretically and experimentally in Ref. [42], and then further investigated in Refs. [43, 44, 59, 74]. The approach developed here can, nevertheless, be directly extended to other types of strain, as discussed in the next sections.
The strain tensor of uniaxial heterostrain with magnitude \(\epsilon\), along an angle \(\phi\) relative to the \(x\) axis, reads
\[\mathcal{E} =\mathrm{R}^{\mathrm{T}}\left(-\phi\right)\left(\begin{array}{cc}\epsilon&0\\ 0&-\nu\epsilon\end{array}\right)\mathrm{R}\left(-\phi\right)\] \[=\epsilon\left[\begin{array}{cc}\cos^{2}\phi-\nu\sin^{2}\phi&\left(1+\nu\right)\sin\phi\cos\phi\\ \left(1+\nu\right)\sin\phi\cos\phi&\sin^{2}\phi-\nu\cos^{2}\phi\end{array}\right]. \tag{7}\]
The transformation matrix \(\mathbf{F}\) then becomes
\[\mathbf{F} =4\sin^{2}\left(\frac{\theta}{2}\right)\mathbf{1}+\epsilon\left(1+\nu\right)\sin\left(\theta\right)\mathrm{R}\left(2\phi\right)\sigma_{x}\] \[+\cos^{2}\left(\frac{\theta}{2}\right)\frac{\epsilon^{2}}{2}\left[\left(1+\nu^{2}\right)\mathbf{1}+\left(1-\nu^{2}\right)\mathrm{R}\left(2\phi\right)\sigma_{z}\right], \tag{8}\]
where \(\sigma_{i}\) are the Pauli matrices. From here one can readily see that the solutions of Eq. (2) for the strain magnitude always scale with the twist angle as \(\sim\tan\left(\theta/2\right)\). Indeed, by writing \(\epsilon=\epsilon^{\prime}\tan\left(\theta/2\right)\) it follows that \(\mathbf{F}\propto\sin^{2}\left(\theta/2\right)\) and, consequently, that the angle equation for \(\beta\) as a function of \(\epsilon^{\prime}\) is independent of the twist angle. Thus, for any \(\beta\) and \(\phi\), the solutions of Eq. (2) for the strain magnitude have the form \(\epsilon\propto\tan\left(\theta/2\right)\). What this general result reflects is that the lower the twist angle, the weaker the strain needed to modify the geometry of the moire superlattice.
#### ii.2.1 Equal length moire vectors
In the following, to simplify our analysis, we focus on the moire patterns formed by equal-length moire vectors, i.e., on the structures with \(|\mathbf{g}_{1}|=|\mathbf{g}_{2}|\). This choice allows for analytical solutions, which can be used to analyze the geometrical effects. The consideration of moire vectors with different lengths is a straightforward extension of our analysis, as described in a following section. From Eq. (8), the equal-length moire vector condition for non-zero strain is given by
\[\epsilon_{\mathrm{eq}}=\frac{4}{1-\nu}\cot\left(\frac{\pi}{3}-2\phi\right) \tan\left(\frac{\theta}{2}\right). \tag{9}\]
Since \(\epsilon_{\mathrm{eq}}\propto\tan\left(\theta/2\right)\) and thus \(\mathbf{F}\propto\sin^{2}\left(\theta/2\right)\), Eq. (2) for \(\epsilon=\epsilon_{\mathrm{eq}}\) does not depend on \(\theta\). This is a rather remarkable result: it means that the strain direction \(\phi\) needed to obtain equal length moire vectors, with an angle \(\beta\) between them, is independent of the twist angle. The twist angle only modifies the needed strain magnitude, and the resulting (equal) length of the moire vectors, \(|\mathbf{g}_{i}|^{2}=\mathbf{F}\mathbf{b}_{i}\cdot\mathbf{b}_{i}\propto\sin^{2}\left(\theta/2\right)\) (as in the unstrained case). Note that Eq. (9) is not invariant under the transformation \(\phi\rightarrow\phi+\pi/3\) because we are not taking into account here other solutions, which are obtained by appropriately changing the construction of the primitive moire vectors (see Appendix A).

Figure 2: Moiré patterns generated by uniaxial heterostrain in a twisted honeycomb lattice with \(\theta=2^{\circ}\), for the case of equal length moiré vectors. Panel (a): Strain parameters vary from left to right as: \(\epsilon=0\) (no strain); \(\epsilon\simeq 1.64\%,\phi\simeq-9.40^{\circ}\); \(\epsilon\simeq 4.44\%,\phi\simeq-0.90^{\circ}\); and \(\epsilon\simeq-2.28\%,\phi\simeq-22.7^{\circ}\). The corresponding angles between the primitive moiré vectors, shown in black, are \(\beta_{R}=60^{\circ},90^{\circ},140^{\circ},30^{\circ}\). The Wigner-Seitz cells of the moiré superlattices are shown in white. The insets underneath each panel (in blue) show the strain magnitude on a scale of \(5\%\), and the strain angle relative to the non-rotated lattice orientation. Panel (b): The evolution of the moiré pattern within the Wigner-Seitz cell for angles (from left to right) \(\beta_{R}=40^{\circ},60^{\circ},80^{\circ},90^{\circ},100^{\circ},120^{\circ},140^{\circ}\). Bars underneath (thick line) indicate the magnitude of the strain \(\epsilon\) on a scale of \(5\%\) (thin line). As \(\epsilon\) increases, the stretch of the AA stacking within the primitive cell increases. As a result, the shape of two moiré patterns with the same periodicity is not the same if they correspond to different strain magnitudes.
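As a consistency check, the sketch below evaluates Eq. (9) for the parameters of Fig. 2(a); the small discrepancy with the quoted \(\epsilon\simeq 1.64\%\) reflects the assumed value of the Poisson ratio.

```python
import numpy as np

nu = 0.165  # assumed Poisson ratio of graphene

def eps_eq(theta_deg, phi_deg):
    """Equal-length condition of Eq. (9)."""
    theta = np.radians(theta_deg)
    phi = np.radians(phi_deg)
    return 4.0 / (1.0 - nu) / np.tan(np.pi / 3 - 2 * phi) * np.tan(theta / 2)

# Parameters of Fig. 2(a): theta = 2 deg, phi ~ -9.40 deg gives beta_R = 90 deg
print(eps_eq(2.0, -9.40))  # ~ 0.0166, close to the quoted eps ~ 1.64%
```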
As detailed in Appendix B, by solving Eq. (2) for \(\phi\) one can obtain the strain parameters required to form equal length moire vectors with a given angle \(\beta\) between them. At low twist angles, the corresponding strain magnitudes are relatively small and well within the experimental range [48]. Some moire patterns that can be formed under uniaxial heterostrain are shown in Fig. 2. In general, the moire patterns are quite sensitive to the values of the strain parameters, in the sense that small changes in \(\epsilon\) and \(\phi\) can result in significant changes in the geometry of the moire vectors [51, 52]. Thus, precise control over the magnitude and direction of the applied uniaxial heterostrain is crucial for designing moire patterns through strain manipulation. It is worth noting that this control has already been achieved experimentally: Ref. [48] describes a methodology for process-induced strain engineering, where the strain magnitude and direction in TBG can be controlled.
Fig. 2 also shows that the orientation of the Wigner-Seitz cell, and of the stretched AA stacking within it, change depending on the strain magnitude. This is because the strain modifies not only the angle between the moire vectors, but also their orientation with respect to the non-strain case. For instance, in the strained case with \(\beta_{R}=120^{\circ}\), the hexagonal primitive cell is rotated with respect to the same cell in the non-strain case. In general, the stretch of the AA stacking occurs along the direction of the moire vector \(\mathbf{g}_{1}^{R}\pm\mathbf{g}_{2}^{R}\), where the \(+\) (\(-\)) sign applies when \(\beta_{R}<90^{\circ}\) (\(\beta_{R}\geq 90^{\circ}\)). Such direction always coincides with one corner of the Wigner-Seitz cell. The angle \(\phi_{s}\) of the stretching can thus be estimated as
\[\cos\phi_{s}=\frac{\mathbf{g}_{1}^{R}\pm\mathbf{g}_{2}^{R}}{\left|\mathbf{g}_{ 1}^{R}\pm\mathbf{g}_{2}^{R}\right|}\cdot\mathbf{e}_{x}. \tag{10}\]
Note that \(\phi_{s}\) generally differs from the strain angle \(\phi\), i.e., the observed stretch of the AA stacking does not reflect the direction along which the uniaxial heterostrain is applied; it only reflects the magnitude of the applied strain. Since for \(\epsilon\propto\tan\left(\theta/2\right)\) one has \(\mathbf{T}\propto\sin\left(\theta/2\right)\) [cf. Eqs. (7) and (1)], it follows that for any strain direction \(\phi\) that yields an angle \(\beta_{R}\) between the moire vectors, the corresponding stretch angle \(\phi_{s}\) is independent of the twist angle \(\theta\). The above analysis may allow one to estimate the strain properties of twisted bilayer honeycomb samples by analyzing only the shape of the AA regions.
#### ii.2.2 Special moire patterns
Some special moire patterns that may be accomplished deserve further discussion. One case is the square moire pattern shown in Fig. 2a). Square-like moire patterns have already been experimentally observed [66, 75, 76], and theoretically predicted [20]. While their shape has been attributed to highly distorted moire patterns, our model indicates that this geometry can alternatively be obtained by the right combination of twist angle and strain. Another interesting case occurs when \(\beta_{R}=120^{\circ}\), where one can have the same hexagonal moire periodicity as in the non-strain case (where \(\beta_{R}=60^{\circ}\)), albeit with a stretched AA stacking within the primitive cell (see Fig. 2b).
A particularly relevant case is the critical limit in which the moire vectors become collinear. This can lead to quasi-unidimensional channels that have been predicted [50, 77] and observed in several experiments [41, 46, 48, 51, 52, 53, 54, 61, 62, 63, 64, 65, 66, 67, 69, 73]. Plugging \(\beta=\{0,\pi\}\) into Eq. (2) yields a critical strain parameter [77]
\[\epsilon_{c}=\pm\frac{2}{\sqrt{\nu}}\tan\left(\frac{\theta}{2}\right). \tag{11}\]
This expression for \(\epsilon_{c}\) is actually quite general, i.e., it always leads to collinear moire vectors, regardless of the strain angle \(\phi\) [20, 77]. Technically, this is because at this critical strain the determinant of the matrix \(\mathbf{F}\) vanishes, which means that it becomes non-invertible and the moire vectors are no longer linearly independent.
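This can be verified numerically: evaluating \(\det\mathbf{T}\) at \(\epsilon=\epsilon_{c}\) for an arbitrary strain direction gives a vanishing result (a sketch, with \(\nu\approx 0.165\) assumed).

```python
import numpy as np

nu = 0.165  # assumed Poisson ratio
theta = np.radians(1.0)
eps_c = 2.0 / np.sqrt(nu) * np.tan(theta / 2)   # critical strain, Eq. (11)

R = lambda t: np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
phi = np.radians(17.0)  # arbitrary strain direction
# Uniaxial heterostrain tensor of Eq. (7):
E = R(-phi).T @ np.diag([eps_c, -nu * eps_c]) @ R(-phi)
# Transformation of Eq. (1):
T = (np.eye(2) + E / 2) @ R(-theta / 2) - (np.eye(2) - E / 2) @ R(theta / 2)
print(np.linalg.det(T))  # ~ 0: the moire vectors become collinear
```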
#### ii.2.3 Arbitrary strain parameters
The general situation of arbitrary strain parameters (within the limit of small deformations) is, in many ways, qualitatively very similar to the special case of equal length moire vectors. By fixing, for example, the strain angle to \(\phi=0\), one can still obtain many different geometries in which the angle between the moire vectors can be tuned solely by changing the strain magnitude. Examples of such moire patterns are shown in Fig. 3. There one sees that, although the length of one moire vector may be more than double the length of the other, the moire patterns follow a similar behavior to the simpler ones analyzed in Fig. 2. Thus our discussion in the previous section is readily generalized to arbitrary strain parameters. In particular, the symmetry of the moire superlattice, and the magnitude of the strain, are always reflected in the shape of the Wigner-Seitz cell, and the stretch of the AA stacking within it. Furthermore, the direction of the AA stretching also follows, in general, the direction of the moire vector \(\mathbf{g}_{1}^{R}\pm\mathbf{g}_{2}^{R}\) [cf. Eq. (10)].
In the case of pure uniaxial heterostrain, without a twist, the transformation given by Eq. (8) reduces to
\[\mathbf{F}=\frac{\epsilon^{2}}{2}\left[\left(1+\nu^{2}\right)\mathbf{1}+\left( 1-\nu^{2}\right)\mathrm{R}\left(2\phi\right)\sigma_{z}\right]. \tag{12}\]
Since the second term is not a spherical tensor, the resulting moire pattern is not hexagonal.
### Shear strain
In a honeycomb lattice, shear strain occurs when forces act parallel to its surface but in opposite directions, leading to a distortion of the lattice. In simpler terms, shear strain in a honeycomb lattice is like the "sliding" forces that deform the lattice without altering its overall volume, cf. Fig. 4d). This kind of strain has been studied in graphene and transition metal dichalcogenides [78; 79; 80; 81].
The strain tensor due to shear forces applied perpendicularly to an angle direction \(\varphi\) is given by
\[\mathcal{E} =\mathrm{R}^{\mathrm{T}}\left(-\varphi\right)\left(\begin{array}[] {cc}0&\epsilon_{xy}\\ \epsilon_{yx}&0\end{array}\right)\mathrm{R}\left(-\varphi\right)\] \[=\epsilon_{s}\left(\begin{array}{cc}-\sin 2\varphi&\cos 2 \varphi\\ \cos 2\varphi&\sin 2\varphi\end{array}\right), \tag{13}\]
where \(\epsilon_{xy}=\epsilon_{yx}=\epsilon_{s}\) is the shear strain magnitude. For a twisted bilayer lattice, this leads to the transformation
\[\mathbf{F} =\left[4\sin^{2}\left(\frac{\theta}{2}\right)+\epsilon_{s}^{2}\cos ^{2}\left(\frac{\theta}{2}\right)\right]\mathbf{1}\] \[\qquad-2\sin\left(\theta\right)\epsilon_{s}\mathrm{R}\left(2 \varphi\right)\sigma_{z}. \tag{14}\]
The second term implies that the combined effect of twist and shear strain can change the geometry of the moire patterns, similar to the effect of uniaxial heterostrain. The main difference lies in how the distortion of each honeycomb lattice gives rise to a particular moire geometry. Thus, although the moire patterns for different strain types may appear similar, their electronic properties can be substantially different (cf. Fig. 8). In particular, shear forces can also lead to a critical case in which the moire vectors become collinear. This occurs when the determinant of the transformation given by Eq. (14) vanishes, which implies
\[\epsilon_{s,c}=\pm 2\tan\left(\frac{\theta}{2}\right), \tag{15}\]
independently of the shear angle \(\varphi\). This critical shear strain is a factor of \(\sim\sqrt{\nu}\simeq 0.4\) smaller than the one required in the case of uniaxial heterostrain.
An interesting situation occurs in the case of pure shear forces without a twist angle, where Eq. (14) reduces to \(\mathbf{F}=\epsilon_{s}^{2}\mathbf{1}\). This transformation acts as in the twisted non-strain case, where \(\mathbf{F}=4\sin^{2}\left(\theta/2\right)\mathbf{1}\), with the resulting moire pattern being always hexagonal. This means that one can form moire patterns without any twist between the layers, just by applying opposite shear forces in each lattice, thus opening the possibility of engineering superlattice heterostructures purely by strain (cf. Fig. 4b). The shear angle \(\varphi\) only changes the orientation of the moire pattern. Interestingly, the moire superlattice with pure shear strain can have the same periodicity as that of TBG with twist angle \(\theta\) if the strain magnitude satisfies
\[\epsilon_{s}=2\sin\left(\frac{\theta}{2}\right). \tag{16}\]
For example, a strain magnitude \(\epsilon_{s}\sim 1.8\%\) yields a moire periodicity \(L\sim 13.4\) nm, corresponding to an equivalent twist angle \(\theta\sim 1.05^{\circ}\).
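The numbers quoted above follow directly from Eq. (16) together with the moiré period \(L=a/\epsilon_{s}\) for pure shear (a sketch, with \(a\) the graphene lattice constant):

```python
import numpy as np

a = 0.246  # graphene lattice constant in nm

def shear_moire_period(eps_s):
    """Moire period for pure shear: |g_i| = eps_s |b_i|, hence L = a / eps_s.
    Matches a twist-only pattern when eps_s = 2 sin(theta/2), Eq. (16)."""
    return a / eps_s

eps_s = 2 * np.sin(np.radians(1.05) / 2)
print(eps_s, shear_moire_period(eps_s))  # ~0.018 (1.8%), ~13.4 nm
```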
### Biaxial strain
In the case of biaxial strain the forces are equally applied along the \(x\) and \(y\) directions, and in opposite directions in each layer. The corresponding strain tensor reads \(\mathcal{E}=\epsilon_{b}\mathbf{1}\), thus yielding the transformation matrix
\[\mathbf{F}=\left[4\sin^{2}\left(\frac{\theta}{2}\right)+\epsilon_{b}^{2}\cos^ {2}\left(\frac{\theta}{2}\right)\right]\mathbf{1}. \tag{17}\]
Since \(\mathbf{F}\) is always a spherical tensor, a biaxial strain cannot change the moire geometry: any combination of strain and twist always results in a hexagonal moire pattern. This is, of course, expected because the biaxial strain does not distort the hexagonal lattices; it only changes the size of the primitive cell. The effect of twist and strain, in this case, is only to modify the orientation and length of the superlattice vectors.
The change of orientation can be measured in relation to the direction of the moire vectors in the case of no strain, where, according to our reference convention, the second moire vector in reciprocal space is always along the \(x\) axis, \(\mathbf{g}_{2}=\left(8\pi/\sqrt{3}a\right)\sin\left(\theta/2\right)\mathbf{e}_{x}\) [cf. Eq. (1)]. In the case of biaxial strain, this moire vector becomes \(\mathbf{g}_{2}=\left(8\pi/\sqrt{3}a\right)\sin\left(\theta/2\right)\left[\mathbf{e}_{x}+\epsilon_{b}\cot\left(\theta/2\right)\mathbf{e}_{y}/2\right]\), so its angle \(\alpha_{\epsilon}\) with respect to the \(x\) axis reads
\[\cos\alpha_{\epsilon}=\frac{1}{\sqrt{1+\frac{\epsilon_{b}^{2}}{4}\cot^{2}\left(\theta/2\right)}}. \tag{18}\]
By comparing Eqs. (3) and (17) one can obtain the combinations of strain magnitude \(\epsilon_{b}\) and twist angle \(\theta_{\epsilon}\) that give the same moire periodicity as with only a twist angle \(\theta\),
\[\sin^{2}\left(\theta_{\epsilon}/2\right)=\frac{\sin^{2}\left(\theta/2\right)- \epsilon_{b}^{2}/4}{1-\epsilon_{b}^{2}/4}. \tag{19}\]
This condition does not, however, guarantee that both moire patterns would be aligned, since their orientation may differ due to the strain effect. This can be important when one seeks an alignment between two (or more) moire patterns arising from a combination of rotation and lattice mismatch.
A relevant example occurs in heterostructures of TBG/hBN in which hBN acts as a substrate of TBG [82, 83, 84]. In this case, the lattice mismatch between graphene (\(a_{g}=2.46\) Å) and hBN (\(a_{h}=2.50\) Å) can be accounted for as a biaxial strain with magnitude \(\epsilon_{b}\sim 1-a_{g}/a_{h}=0.016\). If the twist angle in TBG is \(\theta_{T}\), and the twist angle between hBN and the graphene layer directly on top is \(\theta_{b}\), a _moire alignment_ implies that both moire patterns have the same orientation and periodicity. Since in TBG the layers are only rotated, the orientation condition is obtained from Eq. (18) by setting \(\cos\alpha_{\epsilon}=\pm 1/2\), which gives \(\theta_{b}\simeq\epsilon_{b}/\sqrt{3}\sim 0.53^{\circ}\). Then the equal periodicity condition, Eq. (19), implies that the twist angle in TBG should be \(\theta_{T}\simeq\sqrt{\theta_{b}^{2}+\epsilon_{b}^{2}}\sim 1.06^{\circ}\), in agreement with previous calculations [85, 86, 87]. We emphasize that this is the _only_ twist angle in TBG for which one can have a perfect moire pattern alignment (or a single moire) with a hBN substrate. As this is only a geometrical condition, it is quite remarkable that it occurs practically at the magic angle where the bands in TBG tend to become flat.
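These alignment conditions amount to the following short computation (a numerical sketch of the arithmetic above):

```python
import numpy as np

a_g, a_h = 2.46, 2.50          # graphene / hBN lattice constants (angstrom)
eps_b = 1 - a_g / a_h          # lattice mismatch as a biaxial strain, ~0.016

theta_b = eps_b / np.sqrt(3)   # hBN-graphene twist for orientation matching
theta_T = np.hypot(theta_b, eps_b)  # TBG twist for equal moire periodicity
print(np.degrees(theta_b), np.degrees(theta_T))  # ~0.53 deg, ~1.06 deg
```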
In the particular case of pure biaxial strain, with no twist, Eq. (17) reduces to \(\mathbf{F}=\epsilon_{b}^{2}\mathbf{1}\). In fact, according to Eq. (1) one simply has \(\mathbf{g}_{i}=\epsilon_{b}\mathbf{b}_{i}\), i.e., the moire vectors are just the reciprocal vectors scaled by the biaxial strain magnitude. Thus, in contrast to the cases of only a twist or shear strain, the moire BZ for only biaxial strain has the same orientation as the BZ of the honeycomb lattices (see Fig. 7). Similarly to the case of pure shear strain, the resulting moire pattern has the same hexagonal periodicity as with only a twist angle \(\theta\) when
\[\epsilon_{b}=2\sin\left(\frac{\theta}{2}\right). \tag{20}\]
However, the moire orientation with only biaxial strain is rotated \(90^{\circ}\) with respect to the case of only a twist angle, see Eq. (18).
A comparison between hexagonal moire patterns formed by only a twist, and by only shear or biaxial strain, can be seen in Fig. 4. Although in all situations the moire patterns look practically the same at the moire scale, the local distortions of each honeycomb lattice can be significantly different. Note that only in the cases of a pure twist or a pure biaxial strain do the moire patterns have \(\mathcal{C}_{3}\) rotational symmetry.
Figure 4: Hexagonal moiré patterns generated only by: (a) twist angle \(\theta=5^{\circ}\); (b) shear strain with magnitude \(\epsilon_{s}=2\sin\left(\theta/2\right)\simeq 8.7\%\); (c) biaxial strain with magnitude \(\epsilon_{b}=2\sin\left(\theta/2\right)\simeq 8.7\%\). From Eqs. (16) and (20), all cases have the same moiré periodicity. The figures in the bottom row visualize the enlarged Wigner-Seitz cells. Here, the vicinity of the AA, AB and BA stacking positions looks different for each case. This difference, however, becomes smaller (and practically unnoticeable at the moiré scale) as the twist and strain decrease. Panel (d) shows schematically the corresponding deformations in the bottom (left) and top (right) lattices due to (from top to bottom panels) rotation, shear strain, and biaxial strain. The effects are exaggerated for better visualization.
### Deformation of the Brillouin zone
In the reciprocal space, the most symmetrical primitive cell is given by the first Brillouin zone, which is constructed by considering the set of points that can be reached from the origin without crossing a Bragg plane (lines in the 2D case). The moire patterns discussed in the previous sections imply that such cell would drastically change its shape under the application of strain. Consider, for example, the hexagonal BZ of a honeycomb lattice. In terms of the reciprocal vectors, it can be obtained by the union of the points
\[\begin{split}\mathbf{q}_{1}&=-\frac{2\mathbf{g}_{ 1}+\mathbf{g}_{2}}{3},\\ \mathbf{q}_{2}&=\mathbf{q}_{1}+\mathbf{g}_{1},\\ \mathbf{q}_{3}&=\mathbf{q}_{1}+\mathbf{g}_{1}+ \mathbf{g}_{2},\end{split} \tag{21}\]
and their negatives. This construction holds in general for the moire pattern of a twisted bilayer superlattice without strain, since then the two lattices are only relatively rotated. However, when the lattices are deformed, the construction through the six vectors \(\mathbf{q}_{i}\) yields a deformed hexagon which is no longer the first BZ. The same holds for the moire superlattice.
Although the construction through Eq. (21) still gives a unit cell in reciprocal space, such cell does not reflect the symmetries of the strained moire patterns. Specifically, we refer to the symmetries relating the AA and AB stacking positions of the moire patterns, as seen in Fig. 2. The correct construction of the moire BZ (mBZ) under strain requires a generalization of Eq. (21) for the case in which the lattice vectors can have any angle and length. Following our previous discussion, we will focus on the situations in which the lattice vectors have equal length. In that case, the points that determine the mBZ are given by
\[\begin{split}\mathbf{Q}_{1}&=-\frac{\left(1+2\chi \right)\mathbf{g}_{1}-\lambda\mathbf{g}_{2}}{2\left(1+\chi\right)},\\ \mathbf{Q}_{2}&=\mathbf{Q}_{1}+\mathbf{g}_{1},\\ \mathbf{Q}_{3}&=\mathbf{Q}_{1}+\mathbf{g}_{1}-\lambda \mathbf{g}_{2},\end{split} \tag{22}\]
where \(\chi=\left|\mathbf{g}_{1}\cdot\mathbf{g}_{2}\right|/\left|\mathbf{g}_{1}\cdot\mathbf{g}_{1}\right|\) and \(\lambda=\operatorname{sign}\left(\mathbf{g}_{1}\cdot\mathbf{g}_{2}\right)+\delta_{0,\mathbf{g}_{1}\cdot\mathbf{g}_{2}}\) (see Appendix C for details). It is easy to see that the points in Eq. (22) reduce to those given in Eq. (21) only for a hexagonal lattice with \(\beta=2\pi/3\). Note that for \(\beta=\pi/2\) the six points reduce to four because \(\mathbf{Q}_{1}=-\mathbf{Q}_{3}\), thus resulting in a square mBZ. In Fig. 5 we show the evolution of the mBZ with the applied strain. A comparison is made with the deformed hexagon calculated with the points \(\mathbf{q}_{i}\) given by Eq. (21).
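A sketch of the construction of Eq. (22), including the check that it reduces to Eq. (21) in the hexagonal limit:

```python
import numpy as np

def mbz_corners(g1, g2):
    """Corner points of the strained moire BZ, Eq. (22), for |g1| = |g2|."""
    dot = np.dot(g1, g2)
    chi = abs(dot) / np.dot(g1, g1)
    lam = np.sign(dot) if dot != 0 else 1.0  # lambda of Eq. (22)
    Q1 = -((1 + 2 * chi) * g1 - lam * g2) / (2 * (1 + chi))
    Q2 = Q1 + g1
    Q3 = Q1 + g1 - lam * g2
    return Q1, Q2, Q3  # the other three corners are -Q1, -Q2, -Q3

# Hexagonal check (beta = 120 deg): recovers q1 = -(2 g1 + g2)/3 of Eq. (21)
g1 = np.array([np.cos(np.radians(120.0)), np.sin(np.radians(120.0))])
g2 = np.array([1.0, 0.0])
print(mbz_corners(g1, g2)[0], -(2 * g1 + g2) / 3)  # identical points
```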
The mBZ, and its counterpart in real space (the Wigner-Seitz cell shown in Fig. 2), provide a direct visualization of the geometrical properties of the moire patterns under strain. This becomes clear by analyzing the shapes of the mBZ in Fig. 5, which follow a distinct pattern depending on the angle \(\beta\). In contrast, the deformed hexagon cell only reflects the magnitude of the strain in the system (i.e., the larger the strain, the longer the deformed hexagon gets), similarly to how the AA stacking stretches in real space (see Fig. 2b). This behavior has been used to characterize the moire patterns under strain, e.g., by reshaping the deformed hexagons to a regular form [43]. We believe, however, that the alternative way of looking at the moire patterns, by considering the mBZ or the Wigner-Seitz cell in real space, gives a clearer representation of the strained superlattice geometry. As noted in Sec. II.1, the underlying distortion of the honeycomb lattices, and thus the magnitude of the strain, is reflected in the stretch of the AA stacking within the primitive cell. Furthermore, the reshaping of the mBZ may complement the approach in Ref. [45], where strain-induced open Fermi surfaces in a distorted honeycomb cell were proposed to explain the unusual magnetotransport experiments in Ref. [88]. However, the impact of mBZ reshaping due to strain on magnetotransport experiments remains an open question.
## III Electronic properties of strained moire lattices
### Effective continuum models
While the shape and form of strained moire patterns only reflect the geometrical differences between deformed lattices, the electronic properties reflect other important consequences of the strain, such as the shift of the Dirac points, the influence of the moire potential that couples them, and the splitting of the van Hove singularities, among others [42; 44]. For sufficiently low twist angles, these properties can be captured by a direct extension of the continuum model [4; 7; 10] in the presence of strain [42; 43; 44].

Figure 5: Evolution of the Brillouin zone for different angles \(\beta\) between equal length superlattice vectors (shown in black). Each respective mBZ constructed with vectors from Eqs. (22) is shown in blue, while the deformed hexagons constructed using vectors from Eqs. (21) are shown in red. Both constructions coincide only in the non-strain limit where \(\beta=120^{\circ}\). With strain, the deformed hexagons no longer capture the full symmetry of the moiré patterns. In particular, only the mBZ is symmetric around \(\beta=90^{\circ}\), since it corresponds to the same moiré pattern rotated by \(180^{\circ}\).
First, we note that under strain the mBZ and the position of the Dirac points in each lattice change. As a result, the latter in general do not coincide with the high symmetry points at the borders of the mBZ. At small deformations, the new positions of the Dirac points in the \(\zeta\) valley of the \(\ell=\pm\) layer are given by
\[\mathbf{D}_{\ell,\zeta}\simeq\left(\mathbf{1}-\ell\mathcal{E}/2\right) \mathrm{R}\left[\ell\theta/2\right]\mathbf{K}_{\zeta}-\ell\zeta\mathbf{A}, \tag{23}\]
where \(\mathbf{K}_{\zeta}=-\zeta\left(2\mathbf{b}_{1}+\mathbf{b}_{2}\right)/3\) is the Dirac point in the undeformed honeycomb lattice, and
\[\mathbf{A}=\frac{\sqrt{3}}{2a}\beta_{G}\left(\epsilon_{xx}-\epsilon_{yy},-2\epsilon_{xy}\right) \tag{24}\]
is the strain-induced gauge potential (\(\beta_{G}\simeq 3.14\) is the Grüneisen parameter) [89]. The two terms in Eq. (23) represent the combined effect of the strain on the position of the Dirac points: the first term gives the shift due to the geometrical distortion of the lattice, while the second term gives the shift due to the change in the hopping energies. In addition, strain can also lead to scalar (or deformation) potentials [90; 56; 91]
\[V=g\left(\epsilon_{xx}+\epsilon_{yy}\right). \tag{25}\]
We use \(g=4\) eV for monolayer graphene [92]. The aforementioned potential is incorporated into the diagonal elements of the Dirac Hamiltonian, resulting in a vertical energy displacement of the Dirac cones within each monolayer. This phenomenon resembles the response observed under the influence of a perpendicular electric field [93].
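The shifted Dirac points, the gauge field, and the scalar potential of Eqs. (23)-(25) can be evaluated directly from a given strain tensor; below is a NumPy sketch using the lattice conventions of Sec. II.1 (lengths in angstrom).

```python
import numpy as np

a = 2.46           # lattice constant (angstrom)
beta_G = 3.14      # Gruneisen parameter

def gauge_field(E):
    """Strain-induced gauge potential, Eq. (24)."""
    return (np.sqrt(3) / (2 * a)) * beta_G * np.array([E[0, 0] - E[1, 1],
                                                       -2 * E[0, 1]])

def deformation_potential(E, g=4.0):
    """Scalar (deformation) potential of Eq. (25), g in eV."""
    return g * (E[0, 0] + E[1, 1])

def dirac_point(theta, E, ell, zeta):
    """Shifted Dirac point D_{ell,zeta} of Eq. (23); ell, zeta = +1 or -1."""
    b1 = (2 * np.pi / a) * np.array([1.0, -1.0 / np.sqrt(3.0)])
    b2 = (2 * np.pi / a) * np.array([0.0, 2.0 / np.sqrt(3.0)])
    K = -zeta * (2 * b1 + b2) / 3          # undeformed Dirac point K_zeta
    t = ell * theta / 2
    R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    return (np.eye(2) - ell * E / 2) @ R @ K - ell * zeta * gauge_field(E)
```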
In a TBG configuration, at low twist angles and strain magnitudes the low energy physics is dominated by states near the shifted Dirac points \(\mathbf{D}_{\ell,\zeta}\). The continuum model Hamiltonian under strain, for the \(\zeta\) valley, then reads
\[H_{\zeta}=\left[\begin{array}{cc}h_{-,\zeta}\left(\mathbf{q}\right)-V \mathbf{1}&U^{\dagger}\left(\mathbf{r}\right)\\ U\left(\mathbf{r}\right)&h_{+,\zeta}\left(\mathbf{q}\right)+V\mathbf{1}\end{array} \right]. \tag{26}\]
Here \(h_{\ell,\zeta}\left(\mathbf{q}\right)\) is the Dirac Hamiltonian in the \(\ell\) layer,
\[h_{\ell,\zeta}\left(\mathbf{q}\right)=-\hbar v_{F}\mathbf{\sigma}_{\zeta}\cdot \mathrm{R}^{\mathrm{T}}\left(\ell\theta/2\right)\left(1+\ell\mathcal{E}/2 \right)\left(\mathbf{q}-\mathbf{D}_{\ell,\zeta}\right), \tag{27}\]
where \(v_{F}\) is the Fermi velocity and \(\mathbf{\sigma}_{\zeta}=\left(\zeta\sigma_{x},\sigma_{y}\right)\). The coupling between the layers is given by the matrix \(U\left(\mathbf{r}\right)\) in the non-diagonal terms of \(H_{\zeta}\); its leading order Fourier expansion is
\[U\left(\mathbf{r}\right)=U_{0}+U_{1}e^{i\zeta\mathbf{g}_{1}\cdot\mathbf{r}}+ U_{1}^{\mathrm{T}}e^{i\zeta(\mathbf{g}_{1}+\mathbf{g}_{2})\cdot\mathbf{r}}, \tag{28}\]
where \(\mathbf{g}_{i}\) are the strained moire vectors and
\[U_{0}=\left(\begin{array}{cc}u_{0}&u_{1}\\ u_{1}&u_{0}\end{array}\right),\quad U_{1}=\left(\begin{array}{cc}u_{0}&u_{1} \omega^{-\xi\zeta}\\ u_{1}\omega^{\xi\zeta}&u_{0}\end{array}\right), \tag{29}\]
where \(\omega=e^{i2\pi/3}\), and \(u_{1}\), \(u_{0}\) are the \(AB\) and \(AA\) hopping energies, respectively. For the numerical calculations, we use \(u_{0}=u_{1}=90\) meV and \(\hbar v_{F}/a=2.135\) eV. In the matrix \(U_{1}\) we have introduced a factor \(\xi=\pm 1\) that accounts for the phase of the three leading order momentum transfers between the shifted Dirac points in each layer. This phase is contingent upon the specific type of strain. In particular, \(\xi=1\) in Fig. 7a) and c) for TBG and pure biaxial strain, respectively, and \(\xi=-1\) in Fig. 7b) for pure shear strain.
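As an illustration, a minimal sketch of the corresponding building blocks of Eqs. (26)-(29), assuming \(u_{0}=u_{1}=0.090\) eV and \(\hbar v_{F}/a=2.135\) eV as quoted above; the strained moiré vectors \(\mathbf{g}_{1},\mathbf{g}_{2}\) and the shifted Dirac point \(\mathbf{D}\) are assumed to be precomputed (e.g. with the helpers of the previous sketch):

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)     # Pauli sigma_x
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)  # Pauli sigma_y

def rotation(angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s], [s, c]])

def dirac_block(q, D, eps, theta, ell, zeta, a=0.246, hvF_over_a=2.135):
    """Strained Dirac Hamiltonian h_{ell,zeta}(q) of Eq. (27); energies in eV."""
    k = rotation(ell * theta / 2.0).T @ (np.eye(2) + ell * eps / 2.0) @ (np.asarray(q) - D)
    return -(hvF_over_a * a) * (zeta * k[0] * SX + k[1] * SY)

def coupling_matrices(zeta, xi, u0=0.090, u1=0.090):
    """Interlayer hopping matrices U0, U1 of Eq. (29)."""
    w = np.exp(2j * np.pi / 3.0)
    U0 = np.array([[u0, u1], [u1, u0]], dtype=complex)
    U1 = np.array([[u0, u1 * w ** (-xi * zeta)],
                   [u1 * w ** (xi * zeta), u0]], dtype=complex)
    return U0, U1

def moire_potential(r, g1, g2, zeta, xi):
    """Leading-order Fourier expansion U(r) of Eq. (28)."""
    U0, U1 = coupling_matrices(zeta, xi)
    return (U0
            + U1 * np.exp(1j * zeta * np.dot(g1, r))
            + U1.T * np.exp(1j * zeta * np.dot(g1 + g2, r)))
```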
### Electronic structure: twist and strain
We first consider the case of TBG with uniaxial heterostrain [42; 43]. The numerical results for the band structure are shown in Fig. 6. As can be seen, even for relatively low strain magnitudes the bands
Figure 6: Evolution of the band structure of uniaxially heterostrained TBG with a twist angle \(\theta=1.05^{\circ}\), for different angles \(\beta\) between equal length moiré vectors. The strain parameters are: a) \(\epsilon=0\) (no strain), b) \(\epsilon\simeq 0.4\%\), \(\phi=-12.0^{\circ}\) and c) \(\epsilon\simeq 0.8\%\), \(\phi=-9.40^{\circ}\). The angles between the reciprocal moiré vectors are \(\beta=120^{\circ},105^{\circ}\) and \(90^{\circ}\), respectively. Underneath each 3D plot we show the corresponding mBZ, and to their side the corresponding density plot of the lower (bottom panel) and upper (top panel) middle band. The respective mBZ constructed with vectors defined in Eq. (22) are shown in white.
greatly differ from those in the non-strain case. A discussion of several aspects is in order. First, we note that under strain the positions of the shifted Dirac points define a periodicity which no longer coincides with the corners of the mBZ. Indeed, according to Eq. (23), the difference \(\Delta\mathbf{D}=\mathbf{D}_{-}-\mathbf{D}_{+}\) between two shifted Dirac points corresponding to, e.g., the non-deformed position \(\mathbf{K}=-\left(2\mathbf{b}_{1}+\mathbf{b}_{2}\right)/3\), is given by
\[\Delta\mathbf{D}=-\frac{2\mathbf{g}_{1}+\mathbf{g}_{2}}{3}+2\mathbf{A}, \tag{30}\]
where \(\mathbf{g}_{i}=\mathbf{T}\mathbf{b}_{i}\) are the strained moire vectors. Clearly, the vector \(\Delta\mathbf{D}\) only coincides with the corner \(\mathbf{Q}_{1}\) of the mBZ [cf. Eq. (22)] when the angle between \(\mathbf{g}_{1}\) and \(\mathbf{g}_{2}\) is \(120^{\circ}\) and \(\mathbf{A}=0\), i.e., in the non-strain case. Note that even in the case of pure shear strain with no twist, where the moire geometry is the same as in the only-twist case, the vector \(\Delta\mathbf{D}\) would still be shifted from the hexagonal mBZ due to the non-zero gauge field \(\mathbf{A}_{\text{shear}}\propto(0,-2\epsilon_{xy})\). This is expected because the honeycomb lattices are distorted by the strain, and therefore the hopping energies are no longer the same as in the only-twist case. It should also be noted that any relation between the Dirac points and the borders of the moire BZ is further blurred at low twist angles, where the Dirac points are strongly coupled by the moire potential.
Besides the actual shift in momentum due to strain-induced gauge and deformation fields, there is also an additional energy shift of the Dirac points, which gets larger as the strain increases. As a result, the lowest bands around the magic angle still have two distinct Dirac points in the presence of strain. A close inspection reveals that such suppression of the flat bands occurs even when the gauge and deformation fields are not taken into account (cf. Appendix D), thus hinting that it is mainly due to how the strain influences the coupling of the Dirac points by the moire potential. Although a concrete explanation of this behavior is still lacking, it may hint that the origin of flat bands in TBG is intrinsically related to the symmetries of the system, particularly those relating the moire potential \(U\left(\mathbf{r}\right)\) (which always has a hexagonal symmetry) and the three momentum transfers \(\mathbf{q}_{i}\) (whose hexagonal symmetry is in general broken by the strain). Note that, although the strain breaks the \(\mathcal{C}_{3z}\), \(\mathcal{C}_{2x}\) and \(\mathcal{C}_{2y}\) rotational symmetries, the inversion symmetry \(\mathcal{C}_{2z}\) remains intact, as seen in Fig. 6.
### Electronic structure: Pure strain
Next we examine the scenario of hexagonal moire structures emerging solely from strain (cf. Fig. 4). These cases are interesting because, when compared to the situation of hexagonal patterns arising from only a twist, they reflect the direct effect of strain on the electronic properties. In particular, by using the relations (16) and (20), we are able to compare the electronic structures of cases that share the same moire periodicity. In Fig. 8 we present the results for the band structure, density of states and charge density. For comparison, we also include the results for TBG without strain.
Remarkably, although all cases shown have the same hexagonal moire periodicity, their electronic properties differ substantially. The strain thus plays a decisive role in how the Dirac points in each lattice couple through the moire potential. This can be attributed to the actual distortion of each lattice under strain, which, as seen in Fig. 4, results in different behaviors around AA or AB stacking positions, even if at the moire scale they all look the same. Within the continuum model, these differences are mainly reflected in how the three leading hopping processes between the Dirac points in each lattice are arranged, cf. Fig. 7. In particular, we only observe flat bands, and a corresponding peak in the density of states, in unstrained TBG. With strain, these flat bands disappear,
Figure 7: Reciprocal space representation of superlattice structures with hexagonal unit cells. Large hexagons represent the BZ of each monolayer (in red and blue). The moiré BZs are represented by the small black hexagons. The figure shows the structures for: a) pure twist angle, b) pure shear strain and c) pure biaxial strain. The corresponding mBZ and the hopping processes between the Dirac cones of each graphene monolayer are displayed at the bottom. Arrows indicate the direction of the momentum transfers between Dirac points.
and a splitting and emergence of multiple high-order van Hove singularities takes place. We also observe that in TBG the difference between the charge density at the center and at the edges of the mBZ is more significant than in the two cases involving only strain. This contrast implies potential variations in the electrostatic interactions within purely strained systems when compared to those observed in TBG [94, 95].
## IV Conclusions
We have presented a general theoretical scheme that describes strain effects in twisted two-dimensional materials. We have shown that the interplay between twist and strain can lead to the formation of practically any moire geometry. The strain plays a central role in this by distorting the lattices and thus modifying the resulting relative length and angle between the moire vectors. Due to the magnifying effect of the moire pattern formation, this effect becomes significant even at very small strain magnitudes, where each layer's lattice is barely deformed. Thus the plethora of moire patterns observed in experiments can be directly attributed to the presence of small strain in the samples. Our considerations, however, go far beyond the mere diagnosis of such intrinsic effects and offer a platform to actually design moire patterns by strain. Indeed, we have described in detail the necessary conditions to form any desired moire geometry, simply by selectively changing the twist and strain parameters. In particular, we have specified the conditions to form special moire geometries, such as square moire patterns, or hexagonal moire patterns induced solely by strain. Furthermore, we have identified that the modifications of the moire geometry due to the strain lead to significant deformations of the moire Brillouin zone (mBZ). In contrast to previous studies we have found that, when subject to strain, the mBZ is not a deformed stretched hexagon, but rather a primitive cell that reflects the new symmetries of the strained moire vectors. This might have important implications, in particular with respect to identifying the high symmetry points in band structures. We have rounded off our study by analyzing the electronic properties of the above strained moire patterns. We have found that the strain almost invariably suppresses the formation of moire flat bands, even in those hexagonal patterns formed only by strain. It also tends to split and induce higher order van Hove singularities, as well as to modify the charge density profile.
Figure 8: Band structures of hexagonal moiré patterns generated by: (a) only twist angle \(\theta=1.05^{\circ}\); (b) only shear strain with the magnitude \(\epsilon_{s}=2\sin{(\theta/2)}\simeq 1.83\%\); (c) only biaxial strain with the magnitude \(\epsilon_{b}=2\sin{(\theta/2)}\simeq 1.83\%\). The momentum path in each case is shown in Fig. 7. From Eqs. (16) and (20), all cases have the same moiré periodicity \(L\simeq 13.4\) nm. Panels e)-g) display the corresponding total charge density profile of the lower middle band. 3D plots in panels d) and h) show the bands for the moiré structures realized by the shear and biaxial strain, respectively.
## Acknowledgments
We thank Vincent Renard, Niels R. Walet, Adrian Ceferino and Gerardo Naumis for discussions. IMDEA Nanociencia acknowledges support from the "Severo Ochoa" Programme for Centres of Excellence in R&D (Grant No. SEV-2016-0686). F.E. acknowledges support from a research fellowship of CONICET, and PIPCONICET 2021-2023 Grant No. 11220200100941CO. Z.Z. acknowledges funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 101034431 and from the "Severo Ochoa" Programme for Centres of Excellence in R&D (CEX2020-001039-S / AEI / 10.13039/501100011033). P.A.P. and F.G. acknowledge funding from the European Commission, within the Graphene Flagship, Core 3, grant number 881603 and from grants NMAT2D (Comunidad de Madrid, Spain), SprQuMat and (MAD2D-CM)-MRR MATERIALS AVANZADOS-IMDEA-NC.
## Appendix A Hexagonal symmetry in strained moire patterns
As discussed in Sec. II.1, and shown in an example in Fig. 1, under strain the construction of the moire vectors using the usual recipe \(\mathbf{g}_{i}=\bar{\mathbf{b}}_{-,i}-\bar{\mathbf{b}}_{+,i}\) does not always yield the smallest (primitive) vectors of the superlattice. A comprehensive analysis of all moire geometries that can be formed under strain should account for which construction of the moire vectors gives the smallest ones. The analysis can be simplified by using symmetry arguments.
First we note that the smallest primitive moire vectors are, in general, given by one of these vectors: \(\mathbf{g}_{1}\), \(\mathbf{g}_{2}\), or \((\mathbf{g}_{1}+\mathbf{g}_{2})\), where \(\mathbf{g}_{i}=\mathbf{T}\mathbf{b}_{i}\) [cf. Eq. (1)]. Choosing the first two gives, of course, the usual set \(\{\mathbf{g}_{1},\mathbf{g}_{2}\}\), from which one can carry out the analysis of the strained moire geometries as done in the manuscript. But there are also two other possible sets: \(\{\mathbf{g}_{1},\mathbf{g}_{1}+\mathbf{g}_{2}\}\) and \(\{\mathbf{g}_{2},\mathbf{g}_{1}+\mathbf{g}_{2}\}\) (see Fig. 1). One of these two sets can be primitive when a translation of one moire vector \(\mathbf{g}_{i}\) by the other one \(\mathbf{g}_{j}\) results in a smaller vector (i.e., when \(|\mathbf{g}_{1}+\mathbf{g}_{2}|<|\mathbf{g}_{i}|\)). The appropriate construction of the primitive moire vectors by one of these three sets preserves the symmetries of the system (in the present case, the hexagonal symmetry of the underlying honeycomb lattices).
To see this, consider the equal length moire vectors condition for the case of uniaxial heterostrain. For the usual set \(\{\mathbf{g}_{1},\mathbf{g}_{2}\}\), such condition implies the strain magnitude given by Eq. (9). For the other two sets \(\{\mathbf{g}_{1},\mathbf{g}_{1}+\mathbf{g}_{2}\}\) and \(\{\mathbf{g}_{2},\mathbf{g}_{1}+\mathbf{g}_{2}\}\), the equal length condition can be stated as
\[\mathbf{F}\left[\mathbf{b}_{1}-(\mathbf{b}_{1}+\mathbf{b}_{2}) \right]\cdot\left[\mathbf{b}_{1}+(\mathbf{b}_{1}+\mathbf{b}_{2})\right] =0, \tag{101}\] \[\mathbf{F}\left[\mathbf{b}_{2}-(\mathbf{b}_{1}+\mathbf{b}_{2}) \right]\cdot\left[\mathbf{b}_{2}+(\mathbf{b}_{1}+\mathbf{b}_{2})\right] =0, \tag{102}\]
where we have used that \(\mathbf{F}=\mathbf{T}^{\mathrm{T}}\mathbf{T}\) is a symmetric transformation. Solving for the strain magnitude gives
\[\left|\mathbf{g}_{1}\right|=\left|\mathbf{g}_{1}+\mathbf{g}_{2} \right|\rightarrow\epsilon_{\mathrm{eq},2}=\frac{4}{\nu-1}\cot\left(2\phi \right)\tan\left(\theta/2\right), \tag{103}\] \[\left|\mathbf{g}_{2}\right|=\left|\mathbf{g}_{1}+\mathbf{g}_{2} \right|\rightarrow\epsilon_{\mathrm{eq},3}=\frac{4}{\nu-1}\cot\left(\frac{ \pi}{3}+2\phi\right)\tan\left(\theta/2\right). \tag{104}\]
Comparing with Eq. (9) we see that
\[\epsilon_{\mathrm{eq}}\left(\phi+\pi/3\right) =\epsilon_{\mathrm{eq},3}\left(\phi\right), \tag{105}\] \[\epsilon_{\mathrm{eq}}\left(\phi-\pi/3\right) =\epsilon_{\mathrm{eq},2}\left(\phi\right), \tag{106}\]
thus restoring the hexagonal symmetry. Geometrically, the obtained result means that for given parameters \(\left(\theta,\epsilon_{\mathrm{eq}},\phi\right)\), one of the equal length moire vectors is no longer primitive after the transformation \(\phi\rightarrow\phi\pm\pi/3\). Rather, the new primitive vector is found by appropriately changing the moire vector construction (e.g., by using a set other than the usual one \(\{\mathbf{g}_{1},\mathbf{g}_{2}\}\)). We emphasize that this is only a change in the superlattice description, due to how the moire vectors are constructed. The observed moire geometry, arising from the superposition of two strained honeycomb lattices with primitive vectors \(\mathbf{\tilde{a}}_{i,\pm}=\left(\mathbf{1}+\mathcal{E}_{\pm}\right)\mathrm{R} \left(\pm\theta/2\right)\mathbf{a}_{i}\), always reflects the honeycomb symmetries of the underlying lattices, such that any translation \(\phi\rightarrow\phi\pm\pi/3\) leads to the same moire pattern (up to an overall rotation of the system). For this reason it is more convenient to study, as done in the manuscript, the strained moire patterns by using only the set of vectors \(\{\mathbf{g}_{1},\mathbf{g}_{2}\}\), and generalizing the obtained results by taking into account the missing solutions corresponding to translations \(\phi\rightarrow\phi+\pi/3\). These latter solutions would then correspond to the ones obtained by
considering the other sets of possible primitive moire vectors.
## Appendix B Analytical solutions for equal length moire vectors
In the case of uniaxial heterostrain (Sec. II.2), by solving the angle equation (2) for \(\phi\) one can obtain the strain parameters needed to produce equal length moire vectors with an angle \(\beta\) between them. Taking into account the symmetrical solutions, we find
\[\epsilon_{s,r} =\frac{4s}{1-\nu}\frac{f_{r}}{\sqrt{1-f_{r}^{2}}}\tan\left(\theta /2\right), \tag{20}\] \[\phi_{s,r} =-\frac{s}{2}\arccos f_{r}+\frac{\pi}{3}\left(n+\frac{1}{2}\right), \tag{21}\]
where
\[f_{r}\left(\nu,\cos\beta\right)=\left(\frac{1-\nu}{1+\nu}\right)\frac{2+\cos \beta+r\sqrt{3}\left|\sin\beta\right|}{1+2\cos\beta}. \tag{22}\]
Here \(s,r=\pm 1\), and \(n\) is an integer. The solutions are given in terms of four roots, which correspond to four equivalent strain directions that yield the same angle \(\beta\). For both \(r=\pm 1\) one has two strain angles \(\phi\) which are related by \(\phi_{-,r}+\phi_{+,r}=\pi/3+n\pi\). Consequently there are always two strain angles, \(\phi_{+}\) and \(\phi_{-}=\pi/3-\phi_{+}\), with corresponding strain magnitudes \(\pm\epsilon_{r}\), which give the same moire pattern. Each angle \(\phi_{\pm}\) is, in turn, symmetrical under the exchange \(\phi_{\pm}\rightarrow\phi_{\pm}+\pi/3\), due to the honeycomb symmetry of the lattice. The \(r=1\) roots correspond to the moire patterns formed through the lateral contraction of the honeycomb lattices, as measured by Poisson's ratio, and thus correspond to larger strain magnitudes. While the \(r=-1\) roots are solutions for any angle \(\beta\), the \(r=1\) roots are only solutions for certain \(\beta\). The corresponding equal length of the moire vectors reads
\[\frac{\left|\mathbf{g}_{i}\right|^{2}}{\left|\mathbf{b}_{i}\right|^{2}}=\frac{ \left(1+\nu\right)^{2}f_{r}^{2}-\left(1-\nu^{2}\right)f_{r}+\left(1-\nu\right) ^{2}}{\left(1-f_{r}^{2}\right)\left(1-\nu\right)^{2}}4\sin^{2}\left(\frac{ \theta}{2}\right). \tag{23}\]
It is important to note that the strain angle \(\phi\) is measured with respect to the orientation of the (non-deformed) honeycomb lattice. Upon rotation of both hexagonal monolayers by \(\pm\theta/2\), the actual strain direction relative to each lattice is \(\pm\theta/2+\phi\). Although the axis from which \(\phi\) is measured depends on the chosen frame of reference (i.e., the lattice vectors \(\mathbf{a}_{i}\)), the actual direction of the strain, in relation to the orientation of the honeycomb primitive cell (hexagon), is always fixed.
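For concreteness, the closed-form roots above can be evaluated directly; in the following Python sketch (for illustration only) `nu` is Poisson's ratio and the angles are in radians. Assuming \(\nu\simeq 0.16\) for graphene, the \(r=-1\) root at \(\theta=1.05^{\circ}\) and \(\beta=90^{\circ}\) reproduces the values \(\epsilon\simeq 0.8\%\), \(\phi\simeq-9.4^{\circ}\) quoted in the caption of Fig. 6.

```python
import numpy as np

def f_root(nu, beta, r):
    """The function f_r(nu, cos beta) entering the equal-length conditions."""
    return ((1.0 - nu) / (1.0 + nu)) * (
        2.0 + np.cos(beta) + r * np.sqrt(3.0) * abs(np.sin(beta))
    ) / (1.0 + 2.0 * np.cos(beta))

def equal_length_strain(nu, theta, beta, s=1, r=-1, n=0):
    """Strain magnitude eps_{s,r} and direction phi_{s,r} for a target angle beta."""
    f = f_root(nu, beta, r)
    if not -1.0 < f < 1.0:
        raise ValueError("no real root for these parameters")
    eps = (4.0 * s / (1.0 - nu)) * f / np.sqrt(1.0 - f ** 2) * np.tan(theta / 2.0)
    phi = -0.5 * s * np.arccos(f) + (np.pi / 3.0) * (n + 0.5)
    return eps, phi

eps, phi = equal_length_strain(nu=0.16, theta=np.radians(1.05), beta=np.radians(90.0))
# eps ~ 0.0086 (about 0.8%), phi ~ -9.4 deg, matching Fig. 6c).
```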
## Appendix C Construction of the moire Brillouin zone
Consider two equal length vectors \(\mathbf{g}_{1}\) and \(\mathbf{g}_{2}\) with angle \(\beta\) between them. We set, without loss of generality, the vector \(\mathbf{g}_{1}\) on the \(x\) axis,
\[\mathbf{g}_{1} =g\left(1,0\right), \tag{24}\] \[\mathbf{g}_{2} =g\left(\cos\beta,\sin\beta\right). \tag{25}\]
For any reciprocal vector \(\mathbf{R}\left(m_{1},m_{2}\right)=m_{1}\mathbf{g}_{1}+m_{2}\mathbf{g}_{2}\), the corresponding Bragg line, which we shall denote as \(l\left(m_{1},m_{2}\right)\), crosses \(\mathbf{R}\) perpendicularly at \(\mathbf{R}/2\). Since the mBZ has a mirror symmetry at \(\beta=\pi/2\) by a reflection at the \(x\) axis, it is sufficient to consider \(\beta<\pi/2\). In that case the six intersections are between the following pairs of Bragg lines
\[l\left(1,0\right);l\left(0,1\right), \tag{26}\] \[l\left(0,1\right);l\left(-1,1\right),\] (27) \[l\left(-1,0\right);l\left(-1,1\right), \tag{28}\]
and their negatives (see Fig. 10). Now, for an arbitrary vector \(\mathbf{f}=\left(f_{x},f_{y}\right)\) in the \(xy\) plane, a perpendicular vector is \(\mathbf{n}=\mathbf{e}_{z}\times\mathbf{f}=\left(-f_{y},f_{x}\right)\), whose angle with the \(x\) axis is \(\alpha=\arctan\left(n_{y}/n_{x}\right)\). The line perpendicular to \(\mathbf{f}\) that crosses \(\mathbf{f}/2\) then reads \(y=\left(f_{x}/f_{y}\right)\left(f_{x}/2-x\right)+f_{y}/2\). Therefore, since \(\mathbf{g}_{1}\) lies entirely along \(x\), the three lines that we need for the mBZ construction are
\[l\left(1,0\right):x_{1} =\frac{g}{2}, \tag{29}\] \[l\left(0,1\right):y_{2} =\frac{1}{\tan\beta}\left(\frac{g}{2}\cos\beta-x\right)+\frac{g}{ 2}\sin\beta,\] (30) \[l\left(-1,1\right):y_{3} =\frac{\cos\beta-1}{\sin\beta}\left(g\frac{\cos\beta-1}{2}-x \right)+\frac{g}{2}\sin\beta. \tag{31}\]
Figure 10: Construction of the mBZ for equal length lattice vectors \(\mathbf{g}_{1}\) and \(\mathbf{g}_{2}\), with angles between them \(\beta=70^{\circ}\) (left) and \(\beta=110^{\circ}\) (right). The Bragg lines are shown in light gray, whose intersections determine the mBZ (shown in black). If \(\beta<90^{\circ}\), the intersections are between the Bragg lines associated with the vectors \(\mathbf{g}_{1}\), \(\mathbf{g}_{2}\), \(\mathbf{g}_{1}-\mathbf{g}_{2}\) (and their negatives), whereas if \(\beta>90^{\circ}\) the intersections are between the Bragg lines of \(\mathbf{g}_{1}\), \(\mathbf{g}_{2}\), \(\mathbf{g}_{1}+\mathbf{g}_{2}\). Note that, up to a rotation, both cases have the same mBZ, since they represent the same lattice. The transition at which \(\left|\mathbf{g}_{1}-\mathbf{g}_{2}\right|\) becomes larger (or smaller) than \(\left|\mathbf{g}_{1}+\mathbf{g}_{2}\right|\) occurs at the critical square case \(\beta=90^{\circ}\), where \(\left|\mathbf{g}_{1}+\mathbf{g}_{2}\right|=\left|\mathbf{g}_{1}-\mathbf{g}_{ 2}\right|\) and the six points of the mBZ are reduced to four.
This leads to the three intersection points
\[\mathbf{I}_{1} =\frac{g}{2}\left[1,\frac{1}{\tan\beta}\left(\cos\beta-1\right)+ \sin\beta\right], \tag{109}\] \[\mathbf{I}_{2} =\frac{g}{2}\left[2\cos\beta-1,-\frac{1}{\tan\beta}\left(\cos \beta-1\right)+\sin\beta\right],\] (110) \[\mathbf{I}_{3} =\frac{g}{2}\left[-1,\frac{1}{\tan\beta}\left(\cos\beta-1\right)+ \sin\beta\right]. \tag{111}\]
We can write these points in terms of the vectors \(\mathbf{g}_{i}\) as
\[\mathbf{I}_{1} =\frac{\mathbf{g}_{1}+\mathbf{g}_{2}}{2}\left(1+\frac{\mathbf{g} _{1}\cdot\mathbf{g}_{2}}{\mathbf{g}_{1}\cdot\mathbf{g}_{1}}\right)^{-1}, \tag{112}\] \[\mathbf{I}_{2} =-\mathbf{I}_{1}+\mathbf{g}_{2},\] (113) \[\mathbf{I}_{3} =\mathbf{I}_{1}-\mathbf{g}_{1}. \tag{114}\]
The case \(\beta>\pi/2\) is obtained by a mirror reflection of \(\mathbf{g}_{2}\) around \(\mathbf{g}_{1}\), thus leading to Eq. (22) after changing the notation of the intersection points.
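The intersection points above assemble directly into the mBZ polygon; a minimal Python sketch (for illustration, valid for \(\beta<90^{\circ}\), with the case \(\beta>90^{\circ}\) obtained by the mirror reflection just described):

```python
import numpy as np

def mbz_vertices(g, beta):
    """Six mBZ vertices for equal-length vectors g1, g2 with angle beta < pi/2."""
    g1 = g * np.array([1.0, 0.0])
    g2 = g * np.array([np.cos(beta), np.sin(beta)])
    I1 = 0.5 * (g1 + g2) / (1.0 + np.dot(g1, g2) / np.dot(g1, g1))
    I2 = -I1 + g2
    I3 = I1 - g1
    return np.array([I1, I2, I3, -I1, -I2, -I3])
```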
## Appendix D Electronic properties without strain fields
Figure 11 shows the electronic structure of TBG under uniaxial heterostrain, but with zero gauge and scalar strain fields. Parameters are the same as in Fig. 6. Even in the absence of gauge fields there is a distortion of the energy bands. As the strain increases, the mBZ is distorted, the Dirac cones are shifted and the remote bands are pushed to a region close to the narrow bands.
|
2309.07542 | An elliptic problem involving critical Choquard and singular
discontinuous nonlinearity | The present article investigates the existence, multiplicity and regularity
of weak solutions of problems involving a combination of critical Hartree type
nonlinearity along with singular and discontinuous nonlinearity. By applying
variational methods and using the notion of generalized gradients for Lipschitz
continuous functional, we obtain the existence and the multiplicity of weak
solutions for some suitable range of $\lambda$ and $\gamma$. Finally by
studying the $L^\infty$-estimates and boundary behavior of weak solutions, we
prove their H\"{o}lder and Sobolev regularity. | Gurdev C. Anthal, Jacques Giacomoni, Konijeti Sreenadh | 2023-09-14T09:13:39Z | http://arxiv.org/abs/2309.07542v1 | # An elliptic problem involving critical Choquard and singular discontinuous nonlinearity
###### Abstract
The present article investigates the existence, multiplicity and regularity of weak solutions of problems involving a combination of critical Hartree type nonlinearity along with singular and discontinuous nonlinearity (see \((\mathcal{P}_{\lambda})\) below). By applying variational methods and using the notion of generalized gradients for Lipschitz continuous functionals, we obtain the existence and the multiplicity of weak solutions for some suitable range of \(\lambda\) and \(\gamma\). Finally, by studying the \(L^{\infty}\)-estimates and boundary behaviour of weak solutions, we prove their Hölder and Sobolev regularity.
**Key words:** Critical Choquard nonlinearity, Hardy-Littlewood-Sobolev inequality, existence results, discontinuous nonlinearites.
_2020 Mathematics Subject Classification:_ 35J20, 35J60, 35J75.
## 1 Introduction
In this article, we consider the following elliptic problem involving both critical Choquard and discontinuous and singular nonlinearities. Precisely, we deal with
\[(\mathcal{P}_{\lambda})\begin{cases}-\Delta u=\lambda\left(\left(\int\limits_{ \Omega}\frac{|u|^{2^{*}_{\mu}}(y)}{|x-y|^{\mu}}dy\right)|u|^{2^{*}_{\mu}-2}u+ \chi_{\{u<a\}}u^{-\gamma}\right),\\ u>0\text{ in }\Omega,\ u\equiv 0\text{ on }\partial\Omega,\end{cases}\]
where \(\Omega\) is a bounded domain of \(\mathbb{R}^{n}\) with smooth boundary \(\partial\Omega\), \(\gamma>0\), \(n\geq 3\), \(a>0\), \(\lambda>0\), \(2^{*}_{\mu}=(2n-\mu)/(n-2)\), \(0<\mu<n\) and \(\chi_{A}\) denotes the characteristic function of a set \(A\).
An important obstacle in investigating this class of problems is that the corresponding energy functional is nondifferentiable due to the discontinuous nonlinearity. Therefore, we utilise the idea of generalized gradients as explained in the important work of F. H. Clarke ([8]), which was later
applied to the setting of partial differential equations by Chang [5].
The occurrence of discontinuous nonlinearities arises in the modelling of a number of physical issues, including the obstacle problem, the seepage surface problem, and the Elenbaas equation; for further information, see [6, 7]. These significant applications have driven a long series of investigations on problems involving such nonlinearities. We mention the pioneering work of Badiale and Tarantello [4], where existence and multiplicity results are established in the situation of critical growth and discontinuous nonlinearities in \(\mathbb{R}^{n}\) with \(n\geq 3\). We also quote further papers that consider different varieties of diffusion operators and nonlinearities, see [2, 11, 12, 20].
Problems involving Choquard type nonlinearities are widely studied since they find applications in various physical phenomena. First, S. I. Pekar [27] used this kind of nonlinearity to describe the quantum mechanics of a polaron at rest, whereas P. Choquard [24] described the model of an electron trapped in its own hole using such a nonlinearity. One of the initial studies of problems involving Choquard nonlinearities using variational methods was conducted by E. H. Lieb [24], wherein he established the existence and uniqueness of a positive radial ground state of the following problem
\[-\Delta u+V_{0}u=(I_{2}*|u|^{2})u\text{ in }\mathbb{R}^{3},\]
where \(I_{\mu}(x)=\dfrac{A_{\mu}}{|x|^{n-\mu}}\) with \(A_{\mu}=\frac{\Gamma\left(\frac{n-\mu}{2}\right)}{2^{\mu}\Gamma\left(\frac{n}{ 2}\right)\pi^{\frac{n}{2}}}\). Without any attempt to provide the complete list, we refer to [13, 25, 26] and the references therein for the study of Choquard problems using variational methods.
The problems involving singular nonlinearities have a very long history. This type of problem has numerous applications in the physical world, such as in the study of non-Newtonian flows in porous media and heterogeneous catalysts. One of the seminal breakthroughs in the study of such problems was the work [10, Crandall-Rabinowitz-Tartar]. By applying the method of sub-supersolutions to the nonsingular approximated problem and then passing to the limit, the authors proved the existence of a solution to a class of elliptic PDEs involving a singular nonlinearity. Following this pioneering work, a significant amount of study has been conducted on elliptic singular equations concerning existence and qualitative properties of solutions. In this regard, we refer to the survey articles [14, 21] and references therein.
The investigation of singular problems in combination with critical growth nonlinearities was pioneered by [19, Haitao] wherein the author considered the following problem:
\[-\Delta u=\lambda u^{-\gamma}+u^{p},\ u>0\text{ in }\Omega,\ u=0\text{ on }\partial\Omega \tag{1.1}\]
where \(\Omega\subset\mathbb{R}^{n}\) (\(n\geq 3\)) is a smooth bounded domain and \(\gamma\in(0,1)\), \(1<p\leq\frac{n+2}{n-2}\). Using monotone iterations and the mountain pass lemma, the author proved existence and multiplicity results for the maximal range of parameter \(\lambda\) (i.e. global multiplicity). Later in [1, 11] the authors studied such problems for the higher singular cases, i.e. with \(\gamma\in(1,3)\). Finally, Hirano, Saccon and Shioji in [22] handled problem (1.1) for any \(\gamma>0\) and showed the existence of \(L^{1}_{\text{loc}}\) solutions \(u\) satisfying \((u-\epsilon)^{+}\in H^{1}_{0}(\Omega)\) for all \(\epsilon>0\) using variational methods and nonsmooth analysis arguments. We also mention the work [15] where the authors studied a doubly nonlocal critical singular problem in the spirit of [22] and obtained the existence, multiplicity and regularity results.
The main motivation to consider the problem \((\mathcal{P}_{\lambda})\) comes from the works [11] and [12], where the authors discussed problems involving critical nonlinearities with singular and discontinuous nonlinearities for \(n=2\) and \(n\geq 3\) respectively. More precisely, in [12] the authors considered the following problem
\[\left\{-\Delta u=\lambda(\chi_{\{u<a\}}u^{-\gamma}+u^{2^{*}-1}),\ u>0\ \mbox{in}\ \Omega,\ u=0\ \mbox{on}\ \partial\Omega,\right.\]
for \(0<\gamma<3\), \(\lambda>0\), where \(2^{*}=2n/(n-2)\) is the critical Sobolev constant and obtain the existence and multiplicity results for suitable range of \(\lambda\).
Following the above discussion, we consider the problem \((\mathcal{P}_{\lambda})\) in the present work. The novelty of this work is twofold: the presence of a nonlocal and critical Hartree type nonlinearity, and a more singular nonlinearity (by considering higher values of \(\gamma>1\) with respect to former contributions). This brings additional technical difficulties and forces us to follow a new approximation approach. Based on the notion of weak solutions given in the next section, we prove the following existence and multiplicity result:
**Theorem 1.1**: _For any \(a>0\), there exists \(\Lambda^{a}>0\) such that_
1. \((\mathcal{P}_{\lambda})\) _has no solution for any_ \(\lambda>\Lambda^{a}\)_._
2. \((\mathcal{P}_{\lambda})\) _admits at least one (minimal) solution_ \(v_{\lambda}\) _for any_ \(\lambda\in(0,\Lambda^{a})\) _and_ \(\gamma>0\)_. Moreover for any_ \(\omega\in(\max\{\frac{\gamma+1}{4},1\},\infty)\)_,_ \(v_{\lambda}^{\omega}\in H^{1}_{0}(\Omega)\)_. In addition,_ \(v_{\lambda}\in H^{1}_{0}(\Omega)\) _if and only if_ \(\gamma<3\)_._
3. _Further if we take_ \(0<\gamma<3\) _and_ \(\mu<\min\{4,n\}\)_, then_ \((\mathcal{P}_{\lambda})\) _admits at least two solutions for any_ \(\lambda\in(0,\Lambda^{a})\)_._
We also discuss the boundary behavior and Hölder regularity results of weak solutions. We have the following result in this direction.
**Theorem 1.2**: _Let \(u\) be a weak solution of \((\mathcal{P}_{\lambda})\) and \(\phi_{\gamma}\) be given by Definition 2.3. Then \(u\in L^{\infty}(\Omega)\) and \(C_{1}\phi_{\gamma}\leq u\leq C_{2}\phi_{\gamma}\) for some positive constants \(C_{1}\) and \(C_{2}\). Moreover, the following assertions hold:_
1. _When_ \(\gamma>1\)_,_ \(u(x)\in C^{\frac{2}{\gamma+1}}(\overline{\Omega})\)_._
2. _When_ \(\gamma=1\)_,_ \(u(x)\in C^{\beta}(\overline{\Omega})\)_, for all_ \(0<\beta<1\)_._
3. _When_ \(\gamma<1\)_,_ \(u(x)\in C^{1,1-\gamma}(\overline{\Omega})\)_._
Problems involving discontinuous nonlinearities but without a singular term are tackled using variational techniques and the generalized gradient theory for locally Lipschitz functionals. However, the presence of the singular term makes the associated energy functional neither differentiable nor locally Lipschitz in \(H^{1}_{0}(\Omega)\), which prohibits the direct use of both techniques. In order to overcome these difficulties we first consider the regularized problem \((\mathcal{P}_{\lambda,\epsilon})\) (see Section 4). The use of this regularization makes the associated energy functional differentiable and thus allows the use of suitable variational methods.
We begin our analysis by studying the purely singular discontinuous problem \((\mathcal{S}_{\lambda})\) (see Section 3). To this aim, we consider the regularized problem \((\mathcal{S}_{\lambda,\epsilon})\). The analysis of \((\mathcal{S}_{\lambda,\epsilon})\) is divided into two cases depending upon the parameter \(\gamma\), i.e., \((a)\)\(0<\gamma<3\) and \((b)\)\(\gamma\geq 3\). For Case \((a)\), we apply Perron's method and show the existence of a unique solution to \((\mathcal{S}_{\lambda,\epsilon})\) in \(H^{1}_{0}(\Omega)\cap L^{\infty}(\Omega)\). Concerning Case \((b)\), we make use of monotone methods to obtain the existence of a unique solution of \((\mathcal{S}_{\lambda,\epsilon})\). The existence of the minimal weak solution of \((\mathcal{S}_{\lambda})\) is then obtained as the limit of solutions of the regularized problem. After studying the purely singular problem, we then show the existence of a weak solution of \((\mathcal{P}_{\lambda})\) for a suitable range of \(\lambda\), taking advantage of the construction of suitable sub- and supersolutions. Under the restriction \(\mu<\min\{4,n\}\), this solution is then shown to be a local minimum of the energy functional in the \(H^{1}_{0}(\Omega)\) topology. The existence of a second solution is then obtained by investigating the translated problem associated to \((\mathcal{P}_{\lambda})\). The associated energy functional is locally Lipschitz, which leads to the use of the generalized gradient technique. We further employ the Ekeland variational principle and the concentration-compactness principle to get the existence of a second solution. We point out here that the nonsmooth analysis arguments as performed in [22] cannot be used here because of the discontinuous term.
Turning to the structure of the paper, in Section 2 we collect the preliminaries required in the subsequent sections. In Section 3 we study the purely singular discontinuous problem. In Section 4, we obtain the existence of the first solution. In Section 5, the existence of the second solution is discussed, which completes the proof of Theorem 1.1. Finally, in Section 6 we discuss the regularity of the solutions and prove Theorem 1.2.
**Notations:** Throughout the paper, we will use the following notations:
* \(\delta(x):=\mathrm{dist}(x,\partial\Omega)\) and \(d_{\Omega}=\mathrm{diam}(\Omega)\);
* We denote positive constants by \(M,M_{1},M_{2},\cdots\);
* We denote the standard norm on \(L^{p}(\mathbb{R}^{n})\) by \(|\cdot|_{p}\);
* for any two functions \(g,\ h\), we write \(g\prec h\) or \(g\succ h\) if there exists a constant \(C>0\) such that \(g\leq Ch\) or \(g\geq Ch\). We write \(g\sim h\) if \(g\prec h\) and \(g\succ h\).
## 2 Preliminaries
In this section we give the functional settings and collect the notations and preliminary results required in the rest of the paper. We first define the notion of a weak solution as follows:
**Definition 2.1**: _We say that \(u\in H^{1}_{loc}(\Omega)\) is a weak solution of \((\mathcal{P}_{\lambda})\) if_
1. \(\text{essinf}_{K}\,u>0\) _for any compact set_ \(K\subset\Omega\)__
2. \((u-\nu)^{+}\in H^{1}_{0}(\Omega)\) _for every_ \(\nu>0\)_._
3. _For any_ \(\psi\in C^{\infty}_{c}(\Omega)\) _it holds_ \[\int\limits_{\Omega}\nabla u\nabla\psi=\lambda\int\limits_{\Omega}\chi_{\{u<a \}}u^{-\gamma}\psi+\lambda\int\limits_{\Omega}\int\limits_{\Omega}\frac{u^{2^ {*}_{\mu}}(y)u^{2^{*}_{\mu}-1}(x)\psi(x)}{|x-y|^{\mu}}dxdy.\] (2.1)
**Remark 2.2**: _We want to remark that the assumption \((u-\nu)^{+}\in H^{1}_{0}(\Omega)\) for every \(\nu>0\) holds if there exists \(\ell\geq 1\) such that \(u^{\ell}\in H^{1}_{0}(\Omega)\)._
The formal energy functional \(J^{a}_{\lambda}(u)\) associated with the problem \((\mathcal{P}_{\lambda})\) is given as
\[J^{a}_{\lambda}(u)=\frac{1}{2}\|u\|^{2}-\lambda\int\limits_{\Omega}H(u)-\frac{\lambda}{2\cdot 2^{*}_{\mu}}\int\limits_{\Omega}\int\limits_{\Omega}\frac{|u(y)|^{2^{*}_{\mu}}|u(x)|^{2^{*}_{\mu}}}{|x-y|^{\mu}}dxdy,\]
where we take
\[H(u)=\begin{cases}0&\text{ if }u\leq 0,\\ (1-\gamma)^{-1}u^{1-\gamma}&\text{ if }0<u<\frac{a}{2},\\ (1-\gamma)^{-1}(a/2)^{1-\gamma}+\int\limits_{a/2}^{u}\chi_{\{t<a\}}t^{-\gamma }dt&\text{ if }u\geq a/2,\end{cases}\]
for \(\gamma>0\), \(\gamma\neq 1\) and for \(\gamma=1\) we replace the terms of the form \((1-\gamma)^{-1}x^{1-\gamma}\) in the above definition with the term \(\log x\) i.e.,
\[H(u)=\begin{cases}0&\text{ if }u\leq 0,\\ \log u&\text{ if }0<u<\frac{a}{2},\\ \log(a/2)+\int\limits_{a/2}^{u}\chi_{\{t<a\}}t^{-1}dt&\text{ if }u\geq a/2.\end{cases}\]
**Definition 2.3**: _For \(0<\gamma<\infty\) we define \(\phi_{\gamma}\) as follows:_
\[\phi_{\gamma}=\begin{cases}e_{1}&0<\gamma<1,\\ e_{1}(-\log e_{1})^{\frac{1}{2}}&\gamma=1,\\ e_{1}^{\frac{2}{\gamma+1}}&1<\gamma,\end{cases}\]
_where \(e_{1}\) is the first positive eigenfunction of \(-\Delta\) on \(H^{1}_{0}(\Omega)\) with \(|e_{1}|_{\infty}\) fixed as a number less than \(1\)._
**Remark 2.4**: _If \(0<\gamma<3\), by an application of Hardy's inequality it follows that \(u^{-\gamma}\psi\in L^{1}(\Omega)\) if \(\psi\in H^{1}_{0}(\Omega)\) and \(u\geq M\phi_{\gamma}\) in \(\Omega\), where \(M>0\) is a constant. In particular, if \(u\geq M\phi_{\gamma}\), then (2.1) holds for all \(\psi\in H^{1}_{0}(\Omega)\)._
Now we recall the Hardy-Littlewood-Sobolev inequality, which is the foundation of the study of problems involving Choquard nonlinearities:
**Proposition 2.5**: _Hardy-Littlewood-Sobolev inequality: Let \(r,q>1\) and \(0<\mu<n\) with \(1/r+1/q+\mu/n=2\), \(g\in L^{r}(\mathbb{R}^{n}),h\in L^{q}(\mathbb{R}^{n})\). Then, there exists a sharp constant \(C(r,q,n,\mu)\) independent of \(g\) and \(h\) such that_
\[\int\limits_{\mathbb{R}^{n}}\int\limits_{\mathbb{R}^{n}}\frac{g(x)h(y)}{|x-y| ^{\mu}}dxdy\leq C(r,q,n,\mu)|g|_{r}|h|_{q}.\]
In particular, letting \(g=h=|u|^{p}\), by the Hardy-Littlewood-Sobolev inequality we see that
\[\int\limits_{\mathbb{R}^{n}}\int\limits_{\mathbb{R}^{n}}\frac{|u(x)|^{p}|u(y)|^{p}}{|x-y|^{\mu}}dxdy\]
is well defined if \(|u|^{p}\in L^{\nu}(\mathbb{R}^{n})\) with \(\nu=\frac{2n}{2n-\mu}>1\). Thus, from Sobolev embedding theorems, we must have
\[\frac{2n-\mu}{n}\leq p\leq\frac{2n-\mu}{n-2}.\]
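Indeed, spelled out, \(|u|^{p}\in L^{\nu}(\mathbb{R}^{n})\) means \(u\in L^{p\nu}(\mathbb{R}^{n})\), and the Sobolev embedding \(H^{1}(\mathbb{R}^{n})\hookrightarrow L^{q}(\mathbb{R}^{n})\) holds for \(2\leq q\leq 2^{*}=\frac{2n}{n-2}\); hence

\[2\leq p\nu\leq 2^{*}\quad\Longleftrightarrow\quad\frac{2}{\nu}\leq p\leq\frac{2^{*}}{\nu}\quad\Longleftrightarrow\quad\frac{2n-\mu}{n}\leq p\leq\frac{2n-\mu}{n-2},\qquad\nu=\frac{2n}{2n-\mu},\]

which gives exactly the two endpoints above.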
From this, for \(u\in L^{2^{*}}(\mathbb{R}^{n})\) we have
\[\left(\int\limits_{\mathbb{R}^{n}}\int\limits_{\mathbb{R}^{n}}\frac{|u(x)|^{2 _{\mu}^{*}}|u(y)|^{2_{\mu}^{*}}}{|x-y|^{\mu}}dxdy\right)^{\frac{1}{2_{\mu}^{*} }}\leq C(n,\mu)^{\frac{1}{2_{\mu}^{*}}}|u|^{2}_{2^{*}}.\]
We fix \(S_{H,L}\) to denote the best constant associated to the Hardy-Littlewood-Sobolev inequality, i.e.,
\[S_{H,L}=\inf\limits_{u\in C_{0}^{\infty}(\mathbb{R}^{n})\setminus\{0\}}\frac {\left\|\nabla u\right\|^{2}_{L^{2}(\mathbb{R}^{n})}}{\left\|u\right\|^{2}_{ HL}}.\]
Now the following lemma plays a crucial role in the sequel:
**Lemma 2.6**: _The constant \(S_{H,L}\) is achieved if and only if_
\[u=C\left(\frac{b}{b^{2}+|x-d|^{2}}\right)^{\frac{n-2}{2}},\]
_where \(C>0\) is a fixed constant, \(d\in\mathbb{R}^{n}\) and \(b\in(0,\infty)\) are parameters. Moreover,_
\[S=C(n,\mu)^{\frac{n-2}{2n-\mu}}S_{H,L}.\]
## 3 The purely singular Discontinuous problem
In order to prove the existence results for \((\mathcal{P}_{\lambda})\), we translate the problem by the minimal solution to the purely singular problem:
\[(\mathcal{S}_{\lambda})\left\{-\Delta u=\lambda\chi_{\{u<a\}}u^{-\gamma},\ u>0 \ \mbox{in}\ \Omega,\ u=0\ \mbox{on}\ \partial\Omega.\right.\]
We first study the existence of weak solutions to \((\mathcal{S}_{\lambda})\). We have the following result in this direction
**Proposition 3.1**: _There exists a weak minimal solution \(u_{\lambda}\) of \((\mathcal{S}_{\lambda})\) for any \(\gamma>0\). Furthermore, we have \(u_{\lambda}\sim\phi_{\gamma}\) near \(\partial\Omega\). Moreover regularity results as stated in Theorem 1.2 hold._
The main idea is to solve an approximating regular problem that admits a unique solution which is a strict subsolution to \((\mathcal{S}_{\lambda})\), and then pass to the limit. The approximating regular problem is obtained by replacing \(\chi_{\{u<a\}}\) with the continuous function \(\chi_{\epsilon}(u-a)\) where
\[\chi_{\epsilon}(t)=\chi_{(-\infty,-\epsilon)}(t)-t\epsilon^{-1}\chi_{[- \epsilon,0)}(t),\ t\in\mathbb{R}. \tag{3.1}\]
So we consider the following problem
\[(\mathcal{S}_{\lambda,\epsilon})\begin{cases}-\Delta v=\lambda\chi_{\epsilon} (v-a)v^{-\gamma}\ \mbox{in}\ \Omega,\\ v\equiv 0\ \mbox{on}\ \partial\Omega,\ v>0\ \mbox{in}\ \Omega.\end{cases}\]
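For illustration only (not part of the original argument), a minimal Python sketch of the regularization (3.1) and of the right-hand side of \((\mathcal{S}_{\lambda,\epsilon})\):

```python
import numpy as np

def chi_eps(t, eps):
    """Continuous, nonincreasing regularization (3.1), with values in [0, 1];
    chi_eps(., eps) -> indicator of (-infinity, 0) pointwise as eps -> 0."""
    t = np.asarray(t, dtype=float)
    return np.where(t < -eps, 1.0, np.where(t < 0.0, -t / eps, 0.0))

def rhs(v, a, lam, eps, gamma):
    """Regularized right-hand side lambda * chi_eps(v - a) * v^(-gamma)."""
    v = np.asarray(v, dtype=float)
    return lam * chi_eps(v - a, eps) * v ** (-gamma)
```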
We have the following existence results concerning \((\mathcal{S}_{\lambda,\epsilon})\).
**Proposition 3.2**: _We have_
* _Let_ \(0<\gamma<3\)_. Then there exists an_ \(\epsilon_{0}=\epsilon_{0}(a)\) _such that for_ \(0<\epsilon<\epsilon_{0}\) _there exists a unique solution_ \(u_{\lambda,\epsilon}\in H^{1}_{0}(\Omega)\cap L^{\infty}(\Omega)\) _of the problem_ \((\mathcal{S}_{\lambda,\epsilon})\)_. Also the map_ \(\epsilon\to u_{\lambda,\epsilon}\) _is nonincreasing. Moreover, we have_ \(u_{\lambda,\epsilon}\sim\phi_{\gamma}\) _for any_ \(\epsilon\in(0,\epsilon_{0})\)_._
* _Let_ \(\gamma\geq 3\)_. Then there exists a unique solution_ \(u_{\lambda,\epsilon}\in H^{1}_{loc}(\Omega)\) _of_ \((\mathcal{S}_{\lambda,\epsilon})\) _such that for any compact_ \(K\Subset\Omega\)_, there exists_ \(M(K)>0\) _which satisfies_ \(u_{\lambda,\epsilon}\geq M(K)>0\) _a.e. in_ \(K\)_. Moreover_ \[u_{\lambda,\epsilon}\mbox{ is uniformly bounded in }L^{\infty}(\Omega),\] (3.2) _and_ \[u^{\ell}_{\lambda,\epsilon}\mbox{ is uniformly bounded in }H^{1}_{0}(\Omega)\mbox{ with }\ell>\frac{\gamma+1}{4}.\] (3.3)
_Also the map \(\epsilon\to u_{\lambda,\epsilon}\) is nonincreasing and \(u_{\lambda,\epsilon}\sim\phi_{\gamma}\) independent of \(\epsilon\)._
We will consider the above two cases separately. First we consider the case when \(0<\gamma<3\). In this case, the proof is based on the classical sub-supersolution method in a variational setting called Perron's method. In this direction, inspired by Haitao [19], we establish the following result.
**Lemma 3.3**: _Let \(0<\epsilon<\frac{a}{2}\). Suppose that \(\underline{u},\ \overline{u}\in H^{1}_{0}(\Omega)\cap L^{\infty}(\Omega)\) are weak subsolution and supersolution of \((\mathcal{S}_{\lambda,\epsilon})\) respectively, such that \(0<\underline{u}\leq\overline{u}\) in \(\Omega\) and \(\underline{u}\geq M(K)>0\) for every \(K\Subset\Omega\), for some constant \(M(K)\). Then there exists a solution \(u\in H^{1}_{0}(\Omega)\cap L^{\infty}(\Omega)\) of \((\mathcal{S}_{\lambda,\epsilon})\) satisfying \(\underline{u}\leq u\leq\overline{u}\) in \(\Omega\)._
**Proof.** For the sake of clarity, we give only the proof for \(\gamma\neq 1\). We introduce the energy functional associated to \((\mathcal{S}_{\lambda,\epsilon})\):
\[S_{\lambda}^{a,\epsilon}(u)=\frac{1}{2}\|u\|^{2}-\lambda\int\limits_{\Omega}H_ {\epsilon}(u)dx,\]
where
\[H_{\epsilon}(t)=\begin{cases}0&\mbox{ if }u\leq 0,\\ (1-\gamma)^{-1}t^{1-\gamma}&\mbox{ if }0<t<\frac{a}{2},\\ (1-\gamma)^{-1}(a/2)^{1-\gamma}+\int\limits_{a/2}^{t}\chi_{\epsilon}(s-a)s^{- \gamma}dt&\mbox{ if }t\geq a/2.\end{cases} \tag{3.4}\]
Observe that the map \(\chi_{\epsilon}(t)\) takes values in \([0,1]\), is continuous and nonincreasing. Also we see that the map \(t\mapsto\chi_{\epsilon}(t-a)t^{-\gamma}\) is nonincreasing on \((0,\infty)\). Concerning \(H_{\epsilon}(t)\), we observe that \(H_{\epsilon}(t)\leq(1-\gamma)^{-1}t^{1-\gamma}\) for \(t>0\). Now let us consider the conical shell set defined as:
\[\mathcal{M}=\{v\in H^{1}_{0}(\Omega):\underline{u}\leq v\leq\overline{u}\mbox { in }\Omega\}.\]
Clearly, \(\mathcal{M}\neq\emptyset\) is closed and convex. Therefore \(S_{\lambda}^{a,\epsilon}\) is weakly sequentially lower semicontinuous over \(\mathcal{M}\). Indeed, it is enough to show that \(S_{\lambda}^{a,\epsilon}\) is sequentially lower semicontinuous. Let \(\{v_{k}\}_{k\in\mathbb{N}}\subset\mathcal{M}\) be such that \(v_{k}\to v\) in \(H^{1}_{0}(\Omega)\). Now observe that
\[|H_{\epsilon}(v_{k})|\leq\begin{cases}\big{(}(1-\gamma)^{-1}(a/2)^{1-\gamma} \big{)}^{-}+|(1-\gamma)^{-1}|\underline{u}^{1-\gamma}&\mbox{ if }\gamma>1,\\ \big{(}(1-\gamma)^{-1}(a/2)^{1-\gamma}\big{)}^{-}+|(1-\gamma)^{-1}|\overline{ u}^{1-\gamma}&\mbox{ if }0<\gamma<1.\end{cases}\]
Using the fact that \(0<\underline{u}\leq\overline{u}\in H^{1}_{0}(\Omega)\cap L^{\infty}(\Omega)\) and that \(\Omega\) is bounded, we conclude that the sequence \(|H_{\epsilon}(v_{k})|\) is bounded by an \(L^{1}(\Omega)\) function (thanks to \(\gamma<3\)). Thus, by using the dominated convergence theorem and the continuity of the norm, we obtain the required claim. Therefore there exists \(u\in\mathcal{M}\) such that
\[S^{\epsilon,a}_{\lambda}(u)=\inf_{v\in\mathcal{M}}S^{\epsilon,a}_{\lambda}(v).\]
Next, we show that \(u\) is the required weak solution of \((\mathcal{S}_{\lambda,\epsilon})\). For this, let \(\varphi\in C^{\infty}_{c}(\Omega)\) and \(\kappa>0\). We define
\[\eta_{\kappa}=\begin{cases}\overline{u}&\text{if }u+\kappa\varphi\geq \overline{u},\\ u+\kappa\varphi&\text{if }\underline{u}\leq u+\kappa\varphi\leq\overline{u},\\ \underline{u}&\text{if }u+\kappa\varphi\leq\underline{u}.\end{cases}\]
Observe that \(\eta_{\kappa}=u+\kappa\varphi-\varphi^{\kappa}+\varphi_{\kappa}\in\mathcal{M}\), where \(\varphi^{\kappa}=(u+\kappa\varphi-\overline{u})^{+}\) and \(\varphi_{\kappa}=(u+\kappa\varphi-\underline{u})^{-}\). Since \(u\) is a minimizer of \(S^{\epsilon,a}_{\lambda}\) over \(\mathcal{M}\), we have
\[0\leq \lim_{t\to 0}\frac{S^{\epsilon,a}_{\lambda}(u+t(\eta_{\kappa}-u))-S^{ \epsilon,a}_{\lambda}(u)}{t}\] \[= \int_{\Omega}\nabla u\nabla(\eta_{\kappa}-u)dx-\lambda\int_{ \Omega}(\eta_{\kappa}-u)\chi_{\epsilon}(u-a)u^{-\gamma}dx. \tag{3.5}\]
Using the definition of \(\eta_{\kappa}\), \(\varphi_{\kappa}\) and \(\varphi^{\kappa}\), from (3.5), we have
\[\int_{\Omega}\nabla u\nabla\varphi dx-\lambda\int_{\Omega}\chi_{\epsilon}(u- a)u^{-\gamma}\varphi dx\geq\frac{1}{\kappa}(\mathcal{G}^{\kappa}-\mathcal{G}_{ \kappa}), \tag{3.6}\]
where
\[\mathcal{G}^{\kappa}=\int_{\Omega}\nabla u\nabla\varphi^{\kappa}dx-\lambda \int_{\Omega}\chi_{\epsilon}(u-a)u^{-\gamma}\varphi^{\kappa}dx\]
and
\[\mathcal{G}_{\kappa}=\int_{\Omega}\nabla u\nabla\varphi_{\kappa}dx-\lambda \int_{\Omega}\chi_{\epsilon}(u-a)u^{-\gamma}\varphi_{\kappa}dx.\]
Next we will give estimates of \(\mathcal{G}^{\kappa}\) and \(\mathcal{G}_{\kappa}\). First we give:
**Estimate of \(\mathcal{G}^{\kappa}\)**: Setting \(\Omega^{\kappa}=\{\varphi^{\kappa}>0\}\), we have
\[\frac{1}{\kappa}\int_{\Omega}\nabla u\nabla\varphi^{\kappa}= \frac{1}{\kappa}\int_{\Omega}\nabla(u-\overline{u})\nabla\varphi ^{\kappa}dx+\frac{1}{\kappa}\int_{\Omega}\nabla\overline{u}\nabla\varphi^{ \kappa}dx\] \[= \frac{1}{\kappa}\int_{\Omega^{\kappa}}\nabla(u-\overline{u}) \nabla(u+\kappa\varphi-\overline{u})dx+\frac{1}{\kappa}\int_{\Omega}\nabla \overline{u}\nabla\varphi^{\kappa}dx\]
\[\geq \int_{\Omega^{\kappa}}\nabla(u-\overline{u})\nabla\varphi dx+\frac{1}{ \kappa}\int_{\Omega}\nabla\overline{u}\nabla\varphi^{\kappa}dx=o(1)+\frac{1} {\kappa}\int_{\Omega}\nabla\overline{u}\nabla\varphi^{\kappa}dx. \tag{3.7}\]
Using (3.7) and employing the facts that \(\overline{u}\) is a supersolution of \((\mathcal{S}_{\lambda,\epsilon})\), that \(t\mapsto\chi_{\epsilon}(t-a)t^{-\gamma}\) is nonincreasing and that \((\chi_{\epsilon}(\overline{u}-a)\overline{u}^{-\gamma}-\chi_{\epsilon}(u-a)u^{-\gamma})\varphi\in L^{1}(\Omega)\), together with the dominated convergence theorem we obtain the following
\[\frac{1}{\kappa}\mathcal{G}^{\kappa}\geq o(1)+\frac{1}{\kappa}\left(\int\limits_{\Omega}\nabla\overline{u} \nabla\varphi^{\kappa}dx-\lambda\int\limits_{\Omega}\chi_{\epsilon}(u-a)u^{- \gamma}\varphi^{\kappa}dx\right)\] \[= o(1)+\frac{1}{\kappa}\left(\int\limits_{\Omega}\nabla\overline{ u}\nabla\varphi^{\kappa}dx-\lambda\int\limits_{\Omega}\chi_{\epsilon}( \overline{u}-a)\overline{u}^{-\gamma}\varphi^{\kappa}dx\right)\] \[+\frac{\lambda}{\kappa}\left(\int\limits_{\Omega}\chi_{\epsilon}( \overline{u}-a)\overline{u}^{-\gamma}\varphi^{\kappa}dx-\int\limits_{\Omega} \chi_{\epsilon}(u-a)u^{-\gamma}\varphi^{\kappa}dx\right)\] \[\geq o(1)+\frac{\lambda}{\kappa}\left(\int\limits_{\Omega^{n}}(\chi_{ \epsilon}(\overline{u}-a)\overline{u}^{-\gamma}-\chi_{\epsilon}(u-a)u^{- \gamma})(u-\overline{u})dx\right)\] \[+\lambda\int\limits_{\Omega^{n}}\left(\chi_{\epsilon}(\overline{ u}-a)\overline{u}^{-\gamma}-\chi_{\epsilon}(u-a)u^{-\gamma}\right)\varphi dx\geq o(1). \tag{3.8}\]
In the similar fashion, we see that
\[\frac{1}{\kappa}\mathcal{G}_{\kappa}\leq o(1). \tag{3.9}\]
Using (3.8) and (3.9), we conclude from (3.6)
\[0\leq\int\limits_{\Omega}\nabla u\nabla\varphi dx-\lambda\int\limits_{\Omega} \chi_{\epsilon}(u-a)u^{-\gamma}\varphi dx,\]
and the claim follows using the arbitrariness of \(\varphi\) (replacing \(\varphi\) by \(-\varphi\) gives the reverse inequality).
**Proof of Proposition 3.2** (1)**:** In view of Lemma 3.3, we construct an ordered pair of sub- and supersolution \(\underline{u}\) and \(\overline{u}\), respectively, of \((\mathcal{S}_{\lambda,\epsilon})\) for \(\epsilon\) small enough. Choose \(0<\epsilon_{0}<a\). Then for \(0<\epsilon<\epsilon_{0}\), we see that \(\chi_{\epsilon}(t-a)t^{-\gamma}\to\infty\) uniformly as \(t\to 0\). Thus we can find \(\theta>0\) sufficiently small so that
\[\lambda_{1}\theta|e_{1}|_{\infty}\leq\lambda\chi_{\epsilon}(\theta|e_{1}|_{ \infty}-a)(\theta|e_{1}|_{\infty})^{-\gamma}.\]
Now using the fact that \(\chi_{\epsilon}(t-a)t^{-\gamma}\) is nonincreasing, we obtain by taking \(\underline{u}=\theta e_{1}\) the following
\[-\Delta\underline{u}=\lambda_{1}(\theta e_{1})\leq\lambda\chi_{\epsilon}( \theta|e_{1}|_{\infty}-a)(\theta|e_{1}|_{\infty})^{-\gamma}\leq\lambda\chi_{ \epsilon}(\underline{u}-a)\underline{u}^{-\gamma}.\]
For the supersolution, we take \(\overline{u}=\hat{u}\), where \(\hat{u}\) is the solution of the purely singular problem (replacing the discontinuous term by 1). Finally, we choose \(\theta\) small enough so that \(0<\underline{u}\leq\overline{u}\) a.e. in \(\Omega\). Applying Lemma 3.3 we obtain the existence of \(u_{\lambda,\epsilon}\). It is easy to see that \(u_{\lambda,\epsilon}\sim\phi_{\gamma}\). Uniqueness of \(u_{\lambda,\epsilon}\) for \(\gamma<3\) follows from the nonincreasing nature of \(t\to\chi_{\epsilon}(t-a)t^{-\gamma}\) on \((0,+\infty)\). Finally, we prove that the map \(\epsilon\to u_{\lambda,\epsilon}\) is nonincreasing. For this, let \(\epsilon_{1}<\epsilon_{2}\) and \(u_{\lambda,\epsilon_{1}},\ u_{\lambda,\epsilon_{2}}\) be the corresponding solutions. Observe that for \(0<\epsilon_{1}<\epsilon_{2}\), \(u_{\lambda,\epsilon_{1}}\) is a supersolution for \((\mathcal{S}_{\lambda,\epsilon_{2}})\); from uniqueness and Lemma 3.3 we get \(u_{\lambda,\epsilon_{2}}\leq u_{\lambda,\epsilon_{1}}\). \(\square\)
Next we consider the case \(\gamma\geq 3\). We first show the validity of a weak comparison principle which will be used to get the uniqueness of the solution. Here we adopt the ideas developed in [9].
We define the real valued function \(g_{k}(\tau)\) by
\[g_{k}(\tau)=\begin{cases}\max\{-\lambda\chi_{\epsilon}(\tau-a)\tau^{-\gamma},-k \}&\text{ if }\tau>0,\\ -k&\text{ if }\tau\leq 0.\end{cases}\]
Now consider the real valued function \(G_{k}(\tau)\) defined by
\[\begin{cases}G_{k}^{\prime}(\tau)=g_{k}(\tau),\\ G_{k}(1)=0.\end{cases}\]
Finally, we define the functional \(\Phi_{k}:H^{1}_{0}(\Omega)\to[-\infty,+\infty]\) by
\[\Phi_{k}(v)=\frac{1}{2}\|v\|^{2}+\int\limits_{\Omega}G_{k}(v)dx,\ v\in H^{1}_ {0}(\Omega).\]
Let \(u\) be a fixed supersolution of \((\mathcal{S}_{\lambda,\epsilon})\) and consider \(w\) as the minimum of the functional \(\Phi_{k}\) on the convex set
\[\mathcal{M}:=\{v\in H^{1}_{0}(\Omega):0\leq v\leq u\ \text{ a.e. in }\Omega\}.\]
By [23], it follows that
\[\int\limits_{\Omega}\nabla w\nabla(v-w)dx\geq-\int\limits_{\Omega}G_{k}^{ \prime}(w)(v-w)dx\text{ for }v\in w+(H^{1}_{0}(\Omega)\cap L^{\infty}_{c}(\Omega))\text{ and }0\leq v\leq u, \tag{3.10}\]
where \(L^{\infty}_{c}(\Omega)\) denotes the space of \(L^{\infty}\) functions with compact support in \(\Omega\).
**Lemma 3.4**: _We have that_
\[\int\limits_{\Omega}\nabla w\nabla vdx\geq-\int\limits_{\Omega}G_{k}^{\prime} (w)v\text{ for }v\in C^{\infty}_{c}(\Omega)\text{ with }v\geq 0\ \text{ in }\Omega. \tag{3.11}\]
**Proof.** To prove this let us consider \(\psi\in C^{\infty}_{c}(\mathbb{R})\) with \(0\leq\psi\leq 1\) for \(t\in\mathbb{R}\), \(\psi(t)=1\) for \(t\in[-1,1]\) and \(\psi(t)=0\) for \(t\in(-\infty,-2]\cup[2,\infty)\). Then for any \(\varphi\in C^{\infty}_{c}(\Omega)\) with \(\varphi\geq 0\) in \(\Omega\), we set
\[\varphi_{k}:=\psi\left(\frac{w}{k}\right)\varphi,\ \varphi_{k,t}=\min\{w+t \varphi_{k},u\},\]
with \(k\geq 1\) and \(t>0\). We have that \(\varphi_{k,t}\in w+(H^{1}_{0}(\Omega)\cap L^{\infty}_{c}(\Omega))\) and \(w\leq\varphi_{k,t}\leq u\), so that by (3.10) we have
\[\int\limits_{\Omega}\nabla w\nabla(\varphi_{k,t}-w)dx\geq\int\limits_{\Omega }G_{k}^{\prime}(w)(\varphi_{k,t}-w)dx. \tag{3.12}\]
Now using (3.12), we have
\[\|\varphi_{k,t}-w\|^{2}+\int\limits_{\Omega}\left(G_{k}^{\prime}(\varphi_{k,t })-G_{k}(w)\right)(\varphi_{k,t}-w)dx\]
\[\int\limits_{\Omega}\nabla w\nabla\varphi_{k}dx+\int\limits_{ \Omega}G^{\prime}_{k}(w)\varphi_{k}dx\geq 0.\]
Finally, the proof follows by letting \(k\) tend to infinity.
Next we prove a weak comparison principle from which the uniqueness of the solution follows. Precisely, we have:
**Theorem 3.5**: _Let \(\gamma>0\), \(v\) be a subsolution to \((\mathcal{S}_{\lambda,\epsilon})\) such that \((v-\nu)^{+}\in H^{1}_{0}(\Omega)\) for every \(\nu>0\) and let \(u\) be a supersolution to \((\mathcal{S}_{\lambda,\epsilon})\). Then \(v\leq u\) a.e. in \(\Omega\)._
**Proof.** Let \(w\) be as in Lemma 3.4. Since \(w\in H^{1}_{0}(\Omega)\) is nonnegative, for any \(\nu>0\), \(\operatorname{supp}(v-w-\nu)^{+}\) is contained in \(\operatorname{supp}(v-\nu)^{+}\). From this, we conclude that
\[(v-w-\nu)^{+}\in H^{1}_{0}(\Omega)\text{ for any }\nu>0.\]
Now using standard density arguments, we obtain from (3.11) that
\[\int\limits_{\Omega}\nabla w\nabla K_{\tau}((v-w-\nu)^{+})dx\geq\int\limits_{ \Omega}G^{\prime}_{k}(w)K_{\tau}((v-w-\nu)^{+})dx \tag{3.16}\]
where \(K_{\tau}(t):=\min\{t,\tau\}\) for \(t\geq 0\) and \(K_{\tau}(t):=-K_{\tau}(-t)\) for \(t<0\). Let now \(\psi_{k}\in C^{\infty}_{c}(\Omega)\) be such that \(\psi_{k}\to(v-w-\nu)^{+}\) in \(H^{1}_{0}(\Omega)\) and set
\[\widetilde{\psi}_{\tau,k}:=K_{\tau}(\min\{(v-w-\nu)^{+},\psi_{k}^{+}\}).\]
It follows that \(\widetilde{\psi}_{\tau,k}\in H^{1}_{0}(\Omega)\cap L^{\infty}_{c}(\Omega)\) and by a density argument
\[\int\limits_{\Omega}\nabla v\nabla K_{\tau}((v-w-\nu)^{+})dx\leq\int\limits_{ \Omega}\lambda\frac{\chi_{\epsilon}(v-a)}{v^{\gamma}}\widetilde{\psi}_{\tau,k }dx.\]
Passing to the limit as \(k\to\infty\), we obtain
\[\int\limits_{\Omega}\nabla v\nabla K_{\tau}((v-w-\nu)^{+})dx\leq\int\limits_{ \Omega}\lambda\frac{\chi_{\epsilon}(v-a)}{v^{\gamma}}K_{\tau}((v-w-\nu)^{+})dx. \tag{3.17}\]
Choosing \(\nu>0\) such that \(\nu^{-\gamma}<k\), from (3.16) and (3.17) we deduce that
\[\|K_{\tau}((v-w-\nu)^{+})\|^{2}\leq \int\limits_{\Omega}\left(\lambda\frac{\chi_{\epsilon}(v-a)}{v^{ \gamma}}+G^{\prime}_{k}(w)\right)K_{\tau}((v-w-\nu)^{+})dx\] \[\leq \int\limits_{\Omega}(-G^{\prime}_{k}(v)+G^{\prime}_{k}(w))K_{ \tau}((v-w-\nu)^{+})dx\leq 0.\]
By the arbitrariness of \(\tau\) we deduce that
\[v\leq w+\nu\leq u+\nu\text{ a.e. in }\Omega\]
and the conclusion follows letting \(\nu\to 0\). \(\square\)
Regarding the existence of solutions, we use the classical approach of regularizing the singular nonlinearity \(u^{-\gamma}\) by \(\left(u+\frac{1}{k}\right)^{-\gamma}\) and derive uniform a priori estimates for the weak solution of the regularized problem. More precisely, we study the following approximated problem
\[(\mathcal{S}_{\lambda,\epsilon,k})\begin{cases}-\Delta u=\lambda \frac{\chi_{\epsilon}(u-a)}{(u+\frac{1}{k})^{\gamma}},&u>0\quad\text{ in }\Omega,\\ u=0&\text{ on }\partial\Omega.\end{cases}\]
**Lemma 3.6**: _For any \(k\in\mathbb{N}\backslash\{0\}\) and \(\gamma>0\), there exists a unique nonnegative weak solution \(u_{\lambda,k,\epsilon}\in H^{1}_{0}(\Omega)\) of the problem \((\mathcal{S}_{\lambda,\epsilon,k})\) in the sense that_
\[\int\limits_{\Omega}\nabla u_{\lambda,k,\epsilon}\nabla vdx= \lambda\int\limits_{\Omega}\frac{\chi_{\epsilon}(u_{\lambda,k,\epsilon}-a)}{( u_{\lambda,k,\epsilon}+\frac{1}{k})^{\gamma}}vdx\text{ for all }v\in H^{1}_{0}(\Omega). \tag{3.18}\]
_Moreover,_
* _The solution_ \(u_{\lambda,k,\epsilon}\in C^{1,\alpha}(\overline{\Omega})\) _for every_ \(\alpha\in(0,1)\) _and_ \(u_{\lambda,k,\epsilon}>0\) _in_ \(\Omega\)_._
* _The sequence_ \(\{u_{\lambda,k,\epsilon}\}_{k\in\mathbb{N}}\) _is monotonically increasing in the sense that_ \(u_{\lambda,k+1,\epsilon}\geq u_{\lambda,k,\epsilon}\) _for all_ \(k\in\mathbb{N}\)_._
* _For every compact set_ \(K\Subset\Omega\) _and_ \(k\in\mathbb{N}\)_, there exists a constant_ \(M(K)>0\) _independent of_ \(k\) _such that_ \(u_{\lambda,k,\epsilon}\geq M(K)>0\)_._
* \(u_{\lambda,k,\epsilon}\) _is uniformly bounded in_ \(L^{\infty}(\Omega)\) _both in_ \(k\) _and_ \(\epsilon\)_._
* \(u_{\lambda,k,\epsilon}^{\omega}\) _is bounded in_ \(H^{1}_{0}(\Omega)\) _with_ \(\omega>\frac{\gamma+1}{4}\)_, independently of both_ \(k\) _and_ \(\epsilon\)_._
Proof.: The proofs of parts \(i)\), \(ii)\) and \(iii)\) are standard and hence omitted. We first give the proof of part \(v)\); the \(L^{\infty}\) estimate of part \(iv)\) is proved at the end. Since \(u_{\lambda,k,\epsilon}\in L^{\infty}(\Omega)\cap H^{1}_{0}(\Omega)\) and positive, for any \(\nu>0\) and \(\omega>0\), \((u_{\lambda,k,\epsilon}+\nu)^{\omega}-\nu^{\omega}\) belongs to \(H^{1}_{0}(\Omega)\) and so by taking it as a test function in (3.18) with \(\nu\in(0,\frac{1}{k})\) and \(\omega\in[\gamma,\infty)\), we obtain
\[\int\limits_{\Omega}\nabla u_{\lambda,k,\epsilon}\nabla(u_{ \lambda,k,\epsilon}+\nu)^{\omega}dx\leq\lambda\int\limits_{\Omega}\frac{\chi_ {\epsilon}(u_{\lambda,k,\epsilon}(x)-a)}{(u_{\lambda,k,\epsilon}+\frac{1}{k})^ {\gamma}}(u_{\lambda,k,\epsilon}+\nu)^{\omega}dx\leq\lambda\int\limits_{ \Omega}(u_{\lambda,k,\epsilon}+\nu)^{\omega-\gamma}dx. \tag{3.19}\]
Passing \(\nu\to 0\) in (3.19) via Fatou's Lemma, we obtain
\[\frac{4\omega}{(\omega+1)^{2}}\int\limits_{\Omega}|\nabla u_{ \lambda,k,\epsilon}^{\frac{\omega+1}{2}}|^{2}dx\leq\lambda\int\limits_{ \Omega}(u_{\lambda,k,\epsilon})^{\omega-\gamma}dx. \tag{3.20}\]
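For the reader's convenience, we record the elementary chain-rule identity behind the left-hand side of (3.20): for a nonnegative \(u\in H^{1}_{0}(\Omega)\cap L^{\infty}(\Omega)\) and \(\omega>0\),

\[\int\limits_{\Omega}\nabla u\nabla u^{\omega}dx=\omega\int\limits_{\Omega}u^{\omega-1}|\nabla u|^{2}dx=\frac{4\omega}{(\omega+1)^{2}}\int\limits_{\Omega}\left|\nabla u^{\frac{\omega+1}{2}}\right|^{2}dx.\]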
In order to estimate the right-hand side of (3.20), in accordance with the boundary behaviour of \(u_{\lambda,k,\epsilon}\), we choose \(\omega\) satisfying:
\[\omega>\frac{\gamma-1}{2}.\]
Now, using the boundary behaviour of \(u_{\lambda,k,\epsilon}\),
\[\int\limits_{\Omega}u_{\lambda,k,\epsilon}^{\omega-\gamma}dx\leq M\int\limits_{\Omega}e_{1}^{\frac{2(\omega-\gamma)}{\gamma+1}}dx<\infty, \tag{3.21}\]
since \(\frac{2(\omega-\gamma)}{\gamma+1}>-1\). Combining (3.20) and (3.21), we obtain the required conclusion.
We now turn our attention to the \(L^{\infty}\) estimate (assertion \(iv)\)). For this, fix \(p>\frac{n}{2}\). Now take \(\phi_{m}(u_{\lambda,k,\epsilon}):=(u_{\lambda,k,\epsilon}-m)^{+}\) with \(m\geq 1\) as a test function in (3.18); using the Sobolev embedding and Hölder's inequality, we get
\[\left(\int\limits_{T_{m}}|\phi_{m}(u_{\lambda,k,\epsilon})|^{\frac{2n}{n-2}}\right)^{\frac{n-2}{n}}\leq M\int\limits_{T_{m}}|\nabla\phi_{m}(u_{\lambda,k,\epsilon})|^{2}dx=M\int\limits_{\Omega}\nabla u_{\lambda,k,\epsilon}\nabla\phi_{m}(u_{\lambda,k,\epsilon})dx\] \[\leq M\int\limits_{\Omega}\frac{\chi_{\epsilon}(u_{\lambda,k,\epsilon}-a)}{(u_{\lambda,k,\epsilon}+\frac{1}{k})^{\gamma}}\phi_{m}(u_{\lambda,k,\epsilon})dx\leq M\int\limits_{T_{m}}\chi_{\epsilon}(u_{\lambda,k,\epsilon}-a)\phi_{m}(u_{\lambda,k,\epsilon})dx\] \[\leq M|\chi_{\epsilon}(u_{\lambda,k,\epsilon}-a)|_{p}\left(\int\limits_{T_{m}}|\phi_{m}(u_{\lambda,k,\epsilon})|^{\frac{2n}{n-2}}\right)^{\frac{n-2}{2n}}|T_{m}|^{1-\frac{n-2}{2n}-\frac{1}{p}}\] \[\leq M|\Omega|^{\frac{1}{p}}\left(\int\limits_{T_{m}}|\phi_{m}(u_{\lambda,k,\epsilon})|^{\frac{2n}{n-2}}\right)^{\frac{n-2}{2n}}|T_{m}|^{1-\frac{n-2}{2n}-\frac{1}{p}} \tag{3.22}\]
where \(T_{m}:=\{x\in\Omega:u_{\lambda,k,\epsilon}\geq m\}\). Let \(j>m\geq 1\); then \(T_{j}\subset T_{m}\) and \(\phi_{m}(u_{\lambda,k,\epsilon})\geq j-m\) for \(x\in T_{j}\). Using the above facts, from (3.22) we obtain
\[(j-m)|T_{j}|^{\frac{n-2}{2n}}\leq\left(\int\limits_{T_{j}}|\phi_{m}(u_{\lambda,k,\epsilon})|^{\frac{2n}{n-2}}\right)^{\frac{n-2}{2n}}\leq\left(\int\limits_{T_{m}}|\phi_{m}(u_{\lambda,k,\epsilon})|^{\frac{2n}{n-2}}\right)^{\frac{n-2}{2n}}\leq M|\Omega|^{\frac{1}{p}}|T_{m}|^{1-\frac{n-2}{2n}-\frac{1}{p}}\]
which further implies
\[|T_{j}|\leq M\frac{|\Omega|^{\frac{2n}{(n-2)p}}|T_{m}|^{\frac{2n}{n-2}\left(1- \frac{n-2}{2n}-\frac{1}{p}\right)}}{|j-m|^{\frac{2n}{n-2}}}.\]
Since \(p>\frac{n}{2}\), we have that
\[\frac{2n}{n-2}\left(1-\frac{n-2}{2n}-\frac{1}{p}\right)>1.\]
Thus by [23, Lemma B.1], there exists \(m_{0}\) such that \(|T_{m}|=0\) for all \(m\geq m_{0}\). This completes the proof.
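For the reader's convenience, we recall the iteration lemma just invoked, in the form in which it is commonly stated (the explicit constant below is one admissible choice; its precise value plays no role here): if \(\varphi:[1,\infty)\to[0,\infty)\) is nonincreasing and there exist \(C>0\), \(\alpha>0\) and \(\beta>1\) such that

\[\varphi(j)\leq\frac{C\,\varphi(m)^{\beta}}{(j-m)^{\alpha}}\quad\text{for all }j>m\geq 1,\]

then \(\varphi(m)=0\) for all \(m\geq m_{0}:=1+\left(C\varphi(1)^{\beta-1}2^{\frac{\alpha\beta}{\beta-1}}\right)^{\frac{1}{\alpha}}\). Here it is applied with \(\varphi(m)=|T_{m}|\), \(\alpha=\frac{2n}{n-2}\) and \(\beta=\frac{2n}{n-2}\left(1-\frac{n-2}{2n}-\frac{1}{p}\right)>1\).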
**Proof of Proposition 3.2 (2):** Let \(\gamma\geq 3\) and \(u_{\lambda,k,\epsilon}\) be the weak solution of the problem \((\mathcal{S}_{\lambda,\epsilon,k})\) in the sense that it satisfies (3.18). Now from Lemma 3.6, we know that \(u_{\lambda,k,\epsilon}^{\frac{\omega+1}{2}}\) is uniformly bounded in \(H^{1}_{0}(\Omega)\) with \(\omega>\frac{\gamma-1}{2}\). Since \(\gamma\geq 3\), we have \(\omega>1\). This together with the fact that for every compact subset \(K\Subset\Omega\) there exists \(M=M(K)\) independent of \(k\) such that \(0<M\leq u_{\lambda,k,\epsilon}(x)\) for \(x\in K\), we get that \(u_{\lambda,k,\epsilon}\) is uniformly bounded in \(H^{1}_{\rm loc}(\Omega)\). Precisely,
\[\int\limits_{K}|\nabla u_{\lambda,k,\epsilon}|^{2}\leq M^{-(\omega-1)}\int \limits_{K}u_{\lambda,k,\epsilon}^{(\omega-1)}|\nabla u_{\lambda,k,\epsilon}| ^{2}dx\leq\frac{4M^{-(\omega-1)}}{\omega+1}\int\limits_{K}|\nabla u_{\lambda, k,\epsilon}^{\frac{\omega+1}{2}}|^{2}\leq M_{1},\]
where \(M_{1}\) is independent of \(k\). Then there exists \(u_{\lambda,\epsilon}\in H^{1}_{\rm loc}(\Omega)\) such that
\[u_{\lambda,k,\epsilon}\rightharpoonup u_{\lambda,\epsilon},\ u_{\lambda,k, \epsilon}\to u_{\lambda,\epsilon}\ {\rm in}\ L^{r}_{\rm loc}(\Omega)\ {\rm for}\ 1\leq r<2^{*}\ \ {\rm and}\ {\rm a.e}\ {\rm in}\ \Omega. \tag{3.23}\]
Now by using the weak convergence property we are able to pass to the limit in the left hand side of (3.18), \(i.e.\) for any \(v\in H^{1}_{\text{loc}}(\Omega)\) with \(K=\text{supp}(v)\Subset\Omega\),
\[\int\limits_{\Omega}\nabla u_{\lambda,k,\epsilon}\nabla vdx\to\int\limits_{ \Omega}\nabla u_{\lambda,\epsilon}\nabla vdx\text{ as }k\to\infty. \tag{3.24}\]
Finally, using the facts that \(0\leq\chi_{\epsilon}(u_{\lambda,k,\epsilon}-a)\leq 1\) and \(M_{1}(K)\leq u_{\lambda,1,\epsilon}\leq u_{\lambda,k,\epsilon}\) a.e. in \(K\), together with the Lebesgue dominated convergence theorem, we get
\[\int\limits_{\Omega}\frac{\chi_{\epsilon}(u_{\lambda,k,\epsilon}-a)}{(u_{ \lambda,k,\epsilon}+\frac{1}{k})^{\gamma}}v\to\int\limits_{\Omega}\frac{\chi_ {\epsilon}(u_{\lambda,\epsilon}-a)}{u_{\lambda,\epsilon}^{\gamma}}vdx. \tag{3.25}\]
Passing to the limit in (3.18) and using (3.24) and (3.25), we see that \(u_{\lambda,\epsilon}\) is a weak solution of \((\mathcal{S}_{\lambda,\epsilon})\).
For uniqueness of the solution, using Theorem 3.5, it is sufficient to show that \((u_{\lambda,\epsilon}-\nu)^{+}\in H^{1}_{0}(\Omega)\) for every \(\nu>0\). From Lemma 3.6, we have \(u_{\lambda,k,\epsilon}^{\omega}\in H^{1}_{0}(\Omega)\), where \(\omega>\frac{\gamma+1}{4}\geq 1\). Let \(\varphi_{m}\in C^{1}_{c}(\Omega)\) such that \(\varphi_{m}\) converges to \(u_{\lambda,k,\epsilon}^{\omega}\) in \(H^{1}_{0}(\Omega)\) and set
\[\psi_{m}:=(\varphi_{m}^{\frac{1}{\omega}}-\nu)^{+}.\]
Clearly \(\psi_{m}\) is uniformly bounded in \(H^{1}_{0}(\Omega)\) and converges a.e. to \((u_{\lambda,k,\epsilon}-\nu)^{+}\). Therefore we obtain that \((u_{\lambda,k,\epsilon}-\nu)^{+}\in H^{1}_{0}(\Omega)\) and hence \((u_{\lambda,\epsilon}-\nu)^{+}\in H^{1}_{0}(\Omega)\). This proves uniqueness. Next we prove that the map \(\epsilon\to u_{\lambda,\epsilon}\) is nonincreasing. For this, let \(\epsilon_{1}<\epsilon_{2}\) and \(u_{\lambda,\epsilon_{1}},\ u_{\lambda,\epsilon_{2}}\) be the corresponding solutions. Arguing by contradiction, suppose that there exists \(F\subset\Omega\) with positive measure such that \(w=u_{\lambda,\epsilon_{1}}-u_{\lambda,\epsilon_{2}}<0\) a.e on \(F\). Using the facts that \(u_{\lambda,\epsilon_{1}}\), \(u_{\lambda,\epsilon_{2}}\) are solutions of \((\mathcal{S}_{\lambda,\epsilon_{1}})\) and \((\mathcal{S}_{\lambda,\epsilon_{2}})\), respectively, \(\lambda>0\) and for \(t_{1}<t_{2}\) and \(\epsilon_{1}<\epsilon_{2}\), \(\chi_{\epsilon_{1}}(t_{1}-a)\geq\chi_{\epsilon_{2}}(t_{2}-a)\), by taking \(v=w^{-}\), we get
\[\int\limits_{\Omega}|\nabla w^{-}|^{2}dx=-\lambda\int\limits_{\Omega}\left( \chi_{\epsilon_{1}}(u_{\lambda,\epsilon_{1}}-a)u_{\lambda,\epsilon_{1}}^{- \gamma}-\chi_{\epsilon_{2}}(u_{\lambda,\epsilon_{2}}-a)u_{\lambda,\epsilon_{2 }}^{-\gamma}\right)v\leq 0,\]
and hence \(w\geq 0\) a.e. in \(\Omega\), which yields a contradiction. Lastly we check the validity of (3.2) and (3.3). From Lemma 3.6, we have that \(u_{\lambda,k,\epsilon}^{\omega}\) is uniformly bounded in \(H^{1}_{0}(\Omega)\) for \(\omega>\frac{\gamma+1}{4}\). This implies the existence of \(\psi\in H^{1}_{0}(\Omega)\) such that \(u_{\lambda,k,\epsilon}^{\omega}\rightharpoonup\psi\) in \(H^{1}_{0}(\Omega)\) and \(u_{\lambda,k,\epsilon}^{\omega}\to\psi\) in \(L^{r}(\Omega)\) for every \(1\leq r<2^{*}\), and a.e. in \(\Omega\). This together with (3.23) implies that \(\psi=u_{\lambda,\epsilon}^{\omega}\). Thus we have
\[\|u_{\lambda,\epsilon}^{\omega}\|\leq\liminf_{k\to\infty}\|u_{\lambda,k,\epsilon}^{\omega}\|\leq M.\]
From Lemma 3.6, we know that \(u_{\lambda,k,\epsilon}\) is uniformly bounded in \(L^{\infty}(\Omega)\) both in \(k\) and \(\epsilon\) and hence \(u_{\lambda,\epsilon}\) is uniformly bounded in \(L^{\infty}(\Omega)\). This completes the proof.
Lastly, we give the
**Proof of Proposition 3.1:** We divide the proof into two cases:
**Case A**: \(0<\gamma<3\). In this case, since \(u_{\lambda,\epsilon}\) is a solution of \((\mathcal{S}_{\lambda,\epsilon})\), using Proposition 3.2, Remark 2.4 and Hardy's inequality we find,
\[\|u_{\lambda,\epsilon}\|^{2}=\lambda\int\limits_{\Omega}\chi_{\epsilon}(u_{ \lambda,\epsilon}-a)u_{\lambda,\epsilon}^{1-\gamma}\leq C\int\limits_{\Omega}u_ {\lambda,\epsilon}\phi_{\gamma}^{-\gamma}\leq M\left(\int\limits_{\Omega}| \nabla u_{\lambda,\epsilon}|^{2}\right)^{\frac{1}{2}}=M\|u_{\lambda,\epsilon}\|.\]
Thus, \(\{u_{\lambda,\epsilon}\}_{\epsilon}\) is a bounded sequence in \(H^{1}_{0}(\Omega)\). Let \(u_{\lambda,\epsilon}\rightharpoonup u_{\lambda}\) in \(H^{1}_{0}(\Omega)\) and \(a.e\) in \(\Omega\). From the lower bound \(u_{\lambda,\epsilon}\geq M\phi_{\gamma}\), we obtain that \(\{\chi_{\epsilon}(u_{\lambda,\epsilon}-a)u_{\lambda,\epsilon}^{-\gamma}\}\) is a bounded sequence in \(L^{\infty}_{\rm loc}(\Omega)\). Then using elliptic regularity theory, \(\{u_{\lambda,\epsilon}\}_{\epsilon}\) is a bounded sequence in \(C^{\alpha}_{\rm loc}(\Omega)\) for some \(\alpha>0\) and hence, \(u_{\lambda,\epsilon}\to u_{\lambda}\) uniformly on compact subsets of \(\Omega\). Let \(\psi\in H^{1}_{0}(\Omega)\) be arbitrary. Using Remark 2.4, the estimate \(\chi_{\epsilon}(u_{\lambda,\epsilon}-a)u_{\lambda,\epsilon}^{-\gamma}\psi\leq M \phi_{\gamma}^{-\gamma}\psi\) and the weak convergence of \(u_{\lambda,\epsilon}\rightharpoonup u_{\lambda}\), we obtain that \(u_{\lambda}\) solves \(({\cal S}_{\lambda})\).
**Case B:**\(\gamma\geq 3\). The boundedness of the sequence \(\{u_{\lambda,\epsilon}\}_{\epsilon}\) follows using the same arguments as in the proof of Proposition 3.2 (2). Let \(u_{\lambda,\epsilon}\rightharpoonup u_{\lambda}\) in \(H^{1}_{0}(\Omega)\) and a.e. in \(\Omega\). Using the fact that for any compact set \(K\Subset\Omega\), \(0<M(K)\leq u_{\lambda,\epsilon}\) for every \(\epsilon>0\) and following the similar arguments of Case A, we conclude that \(u_{\lambda}\) is a weak solution of \(({\cal S}_{\lambda})\).
In both cases, any solution \(w\) to \(({\cal S}_{\lambda})\) is a supersolution to \(({\cal S}_{\lambda,\epsilon})\), so Theorem 3.5 gives \(u_{\lambda,\epsilon}\leq w\) and, letting \(\epsilon\to 0\), \(u_{\lambda}\leq w\); that is, \(u_{\lambda}\) is the minimal solution to \(({\cal S}_{\lambda})\). Finally, the Hölder regularity results follow using the boundary behaviour and [18, Theorem 1.2]. \(\Box\)
## 4 Existence of a first solution for \(({\cal P}_{\lambda})\)
In this section, we establish the existence of a first solution of the problem \(({\cal P}_{\lambda})\). Here again we follow the regularising techniques of the last section. Define
\[\Lambda^{a}=\sup\{\lambda>0:({\cal P}_{\lambda})\mbox{ has at least one solution}\}.\]
We have
**Lemma 4.1**: \(0<\Lambda^{a}<\infty\)_._
**Proof.** Let \(({\cal P}_{\lambda})\) admit a solution \(v_{\lambda}\). Multiplying \(({\cal P}_{\lambda})\) by \(e_{1}\) and integrating, we get
\[\lambda_{1}\int\limits_{\Omega}v_{\lambda}e_{1}dx= \lambda\left(\int\limits_{\Omega}\int\limits_{\Omega}\frac{v_{ \lambda}^{2_{\mu}^{*}}(y)v_{\lambda}^{2_{\mu}^{*}-1}(x)e_{1}(x)}{|x-y|^{\mu}} dxdy+\int\limits_{\Omega}\chi_{\{v_{\lambda}<a\}}v_{\lambda}^{-\gamma}e_{1}(x)dx\right)\] \[\geq\lambda\left(\int\limits_{\Omega}\left(\left(\frac{1}{d_{ \Omega}^{\mu}}\int\limits_{\Omega}v_{\lambda}^{2_{\mu}^{*}}(y)dy\right)v_{ \lambda}^{2_{\mu}^{*}-1}(x)+\chi_{\{v_{\lambda}<a\}}v_{\lambda}^{-\gamma}(x) \right)e_{1}(x)dx\right). \tag{4.1}\]
Now, for any \(m>0\), noting the superlinear nature of the map \(t\mapsto mt^{2_{\mu}^{*}-1}+\chi_{\{t<a\}}t^{-\gamma}\) at infinity and its singular nature at zero, we can guarantee the existence of a constant \(M=M(a)>0\) such that \(mt^{2_{\mu}^{*}-1}+\chi_{\{t<a\}}t^{-\gamma}>Mt\) for all \(t>0\). Employing this observation in (4.1), we conclude that
\[\lambda_{1}\int\limits_{\Omega}v_{\lambda}e_{1}dx\geq\lambda M\int\limits_{ \Omega}v_{\lambda}e_{1}dx.\]
Since \(\int_{\Omega}v_{\lambda}e_{1}dx>0\), this forces \(\lambda\leq\frac{\lambda_{1}}{M}\) and hence \(\Lambda^{a}<\infty\). Next we show that \(\Lambda^{a}>0\). For this, we consider the following singular problem without the jump discontinuity
\[\left\{-\Delta u=\lambda\left(u^{-\gamma}+\left(\int\limits_{\Omega}\frac{u^{2 _{\mu}^{*}}(y)}{|x-y|^{\mu}}dy\right)u^{2_{\mu}^{*}-1}(x)\right)\mbox{ in }\Omega,u>0\mbox{ in } \Omega,\ u=0\mbox{ on }\partial\Omega. \tag{4.2}\]
It is well known (see Theorems 1.1, 2.2 and 2.5 in [10]) that there exists a unique \(w_{\lambda}\in C_{0}(\overline{\Omega})\cap C^{2}(\Omega)\) solving the following singular problem for all \(\lambda>0\):
\[\left\{-\Delta w=\lambda w^{-\gamma},\ w>0\ \text{in}\ \Omega,\ w=0\ \text{on}\ \partial\Omega.\right.\]
Also by Dini's Theorem, we have that \(w_{\lambda}\to 0\) uniformly in \(\Omega\) as \(\lambda\to 0^{+}\). Clearly, \(w_{\lambda}\) is a subsolution to (4.2). Next let \(z\in H^{1}_{0}(\Omega)\) solve
\[\left\{-\Delta z=1,\ z>0\ \text{in}\ \Omega,\ z=0\ \text{on}\ \partial\Omega.\right.\]
Define \(z_{\lambda}=w_{\lambda}+z\). Next we claim that there exists \(\hat{\lambda}>0\) small such that for \(\lambda<\hat{\lambda}\), \(z_{\lambda}\) is a supersolution to (4.2). The choice of \(\hat{\lambda}\) is such that \(\hat{\lambda}(|w_{\lambda}+z|_{\infty})^{22^{*}_{\mu}-1}\hat{M}\leq 1\), where \(\hat{M}\) is such that \(\left|\int\limits_{\Omega}\dfrac{dy}{|x-y|^{\mu}}\right|<\hat{M}\). Note that the choice of such \(\hat{M}\) is possible since \(\Omega\) is bounded. Then for \(\lambda<\hat{\lambda}\), we have
\[-\Delta z_{\lambda}=\lambda w_{\lambda}^{-\gamma}+1\geq\lambda\left(z_{\lambda }^{-\gamma}+(|w_{\lambda}+z|_{\infty})^{22^{*}_{\mu}-1}\hat{M}\right)\geq \lambda\left(z_{\lambda}^{-\gamma}+\int\limits_{\Omega}\dfrac{z_{\lambda}^{2^{ *}_{\mu}}(y)z_{\lambda}^{2^{*}_{\mu}-1}(x)}{|x-y|^{\mu}}dy\right).\]
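Here the second inequality rests on \(w_{\lambda}\leq z_{\lambda}\), the choice of \(\hat{\lambda}\) and the crude pointwise bound (recall that \(0\leq z_{\lambda}\leq|w_{\lambda}+z|_{\infty}\) and the choice of \(\hat{M}\)):

\[\int\limits_{\Omega}\frac{z_{\lambda}^{2^{*}_{\mu}}(y)z_{\lambda}^{2^{*}_{\mu}-1}(x)}{|x-y|^{\mu}}dy\leq(|w_{\lambda}+z|_{\infty})^{22^{*}_{\mu}-1}\int\limits_{\Omega}\frac{dy}{|x-y|^{\mu}}\leq(|w_{\lambda}+z|_{\infty})^{22^{*}_{\mu}-1}\hat{M}.\]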
This completes the proof of the claim. Let \(\mathcal{K}_{\lambda}=\{u\in H^{1}_{0}(\Omega):w_{\lambda}\leq u\leq z_{\lambda }\ \text{in}\ \Omega\}\). Clearly, \(\mathcal{K}_{\lambda}\) is a closed convex (hence weakly closed) set in \(H^{1}_{0}(\Omega)\). Now, we define the following iterative scheme for all \(\lambda<\hat{\lambda}\):
\[\begin{cases}u_{0}=w_{\lambda},\\ -\Delta u_{k}-\lambda u_{k}^{-\gamma}=\lambda\int\limits_{\Omega}\dfrac{u_{k-1 }^{2^{*}_{\mu}}(y)u_{k-1}^{2^{*}_{\mu}-1}(x)}{|x-y|^{\mu}}dy,\ u_{k}>0\ \text{in}\ \Omega,\\ u_{k}=0\ \text{on}\ \partial\Omega,\ k=1,2,3,\dots\end{cases}\]
The above scheme is well defined, as we can solve for \(u_{k}\) in the closed convex set \(\mathcal{K}_{\lambda}\). Using the monotonicity of the operator \(u\mapsto-\Delta u-\lambda u^{-\gamma}\) (see the computation after this proof), we have that the sequence \(\{u_{k}\}_{k\in\mathbb{N}}\) is nondecreasing and \(w_{\lambda}\leq u_{k}\leq z_{\lambda}\) for all \(k\). In particular, \(\{u_{k}\}_{k\in\mathbb{N}}\) is uniformly bounded in \(C^{\alpha}(\bar{\Omega})\). By the Ascoli–Arzelà theorem, \(u_{k}\to\bar{u}_{\lambda}\in C_{0}(\Omega)\) as \(k\to\infty\) and \(w_{\lambda}\leq\bar{u}_{\lambda}\leq z_{\lambda}\). Now following the arguments as in the proof of Proposition 3.2, we conclude that \(\bar{u}_{\lambda}\) is a solution of (4.2). Noting that \(|\bar{u}_{\lambda}|_{\infty}\to 0\) as \(\lambda\to 0\), for small \(\lambda\) we have \(\bar{u}_{\lambda}<a\) in \(\Omega\), so that the jump term is inactive and \(\bar{u}_{\lambda}\) solves \((\mathcal{P}_{\lambda})\); hence \(\Lambda^{a}>0\). \(\square\)
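For completeness, here is the comparison computation referred to in the proof above (a standard argument): if \(u_{1},u_{2}\in\mathcal{K}_{\lambda}\) satisfy \(-\Delta u_{1}-\lambda u_{1}^{-\gamma}\leq-\Delta u_{2}-\lambda u_{2}^{-\gamma}\) weakly, then testing the difference of the two inequalities with \((u_{1}-u_{2})^{+}\) gives

\[\int\limits_{\Omega}|\nabla(u_{1}-u_{2})^{+}|^{2}dx\leq\lambda\int\limits_{\Omega}\left(u_{1}^{-\gamma}-u_{2}^{-\gamma}\right)(u_{1}-u_{2})^{+}dx\leq 0,\]

since \(t\mapsto t^{-\gamma}\) is decreasing, whence \(u_{1}\leq u_{2}\) in \(\Omega\).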
Now we consider the following perturbed regular problem associated to \((\mathcal{P}_{\lambda})\),
\[(\mathcal{P}_{\lambda,\epsilon})\begin{cases}-\Delta u=\lambda\left(\int\limits_{\Omega}\dfrac{u^{2^{*}_{\mu}}(y)u^{2^{*}_{\mu}-1}(x)}{|x-y|^{\mu}}dy+\chi_{\epsilon}(u-a)u^{-\gamma}\right),\\ u\equiv 0\ \text{on}\ \partial\Omega,\ u>0\ \text{in}\ \Omega,\end{cases}\]
where \(\chi_{\epsilon}\) is defined as in (3.1). The formal energy functional \(J_{\lambda,\epsilon}\) associated to \((\mathcal{P}_{\lambda,\epsilon})\) is defined as
\[J_{\lambda,\epsilon}(u)=\frac{1}{2}\|u\|^{2}-\lambda\int\limits_{\Omega}H_{\epsilon}(u)-\frac{\lambda}{22^{*}_{\mu}}\int\limits_{\Omega}\int\limits_{\Omega}\dfrac{|u|^{2^{*}_{\mu}}(y)|u|^{2^{*}_{\mu}}(x)}{|x-y|^{\mu}}dxdy,\]
where \(H_{\epsilon}\) is defined as in (3.4).
We now have the following existence result:
**Lemma 4.2**: \(({\cal P}_{\lambda})\) _admits a solution \(v_{\lambda}\) for all \(\lambda\in(0,\Lambda^{a})\). Moreover, \(v_{\lambda}\sim\phi_{\gamma}\) and satisfies the Sobolev regularity result as stated in Theorem 1.1._
**Proof.** We first show that given any \(0<\epsilon<\epsilon_{0}(a)\), where \(\epsilon_{0}\) is obtained in Proposition 3.2 and \(\lambda\in(0,\Lambda^{a})\), the approximating problem \(({\cal P}_{\lambda,\epsilon})\) admits a solution \(v_{\lambda,\epsilon}\). Let \(u_{\lambda,\epsilon}\) be the solution of \(({\cal S}_{\lambda,\epsilon})\) as obtained in Proposition 3.2. Given any \(\lambda\in(0,\Lambda^{a})\), there exists \(\bar{\lambda}>\lambda\) such that \(({\cal P}_{\bar{\lambda}})\) admits a solution \(\bar{v}\) and by the definition of \(\chi_{\epsilon}\), \(\bar{v}\) is a supersolution of \(({\cal P}_{\lambda,\epsilon})\). Then by Theorem 3.5, we see that \(u_{\lambda,\epsilon}\leq\bar{v}\). Now the existence of a solution \(v_{\lambda,\epsilon}\) of \(({\cal P}_{\lambda,\epsilon})\) is obtained as a local minimizer of \(J_{\lambda,\epsilon}\) over the convex set \({\cal M}_{\epsilon}=\{u\in H^{1}_{0}(\Omega):u_{\lambda,\epsilon}\leq u\leq \bar{v}\}\). As \(v_{\lambda,\epsilon}\) solves \(({\cal P}_{\lambda,\epsilon})\), it is easy to check that \(\{v_{\lambda,\epsilon}\}_{\epsilon}\) is bounded in \(H^{1}_{0}(\Omega)\) (thanks to \(\bar{v}\in L^{\infty}(\Omega)\)) and hence weakly converges to some \(v_{\lambda}\in H^{1}_{0}(\Omega)\). Then following the convergence arguments of Proposition 3.1, we conclude that \(v_{\lambda}\) is a solution of \(({\cal P}_{\lambda})\).
Now since \(|v_{\lambda,\epsilon}|_{\infty}\leq M\), where \(M\) is independent of \(\epsilon\) (because the same is true for \(u_{\lambda,\epsilon}\) by Proposition 3.2), we see that \(u_{\lambda,\epsilon}\leq v_{\lambda,\epsilon}\leq w\), where \(w\) is the solution of
\[-\Delta w=\lambda w^{-\gamma}+K,\ w>0\ \mbox{in}\ \Omega,\ w=0\ \mbox{on}\ \partial\Omega,\]
for some appropriate \(K\). Also, since \(u_{\lambda,\epsilon}\sim\phi_{\gamma}\) and \(w\sim\phi_{\gamma}\), we conclude that \(v_{\lambda,\epsilon}\sim\phi_{\gamma}\) for every \(\epsilon\) and hence \(v_{\lambda}\sim\phi_{\gamma}\). Finally, the Sobolev regularity result for \(v_{\lambda,\epsilon}\) follows along the lines of the proof of Lemma 3.6, item \(v)\), using the fact that \(v_{\lambda,\epsilon}\in L^{\infty}(\Omega)\) uniformly in \(\epsilon\); for \(v_{\lambda}\) it follows using the arguments as in the proof of Proposition 3.2 (2). \(\Box\)
Next following the proof of Proposition 3.1 and Lemma 3.4 of [11], we have the following lemma:
**Lemma 4.3**: _For any \(0<\lambda<\Lambda^{a}\) and \(0<\mu\leq\min\{n,4\}\), \(J_{\lambda}(v_{\lambda})=\min_{v\in{\cal M}_{0}}J_{\lambda}(v)\), where \({\cal M}_{0}=\{u\in H^{1}_{0}(\Omega):u_{\lambda}\leq u\leq\bar{v}\}\) and \(u_{\lambda}\) is as in Proposition 3.1._
Now we claim that \(v_{\lambda}\) is a local minimum of \(J_{\lambda}\) in \(H^{1}_{0}(\Omega)\). We have
**Theorem 4.4**: _Let \(a>0\) and \(0<\mu\leq\min\{n,4\}\). Then for \(\lambda\in(0,\Lambda^{a})\), \(v_{\lambda}\) is a local minimum of \(J_{\lambda}\) in \(H^{1}_{0}(\Omega)\)._
**Proof.** We assume that \(v_{\lambda}\) is not a local minimum of \(J_{\lambda}\) in \(H^{1}_{0}(\Omega)\) and derive a contradiction. Let \(\{v_{k}\}\subset H^{1}_{0}(\Omega)\) be such that \(v_{k}\to v_{\lambda}\) in \(H^{1}_{0}(\Omega)\) and \(J_{\lambda}(v_{k})<J_{\lambda}(v_{\lambda})\). For \(\underline{v}=u_{\lambda}\) and solution \(\overline{v}\) of \(({\cal P}_{\bar{\lambda}})\) where \(0<\lambda<\bar{\lambda}<\Lambda^{a}\), define \(z_{k}=\max\{\underline{v},\min\{v_{k},\overline{v}\}\}\), \(\overline{w}_{k}=(v_{k}-\overline{v})^{+}\), \(\underline{w}_{k}=(v_{k}-\underline{v})^{-}\), \(\overline{A}_{k}=\mbox{supp}(\overline{w}_{k})\) and \(\underline{A}_{k}=\mbox{supp}(\underline{w}_{k})\).
**Claim A**: \(|\overline{A}_{k}|\), \(|\underline{A}_{k}|\) and \(\|\overline{w}_{k}\|\to 0\) as \(k\to\infty\).
The claim can be proved along the lines of the proof of Theorem 2.2 of [12] and is hence omitted. Now note that \(z_{k}\in{\cal M}_{0}=\{u\in H^{1}_{0}(\Omega):\underline{v}\leq u\leq\overline{v}\}\) and \(v_{k}=z_{k}-\underline{w}_{k}+\overline{w}_{k}\). Now
\[J_{\lambda}(v_{k})= J_{\lambda}(z_{k})+\frac{1}{2}\int\limits_{\overline{A}_{k}}\left(|\nabla v_{k}|^{2}-|\nabla\overline{v}|^{2}\right)dx+\frac{1}{2}\int\limits_{\underline{A}_{k}}\left(|\nabla v_{k}|^{2}-|\nabla\underline{v}|^{2}\right)dx-\lambda\int\limits_{\overline{A}_{k}}(H(v_{k})-H(\overline{v}))dx\] \[-\lambda\int\limits_{\underline{A}_{k}}(H(v_{k})-H(\underline{v}))dx-\frac{\lambda}{22^{*}_{\mu}}\int\limits_{\Omega}\int\limits_{\Omega}\frac{v_{k}^{2^{*}_{\mu}}(x)v_{k}^{2^{*}_{\mu}}(y)-z_{k}^{2^{*}_{\mu}}(x)z_{k}^{2^{*}_{\mu}}(y)}{|x-y|^{\mu}}dxdy\]
\[= J_{\lambda}(z_{k})+\frac{1}{2}\int\limits_{\overline{A}_{k}}|\nabla\overline{w}_{k}|^{2}dx+\int\limits_{\overline{A}_{k}}\nabla\overline{v}\nabla\overline{w}_{k}dx+\frac{1}{2}\int\limits_{\underline{A}_{k}}|\nabla\underline{w}_{k}|^{2}dx-\int\limits_{\underline{A}_{k}}\nabla\underline{v}\nabla\underline{w}_{k}dx\] \[-\lambda\int\limits_{\overline{A}_{k}}(H(\overline{v}+\overline{w}_{k})-H(\overline{v}))dx-\frac{\lambda}{22^{*}_{\mu}}\int\limits_{\Omega}\int\limits_{\Omega}\frac{(v_{k}^{2^{*}_{\mu}}(x)-z_{k}^{2^{*}_{\mu}}(x))v_{k}^{2^{*}_{\mu}}(y)}{|x-y|^{\mu}}dxdy\] \[-\lambda\int\limits_{\underline{A}_{k}}(H(\underline{v}-\underline{w}_{k})-H(\underline{v}))dx-\frac{\lambda}{22^{*}_{\mu}}\int\limits_{\Omega}\int\limits_{\Omega}\frac{(v_{k}^{2^{*}_{\mu}}(x)-z_{k}^{2^{*}_{\mu}}(x))z_{k}^{2^{*}_{\mu}}(y)}{|x-y|^{\mu}}dxdy. \tag{4.3}\]
Employing the facts that \(\underline{v}\) and \(\overline{v}\) are respectively a sub- and a supersolution of \((\mathcal{P}_{\lambda})\), we obtain from (4.3)
\[J_{\lambda}(v_{k})\geq J_{\lambda}(z_{k})+I_{k}+J_{k},\]
where
\[I_{k}= \frac{1}{2}\int\limits_{\overline{A}_{k}}|\nabla\overline{w}_{k}|^{2}dx+\lambda\int\limits_{\overline{A}_{k}}\left(\chi_{\{\overline{v}<a\}}\overline{v}^{\,-\gamma}\overline{w}_{k}-(H(\overline{v}+\overline{w}_{k})-H(\overline{v}))\right)dx\] \[+\frac{\lambda}{2}\int\limits_{\Omega}\int\limits_{\overline{A}_{k}}\frac{\overline{v}^{\,2^{*}_{\mu}}(y)\overline{v}^{\,2^{*}_{\mu}-1}(x)\overline{w}_{k}(x)-\frac{1}{2^{*}_{\mu}}\left((\overline{v}+\overline{w}_{k})^{2^{*}_{\mu}}(x)-\overline{v}^{\,2^{*}_{\mu}}(x)\right)\left(v_{k}^{2^{*}_{\mu}}(y)+z_{k}^{2^{*}_{\mu}}(y)\right)}{|x-y|^{\mu}}dxdy\]
and
\[J_{k}= \frac{1}{2}\int\limits_{\underline{A}_{k}}|\nabla\underline{w}_{k}|^{2}dx-\lambda\int\limits_{\underline{A}_{k}}\left(\left(\chi_{\{\underline{v}<a\}}\underline{v}^{\,-\gamma}\underline{w}_{k}+H(\underline{v}-\underline{w}_{k})\right)-H(\underline{v})\right)dx\] \[-\frac{\lambda}{2}\int\limits_{\Omega}\int\limits_{\underline{A}_{k}}\frac{\underline{v}^{\,2^{*}_{\mu}}(y)\underline{v}^{\,2^{*}_{\mu}-1}(x)\underline{w}_{k}(x)-\frac{1}{2^{*}_{\mu}}\left((\underline{v}-\underline{w}_{k})^{2^{*}_{\mu}}(x)-\underline{v}^{\,2^{*}_{\mu}}(x)\right)\left(v_{k}^{2^{*}_{\mu}}(y)+z_{k}^{2^{*}_{\mu}}(y)\right)}{|x-y|^{\mu}}dxdy.\]
Now we claim that \(I_{k},\ J_{k}\geq 0\) for large \(k\). Since \(z_{k}\in\mathcal{M}_{0}\), Lemma 4.3 gives \(J_{\lambda}(z_{k})\geq J_{\lambda}(v_{\lambda})\), so the claim yields \(J_{\lambda}(v_{k})\geq J_{\lambda}(v_{\lambda})\) for large \(k\), which contradicts our assumption that \(J_{\lambda}(v_{k})<J_{\lambda}(v_{\lambda})\) for all \(k\). We only show that \(I_{k}\geq 0\); the case of \(J_{k}\geq 0\) runs in a similar fashion.
Dividing \(\overline{A}_{k}\) into three subdomains, viz., \(\overline{A}_{k}\cap\{x\in\Omega:a<\overline{v}(x)\}\), \(\overline{A}_{k}\cap\{x\in\Omega:\overline{v}(x)\leq a\leq(\overline{v}+\overline{w}_{k})(x)\}\) and \(\overline{A}_{k}\cap\{x\in\Omega:(\overline{v}+\overline{w}_{k})(x)<a\}\), one can check that the second integral on the right-hand side of \(I_{k}\) is nonnegative. Now using the fact that \(z_{k}\leq\overline{v}\) and the mean value theorem, we obtain, for some \(\theta=\theta(x)\in(0,1)\), that
\[I_{k,1}=\int\limits_{\Omega}\int\limits_{\overline{A}_{k}}\frac{\overline{v}^{2^{*}_{\mu}}(y)\overline{v}^{\,2^{*}_{\mu}-1}(x)\overline{w}_{k}(x)-\frac{1}{2^{*}_{\mu}}\left((\overline{v}+\overline{w}_{k})^{2^{*}_{\mu}}(x)-\overline{v}^{2^{*}_{\mu}}(x)\right)z_{k}^{2^{*}_{\mu}}(y)}{|x-y|^{\mu}}dxdy\] \[\geq\int\limits_{\Omega}\frac{\overline{v}^{2^{*}_{\mu}}(y)}{|x-y|^{\mu}}dy\left(\int\limits_{\overline{A}_{k}}\left(\overline{v}^{\,2^{*}_{\mu}-1}(x)\overline{w}_{k}(x)-\frac{1}{2^{*}_{\mu}}\left((\overline{v}+\overline{w}_{k})^{2^{*}_{\mu}}(x)-\overline{v}^{2^{*}_{\mu}}(x)\right)\right)dx\right)\] \[=-\int\limits_{\Omega}\frac{\overline{v}^{2^{*}_{\mu}}(y)}{|x-y|^{\mu}}dy\int\limits_{\overline{A}_{k}}\left((\overline{v}+\theta\overline{w}_{k})^{2^{*}_{\mu}-1}(x)-\overline{v}^{2^{*}_{\mu}-1}(x)\right)\overline{w}_{k}(x)dx\]
\[\geq -\int\limits_{\Omega}\frac{\overline{v}^{2^{*}_{\mu}}(y)}{|x-y|^{\mu}}dy\int\limits_{\overline{A}_{k}}(\overline{v}+\overline{w}_{k})^{2^{*}_{\mu}-2}\overline{w}_{k}^{2}dx\geq-M\int\limits_{\Omega}\int\limits_{\overline{A}_{k}}\frac{\overline{v}^{2^{*}_{\mu}}(y)\left(\overline{v}^{2^{*}_{\mu}-2}(x)+\overline{w}_{k}^{2^{*}_{\mu}-2}(x)\right)\overline{w}_{k}^{2}(x)}{|x-y|^{\mu}}dxdy.\]
Now by applying the Hardy–Littlewood–Sobolev and Hölder inequalities, we obtain
\[I_{k,1}\geq-M_{1}|\overline{v}|_{2^{*}}^{2^{*}_{\mu}}\left(\left(\int\limits_{\overline{A}_{k}}\overline{v}^{2^{*}}dx\right)^{\frac{2^{*}_{\mu}-2}{2^{*}_{\mu}}}\|\overline{w}_{k}\|^{2}+\|\overline{w}_{k}\|^{2^{*}_{\mu}}\right). \tag{4.4}\]
Also
\[I_{k,2}= \int\limits_{\Omega}\int\limits_{\overline{A}_{k}}\frac{\overline {v}^{2^{*}_{\mu}}(y)\overline{v}^{2^{*}_{\mu}-1}(x)\overline{w}_{k}(x)}{|x-y| ^{\mu}}dxdy-\frac{1}{2^{*}_{\mu}}\int\limits_{\Omega\setminus\overline{A}_{k} }\int\limits_{\overline{A}_{k}}\frac{\left((\overline{v}+\overline{w}_{k})^{2^ {*}_{\mu}}(x)-\overline{v}^{2^{*}_{\mu}}(x)\right)\overline{v}^{2^{*}_{\mu}}( y)}{|x-y|^{\mu}}dxdy\] \[-\frac{1}{2^{*}_{\mu}}\int\limits_{\overline{A}_{k}}\int\limits _{\overline{A}_{k}}\frac{\left((\overline{v}+\overline{w}_{k})^{2^{*}_{\mu}} (x)-\overline{v}^{2^{*}_{\mu}}(x)\right)v_{k}^{2^{*}_{\mu}}(y)}{|x-y|^{\mu}}dxdy\] \[\geq \int\limits_{\Omega}\int\limits_{\overline{A}_{k}}\frac{\overline {v}^{2^{*}_{\mu}}(y)\left(\overline{v}^{2^{*}_{\mu}-1}(x)\overline{w}_{k}(x)- \frac{1}{2^{*}_{\mu}}\left((\overline{v}+\overline{w}_{k})^{2^{*}_{\mu}}(x)- \overline{v}^{2^{*}_{\mu}}(x)\right)\right)}{|x-y|^{\mu}}dxdy\] \[-\frac{1}{2^{*}_{\mu}}\int\limits_{\overline{A}_{k}}\int\limits _{\overline{A}_{k}}\frac{\left((\overline{v}+\overline{w}_{k})^{2^{*}_{\mu}} (x)-\overline{v}^{2^{*}_{\mu}}(x)\right)v_{k}^{2^{*}_{\mu}}(y)}{|x-y|^{\mu}}dxdy.\]
Now the first integral in the last inequality can be estimated like \(I_{k,1}\). In the following we estimate the second integral. Again using the mean value theorem and the Hardy–Littlewood–Sobolev and Hölder inequalities, we have
\[\frac{1}{2^{*}_{\mu}}\int\limits_{\overline{A}_{k}}\int\limits_{ \overline{A}_{k}}\frac{\left((\overline{v}+\overline{w}_{k})^{2^{*}_{\mu}}(x)- \overline{v}^{2^{*}_{\mu}}(x)\right)v_{k}^{2^{*}_{\mu}}(y)}{|x-y|^{\mu}}dxdy\leq \int\limits_{\overline{A}_{k}}\int\limits_{\overline{A}_{k}}\frac {(\overline{v}+\overline{w}_{k})^{2^{*}_{\mu}-1}(x)\overline{w}_{k}(x)v_{k}^{2 ^{*}_{\mu}}(y)}{|x-y|^{\mu}}dxdy\] \[\leq M\left(\|\overline{v}\|_{L^{2^{*}}(\overline{A}_{k})}+\|\overline {w}_{k}\|_{L^{2^{*}}(\overline{A}_{k})}\right)^{\frac{(2^{*}_{\mu}-1)2^{*}}{2 }}\|\overline{w}_{k}\|^{2^{*}}\] \[= o_{k}(1). \tag{4.5}\]
Now using (4.4), (4.5) and Claim A, we deduce that \(I_{k}\geq 0\) for all large \(k\). This completes the proof of the theorem. \(\square\)
## 5 Existence of a second solution for \((\mathcal{P}_{\lambda})\)
This section is devoted to obtaining a second solution of \((\mathcal{P}_{\lambda})\) for \(\lambda\in(0,\Lambda^{a})\). Here we restrict ourselves to the case \(0<\gamma<3\). We obtain the second solution by translating the problem by the
solution \(v_{\lambda}\) obtained in the previous section. Precisely, we consider the following problem
\[(\hat{P}_{\lambda})\begin{cases}-\Delta w=\lambda\left(\chi_{\{w+v_{ \lambda}<a\}}(w+v_{\lambda})^{-\gamma}-\chi_{\{v_{\lambda}<a\}}v_{\lambda}^{- \gamma}\right)\\ \hskip 28.452756pt+\lambda\left(\int\limits_{\Omega}\frac{(w+v_{ \lambda})^{2^{*}_{\mu}}(y)(w+v_{\lambda})^{2^{*}_{\mu}-1}(x)-v_{\lambda}^{2^ {*}_{\mu}}(y)v_{\lambda}^{2^{*}_{\mu}-1}(x)}{|x-y|^{\mu}}dy\right)\ \mbox{in}\ \Omega,\\ w>0\ \mbox{in}\ \Omega,\ w=0\ \mbox{on}\ \partial\Omega.\end{cases}\]
Clearly, if \(w_{\lambda}\in H^{1}_{0}(\Omega)\) weakly solves \((\hat{P}_{\lambda})\), then \(w_{\lambda}+v_{\lambda}\) weakly solves \((\mathcal{P}_{\lambda})\). Let us define, for \(x\in\Omega\),
\[f(x,s)=\left(\chi_{\{s+v_{\lambda}<a\}}(s+v_{\lambda})^{-\gamma}-\chi_{\{v_{ \lambda}<a\}}v_{\lambda}^{-\gamma}\right)\chi_{\mathbb{R}^{+}}(s).\]
Let \(F(x,t)=\int\limits_{0}^{t}f(x,s)ds\). Now the energy functional \(\mathcal{G}_{\lambda}:H^{1}_{0}(\Omega)\to\mathbb{R}\) associated with \((\hat{P}_{\lambda})\) is given as:
\[\mathcal{G}_{\lambda}(w)= \frac{1}{2}\|w\|^{2}-\lambda\int\limits_{\Omega}F(x,w)dx-\frac{ \lambda}{22^{*}_{\mu}}\int\limits_{\Omega}\int\limits_{\Omega}\frac{(w+v_{ \lambda})^{2^{*}_{\mu}}(y)(w+v_{\lambda})^{2^{*}_{\mu}}(x)}{|x-y|^{\mu}}dxdy\] \[-\lambda\int\limits_{\Omega}\int\limits_{\Omega}\frac{v_{\lambda }^{2^{*}_{\mu}}(y)v_{\lambda}^{2^{*}_{\mu}-1}(x)w(x)}{|x-y|^{\mu}}dxdy.\]
**Proposition 5.1**: _The map \(\mathcal{G}_{\lambda}\) is locally Lipschitz._
**Proof.** The proof is similar to the proof of [12, Proposition 3.1] and hence omitted. \(\Box\)
**Remark 5.2**: _Note that \(J_{\lambda}(w^{+}+v_{\lambda})=J_{\lambda}(v_{\lambda})+\mathcal{G}_{\lambda}( w)-\frac{1}{2}\|w^{-}\|^{2}\) for any \(w\in H^{1}_{0}(\Omega)\). Therefore, since \(v_{\lambda}\) is a local minimum of \(J_{\lambda}\), it follows that \(0\) is a local minimum of \(\mathcal{G}_{\lambda}\) in \(H^{1}_{0}(\Omega)\)-topology._
**Definition 5.3**: _Let \(\Phi:H^{1}_{0}(\Omega)\to\mathbb{R}\) be a locally Lipschitz map. The generalized derivative of \(\Phi\) at \(u\) in the direction of \(v\) (denoted by \(\Phi^{0}(u,v)\)) is defined as:_
\[\Phi^{0}(u,v)=\limsup_{h\to 0,t\downarrow 0}\frac{\Phi(u+h+tv)-\Phi(u+h)}{t},\ u,\ v\in H^{1}_{0}(\Omega).\]
_We say that \(u\) is a 'generalized' critical point of \(\Phi\) if \(\Phi^{0}(u,v)\geq 0\) for all \(v\in H^{1}_{0}(\Omega)\)._
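In the terminology of Clarke's nonsmooth calculus (not otherwise needed in what follows), this says precisely that \(0\in\partial\Phi(u)\), where the generalized gradient is

\[\partial\Phi(u)=\left\{\xi\in H^{-1}(\Omega):\Phi^{0}(u,v)\geq\langle\xi,v\rangle\ \text{for all}\ v\in H^{1}_{0}(\Omega)\right\},\]

since \(\Phi^{0}(u,\cdot)\) is the support function of \(\partial\Phi(u)\).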
**Remark 5.4**: _From [11, Definition 4.1], for \(w\geq 0\) and \(\psi\in H^{1}_{0}(\Omega)\), we have the following representation:_
\[\mathcal{G}^{0}_{\lambda}(w,\psi)= \int\limits_{\Omega}\nabla(v_{\lambda}+w)\nabla\psi dx-\lambda\int \limits_{\Omega}\int\limits_{\Omega}\frac{(v_{\lambda}+w)^{2^{*}_{\mu}}(y)(v_ {\lambda}+w)^{2^{*}_{\mu}-1}(x)}{|x-y|^{\mu}}\psi dxdy\] \[-\lambda\int\limits_{\Omega}z^{\psi}(v_{\lambda}+w)^{-\gamma} \psi dx, \tag{5.1}\]
_for some measurable function \(z^{\psi}\in[\chi_{\{v_{\lambda}+w<a\}},\chi_{\{v_{\lambda}+w\leq a\}}]\)._
**Remark 5.5**: _Suppose for some nontrivial nonnegative \(w_{\lambda}\in H^{1}_{0}(\Omega)\) we have \(\mathcal{G}^{0}_{\lambda}(w_{\lambda},\psi)\geq 0\) for all \(\psi\in H^{1}_{0}(\Omega)\), i.e., \(w_{\lambda}\) is a generalized critical point of \(\mathcal{G}_{\lambda}\). Then we claim that_
\[\lambda\int\limits_{\Omega} \frac{(v_{\lambda}+w_{\lambda})^{2^{*}_{\mu}}(y)(v_{\lambda}+w_{ \lambda})^{2^{*}_{\mu}-1}(x)}{|x-y|^{\mu}}dy\leq-\Delta(v_{\lambda}+w_{\lambda})\] \[\leq\lambda\left(\int\limits_{\Omega}\frac{(v_{\lambda}+w_{ \lambda})^{2^{*}_{\mu}}(y)(v_{\lambda}+w_{\lambda})^{2^{*}_{\mu}-1}(x)}{|x-y| ^{\mu}}dy+(v_{\lambda}+w_{\lambda})^{-\gamma}\right). \tag{5.2}\]
_Indeed, since \(w_{\lambda}\geq 0\) and \(\mathcal{G}^{0}_{\lambda}(w_{\lambda},\psi)\geq 0\), using (5.1), we have for any \(\psi\in H^{1}_{0}(\Omega)\)_
\[0\leq\mathcal{G}^{0}_{\lambda}(w_{\lambda},\psi)= \int\limits_{\Omega}\nabla(v_{\lambda}+w_{\lambda})\nabla\psi dx -\lambda\int\limits_{\Omega}\int\limits_{\Omega}\frac{(v_{\lambda}+w_{ \lambda})^{2^{*}_{\mu}}(y)(v_{\lambda}+w_{\lambda})^{2^{*}_{\mu}-1}(x)}{|x-y| ^{\mu}}\psi dxdy\] \[-\lambda\int\limits_{\Omega}z^{\psi}(v_{\lambda}+w_{\lambda})^{- \gamma}\psi dx. \tag{5.3}\]
_Taking \(\psi\geq 0\) in (5.3), we have_
\[\lambda\int\limits_{\Omega}\int\limits_{\Omega}\frac{(v_{\lambda}+w_{\lambda})^{2^{*}_{\mu}}(y)(v_{\lambda}+w_{\lambda})^{2^{*}_{\mu}-1}(x)}{|x-y|^{\mu}}\psi dxdy+\lambda\int\limits_{\Omega}z^{\psi}(v_{\lambda}+w_{\lambda})^{-\gamma}\psi dx\leq\int\limits_{\Omega}\nabla(v_{\lambda}+w_{\lambda})\nabla\psi dx.\]
_Since \(z^{\psi}\geq 0\) and given that \(\psi\geq 0\), we have_
\[\lambda\int\limits_{\Omega}\int\limits_{\Omega}\frac{(v_{\lambda}+w_{\lambda})^{2^{*}_{\mu}}(y)(v_{\lambda}+w_{\lambda})^{2^{*}_{\mu}-1}(x)}{|x-y|^{\mu}}\psi dxdy\leq\int\limits_{\Omega}\nabla(v_{\lambda}+w_{\lambda})\nabla\psi dx. \tag{5.4}\]
_Next let us consider \(\varphi\in H^{1}_{0}(\Omega)\) which is nonpositive, so that \(\psi=-\varphi\geq 0\). Again using (5.3), we have_
\[\lambda\int\limits_{\Omega}\int\limits_{\Omega}\frac{(v_{\lambda}+w_{\lambda})^{2^{*}_{\mu}}(y)(v_{\lambda}+w_{\lambda})^{2^{*}_{\mu}-1}(x)}{|x-y|^{\mu}}\varphi dxdy+\lambda\int\limits_{\Omega}z^{\varphi}(v_{\lambda}+w_{\lambda})^{-\gamma}\varphi dx\leq\int\limits_{\Omega}\nabla(v_{\lambda}+w_{\lambda})\nabla\varphi dx.\]
_Multiplying by \(-1\) on both sides and using the fact that \(z^{\varphi}\in[0,1]\), we get_
\[\lambda\int\limits_{\Omega}\int\limits_{\Omega}\frac{(v_{\lambda}+w_{\lambda})^{2^{*}_{\mu}}(y)(v_{\lambda}+w_{\lambda})^{2^{*}_{\mu}-1}(x)\psi(x)}{|x-y|^{\mu}}dxdy+\lambda\int\limits_{\Omega}(v_{\lambda}+w_{\lambda})^{-\gamma}\psi dx\geq\int\limits_{\Omega}\nabla(v_{\lambda}+w_{\lambda})\nabla\psi dx.\]
_Since \(\psi=-\varphi\) is an arbitrary nonnegative function in \(H^{1}_{0}(\Omega)\), the previous expression implies that_
\[-\Delta(v_{\lambda}+w_{\lambda})\leq\lambda\left(\int\limits_{\Omega}\frac{(v _{\lambda}+w_{\lambda})^{2^{*}_{\mu}}(y)(v_{\lambda}+w_{\lambda})^{2^{*}_{\mu} -1}(x)}{|x-y|^{\mu}}dy+(v_{\lambda}+w_{\lambda})^{-\gamma}\right)\text{ weakly}. \tag{5.5}\]
_Combining (5.4) and (5.5), we have the validity of (5.2). Hence the claim._
Note that \(-\Delta(v_{\lambda}+w_{\lambda})\) is a positive distribution and hence it is given by a positive, regular Radon measure, say \(\nu\). Then using (5.2), we can show that \(\nu\) is absolutely continuous with respect to the Lebesgue measure. Now by the Radon–Nikodym theorem there exists a locally integrable function \(g\) such that \(-\Delta(v_{\lambda}+w_{\lambda})=g\) and hence \(g\in L^{p}_{\rm loc}(\Omega)\) for some \(p>1\). Now using Lemma \(B.3\) of [28]
and elliptic regularity, we can conclude that \(v_{\lambda}+w_{\lambda}\in W^{2,q}_{\rm loc}(\Omega)\) for all \(q<\infty\) and for almost every \(x\in\Omega\), using (5.2) we have \(-\Delta(v_{\lambda}+w_{\lambda})>0\). In particular,
\[-\Delta(v_{\lambda}+w_{\lambda})>0\mbox{ for a.e on }\{x\in\Omega:(v_{\lambda}+w_{ \lambda})(x)=a\}. \tag{5.6}\]
On the other hand, we have \(-\Delta(v_{\lambda}+w_{\lambda})=0\) a.e. on the set \(\{x\in\Omega:(v_{\lambda}+w_{\lambda})(x)=a\}\). This contradicts (5.6) unless the Lebesgue measure of the set \(\{x\in\Omega:(v_{\lambda}+w_{\lambda})(x)=a\}\) is zero. Therefore \(z^{\psi}=\chi_{\{v_{\lambda}+w_{\lambda}<a\}}\) a.e. in \(\Omega\) for any \(\psi\in H^{1}_{0}(\Omega)\) and hence \(v_{\lambda}+w_{\lambda}\) is a second solution for \(({\cal P}_{\lambda})\).
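In the previous paragraph we used the classical property of Sobolev functions that gradient and Laplacian vanish almost everywhere on level sets: for \(u\in W^{2,q}_{\rm loc}(\Omega)\) and any constant \(c\),

\[\nabla u=0\ \text{ and }\ \Delta u=0\quad\text{a.e. on }\{x\in\Omega:u(x)=c\}.\]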
Our next target is to show the existence of a generalized critical point of \({\cal G}_{\lambda}\), which gives us the second solution of \(({\cal P}_{\lambda})\). We will employ the Mountain Pass theorem and the Ekeland variational principle to this end. We define \(X^{+}=\{u\in H^{1}_{0}(\Omega):\ u\geq 0\mbox{ a.e. in }\Omega\}\). Since \(0\) is a local minimum of \({\cal G}_{\lambda}\), there exists \(\kappa_{0}>0\) such that \({\cal G}_{\lambda}(0)\leq{\cal G}_{\lambda}(u)\) for \(\|u\|\leq\kappa_{0}\). Then the following two cases arise:
1. \(ZA\) (Zero Altitude): \(\inf\{{\cal G}_{\lambda}(w):\|w\|=\kappa,\ w\in X^{+}\}={\cal G}_{\lambda}(0)=0\) for all \(\kappa\in(0,\kappa_{0})\).
2. \(MP\) (Mountain Pass): There exists \(\kappa_{1}\in(0,\kappa_{0})\) such that \(\inf\{{\cal G}_{\lambda}(w):\|w\|=\kappa_{1},\ w\in X^{+}\}>{\cal G}_{\lambda}(0)\).
**Lemma 5.6**: _Let \(ZA\) hold for some \(\lambda\in(0,\Lambda^{a})\). Then there exists a nontrivial 'generalized' critical point \(w_{\lambda}\in X^{+}\) for \({\cal G}_{\lambda}\)._
**Proof.** Fix \(\kappa\in(0,\kappa_{0})\). Then there exists a sequence \(\{v_{k}\}_{k\in\mathbb{N}}\subset X^{+}\) with \(\|v_{k}\|=\kappa\) and \({\cal G}_{\lambda}(v_{k})\leq\frac{1}{k}\). Fix \(0<q<\frac{1}{2}\min\{\kappa_{0}-\kappa,\kappa\}\) and define \({\cal R}=\{w\in X^{+}:\kappa-q\leq\|w\|\leq\kappa+q\}\). Note that \({\cal R}\) is closed and \({\cal G}_{\lambda}\) is Lipschitz continuous on \({\cal R}\) (in view of Proposition 5.1). Thus by Ekeland's variational principle, there exists \(\{\ell_{k}\}_{k\in\mathbb{N}}\subset{\cal R}\) such that the following holds:
1. \({\cal G}_{\lambda}(\ell_{k})\leq{\cal G}_{\lambda}(v_{k})\leq\frac{1}{k}\),
2. \(\|\ell_{k}-v_{k}\|\leq\frac{1}{k}\) and
3. \({\cal G}_{\lambda}(\ell_{k})\leq{\cal G}_{\lambda}(l)+\frac{1}{k}\|l-\ell_{k}\|\) for all \(l\in{\cal R}\).
We note that
\[\kappa-\frac{1}{k}=\|v_{k}\|-\frac{1}{k}\leq\|\ell_{k}\|\leq\|v_{k}\|+\frac{1}{ k}=\kappa+\frac{1}{k}. \tag{5.7}\]
Therefore, for \(\aleph\in X^{+}\) we can choose \(\epsilon>0\) sufficiently small such that \(\ell_{k}+\epsilon(\aleph-\ell_{k})\in{\cal R}\) for all large \(k\). Then by item 3 above, we get
\[\frac{{\cal G}_{\lambda}(\ell_{k}+\epsilon(\aleph-\ell_{k}))-{\cal G}_{ \lambda}(\ell_{k})}{\epsilon}\geq\frac{-1}{k}\|\aleph-\ell_{k}\|.\]
Letting \(\epsilon\to 0^{+}\), we conclude
\[{\cal G}^{0}_{\lambda}(\ell_{k},\aleph-\ell_{k})\geq-\frac{1}{k}\|\aleph-\ell_ {k}\|\mbox{ for all }\aleph\in X^{+}.\]
From Remark 5.4, for any \(\aleph\in X^{+}\), there exists \(z_{k}^{\aleph-\ell_{k}}\in[\chi_{\{v_{\lambda}+\ell_{k}<a\}},\chi_{\{v_{\lambda}+ \ell_{k}\leq a\}}]\) such that
\[\int\limits_{\Omega}\nabla(v_{\lambda}+\ell_{k})\nabla(\aleph-\ell_{k})dx- \lambda\int\limits_{\Omega}\int\limits_{\Omega}\frac{(v_{\lambda}+\ell_{k})^{2^ {\mu}_{\mu}}(y)(v_{\lambda}+\ell_{k})^{2^{\mu}_{\mu}-1}(x)(\aleph-\ell_{k})(x)}{| x-y|^{\mu}}dxdy\]
\[-\lambda\int\limits_{\Omega}z_{k}^{\aleph-\ell_{k}}(v_{\lambda}+\ell_{k})^{- \gamma}(\aleph-\ell_{k})dx\geq-\frac{1}{k}\|\aleph-\ell_{k}\|. \tag{5.8}\]
Since \(\{\ell_{k}\}_{k\in\mathbb{N}}\) is bounded in \(H^{1}_{0}(\Omega)\), we may assume \(\ell_{k}\rightharpoonup w_{\lambda}\in X^{+}\) weakly in \(H^{1}_{0}(\Omega)\) as well as a.e. in \(\Omega\). In the following we show that \(w_{\lambda}\) is a solution of \((\hat{P}_{\lambda})\). For \(\varphi\in C_{c}^{\infty}(\Omega)\), set
\[\varphi_{k,\epsilon}=(\ell_{k}+\epsilon\varphi)^{-}\text{ and }\ell=\ell_{k}+\epsilon\varphi+\varphi_{k,\epsilon}=(\ell_{k}+\epsilon\varphi)^{+}.\]
Thus \(\ell\in X^{+}\) and with this choice of \(\ell\), (5.8) becomes
\[\int\limits_{\Omega}\nabla(v_{\lambda}+\ell_{k})\nabla(\epsilon\varphi+\varphi_{k,\epsilon})dx-\lambda\int\limits_{\Omega}\int\limits_{\Omega}\frac{(v_{\lambda}+\ell_{k})^{2^{*}_{\mu}}(y)(v_{\lambda}+\ell_{k})^{2^{*}_{\mu}-1}(x)(\epsilon\varphi+\varphi_{k,\epsilon})(x)}{|x-y|^{\mu}}dxdy\] \[-\lambda\int\limits_{\Omega}z_{k}^{\epsilon\varphi+\varphi_{k,\epsilon}}(v_{\lambda}+\ell_{k})^{-\gamma}(\epsilon\varphi+\varphi_{k,\epsilon})dx\geq-\frac{1}{k}\|\epsilon\varphi+\varphi_{k,\epsilon}\|, \tag{5.9}\]
where now in view of the fact \(\{x\in\Omega:(\epsilon\varphi+\varphi_{k,\epsilon})(x)\leq 0\}=\{x\in\Omega: \varphi(x)\leq 0\}\), we have
\[z_{k}^{\epsilon\varphi+\varphi_{k,\epsilon}}=\chi_{\{v_{\lambda}+\ell_{k}<a \}}+\chi_{\{v_{\lambda}+\ell_{k}=a\}\cap\{\varphi\leq 0\}}. \tag{5.10}\]
For a fixed \(\epsilon\), we now let \(k\rightarrow\infty\) and show that we can pass to the required limits in each of the terms in (5.9). We have \(\varphi_{k,\epsilon}\rightharpoonup(w_{\lambda}+\epsilon\varphi)^{-}\) in \(H^{1}_{0}(\Omega)\). It can be shown as in [14, Lemma 7.5.2] that
\[\int\limits_{\Omega}\nabla(v_{\lambda}+\ell_{k})\nabla(\epsilon\varphi+ \varphi_{k,\epsilon})dx\leq\int\limits_{\Omega}\nabla(v_{\lambda}+w_{\lambda}) \nabla(\epsilon\varphi+(w_{\lambda}+\epsilon\varphi)^{-})dx+o_{k}(1). \tag{5.11}\]
Clearly \(z_{k}^{\epsilon\varphi+\varphi_{k,\epsilon}}\) is bounded in \(L^{\infty}(\Omega)\) and hence, up to a subsequence, \(z_{k}^{\epsilon\varphi+\varphi_{k,\epsilon}}\rightarrow\tilde{z}\) weak* in \(L^{\infty}(\Omega)\). Since
\[(\epsilon\varphi+\varphi_{k,\epsilon})(v_{\lambda}+\ell_{k})^{-\gamma} \rightarrow(\epsilon\varphi+(w_{\lambda}+\epsilon\varphi)^{-})(v_{\lambda}+w _{\lambda})^{-\gamma}\text{ in }L^{1}(\Omega),\]
we conclude that
\[\int\limits_{\Omega}z_{k}^{\epsilon\varphi+\varphi_{k,\epsilon}}(\epsilon \varphi+\varphi_{k,\epsilon})(v_{\lambda}+\ell_{k})^{-\gamma}\rightarrow\int \limits_{\Omega}\tilde{z}(\epsilon\varphi+(w_{\lambda}+\epsilon\varphi)^{-})( v_{\lambda}+w_{\lambda})^{-\gamma}. \tag{5.12}\]
Now it is standard that
\[\int\limits_{\Omega}\int\limits_{\Omega} \frac{(v_{\lambda}+\ell_{k})^{2^{*}_{\mu}}(y)(v_{\lambda}+\ell_{ k})^{2^{*}_{\mu}-1}(x)\varphi(x)}{|x-y|^{\mu}}dxdy\] \[\rightarrow\int\limits_{\Omega}\int\limits_{\Omega}\frac{(v_{ \lambda}+w_{\lambda})^{2^{*}_{\mu}}(y)(v_{\lambda}+w_{\lambda})^{2^{*}_{\mu}-1 }(x)\varphi(x)}{|x-y|^{\mu}}dxdy. \tag{5.13}\]
Again since \(0\leq\varphi_{k,\epsilon}\leq\epsilon|\varphi|\), we see that
\[\int\limits_{\Omega}\int\limits_{\Omega} \frac{(v_{\lambda}+\ell_{k})^{2^{*}_{\mu}}(y)(v_{\lambda}+\ell_{k}) ^{2^{*}_{\mu}-1}(x)\varphi_{k,\epsilon}}{|x-y|^{\mu}}dxdy\] \[\rightarrow\int\limits_{\Omega}\int\limits_{\Omega}\frac{(v_{ \lambda}+w_{\lambda})^{2^{*}_{\mu}}(y)(v_{\lambda}+w_{\lambda})^{2^{*}_{\mu}-1 }(x)(w_{\lambda}+\epsilon\varphi)^{-}}{|x-y|^{\mu}}dxdy. \tag{5.14}\]
Combining (5.9)-(5.14), we conclude that
\[0\leq \int\limits_{\Omega}\nabla(v_{\lambda}+w_{\lambda})\nabla(\epsilon \varphi+(w_{\lambda}+\epsilon\varphi)^{-})dx-\lambda\int\limits_{\Omega}\tilde{ z}(v_{\lambda}+w_{\lambda})^{-\gamma}(\epsilon\varphi+(w_{\lambda}+\epsilon\varphi)^{-})dx\] \[-\lambda\int\limits_{\Omega}\int\limits_{\Omega}\frac{(v_{ \lambda}+w_{\lambda})^{2^{*}_{\mu}}(y)(v_{\lambda}+w_{\lambda})^{2^{*}_{\mu}-1 }(x)(\epsilon\varphi+(w_{\lambda}+\epsilon\varphi)^{-})(x)}{|x-y|^{\mu}}dxdy. \tag{5.15}\]
Note that \(0\leq\tilde{z}\leq 1\) a.e. in \(\Omega\) and \(\tilde{z}\) depends only upon \(\varphi\). Rewriting the above equation using the fact that \(v_{\lambda}\) solves \((\mathcal{P}_{\lambda})\) in the weak sense, we obtain
\[\int\limits_{\Omega}\nabla(v_{\lambda}+w_{\lambda})\nabla\varphi dx-\lambda\int\limits_{\Omega}\int\limits_{\Omega}\frac{(v_{\lambda}+w_{\lambda})^{2^{*}_{\mu}}(y)(v_{\lambda}+w_{\lambda})^{2^{*}_{\mu}-1}(x)\varphi(x)}{|x-y|^{\mu}}dxdy-\lambda\int\limits_{\Omega}\tilde{z}(v_{\lambda}+w_{\lambda})^{-\gamma}\varphi dx\] \[\geq-\frac{1}{\epsilon}\int\limits_{\Omega}\nabla w_{\lambda}\nabla(w_{\lambda}+\epsilon\varphi)^{-}dx+\frac{\lambda}{\epsilon}\int\limits_{\Omega}(\tilde{z}(v_{\lambda}+w_{\lambda})^{-\gamma}-\chi_{\{v_{\lambda}<a\}}v_{\lambda}^{-\gamma})(w_{\lambda}+\epsilon\varphi)^{-}dx\] \[+\frac{\lambda}{\epsilon}\int\limits_{\Omega}\int\limits_{\Omega}\frac{((v_{\lambda}+w_{\lambda})^{2^{*}_{\mu}}(y)(v_{\lambda}+w_{\lambda})^{2^{*}_{\mu}-1}(x)-v_{\lambda}^{2^{*}_{\mu}}(y)v_{\lambda}^{2^{*}_{\mu}-1}(x))(w_{\lambda}+\epsilon\varphi)^{-}}{|x-y|^{\mu}}dxdy. \tag{5.16}\]
Let \(\Omega_{\epsilon}=\{x\in\Omega:w_{\lambda}+\epsilon\varphi\leq 0\}\). Note that \(|\Omega_{\epsilon}|\to 0\) as \(\epsilon\to 0\) and hence
\[-\int\limits_{\Omega}\nabla w_{\lambda}\nabla(w_{\lambda}+\epsilon\varphi)^{ -}dx=\int\limits_{\Omega_{\epsilon}}|\nabla w_{\lambda}|^{2}+\epsilon\int \limits_{\Omega_{\epsilon}}\nabla w_{\lambda}\nabla\varphi dx\geq o(\epsilon).\]
Note that the last term in the RHS of (5.16) is nonnegative. Using the fact that \(v_{\lambda}^{-\gamma}\varphi\in L^{1}(\Omega)\), we see that
\[\int\limits_{\Omega}(\tilde{z}(v_{\lambda}+w_{\lambda})^{-\gamma}-\chi_{\{v_{ \lambda}<a\}}v_{\lambda}^{-\gamma})(w_{\lambda}+\epsilon\varphi)^{-}dx\leq \int\limits_{\Omega_{\epsilon}}2v_{\lambda}^{-\gamma}(w_{\lambda}+\epsilon \varphi)^{-}\leq 2\epsilon\int\limits_{\Omega_{\epsilon}}v_{\lambda}^{-\gamma}| \varphi|=o(\epsilon).\]
Letting \(\epsilon\to 0\) in (5.16), it can be seen that given any \(\varphi\in H^{1}_{0}(\Omega)\), there exists \(\tilde{z}=\tilde{z}^{\varphi}\) with \(0\leq\tilde{z}\leq 1\) such that
\[0\leq \int\limits_{\Omega}\nabla(v_{\lambda}+w_{\lambda})\nabla\varphi dx -\lambda\int\limits_{\Omega}\int\limits_{\Omega}\frac{(v_{\lambda}+w_{\lambda}) ^{2^{*}_{\mu}}(y)(v_{\lambda}+w_{\lambda})^{2^{*}_{\mu}-1}(x)\varphi(x)}{|x-y |^{\mu}}dxdy\] \[-\lambda\int\limits_{\Omega}\tilde{z}(v_{\lambda}+w_{\lambda})^{- \gamma}\varphi dx.\]
Moreover since \(\tilde{z}\) is obtained as the weak* limit of \(z_{k}^{\epsilon\varphi+\varphi_{k,\epsilon}}\), we get that \(\tilde{z}=1\) a.e. in \(\{v_{\lambda}+w_{\lambda}<a\}\) and \(\tilde{z}=0\) a.e. in \(\{v_{\lambda}+w_{\lambda}>a\}\). Therefore from Remark 5.5, \(w_{\lambda}\) is a generalized critical point of \(\mathcal{G}_{\lambda}\). It remains to show that \(w_{\lambda}\not\equiv 0\). Note that if \(\mathcal{G}_{\lambda}(w_{\lambda})\not=0\) we are done. So assume \(\mathcal{G}_{\lambda}(w_{\lambda})=0\). From (5.7), we see that \(\|\ell_{k}\|\geq\frac{\kappa}{2}\) for all large \(k\). Thus it is sufficient to show that \(\ell_{k}\to w_{\lambda}\) strongly in \(H^{1}_{0}(\Omega)\). Taking \(\aleph=w_{\lambda}\) in (5.8), we get
\[\int\limits_{\Omega}\nabla(v_{\lambda}+w_{\lambda})\nabla(w_{ \lambda}-\ell_{k})dx-\lambda\int\limits_{\Omega}\int\limits_{\Omega}\frac{(v_{ \lambda}+\ell_{k})^{2^{*}_{\mu}}(y)(v_{\lambda}+\ell_{k})^{2^{*}_{\mu}-1}(x)(w_ {\lambda}-\ell_{k})(x)}{|x-y|^{\mu}}dxdy\] \[-\lambda\int\limits_{\Omega}z_{k}^{w_{\lambda}-\ell_{k}}(v_{ \lambda}+\ell_{k})^{-\gamma}(w_{\lambda}-\ell_{k})dx+\frac{1}{k}\|w_{\lambda}- \ell_{k}\|\geq\|w_{\lambda}-\ell_{k}\|^{2}. \tag{5.17}\]
Now for any measurable set \(E\subset\Omega\), as \(v_{\lambda}\geq M_{1}\phi_{\gamma}\) and \(\ell_{k}\in X^{+}\), thanks to Hardy's inequality, we have
\[\int\limits_{E}z_{k}^{w_{\lambda}-\ell_{k}}|\ell_{k}-w_{\lambda}|(v _{\lambda}+\ell_{k})^{-\gamma}\leq M\int\limits_{E}\frac{|\ell_{k}-w_{\lambda}|}{v_{ \lambda}^{\gamma}}\leq M\int\limits_{E}\frac{|\ell_{k}-w_{\lambda}|}{\delta(x)^{ \frac{2\gamma}{1+\gamma}}}\leq M\int\limits_{E}\frac{|\ell_{k}-w_{\lambda}|}{ \delta(x)}\delta(x)^{\frac{1-\gamma}{1+\gamma}}\] \[\leq M\|\ell_{k}-w_{\lambda}\|\|\delta(x)^{\frac{1-\gamma}{1+\gamma}} \|_{L^{2}(E)}. \tag{5.18}\]
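The last step in (5.18) combines the Cauchy–Schwarz inequality on \(E\) with Hardy's inequality

\[\left\|\frac{u}{\delta}\right\|_{L^{2}(\Omega)}\leq M\|u\|\quad\text{for all }u\in H^{1}_{0}(\Omega);\]

note also that \(\delta^{\frac{2(1-\gamma)}{1+\gamma}}\in L^{1}(\Omega)\) precisely because \(\gamma<3\), which is where the restriction on \(\gamma\) made in this section enters.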
Since \(\ell_{k}\to w_{\lambda}\) pointwise a.e. in \(\Omega\), by Vitali's convergence theorem,
\[\int\limits_{E}z_{k}^{w_{\lambda}-\ell_{k}}|\ell_{k}-w_{\lambda}|(v_{\lambda} +\ell_{k})^{-\gamma}\to 0\text{ as }k\rightarrow\infty. \tag{5.19}\]
Also using Brezis-Lieb Lemma, we have
\[\int\limits_{\Omega}\int\limits_{\Omega}\frac{(v_{\lambda}+\ell_{k})^{2^{*}_{\mu}}(y)(v_{\lambda}+\ell_{k})^{2^{*}_{\mu}-1}(x)(w_{\lambda}-\ell_{k})(x)}{|x-y|^{\mu}}dxdy\] \[=\int\limits_{\Omega}\int\limits_{\Omega}\frac{(v_{\lambda}+\ell_{k})^{2^{*}_{\mu}}(y)(v_{\lambda}+\ell_{k})^{2^{*}_{\mu}-1}(x)(w_{\lambda}+v_{\lambda})(x)}{|x-y|^{\mu}}dxdy-\int\limits_{\Omega}\int\limits_{\Omega}\frac{(v_{\lambda}+\ell_{k})^{2^{*}_{\mu}}(y)(v_{\lambda}+\ell_{k})^{2^{*}_{\mu}}(x)}{|x-y|^{\mu}}dxdy\] \[=-\int\limits_{\Omega}\int\limits_{\Omega}\frac{(w_{\lambda}-\ell_{k})^{2^{*}_{\mu}}(y)(w_{\lambda}-\ell_{k})^{2^{*}_{\mu}}(x)}{|x-y|^{\mu}}dxdy+o_{k}(1). \tag{5.20}\]
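Here we have used the nonlocal version of the Brezis–Lieb lemma available in the literature on critical Choquard problems: if \(u_{k}\rightharpoonup u\) in \(H^{1}_{0}(\Omega)\) and \(u_{k}\to u\) a.e. in \(\Omega\), then

\[\int\limits_{\Omega}\int\limits_{\Omega}\frac{|u_{k}|^{2^{*}_{\mu}}(y)|u_{k}|^{2^{*}_{\mu}}(x)}{|x-y|^{\mu}}dxdy-\int\limits_{\Omega}\int\limits_{\Omega}\frac{|u_{k}-u|^{2^{*}_{\mu}}(y)|u_{k}-u|^{2^{*}_{\mu}}(x)}{|x-y|^{\mu}}dxdy\to\int\limits_{\Omega}\int\limits_{\Omega}\frac{|u|^{2^{*}_{\mu}}(y)|u|^{2^{*}_{\mu}}(x)}{|x-y|^{\mu}}dxdy.\]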
Now using (5.19) and (5.20) in (5.17), we get
\[\|w_{\lambda}-\ell_{k}\|^{2}-\lambda\int\limits_{\Omega}\int\limits_{\Omega} \frac{(w_{\lambda}-\ell_{k})^{2^{*}_{\mu}}(w_{\lambda}-\ell_{k})^{2^{*}_{\mu}} }{|x-y|^{\mu}}dxdy\leq o_{k}(1). \tag{5.21}\]
Again taking \(\aleph=2\ell_{k}\) in (5.8) and using the fact that \(v_{\lambda}\) solves \((\mathcal{P}_{\lambda})\), we obtain
\[-\frac{1}{k}\|\ell_{k}\|\leq \int\limits_{\Omega}\nabla v_{\lambda}\nabla\ell_{k}dx+\int\limits_{\Omega}|\nabla\ell_{k}|^{2}dx-\lambda\int\limits_{\Omega}\int\limits_{\Omega}\frac{(v_{\lambda}+\ell_{k})^{2^{*}_{\mu}}(v_{\lambda}+\ell_{k})^{2^{*}_{\mu}-1}\ell_{k}}{|x-y|^{\mu}}dxdy\] \[-\lambda\int\limits_{\Omega}z_{k}^{\ell_{k}}\ell_{k}(v_{\lambda}+\ell_{k})^{-\gamma}dx\] \[= \|\ell_{k}\|^{2}-\lambda\int\limits_{\Omega}\int\limits_{\Omega}\frac{((v_{\lambda}+\ell_{k})^{2^{*}_{\mu}}(v_{\lambda}+\ell_{k})^{2^{*}_{\mu}-1}-v_{\lambda}^{2^{*}_{\mu}}v_{\lambda}^{2^{*}_{\mu}-1})\ell_{k}}{|x-y|^{\mu}}dxdy\] \[+\lambda\int\limits_{\Omega}(\chi_{\{v_{\lambda}<a\}}v_{\lambda}^{-\gamma}-z_{k}^{\ell_{k}}(v_{\lambda}+\ell_{k})^{-\gamma})\ell_{k}dx\] \[= \|w_{\lambda}\|^{2}+\|\ell_{k}-w_{\lambda}\|^{2}-\lambda\int\limits_{\Omega}\int\limits_{\Omega}\frac{((v_{\lambda}+\ell_{k})^{2^{*}_{\mu}}(v_{\lambda}+\ell_{k})^{2^{*}_{\mu}-1}-v_{\lambda}^{2^{*}_{\mu}}v_{\lambda}^{2^{*}_{\mu}-1})\ell_{k}}{|x-y|^{\mu}}dxdy\] \[+\lambda\int\limits_{\Omega}(\chi_{\{v_{\lambda}<a\}}v_{\lambda}^{-\gamma}-z_{k}^{\ell_{k}}(v_{\lambda}+\ell_{k})^{-\gamma})\ell_{k}dx+o_{k}(1). \tag{5.22}\]
Now as \(w_{\lambda}\) solves \((\hat{P}_{\lambda})\), we have
\[\|w_{\lambda}\|^{2}=\lambda\int\limits_{\Omega}\int\limits_{\Omega}\frac{((v_{ \lambda}+w_{\lambda})^{2^{*}_{\mu}}(v_{\lambda}+w_{\lambda})^{2^{*}_{\mu}-1}-v_ {\lambda}^{2^{*}_{\mu}}v_{\lambda}^{2^{*}_{\mu}-1})w_{\lambda}}{|x-y|^{\mu}}dxdy\]
\[+\lambda\int\limits_{\Omega}(\chi_{\{v_{\lambda}+w_{\lambda}<a\}}(v_{ \lambda}+w_{\lambda})^{-\gamma}-\chi_{\{v_{\lambda}<a\}}v_{\lambda}^{-\gamma})w_{ \lambda}dx.\]
Using this identity in (5.22), we get
\[-\frac{1}{k}\|\ell_{k}\|\leq\|\ell_{k}-w_{\lambda}\|^{2}-\lambda \int\limits_{\Omega}\int\limits_{\Omega}\frac{((v_{\lambda}+\ell_{k})^{2^{*}_{ \mu}}(v_{\lambda}+\ell_{k})^{2^{*}_{\mu}-1}-v_{\lambda}^{2^{*}_{\mu}}v_{\lambda }^{2^{*}_{\mu}-1})\ell_{k}}{|x-y|^{\mu}}dxdy\] \[\quad+\lambda\int\limits_{\Omega}(\chi_{\{v_{\lambda}+w_{\lambda} <a\}}(v_{\lambda}+w_{\lambda})^{-\gamma}-\chi_{\{v_{\lambda}<a\}}v_{\lambda}^{ -\gamma})w_{\lambda}dx+\lambda\int\limits_{\Omega}(\chi_{\{v_{\lambda}<a\}}v_{ \lambda}^{-\gamma}-z_{k}^{\ell_{k}}(v_{\lambda}+\ell_{k})^{-\gamma})\ell_{k}dx\] \[\quad+\lambda\int\limits_{\Omega}\int\limits_{\Omega}\frac{((v_ {\lambda}+w_{\lambda})^{2^{*}_{\mu}}(v_{\lambda}+w_{\lambda})^{2^{*}_{\mu}-1}-v _{\lambda}^{2^{*}_{\mu}}v_{\lambda}^{2^{*}_{\mu}-1})w_{\lambda}}{|x-y|^{\mu}}dxdy +o_{k}(1). \tag{5.23}\]
Now as \(\ell_{k}\to w_{\lambda}\) pointwise a.e. in \(\Omega\) and \(|\{x\in\Omega:(v_{\lambda}+w_{\lambda})(x)=a\}|=0\), using estimates similar to (5.18) we have
\[\int\limits_{\Omega}(\chi_{\{v_{\lambda}+w_{\lambda}<a\}}(v_{\lambda}+w_{ \lambda})^{-\gamma}-\chi_{\{v_{\lambda}<a\}}v_{\lambda}^{-\gamma})w_{\lambda} dx+\int\limits_{\Omega}(\chi_{\{v_{\lambda}<a\}}v_{\lambda}^{-\gamma}-z_{k}^{ \ell_{k}}(v_{\lambda}+\ell_{k})^{-\gamma})\ell_{k}dx=o_{k}(1).\]
Using the above estimate and the Brezis–Lieb lemma, we obtain from (5.23)
\[o_{k}(1)\leq\|\ell_{k}-w_{\lambda}\|^{2}-\lambda\int\limits_{\Omega}\int \limits_{\Omega}\frac{(w_{\lambda}-\ell_{k})^{2^{*}_{\mu}}(y)(w_{\lambda}- \ell_{k})^{2^{*}_{\mu}}(x)}{|x-y|^{\mu}}dxdy. \tag{5.24}\]
Also as \(\mathcal{G}_{\lambda}(\ell_{k})\leq\frac{1}{k}\), we have
\[\mathcal{G}_{\lambda}(\ell_{k})= \frac{1}{2}\|\ell_{k}\|^{2}-\lambda\int\limits_{\Omega}F(x,\ell_{k})dx-\frac{\lambda}{22^{*}_{\mu}}\int\limits_{\Omega}\int\limits_{\Omega}\frac{(\ell_{k}+v_{\lambda})^{2^{*}_{\mu}}(y)(\ell_{k}+v_{\lambda})^{2^{*}_{\mu}}(x)}{|x-y|^{\mu}}dxdy\] \[-\lambda\int\limits_{\Omega}\int\limits_{\Omega}\frac{v_{\lambda}^{2^{*}_{\mu}}(y)v_{\lambda}^{2^{*}_{\mu}-1}(x)\ell_{k}(x)}{|x-y|^{\mu}}dxdy\leq\frac{1}{k}.\]
From the fact that \(\ell_{k}\rightharpoonup w_{\lambda}\) weakly in \(H^{1}_{0}(\Omega)\), this implies
\[\frac{1}{2}\|\ell_{k}-w_{\lambda}\|^{2}- \frac{\lambda}{22^{*}_{\mu}}\int\limits_{\Omega}\int\limits_{ \Omega}\frac{(w_{\lambda}-\ell_{k})^{2^{*}_{\mu}}(y)(w_{\lambda}-\ell_{k})^{2^ {*}_{\mu}}(x)}{|x-y|^{\mu}}dxdy+\mathcal{G}_{\lambda}(w_{\lambda})+\lambda\int \limits_{\Omega}F(x,w_{\lambda})dx\] \[-\lambda\int\limits_{\Omega}F(x,\ell_{k})dx\leq o_{k}(1). \tag{5.25}\]
Now using Hardy's inequality and Vitali's convergence theorem as in (5.18), one can check that \(\int\limits_{\Omega}F(x,\ell_{k})dx\to\int\limits_{\Omega}F(x,w_{\lambda})dx\) as \(k\to\infty\). Also, as \(\mathcal{G}_{\lambda}(w_{\lambda})=0\), (5.25) implies
\[\frac{1}{2}\|\ell_{k}-w_{\lambda}\|^{2}-\frac{\lambda}{22^{*}_{\mu}}\int\limits _{\Omega}\int\limits_{\Omega}\frac{(w_{\lambda}-\ell_{k})^{2^{*}_{\mu}}(y)(w_ {\lambda}-\ell_{k})^{2^{*}_{\mu}}(x)}{|x-y|^{\mu}}dxdy\leq o_{k}(1). \tag{5.26}\]
Now from (5.21), (5.24) and (5.26) we get \(\left(\frac{1}{2}-\frac{1}{22^{*}_{\mu}}\right)\|\ell_{k}-w_{\lambda}\|^{2}\leq o_{k}(1)\) and hence \(\ell_{k}\to w_{\lambda}\) in \(H^{1}_{0}(\Omega)\). \(\square\)
Next we consider the case \((MP)\). Here to deal with the critical nonlinearity, we use the following Talenti functions to study the critical level:
\[V_{\epsilon}(x)=S^{\frac{(n-\mu)(2-n)}{4(n-\mu+2)}}(C(n,\mu))^{\frac{2-n}{2(n-\mu +2)}}\left(\frac{\epsilon}{\epsilon^{2}+|x|^{2}}\right)^{\frac{n-2}{2}},\ 0< \epsilon<1.\]
Fix any \(y\in\Omega_{a}=\{x\in\Omega:v_{\lambda}(x)<a\}\). Let \(r>0\) be such that \(B_{4r}(y)\subset\Omega\). Now define \(\psi\in C_{c}^{\infty}(\Omega)\) such that \(0\leq\psi\leq 1\) in \(\mathbb{R}^{n}\), \(\psi\equiv 1\) in \(B_{r}(y)\) and \(\psi\equiv 0\) in \(\mathbb{R}^{n}\setminus B_{2r}(y)\). For each \(\epsilon>0\) and \(x\in\mathbb{R}^{n}\), we define \(w_{\epsilon}(x)=\psi(x)V_{\epsilon}(x)\). In the following, we set the notation
\[\left\|u\right\|_{HL}^{22^{*}_{\mu}}=\int\limits_{\Omega}\int\limits_{\Omega}\frac{|u(y)|^{2^{*}_{\mu}}|u(x)|^{2^{*}_{\mu}}}{|x-y|^{\mu}}dxdy.\]
**Proposition 5.7**: _Let \(n\geq 3\), \(0<\mu<n\) then the following holds:_
* \(\left\|w_{\epsilon}\right\|^{2}\leq S_{H,L}^{\frac{2n-\mu}{n-\mu+2}}+O(\epsilon^{n-2}).\)
* \(\left\|w_{\epsilon}\right\|_{HL}^{22^{*}_{\mu}}\leq S_{H,L}^{\frac{2n-\mu}{n-\mu+2}}+O(\epsilon^{n}).\)
* \(\left\|w_{\epsilon}\right\|_{HL}^{22^{*}_{\mu}}\geq S_{H,L}^{\frac{2n-\mu}{n-\mu+2}}-O(\epsilon^{n}).\)
**Proof.** For proof of part \((i)\), we refer to [29, Lemma 1.46]. For \((ii)\) and \((iii)\), see [16, Proposition 2.8]. \(\square\)
**Lemma 5.8**: _The following holds:_
* _If_ \(\mu<\min\{4,n\}\) _then for all_ \(\kappa<1\)_,_ \[\left\|v_{\lambda}+tw_{\epsilon}\right\|_{HL}^{22^{*}_{\mu}}\geq \left\|v_{\lambda}\right\|_{HL}^{22^{*}_{\mu}}+\left\|tw_{\epsilon}\right\|_{HL}^{22^{*}_{\mu}}+\widetilde{M}t^{22^{*}_{\mu}-1}\int\limits_{\Omega}\int\limits_{\Omega}\frac{w_{\epsilon}^{2^{*}_{\mu}}(y)w_{\epsilon}^{2^{*}_{\mu}-1}(x)v_{\lambda}(x)}{|x-y|^{\mu}}dxdy\] \[+22^{*}_{\mu}t\int\limits_{\Omega}\int\limits_{\Omega}\frac{v_{\lambda}^{2^{*}_{\mu}}(y)v_{\lambda}^{2^{*}_{\mu}-1}(x)w_{\epsilon}(x)}{|x-y|^{\mu}}dxdy-O(\epsilon^{(\frac{2n-\mu}{4})\kappa}).\]
* _There exists a constant_ \(T_{0}>0\) _such that_ \(\int\limits_{\Omega}\int\limits_{\Omega}\frac{w_{\epsilon}^{2^{*}_{\mu}}(y)w_ {\epsilon}^{2^{*}_{\mu}-1}(x)v_{\lambda}(x)}{|x-y|^{\mu}}dxdy\geq\widetilde{C} T_{0}\epsilon^{\frac{n-2}{2}}.\)__
**Proof.** For a proof, see the proof of [17, Lemma 4.2]. \(\square\)
Next we have the following lemma:
**Lemma 5.9**: _There exists \(\epsilon_{0}\) and \(R_{0}\geq 1\) such that_
* \(\mathcal{G}_{\lambda}(Rw_{\epsilon})<\mathcal{G}_{\lambda}(0)=0\) _for all_ \(\epsilon\in(0,\epsilon_{0})\) _and_ \(R\geq R_{0}\)_._
* \(\mathcal{G}_{\lambda}(tR_{0}w_{\epsilon})<\frac{1}{2}\left(\frac{n-\mu+2}{2n- \mu}\right)\left(\frac{S_{H,L}^{\frac{2n-\mu}{n-\mu+2}}}{\lambda^{\frac{n-2}{ n-\mu+2}}}\right)\) _for all_ \(t\in(0,1]\) _and_ \(\epsilon\in(0,\epsilon_{0})\)_._
**Proof.** Noting that for \(w\in X^{+}\), \(J_{\lambda}(v_{\lambda}+w)=J_{\lambda}(v_{\lambda})+\mathcal{G}_{\lambda}(w)\), it is equivalent to show that
1. \(J_{\lambda}(v_{\lambda}+Rw_{\epsilon})<J_{\lambda}(v_{\lambda})\) for all \(\epsilon\in(0,\epsilon_{0})\) and \(R\geq R_{0}\).
2. \(J_{\lambda}(v_{\lambda}+tR_{0}w_{\epsilon})<J_{\lambda}(v_{\lambda})+\frac{1}{2} \left(\frac{n-\mu+2}{2n-\mu}\right)\left(\frac{S_{H,L}^{\frac{2n-\mu}{n-\mu+2}}}{\lambda^{\frac{n-2}{n-\mu+2}}}\right)\) for all \(t\in(0,1]\) and \(\epsilon\in(0,\epsilon_{0})\).
Now, using the fact that \(v_{\lambda}\) solves \((\mathcal{P}_{\lambda})\), we first estimate \(J_{\lambda}(v_{\lambda}+tRw_{\epsilon})\) as follows:
\[J_{\lambda}(v_{\lambda}+tRw_{\epsilon})= \frac{1}{2}\int\limits_{\Omega}|\nabla(v_{\lambda}+tRw_{\epsilon })|^{2}dx-\lambda\int\limits_{\Omega}F(v_{\lambda}+tRw_{\epsilon})dx-\frac{ \lambda}{22^{*}_{\mu}}\|v_{\lambda}+tRw_{\epsilon}\|_{HL}^{22^{*}_{\mu}}\] \[= \frac{1}{2}\int\limits_{\Omega}|\nabla v_{\lambda}|^{2}dx+\frac{ R^{2}t^{2}}{2}\int\limits_{\Omega}|\nabla w_{\epsilon}|^{2}dx+tR\int\limits_{ \Omega}\nabla v_{\lambda}\nabla w_{\epsilon}dx\] \[-\lambda\int\limits_{\Omega}F(v_{\lambda}+tRw_{\epsilon})dx- \frac{\lambda}{22^{*}_{\mu}}\|v_{\lambda}+tRw_{\epsilon}\|_{HL}^{22^{*}_{\mu}}\] \[= \frac{1}{2}\int\limits_{\Omega}|\nabla v_{\lambda}|^{2}dx+\frac{ R^{2}t^{2}}{2}\int\limits_{\Omega}|\nabla w_{\epsilon}|^{2}dx+\lambda tR\int \limits_{\Omega}\chi_{\{v_{\lambda}<a\}}v_{\lambda}^{-\gamma}w_{\epsilon}dx- \frac{\lambda}{22^{*}_{\mu}}\|v_{\lambda}+tRw_{\epsilon}\|_{HL}^{22^{*}_{\mu}}\] \[+\lambda tR\int\limits_{\Omega}\int\limits_{\Omega}\frac{v_{ \lambda}^{2^{*}_{\mu}}(y)v_{\lambda}^{2^{*}_{\mu}-1}(x)w_{\epsilon}(x)}{|x-y|^{ \mu}}dxdy-\lambda\int\limits_{\Omega}F(v_{\lambda}+tRw_{\epsilon})dx\] \[\leq J_{\lambda}(v_{\lambda})+\frac{1}{2}\|tRw_{\epsilon}\|^{2}-\frac{ \lambda}{22^{*}_{\mu}}\|tRw_{\epsilon}\|_{HL}^{22^{*}_{\mu}}-\frac{\lambda \widetilde{M}t^{22^{*}_{\mu}-1}}{22^{*}_{\mu}}\int\limits_{\Omega}\int\limits _{\Omega}\frac{w_{\epsilon}^{2^{*}_{\mu}}(y)w_{\epsilon}^{2^{*}_{\mu}-1}(x)v_{\lambda}(x)}{|x-y|^ {\mu}}dxdy\] \[+\lambda\int\limits_{\Omega}\left(\chi_{\{v_{\lambda}<a\}}v_{ \lambda}^{-\gamma}w_{\epsilon}+F(v_{\lambda})-F(v_{\lambda}+tRw_{\epsilon}) \right)+O(\epsilon^{(\frac{2n-\mu}{4})\kappa}).\]
Using Proposition 5.7 and Lemma 5.8, we obtain
\[J_{\lambda}(v_{\lambda}+tRw_{\epsilon})\] \[\leq J_{\lambda}(v_{\lambda})+\frac{t^{2}R^{2}}{2}\left(S_{H,L}^{ \frac{2n-\mu}{n-\mu+2}}+O(\epsilon^{\frac{n-2}{2}})\right)-\frac{\lambda t^{2 2^{*}_{\mu}}R^{22^{*}_{\mu}}}{22^{*}_{\mu}}\left(S_{H,L}^{\frac{2n-\mu}{n-\mu+2 }}-O(\epsilon^{n})\right)+O(\epsilon^{(\frac{2n-\mu}{4})\kappa})\] \[+\lambda\int\limits_{\Omega}\left(\chi_{\{v_{\lambda}<a\}}v_{ \lambda}^{-\gamma}w_{\epsilon}+F(v_{\lambda})-F(v_{\lambda}+tRw_{\epsilon}) \right)-\frac{\lambda\widetilde{M}t^{22^{*}_{\mu}-1}}{22^{*}_{\mu}}T_{0} \epsilon^{\frac{n-2}{2}}. \tag{5.27}\]
Finally, estimating the last integral in (5.27) along similar lines to [12, Lemma 3.2] (pp. 1671-1672), we conclude, by taking \(\kappa=\frac{2}{2^{*}_{\mu}}\), that
\[J_{\lambda}(v_{\lambda}+tRw_{\epsilon})\] \[\leq J_{\lambda}(v_{\lambda})+\frac{t^{2}R^{2}}{2}\left(S_{H,L}^{ \frac{2n-\mu}{n-\mu+2}}+O(\epsilon^{\frac{n-2}{2}})\right)-\frac{\lambda t^{2 2^{*}_{\mu}}R^{22^{*}_{\mu}}}{22^{*}_{\mu}}\left(S_{H,L}^{\frac{2n-\mu}{n-\mu+2 }}-O(\epsilon^{n})\right)+O(\epsilon^{(\frac{2n-\mu}{4})\kappa})\] \[-\frac{\lambda\widetilde{M}t^{22^{*}_{\mu}-1}}{22^{*}_{\mu}}T_{0} \epsilon^{\frac{n-2}{2}}+o(\epsilon^{\frac{n-2}{2}}).\]
The lemma now follows by arguing as in [15, Lemma 6.4]. \(\square\)
**Lemma 5.10**: _Let \((MP)\) hold. Then there exists a solution \(w_{\lambda}\in X^{+}\) of \((\hat{P}_{\lambda})\) and hence a second solution of \((\mathcal{P}_{\lambda})\)._
**Proof.** Define a complete metric space \((M,d)\) as
\[M=\{\zeta\in C([0,1], X^{+}):\ \zeta(0)=0,\ \|\zeta(1)\|>\kappa_{1},\ \mathcal{G}_{\lambda}(\zeta(1))<0\},\] \[d(\zeta,\eta)=\max_{t\in[0,1]}\|\zeta(t)-\eta(t)\|.\]
From \((i)\) of Lemma 5.9, if \(R\) is chosen large, it is clear that \(M\) is non-empty. Let
\[c_{0}=\inf_{\zeta\in M}\max_{t\in[0,1]}\mathcal{G}_{\lambda}(\zeta(t)).\]
Then \((ii)\) of Lemma 5.9 and \((MP)\) implies that
\[0<c_{0}<\frac{1}{2}\left(\frac{n-\mu+2}{2n-\mu}\right)\left( \frac{S_{H,L}^{\frac{2n-\mu}{n-\mu+2}}}{\lambda^{\frac{n-2}{n-\mu+2}}}\right). \tag{5.28}\]
Define
\[\Phi(\zeta)=\max_{t\in[0,1]}\mathcal{G}_{\lambda}(\zeta(t)),\ \zeta\in M.\]
By applying Ekeland's variational principle to the above functional we get a sequence \(\{\zeta_{k}\}_{k\in\mathbb{N}}\subset M\) such that
* \(\max_{t\in[0,1]}\mathcal{G}_{\lambda}(\zeta_{k}(t))<c_{0}+\frac{1}{k}\).
* \(\max_{t\in[0,1]}\mathcal{G}_{\lambda}(\zeta_{k}(t))\leq\max_{t\in[0,1]} \mathcal{G}_{\lambda}(\zeta(t))+\frac{1}{k}\max_{t\in[0,1]}\|\zeta(t)-\zeta_{ k}(t)\|\) for all \(\zeta\in M\).
Setting \(\Gamma_{k}=\{t\in[0,1]:\mathcal{G}_{\lambda}(\zeta_{k}(t))=\max_{s\in[0,1]} \mathcal{G}_{\lambda}(\zeta_{k}(s))\}\), we obtain, arguing as in [4, p. 659], some \(t_{k}\in\Gamma_{k}\) such that for \(w_{k}=\zeta_{k}(t_{k})\) and \(\aleph\in X^{+}\), we have
\[\mathcal{G}_{\lambda}^{0}\left(w_{k},\frac{\aleph-w_{k}}{\max\{1, \|\aleph-w_{k}\|\}}\right)\geq-\frac{1}{k} \tag{5.29}\]
and
\[\mathcal{G}_{\lambda}(w_{k})\to c_{0}\ \text{as}\ k\to\infty. \tag{5.30}\]
From (5.30) and using the fact that \(F(w_{k})\leq 0\), we have
\[c_{0}+o_{k}(1)\geq \frac{1}{2}\|w_{k}\|^{2}-\frac{\lambda}{22^{*}_{\mu}}\|w_{k}+v_{ \lambda}\|_{HL}^{22^{*}_{\mu}}-\lambda\int\limits_{\Omega}\int\limits_{\Omega }\frac{v_{\lambda}^{2^{*}_{\mu}}(y)v_{\lambda}^{2^{*}_{\mu}-1}(x)w_{k}(x)}{|x -y|^{\mu}}dxdy. \tag{5.31}\]
Again substituting \(\aleph=2w_{k}+v_{\lambda}\) in (5.29), by Remark 5.4 we obtain
\[z_{k}^{2w_{k}+v_{\lambda}}=\tilde{z}_{k}\in[\chi_{\{w_{k}+v_{ \lambda}<a\}},\chi_{\{w_{k}+v_{\lambda}\leq a\}}]\]
such that
\[\|w_{k}+v_{\lambda}\|^{2}-\lambda\|w_{k}+v_{\lambda}\|_{HL}^{22^{*}_{\mu}}- \lambda\int\limits_{\Omega}\tilde{z}_{k}(w_{k}+v_{\lambda})^{1-\gamma}\geq- \frac{1}{k}\max\{1,\|w_{k}+v_{\lambda}\|\}. \tag{5.32}\]
From (5.31) and (5.32) we derive
\[\frac{1}{2}\|w_{k}\|^{2}-\frac{1}{2^{*}_{\mu}}\|w_{k}\|^{2}\leq M_{1}+M_{2}\|w _{k}\|,\]
where \(M_{1},\ M_{2}\) are positive constants. This implies that \(\{w_{k}\}_{k\in\mathbb{N}}\) is a bounded sequence, and hence \(w_{k}\rightharpoonup w_{\lambda}\) weakly in \(H^{1}_{0}(\Omega)\). As in the zero-altitude case, we see that \(w_{\lambda}\) solves \((\hat{P}_{\lambda})\), so that \(\mathcal{G}_{\lambda}(w_{\lambda})=0\). Now we claim that \(w_{k}\to w_{\lambda}\) in \(H^{1}_{0}(\Omega)\) and that \(w_{\lambda}\not\equiv 0\).
As \(\|w_{k}\|\leq M\), from (5.29), for \(\aleph\in X^{+}\) we have \(\mathcal{G}^{0}_{\lambda}(w_{k},\aleph-w_{k})\geq-\frac{M_{1}}{k}(1+\|\aleph \|)=o_{k}(1)\). Then, as in the zero-altitude case, we get
\[\|w_{k}-w_{\lambda}\|^{2}-\lambda\|w_{k}-w_{\lambda}\|_{HL}^{22^{*}_{\mu}}=o_{k}(1). \tag{5.33}\]
Also using Brezis-Lieb, we have
\[\mathcal{G}_{\lambda}(w_{k})= \frac{1}{2}\|w_{k}\|^{2}-\lambda\int\limits_{\Omega}F(w_{k})dx- \frac{\lambda}{22^{*}_{\mu}}\|w_{k}+v_{\lambda}\|_{HL}^{22^{*}_{\mu}}-\lambda \int\limits_{\Omega}\int\limits_{\Omega}\frac{v_{\lambda}^{2^{*}_{\mu}}(y)v_{ \lambda}^{2^{*}_{\mu}-1}(x)w_{k}(x)}{|x-y|^{\mu}}dxdy\] \[= \frac{1}{2}\|w_{k}-w_{\lambda}\|^{2}+\frac{1}{2}\|w_{\lambda}\|^ {2}+\int\limits_{\Omega}\nabla(w_{k}-w_{\lambda})\nabla w_{\lambda}dx-\lambda \int\limits_{\Omega}F(w_{k})dx\] \[-\frac{\lambda}{22^{*}_{\mu}}\|w_{k}+v_{\lambda}\|_{HL}^{22^{*}_ {\mu}}-\lambda\int\limits_{\Omega}\int\limits_{\Omega}\frac{v_{\lambda}^{2^{* }_{\mu}}(y)v_{\lambda}^{2^{*}_{\mu}-1}(x)w_{k}(x)}{|x-y|^{\mu}}dxdy\] \[= \frac{1}{2}\|w_{k}-w_{\lambda}\|^{2}-\frac{\lambda}{22^{*}_{\mu} }\|w_{k}-w_{\lambda}\|_{HL}^{22^{*}_{\mu}}+\frac{1}{2}\|w_{\lambda}\|^{2}- \lambda\int\limits_{\Omega}F(w_{k})dx\] \[-\frac{\lambda}{22^{*}_{\mu}}\|w_{\lambda}+v_{\lambda}\|_{HL}^{22^ {*}_{\mu}}-\lambda\int\limits_{\Omega}\int\limits_{\Omega}\frac{v_{\lambda}^{ 2^{*}_{\mu}}(y)v_{\lambda}^{2^{*}_{\mu}-1}(x)w_{k}(x)}{|x-y|^{\mu}}dxdy+o_{k}(1)\] \[= \frac{1}{2}\|w_{k}-w_{\lambda}\|^{2}-\frac{\lambda}{22^{*}_{\mu} }\|w_{k}-w_{\lambda}\|_{HL}^{22^{*}_{\mu}}+\mathcal{G}_{\lambda}(w_{\lambda}) +\lambda\int\limits_{\Omega}(F(w_{\lambda})-F(w_{k}))dx+o_{k}(1)\] \[= \frac{1}{2}\|w_{k}-w_{\lambda}\|^{2}-\frac{\lambda}{22^{*}_{\mu} }\|w_{k}-w_{\lambda}\|_{HL}^{22^{*}_{\mu}}+\mathcal{G}_{\lambda}(w_{\lambda}) +o_{k}(1). \tag{5.34}\]
Now as \(\mathcal{G}_{\lambda}(w_{\lambda})=0\), using (5.28), (5.30), (5.33) and (5.34), we get
\[\|w_{k}-w_{\lambda}\|^{2}=\left(\frac{2(2n-\mu)}{n-\mu+2}\right)c_{0}+o_{k}(1 )<\frac{S_{H,L}^{\frac{2n-\mu}{n-\mu+2}}}{\lambda^{\frac{n-2}{n-\mu+2}}}+o_{k}(1). \tag{5.35}\]
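Indeed, the coefficient in (5.35) follows from the elementary identity (recalling that \(2^{*}_{\mu}=\frac{2n-\mu}{n-2}\)):
\[\frac{1}{2}-\frac{1}{22^{*}_{\mu}}=\frac{1}{2}-\frac{n-2}{2(2n-\mu)}=\frac{(2n-\mu)-(n-2)}{2(2n-\mu)}=\frac{n-\mu+2}{2(2n-\mu)},\]
combined with (5.33), which converts the term \(\frac{\lambda}{22^{*}_{\mu}}\|w_{k}-w_{\lambda}\|_{HL}^{22^{*}_{\mu}}\) in (5.34) into \(\frac{1}{22^{*}_{\mu}}\|w_{k}-w_{\lambda}\|^{2}+o_{k}(1)\).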
Also by Hardy-Littlewood-Sobolev inequality, we have
\[\|w_{k}-w_{\lambda}\|^{2}\left(1-\lambda S_{H,L}^{-2^{*}_{\mu}}\|w_{k}-w_{ \lambda}\|^{22^{*}_{\mu}-2}\right)\leq\|w_{k}-w_{\lambda}\|^{2}-\lambda\|w_{k}- w_{\lambda}\|_{HL}^{22^{*}_{\mu}}=o_{k}(1). \tag{5.36}\]
Thus combining (5.35) and (5.36), we obtain \(w_{k}\to w_{\lambda}\) in \(H^{1}_{0}(\Omega)\). This completes the proof.
We are now ready to give the
**Proof of Theorem 1.1**: The existence of the first solution \(v_{\lambda}\) for all \(\lambda\in(0,\Lambda^{a})\) and \(\gamma>0\) follows from Lemma 4.2. The existence of second solution \(w_{\lambda}\) for the same range of \(\lambda\) and \(0<\gamma<3\) follows from Lemmata 5.6 and 5.10 keeping in view Remark 5.5. \(\Box\)
## 6 Regularity Results
In this section, we will discuss the regularity results. First let us recall an important inequality for nonlocal nonlinearities by Moroz and Van Schaftingen [26].
**Lemma 6.1**: _Let \(n\geq 2\) and \(\mu\in(0,n)\). If \(H,\ K\in L^{\frac{2n}{n-\mu+2}}(\mathbb{R}^{n})+L^{\frac{2n}{n-\mu}}(\mathbb{R }^{n})\) and \(\theta\) satisfies \(1-\frac{\mu}{n}<\theta<1+\frac{\mu}{n}\), then for any \(\epsilon>0\) there exists \(M_{\epsilon,\theta}\in\mathbb{R}\) such that for any \(u\in H^{1}(\mathbb{R}^{n})\),_
\[\int\limits_{\mathbb{R}^{n}}\left(|x|^{-\mu}*(H|u|^{\theta})\right)K|u|^{2- \theta}dx\leq\epsilon^{2}\int\limits_{\mathbb{R}^{n}}|\nabla u|^{2}dx+M_{ \epsilon,\theta}\int\limits_{\mathbb{R}^{n}}|u|^{2}dx.\]
We have the following lemma, which provides the \(L^{\infty}\) estimates and boundary behaviour for the weak solutions of \((\mathcal{P}_{\lambda})\).
**Lemma 6.2**: _Let \(u\) be a nonnegative weak solution of \((\mathcal{P}_{\lambda})\). Then \(u\in L^{\infty}(\Omega)\)._
**Proof.** Let \(u\) be a nonnegative weak solution of \((\mathcal{P}_{\lambda})\). Let \(\Upsilon:\mathbb{R}\to[0,1]\) be a \(C^{\infty}(\mathbb{R})\) convex increasing function such that \(\Upsilon^{\prime}(t)\leq 1\) for all \(t\in[0,1]\) and \(\Upsilon^{\prime}(t)=1\) when \(t\geq 1\). Define \(\Upsilon_{\epsilon}(t)=\epsilon\Upsilon(\frac{t}{\epsilon})\). Then, using the fact that \(\Upsilon_{\epsilon}\) is smooth, we obtain \(\Upsilon_{\epsilon}\to(t-1)^{+}\) uniformly as \(\epsilon\to 0\). This implies
\[-\Delta\Upsilon_{\epsilon}(u)\leq \Upsilon^{\prime}_{\epsilon}(u)(-\Delta)u\leq\chi_{\{u>1\}}(- \Delta)u\] \[\leq \chi_{\{u>1\}}\left(\lambda\chi_{\{u<a\}}u^{-\gamma}+\lambda \left(\int\limits_{\Omega}\frac{u^{2^{*}_{\mu}}(y)}{|x-y|^{\mu}}\right)u^{2^{* }_{\mu}-1}\right)\] \[\leq M\left(1+\left(\int\limits_{\Omega}\frac{u^{2^{*}_{\mu}}(y)}{|x- y|^{\mu}}\right)u^{2^{*}_{\mu}-1}(x)\right).\]
Hence, as \(\epsilon\to 0\), we deduce that
\[-\Delta(u-1)^{+}\leq M\left(1+\left(\int\limits_{\Omega}\frac{u^{2^{*}_{\mu}} (y)}{|x-y|^{\mu}}\right)u^{2^{*}_{\mu}-1}(x)\right). \tag{6.1}\]
For \(\tau>0\), we define \(u_{\tau}=\min\{u,\tau\}\). Since \(w=|u_{\tau}|^{q-2}u_{\tau}\in H^{1}_{0}(\Omega)\) for \(q\geq 2\), we can take it as a test function in (6.1). Now
\[\frac{4(q-1)}{q^{2}}\int\limits_{\Omega}|\nabla(u_{\tau})^{\frac {q}{2}}|^{2}dx= (q-1)\int\limits_{\Omega}|u_{\tau}|^{q-2}|\nabla u_{\tau}|^{2}dx\] \[\leq \int\limits_{\Omega}\nabla u\nabla w=\int\limits_{\{u\geq 1\}} \nabla(u-1)^{+}\nabla w+\int\limits_{\{0\leq u\leq 1\}}\nabla u\nabla w. \tag{6.2}\]
Note that for any \(\tau>1\),
\[\int\limits_{\{0\leq u\leq 1\}}\nabla u\nabla w=(q-1)\int\limits_{\{0 \leq u\leq 1\}}|u_{\tau}|^{q-2}|\nabla u_{\tau}|^{2}\leq m_{1}\|u\|^{2}=m_{2}, \tag{6.3}\]
and \(m_{2}\) is independent of \(\tau\). Taking into account (6.1) and (6.3), we obtain from (6.2)
\[\frac{4(q-1)}{q^{2}}\int\limits_{\Omega}|\nabla(u_{\tau})^{\frac{q }{2}}|^{2}dx\leq M\int\limits_{\Omega}\int\limits_{\Omega}\frac{u^{2^{*}_{\mu} }(y)}{|x-y|^{\mu}}dyu^{2^{*}_{\mu}-1}(x)u^{q-1}_{\tau}(x)dx+M\int\limits_{ \Omega}|u_{\tau}|^{q-1}dx+m_{2}.\]
If \(2\leq q<\frac{2n}{n-\mu}\), using Lemma 6.1 with \(\theta=\frac{2}{q}\), there exists \(M_{1}>0\) such that
\[\int\limits_{\Omega}\int\limits_{\Omega}\frac{u_{\tau}^{2^{*}_{\mu}}(y)}{|x-y|^{\mu}}dy\,u^{2^{*}_{\mu}-1}_{\tau}(x)u^{q-1}_{\tau}(x)dx\leq \frac{2(q-1)}{Mq^{2}}\int\limits_{\Omega}|\nabla(u_{\tau})^{\frac{q}{2}}|^{2} dx+M_{1}\int\limits_{\Omega}|u^{\frac{q}{2}}_{\tau}|^{2}dx.\]
Since \(u_{\tau}\leq u\), we have
\[\frac{2(q-1)}{q^{2}}\int\limits_{\Omega}|\nabla(u_{\tau})^{\frac {q}{2}}|^{2}dx\leq M_{1}\int\limits_{\Omega}u^{q}dx+M\int\limits_{A_{\tau}}\int \limits_{\Omega}\frac{u^{2^{*}_{\mu}-1}(y)u^{q-1}(y)}{|x-y|^{\mu}}dyu^{2^{*}_ {\mu}-1}(x)u^{2^{*}_{\mu}}(x)dx\] \[+M\int\limits_{\Omega}u^{q-1}dx+m_{2},\]
where \(A_{\tau}=\{x\in\Omega:u>\tau\}\).
Since \(2\leq q<\frac{2n}{n-\mu}\), applying the Hardy-Littlewood-Sobolev inequality again, we have
\[\int\limits_{A_{\tau}}\int\limits_{\Omega}\frac{u^{2^{*}_{\mu}-1} (y)u^{q-1}(y)}{|x-y|^{\mu}}dyu^{2^{*}_{\mu}-1}(x)u^{2^{*}_{\mu}}(x)dx\leq M_{ 2}\left(\int\limits_{\Omega}|u^{2^{*}_{\mu}-1}u^{q-1}|^{r}dx\right)^{\frac{1}{ r}}\left(\int\limits_{A_{\tau}}|u^{2^{*}_{\mu}}|^{l}\right)^{\frac{1}{l}},\]
where \(\frac{1}{r}=1-\frac{n-\mu}{2n}-\frac{1}{q}\) and \(\frac{1}{l}=\frac{n-\mu}{2n}+\frac{1}{q}\). By Hölder's inequality, if \(u\in L^{q}(\Omega)\), then \(u^{2^{*}_{\mu}}\in L^{l}(\Omega)\) and \(|u|^{2^{*}_{\mu}-1}|u|^{q-1}\in L^{r}(\Omega)\), whence by Lebesgue's dominated convergence theorem
\[\lim\limits_{\tau\rightarrow\infty}\int\limits_{A_{\tau}}\int\limits_{\Omega }\frac{u^{2^{*}_{\mu}-1}(y)u^{q-1}(y)}{|x-y|^{\mu}}dyu^{2^{*}_{\mu}-1}(x)u^{2^ {*}_{\mu}}(x)dx=0.\]
Finally by Sobolev embedding theory, we obtain that there exists a constant \(\hat{M}\), independent of \(\tau\), such that
\[\left(\int\limits_{\Omega}|u_{\tau}|^{\frac{qn}{n-2}}\right)^{1 -\frac{2}{n}}\leq M_{1}\int\limits_{\Omega}u^{q}dx+M\int\limits_{\Omega}u^{q -1}+\hat{M}.\]
Letting \(\tau\rightarrow\infty\) we conclude that \(u\in L^{\frac{qn}{n-2}}(\Omega)\). By iterating over \(q\) a finite number of times we cover the range \(q\in[2,\frac{2n}{n-\mu})\). So the weak solution satisfies \(u\in L^{q}(\Omega)\) for every \(q\in[2,\frac{2n^{2}}{(n-\mu)(n-2)}]\). Thus, \(u^{2^{*}_{\mu}}\in L^{q}(\Omega)\) for every \(q\in[\frac{2(n-2)}{2n-\mu},\frac{2n^{2}}{(n-\mu)(2n-\mu)})\). Since \(\frac{2(n-2)}{2n-\mu}<\frac{n}{n-\mu}<\frac{2n^{2}}{(n-\mu)(2n-\mu)}\), we have
\[\int\limits_{\Omega}\frac{u^{2^{*}_{\mu}}}{|x-y|^{\mu}}dy\in L^{ \infty}(\Omega)\]
and so from [3, Theorem 1.16], we have \((u-1)^{+}\in L^{\infty}(\Omega)\), which implies that \(u\in L^{\infty}(\Omega)\). \(\square\)
Finally, we give the proof of
**Proof of Theorem 1.2:** We first prove the boundary behavior. For this, we observe that \(u\) is a supersolution of \((\mathcal{S}_{\lambda,\epsilon})\) for any \(\epsilon\). Then, by applying Theorem 3.5, we get \(u_{\lambda,\epsilon}\leq u\) a.e. in \(\Omega\). Furthermore, thanks to Lemma 6.2, we see that \(u\) is a subsolution of the following problem
\[-\Delta w=\lambda w^{-\gamma}+K,\ w>0\ \text{in}\ \Omega,\ w=0\ \text{on}\ \partial\Omega,\]
where \(K=\lambda K_{1}|u|_{\infty}^{22^{*}_{\mu}-1}\), and thus \(u\leq w\) a.e. in \(\Omega\); i.e., we have \(u_{\lambda,\epsilon}\leq u\leq w\) a.e. in \(\Omega\). Now, since both \(u_{\lambda,\epsilon}\sim\phi_{\gamma}\) and \(w\sim\phi_{\gamma}\), we have \(u\sim\phi_{\gamma}\). Finally, the Hölder continuity results follow directly from Lemma 6.2 and [18, Theorem 1.2]. \(\Box\)
**Acknowledgement:** The first author thanks the CSIR (India) for financial support in the form of a Senior Research Fellowship, Grant Number 09/086(1406)/2019-EMR-I. The second author is partially funded by IFCAM (Indo-French Centre for Applied Mathematics) IRL CNRS 3494.
|
2309.06884 | Autoencoder-Based Visual Anomaly Localization for Manufacturing Quality
Control | Manufacturing industries require efficient and voluminous production of
high-quality finished goods. In the context of Industry 4.0, visual anomaly
detection poses an optimistic solution for automatically controlled product
quality with high precision. In general, automation based on computer vision is
a promising solution to prevent bottlenecks at the product quality checkpoint.
We considered recent advancements in machine learning to improve visual defect
localization, but challenges persist in obtaining a balanced feature set and
database of the wide variety of defects occurring in the production line.
Hence, this paper proposes a defect localizing autoencoder with unsupervised
class selection by clustering with k-means the features extracted from a
pre-trained VGG16 network. Moreover, the selected classes of defects are
augmented with natural wild textures to simulate artificial defects. The study
demonstrates the effectiveness of the defect localizing autoencoder with
unsupervised class selection for improving defect detection in manufacturing
industries. The proposed methodology shows promising results with precise and
accurate localization of quality defects on melamine-faced boards for the
furniture industry. Incorporating artificial defects into the training data
shows significant potential for practical implementation in real-world quality
control scenarios. | Devang Mehta, Noah Klarmann | 2023-09-13T11:18:15Z | http://arxiv.org/abs/2309.06884v2 | # Autoencoder-based Visual Anomaly Localization for Manufacturing Quality Control
###### Abstract
Manufacturing industries require efficient and voluminous production of high-quality finished goods. In the context of Industry 4.0, visual anomaly detection poses an optimistic solution for automatically controlled product quality with high precision. In general, automation based on computer vision is a promising solution to prevent bottlenecks at the product quality checkpoint. We considered recent advancements in machine learning to improve visual defect localization, but challenges persist in obtaining a balanced feature set and database of the wide variety of defects occurring in the production line. Hence, this paper proposes a defect localizing autoencoder with unsupervised class selection by clustering with k-means the features extracted from a pre-trained VGG16 network. Moreover, the selected classes of defects are augmented with natural wild textures to simulate artificial defects. The study demonstrates the effectiveness of the defect localizing autoencoder with unsupervised class selection for improving defect detection in manufacturing industries. The proposed methodology shows promising results with precise and accurate localization of quality defects on melamine-faced boards for the furniture industry. Incorporating artificial defects into the training data shows significant potential for practical implementation in real-world quality control scenarios.
_Keywords:_ anomaly detection; artificial defect simulation; autoencoder; computer vision; defect detection; defect localization; deep learning; deep neural network; deep neural network-based defect detection; feature extraction; industry 4.0; unsupervised clustering; manufacturing quality control; machine vision; unsupervised class selection; unsupervised learning; visual inspection systems; visual product quality control
## 1 Introduction
Artificial Intelligence (AI) promises to be a revolutionary force in the 21st century. It has gained significant attention across various sectors, with extensive research, development, and production of AI-driven products and services. The widespread adoption, ease of use, and flexibility of AI technologies have propelled its evolution. This research aims to bring this revolutionary wave to visual inspection in furniture manufacturing.
In the manufacturing industry, the visual inspection of products plays a crucial role at different stages of the production process; ensuring the quality of the final product is essential to meet aesthetic and functional requirements. At the University of Applied Sciences Rosenheim, the proto_lab1 laboratory is an innovative Industry 4.0 platform that produces furniture with state-of-the-art machinery in a fully digitalized way, providing an ideal ecosystem for applying AI use cases. The goal of applying data-based methodologies is to establish a high-quality, efficient, intelligent system that improves the production cycle holistically. In this context, integrated autonomous systems, particularly computer vision (CV) systems, have proven especially promising. Detecting product flaws is denoted as anomaly detection, alternatively known as artifact, novelty, or outlier detection.
Li et al. [1] present an overview of machine vision applications in furniture manufacturing from 2011 to 2022. Most studies rely on classical methods to perform quality checks. These traditional methodologies are well-established, straightforward, and computationally optimized, but they usually show limitations. Limitations may occur due to variations in the environment, specific manual step-wise feature engineering, heterogeneous images regarding size and quality, and the complexity of the image data. While widely utilized for elementary implementations and tasks, classical methods cannot keep pace with technological advancements in camera sensor resolution, optics, and Deep Learning (DL) techniques [2]. DL methods leverage large amounts of data, requiring minimal expertise and automatic fine-tuning. DL demonstrates flexibility to adapt to different domains and datasets, even when the relevant data is limited - e.g., by making use of transfer learning [3]. In the scope of our problem, the dataset's images are sharp and consistent in feature space. Classical CV methods often require algorithmic computation for each type of feature or class, making them expensive to implement. Anomaly localization, however, can be efficiently scaled up by leveraging the advantages of neural networks. Several DL methodologies have shown remarkable predictive performance. In some cases, hybrid approaches combining traditional CV methods and DL techniques might be the best choice.
As manufacturing systems excel in optimization, productivity, and efficiency, the number of defective products drops enormously. Consequently, current research and development efforts focus on unsupervised anomaly localization, with generative deep neural networks gaining significant prominence. Initially, a popular approach was to train models using only non-anomalous classes and then classify anomalous and non-anomalous instances. However, this methodology requires additional information for generating or reconstructing non-anomalous images. Modern challenges include accurately localizing anomalies within a low-variance feature set in an image dataset. Achieving an intelligent, thorough, fast, robust, and reliable CV system necessitates the integration of cutting-edge technologies such as deep learning and, in cases where depth information is required, 3D point cloud analysis [1].
This paper presents a hybrid approach for localizing surface defects on melamine-faced boards. In contrast to directly focusing the camera on specific dimensions of boards, our image dataset consists of high-resolution images captured with a camera having a fixed field of view, allowing inspection of boards of various proportions. Such a model is achieved by slicing the high-resolution images, selecting classes of interest from an imbalanced dataset via feature extraction and k-means clustering to account for the minute variations in feature frequency, and finally simulating anomalies so that the autoencoder model can predict all artifacts.
The subsequent sections delve deeper into the literature, methodology, results, and implications of this research, providing an understanding of the approaches utilized in localizing anomalies in furniture manufacturing.
## 2 Technical Background
In the following, we present an overview of the core methodologies and principles that underpin our research. Readers well-versed in these methodologies may skip to the next section without losing continuity. For those less acquainted with these concepts, this section serves as a short introduction, illuminating the methods and tools utilized throughout our work.
### K-Means Clustering
The unsupervised k-means algorithm, as introduced by Hartigan and Wong [4], divides \(M\) points in \(N\) dimensions into \(K\) clusters such that, within every cluster, the sum of squares is minimized. Typically, k-means executes as a preprocessing step before the start of the main algorithm. Clustering has a fairly simple implementation, and the algorithm is guaranteed to converge. It also scales easily to new data: mini-batches save computation time; clusters can be merged when new data lies close to existing centroids; existing centroids can initialize subsequent runs; online k-means handles continuous data; and incremental Principal Component Analysis (PCA) helps when the dimensionality of the data is very high. The k-means algorithm degrades in performance when the data contains several outliers, resulting in incorrect clustering. Because the algorithm minimizes the sum of the squared distances between the data points and the centroids, some data would be clustered tightly in high-density regions, while other regions would have data spread farther apart. Hence, it is worth noting that the k-means algorithm must be generalized for robust performance when the data is of varying density.
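As a minimal sketch of this clustering step (using scikit-learn; the synthetic data and variable names below are ours, standing in for the feature vectors described later):

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-in for M feature vectors in N dimensions.
rng = np.random.default_rng(seed=42)
features = rng.normal(size=(1000, 64))  # M = 1000 points, N = 64 dimensions

# Partition into K clusters by minimizing the within-cluster sum of squares.
kmeans = KMeans(n_clusters=7, n_init=10, random_state=42)
labels = kmeans.fit_predict(features)

# Cluster sizes expose class imbalance: dominant clusters have high counts.
print(np.bincount(labels))
```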
### Segmentation
Image segmentation aims to simplify and represent an image by dividing it into multiple regions denoted as segments. Each segment represents pixels that share the same features. This process helps to analyze the image data more meaningfully in a spatial sense. Unlike clustering, segmentation also considers boundaries and structures. Various segmentation algorithms use specific conditions to segment the image. Felzenszwalb and Huttenlocher [5] introduced a fast algorithm that generates segments based on a boundary that separates the regions. By applying Felzenszwalb's segmentation with \(scale=25\), \(sigma=1\), and \(min\_size=500\) to an image of braid from the DTD (Figure 1(a)), the algorithm finds segments as shown in Figure 1(b).
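A brief sketch of this call using scikit-image (the file path is a placeholder for a DTD braid sample):

```python
from skimage import io
from skimage.segmentation import felzenszwalb, mark_boundaries

# Load a texture image; the path is a placeholder for a DTD "braided" sample.
image = io.imread("dtd/images/braided/braided_0001.jpg")

# Graph-based segmentation with the parameters quoted above.
segments = felzenszwalb(image, scale=25, sigma=1, min_size=500)

# Overlay the segment boundaries on the original image for inspection.
boundaries = mark_boundaries(image, segments)
print(f"number of segments: {segments.max() + 1}")
```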
### SSIM
Structural Similarity (SSIM), introduced by Wang et al. [6], is a commonly used algorithm to determine the similarity or difference between two images. The algorithm is designed specifically for grayscale images, considering inherent properties such as luminance, contrast, and structure. It adopts mechanisms of human vision to effectively identify structural information. The SSIM score ranges from zero to one, where unity indicates perfect similarity and zero indicates complete dissimilarity. For example, the SSIM score between the original image (Figure 2(a)) and its version blurred with a kernel of \((99,99)\) (Figure 2(b)) is 0.9646.
Figure 1: DTD (a) image of braid, and (b) its Felzenszwalb segmented boundaries.
Figure 2: Comparing melamine-faced board sample image (a) actual vs. (b) blurred image.
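A small sketch reproducing this comparison (with OpenCV and scikit-image; the file name is a placeholder, and for the boards in Figure 2 the reported score is 0.9646):

```python
import cv2
from skimage.metrics import structural_similarity as ssim

# SSIM is defined for grayscale images, so load the sample accordingly.
original = cv2.imread("board_sample.png", cv2.IMREAD_GRAYSCALE)

# Blur with a (99, 99) Gaussian kernel, as in the comparison above.
blurred = cv2.GaussianBlur(original, (99, 99), 0)

# Score in [0, 1]: 1.0 means structurally identical images.
score = ssim(original, blurred)
print(f"SSIM: {score:.4f}")
```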
### Autoencoders
The autoencoder is a class of unsupervised, generative learning models in which the output shape is the same as the input shape [7]. Such a model allows the network to learn compact representations that are more useful than the raw, unprocessed data, thereby capturing features and patterns while ignoring noise. The network has an encoder that maps the information to a latent representation, followed by a decoder that reconstructs the original data from the latent space. The model is optimized by minimizing the MSE loss between the target and the reconstructed image. Plain convolutional autoencoders (CAEs) are usually not very promising for localizing anomalies; using a denoising autoencoder (DAE) with an SSIM-based loss typically increases the performance of a CAE.
### Generative Adversarial Networks
Generative Adversarial Networks (GANs), as described by Goodfellow et al. [8], are generative models based on game theory, where the players (the generator and the discriminator) each try to beat the other through strategic adjustments. In the context of image data, the generator creates images from random noise that resemble real images, and the discriminator distinguishes between real and generated images based on a probability score. In short, GANs learn the probability distribution of the data to generate synthetic data. GAN-based methods fail when the discriminator gets stuck in a local minimum and the generator repeatedly produces a particular output, making it difficult to reliably reconstruct anomaly-free images, especially textures [9].
### U-Net
U-Net, as introduced by Ronneberger et al. [10], was initially designed for segmentation in the field of medical imaging. It depends on data augmentation and can be trained even with only a few images. The main difference between an autoencoder and a U-Net architecture is the implementation of skip connections. Skip connections help the network maintain high-resolution features that would otherwise be lost during downsampling.
### Feature Extraction with Pre-Trained CNN
A deep neural network can extract feature descriptors even for a completely different dataset. This approach is known as feature extraction. Figure 3 depicts the architecture of the popular VGG16 [11] deep neural network. The first fully connected layer of the model provides the features of the input image. These feature vectors, computed for multiple dataset images, can then be used for simple clustering or classification as post-processing.
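A minimal sketch of this extraction with torchvision, assuming pre-trained ImageNet weights; truncating the classifier at its first fully connected layer yields a 4096-dimensional descriptor:

```python
import torch
from PIL import Image
from torchvision import models, transforms

# Pre-trained VGG16; keep only the first fully connected layer of the head.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
vgg.classifier = torch.nn.Sequential(*list(vgg.classifier.children())[:1])
vgg.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.new("RGB", (289, 289))      # stand-in for one cropped tile
with torch.no_grad():
    x = preprocess(img).unsqueeze(0)    # shape (1, 3, 224, 224)
    feature_vector = vgg(x).squeeze(0)  # 4096-dimensional descriptor
```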
## 3 Related Work
Anomaly detection tackled with AI methodologies broadly falls into supervised, semi-supervised, and unsupervised models. The lack of anomaly ground-truth values in an image dataset is the foremost reason for researchers to employ unsupervised anomaly localization models. Due to the lack of data on anomalous classes, modern research and development in unsupervised anomaly localization relies on reconstructive or generative deep neural networks. A popular method that follows Goodfellow et al. [12] is to train the model with anomaly-free classes only, and the difference between the reconstruction and
Figure 3: The VGG16 model architecture.
the input data would localize the anomalies. The methodology suffers from insufficient information on generating or reconstructing an image of a non-anomalous class.
Based on the criteria of the type of methodology incorporated to try to localize anomalies, we have two subsections that provide a better understanding of the existing research. The studies follow two approaches, one where the input data is encoded in some representation and then fed for training, and the other where the focus is mainly on identifying the difference with the reconstruction.
### Encoding-based Anomaly Localization
The presence of knots in wood plays a crucial role in assessing the eventual quality/strength of the end product. Kamal et al. [13] introduces a unique technique, employing feed-forward back-propagation neural networks with Laws Texture Energy Measures (LTEM) [14] as input parameters. This innovative approach aims to predict knot defects in wood through a supervised classification model. While the model performs well for multi-class classification, it struggles with generalization when dealing with unfamiliar defects. Nakanishi et al. [15] proposes an alternative approach using autoencoders and a weighted frequency domain loss to identify various wood anomalies effectively. The study reveals that the weighted frequency domain loss significantly improves the autoencoder's ability to detect anomalies by emphasizing specific frequency components. However, we need further investigation to assess its effectiveness on real-time datasets and, as mentioned by the authors, on high-frequency components.
### Reconstruction-based Anomaly Localization
The widely recognized MVTec AD dataset, established by Bergmann et al. [16], serves as the standard benchmark for unsupervised anomaly detection and localization. Many researchers utilize the autoencoder architecture for feature learning on this dataset and employ image inpainting techniques to enhance the robustness of generalized predictions. DRAEM, as introduced by Zavrtanik et al. [17], is a notable model that is a reconstructive-discriminative sub-network trained on the MVTec AD dataset specifically for visual surface anomaly detection. DRAEM incorporates a Perlin noise generator [18] to produce random shapes of artificial anomalies, while the shape content derives from randomly augmented DTD [19] images. DRAEM outperforms several other methods, achieving impressive mean detection/localization; e.g., an AUROC score of 0.98 and 0.973, respectively, could be achieved. It improves the accuracy of anomaly localization and achieves nearly fully supervised results on surface defect datasets.
Another novel model named CS-ResNet, proposed by Zhang et al. [20], is introduced for PCB cosmetic defect detection using convolutional neural networks. It addresses issues related to unbalanced class distribution and misclassification cost by incorporating a cost-sensitive adjustment layer in the standard ResNet [21]. This modification results in higher accuracy and lower misclassification cost compared to Auto-VRS [22].
DeRA, as introduced by Hida et al. [9] and akin to DRAEM, is an unsupervised anomaly detection method tested on the MVTec AD dataset. It leverages Felzenszwalb's graph-based segmentation method [5] on segments of the DTD [19] dataset. These segments are superimposed on non-anomalous images with random transparency, creating a more diverse and complex anomalous dataset. The DeRA neural network combines U-net [10] and autoencoder architectures. The performance of DeRA is notably enhanced by incorporating the Neural Style Transfer (NST) loss [23] using a pre-trained VGG19 [11] network as a discriminator. This loss function, trained on the ImageNet dataset [24], aids in improving the anomaly detection results. DeRA achieves a pixel-wise mean AUROC of 0.97, surpassing methodologies described by Bergmann et al. [16]. However, this method is limited to grayscale images and performs poorly on transparent artifacts.
In contrast to DeRA, Schluter et al. [25] propose a self-supervision task called Natural Synthetic Anomalies (NSA) using Poisson image editing [26] to generate a wide range of realistic synthetic anomalies for anomaly detection and localization. The NSA architecture,
a ResNet-based encoder-decoder, achieves mean image-level and pixel-level AUROC scores of 0.972 and 0.963 on the MVTec AD dataset. While outperforming particular methods that do not use additional data, NSA lacks robustness when dealing with minute anomalies.
To compare different autoencoder models for real-time anomaly detection, Mujkic et al. [27] evaluated the following three models: Denoising Autoencoder (DAE) [28], Semi-Supervised Autoencoder (SSAE), and Variational Autoencoder (VQ-VAE) [29], against the baseline YOLOv5 [30]. Although YOLOv5 slightly outperformed SSAE in the AUROC score (0.945 vs. 0.8849), SSAE demonstrated better performance in critical cases.
Lastly, Huang et al. [31] introduced a self-supervised masking (SSM) method for anomaly localization. They use random masking to augment each image, creating a diverse set of training triplets [32], enabling the autoencoder to reconstruct masks of various shapes and sizes during training. For inference, a progressive mask refinement method gradually reveals non-anomalous regions and eventually localizes anomalies. On the MVTec AD dataset, the SSM achieves a mean AUROC score of 0.92.
Our research leverages the autoencoder architecture, incorporating an artificial anomaly overlay and a loss function akin to DeRA. Unlike conventional approaches, our pipeline accommodates diverse features within our training dataset. Given the real-world limitations of training on high-resolution images directly, we adopt the sliding window technique to capture and comprehend image features effectively. Implementing the sliding window approach, we address memory constraints and augment the richness of our training data. This implementation additionally allows our model to handle all dimensions of boards without an additional camera system. Different from CS-ResNet, we integrate k-means clustering to identify and prioritize non-frequent image features that could be overshadowed by dominant classes, refining the focus of our model and rectifying class imbalances. The resulting balanced training data propels the overall efficacy of our approach. Similar to the methodology in DRAEM, we generate artificial anomalies akin to DeRA using Felzenszwalb's method. Our workflow offers an innovative and pragmatic solution for advancing anomaly detection within complex image datasets.
## 4 Methodology
This paper aims to provide a novel solution for localizing artifacts on a melamine-faced board surface on the basis of a highly imbalanced dataset. The methodology deals with the frequently arising problem of insufficient anomalous data while requiring high-resolution images for anomaly identification. The first step in the approach is to capture high-resolution images of the melamine-faced boards. We then employ the sliding window technique to slice the high-resolution images into small crops, as detailed in Subsections 4.1 and 4.2. The features of the sliced images are extracted with the help of a pre-trained VGG16 and clustered into groups with the k-means algorithm, as described in Subsection 4.3. As a separate process, Subsection 4.4 explains the artificial anomaly generation by extracting segments from the images of the DTD. Finally, the artificial anomalies are dynamically and randomly overlaid on the fly during training. Figure 4 represents the pipeline of the method, while the following subsections detail its elements.
### Imaging System and Dataset Gathering
The demonstrator operates under controlled conditions, featuring an enclosed lightproof setup with a precise artificial light source, camera position, and focus. A graphical user interface (GUI) running on a Raspberry Pi 4 [33] triggers image capture when a product sample is positioned beneath the camera lens. Our data collection encompassed a multitude of captures, each revealing the melamine-faced board's distinctive facets at diverse locations and orientations within the enclosure. The resulting image dataset includes a lateral viewpoint (Figure 5(a)) alongside a transverse one (Figure 5(b)). In total, the dataset comprises 348 images for our analysis and model development.
### Image Dataset Analysis
The demonstrator captures images with a resolution of \((4912,3684)\). Each melamine-faced board has a barcode label glued on its surface. However, the label's feature space resembles the melamine-faced board's surface feature space, presenting a critical issue. Hence, the barcode labels require manual removal beforehand. The seven distinct classes meticulously considered in the images are as follows: (a) background, (b) surface, (c) drilling holes, (d) edges, (e) corners, (f) slots, and (g) combinations of edges, holes, and slots. The board shown in Figure 5 was carefully selected for this endeavor because it includes the most diverse classes; each board exhibits unique features.
Training the network directly on such high-resolution images is challenging. Hence, crop windows of the image (Figure 6), each of resolution \((289,289)\), are generated with a precise stride of \((67,97)\) and zero padding. This approach resulted in 2'520 cropped tiles derived from a single high-resolution image and 876'960 tiles extracted from all 348 images.
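A sketch of this slicing step (the function name is ours; the window, stride, and padding values are those quoted above, which for a 4912 x 3684 capture yield a 70 x 36 grid):

```python
import numpy as np

def slide_crops(image, win=289, stride_x=67, stride_y=97):
    """Slice a high-resolution image into overlapping win x win tiles,
    zero-padding the borders so the grid covers the full frame."""
    h, w = image.shape[:2]
    pad_y = (-(h - win)) % stride_y  # padding needed so the last row fits
    pad_x = (-(w - win)) % stride_x
    padded = np.pad(image, ((0, pad_y), (0, pad_x)) + ((0, 0),) * (image.ndim - 2))
    tiles = []
    for y in range(0, padded.shape[0] - win + 1, stride_y):
        for x in range(0, padded.shape[1] - win + 1, stride_x):
            tiles.append(padded[y:y + win, x:x + win])
    return np.stack(tiles)

# A 4912 x 3684 capture yields 70 x 36 = 2'520 tiles with these settings.
tiles = slide_crops(np.zeros((3684, 4912), dtype=np.uint8))
print(tiles.shape)  # (2520, 289, 289)
```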
The image dataset demonstrated exceptional consistency in the feature space, featuring only minute variance. This consistency helped overcome the inherent challenge of lens distortion, where features farther from the optical axis appear bulged, elongated, or stretched (barrel distortion), without requiring camera calibration.
### Unsupervised Class Selection
The prototype image dataset features noticeable class imbalance, as surface and background tiles dominate the remaining classes. To mitigate this issue, we adopt a
Figure 4: Unsupervised defect localizing training pipeline.
Figure 5: Prototype setup training image.
Figure 6: Sample windows of furniture finishing (a) Surface, (b) Edge, (c) Corner, (d) Background, (e) Edge & groove, (f) Hole & edge, (g) Groove
two-step approach: First, we extract feature vectors for each cropped tile using the pre-trained VGG16 model with ImageNet1k_v1 weights. This step significantly contributes to resolving the class imbalance by enhancing the representation of each class. Next, we utilize the unsupervised k-means algorithm to cluster the obtained feature vectors into seven distinct clusters. In doing so, we identify inherent patterns and group samples by their similarity. Moreover, we deliberately drop the cluster with the highest frequency, corresponding to the surface class, and the one with the second-highest frequency, associated with the background class. By discarding these clusters, we ensure that only the most relevant and informative ones are retained, thus facilitating a more balanced and meaningful representation of the dataset.
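A condensed sketch of this selection step (the feature file path is a placeholder; the feature extraction itself follows the VGG16 snippet in Section 2.7):

```python
import numpy as np
from sklearn.cluster import KMeans

# feature_vectors: (num_tiles, 4096) array of VGG16 descriptors.
feature_vectors = np.load("tile_features.npy")  # placeholder path

kmeans = KMeans(n_clusters=7, n_init=10, random_state=42)
cluster_ids = kmeans.fit_predict(feature_vectors)

# Drop the two most frequent clusters (surface and background in our data)
# and keep the tiles belonging to the remaining five clusters.
counts = np.bincount(cluster_ids, minlength=7)
dropped = np.argsort(counts)[-2:]
keep_mask = ~np.isin(cluster_ids, dropped)
selected_indices = np.flatnonzero(keep_mask)
```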
### Image Dataset Augmentation
Data augmentation in computer vision is a process employed to increase the diversity of a training dataset by applying transformations such as rotation, flipping, scaling, and cropping to the original images. This technique helps to enhance the model's ability to generalize by exposing it to different variations of the data, thereby improving performance and reducing overfitting. We augment the data by extracting segments from the Describable Textures Dataset (DTD) [19]. The DTD is a collection of natural patterns and textures, serving as a foundation for developing better methods to recognize and understand texture attributes in images. Notably, the DTD contains image resolutions higher than the cropped-out tiles resulting from the sliding window technique. We employ Felzenszwalb's segmentation algorithm to extract segments from the DTD with the specific parameter values detailed in Table 1. Next, the resulting image segments are randomly selected and superimposed onto the tile, as demonstrated in Figure 7. This approach enables the generation of diverse artificial anomaly variations, significantly contributing to the overall performance of the anomaly detection model and increasing its robustness for detecting anomalous patterns in various real-world scenarios. This augmentation strategy is instrumental in improving the model's generalization capabilities and practical applicability in real-world anomaly detection tasks. We additionally employ standard data augmentation techniques, as detailed in Table 2. During the training process, these augmentations are randomly applied to each image, enriching the diversity of the data and enhancing the model's ability to generalize effectively.
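A sketch of one possible overlay routine (the function name and details are ours; both images are assumed grayscale, and the Table 1 parameters drive the segmentation):

```python
import numpy as np
from skimage.segmentation import felzenszwalb

def overlay_artificial_anomaly(tile, dtd_image, rng):
    """Cut a random Felzenszwalb segment from a grayscale DTD texture and
    paste it at a random position on a grayscale tile; returns the
    augmented tile and a boolean mask of the overlaid pixels."""
    segments = felzenszwalb(dtd_image, scale=2, sigma=5, min_size=100)
    seg_id = rng.integers(segments.max() + 1)
    ys, xs = np.nonzero(segments == seg_id)
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    if h >= tile.shape[0] or w >= tile.shape[1]:
        return tile, np.zeros_like(tile, dtype=bool)  # segment too large
    oy = rng.integers(tile.shape[0] - h)              # random paste offset
    ox = rng.integers(tile.shape[1] - w)
    out = tile.copy()
    mask = np.zeros_like(tile, dtype=bool)
    out[ys - ys.min() + oy, xs - xs.min() + ox] = dtd_image[ys, xs]
    mask[ys - ys.min() + oy, xs - xs.min() + ox] = True
    return out, mask
```

The returned mask corresponds to the set of artificial-anomaly pixels that the overlay term of the loss function in Section 4.6 operates on.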
### Network Architecture
Inspired by the work of Hida et al. [9], we propose the network architecture depicted in Figure 8, with the principal objective of encoding an input image of resolution 289 x 289 into a compact latent representation of size 512 x 1 x 1 using the encoder. After the encoding,
\begin{table}
\begin{tabular}{c c} \hline \hline
**Parameter** & **Value** \\ \hline scale & 2 \\ sigma & 5 \\ min\_size & 100 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Felzenszwalb's parameter values
\begin{table}
\begin{tabular}{c c} \hline \hline
**Augmentation** & **Value** \\ \hline horizontal flip & probability = 0.5 \\ vertical flip & [0.98, 1.5] \\ brightness range & [1, 1.2] \\ \hline \hline \end{tabular}
\end{table}
Table 2: Augmentation values
Figure 7: DTD (left) & artificial anomaly overlaid on crop window of prototype dataset (right)
the decoder reconstructs the original input image. As indicated in Figure 8, the autoencoder features skip connections that bypass the middle part of the network. In general, skip connections enable the network to retain finer details from the input more easily, thus facilitating the learning of a more accurate reconstruction of the original data. By bypassing specific layers, skip connections can also alleviate the vanishing gradient problem, which is particularly beneficial when training deeper autoencoders.
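As a minimal PyTorch sketch of an encoder-decoder with a skip connection for 289 x 289 grayscale tiles; the actual layer counts and channel widths of our network are not reproduced here, only the pattern:

```python
import torch
import torch.nn as nn

class SkipAutoencoder(nn.Module):
    """Toy encoder-decoder with one skip connection; illustrative only."""
    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.dec2 = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec1 = nn.ConvTranspose2d(64, 1, 3, stride=2, padding=1)

    def forward(self, x):
        s1 = self.enc1(x)                    # 1 x 289 x 289 -> 32 x 145 x 145
        z = self.enc2(s1)                    # coarse representation, 64 x 73 x 73
        d2 = self.dec2(z)                    # back to 32 x 145 x 145
        d2 = torch.cat([d2, s1], dim=1)      # skip connection preserves detail
        return torch.sigmoid(self.dec1(d2))  # 1 x 289 x 289 reconstruction

out = SkipAutoencoder()(torch.zeros(1, 1, 289, 289))
print(out.shape)  # torch.Size([1, 1, 289, 289])
```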
### Loss Function
At the outset, we implemented the Mean Squared Error (MSE) loss function alone, which yielded unsatisfactory results. However, we observed a significant improvement after introducing the Structural Similarity Index (SSIM) metric as an additional term in the loss function. This addition notably enhanced the preservation of intricate image details, resulting in greater accuracy. Given that SSIM is defined for grayscale images, we adjusted the dataset accordingly and used grayscale images for training. To further increase overall performance, we integrated an extra MSE loss component tailored to the overlay regions. This addition proved crucial in emphasizing precise reconstruction within these critical areas, where the anomalies exist, and its influence leads to improved results, particularly in overlay regions. Combining these insights, we develop the final version of the loss function (see Equation 1), which combines MSE (Equation 2), SSIM (Equation 3), and MSE at overlay (Equation 4) for training. The individual loss weights in Equation 1, \(\lambda_{MSE}\), \(\lambda_{SSIM}\), and \(\lambda_{MSE\_artificial\_anomaly}\), are all set to one. Capitalizing on the strengths of these three distinct loss metrics, we train the model to achieve high accuracy and robustness in addressing artificial anomalies and irregularities such as unexpected variations, inconsistencies, abnormal patterns, noise, errors, or outliers in the data. This comprehensive approach ensures the model's proficiency in handling complex challenges within the dataset.
\[\mathit{Loss}= \lambda_{MSE}L_{MSE}(Y,\hat{Y})+\] \[\lambda_{SSIM}(1-L_{SSIM}(Y,\hat{Y}))+\] \[\lambda_{MSE\_artificial\_anomaly}L_{MSE\_artificial\_anomaly}(Y, \hat{Y}) \tag{1}\] \[L_{MSE}(Y,\hat{Y})= \frac{1}{p}\sum_{i=1}^{p}(Y_{i}-\hat{Y}_{i})^{2}\] (2) \[L_{SSIM}(Y,\hat{Y})= \frac{1}{q}\sum_{j=1}^{q}\frac{(2\mu_{Y_{j}}\mu_{\hat{Y}_{j}}+c_{ 1})(2\sigma_{Y_{j}\hat{Y}_{j}}+c_{2})}{(\mu_{Y_{j}}^{2}+\mu_{\hat{Y}_{j}}^{2}+c _{1})(\sigma_{Y_{j}}^{2}+\sigma_{\hat{Y}_{j}}^{2}+c_{2})},\] (3) \[\mathit{where}\ c_{1}=0.01,\ \mathit{and}\ c_{2}=0.03\]
Figure 8: Autoencoder architecture
\[L_{MSE\_artificial\_anomaly}(Y,\hat{Y})= \frac{1}{r}\sum_{k=1}^{r}(Y_{k}-\hat{Y}_{k})^{2}, \tag{4}\] \[\text{where }k\in\{artificial\;anomaly\;pixels\}\]
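A sketch of Equation (1) in PyTorch (using the pytorch-msssim package as one possible SSIM implementation; the function name and mask argument are ours):

```python
import torch
import torch.nn.functional as F
from pytorch_msssim import ssim  # one of several available SSIM implementations

def combined_loss(y_hat, y, anomaly_mask, w_mse=1.0, w_ssim=1.0, w_overlay=1.0):
    """Eq. (1): MSE + (1 - SSIM) + MSE restricted to the overlaid pixels.
    y_hat, y: (N, 1, H, W) tensors in [0, 1]; anomaly_mask is a boolean
    tensor marking the artificial-anomaly region."""
    loss_mse = F.mse_loss(y_hat, y)
    loss_ssim = 1.0 - ssim(y_hat, y, data_range=1.0)
    if anomaly_mask.any():
        loss_overlay = F.mse_loss(y_hat[anomaly_mask], y[anomaly_mask])
    else:
        loss_overlay = torch.zeros((), device=y.device)
    return w_mse * loss_mse + w_ssim * loss_ssim + w_overlay * loss_overlay
```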
### Training
Our neural network implementation utilizes the PyTorch framework [34]. We ensure reproducibility by setting a random seed of 42 in modules such as NumPy and PyTorch. For optimization, we employ the Adam optimizer with a learning rate of \(2e-4\), betas set at \((0.9,0.999)\), an eps value of \(1e-8\), and the amsgrad option disabled. We incorporate a learning rate scheduler and an early stopping strategy to optimize training performance. The "reduce on plateau" learning rate scheduler is configured with mode set to minimum, a factor of 0.7, patience of three epochs, a threshold of \(1e-4\), an eps value of \(1e-8\), verbose mode disabled, and cooldown and minimum learning rate set to zero. We assign a patience of 40 epochs and a minimum change in validation loss of \(1e-6\) for early stopping. If the validation loss does not improve (change less than \(1e-6\)) for 40 consecutive epochs, the training process halts. We save the model's weights whenever the validation loss falls below the previous lowest value, thus capturing the best performance achieved during training. These optimization strategies enhance performance and reproducibility in network training, yielding more consistent and reliable outcomes than non-optimized models. Figure 9 shows the training performance with these optimizations applied; training completed in two days and four hours over 285 epochs.
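A condensed sketch of this configuration (the epoch loop body is stubbed out; the checkpoint path and helper function are ours, and the model stands in for the autoencoder of Section 4.5):

```python
import torch

model = SkipAutoencoder()  # stand-in for the autoencoder (see Sec. 4.5 sketch)
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4,
                             betas=(0.9, 0.999), eps=1e-8, amsgrad=False)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.7, patience=3,
    threshold=1e-4, cooldown=0, min_lr=0, eps=1e-8)

def run_epoch(model):
    return 0.0  # placeholder for one pass of training + validation

best_val, stale, patience, min_delta = float("inf"), 0, 40, 1e-6
for epoch in range(1000):
    val_loss = run_epoch(model)
    scheduler.step(val_loss)  # reduce learning rate on plateau
    if best_val - val_loss > min_delta:
        best_val, stale = val_loss, 0
        torch.save(model.state_dict(), "best.pt")  # keep the best weights
    else:
        stale += 1
        if stale >= patience:
            break  # early stopping after 40 stagnant epochs
```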
### Hardware Setup
The demonstrator is a light-proof box with an inner dimension of 1200 x 800 x 2033 mm\({}^{3}\). It incorporates a clearance height of 402.00 mm at the bottom for an AGV or movable table to pass through. The camera mounting is 1088 mm above the clearance, ensuring optimal image capturing conditions. A diffuse artificial light source is positioned behind the camera, providing a constant and uniform light distribution throughout the imaging process. A Raspberry Pi 4 computer with a touch display is on the side of the prototype setup. The model trains on an AMD Ryzen 9 7950x 16-Core Processor with 64 GB RAM and Nvidia GeForce RTX 3070 Lite Hash Rate with 8GB VRAM.
## 5 Results & Discussion
In the context of classification analysis, the False Positive Rate (FPR) and True Positive Rate (TPR) are pivotal metrics for evaluating how well a classifier aligns with the actual ground-truth labels. For an unsupervised model such as ours, synthetically generated anomalies stand in for ground truth when assessing the model's performance. It is worth noting that incorporating bona fide ground-truth data could further improve the interpretability of the evaluation.
In our investigation, a suite of seven discrete melamine-faced board images serves as the test set for gauging the model's capabilities, portrayed in Figure 10's
Figure 9: Training performance
ROC plot. The following discussion interprets this plot. The point where the TPR attains unity while the FPR stays at zero would indicate a perfect separation of anomalies from their non-anomalous counterparts. However, a discernible pattern emerges in the shown ROC plot. Initially, at exceedingly low thresholds, the TPR rises steeply up to 0.4 while the FPR remains close to zero. This dynamic underscores the model's capacity to localize anomalies effectively, albeit with a trade-off: precision remains high while accuracy is only fair. Proceeding along the ROC plot, a shift in the equilibrium becomes palpable: the rate at which the FPR escalates surpasses that of the TPR as thresholds ascend. This divergence indicates the model's tendency to misclassify non-anomalous pixels as anomalies. Although accuracy registers an uptick, a proportionate decline in precision is also observed. Such a juxtaposition warrants a meticulous inquiry into the underlying cause (whether inherent noise in the data or a manifestation of model overfitting to specific patterns) of the abrupt rise in FPR. As thresholds scale higher, the model reaches a regime where all anomalous pixels are successfully localized; however, this feat coexists with the erroneous labeling of non-anomalous pixels, underscoring heightened accuracy counterbalanced by degrading precision.
In sum, this investigation of the ROC plot highlights the intricate interplay between the True Positive Rate and False Positive Rate, offering insights into the model's discriminative power, precision, and potential pitfalls at varying thresholds. We select the average threshold value of 0.04, corresponding to a TPR of 0.4, to obtain accurate predictions while maintaining low misclassification.
As previously stated, the uncropped images are of high resolution, and most of the anomalies are relatively tiny compared to the size of the other features. To better visualize the defects, the evaluation focuses on specific crops of the melamine-faced board that showcase the performance, encompassing corners, edges, grooves, holes, and surfaces at their actual sharpness. Magnifying Figure 11 provides a better visualization of the anomalies. While the non-anomalous areas are generally predicted correctly, accuracy fluctuates within anomalous regions. Given an unsupervised defect localizing model, achieving precise pixel-level performance is challenging due to the lack of definitive ground truth for the model to learn. We address this by generating heatmaps from the difference between the original and reconstructed images. These heatmaps are then overlaid on the corresponding actual anomalous crop areas to gauge the model's behavior and the quality of its localization (refer to Figure 11). Each tile features a 289 x 289 resolution crop, organized in sets of three (\(a\), \(b\), and \(c\)) in each of the four columns. The first tile, \(a\), represents the actual anomalous area crop, followed by the second tile, \(b\), displaying
Figure 10: ROC plot of the model predictions on seven different melamine-faced board images, each plotted for a range of threshold values.
the heatmap resulting from the difference, and the third tile, \(c\), presenting an overlay of the heatmap on the actual anomalous crop area, with opacities set at 75% and 50%, respectively.
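A sketch of the corresponding inference step (the function name and normalization are ours; the threshold of 0.04 comes from the ROC analysis above):

```python
import torch

@torch.no_grad()
def localize(model, tile, threshold=0.04):
    """Reconstruct a grayscale tile and flag pixels whose reconstruction
    error exceeds the ROC-selected threshold."""
    model.eval()
    x = torch.as_tensor(tile, dtype=torch.float32)[None, None] / 255.0
    recon = model(x)
    heatmap = (x - recon).abs().squeeze()  # per-pixel anomaly score
    return heatmap, heatmap > threshold    # score map and binary defect mask
```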
The model demonstrates decent capabilities in localizing tiny artifacts such as smudges, dirt, and deformities on the corners of the melamine-faced boards in Figure 11A1, Figure 11A3, and Figure 11A4, accurately identifying these imperfections. However, the model's performance falls short when predicting the presence of large imperfections on the melamine-faced boards, as evidenced in Figure 11A2; it appears that the model struggles to detect and localize large-scale defects.
In the context of anomaly localization close to the edges of the boards, as shown in Figure 11B - C, the model's performance in predicting significant defects of various shapes and sizes is decent. However, challenges arise when dealing with relatively small-scale or (and) blurred anomalies, such as the one located on the left edge in Figure 11B2. In such cases, the model's confidence in its prediction decreases, leading to less accurate results. Figure 11C1 showcases the model's capability to accurately localize artifacts even when positioned slightly away from the edge, demonstrating its potential for robust anomaly detection in various scenarios.
The model exhibits mixed confidence when dealing with artifacts around the groove, as evidenced in Figure 11D1 - D4. This behavior can be attributed to the artifact closely resembling the surrounding environment, making it challenging for the model to distinguish it as an anomaly. However, the model's performance receives a significant boost when dealing with instances of discontinuity. Discontinuities in the data are more straightforward for the model to identify and classify as anomalies, resulting in higher confidence predictions. Moreover, the defects found at the end of the groove are also localized by the model with moderate confidence, as seen in Figure 11E1, suggesting that the model can detect these defects to some extent, but there may still be room for improvement in accuracy and precision.
The model's performance in localizing anomalies around holes is observable in Figure 11F - H. It successfully identifies anomalies within hole textures. Moreover, the model accurately localizes different types of defects, such as defects along edges and holes (Figure 11F4) as well as defects on surfaces and holes (evident in Figure 11G1, Figure 11G3, and Figure 11H1). This versatile capability to address diverse anomaly types underscores the model's effectiveness and potential.
Figure 11I - N shows the model's localization performance for plain surface defects. It successfully detects prominent anomalies and localizes even the most minute plain surface defects, as evident in Figure 11I1 - J4. Moreover, the model demonstrates its ability to recognize defects closely resembling holes in shape and texture, as observed in Figure 11M1.
Overall, the results in Figure 11 highlight the model's robustness, limitations, and accuracy in tackling complex defect localization tasks by identifying and localizing a wide range of surface defects occurring on melamine-faced boards.
## 6 Conclusion
Our paper presents a hybrid approach for detecting surface defects on melamine-faced boards. Unlike traditional methods that use images with a specific region of interest, our methodology utilizes a dataset of high-resolution images captured with a fixed field-of-view camera, enabling inspection of boards of varying sizes. The model combines several techniques: slicing high-resolution images, addressing imbalanced datasets, performing feature extraction and k-means clustering to account for feature frequency variations, and utilizing an autoencoder model for anomaly prediction.
The evaluation of the unsupervised defect localizing model reveals strong performance in identifying anomalies within the melamine-faced board. The model recognizes and localizes minor artifacts such as smudges, dirt, and deformities, even in corners and edges. It also accurately predicts significant defects around edges and holes, showcasing its robust anomaly detection capabilities. However, challenges arise when dealing with relatively
Figure 11: The defect localization on the melamine-faced board involves four columns with the following three grouping sets: the actual anomaly, heatmap of prediction, and the overlay of the heatmap on the original defect, for evaluation. The rows are categorized as follows: corners -> A, edges -> B - C, grooves -> D - E, holes -> F - H, and plain surface -> I - N.
small and blurred anomalies, particularly around the edges. Moreover, the model struggles to identify and localize larger missing sections on melamine-faced boards. The model's behavior around grooves is mixed: it identifies instances of discontinuity but faces difficulties distinguishing artifacts that blend with the surrounding environment. When it comes to plain surface defects, the model performs effectively. It successfully detects large and small surface defects, highlighting its versatility in identifying diverse anomaly types, including those resembling holes in both shape and texture.
In conclusion, the model exhibits promising potential for improving quality control and inspection processes on surfaces of melamine-faced boards. This potential is especially evident in its ability to identify defects on plain surfaces, corners, edges, and holes. However, continued refinement is needed to improve its performance on specific defect types and on anomalies with less distinct attributes, as discussed in the following subsection.
### Future Scope
This paper introduces an approach to localize anomalies on melamine-faced boards, addressing class imbalance challenges while maintaining quality standards. Several avenues remain, however, for enhancing the model and its anomaly localization performance. One potential improvement involves using a higher-resolution camera with a larger aperture to capture anomalies more effectively, particularly significant ones. Such a camera could also facilitate the imaging of larger boards, contributing to better feature learning at the cost of higher memory use. Another approach is the implementation of a camera array, which requires proper calibration, handling of overlap regions, and rectification of barrel distortion to minimize variations in class features, effectively reducing the number of training images needed. Addressing class imbalance warrants further investigation, especially in identifying damaged corners. One possible strategy is to train separate models for each feature following an initial classifier, which involves weighing trade-offs related to the overall model size. Another is to assign weights to each class, which requires an excellent classifier at the initial stage. Optimizing the hyperparameters of Felzenszwalb's segmentation could yield an artificial dataset that better represents natural anomalies, enhancing the model's robustness and improving performance metrics. Exploring alternative architectures, such as UNet++ introduced by Zhou et al. [35], may offer improved power and generalization capabilities, albeit with additional training parameters. Incorporating techniques like the weighted frequency domain loss explored by Nakanishi et al. [15] and the NST loss investigated by Hida et al. [9] could enhance the model's ability to generalize on the edges and textures of board classes.
By pursuing these avenues of improvement, our model could achieve higher accuracy, greater robustness, and improved generalization for localizing anomalies on melamine-faced boards, contributing to elevating quality control standards within the manufacturing industry.
Conceptualization, D.Mehta; methodology, D.Mehta; software, D.Mehta; formal analysis, D.Mehta; investigation, D.Mehta; data curation, D.Mehta; writing--original draft preparation, D.Mehta; writing--review and editing, N.Klarmann; supervision, N.Klarmann; project administration, N.Klarmann.
This research received no external funding.
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
## Abbreviations
The following abbreviations are used in this manuscript |
2309.08665 | Prospects from TESS and Gaia to constrain the flatness of planetary
systems | The mutual inclination between planets orbiting the same star provides key
information to understand the formation and evolution of multi-planet systems.
In this work, we investigate the potential of Gaia astrometry in detecting and
characterizing cold Jupiters in orbits exterior to the currently known TESS
planet candidates. According to our simulations, out of the $\sim 3350$ systems
expected to have cold Jupiter companions, Gaia, by its nominal 5-year mission,
should be able to detect $\sim 200$ cold Jupiters and measure the orbital
inclinations with a precision of $\sigma_{\cos i}<0.2$ in $\sim 120$ of them.
These numbers are estimated under the assumption that the orbital orientations
of the CJs follow an isotropic distribution, but these only vary slightly for
less broad distributions. We also discuss the prospects from radial velocity
follow-ups to better constrain the derived properties and provide a package to
do quick forecasts using our Fisher matrix analysis. Overall, our simulations
show that Gaia astrometry of cold Jupiters orbiting stars with TESS planets can
distinguish dynamically cold (mean mutual inclination $\lesssim5^\circ$) from
dynamically hot systems (mean mutual inclination $\gtrsim 20^\circ$), placing a
new set of constraints on their formation and evolution. | Juan I. Espinoza-Retamal, Wei Zhu, Cristobal Petrovich | 2023-09-15T18:00:04Z | http://arxiv.org/abs/2309.08665v1 | # Prospects from TESS and Gaia to constrain the flatness of planetary systems
###### Abstract
The mutual inclination between planets orbiting the same star provides key information to understand the formation and evolution of multi-planet systems. In this work, we investigate the potential of Gaia astrometry in detecting and characterizing cold Jupiters in orbits exterior to the currently known TESS planet candidates. According to our simulations, out of the \(\sim\) 3,350 systems expected to have cold Jupiter companions, Gaia, by its nominal 5-year mission, should be able to detect \(\sim 200\) cold Jupiters and measure the orbital inclinations with a precision of \(\sigma_{\cos i}<0.2\) in \(\sim 120\) of them. These numbers are estimated under the assumption that the orbital orientations of the CJs follow an isotropic distribution, but these only vary slightly for less broad distributions. We also discuss the prospects from radial velocity follow-ups to better constrain the derived properties and provide a package to do quick forecasts using our Fisher matrix analysis. Overall, our simulations show that Gaia astrometry of cold Jupiters orbiting stars with TESS planets can distinguish dynamically cold (mean mutual inclination \(\lesssim 5^{\circ}\)) from dynamically hot systems (mean mutual inclination \(\gtrsim 20^{\circ}\)), placing a new set of constraints on their formation and evolution.
Exoplanets (498) -- Exoplanet catalogs (488) -- Space astrometry (1541) -- Exoplanet migration (2205) -- Extrasolar gaseous planets (2172)
## 1 Introduction
Over 5,100 exoplanets have been confirmed (Akeson et al., 2013), the majority of which were discovered through transits or radial velocities. Around 380 of the known exoplanets were discovered by the Transiting Exoplanet Survey Satellite (TESS, Ricker et al., 2015), and it has \(\sim\) 5,900 candidates yet to be confirmed. These known exoplanets have revealed rich information about the occurrence rate, architecture, and theoretical implications of the planetary systems in general (see a recent review by Zhu and Dong, 2021).
Compared to other detection methods such as transits and radial velocities, astrometry has a controversial past as nearly all claimed planet detections have been discarded by subsequent measurements1 (e.g., Bean et al., 2010). However, this panorama is expected to change in the next few years as the Gaia astrometry mission is going to release about 20,000 giant planet detections with its upcoming data release (DR) 4 (Perryman et al., 2014). In fact, with the release of the Gaia DR3 (Gaia Collaboration et al., 2022), we already have dozens of astrometric candidates in the substellar regime (see, e.g., Gaia Collaboration et al., 2022), and a few systems have been identified using Hipparcos and Gaia astrometry, and confirmed by direct imaging (e.g., Currie et al., 2023; Mesa et al., 2023; De Rosa et al., 2023). Astrometry will be especially useful as it can provide measurements of the orbital inclinations and true masses of planets (see, e.g., Sozzetti et al., 2001; Casertano et al., 2008; Sozzetti et al., 2014; Perryman et al., 2014). For example, with data from the Gaia EDR3 (Gaia Collaboration et al., 2021), Brandt et al. (2021) measured the true mass of the planet HR 8799 e, and thanks to this, they estimated its age at \(\sim 42\,\mathrm{Myr}\). Combining radial velocities with Hipparcos and Gaia astrometry, Venner et al. (2021) measured the orbital inclination and the true mass of the companion to the star HD 92987. They found that, in fact, that object was not a planet but rather a star of \(\sim 0.2\,\mathrm{M}_{\odot}\) in a nearly-polar orbit.
There are multiple pieces of evidence suggesting that planetary systems are not always flat. Some protoplanetary disks exhibit significant internal misalignments, either warps or disks broken in pieces with different orientations, as evidenced by multiple observations, including scattered light observations (shadows; e.g., Casassus et al., 2018), gas kinematics (e.g., Marino et al., 2015), dust emission from ALMA images (e.g., Francis and van der Marel, 2020), and periodic light extinction caused by dusty disks (e.g., Ansdell et al., 2016). Also, we have found planets orbiting stars with large obliquities (angles between the host star's equator and the planetary orbit). This includes planets on nearly polar to fully retrograde orbits, as measured for transiting exoplanets from spectroscopy (see the review by Albrecht et al., 2022), with the Rossiter-McLaughlin effect (e.g., Lendl et al., 2014), spot-crossing events (e.g., Sanchis-Ojeda et al., 2013), stellar rotation (e.g., Winn et al., 2017), and stellar variability (e.g., Mazeh et al., 2015; Li and Winn, 2016). On the population level, statistical studies of the planetary systems found by the Kepler transit survey have suggested that a large fraction of the mature planetary systems probably have substantial mutual inclinations, as revealed from the observed planet multiplicity distributions and timing of the transits (Zhu et al., 2018; He et al., 2020; Millholland et al., 2021).
More recently, using radial velocities and astrometry, both Xuan and Wyatt (2020) and De Rosa et al. (2020) measured the orbital inclination of the cold Jupiter (CJ) in the \(\pi\) Men system (Jones et al., 2002; Huang et al., 2018), and combining that with TESS data of this system they found a large mutual inclination between the transiting super-Earth and its outer giant companion. From this type of measurement, a set of questions that motivate our work arises: how many more \(\pi\) Men-like systems will we find? More concretely, for how many planetary systems that have been discovered with TESS will we be able to measure the mutual inclination between the transiting planet and its possible outer companion using astrometry? How can we best exploit these upcoming datasets to understand the evolution of planetary systems? How important are radial velocity follow-ups to better constrain the parameters?
### Mutual Inclinations and formation histories
Mutual inclination measurements can give us indications of past interactions that happened to form the architectures of planetary systems that we see today. These interactions range from violent giant impacts or gravitational scattering (e.g., Huang et al., 2017; Gratia and Fabrycky, 2017; Mustill et al., 2017; Pu and Lai, 2021) to long-term chaotic diffusion (e.g., Wu and Lithwick, 2011; Hamers et al., 2017; Petrovich et al., 2019).
By measuring mutual inclinations in systems with a transiting planet and its outer companion, we may constrain their formation pathway. For instance, in systems composed of two gas giants, including a transiting hot or warm Jupiter (HJ/WJ) and a CJ, their mutual inclinations can constrain the migration mechanism. If the migration was produced by angular momentum exchanges with the protoplanetary disk (e.g., Goldreich and Tremaine, 1980; Ward, 1997; Baruteau et al., 2014), we should expect low mutual inclinations. In turn, if the migration was produced by high-eccentricity migration (e.g., Rasio and Ford, 1996; Wu and Murray, 2003; Petrovich and Tremaine, 2016), we generally expect high mutual inclinations2.
Footnote 2: Coplanar High-Eccentricity Migration (CHEM; Petrovich, 2015) stands as an exception, producing low-inclination hot Jupiters relative to the host star's equator and outer companion.
Other systems of interest are the short-period transiting super-Earths/mini-Neptunes (sub-Jovians, SJs) and outer cold Jupiters. As eccentricities in these systems are generally small due to tidal circularization (or stability considerations) and/or hard to constrain by radial velocities due to their low masses (e.g., MacDougall et al., 2021), we may gauge the level of dynamical upheaval using mutual inclinations.
### Structure
In this paper, we estimate the number of TESS Objects of Interest (TOIs) for which Gaia astrometric observations should detect an outer companion and the number of those that will have a well-constrained orbital inclination. In Section 2, we describe the methodology used for the simulations. In Section 3, we present the results. In Section 4 we discuss how much more we can improve the results if we add information from radial velocity (RV) measurements. In Section 5, we discuss how our results will change with model assumptions, especially the underlying mutual inclination distribution. We conclude in Section 6.
## 2 Methods
We use the TOI catalog that was obtained from the Exoplanet Follow-up Observing Program for TESS (ExoFOP-TESS) 3 on August 23, 2023. Although there will be more detections from the ongoing TESS extended mission and dedicated searches for transit signals, a significant fraction of the identified TOIs are, or will be, false positives and thus not transiting planets (e.g., Guerrero et al., 2021). Thus, our catalog suits the purpose of the present work, namely, to estimate the number of planetary systems with detections from both TESS transit and Gaia astrometry. We did not consider in the analysis TOIs without reported stellar mass or planetary radius. Also, we did not consider stars with masses greater than 2 M\({}_{\odot}\) to avoid unreliable measurements. By applying these filters, we end up with 5,864 TOIs from 5,625 unique stars.
Footnote 3: [https://exofop.ipac.caltech.edu/tess/](https://exofop.ipac.caltech.edu/tess/)
The probability of having an exterior cold Jupiter depends on the properties of the inner planet. In this work, we adopt the following conditional probabilities
\[P(\mathrm{CJ}\,|\,\mathrm{in})=\left\{\begin{array}{ll}0.30&\mathrm{if\ in=SJ\ (Zhu\ \&\ Wu\ 2018)}\\ 0.75&\mathrm{if\ in=HJ\ (Bryan\ et\ al.\ 2016)}\\ 0.49&\mathrm{if\ in=WJ\ (Bryan\ et\ al.\ 2016)}\end{array}\right. \tag{1}\]
Here "SJ", "HJ", and "WJ" stand for sub-Jovian, hot Jupiter, and warm Jupiter, respectively. We classify the TOIs into these categories based on the measured planet size and semi-major axis: a SJ if \(R_{\mathrm{p,in}}<6\,R_{\oplus}\), a HJ if \(R_{\mathrm{p,in}}>6\,R_{\oplus}\) and \(a_{\mathrm{p,in}}<0.1\,\mathrm{au}\), and a WJ if \(R_{\mathrm{p,in}}>6\,R_{\oplus}\) and \(a_{\mathrm{p,in}}>0.1\,\mathrm{au}\).
While the conditional probabilities given above are derived from observations, those studies also report non-negligible uncertainties around these benchmark values. Furthermore, different studies also reported different values for these conditional probabilities. For example, the conditional rate of CJs on inner SJs is reported to be lower in Bonomo et al. (2023) (but see Zhu, 2023). These uncertainties on the conditional probabilities will affect the expected numbers of the CJ detections, so the exact number of detections will be useful to further refine the conditional probabilities. For constraining the flatness of the planetary systems, we expect the results of different mutual inclination distributions to be affected in the same way, so our result on the mutual inclination distribution may remain largely unaffected.
We injected the signal of a CJ into each TOI and attempted to recover it using Gaia astrometry, in order to assess whether Gaia could detect the CJ and the precision with which we could measure the inclination of its orbit. We assumed that each Gaia measurement would have 1-D astrometric precision \(\sigma_{\mathrm{fov}}\), which only depended on the magnitude \(G\) of the star (Perryman et al., 2014). To obtain realistic estimates of the times at which Gaia will observe each star, we used the HTOF tool (Brandt et al., 2021). Only epochs taken before January 25, 2020, are considered, in order to closely match the upcoming Gaia DR4. We randomly reject 20% of the Gaia epochs because this fraction of Gaia observations is shown to be problematic due to satellite dead times, unusable observations, or observations rejected as astrometric outliers (see, e.g., Lindegren et al., 2018; Boubert et al., 2020; Brandt et al., 2021). After applying these rejections, we obtained realistic epochs for 3,350 unique stars. According to Perryman et al. (2014), the number of measurements is primarily dependent on the ecliptic latitude of the target, so we divided stars into bins of 5\({}^{\circ}\) based on this value and selected the TOI with the median value of observations in each bin. We use the epochs of this median TOI in each bin as the epochs for the remaining stars in the same bin without epochs. The HTOF tool can also give the scanning law of the Gaia satellite, but for simplicity, we do not use this information. Instead, we include exactly half of the two-dimensional information of the astrometric measurements in the Fisher matrix analysis. See Appendix A for details.
Footnote 4: [https://www.cosmos.esa.int/web/Gaia/release](https://www.cosmos.esa.int/web/Gaia/release)
For the injected CJs, their physical and orbital properties were randomly sampled from the following distributions (a sampling sketch in Python follows the list):
* The mass-ratio \(q\equiv\mathrm{M_{p}/M_{\star}}\) follows a broken power-law distribution with a break at \(q_{\mathrm{break}}=1.7\times 10^{-4}\). The power-law indexes above and below the break were -0.93 and 0.6, respectively (Suzuki et al., 2016). We worked with planetary masses between 0.3 and 15 M\({}_{\mathrm{J}}\). The lowest mass ratio used in our simulations was \(\sim 1.4\times 10^{-4}\) when M\({}_{\star}\sim 2\) M\({}_{\odot}\).
* The orbital period \(P\) follows a broken power-law distribution with a break at \(P_{\mathrm{break}}=1717\) days. The power-law indices above and below the break were -1.22 and 0.53, respectively (Fernandes et al., 2019). We worked with periods between 100 and 10000 days (\(\sim 0.27-27.4\,\mathrm{yrs}\)).
* The orbital eccentricity \(e\) follows a Beta distribution with parameters \(a=1.12\) and \(b=3.09\)(Kipping, 2013).
* The orbital inclination \(i\) is uniform in \(\cos i\) between 0 and 1.
* The argument of periapsis \(\omega\) and the longitude of ascending node \(\Omega\) both follow a uniform distribution between 0 and \(2\pi\).
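To make the sampling step concrete, here is a minimal Python sketch of these draws; it uses a simple (unoptimized) rejection sampler for the two broken power laws, and the function names, random seed, and mass-ratio bounds derived from the 0.3-15 M\({}_{\mathrm{J}}\) limits are our own illustrative choices:

```python
import numpy as np

M_JUP = 9.543e-4  # Jupiter mass in solar masses

def sample_broken_power_law(x_break, idx_below, idx_above, x_min, x_max, rng):
    """Rejection-sample a broken power law that peaks at x_break (simple, not optimized)."""
    while True:
        x = rng.uniform(x_min, x_max)
        accept = (x / x_break) ** (idx_below if x < x_break else idx_above)
        if rng.uniform() < accept:  # accept <= 1 since the density peaks at x_break
            return x

def sample_cold_jupiter(m_star, rng):
    """Draw one injected CJ for a host of mass m_star [M_Sun]."""
    q_min = 0.3 * M_JUP / m_star    # from the 0.3 M_J lower mass limit
    q_max = 15.0 * M_JUP / m_star   # from the 15 M_J upper mass limit
    return {
        "q": sample_broken_power_law(1.7e-4, 0.6, -0.93, q_min, q_max, rng),
        "P_days": sample_broken_power_law(1717.0, 0.53, -1.22, 100.0, 1e4, rng),
        "e": rng.beta(1.12, 3.09),
        "cos_i": rng.uniform(0.0, 1.0),
        "omega": rng.uniform(0.0, 2.0 * np.pi),
        "Omega": rng.uniform(0.0, 2.0 * np.pi),
    }

rng = np.random.default_rng(42)
cj = sample_cold_jupiter(m_star=1.0, rng=rng)
```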
Once the properties of the injected CJs were known, we modeled their astrometric signals in the standard way. Specifically, the astrometric motion of the host star along two perpendicular directions is given by
\[\left(\begin{array}{c}\alpha_{x}\\ \alpha_{y}\end{array}\right)=\left(\begin{array}{cc}A&F\\ B&G\end{array}\right)\left(\begin{array}{c}\cos E-e\\ \sqrt{1-e^{2}}\sin E\end{array}\right)+\left(\begin{array}{c}\mu_{x}(t-t_{0})\\ \mu_{y}(t-t_{0})\end{array}\right) \tag{2}\]
Here \(A,\ B,\ F,\ G\) are the so-called Thiele-Innes elements:
\[\left\{\begin{array}{rcl}A&=&\rho(\cos\omega\cos\Omega-\sin\omega\sin\Omega \cos i)\\ B&=&\rho(\cos\omega\sin\Omega+\sin\omega\cos\Omega\cos i)\\ F&=&\rho(-\sin\omega\cos\Omega-\cos\omega\sin\Omega\cos i)\\ G&=&\rho(-\sin\omega\sin\Omega+\cos\omega\cos\Omega\cos i)\end{array}\right. \tag{3}\]
where \(\rho\) is the semi-amplitude of the astrometric motion that can be written in terms of the mass-ratio \(q\), semi-major axis \(a\), and stellar distance \(d\) as:
\[\rho=\frac{qa}{d}. \tag{4}\]
The eccentric anomaly \(E\) is related to the mean anomaly \(M\) by:
\[E-e\sin E=M. \tag{5}\]
The mean anomaly is defined as:
\[M\equiv M_{0}+\frac{2\pi}{P}(t-t_{0}). \tag{6}\]
For a chosen reference time \(t_{0}\), the astrometric motion can be modeled by a set of 9 parameters: the systemic velocities \(\mu_{x}\) and \(\mu_{y}\), the semi-amplitude of the astrometric motion \(\rho\), the orbital period \(P\) and eccentricity \(e\), the reference position of the planet \(M_{0}\), and the three angles of orientation of the orbit \(\omega\), \(\cos i\) and \(\Omega\). Note that we choose \(\cos i\) instead of \(i\) just for simplicity. We choose not to perform a joint modeling of the stellar parallactic motion because parallax is much better determined and not correlated with the binary astrometric motion in the frequency domain. For many of the stars studied here, other means of distance determination may be available to further improve the parallax determination.
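The model of Equations (2)-(6) is straightforward to implement. The sketch below is our own minimal version, assuming consistent units (times in the same unit as \(P\), angles in radians, \(\rho\) in the same angular units as the output) and the standard Thiele-Innes arrangement reconstructed in Equation (2):

```python
import numpy as np

def solve_kepler(M, e, n_iter=20):
    """Solve Kepler's equation E - e*sin(E) = M (Eq. 5) with Newton iterations."""
    E = np.asarray(M, dtype=float).copy()
    for _ in range(n_iter):
        E -= (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
    return E

def astrometric_signal(t, t0, P, e, M0, rho, omega, Omega, cos_i, mu_x, mu_y):
    """Positions (alpha_x, alpha_y) at times t, following Equations (2)-(6)."""
    M = M0 + 2.0 * np.pi / P * (t - t0)                  # Eq. (6)
    E = solve_kepler(M, e)                               # Eq. (5)
    X = np.cos(E) - e
    Y = np.sqrt(1.0 - e**2) * np.sin(E)
    cw, sw = np.cos(omega), np.sin(omega)
    cO, sO = np.cos(Omega), np.sin(Omega)
    A = rho * (cw * cO - sw * sO * cos_i)                # Eq. (3), Thiele-Innes
    B = rho * (cw * sO + sw * cO * cos_i)
    F = rho * (-sw * cO - cw * sO * cos_i)
    G = rho * (-sw * sO + cw * cO * cos_i)
    alpha_x = A * X + F * Y + mu_x * (t - t0)            # Eq. (2)
    alpha_y = B * X + G * Y + mu_y * (t - t0)
    return alpha_x, alpha_y
```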
We use the Fisher matrix analysis to evaluate the detectability of the astrometric signal and the uncertainties on individual model parameters. This approach is more computationally efficient than a Markov chain Monte Carlo (MCMC) approach by a factor of \(\sim\) 3,000. The details of the Fisher matrix analysis are given in Appendix A. For each TOI, we carried out \(10^{4}\) simulations and considered that the outer giant was detected if \(\rho/\sigma_{\rho}>3\) and that the orbital inclination was well constrained if \(\sigma_{\cos i}<0.2\). This implies an uncertainty of \(\sim 11^{\circ}\) if the orbit is edge-on and \(\sim 34^{\circ}\) if the orbital inclination is \(20^{\circ}\).
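The full Fisher matrix analysis is given in Appendix A (not reproduced here); the following is only a generic numerical sketch of how such per-parameter uncertainties can be obtained, assuming independent Gaussian errors of size \(\sigma_{\mathrm{fov}}\) and a central-difference Jacobian. The function signature is illustrative:

```python
import numpy as np

def fisher_uncertainties(model, theta, sigma_fov, eps=1e-6):
    """1-sigma parameter uncertainties sqrt(diag(F^-1)) for i.i.d. Gaussian errors.

    model(theta) -> flat vector of predicted astrometric measurements
    (for the paper's setup, rescale to keep only half of the 2-D information).
    """
    theta = np.asarray(theta, dtype=float)
    y0 = model(theta)
    J = np.empty((y0.size, theta.size))
    for k in range(theta.size):                # central-difference Jacobian
        dp = np.zeros_like(theta)
        dp[k] = eps * max(1.0, abs(theta[k]))
        J[:, k] = (model(theta + dp) - model(theta - dp)) / (2.0 * dp[k])
    fisher = J.T @ J / sigma_fov**2            # F = J^T C^-1 J with C = sigma_fov^2 * I
    cov = np.linalg.inv(fisher)                # assumes F is non-degenerate
    return np.sqrt(np.diag(cov))

# Detection criterion: rho / sigma_rho > 3, where sigma_rho is the entry of the
# returned vector corresponding to the rho parameter.
```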
## 3 Results
For each TOI, we obtained a distribution for the uncertainty in \(\rho\) and calculated the probability of detecting the CJ if it exists (i.e., \(\rho/\sigma_{\rho}>3\)). From Equation 1, the probability of the existence of the CJ is related to the type of planet that exists in the inner part of the system. The total number of CJs that should exist around TOI hosts is estimated to be:
\[N_{\rm CJs}=\sum_{i\,\in\,\rm TOIs}P_{i}(\rm CJ|in)\approx 3340. \tag{7}\]
The number of these CJs that could be detected using Gaia astrometry is then
\[N_{\rm det}=\sum_{i\,\in\,\rm TOIs}P_{i}(\rm CJ|in)\times P_{i}(\rho/\sigma_ {\rho}>3)\approx 206. \tag{8}\]
As shown in Figure 1, the probability of detecting the CJ is a strong function of the stellar distance, and the probability is higher for nearby (\(\lesssim 100\) pc) M-dwarfs. About half of the CJs will be detected in systems with SJs, whereas the remaining half in systems with giant planets (HJ or WJ).
From the distribution obtained for \(\sigma_{\cos i}\), we also calculated the probability of having the inclination well constrained (i.e., \(\sigma_{\cos i}<0.2\)) for each TOI system. With this information, we then estimated the number of CJs that would have the inclination well constrained
\[N_{\rm inc}=\sum_{i\,\in\,\rm TOIs}P_{i}(\rm CJ|in)\times P_{i}(\sigma_{\cos i }<0.2)\approx 118. \tag{9}\]
The distribution of the size of the inner transiting planets is shown in the left panel of Figure 2. According to our definitions of small and large planets, 72 and 46 of the CJs with inclination measurements are from systems with SJs and HJs/WJs, respectively. These numbers
suggest that systems like \(\pi\) Men, whose mutual inclination between the inner transiting super-Earth and the outer CJ has been constrained (Xuan and Wyatt, 2020; De Rosa et al., 2020), will not be uncommon. In terms of the stellar properties, the probability is higher for nearby M-dwarfs to have the CJ inclination well constrained, as shown in the right panel of Figure 2.
## 4 Complementary Radial Velocities
In the astrometry method, orbital parameters describing the sky-projected motion of an elliptical orbit can be correlated. Specifically, the orbital inclination is correlated with several of the other parameters, of which the most important is the astrometric amplitude \(\rho\), mainly through the planet mass. Therefore, additional constraints on planet properties can help improve the constraints on the orbital inclination. Here, we assess to what level our results improve by adding information from complementary RV observations.
Figure 1: _Left:_ Histogram of the number of cold Jupiters that should be detected as a function of the size of the inner planet. _Right:_ Scatter plot of stellar distance vs. stellar effective temperature of all TOIs. Color represents the probability of detecting the cold Jupiter.
Figure 2: _Left:_ Histogram of the number of cold Jupiters that should have the inclination well constrained as a function of the size of the inner planet. _Right:_ Scatter plot of stellar distance vs. stellar effective temperature of all TOIs. The color represents the probability of having the inclination of the cold Jupiter well constrained.
In Appendix B, we show how our Fisher matrix analysis is modified to obtain the uncertainties in a model combining astrometry and RV measurements. The number of model parameters increases to 10: the previous nine from Section 2 and the systemic velocity along the z-axis (the line-of-sight direction), \(\mu_{z}\).
In Figure 3 we show three examples of how the uncertainty in the inclination improves as the number of radial velocity measurements increases, for different representative precisions. We assume that the radial velocity measurements are taken uniformly over the 5 years after the last epoch of the astrometric observations. Also, we assume that the signal of the transiting planet was removed from the radial velocities, and the only signal present is that of the CJ. Because RV observations alone provide no information on the orbital inclination, the best constraint one can achieve on the orbital inclination is limited by the information available from Gaia astrometry. As a result, there is a theoretical limit on the statistical uncertainty of the \(\cos i\) parameter. This limit is given by (see Appendix C):
\[\sigma_{\cos i}^{\rm limit}=\frac{\sigma_{\rho}}{\rho}\frac{\sin^{2}i}{\cos i}. \tag{10}\]
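As a quick worked example of Equation 10 (our own numbers, not from the paper): a CJ sitting right at the detection threshold (\(\rho/\sigma_{\rho}=3\)) on an orbit with \(i=60^{\circ}\) has \(\sigma_{\cos i}^{\rm limit}=\frac{1}{3}\cdot\frac{\sin^{2}60^{\circ}}{\cos 60^{\circ}}=\frac{1}{3}\cdot\frac{0.75}{0.5}=0.5\), well above the \(\sigma_{\cos i}<0.2\) criterion, whereas the same threshold detection at \(i\lesssim 42^{\circ}\) would satisfy it. For marginal detections on edge-on-leaning orbits, even unlimited RV follow-up therefore cannot deliver a well-constrained inclination.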
There are a few things to notice from Figure 3. First, supplementary RVs will always be useful, even in systems that can be well constrained by astrometric observations. Second, for systems that cannot be well constrained by astrometry, supplementary RVs can be crucial in confirming the planet signal and refining the system configurations. In fact, as the middle and right panels indicate, the orbital inclination can be much better constrained with only a few RV observations. Last but not least, RV observations with higher precision are always better. Since all the analysis will depend on the campaign and instruments chosen to carry out the follow-up, we decided to make public a Python script called Fisher_for_astrometry_and_RV (Footnote 5), with which it is possible to estimate the uncertainties that would be obtained for a system using the methodology described in Appendices A and B. We expect that the code will help observers know the precision level they will achieve in the parameters of a given system if they try different observing strategies.
Footnote 5: [https://github.com/jiespinozar/Fisher_for_astrometry_and_RV](https://github.com/jiespinozar/Fisher_for_astrometry_and_RV)
## 5 Discussion
Our simulations show that if the orbital inclination of the CJs is isotropic, Gaia should detect CJ companions in \(\sim 206\) TOI systems out of the over 5,600 TOI targets. A CJ is considered detectable if its astrometric amplitude is three times the per-measurement uncertainty, namely \(\rho/\sigma_{\rho}>3\). Among these CJ detections, we expect that 118 will have well-constrained orbital inclinations (i.e., \(\sigma_{\cos i}<0.2\)). The majority of CJs with well-constrained inclinations are found in systems with inner sub-Jovian planets, and nearby M-dwarfs are preferred for CJ detections and inclination measurements.
Additionally, we find that complementary RVs will always be useful, even in systems that can be well constrained by astrometric observations. For systems that cannot be well constrained by astrometry, complementary RVs can be crucial in confirming the planet signal and refining the system configuration. RV observations with higher precision require fewer measurements to reach a given precision on the planet parameters.
### Comparison with previous works
Several studies have investigated the potential of Gaia astrometry in exoplanet study, including a few that looked into its capability of constraining the mutual inclination. Sozzetti et al. (2001) evaluated the capability of Gaia to detect planets around solar-type stars in the Solar neighborhood. Using the \(\nu\) And system as the case of their study, they conclude that Gaia should be able to detect the outer two planets in the system and provide estimates of the full set of orbital elements accurate to better than \(1-10\%\). Casertano et al. (2008) studied in more detail the detectability of planets around FGK dwarfs, finding that under favorable orbital configurations (both planets with \(P\leq 4\) yr and \(\rho/\sigma_{\rm fov}\geq 10\)) Gaia could measure their orbital elements to better than 10% accuracy in more than 90% of the time. Using a Galaxy model (Besancon, e.g., Robin et al., 2003), their estimated yield is \(\sim\) 8,000 Gaia-detected planets and \(\sim\) 4,000 of them with accurately measured orbital parameters, including inclinations. Sozzetti et al. (2014) extended that study to nearby M-dwarfs, concluding that in a sample of \(\sim\) 3,150 M-dwarfs within 33 pc, Gaia should detect \(\sim 100\) CJs and almost all of them with good quality orbital solutions. Also, as mentioned in the introduction, Perryman et al. (2014) estimated that \(\sim\) 20,000 giant exoplanets should be detected using Gaia astrometry.
Similar to these previous works, we also studied the capability of Gaia in detecting planets and measuring orbital inclinations, but now for a sample of stars in which we know, thanks to TESS, that there are transiting planets at close-in orbits. The advantage of trying to measure orbital inclinations in those systems is that we can put constraints on the mutual inclination between the transiting planet and its outer companion, allowing us to explore the parameter space. With Gaia alone, one
can only detect and measure orbital inclinations of the relatively long-period planets, whereas, with Gaia and TESS combined, one can constrain the mutual inclinations between planets in the inner and the outer parts of the system, which are likely related (e.g., Masuda et al., 2020; Zhu and Dong, 2021).
### Constraining the flatness of planetary systems
The astrometry method is more sensitive to more massive planets at relatively large orbital distances. If there is a second planet in the system detected with transits, we can constrain the mutual inclination between planets, \(i_{\rm mut}\), defined as:
\[\cos i_{\rm mut}=\cos i_{\rm in}\cos i_{\rm CJ}+\sin i_{\rm in}\sin i_{\rm CJ} \cos\left(\Delta\Omega\right), \tag{11}\]
where \(i_{\rm in}\) and \(i_{\rm CJ}\) are the orbital inclinations of the inner planet and the Gaia CJ, respectively. In deriving the mutual inclination, we assume that the difference in longitudes of ascending nodes, \(\Delta\Omega\), follows a uniform distribution between 0 and \(2\pi\).
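Equation (11) is simple to evaluate numerically; below is a short sketch (our own, with angles in radians) that also marginalizes over the unknown \(\Delta\Omega\) as described:

```python
import numpy as np

def mutual_inclination(i_in, i_cj, delta_Omega):
    """Mutual inclination from Equation (11); all angles in radians."""
    cos_imut = (np.cos(i_in) * np.cos(i_cj)
                + np.sin(i_in) * np.sin(i_cj) * np.cos(delta_Omega))
    return np.arccos(np.clip(cos_imut, -1.0, 1.0))

# Example: transiting inner planet (i_in = 90 deg), CJ at i_CJ = 60 deg,
# marginalizing over the unknown node difference Delta_Omega ~ U(0, 2*pi)
rng = np.random.default_rng(1)
i_mut = mutual_inclination(np.pi / 2.0, np.deg2rad(60.0),
                           rng.uniform(0.0, 2.0 * np.pi, size=100_000))
```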
Until now, the inclination of the CJ has been assumed to follow an isotropic distribution (see Section 2), and thus the mutual inclination also follows an isotropic distribution. To see if we could distinguish between isotropic and, for example, Rayleigh distributions for the mutual inclination, we repeated the same simulations but considering that the mutual inclination followed a Rayleigh distribution with \(\sigma=5\) and \(20^{\circ}\) (hereafter R5 and R20). Using equation (11) and setting \(i_{\rm in}=90^{\circ}\) (transiting), we obtained a new distribution for the inclination of the CJs. With these new distributions, we re-ran the simulations and obtained that we should detect 191 and 202 CJ companions to TOIs for R5 and R20, respectively, compared to 206 in the isotropic case. Out of these detections, we expect to have the inclination well constrained for 149 and 121 of them for R5 and R20, respectively, compared to 118 in the isotropic case. In other words, because of the correlation in orbital inclinations between inner and outer planets that forces the CJ to have more inclined orbits (more edge-on), it becomes slightly more difficult to detect the CJ, but once a CJ is detected, it is easier to measure its inclination.
We generated random samples following those distributions (Uniform, R5, and R20) with their respective numbers of well-constrained inclinations (118, 149, and 121) to compare them (see an example in Figure 4) via Kolmogorov-Smirnov (KS) tests. In a single KS test, the null hypothesis was that the two samples were drawn from the same underlying distribution. We set the threshold to be \(p>0.05\) if the hypothesis is to be accepted. Based on 100 simulations, we find the null hypothesis can always be rejected for KS tests between the R5 model and any of the other two models, whereas the null hypothesis is rejected 90% of the time for KS tests between the R20 and the Uniform models. We conclude that, with the expected numbers estimated in this paper, we will always be able to distinguish between R5 and the other two models, and between R20 and the Uniform models most of the time. The conclusion remains unchanged even if the numbers of well-measured inclinations are all cut in half.
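One way to realize this comparison is sketched below, under our assumption that a mutually inclined CJ orbit is obtained by tilting the inner orbit normal by \(i_{\rm mut}\) with a uniform azimuth \(\phi\) (which for \(i_{\rm in}=90^{\circ}\) gives \(\cos i_{\rm CJ}=\sin i_{\rm mut}\cos\phi\)); the sample sizes follow the yields quoted above:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)

def abs_cos_i_cj(model, n, rng):
    """Draw |cos i_CJ| for a transiting inner planet (i_in = 90 deg)."""
    if model == "Uniform":
        return rng.uniform(0.0, 1.0, n)          # isotropic orientations
    sigma_deg = {"R5": 5.0, "R20": 20.0}[model]
    i_mut = np.deg2rad(rng.rayleigh(sigma_deg, n))
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    return np.abs(np.sin(i_mut) * np.cos(phi))

# Mock well-measured samples with the expected yields per distribution
samples = {m: abs_cos_i_cj(m, n, rng)
           for m, n in [("Uniform", 118), ("R5", 149), ("R20", 121)]}
stat, p = ks_2samp(samples["R5"], samples["Uniform"])
distinguishable = p < 0.05   # reject the null hypothesis at the paper's threshold
```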
If we restrict our sample to two gas giants--namely a transiting HJ/WJ and a Gaia CJ--we expect to have 62, 48, and 46 systems with well-constrained inclinations for the R5, R20, and uniform inclination distributions, respectively. These numbers allow us to always distinguish between R5 and the other two distributions, as well as between R20 and the uniform distribution most of the time. If the numbers of well-measured inclinations are cut in half, R20 and Uniform will be distinguishable only \(\sim 30\%\) of the time.
Figure 3: Uncertainty in inclination as a function of the number of RV data taken for 3 fixed systems. _Left:_ A system detectable with only astrometry and with the inclination well constrained (\(\sigma^{\rm astro}_{\cos i}\approx 0.003\)). _Center:_ A system detectable with astrometry but with the inclination not well constrained (\(\sigma^{\rm astro}_{\cos i}\approx 0.45\)). _Right:_ A system not detectable with only astrometry (\(\sigma^{\rm astro}_{\cos i}\approx 1.5\)). Different colors represent different precisions for the instrument used to measure the RV. The black dashed line corresponds to the analytic limit in Equation 10 reachable in each case.
In turn, if we restrict our samples to an inner SJ and a CJ, we should have the inclination of the CJ well constrained for 87, 73, and 72 systems if the mutual inclination follows R5, R20, or uniform, respectively. Similar to the whole sample and to the case of HJs/WJs, with those numbers we will always be able to distinguish between R5 and the other two models, and between R20 and the uniform models most of the time. If the numbers of well-measured inclinations are cut in half, R20 and Uniform will be distinguishable only \(\sim\) 30% of the time.
### Caveats
In this work, we studied the capability of Gaia to detect CJs in the current population of TOIs with the idea of constraining the mutual inclination between the transiting planet and its outer giant companion. A strong correlation between the inner planets and the outer giant ones has been adopted in our work. Although a correlation is supported by several pieces of observational evidence, there is an ongoing debate regarding the strength of this correlation and whether it applies to all stellar host types (e.g., Bryan et al., 2016; Zhu and Wu, 2018; Bryan et al., 2019; Herman et al., 2019; Masuda et al., 2020; Rosenthal et al., 2022). This leads to an additional source of uncertainty in the derived numbers of systems with mutual inclination measurements. We will not explore this uncertainty further in the current work, as our primary goal is to investigate the power of Gaia in constraining the flatness of the planetary system. Nevertheless, it is worth noting that the number of actual detections should provide useful constraints on the strength and generality of the inner-outer correlation as well.
Also, we have not considered the possibility that the same systems contain additional planets and their impact on our results so far. In principle, there could be planets that are either undetectable or marginally detectable, such as in the case of \(\pi\) Men, where recently Hatzes et al. (2022) revealed the presence of a third planet on a 125-day orbit. Because only CJs are detectable with Gaia observations, only the presence of a second CJ in the system can affect the measurements of parameters for the detected planet. But, as we argue next, the signal contamination from these potential second CJs is expected to be low.
Ground-based RV observations have enabled studies of the fraction of systems with multiple CJs. Recent work by Zhu (2022) analyzed the California Legacy Survey data (Rosenthal et al., 2021) and derived the intrinsic multiplicity distributions of different planet classes. According to that study, about 27% of CJ systems have at least two CJs. This serves as a theoretical upper limit if one is to estimate the fraction of Gaia CJ systems with multiple planet detections. Furthermore, considering that the ground-based RV surveys have better coverage in the planet mass-semi-major axis plane than Gaia astrometry, the above upper limit can be further reduced. According to Zhu (2022), there are only eight two-CJ systems out of the 49 systems with CJs in the CLS sample. This puts an upper limit of \(\sim 16\%\) on the fraction of CJ systems with multiple CJ detections in the Gaia sample.
From a theoretical point of view, a system with two giant planets may be unstable. The star with the median probability of detecting the CJ in this study was TOI-5612, and the typical planet detected here was a \(\sim 9\,\mathrm{M_{J}}\) planet at 3.3 au with an orbital eccentricity of 0.2. Using this planet as the one detected, we studied the stability of the system if there was another CJ drawn from the same population. From a population of 100,000 CJs, and using the stability criterion from Petrovich (2015), we found that only \(\sim 20\%\) of the simulated two-planet systems are stable. Furthermore, only in \(\sim 10\%\) of the stable systems does the second planet produce a comparable astrometric signal compared to the first planet. Therefore, the fraction of systems that will be affected by planet multiplicity is small. We leave a detailed study of these multi-planet systems to some future study.
## 6 Conclusions
We have performed injection-recovery simulations of the Gaia astrometric observations for the current sample
Figure 4: A random cumulative distribution for the absolute value of the cosine of the inclination of the cold Jupiter generated assuming that the mutual inclination follows a Rayleigh distribution with \(\sigma=5\) and \(20^{\circ}\), and Uniform.
of TOIs (5,625) in order to estimate the detection yields of CJs in these systems as well as their sky-projected inclinations, thereby constraining the mutual inclination between the transiting planet and its outer companion. We find the following results:
* Under the assumption that the mutual inclination distribution is isotropic, out of the estimated 3,340 TOIs with CJ companions, Gaia should detect 206 and have the inclination well constrained for 118 of them. Nearly 60% (72/118) of these correspond to TOIs with sub-Jovian size candidates.
* If the mutual inclination follows a Rayleigh distribution with \(\sigma=5^{\circ}\) and \(20^{\circ}\) (R5 and R20), Gaia should detect 191 and 202 CJs and have the inclination well-constrained for 149 and 121 of them, respectively. With those numbers, we can confidently distinguish between the R5 model and the models with broader distributions (R20 and Uniform), while R20 and the Uniform models can be distinguished most of the time. These conclusions remain unchanged even if the numbers of well-measured inclinations are all cut by half.
* The uncertainties in the CJ inclinations can be reduced significantly if complementary RV observations are taken on the Gaia targets. This is especially true for systems in which astrometry alone provides a poor constraint. The RV follow-up strategy should be assessed on a case-by-case basis. We provide a Python script to quickly compute the expected uncertainties using our Fisher matrix formalism.
Overall, our simulations show that Gaia's astrometric measurements of planet-hosting stars from TESS will constrain the flatness of systems hosting inner transiting planets and outer cold Jupiters at levels that can distinguish dynamically cold (mean mutual inclination \(\lesssim 5^{\circ}\)) from dynamically hot systems (mean mutual inclination \(\gtrsim 20^{\circ}\)), thus placing a new set of constraints on their formation and evolution.
We would like to thank Gijs Mulders and Andres Jordan for their useful discussions and help with the HTOF tool. We also thank the anonymous referee for comments and suggestions on the manuscript. This research has made use of the Exoplanet Follow-up Observation Program website, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. JIER acknowledges support from the National Agency for Research and Development (ANID) Doctorado Nacional grant 2021-21212378. Work by WZ is supported by the National Science Foundation of China (grant No. 12173021 and 12133005) and CASSACA grant CCJRF2105. CP acknowledges support from ANID Millennium Science Initiative-ICN12_009, CATA-Basal AFB-170002, ANID BASAL project FB210003, FONDECYT Regular grant 1210425, CASSACA grant CCJRF2105, and ANID+REC Convocatoria Nacional subvencion a la instalacion en la Academia convocatoria 2020 PAI77200076.
|
2310.12986 | A survey of manifold learning and its applications for multimedia | Manifold learning is an emerging research domain of machine learning. In this
work, we give an introduction into manifold learning and how it is employed for
important application fields in multimedia. | Hannes Fassold | 2023-09-08T07:16:45Z | http://arxiv.org/abs/2310.12986v1 | # A survey of manifold learning and its applications for multimedia
###### Abstract
Manifold learning is an emerging research domain of machine learning. In this work, we give an introduction into manifold learning and how it is employed for important application fields in multimedia.
## 1 Introduction
Deep learning methods are nowadays the best way for the automatic analysis of multimedia data (e.g. images, video or 3D data) for tasks like classification or detection. However, classic neural networks are restricted to data lying in vector spaces, while data residing in smooth non-Euclidean spaces arise naturally in many problem domains. For example, a \(360^{\circ}\) camera actually captures a spherical image, not a rectangular image. We will focus in the following on manifolds, especially Riemannian manifolds, which are well suited for generalizing a vector space because they are locally Euclidean and differentiable.
A _manifold_\(M\) of dimension \(d\) corresponds to a topological structure which locally (i.e., in the neighborhood of a point \(\boldsymbol{p}\in M\)) looks like a \(d-\)dimensional Euclidean space. The "best" local approximation of this neighborhood of \(\boldsymbol{p}\) with a \(d-\)dimensional Euclidean space is its _tangent space_\(T_{p}M\). The tangent space \(T_{p}M\) can be seen as a linear approximation of \(M\) around \(\boldsymbol{p}\). For example, for a 2-dimensional manifold its tangent space \(T_{p}M\) is the tangent plane going through this point (see Figure 1). A _Riemannian manifold_ is a smooth manifold \(M\) equipped with a positive definite inner product \(g_{p}\) on the tangent space \(T_{p}M\) of each point \(\boldsymbol{p}\).
The inner product \(g\) induces a norm on the tangent space, which subsequently allows us to calculate curve lengths and distances on the manifold \(M\). For each curve \(c(t)\) on the manifold, its length can be calculated by integrating the norm along the curve (for details see [26, 29, 44, 49]). A _geodesic_ curve is a _length-minimizing_ curve connecting two points \(\boldsymbol{p}\) and \(\boldsymbol{q}\) on the manifold. The distance between these points is defined as the length of the geodesic.
Let \(\boldsymbol{p}\) be a (reference) point on the manifold and \(v\) a vector of its tangent space \(T_{p}M\). The vector \(v\) can be mapped now to the point \(\boldsymbol{q}\) on the manifold that is reached after unit time \(t=1\) by the geodesic \(c(t)\) starting at \(\boldsymbol{p}\) with tangent vector \(v\). This mapping \(exp_{p}(v):T_{p}M\to M\) is called the exponential map at point \(\boldsymbol{p}\).
The inverse mapping \(log_{p}(\boldsymbol{q}):M\to T_{p}M\) is uniquely defined around a neighborhood of \(\boldsymbol{p}\). Informally, the exponential map and logarithm map move points back and forth between the manifold and the tangent space (see Figure 1) while preserving distances. Furthermore, derivative operators like _differential_, _intrinsic gradient_, _divergence_ and _laplacian_ can be also defined on a manifold [53, 11], which allows us to perform calculus on the manifold.
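For the unit sphere \(S^{n}\) these two maps have simple closed forms; the following sketch (our own) makes the informal description concrete:

```python
import numpy as np

def sphere_exp(p, v, eps=1e-12):
    """Exponential map on the unit n-sphere: tangent vector v at p -> point on S^n."""
    norm_v = np.linalg.norm(v)
    if norm_v < eps:
        return p
    return np.cos(norm_v) * p + np.sin(norm_v) * (v / norm_v)

def sphere_log(p, q, eps=1e-12):
    """Logarithm map on the unit n-sphere: point q -> tangent vector at p."""
    cos_theta = np.clip(np.dot(p, q), -1.0, 1.0)
    theta = np.arccos(cos_theta)            # geodesic distance between p and q
    if theta < eps:
        return np.zeros_like(p)
    u = q - cos_theta * p                   # component of q orthogonal to p
    return theta * u / np.linalg.norm(u)

# Round trip: mapping q to the tangent space at p and back recovers q
p = np.array([0.0, 0.0, 1.0])
q = np.array([1.0, 0.0, 0.0])
assert np.allclose(sphere_exp(p, sphere_log(p, q)), q)
```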
Closely related to manifolds are Lie groups. A _Lie group_ is a smooth manifold that also forms a _group_[26], where both group operations (commonly called _multiplication_ and _inverse_) are smooth mappings of manifolds. The _Lie algebra_\(\mathfrak{g}\) of a Lie group \(M\) is defined as the tangent space at the identity \(T_{e}M\), where \(e\) is the identity element of the group (see section 16 in [53]).
Key components of neural networks - like mean, convolution, nonlinearities and batch normalization - can be defined on Riemannian manifolds as described in [10, 14, 15, 36, 60]. Optimization algorithms for Riemannian manifolds (gradient descent, SGD, Adam etc.) can be found in [1, 2, 7, 18, 34, 35, 48].
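As an illustration of how such optimizers differ from their Euclidean counterparts, here is a minimal Riemannian gradient descent on the sphere (our own sketch, not one of the cited algorithms), using the tangent-space projection of the Euclidean gradient and a normalization retraction as a common, cheaper stand-in for the exponential map:

```python
import numpy as np

def riemannian_gd_sphere(egrad, x0, lr=0.1, steps=500):
    """Riemannian gradient descent on the unit sphere.

    Each step projects the Euclidean gradient onto the tangent space at x,
    takes a step, and retracts back onto the sphere by normalization.
    """
    x = x0 / np.linalg.norm(x0)
    for _ in range(steps):
        g = egrad(x)
        g_tan = g - np.dot(g, x) * x        # tangent-space projection at x
        x = x - lr * g_tan
        x = x / np.linalg.norm(x)           # retraction back onto S^n
    return x

# Toy example: minimizing f(x) = -x^T A x over S^2 recovers the top eigenvector of A
A = np.diag([3.0, 2.0, 1.0])
x_star = riemannian_gd_sphere(lambda x: -2.0 * A @ x, np.array([1.0, 1.0, 1.0]))
```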
Commonly encountered examples of Riemannian manifolds in computer vision are the \(n-\)sphere \(S^{n}\), the manifold of \(n\times n\) symmetric positive definite matrices \(P_{n}\), the special orthogonal group \(SO(n)\) (rotation matrices), the special euclidean group \(SE(n)\) (rigid body transformations), the Grassmann manifold \(Gr(n,p)\) (collection of all \(p-\)dimensional linear subspaces in \(\mathbb{R}^{n}\), see [9]) and the Stiefel manifold \(St(n,p)\) (collection of all \(p\)-dimensional orthogonal bases in \(\mathbb{R}^{n}\)).
In the following, we will give an overview of manifold learning methods employed in important application fields in multimedia (similarity search, image classification, synthesis & enhancement, video analysis, 3D data processing, nonlinear dimension reduction) and about available open source software frameworks.
Figure 1: Tangent space and exponential map on a 2-dimensional manifold. Image courtesy of [40].
## 2 Similarity search & retrieval
Image retrieval deals with searching for similar images in an image gallery, given a certain query image (see the surveys [17, 24]). Many methods employ for this _metric learning_, which transforms input images into _embeddings_ (\(\approx\) feature vectors) and learns a distance function between these embeddings.
The authors of [5] propose _regularized ensemble diffusion_ for refining/reranking the initial similarity search results. They show that regularized ensemble diffusion is significantly more robust against noise in the data than standard diffusion. A _diffusion_ process [22] models the relationship between objects on a graph-based manifold, wherein similarity values are diffused along the geodesic path in an iterative way.
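For intuition, the sketch below shows the plain (unregularized) graph-diffusion scheme that such re-ranking methods build on; it is not the regularized ensemble diffusion of [5] itself, and the parameter values are illustrative:

```python
import numpy as np

def diffuse_similarities(W, f0, alpha=0.9, n_iter=30):
    """Diffuse query-to-gallery similarities over an affinity graph.

    W:  symmetric affinity matrix of the gallery (zero diagonal).
    f0: initial similarity vector of the query to all gallery items.
    Iterates f <- alpha * S f + (1 - alpha) * f0 with the symmetrically
    normalized graph S = D^-1/2 W D^-1/2 (the classic scheme of Zhou et al.).
    """
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    S = W * np.outer(d_inv_sqrt, d_inv_sqrt)   # D^-1/2 W D^-1/2 without forming D
    f = f0.copy()
    for _ in range(n_iter):
        f = alpha * S @ f + (1.0 - alpha) * f0
    return f                                    # refined similarities for re-ranking
```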
In [30] an unsupervised framework is presented for the identification of _hard training examples_ for the training of an embedding. Hard training examples (both positive and negative samples) are identified by disagreement between euclidean and manifold similarities.
A time- and memory-efficient algorithm for estimating similarities on the data manifold is proposed in [4]. They adapt the random walk procedure to estimate manifold similarities on only a small number of data in each mini-batch, rather than on all training data.
The \(MLS^{3}RDUH\) algorithm [54] utilizes the intrinsic manifold structure in the feature space and cosine similarity to reconstruct the local semantic structure and build a similarity matrix upon it. Then a novel _log-cosh_ hashing loss function is used to optimize the hashing network in order to generate compact hash codes, guided by the similarity matrix.
The work of [25] proposes an unsupervised metric learning algorithm that learns a metric in a lower dimensional latent space using constraints provided as tuples, which rely on pseudo-labels obtained by a graph-based clustering method (_authority ascent shift_). The parameters of the approach are learned jointly using Riemannian optimization on a product manifold.
## 3 Image classification & object detection
The work [34] proposes a framework for the transformation of problems with manifold constraints into unconstrained problems on a Euclidean space through a mechanism they call _dynamic trivializations_. They show how to implement these trivializations efficiently for a large variety of commonly used matrix manifolds and provide a formula for the gradient of the matrix exponential.
The authors of [55] propose _manifold mixup_, a novel regularizer which forces the training to interpolate between hidden representations - captured in the intermediate layers of the network - of samples. It can be seen as a generalization of input mixup which does the interpolation on a random layer of the network (whereas input mixup always uses layer 0). Experiments for the task of image classification show that manifold mixup flattens the class-specific representation (lower variance) and generates a smoother decision boundary.
In [3]_Hyperbolic Busemann learning_ with ideal prototypes is introduced. It places class prototypes at the ideal boundary of the _Poincare ball_ (a hypersphere manifold with hyperbolic geometry) and introduces the _penalized Busemann loss_ for optimizing with respect to ideal prototypes. They prove its equivalence to logistic regression for the one-dimensional case.
An approach for few-shot image classification is presented in [46] which proposes _embedding propagation_ as an unsupervised non-parametric regularizer. Embedding propagation leverages interpolation between the extracted features of a neural network, based on a similarity graph. Experiments show that embedding propagation yields a smoother embedding manifold and gives better performance on three standard datasets for few-shot image classification.
The work [50] introduces a knowledge distillation method which is able to transfer an existing CNN model trained on perspective images to _spherical_ images captured with a \(360^{\circ}\) camera _without_ any additional annotation effort (see Figure 2). They train a spherical Faster R-CNN model with this method, demonstrating that a object detector for spherical images (in equirectangular projection) can be trained without any annotations in the \(360^{\circ}\) images.
## 4 Image synthesis & enhancement
For image synthesis and enhancement, state of the art algorithms employ either GANs (_generative adversarial networks_[59]) or _diffusion models_[20].
The authors of [19] show that current solvers employed in diffusion models throw the generative sample path off the data manifold, causing the error to accumulate. They propose an additional correction term inspired by the manifold constraint to force the iterations to be close to the data manifold. The proposed manifold constraint is easy to add to a solver, yet boosts its performance significantly.
In [21] a novel implicit data augmentation approach
Figure 2: Transfer CNNs trained on flat images to \(360^{\circ}\) images with the method from [50].
for training GANs is proposed which facilitates stable training and synthesizes high-quality samples. Specifically, the discriminator is interpreted as a metric embedding of the real data manifold, which offers real distances between real data samples. Experiments show that the proposed method improves the performance of image synthesis in the low-data regime.
A method for comparing data manifolds based on their topology is presented in [6]. They introduce novel tools, specifically _cross-barcode_ and _manifold topology divergence score_, which are able to track spatial discrepancies between manifolds on multiple scales. They apply it to assess the performance of generative models in various domains (images, 3D shapes or time series) and demonstrate that these tools are able to detect common problems of GAN-based image synthesis like mode dropping, mode collapse and image disturbance.
The work [37] proposes _progressive attentional manifold alignment_ for style transfer, which progressively aligns content manifolds to their most related style manifolds. Afterwards, _space-aware interpolation_ is performed in order to increase the structural similarity of the corresponding manifolds, which makes it easier for the attention module to match features between them. Experiments show that the method generates high-quality style-transferred images (see Figure 3).
The authors of [45] propose an algorithm for improving the diversity and visual quality of images generated by a conditional GAN, by systematically encouraging a _bi-Lipschitz_ mapping between the latent and output manifold. The performance improvement is shown on several image-to-image translation tasks, like landmark-to-face or sketch-to-anime.
The FLAME algorithm proposed in [42] performs highly realistic image manipulations (e.g. changing expression, hair style or age of a synthetic face, see Figure 4) with minimal supervision. It estimates linear latent directions in the latent space of _StyleGAN2_ using only a few image pairs and introduces a novel method for sampling from the attribute style manifold.
## 5 Video analysis
Most manifold learning methods for video analysis deal with the important task of human action recognition. Often they employ neural networks over the manifold \(P_{n}\) of symmetric positive definite matrices (usually covariance matrices) for this.
The authors of [60] propose a _dilated convolution_ operator on manifolds, based on the _weighted Frechet mean_[15], as well as a _residual connection_ operator. Both are important building blocks of modern neural networks. They construct a manifold-valued network employing covariance matrices (calculated from CNN features) and train this network for human action detection on the UCF-11 video dataset.
In [10] the convolution is defined as the weighted sum (reprojected to the manifold) in the tangent space \(T_{a}M\), where \(a\) is the Frechet mean of the input points for the convolution. They show that their proposed convolution operator is an isometry of the manifold, which corresponds to the translation-invariance property of the convolution in a Euclidean space.
The work [27] proposes a geometry-aware deep learning algorithm for skeleton-based action recognition, where skeleton sequences are modeled as trajectories on _Kendall's shape space_ and then fed into a CNN-LSTM network. Kendall's shape space [28, 32] is a special quotient manifold that defines shape as the geometric information that remains when location, scaling and rotational effects are filtered out.
The algorithm [57] adopts a neural network over the manifold \(P_{n}\) of symmetric positive definite matrices as the backbone and appends a cascade of _Riemannian autoencoders_ to it in order to enrich the information flow within the network. Experiments on the tasks of emotion recognition, hand action recognition and human action recognition demonstrate a favourable performance compared to state of the art methods.
## 6 3D data processing
The work [41] proposes a novel algorithm for geometric disentanglement (separate intrinsic and extrinsic geometry) of 3D models, based on the fundamental theorem for surfaces. They describe surface features via a combination of _conformal factors_ and surface normal vectors and propose a convolutional mesh autoencoder based on these features. The conformal factor defines a conformal (angle-preserving) deformation between two manifolds. The algorithm achieves state-of-the-art performance on 3D surface generation, reconstruction and interpolation tasks (see Figure 5).
Figure 4: Image editing with FLAME [42].
Figure 3: From left to right: Content image, style image, style-transferred image [37].
The authors of [8] propose an approach for learning generative models on manifolds by minimizing the _probability path divergence_. Unlike other continuous flow approaches, it does not require solving an ordinary differential equation during training.
In [16] a method for rotation (pose) estimation of 3D objects from point clouds and images is presented. For this, they propose a novel _manifold-aware_ gradient in the backward pass of rotation regression that directly updates the neural network weights.
The work [33] introduces _intrinsic neural fields_, a novel and versatile representation for neural fields on manifolds. Intrinsic neural fields are based on the eigenfunctions of the _Laplace-Beltrami_ operator, which can represent detailed surface information directly on the manifold. Furthermore, they extend _neural tangent kernel analysis_ to manifolds for better insight into the spectral properties of neural fields.
## 7 Nonlinear dimension reduction
Many real-world high-dimensional datasets actually lie on a low-dimensional manifold (_manifold hypothesis_). Nonlinear dimension reduction algorithms project high-dimensional data onto such a low-dimensional manifold, while trying to preserve distance relationships in the original high-dimensional space as well as possible.
Classical approaches for nonlinear dimension reduction are Isomap, Local Linear Embedding (LLE) and Laplacian Eigenmaps (see the survey in [13]). In recent years, more powerful approaches like _t-SNE_, _UMAP_, _TriMAP_ and _PaCMAP_ have emerged [58]. From these, PaCMAP seems to preserve best both the global and local structure of the high-dimensional data.
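In practice, most of these modern methods share an almost identical fit/transform interface, so they are easy to compare on the same data. Below is a minimal sketch, assuming scikit-learn and the umap-learn package are installed (PaCMAP offers a similar fit_transform call):
```
# Embed the 64-dimensional digits data into 2D with two different
# nonlinear dimension reduction methods.
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE
import umap

X = load_digits().data                        # 1797 samples x 64 features
emb_tsne = TSNE(n_components=2, perplexity=30).fit_transform(X)
emb_umap = umap.UMAP(n_components=2, n_neighbors=15).fit_transform(X)
print(emb_tsne.shape, emb_umap.shape)         # (1797, 2) (1797, 2)
```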
In [47], the _h-NNE_ algorithm is proposed, which is competitive with t-SNE and UMAP in quality while being an order of magnitude faster. The significant runtime advantage is possible as h-NNE avoids solving an optimization problem and relies on _nearest neighbor graphs_ instead.
The _SpaceMAP_ algorithm [61] (see Figure 6) introduces the concept of _equivalent extended distance_, which makes it possible to match the capacity between two spaces of different dimensionality. Furthermore, _hierarchical manifold approximation_ is performed based on the observation that real-world data often has a hierarchical structure.
The _DIPOLE_ algorithm proposed in [56] corrects an initial embedding (e.g. calculated via Isomap) by minimizing a loss functional with both a local, metric term and a global, topological term based on _persistent homology_. Unlike more ad hoc methods for measuring the shape of data at multiple scales, persistent homology is rooted in algebraic topology and enjoys strong theoretical foundations.
For measuring the intrinsic dimension of a data distribution, in [51] a method is presented based on recent progress in likelihood estimation in high dimensions via _normalizing flows_.
## 8 Open source software frameworks
The Python packages _Geomstats_[38, 39], _geoopt_[7] and _Pymanopt_[52] provide implementations of the standard operators (norm, distance, exp, log, retraction, parallel transport etc.) for commonly used manifolds like \(S^{n}\), \(P_{n}\), \(SO(n)\), \(SE(n)\), \(Gr(n,p)\) and \(St(n,p)\).
_Geomstats_ and _geoopt_ also support more exotic manifolds like the Birkhoff polytope [23], the stereographic projection model, Kendall's shape space [28, 32], the Poincare polydisc or hyperbolic space. Furthermore, _geoopt_ provides optimizers like SGD or Adam and the sampling from a probability distribution on the manifold, whereas _Geomstats_ provides Frechet mean estimators, \(K\)-means, and principal component analysis.
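Since all three packages expose these operators through a similar object-oriented interface, a small example conveys the flavor of the API. Below is a minimal sketch using _Geomstats_; note that constructor signatures vary slightly between versions (e.g. older releases expect FrechetMean(metric=...) instead of the space object):
```
# A minimal sketch with Geomstats: Frechet mean and exp/log on the 2-sphere.
from geomstats.geometry.hypersphere import Hypersphere
from geomstats.learning.frechet_mean import FrechetMean

sphere = Hypersphere(dim=2)                  # S^2, embedded in R^3
points = sphere.random_point(n_samples=10)   # 10 random points on the sphere

mean = FrechetMean(sphere)                   # iterative Frechet mean estimator
mean.fit(points)
m = mean.estimate_                           # a unit-norm 3-vector on S^2

# Standard operators: log maps a point to the tangent space at m, exp maps back.
v = sphere.metric.log(points[0], base_point=m)
p = sphere.metric.exp(v, base_point=m)       # recovers points[0] up to numerics
```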
_Theseus_[43] provides _differentiable_ optimizers (Gauss-Newton, Levenberg-Marquardt) and solvers (dense and sparse versions of Cholesky and LU) as well as the manifolds \(SO(3)\) and \(SE(3)\) which are often used in 3D data processing, robotics and kinematics. The differentiability of the optimizers/solvers makes it possible to include them into a neural network layer or loss function.
## Acknowledgment
This work was supported by European Union's Horizon 2020 research and innovation programme under grant number 951911 - AI4Media.
Figure 5: Generated 3D models with the geometric disentanglement algorithm from [41].
Figure 6: Comparison of classic nonlinear dimension reduction methods with SpaceMAP [61]. |
2309.06648 | Exp[licit]-A Robot modeling Software based on Exponential Maps | Deriving a robot's equation of motion typically requires placing multiple
coordinate frames, commonly using the Denavit-Hartenberg convention to express
the kinematic and dynamic relationships between segments. This paper presents
an alternative using the differential geometric method of Exponential Maps,
which reduces the number of coordinate frame choices to two. The traditional
and differential geometric methods are compared, and the conceptual and
practical differences are detailed. The open-source software, Exp[licit], based
on the differential geometric method, is introduced. It is intended for use by
researchers and engineers with basic knowledge of geometry and robotics. Code
snippets and an example application are provided to demonstrate the benefits of
the differential geometric method and assist users to get started with the
software. | Johannes Lachner, Moses C. Nah, Stefano Stramigioli, Neville Hogan | 2023-09-13T00:06:33Z | http://arxiv.org/abs/2309.06648v1 | # Exp[licit]
###### Abstract
Deriving a robot's equation of motion typically requires placing multiple coordinate frames, commonly using the Denavit-Hartenberg convention to express the kinematic and dynamic relationships between segments. This paper presents an alternative using the differential geometric method of Exponential Maps, which reduces the number of coordinate frame choices to two. The traditional and differential geometric methods are compared, and the conceptual and practical differences are detailed. The open-source software, Exp[licit]™, based on the differential geometric method, is introduced. It is intended for use by researchers and engineers with basic knowledge of geometry and robotics. Code snippets and an example application are provided to demonstrate the benefits of the differential geometric method and assist users to get started with the software.
## I Introduction
In standard robotic textbooks, orthonormal coordinate frames are used to describe robot kinematics and dynamics [1, 2]. When the Denavit-Hartenberg (DH) convention is used, predetermined rules have to be followed to position the coordinate frames and express the translational and rotational relations between them.
While this approach is popular, it has several limitations. First, multiple conventions exist to define the coordinate frames. Within these conventions, different numbers of rules have to be applied. Some conventions need special treatment, e.g., for parallel axes where the description is not unique. Second, a large number of coordinate frames has to be placed. This becomes especially unwieldy for robots with many degrees of freedom (DOF). Third, the kinematics and dynamics are expressed with one fixed set of coordinate frames on the robot bodies; if the kinematics of the robot change, e.g., for re-configurable robots, a new set of DH-parameters has to be assigned [3] and additional efforts have to be made to distinguish between revolute and prismatic joints [4]. Fourth, the choices of task-related stationary and body-fixed frames are restricted which is disadvantageous for algorithms which describe the dynamics of multiple points on different robot bodies, e.g., for whole-body control [5].
In contrast, Differential Geometry can be used as a mathematical framework which lifts the coordinate-level descriptions to the more abstract space of manifolds [6]. Robot kinematics and dynamics can be described as actions on those manifolds [7]. This mathematical abstraction leads to a formulation that requires the least number of coordinate frames to represent the robot's kinematics and dynamics. The theoretical strengths of geometric methods have been shown in excellent textbooks [8, 4, 9] and tutorial papers [10, 11, 12]. Papers that compare traditional and geometric methods emphasize algorithmic and computational aspects [13, 14, 11] but detailed discussion of conceptual and practical differences (e.g., the brief overview in [4]) is rare.
Many powerful software tools exist to simulate and control robots [15, 16, 17]. Since these tools usually offer extensive features, they present an "overhead cost" to learn how to use the software [18, 19]. This might impede first-time users, e.g., students that want to simulate a simple robot for a robotic class.
The main contribution of this paper is a practice-oriented comparison of the traditional and geometric approaches. The first part of the paper details the conceptual differences to derive robot kinematics and dynamics. We show that the geometric method is highly modular, flexible, and requires the least number of coordinate frames. The second part focuses on practical implementation. We introduce our software Exp[licit]™, a simple MATLAB robotic toolbox which leverages advantages of the geometric method. By providing Exp[licit]1, we want to empower robotic researchers to experience the practical benefits of the geometric method.
Footnote 1: [https://explicit-robotics.github.io/](https://explicit-robotics.github.io/)
Footnote 2: From now on, we use βframe(s)β to refer to βcoordinate frame(s)β.
## II Derivation of Robot Kinematics and Dynamics
This section shows a detailed comparison of both approaches. The theoretical derivation is focused on the Forward Kinematic Map, Jacobian matrix, and Mass Matrix of a \(n\)-DOF robot. A computational comparison with the RVC MATLAB toolbox [17] also includes the gravity and centrifugal/Coriolis terms (fig. 1). More details about the computational comparison are presented in sec. III-B7.
### _Preliminaries_
The set of all robot configurations \(\mathbf{q}\) constitute the manifold \(\mathcal{Q}\) and the set of all homogeneous transformations \(H\) constitute the manifold \(SE(3)\). To represent the robot's workspace motion, either a stationary or body-fixed coordinate frame has to be chosen.3 We assume one stationary
frame \(\{S\}\), attached to the fixed base of the robot. Moreover, we denote \(\{B\}\) as a body-fixed frame, which can be attached to any point of the robot. Often, \(\{B\}\) coincides with the tool center point (i.e., the end-effector) of the robot. In this case, we denote \(\{B\}\) as \(\{ee\}\).
For a given joint configuration \(\mathbf{q}\in\mathcal{Q}\), the orientation and translation of \(\{ee\}\) with respect to \(\{S\}\) can be derived via the _Forward Kinematic Map_, \(\mathcal{Q}\to SE(3)\) and represented by the _Homogeneous Transformation Matrix_\({}^{S}\mathbf{H}_{ee}(\mathbf{q})=\begin{pmatrix}{}^{S}\mathbf{R}_{ee}&{}^{S}\mathbf{p}_{ee} \\ 0&1\end{pmatrix}\in SE(3)\). Here, \({}^{S}\mathbf{R}_{ee}\in SO(3)\) is the _Rotation matrix_ of \(\{ee\}\) with respect to \(\{S\}\) and \({}^{S}\mathbf{p}_{ee}\in\mathbb{R}^{3}\) is the translation from \(\{S\}\) to \(\{ee\}\).
For a given joint motion \(\dot{\mathbf{q}}\in\mathbb{R}^{n}\), the workspace motion of the robot's end-effector can be derived via the _Hybrid Jacobian Matrix_ \({}^{H}\mathbf{J}(\mathbf{q})\in\mathbb{R}^{6\times n}\), and represented by a 6D-vector of workspace velocities, called _Spatial Velocity_ \({}^{S}\mathbf{V}_{ee}=\begin{pmatrix}{}^{S}\mathbf{v}_{ee}\\ {}^{S}\mathbf{\omega}\end{pmatrix}\in\mathbb{R}^{6}\). Here, \({}^{S}\mathbf{V}_{ee}\) incorporates the linear velocity \({}^{S}\mathbf{v}_{ee}\in\mathbb{R}^{3}\) of the origin of \(\{ee\}\) with respect to \(\{S\}\) and the angular velocity \({}^{S}\mathbf{\omega}\in\mathbb{R}^{3}\) of the end-effector body, both expressed in \(\{S\}\).
The total kinetic co-energy \(\mathcal{L}(\mathbf{q},\dot{\mathbf{q}})\in\mathbb{R}\) of an \(n\)-DOF robot is the sum of all contributions of kinetic co-energy stored by individual bodies: \(\mathcal{L}(\mathbf{q},\dot{\mathbf{q}})=\frac{1}{2}\dot{\mathbf{q}}^{T}\mathbf{M}(\mathbf{q})\dot {\mathbf{q}}\)[4]. The matrix \(\mathbf{M}(\mathbf{q})\in\mathbb{R}^{n\times n}\) is called the _Mass Matrix_ of the robot.
### _Traditional Method_
#### II-B1 Forward Kinematic Map via DH-convention
The DH-convention [20] is widely used to derive the Forward Kinematic Map. It is a set of rules to place body-fixed frames on the robot, and to derive the parameters that describe the kinematic relation between adjacent frames [4]. Within the multiple DH-conventions [21, 22], we outline the modified DH-convention which consists of four DH-parameters: link length \(a\), link twist \(\alpha\), link offset \(d\), and joint angle \(\theta\)[1, 4, 17].
To derive the DH-parameters, multiple frames have to be placed on the robot, one on each link, using the following rules (fig. 2):
1. Define frames \(\{1\}\), \(\{2\}\), \(\cdots\), \(\{n\}\) on each link, ordered from the base to the end-effector of the robot. Choose axis \(\hat{Z}_{i}\) of frame \(\{i\}\) to be aligned with the \(i\)-th joint. For a revolute (prismatic) joint, direction of \(\hat{Z}_{i}\) is along the positive direction of rotation (translation).
2. For \(i=1,2,...,n-1\), find a line segment that is mutually perpendicular to axes \(\hat{Z}_{i}\) and \(\hat{Z}_{i+1}\). The intersection between this line and \(\hat{Z}_{i}\) is the origin of frame \(\{i\}\). Moreover, axis \(\hat{X}_{i}\) is chosen to be aligned with this line segment, pointing from \(\hat{Z}_{i}\) to \(\hat{Z}_{i+1}\).
3. Attach the origin of frame \(\{ee\}\) to the end-effector. To simplify the derivation of the DH-parameters, the \(\hat{Z}_{ee}\) axis is usually chosen to be parallel to \(\hat{Z}_{n}\)[1]. From \(\hat{Z}_{n}\) and \(\hat{Z}_{ee}\), \(\hat{X}_{n}\) is defined using step (ii). Finally, choose \(\hat{X}_{ee}\) such that valid DH-parameters can be defined [4].
4. The \(\hat{Y}\) axes of frames \(\{1\}\), \(\{2\}\), \(\cdots\), \(\{n\}\), \(\{ee\}\) are defined using the right-hand convention.
5. Attach frame \(\{S\}\) to the robot base. Usually, it is chosen to coincide with frame \(\{1\}\) when joint 1 has zero displacement.
After assigning \(n+2\) frames, \(\{S\}\), \(\{1\}\), \(\cdots\), \(\{n\}\), \(\{ee\}\), the \(4(n+1)\) DH-parameters can be expressed. With these parameters, the Homogeneous Transformation Matrix \({}^{i-1}\mathbf{H}_{i}\in SE(3)\) between frame \(\{i-1\}\) and \(\{i\}\) is defined for \(i=1,2,...,n+1\), where \(\{0\}\equiv\{S\}\) and \(\{n+1\}\equiv\{ee\}\). Finally, by concatenating these matrices, the Forward Kinematic Map, \({}^{S}\mathbf{H}_{ee}(\mathbf{q})\) can be derived:
\[{}^{S}\mathbf{H}_{ee}(\mathbf{q})=\ ^{S}\mathbf{H}_{1}(q_{1})\ ^{1}\mathbf{H}_{2}(q_{2})... ^{n-1}\mathbf{H}_{n}(q_{n})\ ^{n}\mathbf{H}_{ee} \tag{1}\]
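Operationally, eq. (1) is just a chain of \(4\times 4\) matrix products. The following is a minimal Python/NumPy sketch, not part of any toolbox; the function names are ours, and the matrix below assumes the modified DH-convention (other conventions order the elementary rotations and translations differently):
```
# Forward kinematics as a chain of homogeneous transforms built from
# modified DH parameters (alpha, a, d, theta), cf. eq. (1).
import numpy as np

def dh_transform(alpha, a, d, theta):
    """{i-1}H{i} for the modified DH-convention."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([[ct,      -st,      0.0,  a      ],
                     [st * ca,  ct * ca, -sa,  -sa * d],
                     [st * sa,  ct * sa,  ca,   ca * d],
                     [0.0,      0.0,      0.0,  1.0   ]])

def forward_kinematics_dh(dh_rows):
    """Concatenate {S}H{1} ... {n}H{ee} as in eq. (1)."""
    H = np.eye(4)
    for alpha, a, d, theta in dh_rows:
        H = H @ dh_transform(alpha, a, d, theta)
    return H
```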
#### II-B2 Jacobian Matrix by separating linear and angular velocities
To derive the Jacobian Matrix, the traditional method separately relates joint velocities to linear and angular workspace velocities [2]. We denote the linear and rotational part of the Jacobian as \(\mathbf{J}(\mathbf{q})_{v}\in\mathbb{R}^{3\times n}\) and \(\mathbf{J}(\mathbf{q})_{\omega}\in\mathbb{R}^{3\times n}\), respectively.
To derive \(\mathbf{J}(\mathbf{q})_{v}\), the position \({}^{S}\mathbf{p}_{ee}\) has to be extracted from \({}^{S}\mathbf{H}_{ee}(\mathbf{q})\) (sec. II-B1). Since \({}^{S}\mathbf{p}_{ee}\) is an analytical function of \(\mathbf{q}\), \(\mathbf{J}(\mathbf{q})_{v}\) collects the partial derivatives of \({}^{S}\mathbf{p}_{ee}\), with respect to the coordinate components of \(\mathbf{q}\). Often, \(\mathbf{J}(\mathbf{q})_{v}\) is called an "Analytical Jacobian" [2].
The matrix \(\mathbf{J}(\mathbf{q})_{\omega}\) is commonly derived using a geometric method and specifying the frames based on DH-convention [2] (sec. II-B1). More specifically, for \(i=1,2,...,n\):
* If the \(i\)-th joint is a revolute joint with unit-rotation axis \({}^{i}\hat{\mathbf{\omega}}_{i}\) expressed in \(\{i\}\), the \(i\)-th column of \(\mathbf{J}(\mathbf{q})_{\omega}\) is \({}^{S}\mathbf{R}_{i}\,{}^{i}\hat{\mathbf{\omega}}_{i}={}^{S}\hat{\mathbf{\omega}}_{i}\).
* If the \(i\)-th joint is a prismatic joint, the \(i\)-th column of \(\mathbf{J}(\mathbf{q})_{\omega}\) is a zero vector.
To calculate the spatial velocity \({}^{S}\mathbf{V}_{ee}\), \(\mathbf{J}(\mathbf{q})_{v}\) and \(\mathbf{J}(\mathbf{q})_{\omega}\) can be vertically concatenated:
\[{}^{S}\mathbf{V}_{ee}={}^{H}\mathbf{J}(\mathbf{q})\ \dot{\mathbf{q}} \tag{2}\]
Due to the analytical derivation of \(\mathbf{J}(\mathbf{q})_{v}\) and the geometrical derivation of \(\mathbf{J}(\mathbf{q})_{\omega}\), we call \({}^{H}\mathbf{J}(\mathbf{q})\) the _Hybrid Jacobian Matrix_.
Fig. 2: Frames attached to an open-chain robot, using the DH-conventions.
#### II-B3 Mass Matrix via Hybrid Jacobians
To derive the Mass Matrix of the robot, it is necessary to attach \(n\) additional frames to the center of mass (COM) of the \(n\) bodies. These will be denoted as \(\{C_{1}\}\), \(\{C_{2}\}\), \(\cdots\), \(\{C_{n}\}\), ordered from the base to the end-effector of the robot. The moment of inertia of the \(i\)-th body with respect to \(\{C_{i}\}\) is denoted \({}^{i}\mathcal{I}_{i}\in\mathbb{R}^{3\times 3}\). To express \({}^{i}\mathcal{I}_{i}\) in \(\{S\}\), the rotation matrix \({}^{S}\boldsymbol{R}_{i}\) is used (sec. II-B1): \({}^{S}\mathcal{I}_{i}={}^{S}\boldsymbol{R}_{i}\ {}^{i}\mathcal{I}_{i}\ {}^{S}\boldsymbol{R}_{i}^{T}\).
For each body \(i\), the Hybrid Jacobian Matrix \({}^{H}\boldsymbol{J}_{i}(\boldsymbol{q})\) is derived to describe the linear and angular velocity of \(\{C_{i}\}\) with respect to \(\{S\}\) (sec. II-B2). Note that for each matrix \({}^{H}\boldsymbol{J}_{i}(\boldsymbol{q})\), the columns from \(i+1\) to \(n\) are set to be zero since they do not contribute to the motion of body \(i\)[2].
Finally, for a given mass \(m_{i}\in\mathbb{R}\) of the \(i\)-th body, \(\boldsymbol{M}(\boldsymbol{q})\in\mathbb{R}^{n\times n}\) can be calculated by:
\[\boldsymbol{M}(\boldsymbol{q})=\sum_{i=1}^{n}m_{i}\,\boldsymbol{J}_{i}(\boldsymbol{q})_{v}^{T}\,\boldsymbol{J}_{i}(\boldsymbol{q})_{v}+\sum_{i=1}^{n}\boldsymbol{J}_{i}(\boldsymbol{q})_{\omega}^{T}\ {}^{S}\boldsymbol{\mathcal{I}}_{i}\ \boldsymbol{J}_{i}(\boldsymbol{q})_{\omega} \tag{3}\]
### _Differential geometric method_
#### II-C1 Forward Kinematic Map via the Product of Exponentials Formula
For the geometric method, only two frames \(\{S\}\) and \(\{ee\}\) have to be chosen and assigned to the initial joint configuration of the robot \(\boldsymbol{q}_{0}\in\mathcal{Q}\). The initial Homogeneous Transformation Matrix is denoted \({}^{S}\boldsymbol{H}_{ee}(\boldsymbol{q}_{0})\equiv{}^{S}\boldsymbol{H}_{ee, 0}\in SE(3)\). In practice it is useful to select \(\{S\}\) and \(\{ee\}\) to have equal orientation (i.e., rotation matrix equals the identity matrix) such that only the translation between \(\{S\}\) and \(\{ee\}\) has to be identified to calculate \({}^{S}\boldsymbol{H}_{ee,0}\).
In the next step, the _Unit Joint Twists4\({}^{S}\hat{\boldsymbol{\eta}}_{i}\in\mathbb{R}^{6}\)_ of each joint at initial joint configuration are expressed with respect to \(\{S\}\). Depending on the type of the \(i\)-th robot joint, \({}^{S}\hat{\boldsymbol{\eta}}_{i}\in\mathbb{R}^{6}\) is defined by:
Footnote 4: For simplicity, we will omit the term βUnitβ in what follows.
* If the \(i\)-th joint is a revolute joint, the unit-axis of rotation is \({}^{S}\hat{\boldsymbol{\omega}}_{i}\). _Any point_ \({}^{S}\boldsymbol{p}_{\eta_{i}}\in\mathbb{R}^{3}\) along \({}^{S}\hat{\boldsymbol{\omega}}_{i}\) can be selected to define \({}^{S}\hat{\boldsymbol{\eta}}_{i}=(-[^{S}\hat{\boldsymbol{\omega}}_{i}]^{S}\boldsymbol{p}_{\eta_{i}},\ ^{S}\hat{\boldsymbol{\omega}}_{i})^{T}\). Here, \([^{S}\hat{\boldsymbol{\omega}}_{i}]\in so(3)\) is the skew-symmetric matrix form of \({}^{S}\hat{\boldsymbol{\omega}}_{i}\)[4]. The operation \([^{S}\hat{\boldsymbol{\omega}}_{i}]^{S}\boldsymbol{p}_{\eta_{i}}\) is equal to \({}^{S}\hat{\boldsymbol{\omega}}_{i}\times{}^{S}\boldsymbol{p}_{\eta_{i}}\).
* If the \(i\)-th joint is a prismatic joint, the unit-axis of translation is \({}^{S}\hat{\boldsymbol{\upsilon}}_{i}\) and therefore \({}^{S}\hat{\boldsymbol{\eta}}_{i}=(^{S}\hat{\boldsymbol{\upsilon}}_{i}, \boldsymbol{0})\).
Note that the \(n\) Joint Twists \({}^{S}\hat{\boldsymbol{\eta}}_{i}\) are defined with respect to a single frame \(\{S\}\). For most robots, the unit-axes of rotation (or translation) can be identified by visual inspection. The positions \({}^{S}\boldsymbol{p}_{\eta_{i}}\) can be determined by using CAD-programs.
Finally, the _Product of Exponentials Formula_[23] can be used to derive the Forward Kinematic Map:
\[\begin{split}{}^{S}\boldsymbol{H}_{ee}(\boldsymbol{q})=& \exp\left([^{S}\hat{\boldsymbol{\eta}}_{1}]q_{1}\right)\ \exp\left([^{S}\hat{\boldsymbol{\eta}}_{2}]q_{2}\right)\\ &\cdots\exp\left([^{S}\hat{\boldsymbol{\eta}}_{n}]q_{n}\right) \ ^{S}\boldsymbol{H}_{ee,0}\end{split} \tag{4}\]
In this equation, \([^{S}\hat{\boldsymbol{\eta}}_{i}]\in se(3)\) is a \(4\times 4\) matrix representation of \({}^{S}\hat{\boldsymbol{\eta}}_{i}\)[8]. Given \({}^{S}\hat{\boldsymbol{\eta}}=(^{S}\boldsymbol{v},^{S}\hat{\boldsymbol{ \omega}})\) and \(q\in\mathbb{R}\), a closed-form solution of \(\exp([^{S}\hat{\boldsymbol{\eta}}]q)\) can be formulated [4]:
\[\begin{split}\exp([^{S}\hat{\boldsymbol{\omega}}]q)=& \mathbb{I}_{3}+\sin q[^{S}\hat{\boldsymbol{\omega}}]+(1-\cos q)[^{S}\hat{ \boldsymbol{\omega}}]^{2}\\ \boldsymbol{G}(q)=&\mathbb{I}_{3}q+(1-\cos q)[^{S} \hat{\boldsymbol{\omega}}]+(q-\sin q)[^{S}\hat{\boldsymbol{\omega}}]^{2}\\ \exp&\left([^{S}\hat{\boldsymbol{\eta}}]q\right)= \begin{bmatrix}\exp([^{S}\hat{\boldsymbol{\omega}}]q)&\boldsymbol{G}(q)^{S} \boldsymbol{v}\\ \boldsymbol{0}&1\end{bmatrix}\end{split} \tag{5}\]
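Eqs. (4) and (5) translate almost line by line into code. Below is a minimal Python/NumPy sketch (the helper names are ours, for illustration; a unit twist is passed as a pair \((\mathbf{v},\hat{\mathbf{\omega}})\), with \(\hat{\mathbf{\omega}}=\mathbf{0}\) covering prismatic joints):
```
# Closed-form exponential of a unit twist (eq. (5)) and the Product of
# Exponentials forward kinematics (eq. (4)).
import numpy as np

def skew(w):
    """Skew-symmetric matrix [w], so that [w] p = w x p."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_twist(v, w, q):
    """exp([eta] q) for a unit twist eta = (v, w)."""
    W = skew(np.asarray(w))
    R = np.eye(3) + np.sin(q) * W + (1.0 - np.cos(q)) * W @ W      # Rodrigues
    G = np.eye(3) * q + (1.0 - np.cos(q)) * W + (q - np.sin(q)) * W @ W
    H = np.eye(4)
    H[:3, :3], H[:3, 3] = R, G @ np.asarray(v)
    return H

def forward_kinematics_poe(twists, q, H_ee_0):
    """eq. (4): exp([eta_1] q_1) ... exp([eta_n] q_n) H_ee_0."""
    H = np.eye(4)
    for (v, w), qi in zip(twists, q):
        H = H @ exp_twist(v, w, qi)
    return H @ H_ee_0
```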
#### II-C2 Jacobian Matrices via the Adjoint Map
For the geometric method, two Jacobian matrices exist: the Spatial Jacobian \({}^{S}\boldsymbol{J}(\boldsymbol{q})\in\mathbb{R}^{6\times n}\) and the Body Jacobian \({}^{B}\boldsymbol{J}(\boldsymbol{q})\in\mathbb{R}^{6\times n}\)[8]. The Spatial (respectively Body) Jacobian relates joint velocities \(\hat{\boldsymbol{q}}\) to the Spatial (respectively Body) Twist \({}^{S}\boldsymbol{\xi}\) (\({}^{B}\boldsymbol{\xi}\))[4, 8]:
\[{}^{S}\boldsymbol{\xi}=\begin{bmatrix}^{S}\boldsymbol{v}_{s}\\ {}^{S}\boldsymbol{\omega}\end{bmatrix}={}^{S}\boldsymbol{J}(\boldsymbol{q}) \hat{\boldsymbol{q}}\qquad{}^{B}\boldsymbol{\xi}=\begin{bmatrix}^{B} \boldsymbol{v}_{b}\\ {}^{B}\boldsymbol{\omega}\end{bmatrix}={}^{B}\boldsymbol{J}(\boldsymbol{q}) \hat{\boldsymbol{q}} \tag{6}\]
Here, \({}^{S}\boldsymbol{\omega}\) (respectively \({}^{B}\boldsymbol{\omega}\)) is the angular velocity of the body, expressed in \(\{S\}\) (respectively \(\{B\}\)); \({}^{S}\boldsymbol{v}_{s}\) is _not_ the velocity of the origin of \(\{S\}\), which is zero; it is the linear velocity of a point on the robot structure, viewed as if it travels through the origin of \(\{S\}\)[4, 8]; \({}^{B}\boldsymbol{v}_{b}\) is the velocity of the origin of \(\{B\}\) with respect to \(\{S\}\), expressed in \(\{B\}\)[4, 8].
The columns of \({}^{S}\boldsymbol{J}(\boldsymbol{q})\) and \({}^{B}\boldsymbol{J}(\boldsymbol{q})\) are derived using the Joint Twists \(\hat{\boldsymbol{\eta}}_{i}\) and the _Adjoint Map_\(\boldsymbol{Ad_{H}}:\mathbb{R}^{6}\rightarrow\mathbb{R}^{6}\) associated with \(\boldsymbol{H}\in SE(3)\)[4, 8, 24]. In matrix notation, \(\boldsymbol{Ad_{H}}=\begin{pmatrix}\boldsymbol{R}&[\boldsymbol{p}]\boldsymbol{R }\\ \boldsymbol{0}&\boldsymbol{R}\end{pmatrix}\).
For planar robots, \(\boldsymbol{\eta}^{\prime}_{i}\) can be identified by visual inspection. In general, the \(i\)-th column \(\boldsymbol{\eta}^{\prime}_{i}\) of \({}^{S}\boldsymbol{J}(\boldsymbol{q})\) is:
\[\begin{split}\boldsymbol{\eta}^{\prime}_{i}=\begin{cases}^{S}\hat{ \boldsymbol{\eta}}_{1}&i=1\\ \boldsymbol{Ad}_{{}^{S}\boldsymbol{H}_{i-1}}{}^{S}\hat{\boldsymbol{\eta}}_{i}&i=2,...,n \end{cases}\end{split} \tag{7}\]
In this equation, \({}^{S}\boldsymbol{H}_{i-1}\) can be derived via the Product of Exponentials Formula, i.e., \({}^{S}\boldsymbol{H}_{i-1}=\exp\left([^{S}\hat{\boldsymbol{\eta}}_{1}]q_{1}\right)\exp\left([^{S}\hat{\boldsymbol{\eta}}_{2}]q_{2}\right)\cdots\exp\left([^{S}\hat{\boldsymbol{\eta}}_{i-1}]q_{i-1}\right)\).
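Continuing the sketch from the previous subsection, eq. (7) can be assembled by accumulating the partial products \({}^{S}\boldsymbol{H}_{i-1}\) while applying the Adjoint map (skew() and exp_twist() are reused from above; again, this is only an illustration, not the toolbox API):
```
# Spatial Jacobian from the Joint Twists at the initial configuration, eq. (7).
import numpy as np

def adjoint(H):
    """Ad_H = [[R, [p] R], [0, R]] for H = (R, p) in SE(3)."""
    R, p = H[:3, :3], H[:3, 3]
    Ad = np.zeros((6, 6))
    Ad[:3, :3] = R
    Ad[:3, 3:] = skew(p) @ R
    Ad[3:, 3:] = R
    return Ad

def spatial_jacobian(twists, q):
    J = np.zeros((6, len(q)))
    H = np.eye(4)                                 # running product {S}H{i-1}
    for i, ((v, w), qi) in enumerate(zip(twists, q)):
        eta = np.concatenate([np.asarray(v), np.asarray(w)])
        J[:, i] = eta if i == 0 else adjoint(H) @ eta
        H = H @ exp_twist(v, w, qi)
    return J
```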
#### II-C3 Mass Matrix via Body Jacobians
As with the traditional method, frames \(\{C_{i}\}\) are attached to the COM of each body; their positions and the corresponding _Generalized Inertia Matrices_ \(\boldsymbol{\mathcal{M}}_{i}\) can be identified by using CAD-programs. Finally, the robot Mass Matrix can be calculated by:
\[\mathbf{M}(\mathbf{q})=\sum_{i=1}^{n}\ {}^{B}\mathbf{J}_{i}(\mathbf{q})^{T}\ \mathbf{\mathcal{M}}_{i}\ {}^{B}\mathbf{J}_{i}(\mathbf{q}). \tag{9}\]
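As an illustration, eq. (9) is a plain sum of congruence transforms. Assuming the \(6\times n\) Body Jacobians of the COM frames and the \(6\times 6\) generalized inertia matrices \(\boldsymbol{\mathcal{M}}_{i}\) are available, a sketch reads:
```
# Mass matrix assembly, eq. (9); body_jacobians[i] is the 6xn Body Jacobian of
# the i-th COM frame, gen_inertias[i] the 6x6 generalized inertia matrix.
import numpy as np

def mass_matrix(body_jacobians, gen_inertias, n):
    M = np.zeros((n, n))
    for J_b, M_i in zip(body_jacobians, gen_inertias):
        M += J_b.T @ M_i @ J_b        # sum_i J_Bi^T M_i J_Bi
    return M
```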
## III Exp[licit]: Concept, Features and Use-Cases
This section is split into two parts. First, we highlight the conceptual and practical differences between the traditional and geometric methods. To demonstrate the practical differences, we use a Franka robot.5 Second, we introduce Exp[licit], a MATLAB-based robot software which leverages the advantages of the geometric method. By using Exp[licit], the model parameters of the Franka robot can be derived. The modular structure of Exp[licit] will be described by using code snippets and an example application. Finally, we compare the computational efficiency of Exp[licit] with the MATLAB-based open-source robotics software "Robotics, Vision and Control" (RVC) which is based on the DH-convention [17].
Footnote 5: [https://www.franka.de/](https://www.franka.de/)
### _Conceptual and practical comparison between traditional and geometric methods_
#### III-A1 Forward Kinematic Map
The DH-convention provides a minimal parameter representation (four parameters) to define the Homogeneous Transformation Matrix [4]. This comes at a cost: a set of rules has to be carefully stipulated, which requires an extensive preparation in placing and transforming \(n+2\) frames. If adjacent axes intersect or are parallel to each other, additional rules have to be considered to handle these exceptions for step (ii) in Section II-B1 [1]. Since rotations and translations are only allowed along/about axes \(\hat{X}\) and \(\hat{Z}\), the choices for frames \(\{S\}\) and \(\{ee\}\) are restricted.
In contrast, the geometric method requires only two frames: the fixed inertial frame \(\{S\}\) and the body-fixed frame \(\{B\}\). Compared to the DH-approach, there are no restrictions on their position and orientation. The Product of Exponentials Formula provides considerable flexibility. To calculate the Joint Twists at initial configuration, any point on the twist axis can be chosen (sec. II-C1). Once the Joint Twists are defined, the Forward Kinematic Map can be derived for any point on the robot structure (sec. III-B6). This conceptual advantage yields a reduced computation time for the Forward Kinematic Map (sec. III-B7)
The practical benefit of the geometric method for the Franka robot can be seen in fig. 3. Compared to the DH-convention with nine frames [25], only two frames are needed. For our choice of initial configuration, the calculation of \({}^{S}\mathbf{H}_{ee,0}\) is straightforward since only the position of the end-effector has to be calculated. For our example, \({}^{S}\mathbf{p}_{ee,0}=(0.088,0,1.033)\) and \({}^{S}\mathbf{R}_{ee,0}=\mathbb{I}_{3}\).
The Joint Twists of the Franka robot are shown in the appendix. For a robot with revolute joints, the geometric approach needs at most four parameters (three translation parameters and one rotational parameter) like the DH-approach. For prismatic joints, the geometric approach needs only three parameters.
#### III-A2 Jacobian Matrices
For the traditional method, the Hybrid Jacobian Matrix \({}^{H}\mathbf{J}(\mathbf{q})\) is separated into linear and angular parts. Before the linear part of \({}^{H}\mathbf{J}(\mathbf{q})\) can be derived, a choice for the end-effector frame \(\{ee\}\) has to be made. Changing the frame at a later stage requires recalculating the position extracted from the Forward Kinematic Map.
The geometric approach derives two different Jacobian matrices, \({}^{S}\mathbf{J}(\mathbf{q})\) and \({}^{B}\mathbf{J}(\mathbf{q})\). The basis of the derivation is the Joint Twists at the initial configuration. Hence, no separation into linear and rotational parts is needed. \({}^{S}\mathbf{J}(\mathbf{q})\) and its output \({}^{S}\mathbf{\xi}\) (eq. (6)) only depend on one frame \(\{S\}\). By using the Adjoint Map, \({}^{S}\mathbf{\xi}\) can be mapped to any point on the robot structure. By choosing a point equal to the origin of \(\{ee\}\), the Spatial Velocity can be derived:
\[{}^{S}\mathbf{V}_{ee}=\underbrace{\begin{pmatrix}\mathbb{I}_{3}&-[^{S}\mathbf{p}_{ee}]\\ \mathbf{0}&\mathbb{I}_{3}\end{pmatrix}\,{}^{S}\mathbf{J}(\mathbf{q})}_{{}^{H}\mathbf{J}(\mathbf{q})}\ \dot{\mathbf{q}}. \tag{10}\]
Here, no modification of the Forward Kinematic Map is needed, which improves the length and clarity of the code and reduces the computation time of \({}^{H}\mathbf{J}(\mathbf{q})\) (sec. III-B7).
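In code, eq. (10) is a single sparse \(6\times 6\) map applied to the Spatial Jacobian (a sketch, reusing skew() from the sketches above; p_ee is extracted from the Forward Kinematic Map):
```
# Hybrid Jacobian from the Spatial Jacobian, eq. (10).
import numpy as np

def hybrid_jacobian(J_s, p_ee):
    T = np.eye(6)
    T[:3, 3:] = -skew(np.asarray(p_ee))   # [[I, -[p_ee]], [0, I]]
    return T @ J_s
```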
#### III-A3 Mass Matrix
For both approaches, the frames \(\{C_{i}\}\) have to be attached to the COM of the robot at initial configuration. For the traditional method, the orientation of these coordinate frames is restricted to obtain a valid set of DH-parameters. Commonly, \(\{C_{i}\}\) is chosen to be aligned with frame \(\{i\}\) (fig. 3A), and the moment of inertia is separately rotated via \({}^{S}\mathbf{\mathcal{I}}_{i}={}^{S}\mathbf{R}_{i}\,{}^{i}\mathbf{\mathcal{I}}_{i}\,{}^{S}\mathbf{R}_{i}^{T}\).
For the geometric approach, the orientation of body frames \(\{C_{1}\}\), \(\{C_{2}\}\),..., \(\{C_{n}\}\) can be freely chosen. For each COM, the Body Jacobians are derived, again using the Adjoint Map (eqs. (7), (8)).
While the traditional method divides the derivation into linear and rotational contributions, the geometric method uses the generalized inertia matrices \(\mathbf{\mathcal{M}}_{i}\) (eq. (9)) to derive the Mass Matrix. Even though \(\mathbf{\mathcal{M}}_{i}\) may not be aligned with \(\{S\}\), it need not be separately transformed. The transformation is incorporated in the map \({}^{B}\mathbf{J}_{i}(\mathbf{q})\).
### _Exp[licit]--Robot modeling based on Exponential Maps_
The software can be installed from our Github repository: [https://github.com/explicit-robotics/Explicit-MATLAB/](https://github.com/explicit-robotics/Explicit-MATLAB/). The documentation of the software can be found here: [https://explicit-robotics.github.io/](https://explicit-robotics.github.io/).
#### III-B1 Software structure
The core of the software is the RobotPrimitives-class, which is used as the parent class of the software. It provides the member functions getForwardKinematics, getSpatialJacobian, getHybridJacobian, getBodyJacobian, getMassMatrix, getGravityVector, and getCoriolisMatrix for deriving the robot parameters. By inheriting the RobotPrimitives-class, a new robot class can be defined that shares the attributes and the member functions of the parent class. Each robot class
brings its kinematic and dynamic properties (e.g., axes of rotation, link lengths, masses, etc.).
#### III-B2 Initialization
Exp[licit] supports various 2D and 3D-robots (fig. 4). In this paper, we will use a Franka robot example (franka.m), which is inherited from the RobotPrimitives-class. The initialization is shown below:
```
% Call Franka robot
robot = franka();
robot.init();
```
The init-function initializes all Joint Twists and Generalized Mass Matrices for the initial configuration (fig. 3).
#### III-B3 Symbolic member functions
All member functions also accept symbolic arguments. This feature is helpful for control methods that require an analytical formulation of the robot's equations of motion, e.g., adaptive control methods [26]. An example to read out the symbolic form of the Forward Kinematics Map can be seen below:
```
% Create symbolic column vector
q_sym = sym('q', [robot.nq, 1]);

% Symbolic form of Hom. Trans. Matrix
H_ee_sym = robot.getForwardKinematics(q_sym);
```
#### III-B4 Visualization and Animation
For visualization, the robot object can be passed to a 2D or 3D-animation object:
```
% Create animation
anim = Animation('Dimension', 3, 'xLim', [-0.7, 0.7], ...
                 'yLim', [-0.7, 0.7], 'zLim', [0, 1.4]);
anim.init();
anim.attachRobot(robot);
```
The Animation-class heavily relies on MATLAB graphic functions (e.g., axes, patches, lighting). The key to our animation is to create a chain of transform objects (hgtransforms) instead of transforming vertices. The Animation-class has an optional input that allows the recording of videos with adjustable playback speeds.
At run-time (simulation time t), the robot object (in configuration \(\mathbf{q}\)) and the animation can be updated:
```
% Update kinematics
robot.updateKinematics(q);
anim.update(t);
```
#### III-B5 Modularity through Joint Twists
The key to the modularity of Exp[licit] is the setJointTwists( )-function of the RobotPrimitives-class. So far, Exp[licit] supports revolute and prismatic joint types, indicated by the JointTypes( )-attribute. For each robot, the Joint Twists are derived from the joint directions (AxisDirections) and joint positions (AxisOrigins) in the initial configuration. All member functions of the RobotPrimitives-class then re-use the Joint Twists at runtime to map them from the initial to the current configuration (eq. (4) for Forward Kinematics, eq. (7) for the Spatial Jacobian, and eq. (8) for the Body Jacobian and Mass Matrix).
#### III-B6 Example simulation
By default, the simulation loop is set to be real-time. It is beneficial to structure the simulation script in the following way: (1) calculation of all kinematic and dynamic robot parameters; (2) trajectory generation; (3) control law; (4) integration and update. For (1), the member functions of the robot object can be used. Parts (2) and (3) are generally user specific. For the integration (4), any integrator can be used, e.g., MATLAB's pre-built ode45.m.
To help users with parts (2) and (3), we implemented a simple impedance controller [27] for a Franka robot (main_franka_IC.m) that moves the end-effector around a circular path, while keeping its elbow position (joint four) fixed (fig. 5).
Fig. 3: Franka robot at initial configuration. The DH-convention is shown in (A) and the geometric method in (B). Only two frames are required for the geometric method (B). The frames shown in (A) are derived from [25].
Thanks to the modularity of the implemented geometric method, the kinematics of any point on any body can be selected by specifying the robot body ('bodyID') and the corresponding position on the body ('position'):
```
% Get end-effector kinematics (default)
H_ee = robot.getForwardKinematics(q);
J_ee = robot.getHybridJacobian(q);
```
```
% Get kinematics of a specific point on the elbow body
H_eb = robot.getForwardKinematics(q, 'bodyID', 4, 'position', [-0.1, 0, 0]);
J_eb = robot.getHybridJacobian(q, 'bodyID', 4, 'position', [-0.1, 0, 0]);
```
#### III-B7 Comparison with MATLAB robotic toolbox
We compared the computational speed of Exp[licit] with the RVC MATLAB software [17], which uses the DH-convention. For RVC, version RTB10+MVTB4 (2017) was used.6 Using native MATLAB scripts, the computation time was compared for the Forward Kinematic Map, Hybrid Jacobian, Mass Matrix, centrifugal/Coriolis terms, and Gravity vector of an \(n\)-DOF open-chain planar robot. The robot consisted of \(n\) identical uniform-mass bars with length \(l=1\)m and mass \(m=1\)kg. While Exp[licit] calculates the gravity and centrifugal/Coriolis terms with a closed-form algorithm, RVC uses recursive Newton-Euler methods (RNE). Both Exp[licit] and RVC use .m MATLAB scripts. For the mass matrix, gravity and the centrifugal/Coriolis effects, the RVC method can invoke MEX-files to improve the computation speed. MEX-files are native C or C++ files that are dynamically linked to the MATLAB application at runtime.
Footnote 6: The software can be downloaded at [https://petercorke.com/toolboxes/robotics-toolbox/](https://petercorke.com/toolboxes/robotics-toolbox/)
For the RVC software, the robot was constructed from the SerialLink-class which consists of \(n\) Revolute-classes. For Exp[licit], the robot was constructed from the SnakeBot-class (fig. 4A). Robots with various DOF were constructed and tested. The test was performed with a MacBook Air (M1 chip, 16GB memory), using MATLAB 2022a. The timeit() function was used to measure the computation time.
The results of our computational comparisons are shown in Figure 1. For almost all computations, Exp[licit] was faster than the RVC software. Only for more than 70 DOF was the gravity vector of the RVC MEX-file option faster than Exp[licit]. For both software packages, the computation of the Forward Kinematic Map and the Hybrid Jacobian showed a linear trend. The RVC software was capable of computing the Forward Kinematic Map of a 15-DOF robot within 1ms, whereas Exp[licit] required less than 0.5ms for more than 100 DOF. For the Hybrid Jacobian, the RVC software required more than 1ms for a 15-DOF robot, while Exp[licit] could accomplish the same for 80 DOF. The computation of the Mass Matrix showed an exponential trend for both software packages. While Exp[licit] outperformed RVC for MATLAB scripts by a factor of 100, RVC had a much better performance using MEX-files. Nevertheless, it was still slower than Exp[licit]. A similar trend was seen for the gravity vector: RVC's performance was improved by invoking MEX-files and showed better performance for more than 70 DOF. However, for the centrifugal/Coriolis terms, Exp[licit] drastically outperformed RVC.
These results highlight the computational advantages of a geometric approach, theoretically discussed in [28].
## IV Summary and Conclusion
This paper summarizes and compares a traditional and a geometric method to derive the kinematic and dynamic parameters of an open-chain robot. We highlight the conceptual and practical differences between the two approaches. While the geometric method demands a more abstract perspective (i.e., mapping of Joint Twists), we showed several advantages compared to traditional methods. In summary, the advantages of the geometric method are: 1) Flexibility to express kinematic and dynamic relations without predefined rules and exceptions (sec. III-A); 2) Highly modular structure, since Joint Twists can be reused throughout the calculation (sec. III-B5); 3) No more than two frames to describe robot kinematics and dynamics (fig. 3).
Fig. 4: Exp[licit] supports various 2D and 3D-robots. (A) Two planar robots: a Cart-Pole (left) and a Snake-Robot with variable DOF (right). (B) Two robots can be combined by using the addKinematics-method of the RobotPrimitives-class. In the example (B), the two robots of (A) are combined. (C) Currently supported 3D-robots: KUKA LBR iiwa (7 and 14 kg), YouBot, and Franka.
Fig. 5: Simulation of a simple impedance controller, using a Franka robot.
We introduce Exp[licit], a MATLAB-based toolbox which implements the geometric method and leverages its advantages. Thanks to the computational advantages and highly modular structure, we believe this software can support various robotic applications. We hope to show that differential geometric methods are not limited to their conceptual strengths but can be useful for practical implementations.
## V Future Work
So far, the purpose of our software is to simulate different 2D and 3D robots using MATLAB. In the future, Exp[licit] will offer a C++ and Python option that can be used for real-time control of robots, e.g., for torque control of cobots. At that point, it will be necessary to compare our methods with [19], which is also a library implemented in C++.
At the moment, Exp[licit] is limited to supporting open-chain robot structures. In the future, we are exploring the possibility of incorporating branched structures such as robotic hands, as well as closed-loop structures like delta robots.
|
2309.05341 | Chemisorption Induced Formation of Biphenylene Dimer on Surfaces | We report an example that demonstrates the clear interdependence between
surface-supported reactions and molecular adsorption configurations. Two
biphenyl-based molecules with two and four bromine substituents, i.e.
2,2-dibromo-biphenyl (DBBP) and 2,2,6,6-tetrabromo-1,1-biphenyl (TBBP), show
completely different reaction pathways on a Ag(111) surface, leading to the
selective formation of dibenzo[e,l]pyrene and biphenylene dimer, respectively.
By combining low-temperature scanning tunneling microscopy, synchrotron
radiation photoemission spectroscopy, and density functional theory
calculations, we unravel the underlying reaction mechanism. After
debromination, a bi-radical biphenyl can be stabilized by surface Ag adatoms,
while a four-radical biphenyl undergoes spontaneous intramolecular annulation
due to its extreme instability on Ag(111). Such different chemisorption-induced
precursor states between DBBP and TBBP consequently lead to different reaction
pathways after further annealing. In addition, using bond-resolving scanning
tunneling microscopy and scanning tunneling spectroscopy, we determine the bond
length alternation of biphenylene dimer product with atomic precision, which
contains four-, six-, and eight-membered rings. The four-membered ring units
turn out to be radialene structures. | Zhiwen Zeng, Dezhou Guo, Tao Wang, Qifan Chen, Adam MatΔj, Jianmin Huang, Dong Han, Qian Xu, Aidi Zhao, Pavel JelΓnek, Dimas G. de Oteyza, Jean-Sabin McEwen, Junfa Zhu | 2023-09-11T09:40:12Z | http://arxiv.org/abs/2309.05341v1 | # Chemisorption Induced Formation of Biphenylene Dimer on Surfaces
###### Abstract
We report an example that demonstrates the clear interdependence between surface-supported reactions and molecular adsorption configurations. Two biphenyl-based molecules with two and four bromine substituents, _i.e._ 2,2'-dibromo-biphenyl (DBBP) and 2,2',6,6'-tetrabromo-1,1'-biphenyl (TBBP), show completely different reaction pathways on a Ag(111) surface, leading to the selective formation of dibenzo[e,l]pyrene and biphenylene dimer, respectively. By combining low-temperature scanning tunneling microscopy, synchrotron radiation photoemission spectroscopy, and density functional theory calculations, we unravel the underlying reaction mechanism. After debromination, a bi-radical biphenyl can be stabilized by surface Ag adatoms, while a four-radical biphenyl undergoes spontaneous intramolecular annulation due to its extreme instability on Ag(111). Such different chemisorption-induced precursor states between DBBP and TBBP consequently lead to different reaction pathways after further annealing. In addition, using bond-resolving scanning tunneling microscopy and scanning tunneling spectroscopy, we determine the bond length alternation of the biphenylene dimer product with atomic precision, which contains four-, six-, and eight-membered rings. The four-membered ring units turn out to be radialene structures.
**Introduction**
On-surface synthesis (OSS) has shown its great potential in the fabrication of functional molecules and covalent nanostructures with atomic precision in the last decade.[1, 2, 3] Different from solution phase chemistry, due to the required clean reaction environment (ultrahigh vacuum) at the gas-solid interface, the use of catalysts is largely limited in OSS. Hence, steering reaction pathways in OSS has been more challenging overall than that in wet chemistry. Chemical organic reactions on surfaces typically include three basic steps: molecular adsorption, diffusion and reaction. The reported examples toward steering reaction pathways on surfaces were mostly focused on tuning the molecular diffusion and the reaction barriers.[1, 2, 3] For instance, it has been demonstrated that self-assembly templates can efficiently direct the reaction pathways by confining molecular diffusion.[4, 5, 6, 7] In addition, different metal substrates or metal adatoms normally have a different catalytic activity toward a specific on-surface reaction, thus impacting the reaction barriers.[8, 9, 10, 11] However, related
studies on the adsorption process are rare, although it actually differentiates the heterogeneous on-surface synthesis from the homogenous solution phase chemistry. A few reported examples were focused on the physisorption of intact molecules on surfaces. The adsorption height from the surface [12] as well as the adsorption site [13] can play important roles on the reactivity of the functional groups of precursor molecules.
**Scheme 1. Reaction pathways of (a) DBBP and (b) TBBP on the Ag(111) surfaces, respectively.**
The coupling reactions typically involve the generation and coupling of radicals and have been extensively studied [2, 14, 15]. Radicals are generated when functional groups are activated. The adsorption configuration of newly formed radical species is very different from that of the initial molecule because active radical species are normally stabilized by surface atoms or stray surface adatoms [16]. Thus, the adsorption configurations of the activated molecules are largely determined by their radical sites. As a result, it is reasonable to infer that molecules with the same backbone but different numbers of radicals may lead to dramatically different adsorption behaviors on surfaces after activation. In turn, the different adsorption behaviors may potentially influence the reaction pathways and the final products.
Herein, we report such an example by comparing the reactions of 2,2'-dibromo-biphenyl (DBBP) and 2,2',6,6'-tetrabromo-1,1'-biphenyl (TBBP) molecules on a Ag(111) surface. In our previous work [17], we showed that the biradical biphenyl species formed upon debromination of DBBP were efficiently stabilized by surface Ag adatoms at 300 K. Further annealing led to the formation of dibenzo[e,l]pyrene nanographene. However, in this work, TBBP shows a unique reaction pathway: the first step involves the generation of four-radical biphenyl species after the debromination of the precursor; then TBBP undergoes intramolecular annulation spontaneously at 300 K due to its extreme instability on Ag(111), forming a biradical biphenylene monomer that is anchored to the surface; further annealing leads to the formation of an organometallic intermediate state, followed by its transformation into covalent biphenylene dimers containing four-, six-, and eight-membered carbon rings. The chemical structure and electronic properties of the biphenylene dimer have been studied by bond-resolving scanning tunneling microscopy (BR-STM) and scanning tunneling spectroscopy (STS), offering significant insights into its potential anti-aromaticity. The mechanism for the different reaction selectivity between DBBP and TBBP, _i.e._ the formation of dibenzo[e,l]pyrene _vs._ a biphenylene dimer, has been further studied by density functional theory (DFT) calculations. This work reveals that the chemisorption behavior of adsorbates can play a decisive role in the reaction pathway. In addition, the bond alternation of the intriguing biphenylene dimer [18], as proposed by organic and theoretical chemists, has been corroborated here in real space. In fact, the fabrication of four-membered-ring-containing structures on surfaces has become a hot topic and been widely studied recently due to their exotic electronic and mechanical properties, which can be achieved by either intramolecular or intermolecular [2+2] annulation reactions [19, 20, 21, 22, 23, 24, 25, 26]. An outstanding example was reported by Fan _et al._[24]. In their work, the biphenylene network with periodically arranged four-, six-, and eight-membered rings was synthesized by an inter-polymer hydrogen-fluoride zipping reaction and exhibits metallic electronic properties. The chemisorption-induced formation of four-membered rings as presented in our work provides a new insight into the fabrication of four-membered-ring-containing functional nanostructures on surfaces.
**Results and Discussion**
**Synthesis of biphenylene dimer.** Figure 1 presents the experimental result of TBBP molecules adsorbing on Ag(111) at room temperature. TBBP molecules stay intact on Ag(111) at \(\sim\)250 K, as revealed by Br 3\(d\) and C 1\(s\) synchrotron radiation photoemission (SRPE) spectra in Figure 1a and 1b. The ratio between C-Br and C-C is 1:2.4, in fair agreement with the ideal value of 1:2 as derived from the structural model shown in Figure 1c. The corresponding STM images are shown in Figure S1. The molecules self-assemble into square ordered islands. A single molecule is composed of one bright head (yellow dotted contour) and one weak tail (green dotted contour), indicating its twisted adsorption configuration because of the repulsion between the adjacent Br atoms of intact TBBP [15, 27].
After depositing TBBP molecules onto the Ag(111) surface held at 300 K, the molecules exhibit bright protrusions in the STM images as shown in Figure 1d and 1e (an overview STM image is shown in Figure S2). Most molecules aggregate into close-packed islands (Figure 1d), together with a few sparsely distributed trimers and tetramers (Figure 1e). In particular, the existence of the trimer (Figure 1e) implies that the bright protrusion cannot stem from a submolecular feature, _i.e._ one protrusion cannot correspond to one phenyl group of TBBP. In addition, the center-to-center distance between adjacent bright protrusions is measured to be 7.8 A, which is much larger than the distance between the two phenyls of TBBP (\(\sim\)4.3 A). Therefore, one bright protrusion can only correspond to one individual TBBP molecule. The majority of C-Br bonds of TBBP are dissociated at 300 K, as evidenced by the downward shift of the Br 3\(d\) core level binding energies from 250 K to 300 K [28]. The partial dissociation of C-Br bonds on Ag(111) was also reported by previous works [2, 29, 30, 31]. The Br adatoms on the surface can be recognized as the relatively dark and small dots in the STM images [6, 31], as pointed out by the blue arrows in Figure 1e. Interestingly, the appearance of a new C 1\(s\) component at a low binding
energy of 283.0 eV at 300 K (Figure 1b) implies that radicals are probably stabilized by the surface atoms _via_ a C-Ag coordination [31, 28]. The ratio between C-Ag and C-C is about 1:6 from the C 1s spectrum (C-Br: C-C: C-Ag = 0.3: 6.5: 1), in agreement with the ideal value in Figure 1c, implying that only two radicals are coordinated to the surface atoms. It is worth noting that the C 1s binding energy shift toward the low energy direction at 300 K with respect to that at 250 K is attributed to the increase of the surface work function induced by chemisorbed Br adatoms on the surface [28, 29, 30, 31, 32, 33, 34]. Inspired by these findings and related previous works [35], one can intuitively deduce that the four-radical biphenyl undergoes intramolecular annulation reactions under these conditions, forming biphenylene, while the two residual unquenched radicals are stabilized by the Ag surface. Consequently, it adsorbs perpendicularly to the surface, as schematically shown in Scheme 1. This is the reason why the adsorbates show such bright features compared to those of conventional flat molecules that adsorb parallel to the surface. A comparison between the apparent height of the bi-radical biphenylene monomer and the final biphenylene dimer product on Ag(111) is presented in Figure S3, where a difference of 1.8 A is obtained. The perpendicular adsorption configuration also explains the formation of trimers and tetramers in Figure 1d and 1e, which should be stabilized by \(\pi\)-\(\pi\) stacking between face-to-face phenyls [36, 37, 38]. The DFT-calculated structural models of the trimer and tetramer are displayed in Figure 1f and 1g. The distance between two adjacent molecules is about 7.8 A for both the trimer and tetramer, which is in excellent agreement with the experimental result (Figure 1e). The formation of the biphenylene dimer is verified by our DFT calculations. Once TBBP molecules are fully debrominated, the four-radical biphenyl species are not stable on the Ag surface but immediately start an intramolecular annulation reaction (Figure 1i and 1j). As a result, a biphenylene complex is formed, with its two radicals binding to two surface silver atoms, leaving the non-radical sides of the benzene rings pointing away from the surface. We tested various adsorption sites of the Ag(111) surface with biphenylene (as shown in Figures S8 and S9) and found that the two radicals prefer to bind with two nearby surface Ag atoms rather than an Ag adatom.
Figure 1: (a, b) Br 3d and C 1s SRPE spectra of the samples prepared by depositing TBBP on Ag(111) held at 250 K, and held at 300 K followed by annealing to 400 K and 540 K. The photon energies for Br 3d and C 1s are 180 and 380 eV, respectively. (c) The molecular models of the major products at each temperature point. Different carbon atoms are depicted by different colors to illustrate their chemical environments. The ideal ratios of these C atoms are shown below each molecular model. (d, e) STM images of the sample upon deposition of TBBP on Ag(111) held at 300 K. The three high-symmetry directions of the Ag(111) surface are shown as the white arrows; same for the following overview STM images. Tunneling parameters: (d, e) U = -1.5 V, I = 50 pA. (f, g) The DFT-calculated structural models of the trimer and tetramer in (e). (h) The possible structural model of two TBBP molecules in the green circle of (d). (i, j) Top and side views of the DFT-optimized structure of debrominated TBBP adsorbing on Ag(111). Color code: C, black; Ag, grey; H, pink; Br, brown; C bonded to Br atom, green; C bonded to Ag atom, purple.
Note that a few C-Br bonds remain intact at 300 K according to the XPS analysis, which could belong to the molecules inside the green circle in Figure 1d because of their similar STM morphology to the intact molecules at 250 K (Figure S1). Although the possibility that one or more Br atoms are lost from these molecules cannot be excluded, it is most probable that these molecules are intact TBBP. Because the \(\pi\)-\(\pi\) stacking interactions between TBBP and the neighboring bi-radical biphenylenes (Figure 1d) are different from the \(\pi\)-\(\pi\) interactions in the ordered TBBP island at 250 K (Figure S1), the morphology of TBBP in STM images can be slightly different.
Annealing the sample to 400 K results in the formation of organometallic dimers. The bright dots in the middle are assigned to two Ag adatoms, while the darker sides are biphenylenes. In addition, the DFT-optimized structural model (Figure 2d) and the simulated STM image (Figure 2e) both fit the experimental result well. The formation of an organometallic dimer is also supported by the C 1\(s\) SRPE spectrum, which is deconvoluted into C-C and C-Ag components with a ratio of 5.4:1, in good agreement with the proposed molecular structure (ideally 5:1, as derived from the structural model shown in Figure 1c). Under these conditions, the full debromination of TBBP is complete (see Br 3\(d\) SRPES in Figure 1a). Br adatoms image as relatively dark dots that surround the organometallic dimer _via_ Br\(\cdots\)H hydrogen bonds (Figure 2a).[41]
Further annealing this sample at 540 K triggers the formation of the final covalent product, the biphenylene dimer (Figure 2f and S4), after the removal of interstitial Ag adatoms from the organometallic dimers. The submolecular structure is clearly resolved by BR-STM with a CO-functionalized probe, as seen in Figure 2g. The covalent connection between the two monomers is further confirmed by the C 1\(s\) SRPES, where the C-Ag signal disappears at 540 K (Figure 1b). Because the annealing from 400 K to 540 K leads to the partial desorption of Br atoms, as revealed in the Br 3\(d\) spectra, the C 1\(s\) peak shifts back to the high-binding-energy position at 540 K.[28, 29, 30, 31, 32, 33, 34]
**Chemical structure of biphenylene dimer.** The biphenylene dimer is of particular interest for studying molecular anti-aromaticity since its structure contains four-, six-, and eight-membered rings.[18, 24, 43] A cyclic molecule typically shows antiaromaticity if it holds 4n (n a positive integer) delocalized electrons, while a cyclic molecule containing 4n+2 delocalized electrons is aromatic.[44] In particular, cyclobutadiene is generally considered strongly antiaromatic and unstable. Numerous theoretical and experimental efforts (mostly crystallography) have been made to study the bond alternation of several biphenylene derivatives.[18, 45, 46, 47, 48, 49, 50, 51] A few typical examples are shown in Figure 2j. The linear polyphenylene **1** shows delocalized electronic properties, and one double bond can be involved inside the four-membered ring. In contrast, the angular polyphenylene **2** and oligobiphenylenes **3** possess localized \(\pi\)-bonding and hold radialene structures. The double bonds tend to be exocyclic with respect to the four-membered rings in **2** and **3** to minimize the antiaromatic character of four-electron \(\pi\)-bonding within the cyclobutadiene.
The first real-space evidence for the bond alternation of a biphenylene-related structure was given by Kawai _et al._ by analyzing the bond lengths of molecule **4** using non-contact atomic force microscopy (nc-AFM).[35] Here we demonstrate that the bond alternation of the biphenylene dimer **5** is similar to that of the biphenylene monomer, that is, the radialene structure is energetically favorable. A Laplace-filtered BR-STM image, which enhances the bond features, is presented in Figure 2h.[52] We draw the best-fitting straight lines along the different bonds of the molecular structure and take the crossing points as reference for the bond length analysis. Remarkably large differences in the bond lengths can easily be identified; that is, those predicted to display a double-bond character are clearly shorter. A detailed analysis (Figure 2i) reveals that the average length for the single bonds (marked in green) is 1.78\(\pm\)0.12 Å, whereas the average length for the double bonds (marked in red) is 1.29\(\pm\)0.12 Å, clearly out of the error range of one another. It must be noted that these values are not the actual bond lengths. However, the artifact that causes these lengths to deviate from the real values is strongly bond-order dependent, which thus allows for an easy discrimination of the single and double bonds by this comparative analysis.[53] We also calculated the length of each C-C bond of the biphenylene dimer adsorbed on the Ag(111) surface and derived the harmonic oscillator model of aromaticity (HOMA) values (Figure 2k).[54] A higher HOMA value is generally associated with a higher degree of \(\pi\)-electron delocalization and increased aromatic stabilization. The phenyl rings have the highest HOMA value of 0.85; the eight-membered and four-membered rings have HOMA values of only \(-\)0.54 and \(-\)1.06, respectively, indicating their low aromaticities. This is presumably because of the single bonds involved there, which decrease the \(\pi\)-electron delocalization. Nevertheless, the HOMA value of the four-membered ring is still much higher than that of cyclobutadiene (\(-\)4.277), which indicates that the anti-aromaticity of the four-membered ring is significantly reduced in the biphenylene dimer.[54] This is further supported by the nucleus-independent chemical shift (NICS) analysis,[55] as shown in Figure 2k. The phenyl rings are only weakly aromatic (\(-\)5 ppm), while the four- and eight-membered rings are highly antiaromatic (36.6 and 19.1 ppm, respectively). In addition, we have investigated the induced currents that result from an applied magnetic field by the anisotropy of the induced current density (ACID) method.[56] The plot depicted in Figure 2l clearly shows a strong anticlockwise current flowing along the central rings, including both the four- and eight-membered rings, which suggests the anti-aromaticity of the four- and eight-membered rings. In contrast, only a fragmented clockwise ring current (weak aromaticity) is observed in the phenyl rings, in agreement with their low NICS values. The reduced aromaticity of the phenyl rings is due to the fixed localization of the double bonds caused by the four-membered rings.
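For orientation, the HOMA index quoted above is conventionally computed from the \(n\) individual C-C bond lengths \(R_{i}\) of a ring as

\[\mathrm{HOMA}=1-\frac{\alpha}{n}\sum_{i=1}^{n}\left(R_{\mathrm{opt}}-R_{i}\right)^{2}\]

where, in the standard parameterization for C-C bonds, \(R_{\mathrm{opt}}=1.388\) Å and \(\alpha=257.7\) Å\({}^{-2}\) (these constants are quoted from the standard parameterization for reference, not from the present work); HOMA equals 1 for a fully delocalized aromatic ring and decreases, eventually becoming negative, with increasing bond-length alternation.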
**Electronic structure of biphenylene dimer.** The electronic properties of the biphenylene dimer adsorbed on Ag(111) have been probed by STS, as shown in Figure 3a. The two peaks at -1.63 and 1.27 V are attributed to the highest occupied and the lowest unoccupied molecular orbitals (HOMO and LUMO), respectively. This is supported by the good agreement between the experimentally obtained (Figure 3b and c) and DFT-simulated (Figure 3d and e) dI/dV maps of the HOMO and LUMO of the biphenylene dimer (the ten dark rings in Figure 3b are from the contribution of the surrounding Br adatoms). Thus, the band gap of the biphenylene dimer adsorbed on Ag(111) is 2.9 eV, which is larger than that of the biphenylene ribbon reported by Fan _et al._,[24] due to its smaller size. The relatively large band gap is presumably attributable to the relatively strong electronic localization on the phenyl groups (weak electronic conjugation between them). The calculated electronic local density of states (LDOS) distributions related to the HOMO and LUMO orbitals of the biphenylene dimer are presented in Figure 3f and 3g. Accordingly, the HOMO is largely localized in the phenyl groups, while the LUMO is mostly distributed in the four-membered ring and the single bond between the two biphenylene monomers, thus fitting the proposed bond alternation well, as shown in Figure 2k. The DFT-calculated charge densities of the HOMO and LUMO of the biphenylene dimer adsorbed on Ag(111) are presented in Figure S10 and are very similar to those in the gas phase (Figure 3f-g), implying a weak interaction between the biphenylene dimer and the Ag(111) surface.
**Reaction pathway analysis.** Next, we focus on the reaction mechanisms of TBBP and DBBP on Ag(111). The optimized structures from a DFT-based model for each reaction step of TBBP are shown in Figure 4a. The details of these calculations are given in the SI.
As shown in Figure 4a, the reaction of TBBP is exothermic, with the energy diagram going downhill in each reaction step. Notably, the first step is an adsorption-determined spontaneous process, which is also reflected by the considerable heat release of 3.96 eV. After full debromination of TBBP, the intramolecular annulation reaction of the four-radical biphenyl turns out to be the most thermodynamically favorable pathway. This is supported by the fact that the energy of the Ag-biphenyl complex, as another possible structure, is 3.75 eV higher than that of the biphenylene monomer product on Ag(111), as seen in Figure S15. In fact, the energy barrier of the intramolecular annulation of TBBP on Ag(111) should stem from the debromination reaction of TBBP. As shown in Figure 4b, two Br atoms on the same benzene ring (either the 1 and 2, or the 3 and 4 sites) prefer to dissociate simultaneously, with a transition energy barrier of 0.72 eV and a heat release of 1.25 eV. This excludes the possibility that the intramolecular [2+2] annulation occurs before full debromination, which would require the dissociation of two bromines on the same side (either the 1 and 3, or the 2 and 4 sites) of the molecular backbone.
The following steps are the conventional planarization of bi-radical biphenylene monomer with the help of Ag adatoms and the subsequent Ullmann coupling by thermal treatments, finally forming biphenylene dimer.
In contrast, the reaction of DBBP on Ag(111) is an Ullmann coupling followed by cyclodehydrogenation, forming dibenzo[e,]pyrene (Scheme 1). It is obvious that the difference between the two reactions (TBBP and DBBP) on Ag(111) originates from different molecular adsorption configurations. Different from the spontaneous annulation of TBBP, in the case of DBBP, both the Ag surface and Ag adatoms are sufficient to stabilize the bi-radical biphenyl, resulting in the formation of the organometallic dimer with four-fold C-Ag bonds (Figure S18). It is known that Ullmann coupling has a very low reaction barrier after the removal of interstitial Ag atoms from the organometallic intermediate;[57, 58, 59] thus, the formation of dibenzo[e,]pyrene is expected. Each reaction step of DBBP on Ag(111) is exothermic, as seen in Figure S18.
Figure 3: Electronic properties of the biphenylene dimer on Ag(111). (a) Differential conductance (dI/dV) spectra. The blue curve shows the dI/dV spectrum recorded at the position marked by the blue dot on the molecule, with a Br-functionalized tip. The grey curve shows the reference dI/dV spectrum measured on the bare Ag(111) surface with the same tip. The ten dots surrounding the molecule are ten bromine adatoms. (b, c) Constant-current dI/dV maps taken at bias voltages of -1.7 and 1.25 V, respectively. Lock-in amplitude, 20 mV; oscillation frequency, 731 Hz; current, 500 pA. (d, e) DFT-calculated dI/dV maps corresponding to the HOMO and LUMO of the biphenylene dimer in the gas phase, respectively. (f, g) DFT-calculated LDOS distributions of the HOMO and LUMO orbitals of the biphenylene dimer in the gas phase, respectively.
It is worth noting that the intramolecular annulation reaction of bi-radical biphenyl should also be possible after the removal of Ag atoms from the organometallic dimer. However, a much higher temperature is needed for the intramolecular annulation of bi-radical biphenyl. For example, in the work of Kawai _et al._, the intramolecular annulation to form **4** in Figure 2j could be achieved only at temperatures higher than 406 K.[35] This barrier should be higher than that of the Ullmann coupling between radicals (usually < 1 eV; demetallization is normally the rate-determining step of the Ullmann reaction on surfaces).[58, 59]
In short, the different reaction pathways of TBBP and DBBP on Ag(111) mainly stem from their different chemisorption configurations. Four-radical biphenyl undergoes a spontaneous intramolecular annulation to lower the overall energy. In sharp contrast, bi-radical biphenyl can easily be stabilized by the surface adatoms. In addition, the intramolecular annulation of DBBP is less favored than the competing Ullmann coupling, thus leading to its completely different reaction selectivity on Ag(111) compared to TBBP.
**Conclusions**
In summary, biphenylene dimers are selectively synthesized on a Ag(111) surface with a high yield, starting from TBBP. Using BR-STM and STS, we demonstrate that the radialene rather than a cyclobutadiene structure is preferred in the biphenylene dimer. The high selectivity toward the biphenylene dimer is attributed to the special adsorption configuration of TBBP on Ag(111). After debromination, the four-radical biphenyl cannot be stabilized simply with the help of surface atoms; instead, it undergoes an intramolecular annulation reaction and finally forms a biphenylene dimer _via_ intermolecular Ullmann coupling. In contrast, the bi-radical biphenyl from the debromination of DBBP can be efficiently stabilized by surface Ag adatoms, forming an organometallic dimer. This organometallic intermediate subsequently reacts into dibenzo[e,]pyrene _via_ Ullmann coupling. Control experiments demonstrate that the energy barrier of the intramolecular annulation reaction is higher than that of the Ullmann coupling reaction for bi-radical biphenyl on Ag(111). The different adsorption configurations on Ag(111) lead to different reaction pathways for the two structurally similar adsorbates. We believe this work will attract broad interest, as it serves as an example of how the reaction selectivity of a given adsorbate can be controlled by tuning its adsorption behavior.
More interestingly, based on STS and the combined HOMA, NICS, and ACID analyses, we provide comprehensive interpretations of the antiaromaticities of the four- and eight-membered rings, which contain 4n electrons. In addition, because of the bond confinement caused by the four-membered ring, the aromaticity of the phenyl rings is significantly reduced. The bond-confinement effect revealed here could potentially be employed for other graphene-based and non-benzenoid carbon structures to tune the electronic properties and chemical reactivity of these materials.
## Associated Content
**Supporting Information**. Detailed descriptions of experimental and theoretical methods, additional STM images. This material is available free of charge via the Internet at [http://pubs.acs.org](http://pubs.acs.org).
## Author Information
### Corresponding Author
* J. F. Zhu, [email protected]
* J.-S. McEwen, [email protected]
* T. Wang, [email protected]
### Author Contributions
* \(\ddagger\)These authors contributed equally. The manuscript was written through contributions of all authors. All authors have given approval to the final version of the manuscript.
### Notes
The authors declare no competing financial interest.
## Acknowledgment
This work was financially supported by the National Natural Science Foundation of China (21773222, 51772285, U1732272, and U1932214), the National Key R&D Program of China (2017YFA0403402, 2017YFA0403403, and 2019YFA0405601), and the Users with Excellence Program of Hefei Science Center CAS (2020HSC-UE004). The work at Washington State University was primarily funded through the National Science Foundation CAREER program under contract number CBET-1653561. This work was also partially funded by the Joint Center for Deployment and Research in Earth Abundant Materials (JCDREAM) in Washington State. Most of the computational resources were provided by the Kamiak HPC under the Center for Institutional Research Computing at Washington State University. A portion of the computer time for the computational work was performed using EMSL, a national scientific user facility sponsored by the Department of Energy's Office of Biological and Environmental Research and located at Pacific Northwest National Laboratory. This research also used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility operated under Contract No. DE-AC02-05CH11231. The work at DIPC was primarily funded through a Juan de la Cierva grant (No. FJC2019-041202-I) from the Spanish Ministry of Economy and Competitiveness and the European Union's Horizon 2020 research and innovation program (Marie Skłodowska-Curie Actions Individual Fellowship No. 101022150).
|
2309.03730 | A Causal Perspective on Loan Pricing: Investigating the Impacts of
Selection Bias on Identifying Bid-Response Functions | In lending, where prices are specific to both customers and products, having
a well-functioning personalized pricing policy in place is essential to
effective business making. Typically, such a policy must be derived from
observational data, which introduces several challenges. While the problem of
``endogeneity'' is prominently studied in the established pricing literature,
the problem of selection bias (or, more precisely, bid selection bias) is not.
We take a step towards understanding the effects of selection bias by posing
pricing as a problem of causal inference. Specifically, we consider the
reaction of a customer to a price as a treatment effect. In our experiments, we
simulate varying levels of selection bias on a semi-synthetic dataset on
mortgage loan applications in Belgium. We investigate the potential of
parametric and nonparametric methods for the identification of individual
bid-response functions. Our results illustrate how conventional methods such as
logistic regression and neural networks suffer adversely from selection bias.
In contrast, we implement state-of-the-art methods from causal machine learning
and show their capability to overcome selection bias in pricing data. | Christopher Bockel-Rickermann, Sam Verboven, Tim Verdonck, Wouter Verbeke | 2023-09-07T14:14:30Z | http://arxiv.org/abs/2309.03730v1 | # A Causal Perspective on Loan Pricing
###### Abstract
In lending, where prices are specific to both customers and products, having a well-functioning personalized pricing policy in place is essential to effective business making. Typically, such a policy must be derived from observational data, which introduces several challenges. While the problem of "endogeneity" is prominently studied in the established pricing literature, the problem of selection bias (or, more precisely, bid selection bias) is not. We take a step towards understanding the effects of selection bias by posing pricing as a problem of causal inference. Specifically, we consider the reaction of a customer to a price as a treatment effect. In our experiments, we simulate varying levels of selection bias on a semi-synthetic dataset on mortgage loan applications in Belgium. We investigate the potential of parametric and nonparametric methods for the identification of individual bid-response functions. Our results illustrate how conventional methods such as logistic regression and neural networks suffer adversely from selection bias. In contrast, we implement state-of-the-art methods from causal machine learning and show their capability to overcome selection bias in pricing data.
_Keywords:_ Pricing, OR in banking, Bid selection bias, Causal inference
## 1 Introduction
Pricing loans is a challenging task for banks. A pricing policy must take into account and balance several risks to be profitable. Prices must be both high enough to account for operating costs and losses and low enough to match price sensitivities of customers and to be competitive in the overall market. If a price is too high, customers may reject it and seek a loan at a competitor. However, if a price is too low, the bank may not turn the loan into a profit and possibly suffer losses.
As customers and their preferences are heterogeneous, an optimal price in terms of revenue or profit maximization is a personalized one (Varian, 1989). In reality, many pricing policies lack systematic and data-driven personalization. Instead, prices are typically derived from rigid pricing grids that segment customers into large groups. Individual components, such as a
discount, are typically applied at the discretion of a bank clerk incentivized by corporate policies (Phillips et al., 2015).
Developing a personalized pricing policy to optimize loan prices is challenging since a bank must know the individual preferences and characteristics of a customer. Typically, these characteristics, such as price sensitivity and willingness to pay, are hidden. Banks must assume or approximate those characteristics to build well-functioning models (Phillips, 2021). These approximations prove complicated as they rely on observational data of past loan negotiations, which introduces several challenges: while the problems of endogeneity and hidden confoundedness are prominently studied in the established pricing literature (Berry, 1994; Villas-Boas and Winer, 1999), selection bias (Heckman, 1990) is not. Selection bias (or "bid selection bias", as we refer to it in the context of this work) persists in observational pricing data due to bias in the established pricing policy. Certain customer groups might receive preferential treatment in accordance with an established pricing grid or regulatory requirements (Jain et al., 2016), bank clerks might suffer from unconscious bias in handling customers and applying discounts, and customer groups might self-select in choosing which financial products to purchase.
Our work aims to study the impact of selection bias on learning personalized pricing models from observational data and builds on identifying methodologies that overcome potential adverse effects of selection bias. In order to do so, we propose to frame loan pricing as a problem of causal inference and look at pricing as a case of treatment and effect. The offered loan price, the "bid", is considered to be a continuously-valued treatment, whereas the treatment effect is the probability of a customer accepting this bid, i.e., the "bid response". A model of the individual treatment effect, that is, the individual bid response, can be used to optimize and personalize pricing. Applications include decision support for bank clerks, data-driven allocation of personalized prices, and improved customer segmentation in existing pricing policies.
To study the effects of bid selection bias, we work with a semi-synthetic dataset created in collaboration with a bank in Belgium. We apply a set of machine learning techniques, including novel methods from causal machine learning, to estimate bid-response curves and evaluate their robustness to varying levels of selection bias in the training data.
To the best of our knowledge, our study is the first of its kind and adds to the literature in the following ways:
* we frame the pricing problem as a problem of causal inference;
* we showcase the issue of selection bias in observational pricing data and its impact on decision making;
* we evaluate the robustness of established and novel machine learning methods against bid selection bias in observational data.
The remainder of our work is organized as follows: in Section 2, we provide an overview of the established literature on customized pricing. Section 3 defines pricing as a problem of causal inference and discusses reasons for selection bias, as well as methods for bid-response estimation. Section 4 describes our experimental evaluation, including the generation of our dataset. We conclude in Section 5, in which we reflect on results and discuss potential future research.
Related Work
**Customized pricing:** Loan pricing is a case of customized pricing (Phillips, 2012, 2013). Contrary to the pricing of uniform goods (see, e.g., Varian, 2014), in customized pricing, neither goods nor prices are homogeneous across customers (Phillips, 2021). In loan pricing, customized products and prices can better meet customers' financing demands and offer banks a way to manage and mitigate risks. Customization of the product occurs by varying the duration, amount, and starting principal of the loan. Additionally, there typically is no price transparency, as customers cannot observe prices offered to other individuals. This changes the objective from setting a single best price for a customer group to setting the optimal price per individual. As a result, customized pricing is often considered an optimization problem (Oliver and Oliver, 2014; Phillips, 2012; Sundararajan et al., 2011) that aims to optimize a certain business objective. We summarize the customized pricing process as follows: after a customer has approached a seller, they request a price for a product, commonly referred to as a _bid_. The seller will try to determine the bid that maximizes a certain objective, for example, revenue or profit generation per customer, which is facilitated by estimating the probability that a bid is accepted. The probability of acceptance as a function of the height of the bid is referred to as the _bid-response function_. It differs from the demand curve in microeconomics by representing a probability concerning a single individual instead of an aggregated demand over a customer group. In this respect, customized pricing is related to the wider field of market response modeling, an introduction to which can be found in, e.g., Hanssens et al. (2003). For the case of loans, we describe an exemplary loan pricing and sales process in Section 3.1.
**Endogeneity in pricing:** Traditionally, estimation of the bid response is approached with parametric and semiparametric methods, assuming a certain functional form of the bid response (Phillips, 2021; van Ryzin, 2012). This estimation is subject to various challenges in terms of, for example, the available information and infrastructure, model complexity, and legal and regulatory requirements (Phillips, 2012). The most prominent challenge discussed in the established pricing literature is "endogeneity" in observational training data (or "hidden confoundedness" in the wider sense). Endogeneity is present if variables critical to explaining customer preferences or their actions are unobserved (or not available for modeling) (Phillips et al., 2012). Reasons for endogeneity are plentiful, stemming, for example, from interactions between customers and sales representatives that are not or insufficiently recorded, due to internal and external regulations. There is a wide body of literature on identifying and handling endogeneity, both in pricing and more generally in economics and statistics (see, e.g., Heckman, 1978; Kuksov and Villas-Boas, 2008; Newey, 1987; Rivers and Vuong, 1988). De facto standard methods are instrumental variable regression (e.g., Hausman et al., 2012), and BLP procedures named after the authors of Berry et al. (1995). More recently, machine learning has been proposed to finding bid-response models (Agrawal and Ferguson, 2007; Arevalillo, 2019; Guelman and Guillen, 2014; Lawrence, 2003).
**Selection bias in pricing:** The impact of selection bias (cf. Section 3.2) on pricing has received little attention in the literature. There is, however, an established body of research from other domains on the issue, such as medicine (Berrevoets et al., 2020), economics (Varian, 2016), market modeling (Ferkingstad et al., 2011), and maintenance (Vanderschueren et al., 2023).
Methodology
To showcase the impact of selection bias on customized pricing, we propose a methodology in four steps. In Section 3.1, we describe the general loan pricing and sales process and frame it as a problem of causal inference. In Section 3.2, we build on this framing and introduce selection bias in observational data and its relevance towards pricing. In Section 3.3, we discuss the necessary assumptions for the causal identification of bid responses. Finally, in Section 3.4, we present the methods that we apply to learn bid-response functions from observational data. Our experimental setup and results follow in Section 4.
### Formalizing the loan pricing and sales process
The general process of a customer taking out a loan is formalized as follows:
* first, a customer approaches a finite number of banks to negotiate a loan;
* second, a bank obtains information about the customer and evaluates whether and at what conditions it wants to grant the loan;
* third, each bank either rejects the customer or provides loan conditions, including a prospective price, i.e., the "bid";
* fourth and finally, the customer evaluates the offers received from the banks and decides whether to accept any of them, and if so, which. A round of negotiations might follow after the initial bid is placed. For the purpose of this study, we do not investigate any negotiations but focus on the first response of a customer to the first bid only.
For the purpose of our study, we simplify the process and look at it from the perspective of a single bank. The process reduces to three steps: a customer approaches the bank, the bank provides a loan offer including a bid for the price of the loan, and finally a customer decides on acceptance or refusal of the offer. The simplified process is depicted in Figure 1.
**Loan price decomposition:** In practice, a loan price often consists of multiple parts, such as a base price and a customer-specific component. The base price serves to cover risks and costs to the bank and is determined via risk-based pricing. For risk-based pricing, banks cluster customers into a finite number of risk bands and determine a price per risk band that covers running costs of the bank, as well as expected losses (Phillips, 2013). Risk models are typically proprietary, resulting in price dispersion in the market. The individual component to a loan price is given by a bank clerk during the sales process based on their information and personal assessment of the customer. Discounts might be driven by incentive schemes resulting from corporate strategy. The impact of such incentive schemes (Garrow et
Figure 1: Simplified loan pricing process
2015) and the risk of principal-agent problems (Berhold, 1971; Ross, 1973) is out of scope in this work. Additionally, we do not take into account the impacts of negotiations between customers and bank clerks that might occur.
**Loan pricing as a problem of causal inference:** We assume that a bank has records on their historical sales processes available, which are formalized as such: after a customer \(i\) has decided about a specific loan offer, the result is summarized as a tuple \((\mathbf{x}_{i},b_{i,f},y_{i,f})\). All values are realizations of random variables \((\mathbf{X},B,Y)\). \(\mathbf{X}\in\mathbb{R}^{d}\) describes the characteristics of the customer, as well as the specifications of the loan, excluding the bid, and information on competitor offers1. \(B\in\mathbb{R}\) is the bid made to the customer for the loan price, i.e., the credit rate in percent of interest per year. \(Y\in\{0,1\}\) is the decision to accept the offer and take up the loan or not. We assume \(Y\) to be a Bernoulli random variable that is dependent on \(P\in[0,1]\), the probability of accepting a loan offer; hence, \(Y\sim Bernoulli(P)\).
Footnote 1: We assume that all information relevant to the customer decision is known and in \(\mathbf{X}\) to satisfy necessary assumptions (cf. Section 3.3).
In framing loan pricing as a problem of causal inference, we adopt the Neyman-Rubin potential outcome framework (Rubin, 2004, 2005) and assume that for every realization of a bid \(b\) we find a potential outcome \(P(b)\) of the probability of accepting the loan offer and a realization \(Y(b)\in\{0,1\}\). All observed data and corresponding probabilities are referred to as "factual", including the factual bid \(B_{f}\), the factual acceptance probability \(P_{f}=P(B_{f})\), and the factual acceptance \(Y_{f}\sim Bernoulli(P_{f})\). Any potential outcomes to unobserved combinations of a customer characteristic and a bid are referred to as "counterfactual outcomes".
**Bid response as a treatment effect:** For our study, we assume that a bank is interested in identifying the individual response of a customer to a certain bid conditional on the customer's characteristics and the loan they have applied for. Such a model could be used for data-driven bid allocations, improved customer segmentation for risk-based pricing, or to support bank clerks in defining the optimal discretionary discount to give to a customer to make a winning bid. To win a customer, two requirements must be met: on the one hand, a bid cannot exceed the overall willingness-to-pay of the customer, that is, even without competition, a customer is not willing to pay any price for the loan. On the other hand, the loan conditions offered (including the bid) must be more favorable to the customer than those of any of the other competing banks. We note that there might be situations in which a customer might not just pick a bank based on loan conditions but based on personal relations or past experiences. Modeling such a behavior is out of the scope in our study.
In our experiments, we aim to find an unbiased estimate of the individual bid-response function, i.e., the probability of accepting a loan conditional on the bid and customer characteristics:
\[\mu(b,\mathbf{x})=\mathbb{E}\left[P(b)|\mathbf{X}=\mathbf{x}\right] \tag{1}\]
To evaluate how a bank could use information about \(\mu(b,\mathbf{x})\), we assume that a bank wants to determine the optimal bid \(b_{i}^{*}\) per customer that maximizes the expected revenue from the loan pricing and sales process. We expect the revenue \(R\) from a loan offer to be proportional to both the height of the bid \(b\) and the decision to take out the loan with respect to the bid \(Y(b)\):
\[R(b)\sim b*Y(b) \tag{2}\]
The optimal bid \(b_{i}^{*}\) for an individual \(i\) is defined as the bid that maximizes the expected revenue for the customer characteristics \(\mathbf{x}_{i}\):
\[\begin{split} b_{i}^{*}&=\operatorname*{arg\,max}_{b} \,\mathbb{E}\left[R(b)|\mathbf{X}=\mathbf{x}_{i}\right]\\ &=\operatorname*{arg\,max}_{b}\,b*\mu(b,\mathbf{x}_{i})\end{split} \tag{3}\]
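To make Equation 3 concrete, the following minimal sketch performs the \(\operatorname*{arg\,max}\) by grid search over the standardized bid domain \([0,1]\) (cf. Section 4.1.1), given some fitted bid-response estimator. The callable `mu_hat`, the toy response, and the grid resolution are illustrative assumptions, not part of our method.

```python
import numpy as np

def optimal_bid(mu_hat, x, b_grid=None):
    """Grid-search the revenue-maximizing bid b* = argmax_b b * mu_hat(b, x) (Eq. 3).

    mu_hat : callable (b, x) -> estimated acceptance probability in [0, 1]
    x      : 1-D array of customer and loan covariates
    """
    if b_grid is None:
        b_grid = np.linspace(0.0, 1.0, 101)  # standardized bid domain, assumed resolution
    expected_revenue = np.array([b * mu_hat(b, x) for b in b_grid])
    return b_grid[np.argmax(expected_revenue)]

# Toy illustration: acceptance probability that decays with the bid.
toy_mu = lambda b, x: 1.0 / (1.0 + np.exp(10.0 * (b - 0.5)))
b_star = optimal_bid(toy_mu, x=np.zeros(13))
```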
Our formalization of the loan pricing and sales process makes two simplifications that we want to motivate: first, in reality, a bank is interested in profit over revenue maximization. Yet, to maximize profit, further assessments are needed, such as those of risk. These assessments are beyond the scope of this work. In addition, the bid response might be a component of assessing risk, thereby motivating this simplification. Second, we do not take loan negotiation rounds into account. In several markets, loan pricing and sales processes involve multiple rounds of iterative bidding. We simplify this process and assume that only one bid is made by the bank, as stated before.
### Selection bias
We are interested in learning \(\mu(b,\mathbf{x})\) from observational data. This, in itself, is an intricate task, as for every observation, one observes the outcome of a loan pricing and sales process to only the factual bid \(b_{f}\) and not to any other. This is known as the "fundamental problem of causal inference" (Holland, 1986). Furthermore, for a single customer, there is only a small number of observations available, as a single individual takes out only a low number of loans in their lifetime and not necessarily all at the same bank.
The task is further complicated as the observational data that can be used to learn \(\mu\) are expected to be collected under a specific pricing policy, in contrast to data collected using predefined experiments, for example, randomized controlled trials (Angrist and Pischke, 2009; Deaton and Cartwright, 2018; Rubin, 1974). This established pricing policy introduces selection bias into the data. Selection bias describes the relationship between the factually assigned bid (or any intervention more generally) and the pretreatment covariates of an observation. Figure 2 shows the dependency between the factual bid and the pretreatment covariates of an observation, as assumed in the remainder of this manuscript. Running randomized controlled trials in loan pricing is infeasible, for example, due to regulation, compliance, ethical concerns, and costs.
Figure 2: DAG representing the causal dependencies between the variables in the loan pricing and sales process
The problem of learning effects of an intervention under the presence of selection bias has been studied for both the case of binary interventions (Rosenbaum and Rubin, 1983, 1984; Yoon et al., 2018) and continuously-valued interventions (Bica et al., 2020; Hirano and Imbens, 2004; Imai and Van Dyk, 2004; Imbens, 2000). Adverse effects of selection bias on traditional approaches to estimating intervention effects have been studied across fields (see, e.g., Schwab et al., 2020; Vanderschueren et al., 2023).
We elaborate on three exemplary reasons for bid selection bias in pricing data:
1. Loan offers were made according to an established pricing policy. Hence, some customer groups are more prone to being assigned a higher or lower bid than others.
2. Bank clerks might have suffered from implicit stereotypes, either offering higher discretionary discounts to some or even denying loans to certain customer groups (Cozarenco and Szafarz, 2018).
3. Customers might have suffered from various types of self-selection, e.g., driven by socioeconomic status. Customer groups may have a preference for certain loan products over others. Other customer groups might not have applied for a specific loan at all.
### Assumptions
In the literature, we find three assumptions that typically underly the estimation of intervention effects from observational data (Imbens, 2000; Lechner, 2001). We will discuss these assumptions and their applicability in pricing:
**Assumption 1**.: _Overlap: \(\forall b\in\mathbb{R}:\forall\mathbf{x}\in\mathbb{R}^{d}\) with \(\mathbb{P}(\mathbf{x})>0:0<\mathbb{P}(b|\mathbf{x})<1\)_
Overlap requires that for every possible customer characteristic \(\mathbf{x}\), every possible bid \(b\) has a nonzero probability of being offered. In reality, an established pricing policy might assign specific bids with little to no deviation to customer groups, resulting in a violation of the overlap assumption. Overlap might be ensured, however, by bank clerks assigning discretionary discounts of sufficient size to their customers. The resulting dispersion of bids ensures that the true bid response can be estimated for all possible bid levels for a certain customer.
**Assumption 2**.: _Consistency: \(\forall\mathbf{x}\in\mathbb{R}^{d}:Y_{f}=(Y|\mathbf{X}=\mathbf{x})\)_
Consistency means that the observed outcome is the true potential outcome for the applied treatment. As Vanderschueren et al. (2023) note, this appears to be a straightforward assumption. To ensure consistency in pricing, we must carefully observe any changes in economic sentiment and correct or retrain models if, e.g., prime rates change. For example, if monetary policy tightens, the response of a customer to a certain bid \(b\) will likely change as well. For our experiments, we assume that the economic sentiment is stable and that consistency can be assumed.
**Assumption 3**.: _Unconfoundedness: \((P(b)|b\in\mathbb{R})\perp B_{f}|\mathbf{X}\)_
Unconfoundedness requires that all information that determined the factually assigned bid is known and available for modeling. In pricing, this assumption is easily violated, e.g., due to unrecorded interactions between bank clerks and customers (Phillips et al., 2012), legal bounds on the use of personal data, and missing information about competitor offers, commonly referred
to as "competitive uncertainty".
For the purpose of our work, we do not further evaluate the impacts of violating these assumptions on estimating bid responses. Instead, we assume the assumptions are met, to isolate the effect of selection bias.
### Predicting bid-response curves
We evaluate a total of seven methods to model individual bid responses, covering parametric models, nonparametric models, and causal and noncausal methods, as depicted in Table 1. Each method is discussed further below:
We consider four noncausal methods for bid-response learning to test their robustness to selection bias and their appropriateness for modeling bid-response functions. First, naive pricing is a strategy in which we do not estimate any bid response but assume that the best bid \(b^{*}\) per customer is the factually assigned bid under the established pricing policy. Second, logistic regression is a classification method that is widely adopted across scientific domains and industry applications, such as fraud detection, credit risk modeling, and churn prediction. It is also the de facto standard in estimating bid-response models (van Ryzin, 2012) due to its well-suited hypothesis space. Logistic regression is fully parametric and easy to interpret. Third, we consider a random forest classifier (Breiman, 2001). The random forest classifier combines multiple decision trees into an ensemble to improve the accuracy and reduce the variance of the final model. Random forests can handle a variety of input variables and are robust to overfitting. Random forests, and tree-based methods more generally, have found wide adoption in industry applications of machine learning, motivating our choice to adopt them in our experiments. Fourth, we fit an artificial neural network, i.e., a multilayer perceptron (MLP). MLPs are universal function approximators (Hornik et al., 1989) and are powerful methods for a multitude of prediction tasks, making them an important benchmark for our experiments.
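As a concrete baseline, a minimal scikit-learn sketch of this standard approach fits a logistic regression on the covariates with the factual bid appended as an additional feature and reads off an individual bid-response curve by varying the bid; the synthetic arrays `X`, `b_f`, and `y_f` below are placeholders for the (confidential) training data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder data: (n, d) covariates, factual bids in [0, 1], accept/reject labels.
rng = np.random.default_rng(0)
X, b_f = rng.normal(size=(500, 13)), rng.uniform(size=500)
y_f = (rng.uniform(size=500) < 1.0 / (1.0 + np.exp(8.0 * (b_f - 0.5)))).astype(int)

# The bid enters the model as one additional feature next to the covariates.
clf = LogisticRegression(max_iter=1000).fit(np.column_stack([X, b_f]), y_f)

def mu_hat(b, x):
    """Estimated bid response for covariates x at bid level b."""
    return clf.predict_proba(np.append(x, b).reshape(1, -1))[0, 1]
```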
Additionally, we implement three causal methods that are meant to handle selection bias by design, expecting that these methods will show improved robustness to increasing levels of bid selection bias in pricing data. First, we implement the Hirano-Imbens estimator ("HIE", Hirano and Imbens, 2004), which builds on the idea of the generalized propensity score, as proposed by Imbens (2000). The HIE is an imputation-type estimator. Initially, a model of the treatment assignment is built. Based on that model, the propensity of every observation's factual bid is calculated. Finally, the bid response is modeled as a function of the bid level and the
\begin{table}
\begin{tabular}{l c c} \hline \hline Method & Parametric & Causal \\ \hline Naïve pricing & n.a. & \\ Logistic regression & \(\checkmark\) & \\ Random forest & & \\ MLP & & \\ HIE & \(\checkmark\) & \(\checkmark\) \\ VCNets & & \(\checkmark\) \\ DRNets & & \(\checkmark\) \\ \hline \hline \end{tabular} n.a.: not applicable
\end{table}
Table 1: Models for bid-response prediction
propensity score. We follow Schwab et al. (2020) and Bica et al. (2020) in considering a linear relationship between bid and pretreatment covariates with normally distributed error terms, as well as a logistic regression on the second-order polynomial of bid level and propensity score as the final model of the bid response. Second, we implement DRNets (Schwab et al., 2020). DRNet extends TARNet (Shalit et al., 2017) from estimating the effects of binary interventions to the continuous case by training a neural network with \(\rho\) individual prediction heads for observations with different levels of a continuous intervention. DRNets learn a shared representation of the training observations, which implicitly has a regularizing effect against overfitting the training data. Third, we implement VCNets. Nie et al. (2021) propose VCNets as an extension of DRNets: a varying-coefficient neural network in which the network coefficients are a function of the continuously valued intervention, enforcing a continuous dose-response function and thereby avoiding the tendency of DRNets to estimate discontinuous functions. We use the architecture proposed in Nie et al. (2021). We illustrate the architectures of DRNets and VCNets in Figure 3. Finally, we would like to motivate our choice not to include SCIGAN (Bica et al., 2020), another powerful causal machine learning method for estimating the effects of continuously valued interventions, in the experiments. Even though SCIGAN has shown high performance in previous studies (Bica et al., 2020; Vanderschueren et al., 2023), its setup of using generative models to generate counterfactuals for learning unbiased responses is not suited to the case of pricing, as the observed outcomes are not continuous. In our own experiments, SCIGAN performed worse than DRNets and all other methods considered.
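A minimal sketch of the two-stage HIE logic described above, again on placeholder data: a linear-Gaussian treatment model yields the generalized propensity score (GPS), and a logistic regression on a second-order polynomial of bid level and GPS yields the response model. Function and variable names are ours for illustration.

```python
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
X, b_f = rng.normal(size=(500, 13)), rng.uniform(size=500)
y_f = (rng.uniform(size=500) < 1.0 / (1.0 + np.exp(8.0 * (b_f - 0.5)))).astype(int)

# Stage 1: treatment model B | X with normally distributed errors -> GPS.
t_model = LinearRegression().fit(X, b_f)
sigma = np.std(b_f - t_model.predict(X))
gps = norm.pdf(b_f, loc=t_model.predict(X), scale=sigma)

# Stage 2: outcome model on a second-order polynomial of (bid, GPS).
def poly(b, r):
    return np.column_stack([b, r, b**2, r**2, b * r])

out_model = LogisticRegression(max_iter=1000).fit(poly(b_f, gps), y_f)

def hie_mu(b, x):
    """HIE estimate of the bid response at bid b for covariates x."""
    r = norm.pdf(b, loc=t_model.predict(x.reshape(1, -1)), scale=sigma)
    return out_model.predict_proba(poly(np.array([b]), r))[0, 1]
```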
## 4 Experimental Evaluation
### Data
We use a real-life dataset on mortgage loan offers from a bank in Belgium. The data cover over 12,000 loan offers. Each offer relates to precisely one customer. In collaboration with the bank, we selected and processed an anonymized subset of 13 variables for training. The selection of variables was based on their availability for all customers, to ensure no missing data, and on an expert assessment of their relevance for estimating price sensitivity. Categorical variables have been dummy encoded, and all variables have been standardized. These variables include information on the terms and conditions of the loan (5 variables), the financial background of the customer (3 variables), the socioeconomic background of the customer (4 variables), and the existing relationship of the customer with the bank (1 variable). The precise definition of the variables is confidential
Figure 3: Network architectures
information that cannot be shared.
As highlighted in Section 3.1, evaluating the correctness of predicted potential outcomes is challenging. First, we never observe the true bid-response curve but only the outcome to one bid per customer due to the fundamental problem of causal inference. Second, estimating the overall effectiveness of one pricing policy over another over a group of customers is complicated due to the observational data. Methods designed to estimate such an aggregate effect, such as the Qini-coefficient (Radcliffe, 2007), require randomized controlled trial (RCT) data (Angrist and Pischke, 2009) and are negatively affected by selection bias. Due to compliance, regulation, and the risks of losses and reputation, such trials cannot be implemented reasonably in many real-life settings, including banking operations.
To overcome this issue, we make use of semisynthetic data, a common approach for evaluating potential outcome estimation while avoiding the fundamental problem of causal inference (Berrevoets et al., 2020; Qian et al., 2023; Schwab et al., 2020; Yoon et al., 2018). In such a setup, typically, the explanatory variables in the data are kept as-is, whereas both treatment variables \(B\) and outcomes \(P\) and \(Y\) are generated from a well-defined and known ground truth to control factors such as the strength of the selection bias and the functional form of the potential outcome in relation to \(\mathbf{X}\). We deem the use of semisynthetic data in pricing particularly interesting for assessing the impacts of selection bias on modeling bid responses.
In the following two subsections, we discuss how we model the underlying ground-truth bid response (Section 4.1.1) and the assignment of factual bids (Section 4.1.2). For our experiments, we use 70% of the total data for training, 10% for validation and parameter tuning, and 20% for testing.
#### 4.1.1 Ground-truth bid response
Based on van Ryzin (2012) and Phillips (2021), we define four requirements with regard to a synthetic ground-truth bid-response function. In our experiments, we standardize bids to be in the domain \([0,1]\), with \(b=0\) the lowest observed bid and \(b=1\) the highest observed bid:
* At \(b=0\), every customer has a unique probability of acceptance, \(p(0,\mathbf{x})\leq 1\). The probability is \(p\leq 1\) but not \(p=1\), as even for the lowest observed bid, a customer might consider the bid to be too high according to their individual preferences or reject a bid based on their customer experience during the loan pricing and sales process.
* When \(b\) increases, the probability of accepting a loan converges but not necessarily towards zero, i.e., \(p(\infty,\mathbf{x})\geq 0\). The probability does not necessarily converge to zero, as even for the highest observed bid, a customer might take out a loan.
* The price sensitivity, that is, \(\frac{\mathrm{d}p(b,\mathbf{x})}{\mathrm{d}b}\), is heterogeneous across customers. Ceteris paribus, two customers might react differently in terms of their bid response to an increase of the bid.
* The probability of loan acceptance is monotonically decreasing for an increasing value of the bid, \(b\).
Furthermore, we consider two different ground-truth bid-response curves: a Richards curve and a stacked sigmoid curve (Figure 4 provides a comparative visualization of both curves).
**Richards curve:** The generalized logistic function or Richards curve (Richards, 1959) is defined as:
\[p(b,\mathbf{x})_{RC}=(1-\alpha)-\frac{\beta-\alpha}{1+e^{-\gamma*(\delta-b)}}+\epsilon \tag{4}\]
with \(1-\alpha\), the left asymptote (or maximum level of acceptance at the lowest observed bid \(b_{min}\)), \(1-\beta\), the right asymptote (or minimum level of acceptance at the highest observed price \(b_{max}\)), \(\gamma\) the steepness of the curve and \(\delta\) the position of the turning point of the curve. The logistic function has seen various applications across domains and is often used as a modeling tool for classification tasks (Cramer, 2002).
To condition \(p(b,\mathbf{x})_{RC}\) on individual customer characteristics, we parameterize the function by means of linear combinations of the input features in \(\mathbf{x}\) via:
\[\alpha =(0.2\mathbf{w}_{1}^{\intercal}\mathbf{x}) \tag{5}\] \[\beta =(0.8+0.2(\mathbf{w}_{2}^{\intercal}\mathbf{x}))\] \[\gamma =(0.5+5(\mathbf{w}_{3}^{\intercal}\mathbf{x}))\] \[\delta =(\mathbf{w}_{4}^{\intercal}\mathbf{x})\] \[\epsilon \sim\mathcal{N}(0,0.1)\]
For all \(i\in\{1,2,3,4\}\), \(\mathbf{w}_{i}\sim\mathcal{U}((0,1)^{d\times 1})\), as proposed in Bica et al. (2020).
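A sketch of how the Richards-curve ground truth (Equations 4 and 5) could be generated; clipping the resulting value to \([0,1]\) is an added assumption on our part to keep the simulated probability valid.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 13
w = rng.uniform(size=(4, d))  # one weight vector per curve parameter (Eq. 5)

def richards_response(b, x, noise_scale=0.1):
    """Ground-truth bid response of Eq. 4 with the parameterization of Eq. 5."""
    alpha = 0.2 * (w[0] @ x)
    beta = 0.8 + 0.2 * (w[1] @ x)
    gamma = 0.5 + 5.0 * (w[2] @ x)
    delta = w[3] @ x
    eps = rng.normal(0.0, noise_scale)
    p = (1 - alpha) - (beta - alpha) / (1 + np.exp(-gamma * (delta - b))) + eps
    return np.clip(p, 0.0, 1.0)  # added assumption: keep the probability valid
```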
**Stacked sigmoid:** To test the performance of models in the case of highly nonlinear bid-response curves, we evaluate them considering the bid response as a mixture of two sigmoid curves. We define:
\[p(b,\mathbf{x})_{SS}=\alpha+\beta*\sigma(b/\gamma)+(1-\alpha-\beta)*\sigma((b-\delta)/(1-\delta))+\epsilon \tag{6}\]
\(\sigma(x)\) is defined as a sigmoid curve:
\[\sigma(x)=1-\frac{1}{1+e^{-20*(0.5-x)}} \tag{7}\]
Figure 4: Illustrative visualization of bid-response curves
As before, we condition \(p(b,\mathbf{x})_{SS}\) on individual customer characteristics via:
\[\alpha =(0.2\mathbf{w}_{1}^{\intercal}\mathbf{x}) \tag{8}\] \[\beta =(0.8\mathbf{w}_{2}^{\intercal}\mathbf{x})\] \[\gamma =\mathbf{w}_{3}^{\intercal}\mathbf{x}\] \[\delta =\mathbf{w}_{4}^{\intercal}\mathbf{x}\] \[\epsilon \sim\mathcal{N}(0,0.1)\]
For all \(i\in\{1,2,3,4\}\), \(\mathbf{w}_{i}\sim\mathcal{U}((0,1)^{d\times 1})\).
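Under the same conventions (our own naming, normalized \(\mathbf{w}_{i}^{\intercal}\mathbf{x}\), clipping to \([0,1]\)), the stacked-sigmoid response of Equations 6 to 8 could be sketched as follows; we additionally assume \(\gamma>0\) and \(\delta<1\) so that both scaled arguments are well defined.

```python
import numpy as np

rng = np.random.default_rng(1)

def steep_sigmoid(x):
    """Sigmoid building block from Equation 7, decreasing from ~1 to ~0 on [0, 1]."""
    return 1.0 / (1.0 + np.exp(-20.0 * (0.5 - x)))

def stacked_sigmoid_response(b, x, w, noise_scale=0.1):
    """Stacked-sigmoid bid response p(b, x) following Equations 6 and 8."""
    alpha = 0.2 * (w[0] @ x)
    beta = 0.8 * (w[1] @ x)
    gamma = w[2] @ x   # assumed strictly positive
    delta = w[3] @ x   # assumed strictly below 1
    eps = rng.normal(0.0, noise_scale)
    p = (alpha + beta * steep_sigmoid(b / gamma)
         + (1.0 - alpha - beta) * steep_sigmoid((b - delta) / (1.0 - delta)))
    return float(np.clip(p + eps, 0.0, 1.0))

d = 5
x = rng.uniform(size=d)
w = rng.uniform(size=(4, d))
w /= w.sum(axis=1, keepdims=True)
print([round(stacked_sigmoid_response(b, x, w), 3) for b in (0.0, 0.5, 1.0)])
```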
#### 4.1.2 Bid assignment
In addition to simulating the bid response, we control the level of selection bias by sampling the factual bid \(b_{f}\) from a Beta distribution. The approach is motivated and detailed in Bica et al. (2020). We model varying levels of selection bias by assigning to every observation \(\mathbf{x}\) a factual bid:
\[b_{f}\sim\text{Beta}\left(\theta+1,\;\frac{\theta}{\phi}+(1-\theta)\right) \tag{9}\]
where \(\theta\geq 0\) controls the level of selection bias; \(\theta=0\) results in no selection bias. \(\phi\) is the modal bid, i.e., the most likely assigned bid, defined as
\[\phi=\mathbf{w}_{5}^{\intercal}\mathbf{x} \tag{10}\]
with \(\mathbf{w}_{5}\sim\mathcal{U}((0,1)^{d\times 1})\). Hence, the distribution of the assigned bids in the dataset depends on \(\mathbf{X}\), thereby introducing selection bias. Different levels of bias are illustrated in Figure 5.
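A minimal sketch of this assignment step, with our own naming and the same normalization assumption on \(\mathbf{w}_{5}\), is shown below; note that \(\theta=0\) recovers \(\text{Beta}(1,1)\), i.e., uniformly assigned bids, while larger \(\theta\) concentrates the bids around the feature-dependent mode \(\phi\).

```python
import numpy as np

rng = np.random.default_rng(2)

def assign_factual_bid(x, w5, theta):
    """Sample b_f ~ Beta(theta + 1, theta / phi + (1 - theta)) as in Equation 9."""
    phi = float(w5 @ x)  # modal bid; we assume w5^T x lies in (0, 1)
    return rng.beta(theta + 1.0, theta / phi + (1.0 - theta))

d = 5
x = rng.uniform(size=d)
w5 = rng.uniform(size=d)
w5 /= w5.sum()  # keep phi inside (0, 1)
print([round(assign_factual_bid(x, w5, t), 3) for t in (0.0, 5.0, 20.0)])
```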
Finally, in addition to directly controlling the level of selection bias, we use the bid levels observed in the original dataset to assess the impact of the degree of bid selection bias present in real-world scenarios.
### Evaluation metrics
We evaluate all methods using three performance metrics:
Figure 5: Visualization of different levels of selection bias \(\theta\) for mode \(\phi=0.25\)

**Mean Integrated Squared Error (MISE)**: The MISE evaluates the potential of a method to predict the true bid response per customer along all observed bid levels:

\[\text{MISE}=\frac{1}{n}\sum_{i=1}^{n}\int_{b_{min}}^{b_{max}}\left(\mu(b,\mathbf{x}_{i})-\hat{\mu}(b,\mathbf{x}_{i})\right)^{2}db \tag{11}\]
with \(n\) the number of test observations, \(b_{min}\) the lowest bid, and \(b_{max}\) the highest bid observed in the training data\({}^{2}\). The MISE was initially proposed for individual continuous treatment effect estimation by Schwab et al. (2020)\({}^{3}\).
Footnote 2: Due to the overlap assumption (cf. Section 3.3), we only evaluate bids between \(b_{min}\) and \(b_{max}\).
Footnote 3: In addition to evaluating the bid response, we evaluate the MISE over the expected revenue in accordance with Equation 3, further referred to as the _MISE R_. The results can be found in the Appendix.
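In practice, the integral in Equation 11 has to be approximated on a finite grid of bids. A minimal sketch using a trapezoidal rule is given below; all names are our own, and `mu_true` and `mu_hat` are placeholders for vectorized ground-truth and estimated bid-response functions.

```python
import numpy as np

def mise(mu_true, mu_hat, X, b_min=0.0, b_max=1.0, n_grid=65):
    """Approximate the MISE of Equation 11 with a trapezoidal rule.

    mu_true, mu_hat: callables mapping (bid grid, x) to response values;
    X: (n, d) array of test observations."""
    bids = np.linspace(b_min, b_max, n_grid)
    total = 0.0
    for x in X:
        sq_err = (mu_true(bids, x) - mu_hat(bids, x)) ** 2
        # Trapezoidal integration of the squared error over the bid grid.
        total += np.sum((sq_err[:-1] + sq_err[1:]) * np.diff(bids)) / 2.0
    return total / len(X)
```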
**Policy error (PE)**: The PE evaluates how well a method is able to identify the optimal bid \(b^{*}\) and was proposed by Schwab et al. (2020):
\[\text{PE}=\frac{1}{n}\sum_{i=1}^{n}\left(b_{i}^{*}-\hat{b}_{i}^{*}\right)^{2} \tag{12}\]
with \(n\) the number of test observations, \(b_{i}^{*}\) the true optimal bid of observation \(i\), and \(\hat{b}_{i}^{*}\) the estimated optimal bid.
**Brier score (BS):** Finally, we evaluate each model in terms of its Brier score (Brier, 1950), which assesses a model's capacity to estimate the factual outcome of a loan pricing and sales process:
\[\text{BS}=\frac{1}{n}\sum_{i=1}^{n}\left(y_{i,f}-\hat{\mu}(b_{i,f},\mathbf{x}_ {i})\right)^{2} \tag{13}\]
with \(y_{i,f}\) the factual outcome of observation \(i\) and \(\hat{\mu}(b_{i,f},\mathbf{x}_{i})\) the estimated probability of bid acceptance of observation \(i\) for their factually assigned bid.
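Both remaining metrics reduce to mean squared errors over the test set. A minimal sketch with our own naming, assuming the optimal bids and factual predictions have been computed beforehand:

```python
import numpy as np

def policy_error(b_star, b_star_hat):
    """Policy error of Equation 12 over the test set."""
    return float(np.mean((np.asarray(b_star) - np.asarray(b_star_hat)) ** 2))

def brier_score(y_factual, mu_hat_factual):
    """Brier score of Equation 13 on the factually assigned bids."""
    return float(np.mean((np.asarray(y_factual) - np.asarray(mu_hat_factual)) ** 2))
```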
### Implementation
We implement our experiments in Python 3.9. Neural network-based methods are implemented in PyTorch (Paszke et al., 2017). Random forest models and logistic regression are implemented using the SciKit-Learn library (Pedregosa et al., 2011). HIE is implemented using the statsmodels library (Seabold and Perktold, 2010). For VCNets, we use the implementation provided in the original work of Nie et al. (2021). For a list of hyperparameters used to train each method, we refer to Section B in the Appendix. All code used for this project is available online at:
[https://github.com/christopher-br/Causal_Pricing](https://github.com/christopher-br/Causal_Pricing).
### Experimental results
We apply our data-generating process to the data presented in Section 4.1. We run the process a total of 10 times for each of seven bias strengths, increasing from no bias (\(\theta=0\)) to heavy bias (\(\theta=20\)), as well as for the factual bids. The stated performance metrics are averaged over the 10 randomly initialized runs.
The metrics introduced in Section 4.2 enable evaluation of each model in terms of its capacity to predict the bid response along all observed bids (via the MISE metric) and its appropriateness for operational decision-making to set an optimal bid \(b^{*}\) (via the PE metric). The Brier score, moreover, evaluates the ability to estimate the factual outcome of an observation.
Note that the stated performances depend on both our data generation process (cf. Section 4.1) and the operational objective we have set (cf. Section 3.1).
**Dose response prediction (MISE, Tables 2 and 3):** If the true dose-response function assumes the shape of a Richards curve (cf. Table 2), the standard neural network (MLP) achieves the best performance. Second best are standard logistic regression and VCNets. Contrary to what we initially assumed, logistic regression and the MLP are seemingly least affected by the increase in selection bias, whereas, e.g., DRNets (a causal method) see an increase of almost 100% in MISE from no bias to the strongest bias.
The strong performance of logistic regression is likely explained by its hypothesis space being closely related to the ground-truth bid response in the simulation. As the hypothesis spaces are comparable, standard logistic regression is likely able to find a well-performing model by averaging over the observed data. DRNets, in contrast, are not able to adequately fit the dose responses. We relate this to the criticism of Nie et al. (2021) that DRNets are prone to fitting discontinuous functions due to estimating separate head networks for different bid levels. This can explain why VCNets improve upon DRNets in the case of the Richards curve, as their estimated bid-response curve is continuous. HIE does not improve upon the nonparametric causal methods, which might be due to the functional form chosen for modeling the treatment assignment or bid response.
If we assume higher degrees of nonlinearity in the dose response, i.e., in the case of a stacked sigmoid ground truth (cf. Table 3), we observe an increasingly adverse effect of selection bias on estimating the bid response. The logistic regression model is not able to capture the functional shape of the bid response, and its MISE increases for higher levels of selection bias. The random forest classifier is again most affected by selection bias, performing worst of the methods we have applied. As with the Richards curve, the MLP achieves superior performance when no selection bias is present. This time, however, the MLP is significantly affected as the selection bias in the data increases. In this setting, DRNets exceed the performance of other methods, generating second-best results for low levels of bias and achieving best results for bias levels larger than \(\theta=2.5\), as well as for the bid levels observed in the original dataset. DRNets even outperform VCNets, indicating their applicability to causal effect estimation when the training data are binary. VCNets appear not to be well suited to these kinds of settings as soon as there is a larger degree of nonlinearity in the data.

| Model | \(\theta=0.0\) | \(\theta=2.5\) | \(\theta=5.0\) | \(\theta=7.5\) | \(\theta=10.0\) | \(\theta=15.0\) | \(\theta=20.0\) | True bids |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Naïve pricing | n.a. | n.a. | n.a. | n.a. | n.a. | n.a. | n.a. | n.a. |
| Logistic Regression | 0.115 | 0.095 | 0.096 | 0.097 | _0.100_ | _0.104_ | _0.112_ | 0.141 |
| Random Forest | 0.083 | 0.105 | 0.122 | 0.135 | 0.143 | 0.152 | 0.170 | 0.164 |
| MLP | **0.056** | **0.062** | **0.058** | **0.062** | **0.066** | **0.060** | **0.064** | **0.068** |
| HIE | 0.118 | 0.120 | 0.122 | 0.124 | 0.124 | 0.125 | 0.126 | 0.126 |
| DRNets | _0.076_ | _0.082_ | 0.117 | 0.117 | 0.111 | 0.118 | 0.140 | _0.111_ |
| VCNets | 0.088 | _0.082_ | _0.084_ | _0.092_ | _0.100_ | _0.104_ | 0.114 | 0.112 |

Table 2: Mean Integrated Squared Error (MISE) on Richards curve, per bias strength \(\theta\) and for the true bids. n.a.: not applicable.
**Optimal bid prediction (PE, Tables A1 and A2):** In terms of policy error, we observe similar results as with the MISE. If the true dose response is a Richards curve (cf. Table A1), the best-performing models in terms of PE are the MLP and logistic regression. The added flexibility of nonparametric methods, or the use of causal methods, is not sufficient to outperform these two baseline approaches. Again, the method seemingly most affected by selection bias is the random forest classifier.
With increased nonlinearity, the causal methods again score better and best. For moderate levels of bias, VCNets outperform DRNets, likely due to their continuous estimate of the bid response over the discontinuous function of DRNets. If the bias increases further, this benefit is not sufficient to overcome the performance of DRNets, seemingly a more robust estimator.
**Factual outcome estimation (BS, Tables 4 and A3):** Evaluating each of the methods in terms of the Brier score illustrates the need to look at pricing from a causal perspective. If models are solely tested on their predictive performance within the established pricing mechanism, the results are robust across all levels of selection bias. Most notably, the random forest classifier achieves second-best performance under the highest level of selection bias (cf. Table 4), even though it performs worst in terms of MISE and PE. This finding stresses that the Brier score cannot be used to evaluate how well a pricing model generalizes across bid levels. Hence, testing and selecting models based on the Brier score might even introduce vicious cycles (see, e.g., Adam et al., 2020; Mansoury et al., 2020), an important threat that must be further investigated in the case of banking and lending operations.
| Model | \(\theta=0.0\) | \(\theta=2.5\) | \(\theta=5.0\) | \(\theta=7.5\) | \(\theta=10.0\) | \(\theta=15.0\) | \(\theta=20.0\) | True bids |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Naïve pricing | n.a. | n.a. | n.a. | n.a. | n.a. | n.a. | n.a. | n.a. |
| Logistic Regression | 0.110 | 0.125 | 0.134 | 0.143 | 0.147 | 0.154 | 0.155 | _0.119_ |
| Random Forest | 0.088 | 0.128 | 0.151 | 0.170 | 0.174 | 0.186 | 0.196 | 0.196 |
| MLP | **0.059** | **0.073** | _0.100_ | _0.110_ | _0.119_ | _0.135_ | _0.147_ | 0.132 |
| HIE | 0.087 | 0.101 | 0.109 | 0.122 | 0.121 | 0.149 | 0.157 | 0.176 |
| DRNets | _0.075_ | _0.080_ | **0.098** | **0.093** | **0.115** | **0.128** | **0.109** | **0.103** |
| VCNets | 0.099 | 0.111 | 0.136 | 0.150 | 0.156 | 0.172 | 0.178 | 0.120 |

Table 3: Mean Integrated Squared Error (MISE) on stacked sigmoid, per bias strength \(\theta\) and for the true bids. n.a.: not applicable.

## 5 Conclusion

In this paper, we explore the use of (causal) machine learning to estimate models of individual bid-response curves from observational loan pricing data. We investigate the adverse effects of bid selection bias. In a series of experiments, we evaluate the impact of varying levels of bid selection bias using a semisynthetic dataset on mortgage loan applications and test several data-driven approaches to learning individual bid-response models. Thereby, we frame pricing as a problem of causal inference and, more specifically, of individual treatment effect estimation.
The presented results show that ensuring robustness to bid selection bias is crucial in developing a data-driven pricing solution, a potentially overlooked issue and threat in both the literature and industry. Failing to adjust for selection bias may significantly distort bid-response models that are obtained from observational data and, as such, introduce systematic errors. This, in turn, might induce vicious feedback cycles in which the effectiveness of a pricing policy is overestimated while the policy itself is suboptimal. Tree-based models, such as random forests, seem especially unable to accurately predict counterfactual outcomes, even when their performance in estimating the outcomes of an existing pricing policy might be convincing. The wide adoption of tree-based methods in the industry makes this a substantial threat that calls for further research. Finally, we show that approaches from causal machine learning prove potent in overcoming issues that result from selection bias, at least when the underlying assumptions discussed in this paper are satisfied.
We draw two key conclusions from our seminal work:
1. The estimation of bid responses from observational data varies from previously discussed settings of estimating dose responses (Bica et al., 2020; Schwab et al., 2020; Vanderschueren et al., 2023). In estimating a bid response, practitioners must infer a continuous function from binary training data. This significantly complicates the task in comparison with previous applications with continuous training data. Methodologies such as VCNet have not been able to show the same level of robustness to selection bias as in preceding studies, e.g., as in Nie et al. (2021).
2. Traditional machine learning methods suffer from increasing levels of bid selection bias, especially when the degree of nonlinearity in the underlying ground-truth bid response increases. Causal machine learning methods can help to overcome this issue but cannot fully solve it yet.
| Model | \(\theta=0.0\) | \(\theta=2.5\) | \(\theta=5.0\) | \(\theta=7.5\) | \(\theta=10.0\) | \(\theta=15.0\) | \(\theta=20.0\) | True bids |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Naïve pricing | n.a. | n.a. | n.a. | n.a. | n.a. | n.a. | n.a. | n.a. |
| Logistic Regression | 0.355 | 0.366 | 0.371 | 0.377 | 0.372 | 0.372 | | 0.342 |
| Random Forest | _0.345_ | 0.362 | 0.367 | 0.375 | 0.370 | 0.368 | _0.368_ | _0.337_ |
| MLP | **0.342** | **0.356** | **0.362** | **0.369** | **0.362** | **0.362** | **0.362** | **0.334** |
| HIE | 0.356 | 0.381 | 0.389 | 0.396 | 0.394 | 0.392 | 0.389 | 0.345 |
| DRNets | _0.345_ | _0.359_ | _0.366_ | _0.373_ | _0.368_ | _0.367_ | 0.369 | _0.337_ |
| VCNets | 0.347 | 0.362 | 0.367 | 0.374 | 0.369 | 0.368 | 0.369 | 0.341 |

Table 4: Brier score (BS) on Richards curve, per bias strength \(\theta\) and for the true bids. n.a.: not applicable.

Having adopted a semisynthetic set of experiments, we want to highlight the assumptions and limitations of our work: in our data simulation (cf. Section 4.1), we purposefully satisfy both overlap and unconfoundedness, two assumptions that are deemed critical for the application of causal inference. Both assumptions might be violated in real-life data. We currently assume that the decision to accept a certain bid is a function of only the observed characteristics \(\mathbf{X}\) and the bid \(B\). This does not hold in reality, where hidden confounding is likely to occur, as explained in Section 3.1 and prominently discussed in previous studies on endogeneity in pricing (cf. Section 2). We see two interlinked components: first, customers compare bid offers obtained from different banks, which are typically not observed. Second, some available data are not permitted to be used for business decision making, for example, under Regulation 2016/679 of the European Parliament\({}^{4}\) (GDPR). Finally, our experiments do not consider any temporal component in the training data. If economic sentiment changes and average credit rates decrease, the bid response \(\mu(b,\mathbf{x})\) will, ceteris paribus, certainly shift, but possibly change in form as well.
Footnote 4: Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) [2016] OJ L 119/1.
We conclude with an outlook on future work. First, we believe that developing and testing causal machine learning methods for business processes such as pricing is crucial but has not yet been tackled in the established literature. Specifically, we call for the development of methods that learn an individual probability estimate as an effect on a continuously valued intervention from binary training data. This research might advance the application of causal machine learning in many other fields, such as risk assessment and policy evaluation. Second, further research should evaluate to what extent violations of the assumptions in our study impact the performance of causal methods and to what extent assumptions such as overlap and unconfoundedness are violated in practice, potentially requiring the development of alternative solutions. Such research might have critical importance to the adoption of causal machine learning in practice. We link this stream of future research to Bertsimas and Kallus (2022) and the notion that for operational decision-making, a true, unbiased causal model (by the standards of meeting all requirements discussed in Section 3.3) might not always be needed. Third, we highlight the need for research in identifying the strength of selection bias in data. While counterfactual data will never be available for modeling, the strength of selection bias could be assessed. Such a methodology would be able to inform decision makers on whether a causal approach must be considered. Finally, our performance metrics rely on relative performance per customer and do not take absolute costs and revenues into account. We believe that pricing would therefore be a promising application of cost-sensitive approaches in causal inference, such as introduced in Verbeke et al. (2020) and Verbeke et al. (2023).
## Acknowledgements
The authors want to thank AXA Bank Belgium, Crelan, and the involved project team who have supported this project throughout by providing feedback, know-how, and access to the data used for this research project.
This work was supported by the FWO research project G015020N.
## Conflict of interest
The authors state no conflict of interest. |
2309.13523 | LiDAR-UDA: Self-ensembling Through Time for Unsupervised LiDAR Domain
Adaptation | We introduce LiDAR-UDA, a novel two-stage self-training-based Unsupervised
Domain Adaptation (UDA) method for LiDAR segmentation. Existing self-training
methods use a model trained on labeled source data to generate pseudo labels
for target data and refine the predictions via fine-tuning the network on the
pseudo labels. These methods suffer from domain shifts caused by different
LiDAR sensor configurations in the source and target domains. We propose two
techniques to reduce sensor discrepancy and improve pseudo label quality: 1)
LiDAR beam subsampling, which simulates different LiDAR scanning patterns by
randomly dropping beams; 2) cross-frame ensembling, which exploits temporal
consistency of consecutive frames to generate more reliable pseudo labels. Our
method is simple, generalizable, and does not incur any extra inference cost.
We evaluate our method on several public LiDAR datasets and show that it
outperforms the state-of-the-art methods by more than $3.9\%$ mIoU on average
for all scenarios. Code will be available at
https://github.com/JHLee0513/LiDARUDA. | Amirreza Shaban, JoonHo Lee, Sanghun Jung, Xiangyun Meng, Byron Boots | 2023-09-24T02:02:00Z | http://arxiv.org/abs/2309.13523v1 | # LiDAR-UDA: Self-ensembling Through Time for Unsupervised LiDAR Domain Adaptation
###### Abstract
We introduce LiDAR-UDA, a novel two-stage self-training-based Unsupervised Domain Adaptation (UDA) method for LiDAR segmentation. Existing self-training methods use a model trained on labeled source data to generate pseudo labels for target data and refine the predictions via fine-tuning the network on the pseudo labels. These methods suffer from domain shifts caused by different LiDAR sensor configurations in the source and target domains. We propose two techniques to reduce sensor discrepancy and improve pseudo label quality: 1) LiDAR beam subsampling, which simulates different LiDAR scanning patterns by randomly dropping beams; 2) cross-frame ensembling, which exploits temporal consistency of consecutive frames to generate more reliable pseudo labels. Our method is simple, generalizable, and does not incur any extra inference cost. We evaluate our method on several public LiDAR datasets and show that it outperforms the state-of-the-art methods by more than \(3.9\%\) mIoU on average for all scenarios. Code will be available at [https://github.com/JHLee0513/LiDARUDA](https://github.com/JHLee0513/LiDARUDA).
## 1 Introduction
Modern approaches to perception for robotics and autonomous driving rely on supervised LiDAR segmentation methods that can accurately identify objects and scenes from 3D point clouds. These methods have advanced significantly thanks to large public datasets [2, 5, 35, 29] that enable the development of efficient and powerful deep neural networks [47, 8, 43], and they have inspired applications in other domains such as off-road navigation [25, 34], locomotion navigation [13], and construction site mapping [14]. However, supervised LiDAR segmentation often struggles to adapt to new domains (_i.e.,_ domains that the model is not trained on) due to distributional shifts [32] between source and target datasets. LiDAR perception poses a unique challenge in this regard because different sensor configurations (_e.g.,_ beam patterns, reflectivity estimates, mounting position, etc.) introduce significant distributional shifts [39]. To mitigate such concerns, several Unsupervised Domain Adaptation (UDA) approaches [21, 17, 44, 4] have been proposed, transferring the knowledge of a model trained on one domain to another without requiring additional labels. UDA plays an essential role in LiDAR segmentation since it relieves the necessity of an expensive and labor-intensive labeling process.
Domain adaptation methods based on _self-training_, which work by iteratively generating pseudo labels on target data and retraining the model with these labels, have achieved great success in reducing covariate shift in image-based semantic segmentation tasks [26, 48, 12, 1]. These self-training methods operate under the assumption that a model trained on source data yields mostly accurate predictions on at least a subset of the target dataset, enabling the model to adapt and refine its predictions iteratively through fine-tuning over the pseudo labels. However, in the case of LiDAR segmentation, the beam pattern gap between different LiDAR sensors prevents the source model from predicting reasonably good pseudo labels in the target domain for initializing the self-training approach.
To overcome this gap between the source and target datasets, we propose a simple yet effective structured point cloud subsampling method that simulates different LiDAR beam patterns. Specifically, we randomly subsample rows in the range image [27] of a high-beam LiDAR sensor to simulate low-beam LiDAR sensors. Additionally, we propose _cross-frame ensembling_, a temporal ensembling module, to ensure consistency of pseudo labels across LiDAR scans within each sequence. Cross-frame ensembling aggregates predictions from multiple scans and uses nearest neighbors to refine the pseudo labels. While we could simply calculate the average with uniform weights, this method ignores the temporal (i.e., time from the reference scan) and spatial variations (i.e., distance to sensor origin for each scan) of points captured by the LiDAR sensor when aggregating multiple scans. We address this issue by training a Learned Aggregation Model (LAM) that resembles graph convolution [38, 40]. LAM learns how to aggregate pseudo
labels within a sequence and to weigh the label of each point according to its importance. Adopting this approach eliminates the need for ad-hoc heuristics to deal with special cases such as moving objects [44, 21].
We show that the combination of proposed modules achieves state-of-the-art performance in domain adaptation scenarios for urban and off-road driving. Moreover, our framework is applicable to off-the-shelf LiDAR segmentation networks since it does not require any architectural modifications or impose additional computational costs during inference. In contrast to previous work [21, 44] that uses aggregated LiDAR scans within a sequence as a dense and sensor-agnostic representation for the segmentation network, our approach maintains the sparsity of the point cloud during the network forward pass. This characteristic enables us to use state-of-the-art network architectures that favor sparse convolutions for efficient LiDAR segmentation.
## 2 Related Work
**LiDAR Semantic Segmentation** LiDAR semantic segmentation is a fundamental capability for scene understanding in autonomous driving and robotics. In recent years, deep learning methods have achieved remarkable results on LiDAR segmentation, thanks to several large-scale datasets and benchmarks, such as nuScenes [5], SemanticKITTI [2], and SemanticPOSS [29]. Approaches to LiDAR segmentation can be broadly classified into point-based [27, 16], image-based [11, 41], sparse voxel-based [47], and hybrid [36] categories. Despite remarkable progress, existing methods still face challenges in generalizing to different datasets due to two factors: 1) different datasets have different semantic classes and geometric feature distributions, depending on the environments where they were collected. 2) LiDAR sensors have different mounting positions and produce different beam patterns. Therefore, models trained on one dataset may not perform well on another dataset [44, 17, 21]. This limitation restricts the practical applicability of LiDAR-based segmentation methods because labeling LiDAR points is costly and time-consuming.
**Domain Adaptation for LiDAR** There has been an increasing interest in developing domain adaptation techniques to improve the generalization ability of LiDAR perception models across different LiDAR sensors and environments. Domain adaptation methods for LiDAR can be grouped into three categories, which we describe here. 1) _Learning domain-invariant representation_. These methods transform the source and target domain point clouds into a common representation that is independent of sensor characteristics. The common representation can be a 3D mesh [21, 44] or a bird's eye view projection [31]. Notably, Complete & Label [44] learns to complete 3D surfaces from sparse LiDAR scans using a sensor-specific network, and then applies a segmentation network on the completed surfaces. However, this method requires a simplified segmentation network to handle the dense point clouds and also needs to remove moving objects from the common representation using heuristic methods [44] or manual annotations [21]. We use a data-driven approach to decide how different semantic classes should be aggregated. 2) _Learning domain-invariant features_. These methods align or adapt the feature representations of source and target domains using various techniques, such as feature alignment [41, 28], adversarial training [17], multi-task learning [33], and graph matching [4]. These methods do not modify the input point clouds but learn to extract features that are robust to domain variations. 3) _Domain transfer_. These methods explicitly model the difference between source and target domains and apply it to transfer one domain to another. For example, some methods learn a noise model from real data and add it to synthetic data to make it more realistic [5]. Our method applies LiDAR beam subsampling to reduce the domain gap without needing heuristics on which rows to drop and instead uses a random selection scheme.
**Self-training** We leverage self-training, a semi-supervised learning technique [20, 37] that has been successfully applied for unsupervised domain adaptation in the image domain [1, 49, 23], but has not been extensively explored for LiDAR domain adaptation. Self-training, also known as teacher-student training or self-ensembling, iteratively trains a model on a mixture of labeled source data and pseudo labeled target data, where the pseudo labels are generated by the model itself on unlabeled target data. However, since the pseudo labels may be noisy or inaccurate, self-training often requires some regularization strategies to improve their quality and reliability, such as class balancing [49], adversarial pre-training [42], and uncertainty estimation [45]. In our approach, we adopt a teacher-student paradigm and use a data-driven aggregation scheme that selectively aggregates the pseudo labels within each sequence as a regularizer. This further enhances the performance of our model on the target domain.
## 3 Method
### Definitions and Framework Overview
We consider the problem of point cloud semantic segmentation in a domain adaptation setting. Let \(\mathcal{S}\) and \(\mathcal{T}\) denote the source and target datasets, respectively. Each element in the source dataset consists of a tuple \((\mathcal{P},\mathsf{I})\), where \(\mathcal{P}\in\mathbb{R}^{P\times 3}\) represents a 3D point cloud and \(\mathsf{I}\in\{0,1\}^{P\times K}\) denotes the corresponding one-hot semantic labels with \(K\) classes. In contrast to the source dataset, the target dataset only contains unlabeled point clouds. We address the closed-set adaptation problem [39], where both the source and target domains share the same semantic classes. Our objective is to train a model on the labeled source
dataset and unlabeled target point clouds, and then evaluate it on a held-out target set using ground-truth annotations. Our approach employs a two-stage self-ensembling strategy to learn a performant segmentation model for the target dataset. A LiDAR segmentation model \(F_{\theta}:\mathbb{R}^{P\times 3}\rightarrow\mathbb{R}^{P\times K}\) is first trained on the labeled source dataset, and then adapted to the target dataset. The source model training and domain adaption stages are detailed next.
### Source Model Training
We train the source model using standard supervised learning, which enables our framework to be applied to any generic LiDAR segmentation model. To facilitate generalization to target domains with fewer LiDAR beams than the source domain, we apply _structured point cloud subsampling_ along with conventional data augmentations during training. Specifically, we subsample the point cloud on the _range image_, which is created by spherical mapping of a LiDAR scan into a 2D image [27]. The image is represented as \(I_{\mathcal{P}}\in\mathbb{R}^{H\times W\times 3}\), where \(H\) and \(W\) are the height and width of the projected image. The mapping from the 3D point \(\mathbf{p}=(x,y,z)\) to the image coordinate \((u,v)\) is defined as
\[\left(\begin{array}{c}u\\ v\end{array}\right)=\left(\begin{array}{c}\frac{1}{2}\left[1-\arctan(y,x) \pi^{-1}\right]W\\ \left[1-\left(\arcsin\left(z/||\mathbf{p}||_{2}\right)+f_{\text{down}}\right) f^{-1}\right]H\end{array}\right),\]
where \(f_{\text{down}}\) and \(f\) denote the lower and the full vertical LiDAR field-of-view, respectively. Then, we randomly drop all the points on a horizontal line with probability \(1-\min(1,r)\), where \(r=n_{\text{target}}/n_{\text{source}}\) and \(n_{\text{target}},n_{\text{source}}\) are the number of laser beams in the target and source datasets, respectively. In cases where the target LiDAR has more beams than the source dataset, we do not apply subsampling to the source dataset. Instead, we address the domain gap by subsampling the target point cloud during the domain adaptation stage in Section 3.3.1. As demonstrated in Figure 1, row subsampling effectively simulates a LiDAR scan with fewer laser beams.
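A sketch of this structured subsampling step is given below. All names and the field-of-view values (chosen for an HDL-64E-like sensor) are our own assumptions; only the row index \(v\) of the spherical projection is needed to drop whole beams.

```python
import numpy as np

def beam_subsample(points, H=64, fov_up=np.radians(3.0), fov_down=np.radians(25.0),
                   r=0.5, rng=None):
    """Drop whole range-image rows with probability 1 - min(1, r) to simulate
    a LiDAR with fewer beams.

    points: (P, 3) xyz array; H: number of beams in the source sensor."""
    rng = rng or np.random.default_rng()
    depth = np.linalg.norm(points, axis=1)
    pitch = np.arcsin(points[:, 2] / np.maximum(depth, 1e-8))
    # Row index v of the spherical projection (the column index u is not needed).
    v = ((1.0 - (pitch + fov_down) / (fov_up + fov_down)) * H).astype(int)
    v = np.clip(v, 0, H - 1)
    keep_row = rng.random(H) < min(1.0, r)  # keep each beam with probability min(1, r)
    return points[keep_row[v]]
```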
### Target Domain Adaptation
The domain adaptation stage is an iterative process where each iteration involves generating pseudo labels using a teacher model (_i.e.,_ label generation step), and training a student network with given pseudo labels (_i.e.,_ training step). The adaptation allows for multiple iterations, where any additional iterations after the initial source-to-target adaptation may be viewed as further refinements within the target domain.
Figure 2 summarizes the domain adaptation stage. We employ within-frame and cross-frame ensembling techniques to enhance the quality of the pseudo labels and improve the training of the student model. In the initial source-to-target adaptation iteration, we employ within-frame subsampling (Section 3.3.1) to subsample high-beam target scans so that they match the low-beam source model. We further enhance the pseudo labels by aggregating predictions within each sequence using the cross-frame ensembling described in Section 3.3.2.
The student network is randomly initialized and trained with aggressive data augmentation to enforce consistent predictions across different augmentations for domain adaptation, following a common practice in previous work [1]. While recent literature [1, 46] utilizes a momentum network as a teacher, we adopt a fixed teacher model that allows us to pre-compute the pseudo labels at the beginning of each domain adaptation iteration and reduce the training time significantly.
Figure 1: Comparison of a) an original SemanticKITTI LiDAR scan with 64 beams, b) the same scan row-subsampled by randomly dropping range-image rows with probability 0.5, and c) a nuScenes LiDAR scan with 32 beams. The figure shows that subsampling rows effectively simulates LiDAR scans with fewer laser beams.
Figure 2: Overview of the domain adaptation process. We first apply within-frame ensembling with the source network \(F_{\theta}(\cdot)\) to generate the pseudo labels. Subsequently, we apply cross-frame ensembling with the LAM module \(g_{\omega}(\cdot)\) to refine the initially generated pseudo labels. Then, we adapt the student network to the target domain by training it with the refined pseudo labels for a certain number of epochs and, finally, re-generate the pseudo labels from the trained student network. The cross-frame ensembling and adaptation steps are iterated multiple times.
#### 3.3.1 Within-frame Ensembling
We use within-frame ensembling when we have more beams in the target LiDAR than in the source LiDAR. As shown in Figure 3-a, this approach works as follows: First, we create a batch of randomly subsampled point clouds from an input LiDAR scan. Then, we use the source model to predict labels for each subsampled point cloud. Finally, we average the predictions across all subsampled point clouds to get the final prediction.
Let \(\mathcal{P}\) be an input point cloud. We generate a set of subsampled point clouds \(\mathbb{T}(\mathcal{P})=\{\tau_{i}(\mathcal{P})\}_{i=1}^{N_{s}}\), where \(N_{s}\) is the number of trials and \(\tau(\cdot)\) is a subsampling operation. We set \(\tau_{1}(\cdot)\) as an identity mapping to keep the original input point cloud and follow a similar approach as in Section 3.2, but we use the source-to-target ratio (\(1/r\)) to drop points within a row of the LiDAR image with a probability of \(1-\min(1,1/r)\). Then, we obtain a set of predictions from the pretrained network \(F_{\mathbf{\theta}}(\cdot)\) by computing \(\mathbb{P}(\mathcal{P})=\{F_{\mathbf{\theta}}(\tau_{i}(\mathcal{P}))\}_{i=1}^{N_{ s}}\). For each point in the original point cloud \(\mathbf{p}\in\mathcal{P}\), we compute its final prediction by averaging all the predictions associated with \(\mathbf{p}\) within the augmented point clouds \(\mathbb{P}(\mathcal{P})\). As the original point cloud is always included, every point appears at least once during aggregation.
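A PyTorch sketch of this averaging step follows; `model`, `subsample`, and `num_classes` are placeholders of our own, where `model` is assumed to return per-point class probabilities and `subsample` the indices of the points kept by a random beam subsampling.

```python
import torch

@torch.no_grad()
def within_frame_ensemble(points, model, subsample, num_classes, n_trials=3):
    """Average class probabilities over the original cloud and (n_trials - 1)
    randomly beam-subsampled copies.

    points: (P, 3) tensor; model(pts) -> (len(pts), num_classes) probabilities;
    subsample(points) -> LongTensor of kept point indices."""
    P = points.shape[0]
    probs_sum = torch.zeros(P, num_classes)
    counts = torch.zeros(P)
    for i in range(n_trials):
        # tau_1 is the identity mapping, so every point is covered at least once.
        idx = torch.arange(P) if i == 0 else subsample(points)
        probs_sum[idx] += model(points[idx])
        counts[idx] += 1
    return probs_sum / counts.unsqueeze(1)
```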
#### 3.3.2 Cross-frame Ensembling
Our cross-frame ensembling is illustrated in Figure 3b. To further refine the pseudo labels of individual scans, we utilize predictions on scans from both previous and subsequent timestamps. Given an input query scan \(\mathcal{P}_{t}\) at time \(t\), we aggregate scans from the past and future into a dense point cloud \(\mathcal{D}=\bigcup_{i=t-T}^{t+T}A_{i}(\mathcal{P}_{i})\), where \(A_{i}\) represents the transformation from index \(i\) to \(t\), and \(T\) controls the number of aggregated frames. Let \(\mathcal{N}(\mathbf{p})\subset\mathcal{D}\) denote the set of points that fall in the vicinity of \(\mathbf{p}\in\mathcal{P}_{t}\). We compute the enhanced class probability vector \(\tilde{\mathbf{v}}(\mathbf{p})\) as
\[\tilde{\mathbf{v}}(\mathbf{p})=\frac{1}{Z(\mathbf{p})}\sum_{\mathbf{p}^{ \prime}\in\mathcal{N}(\mathbf{p})}\kappa(\mathbf{p},\mathbf{p}^{\prime}) \mathbf{v}(\mathbf{p}^{\prime}), \tag{1}\]
where \(\mathbf{v}(\mathbf{p}^{\prime})\) is the single-scan pseudo label of \(\mathbf{p}^{\prime}\), \(\kappa:\mathbb{R}^{3}\times\mathbb{R}^{3}\rightarrow\mathbb{R}^{+}\) is a positive scoring function, and \(Z\) is a normalizer that ensures \(\tilde{\mathbf{v}}(\mathbf{p})\) remains a probability vector, i.e.,

\[Z(\mathbf{p})=\sum_{\mathbf{p}^{\prime}\in\mathcal{N}(\mathbf{p})}\kappa(\mathbf{p},\mathbf{p}^{\prime}). \tag{2}\]

Figure 3: Illustration of our within-frame and cross-frame ensembling modules. All the predictions in the figure are obtained from our nuScenes to SemanticKITTI experiment. a) Within-frame ensembling: we randomly select horizontal rows with a probability of \(1-\min(1,1/r)\) and drop all the points in the rows to simulate the different beam patterns of the target domain. To obtain more robust predictions, we apply this subsampling several times and average the predictions. b) Cross-frame ensembling: with the predictions obtained in step a), we temporally aggregate the point clouds and their predictions. Afterward, we find the nearest neighboring points within the \(\epsilon\)-ball and predict their summing weights using LAM. Finally, we obtain the refined pseudo labels by weight-averaging the pseudo labels of the neighboring points.
In our experiments, we compute \(\mathcal{N}(\mathbf{p})\) by finding the \(k\)-nearest neighbors and then selecting the subset that lies within an \(\epsilon\)-ball (i.e., a ball with radius \(\epsilon\)) centered on \(\mathbf{p}\). This approach guarantees that the point count remains under a specific threshold, and all points are within the \(\epsilon\)-ball surrounding \(\mathbf{p}\).
We can set \(\kappa(\mathbf{p},\mathbf{p}^{\prime})\) to a constant value to obtain the standard KNN algorithm, but this method is more suitable for static objects. Assigning the same weight to all the points in \(\mathcal{N}(\mathbf{p})\) overlooks their semantic classes, their distances from the query point \(\mathbf{p}\), and the fact that these points are captured at different times and distances from the LiDAR sensor.
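Before turning to learned weights, the uniform baseline can be sketched directly from Equations 1 and 2 with constant \(\kappa\); we assume the aggregated point cloud and its single-scan pseudo labels have been precomputed, and all names are ours.

```python
import numpy as np
from scipy.spatial import cKDTree

def refine_labels_uniform(query_pts, dense_pts, dense_probs, k=60, eps=0.2):
    """Equations 1 and 2 with constant kappa: average the single-scan pseudo
    labels of the k nearest neighbors that lie within an eps-ball.

    query_pts: (P, 3); dense_pts: (N, 3) aggregated cloud; dense_probs: (N, K)."""
    tree = cKDTree(dense_pts)
    dists, idx = tree.query(query_pts, k=k)
    refined = np.zeros((len(query_pts), dense_probs.shape[1]))
    for i, (d_row, n_row) in enumerate(zip(dists, idx)):
        valid = n_row[d_row <= eps]  # epsilon-ball filtering
        if len(valid) == 0:
            valid = n_row[:1]  # fall back to the single nearest neighbor
        refined[i] = dense_probs[valid].mean(axis=0)
    return refined
```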
We further improve the quality of refined labels by learning an attention model for label aggregation, which we refer to as the Learned Aggregation Model (LAM). To account for the various sources of aggregation error, LAM considers not only the Euclidean distance between input points \(\mathbf{p}\) and \(\mathbf{p}^{\prime}\), but also their single scan pseudo labels \(\mathbf{v}(\mathbf{p})\) and \(\mathbf{v}(\mathbf{p}^{\prime})\), the temporal offset between \(\mathbf{p}\) and \(\mathbf{p}^{\prime}\), and the distance of \(\mathbf{p}^{\prime}\) to its sensor origin. For ease of notation, we use \(\Phi(\mathbf{p},\mathbf{p}^{\prime})\in\mathbb{R}^{D}\) to denote the feature vector that concatenates each of these factors. Then, we use a fully connected network \(g_{\omega}:\mathbb{R}^{D}\rightarrow\mathbb{R}\) to predict an attention score for each feature vector. Finally, enhanced pseudo labels are obtained via attention by setting \(\kappa(\mathbf{p},\mathbf{p}^{\prime})=\exp(g_{\omega}(\Phi(\mathbf{p}, \mathbf{p}^{\prime})))\) in Equation (1).
To train LAM, we use the source model predictions and aforementioned features as inputs, while supervision is provided through the source dataset ground-truth labels. We first compute the single-scan pseudo label \(\mathbf{v}(\mathbf{p})\) for each point within the source dataset using the source model network \(F_{\theta}(\cdot)\). Next, we construct the dense point cloud by aggregating scans and pre-compute the nearest neighbors \(\mathcal{N}(\mathbf{p})\) for all the points to speed up the training. During training, we estimate the enhanced labels using Equation (1) and use a combination of multi-class cross-entropy and Lovasz-Softmax loss [3] to update model parameters \(\omega\).
The LAM model \(g_{\omega}\), shown in Figure 4, consists of an input standardization layer and \(3\) fully connected hidden layers, each followed by batch-norm and ReLU layers. The statistics (i.e., mean and variance) collected by the standardization layer are fit to the source domain and hence are not optimal for adaptation. Therefore, we _modulate_ the statistics by updating the layer with statistics acquired from the target domain. Since the statistics are collected from input point features, including semantic pseudo labels, this does not involve acquiring any privileged information from the target domain. We find that updating the statistics in the first layer serves a similar purpose to the Adaptive Batch Normalization (ABN) domain adaptation method [24], which adapts the batch-norm statistics across the network.
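The PyTorch sketch below mirrors this architecture with the channel sizes reported in the implementation section (32, 64, and 128); the input dimension depends on the number of classes, and normalizing \(\exp(g_{\omega}(\cdot))\) over each neighborhood, i.e., a softmax, implements Equations 1 and 2 with \(\kappa=\exp(g_{\omega}(\Phi(\mathbf{p},\mathbf{p}^{\prime})))\).

```python
import torch
import torch.nn as nn

class LAM(nn.Module):
    """Sketch of the Learned Aggregation Model g_omega (Figure 4).

    Input features Phi(p, p') concatenate the Euclidean distance, the two
    single-scan pseudo labels, the temporal offset, and the distance of p'
    to its sensor origin; in_dim therefore depends on the number of classes."""

    def __init__(self, in_dim):
        super().__init__()
        self.standardize = nn.BatchNorm1d(in_dim, affine=False)  # input standardization
        layers, dims = [], [in_dim, 32, 64, 128]
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            layers += [nn.Linear(d_in, d_out), nn.BatchNorm1d(d_out), nn.ReLU()]
        self.mlp = nn.Sequential(*layers, nn.Linear(128, 1))

    def forward(self, feats):  # feats: (M, in_dim), one row per neighbor pair
        return self.mlp(self.standardize(feats)).squeeze(-1)  # attention logits

# For one query point, the kappa-normalized weights over its neighborhood:
# weights = torch.softmax(lam(neighbor_feats), dim=0)
```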
## 4 Experiments
### Experimental Setup
We compare our method against prior domain adaptation methods on publicly available LiDAR segmentation datasets using the mean Intersection over Union (mIoU) metric. In particular, our main experiment is split into two tracks: 1) between SemanticKITTI [2] and nuScenes [5]; 2) between SemanticKITTI [2], SemanticPOSS [29], and SemanticUSL [17]. These tracks demonstrate that our method is superior to prior work in the presence of environmental shifts, sensor configuration shifts, and different sets of semantic classes.
### Implementation Details
As illustrated in Figure 4, the LAM architecture has three fully-connected layers which consist of 32, 64, and 128 channels, respectively. We train LAM on the same train/validation split as the source model. However, for nuScenes [5] totaling around 400K sweeps, in comparison to SemanticKITTI [2] (\(\sim\)20K), SemanticUSL [17](\(\sim\)18K), and SemanticPOSS [29](\(\sim\)3K), running cross-frame ensembling on the entire set of sequences is costly in terms of both memory and storage. Therefore, when using the nuScenes dataset to train LAM, we subsample the training data while validation is kept unmodified. We randomly select 210 sequences from the nuScenes training set, out of the total 700 sequences. The list of these 210 sequences will be included in the future release of the code for reproducibility purposes.
We use the PyTorch [30] library for our training code, and we implement the sparse 3D convolutions with the Spconv [10] library. Our cross-frame ensembling aggregates 60 frames with a stride of 3 for efficient but dense coverage of the scene. The student model is trained for 25 epochs using the Adam [19] optimizer with a learning rate of \(10^{-3}\) and with the other optimization hyperparameters set to their defaults.
Figure 4: Architecture of our Learned Aggregation Model (LAM) module \(g_{\omega}(\cdot)\). Note that \(\mathbf{f}\) denotes \(\mathbf{f}=\mathbf{\Phi}(\mathbf{p},\mathbf{p}^{\prime})\). We first apply the batch normalization layer to effectively address the differences in the statistics of the source and target domains.
We pre-compute the neighbors \(\mathcal{N}(\mathbf{p})\) as well as the model predictions for label generation, which significantly reduces the pre-processing time during student model training. We use the Faiss [18] library to run a nearest neighbor search to find the 60 nearest neighbors of each query point. We employ \(\epsilon=0.2\,\mathrm{m}\) radius filtering of the found nearest neighbors exclusively in our SemanticKITTI and nuScenes experiments, since such radius filtering does not yield any benefits in the other experiments. We maintain a fixed-size neighbor set for all points via zero padding.
We train the source model with 4 types of data augmentation. We apply 1) random rotation around the z-axis with an angle sampled from \([-45^{\circ},45^{\circ}]\), 2) flip augmentation by randomly flipping \(x\) and/or \(y\) coordinates, 3) random scaling with a value sampled from \([0.95,1.05]\), and 4) random translation with zero-mean Gaussian noise and a standard deviation of 0.1. We refer to this setting as the _basic augmentation_ scheme.
As shown in the ablation study, the student model with self-ensembles performs better when trained with a more aggressive, _intense augmentation_ scheme. In this setting, we use the same set of augmentations but increase the range of values. Specifically, we increase the random scale to \([0.9,1.1]\) and set the standard deviation for random translation to 0.5.
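A sketch of the two augmentation schemes is given below; whether the translation noise is applied globally or per point is not specified, so we assume a single global offset, and the function name is our own.

```python
import numpy as np

def augment(points, intense=False, rng=None):
    """Basic vs. intense augmentation: random z-rotation, axis flips,
    global scaling, and Gaussian translation noise."""
    rng = rng or np.random.default_rng()
    pts = points.copy()
    theta = rng.uniform(-np.pi / 4, np.pi / 4)  # rotation angle in [-45, 45] degrees
    c, s = np.cos(theta), np.sin(theta)
    pts[:, :2] = pts[:, :2] @ np.array([[c, -s], [s, c]]).T
    for axis in (0, 1):  # randomly flip x and/or y coordinates
        if rng.random() < 0.5:
            pts[:, axis] *= -1.0
    lo, hi = (0.9, 1.1) if intense else (0.95, 1.05)
    pts *= rng.uniform(lo, hi)  # random global scaling
    sigma = 0.5 if intense else 0.1
    pts += rng.normal(0.0, sigma, size=3)  # zero-mean Gaussian translation
    return pts
```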
### Comparisons on SemanticKITTI and nuScenes
SemanticKITTI and nuScenes datasets focus on urban driving, and hence their semantic labels are specific to the urban roads, including but not limited to road surface, sidewalk, pedestrian, car, etc. For this experiment, we adopt the results from four prior works as the baseline. The baseline methods and our method all use the MinkowskiNet [9] architecture to make fair comparisons.
The SemanticKITTI and nuScenes datasets present challenges from significant sensor configuration shifts, collected with a 64-beam HDL-64E and with a 32-beam VLP-32, respectively. Additionally, the sensor from nuScenes is facing the right side of the vehicle, while the sensor from SemanticKITTI is facing forward, resulting in a \(-90^{\circ}\) rotation difference between them around the \(z\)-axis. From minor differences such as axis rotation, sensor height, and viewpoint angle, to major variances such as beam pattern and resolution, the domain gap in sensor configuration makes these adaptation scenarios highly challenging.
As shown in Table 1, the source models without any DA experience severe degradation in target domain performance. In stark contrast, our method improves over the source model by 14.1% mIoU and 10.9% mIoU in the two scenarios, respectively. We also compare our method with two state-of-the-art methods: Complete & Label [44], which uses aggregated LiDAR scans as a dense representation, and Graph Matching [4], which uses graph-based feature extraction to align local features across domains. Our method surpasses both by a large margin. As we elaborate in Section 4.5, our success stems from our structural point cloud subsampling augmentation, cross-frame ensembling, and LAM.
### Comparisons on SemanticKITTI, SemanticPOSS, and SemanticUSL
We additionally evaluate our method on adaptation scenarios using SemanticKITTI and SemanticPOSS as the source domains, and SemanticKITTI, SemanticPOSS, and SemanticUSL as the target domains. In contrast to the semanticKITTI/nuScenes scenarios, the corresponding three datasets have relatively similar sensor configurations. Instead, the primary domain gap lies in shifts caused by the different environments since SemanticKITTI is collected strictly from on-road scenarios while SemanticPOSS is collected on the campus area, and SemanticUSL is collected on both campus and off-road testing sites.
We compare our method to LiDARNet [17], the current state-of-the-art in UDA for LiDAR segmentation across the SemanticKITTI, SemanticUSL, and SemanticPOSS datasets. LiDARNet adopts the SalsaNext [11] architecture backbone and employs adversarial training for adaptation. The SalsaNext backbone utilizes LiDAR intensities in conjunction with the 3D point cloud. However, we have found that the LiDAR intensity domain gap adversely affects the quality of the pseudo labels generated for the target dataset by our source model. Consequently, we choose not to use LiDAR intensities when training the source model and generating pseudo labels for the target set during the first iteration of adaptation. However, we do employ LiDAR intensities in subsequent adaptation steps when training the student model and generating pseudo labels.
As shown in Table 2, our method outperforms LiDARNet by 3.2% mIoU and 3.9% mIoU in the SemanticKITTI to SemanticUSL and SemanticKITTI to SemanticPOSS domain adaptation tasks, respectively. When using SemanticPOSS as the source domain, our method outperforms LiDARNet by 5.7% mIoU on SemanticKITTI and 6.1% mIoU on SemanticUSL.

| Source | Target | Method | mIoU (%) \(\uparrow\) |
| --- | --- | --- | --- |
| KITTI | KITTI | Source | 45.80 |
| KITTI | NUS | Source | 27.75 |
| KITTI | NUS | SqueezeSegV2\({}^{*}\) [41] | |
| KITTI | NUS | SWD\({}^{*}\) [22] | 27.70 |
| KITTI | NUS | Complete & Label [44] | 31.60 |
| KITTI | NUS | Graph Matching [4] | 37.30 |
| KITTI | NUS | **LiDAR-UDA** | **41.84** |
| NUS | NUS | Source | 50.72 |
| NUS | KITTI | Source | 23.17 |
| NUS | KITTI | SqueezeSegV2\({}^{*}\) [41] | 13.40 |
| NUS | KITTI | SWD\({}^{*}\) [22] | 24.50 |
| NUS | KITTI | Complete & Label [44] | 33.70 |
| NUS | KITTI | **LiDAR-UDA** | **34.04** |

Table 1: Comparison of methods for SemanticKITTI (KITTI) and nuScenes (NUS) datasets. MinkowskiNet [9] architecture is adopted for all methods. \({}^{*}\) Results from [44]
The aforementioned intensity domain gap becomes apparent when comparing the LiDARNet [17] model, trained with intensity values as input, with our _Source (Ours)_ model, trained using the MinkowskiNet architecture without intensity. Despite the SalsaNext model outperforming our MinkowskiNet architecture on the source domains, our source model demonstrates significantly better performance on the target domains, revealing a domain gap in the LiDAR intensities. Further details can be found in the appendix.
### Ablation Study
In this section, we analyze the individual contributions made by our proposed modules, namely the structural point cloud subsampling, within-frame / cross-frame ensembling, and LAM. Additionally, we explore the applicability of LiDAR-UDA to other point cloud segmentation architectures and self-training strategies.
**Structural Point Cloud Subsampling** In the upper section of Table 3, we compare the nuScenes pseudo labels obtained from source models trained on SemanticKITTI using various subsampling methods. With a target-to-source ratio of \(r=0.5\), our method (Random) drops each row in the range image with a \(50\%\) chance. This simulates diverse LiDAR patterns and significantly reduces the domain gap, resulting in an \(8.9\%\) improvement in mIoU compared to when no subsampling is applied. On the other hand, regular subsampling, which drops every other row in the LiDAR image, is not as robust to the variability of LiDAR patterns (second row).
**Cross-frame Ensembling** We also compare the impact of cross-frame ensembling using constant weights (Uniform) and LAM in the lower section of Table 3. The table shows that uniform aggregation boosts the performance by \(3.3\%\) in mIoU, and LAM further enhances it by \(2.2\%\) over the uniform method. Therefore, Table 3 demonstrates that both point cloud subsampling and attention-based cross-frame aggregation are essential for improving the adaptation performance without making specific assumptions on the sensor shift.
To further demonstrate the advantage of LAM, we show the confusion matrices of Uniform and LAM in Figure 5, where we compare their performance on static (road, terrain, trunk, etc.) and dynamic (car, pedestrian, bicycle, etc.) semantic label classes. We notice that both models perform similarly on static objects, but LAM has significantly fewer false negatives on dynamic objects (_i.e.,_ dynamic objects predicted as one of the static classes) than Uniform. This could be explained by the fact that static objects have a higher density in the aggregated point cloud, and uniform weight assignment is potentially biased towards the objects with higher density.
| Source | Target | Method | Person | Rider | Car | Trunk | Vegetation | Sign | Pole | Object | Building | Fence | Bike | Ground | mIoU |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| KITTI | KITTI | Source (LiDARNet) | 62.09 | 74.21 | 93.59 | 61.15 | 91.11 | 37.99 | 57.94 | 50.36 | 84.82 | 54.64 | 15.48 | 94.13 | 64.79 |
| KITTI | KITTI | Source (Ours) | 42.06 | 69.11 | 94.89 | 60.53 | 85.58 | 31.86 | 59.00 | 39.94 | 88.50 | 47.58 | 9.49 | 94.34 | 60.24 |
| KITTI | USL | Source | 33.90 | 0.00 | 27.45 | 10.68 | 36.89 | 16.20 | 12.72 | 5.68 | 41.61 | 3.55 | **31.60** | 75.95 | 24.69 |
| KITTI | USL | CyCADA | 0.38 | 0.00 | 28.70 | 13.83 | 57.11 | 20.70 | 23.83 | 3.78 | 53.14 | 22.30 | 9.24 | 72.36 | 25.45 |
| KITTI | USL | LiDARNet | 33.17 | 0.00 | 67.75 | 38.95 | **85.60** | 49.43 | 43.44 | 8.94 | **72.86** | **44.06** | 23.07 | **93.18** | 46.75 |
| KITTI | USL | Source | 42.00 | 0.00 | 69.18 | 35.08 | 82.94 | 6.80 | 43.41 | 13.23 | 68.02 | 42.75 | 2.13 | 92.45 | 41.51 |
| KITTI | USL | **LiDAR-UDA** | **49.50** | 0.00 | **78.42** | **53.78** | 85.47 | **58.56** | **59.97** | **19.42** | 69.70 | 32.16 | 0.00 | 92.70 | **49.97** |
| KITTI | POSS | Source | 22.77 | 1.78 | 35.91 | 16.86 | 39.84 | 7.08 | 9.73 | 0.18 | 57.03 | 1.64 | **18.17** | 41.99 | 21.08 |
| KITTI | POSS | CyCADA | 0.00 | 0.00 | 0.00 | 1.45 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.12 |
| KITTI | POSS | LiDARNet | 31.39 | **23.98** | **70.78** | 21.43 | 60.68 | **9.59** | 17.48 | **4.97** | **79.53** | 12.57 | 0.78 | **82.41** | 34.63 |
| KITTI | POSS | Source | 31.76 | 9.07 | 46.81 | 22.69 | 60.61 | 0.05 | 26.51 | 2.51 | 70.87 | 23.3 | 1.05 | 75.06 | 30.86 |
| KITTI | POSS | **LiDAR-UDA** | **65.59** | 2.19 | 64.12 | **27.49** | **65.40** | 6.44 | **36.57** | 4.19 | 75.21 | **40.31** | 0.00 | 75.06 | **38.55** |
| POSS | POSS | Source (LiDARNet) | 64.47 | 48.25 | 85.77 | 29.71 | 62.71 | 27.29 | 38.19 | 0.87 | 84.90 | 48.50 | 65.56 | 72.56 | 53.00 |
| POSS | POSS | Source (Ours) | 73.65 | 34.18 | 69.48 | 27.58 | 71.11 | 28.24 | 24.43 | 13.90 | 79.71 | 44.11 | 47.73 | 78.93 | 49.42 |
| POSS | KITTI | Source | 5.20 | 0.50 | 22.57 | 0.54 | 44.00 | 1.90 | 12.83 | 0.08 | 43.09 | 0.70 | 0.40 | 5.62 | 11.45 |
| POSS | KITTI | LiDARNet | **23.64** | 24.86 | 71.31 | 23.67 | 72.38 | 4.17 | 31.28 | **2.48** | 59.41 | 0.36 | 0.53 | 68.68 | 32.06 |
| POSS | KITTI | Source | 14.17 | **48.21** | 63.57 | 18.93 | 65.43 | 1.63 | 9.47 | 0.16 | 55.75 | 1.06 | 0.52 | 84.07 | 30.25 |
| POSS | KITTI | **LiDAR-UDA** | 11.65 | 39.1 | **83.17** | **27.46** | **76.95** | **8.60** | **33.14** | 0.28 | **66.43** | **1.73** | **1.82** | **92.95** | **37.77** |
| POSS | USL | Source | 2.45 | 0.00 | 16.15 | 1.21 | 27.94 | 1.34 | 4.52 | 0.62 | 44.37 | 0.12 | 1.16 | 8.05 | 8.99 |
| POSS | USL | LiDARNet | 30.38 | 0.00 | 45.73 | 28.69 | 63.08 | **22.29** | **33.92** | 4.12 | 63.70 | 1.89 | 9.42 | 77.49 | 31.73 |
| POSS | USL | Source | 35.07 | 0.00 | 51.99 | 27.51 | 75.84 | 13.36 | 25.98 | 0.05 | 65.47 | 1.10 | **11.21** | **90.95** | 31.18 |
| POSS | USL | **LiDAR-UDA** | **45.49** | 0.00 | **61.74** | **31.37** | **82.67** | 15.30 | 25.19 | **15.87** | **68.10** | **8.20** | 8.19 | **92.30** | **37.87** |

Table 2: Comparison of methods for SemanticKITTI (KITTI), SemanticPOSS (POSS), and SemanticUSL (USL) datasets. MinkowskiNet [9] architecture is adopted for LiDAR-UDA, while CyCADA [15] and LiDARNet adopted the LiDARNet architecture [17], a boundary-aware variant of SalsaNext [11].

**Within-frame Ensembling** We examine the effect of using the original point cloud and different numbers of subsampled point clouds in within-frame ensembles in Table 4. We observe consistent improvement when using a larger number of ensembles, which effectively bridges the gap between different beam patterns. Additionally, comparing the second and third rows reveals the importance of having the original point cloud in the ensemble. We used two random samples alongside the original point cloud in Section 4.3 to balance time complexity and performance.
**Effects of LAM & Data Augmentation on Student** Table 5 compares student models trained with 1) single-scan pseudo labels, i.e., directly adapting the model to the target domain with pseudo labels from the source model, 2) the basic data augmentation scheme with LAM, and 3) our intense augmentation with LAM.
Using LAM yields significant gains over using the single scan pseudo labels with an improvement of 4.3% mIoU. Meanwhile, the performance gains of 1.2% mIoU also align with the general understanding that applying stronger augmentation on the student model in a self-training or semi-supervised training framework is beneficial [1, 6, 7]. Lastly, we also observe that the basic data augmentation scheme reduces performance in the single scan pseudo label scenario.
**Applicability to Other Architectures** To demonstrate the model-agnostic nature of our proposed framework, we test integrating Cylinder3D [47] into LiDAR-UDA. Cylinder3D utilizes asymmetric cylindrical 3D convolutions, resulting in superior performance compared to MinkowskiNet [9]. While maintaining the experimental setup outlined in Section 4.3, we opt to use constant weights (Uniform) for this experiment, avoiding the time taken to train LAM. The results in Table 6 demonstrate significant improvements for LiDAR-UDA over the source model: 16.1% mIoU improvement for SemanticKITTI \(\rightarrow\) nuScenes and 14.5% mIoU improvement for nuScenes \(\rightarrow\) SemanticKITTI. Furthermore, when compared to the results in Table 1, using Cylinder3D yields significant improvements, including over our own method using MinkowskiNet.
**LiDAR-UDA with Class-Balanced Self-Training** While our method achieves state-of-the-art performance in the SemanticKITTI to SemanticPOSS adaptation experiment, we observe a reduction in the IoU of the rider and bike classes. We hypothesize that this drop in performance is due to class imbalance, as the rider and bike classes account for only approximately 0.5% and 5% of the entire dataset, respectively. To test this hypothesis, we employ CBST [48], a class-balanced self-training framework that avoids the dominance of large classes in pseudo label generation by performing class-wise confidence normalization and selecting a portion of pseudo labels with higher confidence.
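A minimal sketch of the class-wise pseudo-label selection at the heart of CBST, under our own simplified reading of its Algorithm 2 [48]; the quantile-based thresholding and all names are assumptions of this sketch.

```python
import numpy as np

def cbst_select(probs, p=0.2):
    """Keep, for each predicted class, only the top-p fraction of points by
    confidence; the rest stay unlabeled (-1). This prevents large classes
    from dominating the pseudo labels."""
    conf = probs.max(axis=1)
    pred = probs.argmax(axis=1)
    pseudo = np.full(len(pred), -1, dtype=np.int64)
    for c in np.unique(pred):
        mask = pred == c
        thr = np.quantile(conf[mask], 1.0 - p)   # class-wise threshold
        pseudo[mask & (conf >= thr)] = c
    return pseudo
```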
\begin{table}
\begin{tabular}{c|c|c|c} \hline \hline Source & Target & Method & mIoU (\%) \(\uparrow\) \\ \hline \hline \multirow{3}{*}{KITTI} & KITTI & Source & 61.62 \\ \cline{2-4} & \multirow{2}{*}{NUS} & Source & 32.72 \\ & & **LiDAR-UDA** & **48.79** \\ \hline \multirow{3}{*}{NUS} & NUS & Source & 74.70 \\ \cline{2-4} & \multirow{2}{*}{KITTI} & Source & 32.05 \\ & & **LiDAR-UDA** & **46.58** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Comparison of methods for SemanticKITTI (KITTI) and nuScenes (NUS) datasets. Cylinder3D [47] architecture is adopted for all methods.
\begin{table}
\begin{tabular}{c|c} \hline \hline Method & mIoU (\%) \(\uparrow\) \\ \hline input & 23.17 \\ 2 x random & 23.39 \\ input + 2 x random & 25.50 \\ input + 4 x random & 26.13 \\ input + 8 x random & 26.59 \\ input + 16 x random & 26.84 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparison of the within-frame ensembling on the target SemanticKITTI domain for the source model trained on the nuScenes dataset. No adaptation is applied to the model for this comparison. We use MinkowskiNet [9] architecture for all methods.
Figure 5: Normalized confusion matrices for static and dynamic classes in SemanticKITTI \(\rightarrow\) nuScenes DA experiment. The original row-normalized 10x10 histogram matrix is condensed into a 2x2 matrix by grouping the static and dynamic classes. The results show that LAM outperforms standard uniform weights (Uniform) in predicting dynamic objects.
\begin{table}
\begin{tabular}{c|c|c} \hline \hline & Method & mIoU (\%) \(\uparrow\) \\ \hline \hline \multirow{2}{*}{Effects of LAM} & Single-scan + Intense Aug. & 37.52 \\ & LAM + Intense Aug. (Ours in Table 1) & 41.84 \\ \hline \multirow{4}{*}{Effects of Augmentation} & Single-scan + Basic Aug. & 37.36 \\ & Single-scan + Intense Aug. & 37.52 \\ & LAM + Basic Aug. & 40.61 \\ & LAM + Intense Aug. (Ours in Table 1) & 41.84 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Comparison of student models for SemanticKITTI \(\rightarrow\) nuScenes DA. The single-scan method trains the student model directly from source model pseudo labels without any ensembling. The source model used by all the methods in the table is trained with structural point cloud subsampling.
Table 7 demonstrates that using CBST with LAM improves the IoU for minor classes, such as rider, traffic sign, pole, and bike. However, this improvement also results in a slight decrease in the overall mIoU. The optimal performance is achieved by setting the pseudo label portion to \(p=20\%\). Please refer to Algorithm 2 of CBST [48] for further details on the implemented CBST framework.
Our ensembling and LiDAR augmentation techniques are applicable to various self-training strategies. Thus, exploring different self-training and class-balancing approaches offers promising avenues for future research in LiDAR unsupervised domain adaptation.
### Qualitative Evaluation
Figure 6 shows the effectiveness of our method with a set of examples comparing the base model prediction, uniform weight aggregation, and our method with LAM. We circle the areas of noticeable improvement, where we are able to observe that LAM correctly aggregates model predictions with regard to the geometric and temporal information of the points to segment pedestrians, which are either missed by the base model or washed out when the predicted semantic classes are uniformly aggregated.
## 5 Conclusion
We present LiDAR-UDA, a novel unsupervised domain adaptation framework for LiDAR segmentation. Based on self-training, the framework enables the transfer of model knowledge from the labeled source domain to the unlabeled target domain. Using structural LiDAR point cloud subsampling that reduces the geometric structural gap between the source and target domain input, and cross-frame ensembling that regularizes the self-training, LiDAR-UDA offers an efficient, model-agnostic adaptation method. We demonstrate the effectiveness of our method by surpassing the current state-of-the-art UDA methods on various publicly available LiDAR segmentation datasets. We hope this paper lays a foundation for further exploration of self-training methods for domain adaptation in LiDAR perception.
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline Classes & Source & LiDAR-UDA & + CBST (\(p=0.2\)) & + CBST (\(p=0.5\)) \\ \hline Person & 31.76 & **65.59** & 40.01 & 29.28 \\ Rider & 9.07 & 2.19 & **19.89** & 11.72 \\ Car & 46.81 & **64.12** & 51.55 & 60.84 \\ Trunk & 22.69 & **27.49** & 25.73 & 27.41 \\ Vegetation & 60.61 & **65.40** & 59.77 & 63.75 \\ Traffic-sign & 0.05 & 6.44 & **21.07** & 4.47 \\ Pole & 26.51 & 36.57 & 37.31 & **39.22** \\ Object & 2.51 & **4.19** & 1.31 & 1.95 \\ Building & 70.87 & **75.21** & 64.58 & 72.90 \\ Fence & 23.30 & **40.31** & 28.17 & 31.50 \\ Bike & 1.05 & 0.00 & **19.94** & 0.27 \\ Ground & 75.06 & 75.06 & 75.73 & **77.34** \\ \hline mIoU & 30.86 & **38.55** & 36.34 & 35.05 \\ \hline \end{tabular}
\end{table}
Table 7: Comparison of source, LiDAR-UDA, and LiDAR-UDA + CBST methods on the SemanticKITTI to SemanticPOSS adaptation scenario. CBST denotes class-balanced self-training from Zou _et al._[48].
Figure 6: Visualization of two example frames from the held-out target domain data for the SemanticKITTI \(\rightarrow\) nuScenes adaptation scenario. We compare the ground truth against pseudo labels from the base model (single scan), the cross-frame ensembling using uniform weights, and LAM. We circle the specific points of interest, where we see a noticeable improvement in segmenting small objects or sparse parts of a scene with LAM compared to other methods. Note that the unlabeled points are colored in black in the ground truth. |
2309.04739 | Data Augmentation for Conversational AI | Advancements in conversational systems have revolutionized information
access, surpassing the limitations of single queries. However, developing
dialogue systems requires a large amount of training data, which is a challenge
in low-resource domains and languages. Traditional data collection methods like
crowd-sourcing are labor-intensive and time-consuming, making them ineffective
in this context. Data augmentation (DA) is an effective approach to alleviate
the data scarcity problem in conversational systems. This tutorial provides a
comprehensive and up-to-date overview of DA approaches in the context of
conversational systems. It highlights recent advances in conversation
augmentation, open domain and task-oriented conversation generation, and
different paradigms of evaluating these models. We also discuss current
challenges and future directions in order to help researchers and practitioners
to further advance the field in this area. | Heydar Soudani, Evangelos Kanoulas, Faegheh Hasibi | 2023-09-09T09:56:35Z | http://arxiv.org/abs/2309.04739v2 | # Data Augmentation for Conversational AI
###### Abstract.
Advancements in conversational systems have revolutionized information access, surpassing the limitations of single queries. However, developing dialogue systems requires a large amount of training data, which is a challenge in low-resource domains and languages. Traditional data collection methods like crowd-sourcing are labor-intensive and time-consuming, making them ineffective in this context. Data augmentation (DA) is an effective approach to alleviate the data scarcity problem in conversational systems. This tutorial provides a comprehensive and up-to-date overview of DA approaches in the context of conversational systems. It highlights recent advances in conversation augmentation, open domain and task-oriented conversation generation, and different paradigms of evaluating these models. We also discuss current challenges and future directions in order to help researchers and practitioners to further advance the field in this area.
Data Augmentation, Dataset Creation, Conversation Generation, Conversational AI
* Conversational Information Seeking: Theory and Application (Beng et al., 2019) in SIGIR 2022. The tutorial aims to provide an introduction to Conversational Information Seeking (CIS), covering recent advanced topics and state-of-the-art approaches.
* Self-Supervised Learning for Recommendation (Krishnan et al., 2022) in CIKM 2022. This tutorial aims to present a systematic review of state-of-the-art self-supervised learning (SSL)-based recommender systems.
* Limited Data Learning (Zhu et al., 2022) in ACL 2022. This tutorial offers an overview of methods alleviating the need for labeled data in NLP, including DA and semi-supervised learning.
* Proactive Conversational Agents (Zhu et al., 2022) in WSDM 2023. This tutorial introduces methods to equip conversational agents with the ability to interact with end users proactively.
Unlike previous tutorials that focus on either conversational systems or the data scarcity problem, this tutorial provides an in-depth exploration of the challenges associated with augmenting and creating conversational data, highlighting the unique difficulties posed in conversational context. To the best of our knowledge, no tutorial has specifically focused on dataset creation techniques for dialogue systems.
_Target audience and prerequisites._ This tutorial is designed for professionals, students, and researchers in information retrieval and natural language processing, specifically interested in conversational AI. Familiarity with machine learning, deep learning, and transformers is required. No prior knowledge of dialogue system models or data augmentation methods is necessary.
_Tutorial material._ Tutorial slides, a collection of essential references, and other support materials can be found at the tutorial website [https://dataug.convai.github.io](https://dataug.convai.github.io).
## 2. Tutorial Outline
We plan to give a half-day tutorial (three hours). The tutorial starts with an introduction, followed by three main sessions. We plan to have a short Q&A after each session and conclude the tutorial with a discussion on evaluation, future direction, and a final Q&A session.
### Agenda
A tentative schedule of the tutorial is as follows.
1. Introduction (**10 min**)
   1.1 Conversational Systems
   1.2 Problem of Data Scarcity
   1.3 Data Augmentation
2. Conversation Augmentation (**30 min**)
   2.1 Generic Token-level & Sentence-level Augmentation
   2.2 Dialogue Data Augmentation
3. Conversation Generation: Open Domain (**80 min**)
   3.1 Single-turn QA Pair Generation
   3.2 Multi-turn Dialogue Generation
   3.3 Topic-aware Dialogue Agent
4. Conversation Generation: Task-oriented (**40 min**)
   4.1 Schema-guided Generation
   4.2 Simulator-Agent Interaction
   4.3 E2E Dataset Creation
5. Evaluation (**10 min**)
6. Conclusion and Future Direction (**10 min**)
### Content
_Introduction._ We start by introducing the audience to the basics of conversational systems, including TOD and ODD systems. We provide an overview of the components and concepts associated with TOD and ODD systems, ensuring that participants can grasp the necessary knowledge to follow the tutorial independently. We further discuss the data scarcity problem in creating dialogue systems, particularly in low-resource domains and languages and give an introduction to the proposed techniques to tackle this issue. Given that dialogue datasets may not be available for all languages and domains, we discuss dialogue generation methods that leverage external resources such as unstructured text files, knowledge graphs, and Large Language Models (LLMs).
_Conversation Augmentation._ We proceed by providing an overview of existing works in conversation augmentation. Augmentation techniques have demonstrated effectiveness in various NLP tasks, involving the creation of new samples through modifications of existing ones. However, augmenting dialogue data requires precision due to the interconnected nature of multi-turn user-bot utterances, presenting additional challenges. Within augmentation methods, there are two broad categories applicable to text-based tasks: token-based (Beng et al., 2019; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019; Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014) and sentence-based (Beng et al., 2019; Chen et al., 2019; Krizhevsky et al., 2014) approaches. These categories involve the replacement of original tokens or sentences with relevant alternatives. We discuss specific techniques that have been proposed to generate new dialogue samples for both TOD (Chen et al., 2019; Chen et al., 2019; Chen et al., 2019; Zhu et al., 2022) and ODD (Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014) systems. These models employ generative models, RL-based models, counterfactual learning, or user dialogue act augmentation, offering new avenues to generate dialogue samples and further enriching the available training data for dialogue systems.
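As a toy illustration of the token-level category, a dictionary-based synonym replacement can be sketched as follows; the synonym table and the function are purely illustrative and do not correspond to any specific method cited above.

```python
import random

def synonym_replace(tokens, synonyms, n=1, seed=0):
    """Replace up to n tokens that have entries in a synonym dictionary,
    producing a new utterance for augmentation."""
    rng = random.Random(seed)
    out = list(tokens)
    candidates = [i for i, t in enumerate(out) if t in synonyms]
    rng.shuffle(candidates)
    for i in candidates[:n]:
        out[i] = rng.choice(synonyms[out[i]])
    return out

syns = {"book": ["reserve"], "table": ["spot"]}
print(synonym_replace("please book a table for two".split(), syns, n=2))
```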
_Conversation Generation: Open Domain._ In this part of our tutorial, we focus on the methods available for generating dialogue samples for an ODD system. The pipeline approach, initially introduced for synthetic QA pair generation (Chen et al., 2019), is one way to generate ODD samples. This method consists of four sequential stages: passage selection, answer extraction, question generation, and a subsequent filtering process to maintain quality of generated QA pairs. Building upon the successful generation of QA pairs (Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014), researchers have extended the pipeline approach to generate complete conversation samples, addressing challenges such as sub-passage selection, flow consistency, coreference alignment, and handling different question types (Chen et al., 2019; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019; Krizhevsky et al., 2014; Krizhevsky et al., 2014). However, a major limitation of the pipeline approach is that the conversation's flow is primarily determined by the passage's flow. This means that a passage is initially divided into multiple chunks, and each turn of the conversation is generated based on its corresponding chunk. To achieve more control over the conversation flow, one potential solution involves generating a multi-turn conversation along a path of entities or keywords extracted from a knowledge graph (KG). Based on this idea, various tasks have been defined to connect the initial entity or sentence to the target entity. One such task is the one-turn topic transition, which generates a
"bridging" utterance to connect the newly introduced topic to the previous turn's topic (Kang et al., 2018; Liu et al., 2020; Liu et al., 2020). Additionally, we introduce target-oriented dialogue systems, where models actively guide conversations towards predefined target topics, ensuring smooth transitions and progress towards the desired targets (Wang et al., 2019; Liu et al., 2020; Liu et al., 2020; Liu et al., 2020). Furthermore, we explore goal-directed dialogue planning strategies that empower the dialogue system to embrace a discourse-level perspective, taking into account the overarching objective of the conversation, with the aim of generating a response that aligns with it (Liu et al., 2020; Liu et al., 2020; Liu et al., 2020; Liu et al., 2020).
_Conversation Generation: Task-oriented._ We next discuss the conversation generation methods for TOD systems. In such systems, the primary objective of the dialogue system is to understand the user's intent throughout a multi-turn conversation and subsequently provide relevant suggestions to assist the user in achieving their goal. However, accurately capturing the essential information from the user's utterances to ensure successful task completion requires meticulous attention and domain expertise (Liu et al., 2020).
We begin by introducing schema-guided generation methods (Liu et al., 2020; Liu et al., 2020), which leverage self-play models to generate dialogues. The generated dialogues are then annotated and filtered using crowdsourcing techniques. Another approach focuses on enhancing user simulator models, which simulate user behavior and engage in conversations with dialogue systems to generate dialogues for training and evaluation (Liu et al., 2020; Liu et al., 2020; Liu et al., 2020; Liu et al., 2020; Liu et al., 2020; Liu et al., 2020). Improvements in these simulators contribute to overall enhancements in dialogue system performance and their ability to handle diverse user inputs and scenarios. Lastly, we explore end-to-end approaches that aim to directly generate dialogue without explicitly defining intermediate steps or modules (Liu et al., 2020; Liu et al., 2020; Liu et al., 2020; Liu et al., 2020; Liu et al., 2020). End-to-end models offer the advantage of encapsulating the entire dialogue generation process within a single model, simplifying both training and inference procedures.
_Evaluation._ After discussing dialogue data creation methods, we turn to evaluating the quality of these data. The evaluation process encompasses two levels: turn-level evaluation and global-level evaluation (Liu et al., 2020; Liu et al., 2020; Liu et al., 2020; Liu et al., 2020). At the turn-level, the system's response is compared to the ground-truth response, and this evaluation primarily relies on automatic metrics. Moving to the global-level evaluation, the aim is to assess the overall conversation quality by considering characteristics such as naturalness, coherence, answerability, and success rate in achieving targets. This evaluation level involves generating conversation samples through interactions between the dialogue system and a user simulator or a human, followed by scoring the entire conversation sample. Alternatively, the generated data can be used to train downstream tasks (Liu et al., 2020; Liu et al., 2020; Liu et al., 2020), and the resulting improvements in performance can be measured. We thoroughly discuss the advantages and disadvantages of these evaluation methods, considering their suitability for different scenarios.
_Conclusion and Future Direction._ We conclude the tutorial with an exploration of open research problems and future directions in the field.
## 3. Presenter Biography
Heydar Soudani is a first-year Ph.D. student at Radboud University's Institute of Computing and Information Sciences (iCIS), where he is being supervised jointly by Faegheh Hasibi and Evangelos Kanoulas. He holds a Bachelor's degree from Polytechnic of Tehran and a Master's degree from Sharif University of Technology. His research primarily focuses on conversational systems in low-resource domains and languages. Specifically, he is dedicated to the development of knowledge-grounded models that generate synthetic multi-turn conversation data.
Faegheh Hasibi is an assistant professor of information retrieval at the Institute of Computing and Information Sciences (iCIS) at Radboud University. Her research interests are at the intersection of Information Retrieval and Natural Language Processing, with a particular emphasis on conversational AI and semantic search systems. She explores various aspects, including knowledge-grounded conversational search, entity linking and retrieval, and the utilization of knowledge graphs for semantic search tasks. Her contributions to the field are published in renowned international conferences such as SIGIR, CIKM, COLING, and ICTIR and have been recognized by awards at the SIGIR and ICTIR conferences. She has given multiple invited talks and has extensive experience as a lecturer.
Evangelos Kanoulas is a full professor of computer science at the University of Amsterdam, leading the Information Retrieval Lab at the Informatics Institute. His research lies in developing evaluation methods and algorithms for search, and recommendation, with a focus on learning robust models of language that can be used to understand noisy human language, retrieve textual data from large corpora, generate faithful and factual text, and converse with the user. Prior to joining the University of Amsterdam, he was a research scientist at Google and a Marie Curie fellow at the University of Sheffield. His research has been published at SIGIR, CIKM, WWW, WSDM, EMNLP, ACL, and other venues in the fields of IR and NLP. He has proposed and organized numerous search benchmarking competitions as part of the Text Retrieval Conference (TREC) and the Conference and Labs of the Evaluation Forum (CLEF). Furthermore, he is a member of the Ellis society ([https://ellis.eu/](https://ellis.eu/)).
|
2307.16691 | Number of ordered factorizations and recursive divisors | The number of ordered factorizations and the number of recursive divisors are
two related arithmetic functions that are recursively defined. But it is hard
to construct explicit representations of these functions. Taking advantage of
their recursive definition and a geometric interpretation, we derive three
closed-form expressions for them both. These expressions shed light on the
structure of these functions and their number-theoretic properties.
Surprisingly, both functions can be expressed as simple generalized
hypergeometric functions. | T. M. A. Fink | 2023-07-31T14:03:32Z | http://arxiv.org/abs/2307.16691v1 | # Number of ordered factorizations and recursive divisors
###### Abstract.
The number of ordered factorizations and the number of recursive divisors are two related arithmetic functions that are recursively defined. But it is hard to construct explicit representations of these functions. Taking advantage of their recursive definition and a geometric interpretation, we derive three closed-form expressions for them both. These expressions shed light on the structure of these functions and their number-theoretic properties. Surprisingly, both functions can be expressed as simple generalized hypergeometric functions.
## 1. Introduction
This paper is devoted to two related arithmetic functions that are recursively defined. The first is the number of ordered factorizations into integers greater than one. The second is the number of recursive divisors, which measures the extent to which a number \(n\) is highly divisible, whose quotients are highly divisible, and so on.
The first function was introduced 90 years ago by Kalmar [1], and for this reason is called \(K(n)\). For example, \(K(8)=4\), since \(8\) can be factorized in \(4\) ways: \(8=2\cdot 4=4\cdot 2=2\cdot 2\cdot 2\). Other values of \(K(n)\) are given in Table 1. Hille [2] extended Kalmar's results and gave them prominence. Canfield et al. [3] and Deleglise et al. [4] studied the indices of sequence records of \(K\), that is, values of \(n\) for which \(K(n)>K(m)\) for all \(m<n\). Newburg and Naor [5] showed that \(K\) arises in computational biology, in the so-called probed partial digest problem, which prompted Chor et al. [6] to study the upper bound of \(K\). This bound was improved by Klazar and Luca [7], who also considered arithmetic properties of the function.
\(K(n)\) can be defined recursively as follows. Let \(n=m\,d\), where \(m>1\). All of the factorizations of \(n\) that begin with \(m\) can be obtained by counting the ordered factorizations of \(d\), of which there are \(K(d)\). Thus we obtain the recursion relation \(K(n)=\sum_{d\lfloor n}K(d)\), where \(d\lfloor n\) means \(d|n\) and \(d<n\). Along with the initial condition \(K(1)=1\), this completely determines \(K(n)\). This, at least, is how \(K\) was originally defined [2]. But it is much more fruitful to embed the initial condition into the recursion relation itself:
\[K(n)=\varepsilon+\sum_{d\lfloor n}K(d), \tag{1}\]
where \(\varepsilon=\lfloor 1/n\rfloor=1,0,0,0,\ldots\). As we shall see, this lets us manipulate the defining equation without having to keep track of the corresponding initial condition.
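To make the recursion concrete, here is a minimal Python sketch of Eq. (1); the function names are ours, and the brute-force divisor enumeration is only intended for small \(n\).

```python
from functools import lru_cache

def proper_divisors(n):
    # divisors d of n with d < n
    return [d for d in range(1, n) if n % d == 0]

@lru_cache(maxsize=None)
def K(n):
    # Eq. (1): the epsilon term contributes 1 only at n = 1
    return (1 if n == 1 else 0) + sum(K(d) for d in proper_divisors(n))

print([K(n) for n in range(1, 13)])  # [1, 1, 1, 2, 1, 3, 1, 4, 2, 3, 1, 8]
```

In particular \(K(8)=4\), matching the four ordered factorizations of \(8\) listed above.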
In contrast with \(K(n)\), the second function \(\kappa_{0}(n)\) is much more recent [8, 9]. It counts the number of recursive divisors:
\[\kappa_{0}(n)=1+\sum_{d\lfloor n}\kappa_{0}(d). \tag{2}\]
For example, \(\kappa_{0}(8)=1+\kappa_{0}(1)+\kappa_{0}(2)+\kappa_{0}(4)=8\), and other values of \(\kappa_{0}\) are given in Table 1. \(\kappa_{0}(n)\) is the simplest case of the more general
\[\kappa_{x}(n)=n^{x}+\sum_{d\lfloor n}\kappa_{x}(d),\]
which was introduced as a recursive analogue of the usual divisor function \(\sigma_{x}(n)\)[8]. Like \(K(n)\), \(\kappa_{0}(n)\) depends only on the prime signature of \(n\)--though this is not the case for other values of \(x\). The analogy with \(\sigma_{x}(n)\) motivated the study of recursively perfect numbers (\(\kappa_{0}(n)=n\)) and recursively abundant numbers (\(\kappa_{0}(n)>n\)) [9].
The two sequences \(K\) and \(\kappa_{0}\) are intimately related. In particular, as we showed in [8], for \(n\geq 2\), \(\kappa_{0}(n)=2K(n)\). Furthermore,
\[\kappa_{0}(n)=\sum_{d\mid n}K(d).\]
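Continuing the sketch above, Eq. (2) and both relations can be checked numerically:

```python
@lru_cache(maxsize=None)
def kappa0(n):
    # Eq. (2): the leading 1 counts n itself as a recursive divisor
    return 1 + sum(kappa0(d) for d in proper_divisors(n))

# kappa0(n) = 2 K(n) for n >= 2, and kappa0 is K summed over all divisors
assert all(kappa0(n) == 2 * K(n) for n in range(2, 500))
assert all(kappa0(n) == K(n) + sum(K(d) for d in proper_divisors(n))
           for n in range(1, 500))
```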
Both \(\kappa_{0}\) and \(K\) have a geometric interpretation: \(\kappa_{0}\) is the number of squares in the divisor tree of \(n\), whereas \(K\) is the number of squares of size \(1\) in the divisor tree of \(n\) (see Fig. 1E). The divisor tree is constructed as follows. Starting with a square of side length \(n\) (Fig. 1A), the main arm of the divisor tree is made up of smaller squares with side lengths equal to the proper divisors of \(n\) (Fig. 1B). For each square in the main arm, a secondary arm is made up of squares with side lengths equal to that square's proper divisors (Fig. 1C). The process is repeated, creating sub-arms off of sub-arms, until the last sub-arms are of size \(1\) (Fig. 1E).
A few words on notation. As we mentioned, \(m\lfloor n\) means \(m|n\) and \(m<n\), that is, \(m\) is a proper divisor of \(n\). We denote the Dirichlet series of an arithmetic function \(f\) by \(\widetilde{f}\) and the Dirichlet convolution of two arithmetic functions \(f\) and \(g\) by
\[f\star g=\sum_{d\mid n}f(d)\,g\left(\frac{n}{d}\right).\]
Figure 1. **Divisor trees.** Both \(\kappa_{0}(n)\) and \(K(n)\) have a geometric interpretation. The number of recursive divisors \(\kappa_{0}(n)\) is the number of squares in the divisor tree of \(n\) (E), whereas the number of ordered factorizations \(K(n)\) is the number of squares of size \(1\) (orange squares in E). As the divisor tree is built up over successive generations, from generation \(i=0\) (A) to \(i=\Omega(36)=4\) (E), the number of squares increases by \(\upsilon_{i+1}\). For \(n=36\), the values of \(\upsilon_{i}\) are \(\upsilon_{1}=1\), \(\upsilon_{2}=8\), \(\upsilon_{3}=19\), \(\upsilon_{4}=18\) and \(\upsilon_{5}=6\). Then \(\kappa_{0}(36)=\upsilon_{1}+\ldots+\upsilon_{5}=52\). As expected, \(K(36)=\kappa_{0}(36)/2=26\).
## 2. Statement of results
In this paper, we give three closed form representations of \(\kappa_{0}=2K\) for arbitrary \(n\). As well as making it easier to compute both functions, these representations also shed light on their structure and their number-theoretic properties.
Let
\[n=p_{1}^{\alpha_{1}}p_{2}^{\alpha_{2}}\ldots p_{\omega}^{\alpha_{\omega}}\]
be the prime factorization of \(n\) and let
\[\Omega=\alpha_{1}+\alpha_{2}+\ldots+\alpha_{\omega}.\]
**Theorem 1**.: \[\kappa_{0}(n)=2K(n)=\frac{1}{2}\sum_{i=0}^{\infty}\frac{1}{2^{i}}\prod_{k=1}^{\omega}\binom{\alpha_{k}+i}{\alpha_{k}}=\tfrac{1}{2}\,{}_{\omega}F_{\omega-1}\!\left[\begin{matrix}\alpha_{1}+1,\ldots,\alpha_{\omega}+1\\ 1,\ldots,1\end{matrix};\tfrac{1}{2}\right],\]
where \({}_{\omega}F_{\omega-1}\) is the generalized hypergeometric function.
**Theorem 2**.: \[\kappa_{0}(n)=2K(n)=\sum_{i=0}^{\Omega}\sum_{j=0}^{i}(-1)^{i-j}\binom{i}{j} \prod_{k=1}^{\omega}\binom{\alpha_{k}+j}{\alpha_{k}}.\]
**Conjecture 1**.: \[\kappa_{0}(n)=2K(n)=2^{\alpha_{\omega}}\sum_{i_{1}=0}^{\alpha_{1}}\ldots\sum _{i_{\omega-1}=0}^{\alpha_{\omega-1}}\prod_{k=1}^{\omega-1}\binom{\alpha_{k}}{ i_{k}}\binom{\alpha_{k+1}+i_{1}+\ldots+i_{k}}{\alpha_{k+1}}.\]
We tested the conjecture for \(n\in[1,10^{5}]\) and it is correct in all cases.
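Both theorems and the conjecture are easy to check numerically against the recursive definition. The sketch below (reusing `kappa0` from the earlier sketch) truncates the infinite series of Theorem 1, whose terms decay like \(i^{\Omega}/2^{i}\); everything else is exact integer arithmetic.

```python
import math
import itertools

def signature(n):
    # prime signature (alpha_1, ..., alpha_omega) by trial division
    alphas, p = [], 2
    while n > 1:
        a = 0
        while n % p == 0:
            n //= p
            a += 1
        if a:
            alphas.append(a)
        p += 1
    return tuple(alphas)

def theorem1(alphas, terms=300):
    # truncated series; the tail is negligible for moderate Omega
    return sum(math.prod(math.comb(a + i, a) for a in alphas) / 2 ** i
               for i in range(terms)) / 2

def theorem2(alphas):
    Omega = sum(alphas)
    return sum((-1) ** (i - j) * math.comb(i, j)
               * math.prod(math.comb(a + j, a) for a in alphas)
               for i in range(Omega + 1) for j in range(i + 1))

def conjecture1(alphas):
    *head, last = alphas
    total = 0
    for idx in itertools.product(*(range(a + 1) for a in head)):
        term, run = 1, 0
        for k, i in enumerate(idx):
            run += i
            term *= math.comb(head[k], i) * math.comb(alphas[k + 1] + run,
                                                      alphas[k + 1])
        total += term
    return 2 ** last * total

for n in range(2, 300):
    sig = signature(n)
    assert theorem2(sig) == conjecture1(sig) == kappa0(n)
    assert math.isclose(theorem1(sig), kappa0(n))
```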
**Example.** The easiest way to gain some intuition for these three different expressions is to consider an example: \(n=p_{1}^{\alpha_{1}}p_{2}^{\alpha_{2}}p_{3}^{\alpha_{3}}p_{4}^{\alpha_{4}}.\) Then \(\omega=4\) and \(\Omega=\alpha_{1}+\alpha_{2}+\alpha_{3}+\alpha_{4}.\) The two theorems and the conjecture give
\[\kappa_{0}(n) =\frac{1}{2}\sum_{i=0}^{\infty}\frac{1}{2^{i}}\binom{\alpha_{1}+i }{\alpha_{1}}\binom{\alpha_{2}+i}{\alpha_{2}}\binom{\alpha_{3}+i}{\alpha_{3}} \binom{\alpha_{4}+i}{\alpha_{4}}\] \[\kappa_{0}(n) =\sum_{i=0}^{\Omega}\sum_{j=0}^{i}(-1)^{i-j}\binom{i}{j}\binom{ \alpha_{1}+j}{\alpha_{1}}\binom{\alpha_{2}+j}{\alpha_{2}}\binom{\alpha_{3}+j }{\alpha_{3}}\binom{\alpha_{4}+j}{\alpha_{4}}\] \[\kappa_{0}(n) =2^{\alpha_{4}}\sum_{i_{1}=0}^{\alpha_{1}}\sum_{i_{2}=0}^{\alpha _{2}}\sum_{i_{3}=0}^{\alpha_{3}}\binom{\alpha_{1}}{i_{1}}\binom{\alpha_{2}}{i _{2}}\binom{\alpha_{3}}{i_{3}}\binom{\alpha_{2}+i_{1}}{\alpha_{2}}\binom{ \alpha_{3}+i_{1}+i_{2}}{\alpha_{3}}\binom{\alpha_{4}+i_{1}+i_{2}+i_{3}}{ \alpha_{4}}.\]
## 3. Proof of Theorem 1
It is convenient to rewrite (2) as
\[2\kappa_{0}(n)=1+\sum_{d|n}\kappa_{0}(d),\]
where now the sum is over all the divisors of \(n\) rather than just the proper divisors.
We can express this in the language of Dirichlet convolutions:
\[\kappa_{0}=(\mathbf{1}+\mathbf{1}\star\kappa_{0})/2,\]
where \(\mathbf{1}\) is the all \(1\)s sequence, \(1,1,1,\ldots\). Iterating this recursive identity leads to the infinite series
\[\kappa_{0}=\frac{\mathbf{1}}{2}+\frac{\mathbf{1}\star\mathbf{1}}{2^{2}}+\frac{ \mathbf{1}\star\mathbf{1}\star\mathbf{1}}{2^{3}}+\ldots. \tag{3}\]
We can rewrite this as
\[\kappa_{0}(n)=\frac{\tau_{1}(n)}{2}+\frac{\tau_{2}(n)}{2^{2}}+\frac{\tau_{3}(n )}{2^{3}}+\ldots, \tag{4}\]
where \(\tau_{1}=1,1,1,\ldots\) and
\[\tau_{i}(n)=\sum_{d|n}\tau_{i-1}(d).\]
Values of \(\tau_{1}\) to \(\tau_{4}\) are shown in Table 1. These quantities have a natural interpretation: \(\tau_{2}\equiv d\) is the number of divisors of \(n\); \(\tau_{3}\) is the number of divisors of the divisors of \(n\); and so on. It is well known that
\[\tau_{2}\equiv d=\prod_{k=1}^{\omega}(\alpha_{k}+1),\]
and in general
\[\tau_{i}=\prod_{k=1}^{\omega}\binom{\alpha_{k}+i-1}{i-1}. \tag{5}\]
Substituting this into (4), we obtain the desired result, namely,
\[\kappa_{0}(n) =\sum_{i=1}^{\infty}\frac{1}{2^{i}}\prod_{k=1}^{\omega}\binom{ \alpha_{k}+i-1}{i-1}\] \[=\frac{1}{2}\sum_{i=0}^{\infty}\frac{1}{2^{i}}\prod_{k=1}^{\omega }\binom{\alpha_{k}+i}{i}.\]
This can be expressed as a generalized hypergeometric function:
\[\kappa_{0}(n)=\tfrac{1}{2}\,{}_{\omega}F_{\omega-1}\!\left[\begin{matrix}\alpha_{1}+1,\ldots,\alpha_{\omega}+1\\ 1,\ldots,1\end{matrix};\tfrac{1}{2}\right].\qed\]
## 4. Proof of Theorem 2
We know that \(\kappa_{0}(n)\) is the total number of squares in the divisor tree of \(n\) (Fig. 1E). Let's consider the construction of a divisor tree, generation by generation, as shown in Fig. 1. Let \(\upsilon_{1}(n)=1\) be the number of squares in the root of the tree, namely, the single largest square (Fig 1A). Let \(\upsilon_{2}(n)\) be the number of squares in the main arm of the tree, not including the root (Fig 1B). Let \(\upsilon_{3}(n)\) be the number of squares in the secondary arms, not including their roots (Fig. 1C), and so on. The quantity \(\upsilon_{i}(n)\) has a natural interpretation: \(\upsilon_{2}(n)\) is the number of proper divisors of \(n\), \(\upsilon_{3}(n)\) is the number of proper divisors of the proper divisors of \(n\), and so on. As with \(\tau_{i}(n)\), we have \(\upsilon_{1}=1,1,1,\ldots\) and
\[\upsilon_{i}(n)=\sum_{d\lfloor n}\upsilon_{i-1}(d). \tag{6}\]
Values of \(\upsilon_{1}\) to \(\upsilon_{4}\) are shown in Table 1. We can then express \(\kappa_{0}(n)\) as the sum of the \(\upsilon_{i}\) over the root and the \(\Omega(n)\) generations of arms:
\[\kappa_{0}(n)=\sum_{i=0}^{\Omega}\upsilon_{i+1}. \tag{7}\]
The \(\upsilon_{k}\) are related to the \(\tau_{k}\) as follows:
\[\upsilon_{1} =\tau_{1}\] \[\upsilon_{2} =\tau_{2}-\tau_{1}\] \[\upsilon_{3} =\tau_{3}-2\tau_{2}+\tau_{1}\] \[\upsilon_{4} =\tau_{4}-3\tau_{3}+3\tau_{2}-\tau_{1},\]
and in general, by the principle of inclusion and exclusion,
\[\upsilon_{i}=\sum_{j=0}^{i-1}(-1)^{i-1-j}\binom{i-1}{j}\tau_{j+1}.\]
Substituting this into (7),
\[\kappa_{0}(n)=\sum_{i=0}^{\Omega}\sum_{j=0}^{i}(-1)^{i-j}\binom{i}{j}\tau_{j+1}.\]
Substituting \(\tau_{i}\) from (5) into this, we obtain the desired result:
\[\kappa_{0}(n)=\sum_{i=0}^{\Omega}\sum_{j=0}^{i}(-1)^{i-j}\binom{i}{j}\prod_{k =1}^{\omega}\binom{\alpha_{k}+j}{j}.\qed\]
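As a numerical check of Eqs. (6) and (7), the following sketch (reusing `proper_divisors` from the earlier sketch) reproduces the values quoted in the caption of Figure 1 for \(n=36\), where \(\Omega(36)=4\):

```python
def upsilon(i, n):
    # Eq. (6): upsilon_1 is identically 1; deeper arms sum over proper divisors
    if i == 1:
        return 1
    return sum(upsilon(i - 1, d) for d in proper_divisors(n))

vals = [upsilon(i, 36) for i in range(1, 6)]  # i = 1, ..., Omega + 1
print(vals, sum(vals))                        # [1, 8, 19, 18, 6] 52
```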
## 5. Discussion
Theorem 1 tells us that, to our surprise, \(\kappa_{0}(n)=2K(n)\) can be expressed as a generalized hypergeometric function. It is in some sense the simplest such function that can be naturally tied to the prime signature of a number. This connection opens the door to the considerable machinery that is known for the generalized hypergeometric function.
Theorem 1 offers some insight into the properties of \(\kappa_{0}(n)\) and \(K(n)\). Our proof
of it suggests a simple demonstration that, for \(n\geq 2\), \(\kappa_{0}(n)=2K(n)\). As with \(\kappa_{0}(n)\), it is convenient to express (1) as
\[2K(n)=\varepsilon+\sum_{d|n}K(d).\]
In the language of Dirichlet convolutions, this is
\[K=(\varepsilon+\mathbf{1}\star K)/2, \tag{8}\]
where recall \(\varepsilon=1,0,0,0,\ldots\). Iterating this leads to the infinite series
\[K =\frac{\varepsilon}{2}+\frac{\mathbf{1}}{2^{2}}+\frac{\mathbf{1}\star\mathbf{1}}{2^{3}}+\frac{\mathbf{1}\star\mathbf{1}\star\mathbf{1}}{2^{4}}+\ldots =\frac{1}{2}\left(\varepsilon+\frac{\mathbf{1}}{2}+\frac{\mathbf{1}\star\mathbf{1}}{2^{2}}+\frac{\mathbf{1}\star\mathbf{1}\star\mathbf{1}}{2^{3}}+\ldots\right) =\left(\varepsilon+\kappa_{0}\right)/2,\]
where the last step makes use of (3).
We can also readily calculate the Dirichlet series for \(\kappa_{0}(n)\) and \(K(n)\). Our starting point is (4). Denoting the Dirichlet series of \(\kappa_{0}\) and \(K\) by \(\widetilde{\kappa}_{0}\) and \(\widetilde{K}\), we can write
\[\widetilde{\kappa}_{0}=\frac{\widetilde{\tau}_{1}}{2}+\frac{\widetilde{\tau }_{2}}{2^{2}}+\frac{\widetilde{\tau}_{3}}{2^{3}}+\ldots.\]
Since \(\tau_{1}=\mathbf{1}\), \(\tau_{2}=\mathbf{1}\star\mathbf{1}\), and so on, and the Dirichlet series for \(\mathbf{1}\) is \(\zeta(s)\), we have \(\widetilde{\tau}_{i}=\zeta^{i}(s)\). Then
\[\widetilde{\kappa}_{0} =\frac{\zeta(s)}{2}+\frac{\zeta(s)^{2}}{2^{2}}+\frac{\zeta(s)^{3 }}{2^{3}}+\ldots\] \[=\frac{\zeta(s)}{2-\zeta(s)}.\]
As for \(\widetilde{K}\), from (8), \(2\widetilde{K}=1+\widetilde{\kappa}_{0}\), so \(\widetilde{K}=1/(2-\zeta(s))\).
When \(n=p_{1}p_{2}\ldots p_{\omega}\) is the product of \(\omega\) distinct primes, all of the \(\alpha_{i}\) equal one. Then Theorem 1 reduces to
\[\kappa_{0}(p_{1}\ldots p_{\omega}) =\frac{1^{\omega}}{2}+\frac{2^{\omega}}{2^{2}}+\frac{3^{\omega}}{ 2^{3}}+\ldots\] \[=\text{Li}_{-\omega}(1/2),\]
where Li is the polylogarithm. For \(\omega=1,2,3,\ldots\), this has values 2, 6, 26, 150, 1082, \(\ldots\). Its exponential generating function is
\[\text{EG}(\kappa_{0}(p_{1}\ldots p_{\omega}),x) =\sum_{\omega=0}^{\infty}\frac{x^{\omega}}{\omega!}\sum_{i=1}^{ \infty}\frac{i^{\omega}}{2^{i}}\] \[=\sum_{i=1}^{\infty}\frac{1}{2^{i}}\sum_{\omega=0}^{\infty}\frac{ (ix)^{\omega}}{\omega!}\] \[=\sum_{i=1}^{\infty}\frac{e^{ix}}{2^{i}}\] \[=\frac{e^{x}}{2-e^{x}}.\]
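A quick numerical check of the polylogarithm identity (truncating the series, and reusing `kappa0` from the earlier sketch):

```python
def Li_neg(omega, z=0.5, terms=300):
    # Li_{-omega}(z) = sum_{i >= 1} i^omega z^i, truncated
    return sum(i ** omega * z ** i for i in range(1, terms))

print([round(Li_neg(w)) for w in range(1, 6)])  # [2, 6, 26, 150, 1082]
assert round(Li_neg(3)) == kappa0(2 * 3 * 5)    # omega = 3 distinct primes
```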
MacMahon [11] derived a somewhat more complex version of Theorem 2, using a more laborious approach. It is
\[K(n)=\sum_{i=1}^{\Omega}\sum_{j=0}^{i-1}(-1)^{j}\binom{i}{j}\prod_{k=1}^{\omega} \binom{\alpha_{k}+i-j-1}{\alpha_{k}}.\]
In proving Theorem 2, we made use of the sequences \(\upsilon_{i}\), shown in Table 1. We can also calculate their Dirichlet series. Adding \(\upsilon_{i-1}\) to both sides of (6), and turning to the language of Dirichlet convolutions,
\[\upsilon_{i}+\upsilon_{i-1}=\mathbf{1}\star\upsilon_{i-1}.\]
Denoting the Dirichlet series of \(\upsilon_{i}\) by \(\widetilde{\upsilon}_{i}\), this implies \(\widetilde{\upsilon}_{i}+\widetilde{\upsilon}_{i-1}=\zeta(s)\widetilde{\upsilon}_{i-1}\), that is,
\[\widetilde{\upsilon}_{i}=(\zeta(s)-1)\widetilde{\upsilon}_{i-1}.\]
Since \(\widetilde{\upsilon}_{1}=\zeta(s)\), it follows that
\[\widetilde{\upsilon}_{i}=\zeta(s)(\zeta(s)-1)^{i-1}.\]
Conjecture 1, which is correct for \(n\leq 10^{5}\), can be written more symmetrically:
\[\kappa_{0}(n)=\sum_{i_{1}=0}^{\alpha_{1}}\ldots\sum_{i_{\omega}=0}^{\alpha_{ \omega}}\prod_{k=1}^{\omega}\binom{\alpha_{k}}{i_{k}}\binom{\alpha_{k}+i_{1}+ \ldots+i_{k-1}}{\alpha_{k}}.\]
But since \(i_{\omega}\) only appears in \(\binom{\alpha_{\omega}}{i_{\omega}}\), it sums to \(2^{\alpha_{\omega}}\), giving the original form of Conjecture 1. Since the \(\alpha_{k}\) can be permuted at will, a corollary of this is that \(\kappa_{0}(n)\) is divisible by \(2^{\alpha^{*}}\), where \(\alpha^{*}\) is the largest of the \(\alpha_{k}\)s, which we proved in [8]. This makes Conjecture 1 the most efficient of our three expressions for calculating values of \(K\) and \(\kappa_{0}\) for very large values of \(n\). In particular, it is useful for calculating the indices of the sequence records of \(K\) and \(\kappa_{0}\)--the K-champion numbers [4] (A307866 [10]) and the recursively highly composite numbers [8] (A333952 [10]).
There are a number of open questions about \(K(n)\) and \(\kappa_{0}(n)\). Here are four.
1. Can Conjecture 1 be proved?
2. Can Theorem 1 be generalized from \(\kappa_{0}(n)\) to \(\kappa_{x}(n)\)?
3. When \(\alpha_{i}=1\) for all \(i\), Theorem 1 reduces to the polylogarithm and has a simple exponential generating function. What about when \(\alpha_{i}=j\) for all \(i\)?
4. What is the significance of the sequence \(\kappa_{0}(n)/2^{\alpha^{*}(n)}\), namely, 1, 1, 1, 1, 1, 3, 1, 1, 1, 3, 1, 4, 1, 3, 3, 1, 1, 4, 1, 4, 3, 3, 1, 5, 1, 3, 1, 4, 1, 13? (The last and 30th term distinguishes this sequence from others.)
|
2307.16849 | A Trajectory K-Anonymity Model Based on Point Density and Partition | As people's daily life becomes increasingly inseparable from various mobile
electronic devices, relevant service application platforms and network
operators can collect numerous individual information easily. When releasing
these data for scientific research or commercial purposes, users' privacy will
be in danger, especially in the publication of spatiotemporal trajectory
datasets. Therefore, to avoid the leakage of users' privacy, it is necessary to
anonymize the data before they are released. However, more than simply removing
the unique identifiers of individuals is needed to protect the trajectory
privacy, because some attackers may infer the identity of users by the
connection with other databases. Much work has been devoted to merging multiple
trajectories to avoid re-identification, but these solutions always require
sacrificing data quality to achieve the anonymity requirement. In order to
provide sufficient privacy protection for users' trajectory datasets, this
paper develops a study on trajectory privacy against re-identification attacks,
proposing a trajectory K-anonymity model based on Point Density and Partition
(KPDP). Our approach improves the existing trajectory generalization
anonymization techniques regarding trajectory set partition preprocessing and
trajectory clustering algorithms. It successfully resists re-identification
attacks and reduces the data utility loss of the k-anonymized dataset. A series
of experiments on a real-world dataset show that the proposed model has
significant advantages in terms of higher data utility and shorter algorithm
execution time than other existing techniques. | Wanshu Yu, Haonan Shi, Hongyun Xu | 2023-07-31T17:10:56Z | http://arxiv.org/abs/2307.16849v1 | # A Trajectory K-Anonymity Model Based on Point Density and Partition
###### Abstract.
As people's daily life becomes increasingly inseparable from various mobile electronic devices, relevant service application platforms and network operators can collect numerous individual information easily. When releasing these data for scientific research or commercial purposes, users' privacy will be in danger, especially in the publication of spatiotemporal trajectory datasets. Therefore, to avoid the leakage of users' privacy, it is necessary to anonymize the data before they are released. However, more than simply removing the unique identifiers of individuals is needed to protect the trajectory privacy, because some attackers may infer the identity of users by the connection with other databases. Much work has been devoted to merging multiple trajectories to avoid re-identification, but these solutions always require sacrificing data quality to achieve the anonymity requirement. In order to provide sufficient privacy protection for users' trajectory datasets, this paper develops a study on trajectory privacy against re-identification attacks, proposing a trajectory K-anonymity model based on Point Density and Partition (KPDP). Our approach improves the existing trajectory generalization anonymization techniques regarding trajectory set partition preprocessing and trajectory clustering algorithms. It successfully resists re-identification attacks and reduces the data utility loss of the k-anonymized dataset. A series of experiments on a real-world dataset show that the proposed model has significant advantages in terms of higher data utility and shorter algorithm execution time than other existing techniques.
Trajectory dataset; Privacy protection; Re-identification attack; Trajectory clustering
Footnote †: Both authors contributed equally to this research.
Footnote ‡: Corresponding author.
Achieving k-anonymity requires that trajectories from different users in the released dataset are indistinguishable from each other. As a result, trajectories in the original dataset typically need to be replaced with a generalized trajectory shared by several users. The process of replacing a specific value with a more general and imprecise value is called generalization (Beng et al., 2017). The higher the level of generalization, the stronger the privacy protection, but the lower the data utility of the published trajectories and the higher the information loss after generalization. In order to balance the degree of privacy protection and the generalization information loss, we preprocess trajectories by segmenting them according to point density and generalize them based on the idea of the DBSCAN clustering algorithm (Kumar et al., 2017), making the released dataset resistant to re-identification attacks while preserving the distribution features of the trajectories as well as possible. To the best of our knowledge, this paper is the first to propose such a partition preprocessing mechanism. Our main contributions are summarized as follows.
* We investigate the shortcomings of existing trajectory privacy-preserving algorithms and propose a trajectory K-anonymity model based on Point Density and Partition (KPDP). The deficiencies of the existing models mainly stem from the irregularity of the shape distribution of real trajectories and the specific data structure, making it difficult to measure the similarity between trajectories, and thus unable to accurately cluster and generalize trajectories, resulting in a high information loss in the released dataset relative to the original dataset. Based on this situation, KPDP can segment trajectories based on point density before clustering them so that the length of trajectories is relatively balanced and the spatial distribution characteristics of the original trajectories are retained, yielding a lower generalization information loss than other models.
* To further enhance the utility of anonymized trajectory datasets and to achieve k-anonymity, this paper proposes an adaptive DBSCAN trajectory clustering algorithm. The algorithm measures the distance between trajectories using the loss from the alignment of trajectories and then clusters them based on sample density. However, due to the uncertainty of the number of samples in the clusters and the possible presence of noise from DBSCAN, direct adoption of its idea cannot guarantee k-anonymity. We consequently developed an adaptive DBSCAN trajectory clustering algorithm that can automatically adjust the values of parameters based on the number of trajectories and noise in each cluster and repeatedly call the core module to cluster. The main advantage of DBSCAN over other unsupervised machine learning-based algorithms is that it is not constrained by given values of parameters and can produce clustering results that better reflect the characteristics of the trajectory distribution, thus improving the data utility of the released dataset.
* We conducted extensive experiments based on a realistic trajectory dataset to evaluate the privacy-preserving effects of segmentation preprocessing mechanisms and trajectory clustering algorithms under different privacy metrics. The experiment results show that our approach performs better in terms of information loss and running time compared to other existing approaches.
The subsequent structure of this paper is organized as follows. Section 2 introduces and defines trajectory privacy attacks, privacy anonymity criteria, privacy-preserving methods, generalization hierarchy models, and trajectory alignment techniques. We then show an overview of KPDP in Section 3. Following this framework, we illustrate the rationale of the segmentation preprocessing mechanism and the design of the anonymization model in Section 4 and 5. The experiment results and evaluation are presented in Section 6. Finally, we conclude with an overview of our contributions in Section 7.
## 2. Background and Related Works
### Attack Model
A trajectory privacy attack is the acquisition of a user's private information from a trajectory dataset by an attacker with background knowledge. In general, most studies assume that the background knowledge known to the attacker is part of the spatiotemporal points on the user's trajectory, and the privacy information the attacker attempts to disclose is the complete trajectory data of that victim. For a given anonymized dataset, Zhen Tu et al. (2017) denote the set of users as \(U=\{U_{i}\}\) and the corresponding set of trajectories as \(T=\{T_{i}\}\), where \(T_{i}\) denotes the spatiotemporal points of the trajectory of user \(U_{i}\). A constant number of partial points sampled from the actual trajectories is considered the attacker's external background knowledge, denoted as \(E=\{E_{i}\}\), where \(E_{i}\) denotes the attacker's external observation of the user \(U_{i}\). With any external information \(E_{i}\), an attacker makes a successful re-identification attack if he can match only one trajectory, whose formulation is shown in Eq. (1).
\[C_{i}=\left\{\begin{array}{ll}1&\quad\left|\{T_{j}\mid T_{j}\supseteq E_{i},\,T_{j}\in T\}\right|=1,\\ 0&\quad otherwise\end{array}\right.\qquad\sum_{i}C_{i}\geq 1 \tag{1}\]
where \(C_{i}\) denotes whether the user \(U_{i}\) is re-identifiable and \(|*|\) denotes the size of the set \(*\).
In addition, adversaries can also launch attacks based on more public information. Zhen Tu et al. (2017) stated that an attacker could infer a victim's motivation and behaviour to visit a location by associating the Point of Interest (PoI) that the user passes on a map with the primary function of its corresponding location. Huaxin Li et al. (2017) matched the locations shared by users on social networks with their real travel trajectories to enable external attackers to infer information such as their age, gender, and education. John Krumm (2017) quantified the effectiveness of using different attack algorithms to recognize the location of subjects' homes and then identify them through a programmable web search engine. According to (Zhu et al., 2017), the types of location privacy attacks explicitly include Single position attack (Zhu et al., 2017), Multiple position attack (Zhu et al., 2017; Li et al., 2017; Li et al., 2017) and Context linking attack (Li et al., 2017; Li et al., 2017; Li et al., 2017). Although there are many approaches to attacking user privacy, the re-identification attack remains the most fundamental problem. This paper focuses on studying resistance to privacy issues caused by re-identification attacks.
### Privacy Model
The protection of individuals from re-identification attacks has been a topic of much discussion in recent years. The k-anonymity criterion is the most commonly used privacy-preserving metric to resist re-identification attacks for data publishing within the privacy and anonymity domain. K-anonymity is a concept introduced by Samarati and Sweeney in 1998. K-anonymity requires that each record stored in a published dataset should be indistinguishable from at least \(k-1\) other records (Samarati and Sweeney, 1998; Sweeney, 2000; Sweeney, 2000), i.e. it requires that each quasi-identifier value refers to multiple records, making it impossible for adversaries to connect records with other databases by quasi-identifiers and thus deduce user identity and more private information. Current k-anonymity implementations are mostly used to protect data anonymity for category and numerical attributes in general relational databases, including Generalization and suppression, Incognito, Top-down specialization, Clustering, and Multidimensional partitioning (Samarati and Sweeney, 2000; Sweeney, 2000; Sweeney, 2000). However, an irregular geometric data structure such as a trajectory requires a specific processing method to achieve k-anonymity (Samarati and Sweeney, 2000; Sweeney, 2000; Sweeney, 2000).
In this paper, we need to ensure at least \(k\) distinct trajectories in each cluster obtained from the original trajectory set and generalize them to identical anonymous records to form a trajectory dataset that conforms to k-anonymity.
### Defense Techniques
Among the diverse research efforts to achieve k-anonymity of trajectory data, generalization is one of the most dominant approaches. According to the different details of generalization techniques, such as the encoding and operation of the Domain Generalization Hierarchy (DGH) tree, there are three main types of generalization: full domain generalization, subtree generalization and cell level generalization (Samarati and Sweeney, 2000). Acar Tamersoy et al. (2007) proposed a heuristic approach based on the concept of generalization to achieve k-anonymity.
## 3. System Overview
### System Utility Measurement
In the KPDP framework, trajectory alignment is the key to performing trajectory anonymization, and information loss is incurred in trajectory alignment. In order to calculate the loss of KPDP in the process of anonymizing trajectories more accurately and efficiently, a new DGH tree is proposed in this paper. This DGH tree is a partially ordered tree structure that, for a given attribute \(A\), maps between the specific and generalized values of \(A\). The root node of the DGH tree indicates the case with the highest degree of generalization. Our DGH tree is constructed by dividing the range of attribute values into a number of small intervals of equal length and then using these intervals as the leaf nodes of a full binary tree. If the number of leaf nodes is not enough to fill the bottom level of the binary tree, some invalid points are added to fill it up. A simple illustration of a DGH tree with a 4-layer structure is shown in Figure 1. The leaf node numbered 12 can be generalized to the parent node 6 or the ancestor nodes 3 and 1. Specifically, KPDP in this paper uses two DGH trees built from the latitudes and longitudes in the trajectory set, corresponding to the x-axis and y-axis coordinate systems of the map plane space, respectively.
For KPDP, the information loss generated by this system mainly comes from the generalized information loss in the process of satisfying the k-anonymity criterion. The calculation of generalized information loss is based on the relationship between nodes on the DGH tree. The generalized information loss includes single-node generalized information loss as well as multi-node generalized information loss.
**Definition 1. Single-node generalized information loss:** The information loss incurred when generalizing a node to a parent or higher level node is calculated as shown in Eq. (2).
\[Loss_{g}(node_{i},node_{j})=log_{2}(LF(node_{i}))-log_{2}(LF(node_{j})) \tag{2}\]
where \(Loss_{g}(node_{i},node_{j})\) is the generalization information loss generated by generalizing \(node_{j}\) to \(node_{i}\), and \(LF(node_{k})\) returns the number of leaf nodes owned by \(node_{k}\). The special case of leaf nodes being generalized to the root is called suppression (Kendal, 1998), and in the suppression case, the generalization information loss is calculated as shown in Eq. (3).
\[Loss_{g}(node_{i})=H \tag{3}\]
where \(H\) denotes the height of the DGH tree.
**Definition 2. Multi-node generalized information loss:** Any two nodes on the DGH tree are generalized by finding the smallest subtree containing both nodes. The Lowest Common Ancestor (LCA) of two nodes is the result of their generalization. The information loss caused by generalizing two nodes to their LCA node is calculated as shown in Eq. (4).
\[\begin{split} Loss_{g}(node_{i},node_{j},node_{LCA})=Loss_{g}( node_{LCA},node_{i})\\ +Loss_{g}(node_{LCA},node_{j})\end{split} \tag{4}\]
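Under the heap-numbering layout assumed in the previous sketch, Eqs. (2)-(4) reduce to a few lines; the following illustrative rendering (with the root at depth 1, our convention) is not taken from the paper's code:

```
import math

# Sketch of Eqs. (2)-(4) for a full binary DGH tree of height H with heap
# numbering. A node at depth d (root at depth 1) owns 2**(H - d) leaves.

def depth(node):
    return node.bit_length()                 # heap index -> depth

def leaf_count(node, H):
    return 2 ** (H - depth(node))            # LF(node)

def loss_single(ancestor, node, H):
    # Eq. (2): loss of generalizing `node` up to `ancestor`.
    return math.log2(leaf_count(ancestor, H)) - math.log2(leaf_count(node, H))

def loss_suppress(H):
    # Eq. (3): suppression is charged the full tree height H. (Note that
    # under this depth convention loss_single(1, leaf, H) = H - 1, so the
    # paper's flat charge of H is slightly larger than plain root lifting.)
    return H

def lca(a, b):
    # Lowest Common Ancestor: with heap numbering, parent(i) = i // 2.
    while a != b:
        if a > b:
            a //= 2
        else:
            b //= 2
    return a

def loss_pair(a, b, H):
    # Eq. (4): generalize both nodes up to their LCA.
    anc = lca(a, b)
    return loss_single(anc, a, H) + loss_single(anc, b, H)

# Leaves 12 and 13 in a height-4 tree meet at node 6: each pays 1 bit.
assert lca(12, 13) == 6 and loss_pair(12, 13, 4) == 2.0
```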
Since the trajectories input to the KPDP system usually have irregular geometry, this paper uses the PSA algorithm to cluster multiple trajectories and thereby achieve the trajectory anonymization goal of KPDP. In PSA, we need to find the trajectories with the smallest relative distance and the closest shape for clustering. To complete the calculation process of PSA, this paper adopts the DSA algorithm to calculate the distance between trajectories, namely the information loss generated in the process of aligning them, and uses the information loss generated by trajectory alignment as the measure of relative distance between trajectories in clustering. In DSA, the generalization information loss of generalizing two trajectory points or suppressing a trajectory point is calculated based on the DGH tree generalization model of the corresponding dimensional attribute. According to Eq. (3) and Eq. (4), for any two trajectories \(P=(p_{1},\ldots,p_{n})\) and \(Q=(q_{1},\ldots,q_{m})\), when DSA is performed on them, the recursive equation of the dynamic program is shown in Eq. (5).
\[SAmatrix[i][j]=\min\left\{\begin{array}{l}SAmatrix[i-1][j-1]+Loss_{g}(p_{i}.X,q_{j}.X,X_{LCA})+Loss_{g}(p_{i}.Y,q_{j}.Y,Y_{LCA}),\\ SAmatrix[i][j-1]+Loss_{g}(q_{j}.X)+Loss_{g}(q_{j}.Y),\\ SAmatrix[i-1][j]+Loss_{g}(p_{i}.X)+Loss_{g}(p_{i}.Y)\end{array}\right. \tag{5}\]
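Eq. (5) is an edit-distance-style dynamic program; a minimal sketch, reusing the hypothetical loss helpers above and assuming each trajectory point carries a pair of DGH leaf indices for its X and Y coordinates, could read:

```
# Sketch of the DSA recursion in Eq. (5) (illustrative names, not the
# paper's code). Each trajectory point is a pair (x_leaf, y_leaf) of DGH
# leaf indices; loss_pair / loss_suppress are the helpers sketched above.

def dsa(P, Q, H):
    n, m = len(P), len(Q)
    SA = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):                  # border: suppress a prefix of P
        SA[i][0] = SA[i - 1][0] + 2 * loss_suppress(H)
    for j in range(1, m + 1):                  # border: suppress a prefix of Q
        SA[0][j] = SA[0][j - 1] + 2 * loss_suppress(H)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            (px, py), (qx, qy) = P[i - 1], Q[j - 1]
            SA[i][j] = min(
                SA[i - 1][j - 1] + loss_pair(px, qx, H) + loss_pair(py, qy, H),
                SA[i][j - 1] + 2 * loss_suppress(H),   # suppress q_j (X and Y)
                SA[i - 1][j] + 2 * loss_suppress(H),   # suppress p_i (X and Y)
            )
    return SA[n][m]        # alignment information loss, used as the distance
```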
### KPDP Workflow
KPDP is mainly composed of two parts, the Partition model and the Anonymization model; the trajectory dataset of multiple users is the input of KPDP, and the anonymized trajectory dataset is its output. When two trajectories that are close to each other differ greatly in length, the information loss from DSA alignment is large, so the two trajectories cannot be grouped into one cluster by a clustering algorithm based on the distance between trajectories; this makes the clustering in the Anonymization model less effective and generates a larger information loss. As shown in Figure 2, it can be found from Eq. (4) that in the process of aligning trajectory \(tr_{1}\) with trajectory \(tr_{2}\), \(p_{1}\) and \(q_{1}\), and \(p_{2}\) and \(q_{2}\), are generalized as pairs of nodes, while \(q_{3}\), \(q_{4}\) and \(q_{5}\) are each generalized to the root node of the DGH tree as single nodes, and this process produces excessive information loss. In this paper, we set up a Partition model to reduce the information loss
Figure 1. Schematic diagram of the DGH tree structure used in the utility measurement of KPDP
of KPDP anonymization while still satisfying the k-anonymity requirement.
The specific workflow is shown in Figure 3. The trajectory dataset is first preprocessed by the Partition model, which processes all trajectories in advance so as to preserve the original geometric features of the trajectories in the Anonymization model as much as possible and to prevent excessive information loss during anonymization. The processed dataset is then transferred from the Partition model to the Anonymization model, which uses the PSA algorithm and the adaptive DBSCAN clustering algorithm proposed in this paper to complete the trajectory clustering, and finally outputs the anonymized trajectory dataset of the KPDP system.
## 4. Partition Model
Based on the workflow of KPDP in Section 3.2, this section focuses on the segmentation preprocessing of trajectories to reduce their subsequent generalization information loss. This process refers to segmenting the trajectories based on point density before anonymizing the trajectory set, so that the released dataset retains the distribution features of the trajectories and the generalization information loss in the alignment and clustering steps is reduced as much as possible. We illustrate the three main steps of the Partition model: generating auxiliary points on the trajectories, clustering the point set, and segmenting the trajectories based on the clustering distribution. The steps interlock to make the lengths of the trajectories relatively uniform. The segmentation preprocessing of the original trajectory set is shown schematically in Figure 4.
First, for the original trajectory set in Figure 4(a), Figure 4(b) shows the generation of green auxiliary points on the trajectory with the same distance \(d\). All the points in Figure 4(c), including the actual existing and virtual auxiliary points, are considered whole point sets for clustering. The clusters of points in Figure 4(d) are distinguished from each other by different colours, i.e., points of different colours belong to different clusters. These points are mapped back to the trajectory set in Figure 4(e), and the trajectories are partitioned at the neighbouring points belonging to different clusters according to the boundaries of the point clusters. The final result is shown in Figure 4(f), where the auxiliary points as trajectory endpoints after segmenting are kept in the trajectory set and form a new segmented dataset with other actual points.
We conducted extensive experiments to evaluate our segmentation model. The results demonstrate that, compared with the direct method of clustering and generalizing the trajectories, adding the preprocessing step can not only effectively reduce the overall generalization information loss but also speed up the running of the trajectory clustering algorithm.
### Auxiliary Point Generation
Before segmenting the trajectory, relatively dense auxiliary points are added between adjacent points. These virtual auxiliary points are equally spaced, as shown in Figure 4(b). Since the actual points
Figure 4. Schematic diagram of the preprocessing
Figure 3. KPDP Workflow
Figure 2. Schematic diagram of the alignment of trajectories \(tr_{1}\) and \(tr_{2}\) in KPDP
on the trajectory are time-ordered, the primary purpose of setting auxiliary points is to make line segments of different lengths have the same effect on the density of points in their neighbourhoods, so that the line segments represented by points better approximate the solid form of the line in space. Auxiliary points are defined as follows.
**Definition 3. Auxiliary Point:** Points that do not exist in the trajectory dataset and are used to reflect the spatial distribution structure of the trajectory. For the line segment formed between two points that are adjacent in the time sequence, starting from the earlier endpoint, an auxiliary point is added at each fixed distance \(d\) along the line segment.
The smaller the distance between the generated auxiliary points, the better the point set reflects the distribution shape of the trajectories in space. However, if the spacing is too small, it increases the amount of data to be processed and affects the operational efficiency.
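A minimal sketch of this auxiliary-point generation, assuming planar Euclidean geometry for simplicity (real GPS data would call for a geodesic distance):

```
import math

def add_auxiliary_points(tr, d):
    """tr: time-ordered list of real points (x, y); returns a list of
    (x, y, is_real) with virtual auxiliary points every distance d."""
    out = []
    for (x0, y0), (x1, y1) in zip(tr, tr[1:]):
        out.append((x0, y0, True))
        seg = math.hypot(x1 - x0, y1 - y0)
        k = 1
        while k * d < seg:                   # equally spaced interior points
            t = k * d / seg
            out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0), False))
            k += 1
    out.append((*tr[-1], True))              # keep the last real point
    return out
```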
### Point Set Clustering
In order to make the length of trajectories relatively uniform, partitioning trajectories based on the difference in spatial trajectory density is our proposed solution. Regarding distribution, the density of trajectories in macroscopic space is reflected as the density difference of the points on the microscopic level. The point clustering algorithm can automatically gather the close points into clusters, reflecting the density distribution of points on the plane space.
For the trajectory dataset with auxiliary points, all coordinate points on the trajectory are regarded as the whole point set to be clustered. Meanwhile, the mapping relationship between each point and the cluster it belongs to is recorded for the subsequent segmentation operation.
We call the k-means point clustering method from the scikit-learn machine learning library to divide the points into \(k\) clusters based on the spatial Euclidean distance between them. The k-means algorithm is one of the most basic and widely used clustering algorithms; it divides data samples with different attribute values into a designated number of clusters and uses the mean of all samples within each cluster as the representative point (Zhou et al., 2017). The main idea is to divide the dataset into different classes by iteratively adjusting the clustering centres so that the mean error criterion function, which measures the clustering performance, is optimal, ensuring that the generated clusters are internally compact and well separated from each other.
The effective operation of a clustering algorithm generally relies on the homogenization and standardization of the data feature variables. Since the attribute values used to calculate the Euclidean distance only contain two dimensions, longitude and latitude, which have uniform magnitude and no significant disparity, the k-means algorithm can be applied directly to divide the point set on the planar map space into clusters.
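As a usage illustration (the cluster count below is a tuning choice, not a value from the paper), the pooling and bookkeeping might look like this:

```
import numpy as np
from sklearn.cluster import KMeans

# Illustrative sketch of the point-set clustering step. `trajectories` is a
# list of trajectories already augmented by add_auxiliary_points() above.

def cluster_point_set(trajectories, n_clusters):
    points = np.array([(x, y) for tr in trajectories for (x, y, _) in tr])
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(points)
    # Record the point -> cluster mapping for the segmentation step:
    # mapping[(t, p)] is the cluster of point p of trajectory t.
    mapping, i = {}, 0
    for t, tr in enumerate(trajectories):
        for p in range(len(tr)):
            mapping[(t, p)] = int(labels[i])
            i += 1
    return mapping
```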
### Trajectory Segmentation
In this stage, we use the clustering boundaries generated by the k-means algorithm to segment the trajectories to reduce the disparity in the length of trajectories in the original dataset. Referring to (Zhou et al., 2017), the sum of segmented trajectories is not necessarily the original trajectory but a characteristic reflection of its structure distribution. Therefore, when trajectory clustering is performed later, the segments of a trajectory may belong to several different clusters and subsequently be generalized to different anonymous trajectories. However, the accuracy of the trajectory clustering will be relatively higher due to the reduced cost of information loss when aligning long and short trajectories later. In contrast, the overall trajectory clustering will lose more detailed features and incur higher generalization information loss during generalization. In the KPDP framework, after clustering the segmented trajectories, the length of trajectories within each cluster is relatively consistent, so the shape of the anonymous trajectories will be more reasonable.
The segmentation process of the trajectory set is described as follows. Iterate through each trajectory in the trajectory dataset containing auxiliary points and check whether the adjacent points on a trajectory belong to the same cluster. If they do not, the trajectory is segmented, and a new trajectory is generated. When the endpoint of a segmented trajectory is an auxiliary point, it will be added to the newly generated trajectory dataset as a real trajectory point, while other non-endpoint auxiliary points will be removed and will not be involved in the subsequent privacy-preserving processing. The pseudo-code for generating a segmented trajectory dataset is shown in Algorithm 1, whose input is the trajectory dataset containing auxiliary points.
```
input  : Dataset T
output : Partitioned Dataset T_partitioned

Let T_partitioned be an empty set that will store the new partitioned trajectory dataset;
for tr in T do
    Let new_tr be an empty set that will store the new trajectory;
    Append tr[0] as the first point to new_tr;
    for p in range(0, len(tr) - 1) do
        if the point p and the adjacent point p+1 belong to the same cluster then
            Append tr[p+1] to new_tr
        else
            if point p is not a real point then
                Append tr[p] to new_tr
            Append new_tr to T_partitioned;
            Let new_tr be an empty set that will store the new trajectory again;
            Append tr[p+1] to new_tr;
    Append new_tr to T_partitioned;
return T_partitioned
```
**Algorithm 1** Trajectory segmentation algorithm
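For readability, a runnable Python rendering of the same logic follows (a sketch that implements the textual description, with interior auxiliary points dropped and auxiliary endpoints kept, rather than the pseudocode line by line; `cluster[p]` denotes the k-means cluster id of point `p` of this trajectory):

```
def partition_trajectory(tr, cluster):
    """Split tr wherever two adjacent points fall in different clusters.
    tr is a list of (x, y, is_real) points; cluster[p] is the cluster id
    of tr[p]. Interior auxiliary points are dropped, but an auxiliary
    point that becomes a segment endpoint is kept as a real point."""
    parts, new_tr = [], [tr[0]]
    for p in range(len(tr) - 1):
        if cluster[p] == cluster[p + 1]:
            new_tr.append(tr[p + 1])
        else:
            parts.append(new_tr)          # cluster boundary: cut here
            new_tr = [tr[p + 1]]          # start a new segment
    parts.append(new_tr)
    # Keep real points plus segment endpoints only.
    return [[q for i, q in enumerate(seg) if q[2] or i in (0, len(seg) - 1)]
            for seg in parts]
```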
On the one hand, the maximum distance lost in segmenting a trajectory is \(d\), because after virtual auxiliary points are generated along the trajectory direction, the spacing between adjacent points (whether between an auxiliary point and a real point or between two real points) is never larger than \(d\). In order to make the segmented trajectory closer to the original one, the parameter \(d\) should be as small as possible without making the algorithm overly complicated, so that the loss due to segmentation can be
minimized when cutting the line segment between two adjacent points. On the other hand, the trajectory segmentation should not only ensure accuracy but also have simplicity, i.e., use as few points as possible to characterize the shape of the trajectory. The virtual auxiliary points that are not endpoints on the trajectory do not contribute significantly to the subsequent generalization process of the trajectory but rather increase the time complexity of the alignment algorithm, so they are discarded when generating the new segmented trajectory dataset.
## 5. Anonymization Model
In order to achieve the anonymity requirement, we introduce clustering algorithms that can gather data samples based on similarity. Clustering is an unsupervised learning method in the field of machine learning that is capable of discovering patterns implicit in a dataset. By clustering the preprocessed trajectory set wisely, it can produce a low information loss during generalization and anonymization, further maintaining the distribution characteristics and data utility of the original trajectory set. In this paper, two trajectory clustering algorithms are considered to construct the anonymization model, respectively, the iterative k'-means algorithm and the adaptive DBSCAN algorithm. Both use the alignment information loss obtained by DSA as the distance indicator between two trajectories and provide a design such that the number of trajectories contained in each cluster is no less than \(k\), ensuring compliance with the privacy-preserving requirement of k-anonymity. Among them, the adaptive DBSCAN algorithm is the primary one that this paper focuses on as a method that can significantly improve the utility of the anonymous trajectory dataset and reduce the model running time, while the iterative k'-means algorithm is mainly used for comparison. These two algorithms run independently in the anonymization model of KPDP. After clustering, KPDP will apply PSA to generalize the trajectories of each cluster to derive the anonymous trajectory set for publication.
### Iterative K'-means algorithm
We borrowed the idea from (Srivastava et al., 2017) to perform k'-means clustering on trajectories (where the prime is used to distinguish the "k" in k'-means from the "k" in k-anonymity) and ensure the number of trajectories within each cluster is at least \(k\) by iteration. K'-means is a distance-based clustering algorithm. Its clustering similarity is calculated using the mean distance between objects within each cluster. The brief idea is to divide the data objects into \(k^{\prime}\) clusters according to the input value of \(k^{\prime}\), making the similarity within each cluster higher and the similarity between different clusters lower. The iterative k'-means algorithm is used for comparison with the adaptive DBSCAN algorithm.
The basic k'-means algorithm works by first selecting any \(k^{\prime}\) objects from the dataset as the initial cluster centers and assigning the remaining objects to the most similar clusters (i.e., closest in the distance) to them based on their similarity (usually Euclidean distance). Then for each cluster, a new cluster center is calculated based on the mean value of the distances of all objects in the cluster. This process is repeated until the cluster centers no longer change or the standard measure of clustering performance converges.
Measuring the relative distance between trajectories is a major difficulty for an irregular data structure like spatiotemporal trajectories. The iterative k'-means algorithm in this paper uses the information loss generated by DSA between two trajectories to measure their relative distance. In addition, when designing the trajectory clustering algorithm based on k'-means, many technical details need to be adjusted to the characteristics of trajectory data so that the iterative k'-means algorithm can effectively cluster and generalize trajectories in the trajectory privacy protection model. Its workflow is described as follows: **(1)** Calculate the initial number of clusters based on the value of \(k\) required for k-anonymity and the number of trajectories in the dataset. **(2)** Randomly select a trajectory from the trajectory set as the initial clustering center of each cluster. **(3)** Assign every trajectory in the trajectory set to the cluster center with which its DSA alignment produces the least information loss. **(4)** Apply the PSA algorithm to each cluster, generalizing and merging the trajectories it contains to form a new cluster center. **(5)** Repeat steps (3) and (4) until the trajectories contained in each cluster no longer change, completing the k'-means clustering of trajectories. **(6)** Dissolve the clusters containing fewer than \(k\) trajectories and repeat the k'-means clustering steps until all clusters conform to k-anonymity.
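A minimal sketch of this loop (illustrative, not the paper's implementation; `dsa` is the alignment loss of Eq. (5), and `psa` is assumed to generalize a cluster of trajectories into a single centre trajectory):

```
import random

def iterative_kprime_means(T, k, dsa, psa):
    """Sketch of steps (1)-(6) above."""
    accepted = []
    while T:
        k_prime = max(len(T) // k, 1)                      # step (1)
        centers = random.sample(T, k_prime)                # step (2)
        prev = None
        while True:
            assign = [min(range(k_prime),
                          key=lambda c: dsa(tr, centers[c]))
                      for tr in T]                         # step (3)
            if assign == prev:                             # stop: step (5)
                break
            prev = assign
            clusters = [[tr for tr, a in zip(T, assign) if a == c]
                        for c in range(k_prime)]
            centers = [psa(cl) if cl else centers[c]       # step (4)
                       for c, cl in enumerate(clusters)]
        accepted += [cl for cl in clusters if len(cl) >= k]
        T = [tr for cl in clusters if 0 < len(cl) < k for tr in cl]  # step (6)
        if 0 < len(T) < k:            # too few leftovers to form a cluster
            if accepted:
                accepted[-1].extend(T)
            else:
                accepted.append(T)
            T = []
    return accepted
```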
Compared with the basic k'-means algorithm, the iterative k'-means algorithm obtains the centre of a cluster of trajectories by PSA, except for the initial clustering centres randomly selected from the set of trajectories. In addition, the ordinary k'-means algorithm decides whether to perform the next clustering iteration based on the change of the cluster centres; here, due to the specificity of the generalized trajectories, the iteration stops when the cluster assignment no longer changes, which reduces the algorithm complexity.
Theoretically, the iterative k'-means algorithm is random for selecting initial clustering centers. This may lead to a high overall generalization information loss by generalizing each cluster when the distribution of initial clustering centers is poor, reducing the data utility of the final trajectory set used for publication. The experimental performance of the iterative k'-means algorithm on real datasets will be discussed in Section 6.
### Adaptive DBSCAN Algorithm
Inspired by the iterative k'-means algorithm, we propose the adaptive DBSCAN algorithm, which can capture the distribution characteristics among trajectories with more details. DBSCAN is a density-based spatial clustering of applications with noise, which measures the similarity between data samples in terms of density (Kal
The basic DBSCAN algorithm requires two parameters to be entered before working: the neighbourhood radius \(epsilon\) and the minimum number of samples contained in the neighbourhood \(minPts\). Once it starts running, the DBSCAN algorithm will traverse and label each sample in the dataset. First, for any sample that has not been labelled, find all samples whose relative distance to it is within \(epsilon\). If the number of samples contained in the neighbourhood of the sample reaches the threshold indicator \(minPts\), the sample and all samples in its neighbourhood will form a cluster, and the sample will be marked as visited. Then recursive processing is performed for the other samples in that cluster to extend the cluster by the same steps.
Conversely, if the number of samples contained in the neighbourhood of that sample is less than \(minPts\), the sample is temporarily marked as noise. Once the recursion is over, the cluster has been sufficiently extended, i.e. all samples are marked as visited. The algorithm then proceeds to traverse the points in the dataset, and the points that have not been labelled are processed similarly. The basic DBSCAN algorithm outputs clusters from sample density expansion and possibly noisy samples that are still labelled as the noise at the end of the algorithm.
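For concreteness, here is a compact sketch of this core labelling loop over a generic distance function (illustrative code, not the paper's implementation):

```
def dbscan_core(items, dist, epsilon, min_pts):
    """Return (clusters, noise) for a generic distance function."""
    UNSEEN, NOISE = -2, -1
    label = [UNSEEN] * len(items)
    clusters = []
    for s in range(len(items)):
        if label[s] != UNSEEN:
            continue
        neigh = [t for t in range(len(items))
                 if dist(items[s], items[t]) <= epsilon]
        if len(neigh) < min_pts:
            label[s] = NOISE                  # may be rescued later
            continue
        cid = len(clusters)
        clusters.append([])
        label[s] = cid
        queue = neigh
        while queue:                          # expand by density connection
            t = queue.pop()
            if label[t] == NOISE:
                label[t] = cid                # border sample joins the cluster
            if label[t] != UNSEEN:
                continue
            label[t] = cid
            t_neigh = [u for u in range(len(items))
                       if dist(items[t], items[u]) <= epsilon]
            if len(t_neigh) >= min_pts:       # core sample: keep expanding
                queue.extend(t_neigh)
        clusters[cid] = [items[i] for i in range(len(items)) if label[i] == cid]
    noise = [items[i] for i in range(len(items)) if label[i] == NOISE]
    return clusters, noise
```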
We conducted an intensive study on the utilization of DBSCAN ideas for the trajectory clustering algorithm and proposed an adaptive DBSCAN algorithm that meets the privacy preservation requirement. Similar to the iterative k'-means algorithm, the adaptive DBSCAN algorithm measures the relative distance of two trajectories by the information loss generated by DSA. In order to make the anonymous trajectory dataset generated by clustering and generalization fulfil the k-anonymity criterion, we assign \(k\) as the value of \(minPts\) in the adaptive DBSCAN algorithm. The parameter \(minPts\) is the threshold indicator of whether a trajectory is clustered with its neighbouring trajectories, so as long as the value of \(minPts\) is no less than \(k\), the number of trajectories within each cluster is guaranteed to be at least \(k\), resulting in k-anonymity of the trajectory dataset.
As for the noisy samples that may exist in DBSCAN, we also handled them specifically in the trajectory privacy-preserving scenario. Analogous to the iterative k'-means algorithm, the adaptive DBSCAN algorithm repeatedly calls the core DBSCAN code until all clusters satisfy k-anonymity in order to make the noisy trajectories eventually satisfy the anonymity requirement as well. The noisy trajectories formed by DBSCAN each time will become the new input dataset for the next clustering, while another input parameter, the neighbourhood radius \(epsilon\), will be enlarged appropriately to lower the judgment criterion of density connection between samples so that those noisy trajectories can be clustered more easily. The algorithm will not stop calling the DBSCAN core code until there are no more noisy samples in the dataset.
The pseudo-code of the adaptive DBSCAN algorithm is shown in Algorithm 2. It takes the trajectory dataset, the value of \(k\) of the k-anonymity criterion and the neighbourhood radius parameter \(epsilon\) as inputs, and it outputs the clustered trajectory dataset. Its workflow is described as follows: **(1)** Take a trajectory that has not yet been labelled in the trajectory set. **(2)** If the number of trajectories whose DSA information loss with respect to that trajectory is less than the neighbourhood radius \(epsilon\) reaches the threshold \(minPts\) (which takes the value of \(k\)), find all trajectories density-connected to that trajectory to form a cluster, and mark all trajectories in the cluster. **(3)** Otherwise, mark the trajectory as noise, find the next unmarked trajectory, and repeat the previous step until all trajectories are marked. **(4)** For the set of trajectories still marked as noise at this point, adaptively enlarge the value of the neighbourhood radius \(epsilon\) and repeat the above steps until all generated clusters satisfy k-anonymity.
```
input  : Dataset T, Anonymity Criterion k, Neighbour Radius epsilon
output : Trajectory Cluster Dataset T_cluster_k

Let T_cluster_k be an empty set that will store the clusters with at least k trajectories;
while true do
    T, T_cluster <- TrajectoryDBSCANClustering(T, epsilon, k);
    Append the clusters in T_cluster to T_cluster_k;
    if |T| < 2 * k then
        Cluster T's remaining trajectories together and append the last cluster to T_cluster_k;
        break
    if epsilon < top_epsilon then
        Increase the value of epsilon
    else
        Cluster T's remaining trajectories together and append the last cluster to T_cluster_k;
        break
return T_cluster_k
```
**Algorithm 2** Adaptive DBSCAN algorithm
In the loop of the algorithm, the value of neighbourhood radius \(epsilon\) will be changed adaptively based on the statistical distribution of the relative distance between trajectories. For example, in the first several rounds, the density of the trajectories is high, and the relative distances between trajectories will be concentrated at a low level. If we increase the neighbourhood radius \(epsilon\) by a small margin, we can efficiently cluster the trajectories in the area of high density. As looping times increase, the number of trajectories with low relative distances decreases, and the main distribution of distances among trajectories will tend to a higher value range. At this point, the neighbourhood radius \(epsilon\) must be raised by a greater magnitude to reduce invalid loops of the core function and improve the efficiency of the adaptive DBSCAN algorithm.
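One possible reading of this adaptive schedule, stated here as an assumption rather than the paper's exact rule, is to take each round's radius from a progressively higher quantile of the pairwise DSA losses among the remaining trajectories:

```
import numpy as np

# Hypothetical epsilon schedule: early rounds use a low quantile of the
# distance distribution (dense region), later rounds use higher quantiles
# so that sparse leftover trajectories can still be clustered.

def next_epsilon(pairwise_losses, round_idx, base_q=0.10, step_q=0.15):
    q = min(base_q + round_idx * step_q, 0.95)
    return float(np.quantile(pairwise_losses, q))

# Example with synthetic losses: radii grow round by round.
losses = np.random.default_rng(0).gamma(2.0, 5.0, size=1000)
radii = [next_epsilon(losses, r) for r in range(4)]
```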
Theoretically, the time complexity of the adaptive DBSCAN algorithm is an order of magnitude lower than that of the iterative k'-means algorithm because no iteration of cluster centres is required, which significantly reduces the time consumed in the trajectory clustering stage. In addition, the algorithm can treat noisy data specially to enhance the data utility of the generalized trajectories, and the stability of the cluster generation process is another advantage. As for the logic of the algorithm, how to adaptively adjust the value of the neighbourhood radius parameter is the key to improving efficiency: reducing the algorithm's complexity and optimizing the parameter values are competing goals that require a balanced design. Extensive
experiments on a real dataset will evaluate the performance of our proposed model.
## 6. Evaluation
This section describes the experiments on trajectory privacy protection of the KPDP framework against re-identification attacks on real-world datasets. We mainly evaluate and analyze the effectiveness of the preprocessing step of the trajectory set and the performance of two trajectory clustering algorithms and explore the role of different values of parameters in the k-anonymity criterion on the experimental procedure and results. The experimental results reflect the superior performance of our method in all aspects.
### Dataset Introduction
The trajectory dataset used in the experiment is from the Geolife project (Geolife, 1998; Geolife, 1999; Geolife, 1999) and the T-Drive dataset (Geolife, 1999; Geolife, 1999), which consists of GPS trajectories of mobile device users in the Beijing area, specifically including the longitude and latitude information and the time-series relationship of trajectory points. After obtaining the basic trajectory data, the original trajectory set used for trajectory privacy protection in this paper consists of all the trajectories intercepted in an area on the map of Beijing, China, corresponding to the longitude and latitude ranges of \(116.300000\sim 116.316000^{\circ}E\) and \(39.989500\sim 40.000000^{\circ}N\). The road network model composed of this trajectory set is shown in Figure 5, where the trajectories consist of longitude and latitude coordinate points collected at certain time intervals.
obtained from a large number of reliable repeated experiments. In the experiments in Figures 6 and 7, the number of segmented trajectories increases from 270 to 1372.
In order to make the final published trajectory dataset resistant to re-identification attacks, the trajectory privacy-preserving model needs to anonymize the trajectory set according to the selected k-values in k-anonymity criterion. In this process, the DGH tree generalization model of the trajectory set needs to be established first, i.e., the corresponding coordinate values of the trajectory points are represented by the numbers of the leaf nodes on the latitude and longitude DGH trees. Then the iterative k'-means algorithm and the adaptive DBSCAN algorithm cluster the trajectories, respectively. Finally, the clusters of trajectories formed by the clusters are generalized to obtain the trajectory dataset conforming to k-anonymity and the corresponding loss of generalization information.
### Analysis of Experimental Results
In the experiments on trajectory privacy preservation against re-identification attacks, this paper will compare and evaluate two trajectory clustering algorithms with and without segment preprocessing in three aspects: total information loss, average information loss per cluster, and execution time.
Figure 8 shows the comparison of the values of the three metrics obtained by the iterative k'-means algorithm and the adaptive DBSCAN algorithm for different k-anonymity criteria without trajectory segmentation preprocessing, where the k-anonymity criterion \(k\) takes the values 2, 4, 8 and 10.
As shown in Figure 8(a), the total generalized information loss of the trajectory set increases as \(k\) increases. The trajectory protection model clustered by the adaptive DBSCAN algorithm produces lower information loss than the iterative k'-means algorithm at all four \(k\) values.

As shown in Figure 8(b), the average generalized information loss per cluster of the trajectory set also increases with increasing \(k\). The information loss generated by the iterative k'-means algorithm is two to four times higher than that of the adaptive DBSCAN algorithm, and the difference becomes more pronounced as \(k\) increases.
Figure 8. Comparison of the three performances of the two trajectory clustering algorithms without segmentation preprocessing at different \(k\) values
Figure 9. After segmentation preprocessing, the three performance comparisons of the two trajectory clustering algorithms at different \(k\) values
The execution time of the two trajectory clustering algorithms in the model is shown in Figure 8(c); the execution time decreases as \(k\) increases. In the experiments for each \(k\) value, the execution time of the adaptive DBSCAN algorithm is stable within 5000 seconds, while the execution time of the iterative k'-means algorithm is much higher for \(k\) values of 2 and 4, and relatively lower and smoother for \(k\) values of 8 and 10, but still higher than that of the adaptive DBSCAN algorithm.
Figure 9 shows a comparison of the values of the three metrics obtained by the iterative k'-means algorithm and the adaptive DBSCAN algorithm under different k-anonymity criteria for the dataset preprocessed by trajectory segmentation, where the k-anonymity criterion takes the values 2, 4, 8 and 10.
Similar to the overall trend and comparison in Figure 8, the adaptive DBSCAN algorithm outperforms the iterative k'-means algorithm in three aspects: total generalized information loss (Figure 9(a)), average information loss per cluster (Figure 9(b)), and execution time (Figure 9(c)). Overall, the total generalized information loss and the average information loss per cluster of both trajectory clustering algorithms increase with \(k\), while the execution time decreases as \(k\) increases.
Compared with Figure 8, for both the adaptive DBSCAN algorithm and the iterative k'-means algorithm, the total information loss and the average information loss per cluster of the final anonymized dataset are relatively small after segmentation preprocessing by the Partition model, and the execution time is also reduced to varying degrees. The effect is especially evident in the per-cluster average information loss of the adaptive DBSCAN algorithm: as can be seen from Figure 9(b), compared with the unsegmented trajectory set in Figure 8(b), the information loss per cluster obtained by the adaptive DBSCAN algorithm decreases by about 86%, 73%, 71% and 70% for \(k\) values of 2, 4, 8 and 10, respectively.
Such a decrease occurs because the preprocessing of the Partition model makes the clustered trajectories closer together within clusters. Besides, the smaller the value of \(k\), the larger the number of clusters after segmentation and the closer the trajectories that make up each cluster, so the effect of the Partition model in reducing the within-cluster generalization information loss is all the more obvious.
The experimental results shown in Figure 8 and Figure 9 are consistent with the expectations of the KPDP design in this paper. The generalized information loss grows with the value of \(k\) because an increase in \(k\) directly leads to an increase in the number of trajectories in each cluster, resulting in a larger total information loss and average information loss per cluster. The superior performance of the adaptive DBSCAN algorithm in the three metrics is attributed to its ability to cluster trajectories that are close to each other more soundly and efficiently than the iterative k'-means algorithm, with lower time complexity.
A series of experiments proves that the way of adjusting the cluster centers in the k'-means algorithm is not fully applicable to the trajectory clustering process, while the adaptive DBSCAN algorithm forms each cluster by expanding density connections between trajectories, which not only reduces the information loss but also effectively speeds up the processing of the model. The effectiveness of adding a segmentation preprocessing step to both trajectory clustering algorithms is due to the fact that, after preprocessing, relatively long trajectories in the dataset are no longer aligned and merged with much shorter trajectories during trajectory clustering and generalization, so the information loss from the final generalization is reduced. In addition, because the long trajectories in the dataset are split into relatively short ones, the situation where two long trajectories are aligned with each other is largely avoided, so the execution time of the trajectory clustering algorithm is also shortened.
In summary, the adaptive DBSCAN algorithm and the trajectory set segmentation preprocessing step proposed in this paper show superior performance in controlled experiments under different scenarios, validating the design-time expectation of reduced generalization information loss and faster model processing. In the privacy-preserving phase against re-identification attacks, preprocessing the trajectories and clustering them with the adaptive DBSCAN algorithm to form anonymous trajectory datasets, as in Figure 9, has significant advantages in terms of data utility and running time for every \(k\) value.
Table 1. KPDP performance with and without the Partition model

Total Information Loss:

| k value | Clustering algorithm | Without partition | With partition | Reduction (%) |
| --- | --- | --- | --- | --- |
| k=2 | k'-means | 154263 | 150892 | 2.19 |
| k=2 | DBSCAN | 71840 | 40763 | 43.26 |
| k=4 | k'-means | 218881 | 175469 | 19.83 |
| k=4 | DBSCAN | 112913 | 105214 | 6.82 |
| k=8 | k'-means | 193291 | 204264 | -5.68 |
| k=8 | DBSCAN | 162031 | 150085 | 7.37 |
| k=10 | k'-means | 201961 | 239978 | -18.82 |
| k=10 | DBSCAN | 167921 | 164364 | 2.12 |

Average Information Loss Per Cluster:

| k value | Clustering algorithm | Without partition | With partition | Reduction (%) |
| --- | --- | --- | --- | --- |
| k=2 | k'-means | 1773.14 | 818.14 | 53.86 |
| k=2 | DBSCAN | 704.31 | 101.84 | 85.54 |
| k=4 | k'-means | 8755.24 | 3476.56 | 60.29 |
| k=4 | DBSCAN | 2171.4 | 590.36 | 72.81 |
| k=8 | k'-means | 21476.78 | 6970.07 | 67.55 |
| k=8 | DBSCAN | 6481.24 | 1865.16 | 71.22 |
| k=10 | k'-means | 22773.44 | 12142.007 | 46.68 |
| k=10 | DBSCAN | 8396.05 | 2530.06 | 69.87 |
## 7. Conclusion
In this paper, we proposed a trajectory privacy protection framework against re-identification attacks, which can effectively anonymize spatiotemporal trajectory datasets. We introduced a point-density-based trajectory segmentation preprocessing mechanism to enable accurate clustering and generalization of trajectories. Furthermore, we applied DBSCAN from machine learning to trajectory clustering and presented the adaptive DBSCAN algorithm, which minimizes the generalization information loss to acquire higher data utility while ensuring the k-anonymity of the generated trajectory dataset. Extensive experiments on a realistic dataset also showed the superiority of our approach in execution time compared with previous works.
|
2301.13467 | Perfect QCD -- a new Universal approach to soft QCD | The ideas presented in this proceeding aims to be a first step towards a
description of hadronic collisions where all soft processes are fundamentally
strongly coupled and the same Universal strongly coupled physics drives both
initial and final-state interactions. As it is not currently possible to derive
such a picture from first principles, instead, an attempt to generalize the
perfect liquid observation to a ``perfect QCD'' guiding principle is presented,
focusing on implications for particle production in small systems. The first
steps towards a microscopic model is taken by arguing that ``perfect QCD''
suggests that the screening in the initial state is so large that multi-parton
interactions are of little or no importance. Instead, a target and projectile
remnant is coherently excited and particle production is mainly driven by
radiation in a qualitative similar manner as $e^+e^- \rightarrow q\bar{q}$.
Finally, some of the possible implications of this ``excited remnant model''
are presented. It is argued that the time ordering of soft and hard physics can
explain the absence of jet quenching in small systems and that the coherence
scale of the projectile and target provides insights into what small systems
will exhibit flow. | Peter Christiansen | 2023-01-31T08:12:47Z | http://arxiv.org/abs/2301.13467v1 | # Perfect QCD - a new Universal approach to soft QCD
###### Abstract
The ideas presented in this proceeding aims to be a first step towards a description of hadronic collisions where all soft processes are fundamentally strongly coupled and the same Universal strongly coupled physics drives both initial and final-state interactions.
As it is not currently possible to derive such a picture from first principles, instead, an attempt to generalize the perfect liquid observation to a "perfect QCD" guiding principle is presented, focusing on implications for particle production in small systems. The first steps towards a microscopic model is taken by arguing that "perfect QCD" suggests that the screening in the initial state is so large that multi-parton interactions are of little or no importance. Instead, a target and projectile remnant is coherently excited and particle production is mainly driven by radiation in a qualitative similar manner as \(e^{+}e^{-}\to q\bar{q}\).
Finally, some of the possible implications of this "excited remnant model" are presented. It is argued that the time ordering of soft and hard physics can explain the absence of jet quenching in small systems and that the coherence scale of the projectile and target provides insights into what small systems will exhibit flow.
## 1 Introduction
The goal of this proceeding for the Winter Workshop 2022 is to present a new picture for hadronic collisions. To be precise, the focus in this paper is only on non-diffractive inelastic collisions and only the soft physics, which is expected to be responsible for bulk particle production. When hadronic collisions are mentioned in the following, it always refers to this type of collision unless another type is explicitly mentioned.
The motivation for doing this is the observation of several phenomena in small systems that have traditionally been associated with the formation of a quark-gluon plasma (QGP) in large systems, see, e.g., Refs. [3, 4] for an overview. These new phenomena can all be explained by the presence of large final-state interactions in small systems, and many excellent ideas have been presented for describing this with weakly coupled physics, see e.g., [5], but what seems to the author to be a fundamental flaw in these models is that a weakly coupled interaction leads to a non-vanishing mean free path so that the QGP-like effects will build up as the system grows and first dominate at a certain system size [5]. This means that QGP-like effects do not in a natural way extend down to the smallest systems, even if there is no indication in data of an onset [3, 4]. At the same time, a non-vanishing mean free path will introduce diffusion and dissipation effects that will supposedly modify the initial-state correlations, which the author is unaware of experimental evidence for, see e.g. C. A. Pruneau's contribution to these proceedings [6].
Figure 1: Illustration of how the initial soft scatterings are described in different models/pictures. Top: in Pythia, soft and hard interactions are modeled in the same way as leading-order perturbative processes and _multiple_ interactions occur in most pp collisions. The strings forming between color charges are not shown. Bottom: in the βperfect QCDβ picture a remnant of each nucleon is excited as a whole, as if there was only a _single_ interaction, and the picture is therefore denoted the βexcited remnant modelβ.
In this paper, the decision has been to take a fresh look at things from the perspective offered by the new measurements and try to bring forth a picture that is fundamentally strongly coupled with a vanishing mean free path so that large final-state effects are present in all systems and do not introduce diffusion or dissipation (are essentially reversible) thereby hopefully preserving correlations such as those introduced by string breakings or similar processes. In traditional pictures, "soft" can have two very different meanings:
1. The extrapolation from high-momentum transfers to low momentum transfers, e.g., using leading-order perturbative cross sections even in situations where next-to-leading order corrections are large
2. Phenomenological physics such as the Lund string model [7]
The approach in this paper is to claim that point 1 does not work, meaning that next-to-leading order corrections distort the leading-order picture, and the proposal is instead that "perfect QCD" is a Universal version of point 2 and can provide guidance in that way. This means that whenever soft is mentioned in the text, one should in principle be able to apply the "perfect QCD" principle. To help convince the reader that this leads to fundamentally different physics from that found in existing models, one of the main findings is already discussed here and illustrated in Fig. 1. In pp event generators, such as Pythia [1, 2], one typically treats the initial stages of pp collisions as two interacting parton gases where the scattering of each parton-parton interaction is motivated by perturbative (weakly coupled) QCD, Fig. 1 top. In the Color-Glass Condensate (CGC) model, not shown, one instead considers it as a weakly coupled interaction between dense gluon fields [8] that produce longitudinal Glasma tubes, Fig. 1. In both models the collision can involve one or more interactions and _the number of interactions is the main driving mechanism of the final-state multiplicity_. In the picture motivated in this paper, one considers a strongly coupled scenario where the color field of each projectile parton is neutralized by the target partons. It is argued that this instead results in the remnants of the projectile and the target being coherently excited, corresponding essentially to a _single_ soft interaction. This gives rise to two semi-independent color fields, Fig. 1 bottom, which would mean that most of the particle production is driven by final-state radiation from the colored target and projectile remnants, similar to \(e^{+}e^{-}\to q\bar{q}\).
Concretely, the idea of this paper is to extend the experimental observation that the QGP behaves like a perfect liquid to a "perfect QCD" principle that can guide our understanding of particle production in general. The goal is not to come up with a full model, but to demonstrate that it is possible using the proposed "perfect QCD" principle to obtain surprising insights into particle production where the physics and the explanations for observed phenomena are very different from those found in existing models, such as Pythia and the CGC.
## 2 Perfect QCD
One of the most remarkable discoveries of the heavy-ion program at RHIC and LHC is that the Quark-Gluon Plasma (QGP) behaves as a perfect liquid [9, 10, 11, 12, 13, 14, 15]. The shear-viscosity-to-entropy density (\(\eta/s\)) is as low as possible [16]. This means that the build up of flow is almost deterministic, which has enabled the precise measurement of fluctuations in the initial distribution of matter, e.g., Refs. [17, 18]. At the Winter Workshop it was further shown how the same minimal \(\eta/s\) is also obtained when analyzing balance functions and momentum correlations, see C. A. Pruneau's contribution to these proceedings [6].
The perfect nature of the liquid seems to indicate that it is very fundamental and since it is observed in all hadronic collisional systems (pp, p-Pb, and Pb-Pb collisions), see for example Refs. [19, 20] for small systems, one could hope that it provides a deep insight into QCD.
Based on the characteristics of the perfect liquid it is proposed that "perfect QCD" has to have the following two characteristics:
* Strongly interacting
* Minimal entropy production
The minimal entropy production comes from the observation that the hydrodynamic description of the QGP is as close to ideal (reversible) as it can be and means that dissipation and diffusion can play no significant role in the description of the system.
## 3 The Perfect QCD Picture of Particle Production
It might seem impossible to derive a microscopic picture from a strongly interacting soft QCD model because one loses the perturbative guidance, but the surprise is that the proposed picture is extremely simple. The "perfect QCD" principle dictates that the entropy production during the initial collisions should be as small as possible, yet strongly interacting, and this suggests that all that happens is the exchange of a single soft gluon so that only color and essentially no momentum is exchanged. As the interacting hadrons are of course made up of partons, this would require that the screening in the initial state is so strong for the initial interactions that the soft parton-parton (and/or CGC equivalent) interactions are suppressed to a degree where they can be neglected. One will of course have parton-parton interactions for very large momentum transfers but they are not of interest here where the focus is on bulk production.
Let us first treat the rest of the collision, ignoring possible radiation, using the Lund string model [7], which, as it is derived from the confining long-range part of the QCD potential, is a strongly coupled model. In the Lund string model, strings will form between colors and anti-colors that
eventually break, producing hadrons uniformly in rapidity. In this case, two strings will form as the gluon carries both a color and anti-color. Let us assume that all the energy of each proton is carried by the color and the anti-color systems. If both have half the energy, the total string length will be \(\approx\)\(4(y_{\rm beam}-\log 2)\) while if one color (or anti-color) has all the energy one can supposedly form a string of length \(2y_{\rm beam}\) (this must be the minimal length for the color field to stretch between the target and projectile). As the average number of particles produced by a string is proportional to the string length [7], the "perfect QCD" principle tells us that nature will take the second solution. This means that instead of having two remnants with a similar amount of energy, one will have a "valence"-like remnant with almost all the energy and a "sea"-like remnant with almost no energy. This is reminiscent of the BGK picture [21], and so it is natural to propose that the "valence" remnant in one proton is color-coupled to the "sea" remnant in the other proton, and vice versa, so that one in some sense has two semi-independent systems carrying approximately half the total initial energy each.
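To make the counting explicit, here is a sketch using the standard estimate that a color system of energy \(E\) and transverse mass \(m_{\rm T}\) reaches rapidity \(y\approx\ln(2E/m_{\rm T})\), so that \(y_{\rm beam}=\ln(2E_{\rm beam}/m_{\rm T})\) (the \(\log\) above denotes the natural logarithm):

\[y_{\max}\!\left(\tfrac{E_{\rm beam}}{2}\right)\approx\ln\frac{2\,(E_{\rm beam}/2)}{m_{\rm T}}=y_{\rm beam}-\ln 2,\]

so two strings, each spanning \([-(y_{\rm beam}-\ln 2),\,+(y_{\rm beam}-\ln 2)]\), have a total length of \(4(y_{\rm beam}-\ln 2)\), while a single string stretched between full-energy endpoints spans only \(2y_{\rm beam}\), which is the shorter of the two whenever \(y_{\rm beam}>2\ln 2\), i.e., at all collider energies.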
Let us finally try to give a partonic picture of how the "perfect QCD" picture can be understood. As the two nucleons penetrate each other at high energy, the partons inside them interact strongly, but the claim is that they interact in a way that screens the partonic interactions. However, this screening can only happen in a certain regime. If \(x\) denotes the usual four momentum fraction then one can maximally "organize" the nucleon into \(n\approx 1/x\) constituents. Screening will be impossible when the four momentum transfer, \(Q^{2}\), is very large because one can resolve individual partons (the hard scattering limit), or when \(x\) is large so that the number of constituents is small. The latter argument is why nucleon remnants will be excited as a whole.
In the current picture, \({\rm d}N_{\rm ch}/{\rm d}\eta\) at \(\eta=0\) would be independent of \(\sqrt{s}\) as all the energy will go to extend the strings in rapidity. What has been ignored is radiation: the color charge carrying most of the energy is, as QCD is strongly interacting, very likely to emit soft or collinear radiation. How to calculate this radiation is not trivial, but one can at least note that one qualitatively get a system very similar to what one has for \(e^{+}e^{-}\to q\bar{q}\) (denoted \(e^{+}e^{-}\) in the following). Comparing particle production in \(e^{+}e^{-}\) collisions to that of pp collisions, one finds that the former produces _more_ particles on the average [11]. The common understanding is that it is possible for part of the proton to escape as a color neutral object, taking away around 50% of the energy [22]. Based on the observed particle production in \(e^{+}e^{-}\), it is concluded that there is no fundamental reason one should not be able to create the observed particle production via radiation also in pp, pA, and AA collisions.
To recap, the general microscopic "perfect QCD" picture of pp, pA, and AA collisions will be that the soft initial interactions will excite a remnant of each nucleon in a "projectile" coherently and that the main particle production at high energy collisions is driven by final-state radiation. For this reason the picture will be denoted the "excited remnant model". This might sound like the Dual Parton Model but it is important to note that the Dual Parton Model contains MPIs [23].
In the limit that particle production is dominated by radiation, the color-connections to the "sea" systems in the "target" can be ignored and one can therefore factorize the soft particle production into \(N_{\rm part}\) semi-independent terms. Semi-independent, because there must be some dependence on the nucleon-nucleon impact parameter to explain the slightly increased particle production per participant in AA collisions.
### An Illustration of Particle Production in pp Collisions
The main goal here is to discuss small systems. In these systems, e.g., pp collisions, the full "perfect QCD" picture of a collision is:
1. the initial interactions produce up to three semi-independent systems: * coherently excited target and projectile remnants * possible color-neutral target and projectile remnants that act as spectators (escape with energy along beam direction) * possible hard parton-parton scatterings
2. the excited remnants radiate gluons
3. the color fields decay into partons
Figure 2: \({\rm d}N_{\rm ch}/{\rm d}\eta\) measured in \(\sqrt{s}=200\) GeV/\(c\) pp collisions by UA5 for NSD events and for events with different final-state charged particle multiplicities, \(n\). The data have been read off from the published figures [24]. As the figure is just meant to illustrate a trend, the statistical uncertainties have not been included for clarity.
4. final-state partonic interactions: flow, strangeness enhancement
5. hadronization
6. possible final-state hadronic rescattering
One could in principle try to implement a generator along these lines but the goal here is to illustrate the picture using UA5 data [24]. Fig. 2 shows the \(\mathrm{d}N_{\mathrm{ch}}/\mathrm{d}\eta\) measured by UA5 for NSD events as well as for multiplicity selected events. In low-multiplicity events, \(\mathrm{d}N_{\mathrm{ch}}/\mathrm{d}\eta\) is flat as one would expect for a single long string. As the multiplicity grows, one observes a narrowing of \(\mathrm{d}N_{\mathrm{ch}}/\mathrm{d}\eta\), which in the perfect QCD picture should be caused by the radiation adding shorter and shorter (less energetic) strings. In this way the "excited remnant model" is at least qualitatively consistent with the observed trends by UA5.
## 4 Insights and Predictions for Small Systems
In this section, the hope is to demonstrate for the reader that the perfect-QCD picture of particle production can provide many new insights and predictions.
### A Simple Explanation for the Absence of Jet Quenching in Small Systems
One can immediately notice that, if the time scales involved with the hard interactions are shorter than the formation time for step 2 ("the excited remnants radiate gluons") as one would imagine from the scales of the momentum transfers involved, then one can understand why there is no jet quenching in small systems even if there is a relation between flow and jet quenching in a large system. The medium simply has not been produced yet when the jet propagates. This seems very attractive to the author as this is in line with experimental findings, see, for example, Ref. [25], and it is hard to explain in most existing models.
### Flow in pp Collisions \(\gg\) Flow in \(\mathrm{e^{+}e^{-}}\) Collisions
It should be clear from the way the "excited remnant model" works that it "postdicts" that the particle production in pp collisions and \(e^{+}e^{-}\) collisions should be very similar because in this model, and unlike traditional MPI-based models, the growth with \(\sqrt{s}\) is in both cases driven by radiation. Indeed, this surprising similarity has been noted and discussed much in the past by experimental collaborations [11, 26], even if it was never theoretically understood.
It can therefore be surprising that while one observes strong flow in pp collisions, one does not observe it in \(e^{+}e^{-}\) collisions [27]. However, there could be a simple explanation for that. Since the "excited remnant model" postulates that for each nucleon a single "valence" remnant is excited as a whole, it is clear that the radiation in step 2 will have to have very low transverse momentum, \(p_{\mathrm{T}}<1/R\), where \(R\) is the size of the excited remnant. As the \(p_{\mathrm{T}}\) is so low, the color fields will have to stack and so one will naturally get a quite dense system of parallel color fields with a large energy density. In the "perfect QCD" picture these color fields will be strongly interacting and so they will immediately start to build up collective flow. This makes a big difference when comparing to \(e^{+}e^{-}\), where all the energy is located with a single parton and so the radiated gluons can and will typically have very large \(p_{\mathrm{T}}\). This means that most energy will be radiated away from the initial color field and so there is little time where the system is dense and can build up collective flow.
### How to Control Flow in Ultra-Small Systems
In the previous subsection it was argued that for small systems, the size of the excited remnant determines the flow that can be built up in the final system. This is naturally in line with the observation of flow in Ultra-Peripheral Collisions (UPCs), where the photon field of one nucleus interacts with the other nucleus, because in this case the photon field has a long wavelength since it is emitted coherently by the protons in the nucleus. Recall that photons can interact as a "hadronic" system by fluctuating into a \(q\bar{q}\) pair, which will have a size that reflects the photon four momentum (\(Q^{2}\)). ATLAS has observed flow in UPC Pb-Pb events [28] and CMS has reported non-zero \(v_{2}\{2\}\) in p-Pb events [29], which is in line with the ideas presented here.
By going to electron-proton or electron-ion collisions one can in principle measure the wavelength of the photon from the change in the electron four-momentum. One can in this way select different sizes of the excited remnants and, if the picture is true, control the \(p_{\mathrm{T}}\) radiation and switch flow on (low \(Q^{2}\)) and off (high \(Q^{2}\)). ZEUS and H1 have reanalyzed old data both for low and high \(Q^{2}\), but neither ZEUS [30, 31] nor H1 [32] observes any signatures of collective flow. This clearly goes against the ideas presented here. However, it seems that if there is flow in UPCs at the LHC, then there would also likely be flow in low-\(Q^{2}\) ep collisions at HERA, and vice versa. On the other hand, one knows that flow in small systems is very hard to detect. Looking from the outside, it would be good if one could resolve the situation so that one is as certain as possible that similar procedures have been used before concluding too strongly from the current results.
## 5 Conclusions
An attempt to generalize the perfect-liquid nature from flow to particle production has been presented. The "perfect QCD" principle has been proposed to be a universal principle for soft QCD that applies both in the initial and final state of hadronic collisions. Using the idea of minimal entropy production, a microscopic picture, the "excited remnant model", has been presented. In this microscopic picture, the screening as the two hadronic systems penetrate is so large that sub-collisions between constituents do not occur, in contrast to most existing pictures, e.g., MPI and CGC based ones.
No attempt has been made to prove the "perfect QCD" principle in this paper, but several surprising insights have been provided, such as simple arguments for why jet quenching is absent in small systems and for which collisional systems will exhibit flow. The hope is that the principle can be used to provide novel insights into a wide range of topics, for example, jet quenching in large systems and the relation between diffractive and non-diffractive physics.
## 6 Acknowledgements
The author would like to thank Adrian Nassirpour for many valuable comments on earlier versions of similar manuscripts.
|
2309.08103 | Testing Bell inequality through $h\to\tau\tau$ at CEPC | The decay of Higgs boson into two spin-1/2 particles provides an ideal system
to reveal quantum entanglement and Bell-nonlocality. Future $e^+e^-$ colliders
can improve the measurement accuracy of the spin correlation of tau lepton
pairs from Higgs boson decay. We show the testability of Bell inequality
through $h\to \tau\tau$ at Circular Electron Positron Collider (CEPC). Two
realistic methods of testing Bell inequality are investigated, i.e.,
T\"{o}rnqvist's method and Clauser-Home-Shimony-Holt (CHSH) inequality. In the
simulation, we take into account the detector effects of CEPC including
uncertainties for tracks and jets from $Z$ boson in the production of
$e^+e^-\to Zh$. Necessary reconstruction approaches are described to measure
quantum entanglement between $\tau^+$ and $\tau^-$. Finally, we show the
sensitivity of CEPC to the Bell inequality violation for the two methods. | Kai Ma, Tong Li | 2023-09-15T01:48:52Z | http://arxiv.org/abs/2309.08103v2 | # Testing Bell inequality through \(h\to\tau\tau\) at CEPC
###### Abstract
The decay of Higgs boson into two spin-1/2 particles provides an ideal system to reveal quantum entanglement and Bell-nonlocality. Future \(e^{+}e^{-}\) colliders can improve the measurement accuracy of the spin correlation of tau lepton pairs from Higgs boson decay. We show the testability of Bell inequality through \(h\to\tau\tau\) at Circular Electron Positron Collider (CEPC). Two realistic methods of testing Bell inequality are investigated, i.e., Tornqvist's method and Clauser-Horne-Shimony-Holt (CHSH) inequality. In the simulation, we take into account the detector effects of CEPC including uncertainties for tracks and jets from \(Z\) boson in the production of \(e^{+}e^{-}\to Zh\). Necessary reconstruction approaches are described to measure quantum entanglement between \(\tau^{+}\) and \(\tau^{-}\). Finally, we show the sensitivity of CEPC to the Bell inequality violation for the two methods.
###### Contents
* I Introduction
* II Local Quantum Model and Bell Inequality
* II.1 Tornqvist's method
* II.2 Clauser-Horne-Shimony-Holt Inequality
* III Measurements at Future Lepton Colliders
* III.1 Simulation and Detector Effects
* III.2 Reconstruction Method
* III.3 Reconstruction by using Impact Parameters
* IV Sensitivity of CEPC to the Bell Inequality Violation
* V Summary
## I Introduction
It is well known that the most important debate on whether Quantum Mechanics (QM) is a complete local theory is the challenge raised by Einstein, Podolsky, and Rosen (EPR), known as the EPR paradox [1]. The interpretation of the EPR paradox in local hidden variable theory (LHVT) shows the contradiction of LHVT with QM and presents the non-local nature of QM. Later on, Bohm et al. proposed a realistic experiment with a system of two spin-1/2 particles to illustrate the EPR paradox [2]. Based on this consideration, Bell established a theorem that the two particles' spin correlation satisfies a Bell inequality (BI) in realistic LHVT [3]. By contrast, the QM predictions may violate this inequality in certain regions of parameter space. Clauser, Horne, Shimony and Holt (CHSH) later generalized the original Bell inequality and established a more practical inequality [4]. The test of the Bell inequality delivers a direct verdict on whether QM is a complete local theory [5].
In the past years, the violation of this Bell inequality has been observed in many low-energy experiments (such as optical experiments) [6; 7; 8; 9; 10; 11; 12; 13], forming the foundation of modern quantum information theory. The predictions of QM have proved to be consistent with the results of these experiments. However, testing the Bell inequality in high-energy physics, and thereby the completeness of QM beyond the electromagnetic interaction regime, remains a challenge (see the review Ref. [14] and references therein). At \(e^{+}e^{-}\) colliders, the testability of BI was first suggested by using the polarization correlation in the process \(e^{+}e^{-}\to\Lambda\bar{\Lambda}\to\pi^{-}p\pi^{+}\bar{p}\)[15; 16] or \(e^{+}e^{-}\to Z\to\tau^{+}\tau^{-}\)[17; 18; 19]. Based on the CHSH method, dedicated proposals were also raised to test the BI in the final states of a \(t\bar{t}\) pair [20; 21; 22; 23; 24; 25; 26; 27] or two weak gauge bosons [28; 29; 30; 31; 32; 33; 34; 35; 36] at the Large Hadron Collider (LHC). Nevertheless, the spin-0 state formed by a pair of spin-1/2 particles in a process such as \(e^{+}e^{-}\to\Lambda\bar{\Lambda}\to\pi^{-}p\pi^{+}\bar{p}\) has the largest entanglement.
The Higgs boson is the only spin-0 elementary particle in the Standard Model (SM) and can play as a natural spin singlet state to test LHVT through the Bell inequality at high energies. The properties of SM Higgs boson will be measured to high precision at future \(e^{+}e^{-}\) colliders such as the Circular Electron Positron Collider (CEPC) [37]. We thus propose to test the Bell inequality at CEPC through the Higgsstrahlung process with subsequent decay \(h\to\tau^{+}\tau^{-}\)[38]
\[e^{+}e^{-}\to Zh\to Z\tau^{+}\tau^{-}\,. \tag{1}\]
The tau lepton pair is correlated in the decay process, and Bell inequality (or quantum entanglement) can be tested by measuring their spin correlation. However, spin information of the tau leptons can only be partially inferred from its decay particle. Here we consider only the tau leptons followed by the 1-prong decay mode \(\tau^{\pm}\to\pi^{\pm}\nu_{\tau}\) which is the best spin analyzer for the tau lepton polarization. In principle, the other decay modes can also be employed. However, it is more challenging in practice because of the kinematic reconstruction of the tau lepton as well as limited spin analyzing power. In order to have higher statistics, the associated \(Z\) boson will be reconstructed by both its leptonic and hadronic decay modes. Furthermore, both Tornqvist's method [15] and the CHSH method [4] are explored to evaluate the violation of Bell inequality. To have a more realistic estimation on the experimental sensitivities, we investigate the kinematic reconstruction and simulate the detector effects to reveal the quantum entangled spin correlations in the decay of tau pairs.
This paper is organized as follows. In Sec. II, we first outline the LHVT and Bell inequality. Then we show Tornqvist's method and the CHSH method in terms of the polarization correlation in decay \(h\to\tau^{+}\tau^{-}\to\pi^{+}\bar{\nu}_{\tau}\pi^{-}\nu_{\tau}\). In Sec. III, we describe the simulation of process \(e^{+}e^{-}\to Zh\to Z\tau^{+}\tau^{-}\) and discuss the detector effects as well as reconstruction methods. The results of projected sensitivity to the Bell inequality violation are given in Sec. IV. Finally, in Sec. V we summarize our conclusions.
## II Local quantum model and Bell inequality
In this section, we describe the original and generalized expressions of Bell inequality and the realistic methods of testing it in high-energy physics.
In the LHVT with the hidden variable being \(\lambda\), the Bell inequality can be phased in terms of the polarization correlation
\[P(\vec{a},\vec{b})=\int d\lambda\,q(\lambda)\cdot\mathcal{P}_{A}(\vec{a}, \lambda)\cdot\mathcal{P}_{B}(\vec{b},\lambda)\,, \tag{2}\]
where \(\mathcal{P}_{A(B)}(\vec{x},\lambda)\) is the probability of the fermion \(A\) (or \(B\)) with spin along the direction \(\vec{x}=\vec{a}\,(\vec{b})\) for given hidden variable \(\lambda\), and \(q(\lambda)\) is the corresponding probability distribution of the hidden variable \(\lambda\). The original expression of Bell inequality refers to three independent spatial directions \(\vec{a}\), \(\vec{b}\) and \(\vec{c}\) as
\[\big{|}P(\vec{a},\vec{b})-P(\vec{a},\vec{c})\big{|}\leq 1+P(\vec{b},\vec{c})\;. \tag{3}\]
On the other hand, in QM the quantum average of the correlation operator \(\mathcal{O}(\vec{a},\vec{b})\equiv\left[\vec{\sigma}^{A}\cdot\vec{a}\right] \,\left[\vec{\sigma}^{B}\cdot\vec{b}\,\right]\) is given by
\[P(\vec{a},\vec{b})=\big{\langle}00\big{|}\big{[}\vec{\sigma}^{A}\cdot\vec{a} \big{]}\,\left[\vec{\sigma}^{B}\cdot\vec{b}\,\right]\big{|}00\big{\rangle}=- \vec{a}\cdot\vec{b}\,, \tag{4}\]
where \(\langle 00|\) or \(|00\rangle\) refers to a singlet state of the total spin. After inserting the QM prediction Eq. (4) in Eq. (3), the Bell inequality Eq. (3) may be violated in some region of phase space. However, in realistic investigations, the spin correlation of the two fermions \(A\) and \(B\) can only be transferred to the kinematics of their decay products. In terms of \(h\to\tau^{+}\tau^{-}\) and tau leptons' hadronic decay mode \(\tau^{\pm}\to\pi^{\pm}\nu_{\tau}\), we will describe two existing methods to perform the test of Bell inequality at high energy colliders.
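The singlet expectation value in Eq. (4) can be checked numerically. The following Python sketch builds the two-qubit singlet state and the operator \(\big{[}\vec{\sigma}^{A}\cdot\vec{a}\big{]}\,\big{[}\vec{\sigma}^{B}\cdot\vec{b}\,\big{]}\) explicitly and verifies the quantum average \(-\vec{a}\cdot\vec{b}\) for random directions; it is a consistency check, not part of the analysis chain.

```python
import numpy as np

# Pauli matrices stacked as an array of shape (3, 2, 2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
pauli = np.array([sx, sy, sz])

# Spin singlet |00> = (|up,down> - |down,up>)/sqrt(2)
up, down = np.array([1, 0], complex), np.array([0, 1], complex)
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

def correlation(a, b):
    """<00| (sigma_A . a) x (sigma_B . b) |00>, Eq. (4)."""
    op = np.kron(np.einsum('i,ijk->jk', a, pauli),
                 np.einsum('i,ijk->jk', b, pauli))
    return float(np.real(singlet.conj() @ op @ singlet))

rng = np.random.default_rng(0)
a = rng.normal(size=3); a /= np.linalg.norm(a)
b = rng.normal(size=3); b /= np.linalg.norm(b)
print(correlation(a, b), -float(np.dot(a, b)))  # the two numbers agree
```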
### Tornqvist's method
In Ref. [15], Tornqvist suggested to test the BI by using the polarization correlation in the process
\[e^{+}e^{-}\to\Lambda\bar{\Lambda}\to\pi^{-}p\pi^{+}\bar{p}\;. \tag{5}\]
The parent particle of \(\Lambda\bar{\Lambda}\) could be either the spin-0 \(\eta_{c}\) or the spin-1 \(J/\psi\). Although there was debate as to whether such a proposal contains controversial assumptions [18; 19], it is a practical and attractive experiment to test the Bell inequality. We instead establish the polarization correlation of the decay \(h\to\tau^{+}\tau^{-}\to\pi^{+}\bar{\nu}_{\tau}\pi^{-}\nu_{\tau}\). Since the Higgs boson is a scalar and the \(\tau\)-lepton is a spin-1/2 particle, the decay process \(h\to\tau^{+}\tau^{-}\) provides an ideal system for testing the Bell inequality. The joint spin density matrix for the \(\tau^{+}\tau^{-}\) system is given by
\[\rho_{\tau\bar{\tau}}=\frac{1}{4}\Big{(}1-\vec{\sigma}_{\tau}\cdot\vec{\sigma} _{\bar{\tau}}\Big{)}\,, \tag{6}\]
which means the state with parallel \(\vec{\sigma}_{\tau}\) and \(\vec{\sigma}_{\bar{\tau}}\) vanishes because of the spin-zero condition. For the correlation operator \(\mathcal{O}(\vec{a},\vec{b})\), one can easily find that the probability is given as
\[P(\vec{a},\vec{b})=\big{\langle}00\big{|}\rho_{\tau\bar{\tau}}\mathcal{O}(\vec {a},\vec{b})\big{|}00\big{\rangle}=-\vec{a}\cdot\vec{b}\,. \tag{7}\]
However, spin states of the \(\tau\)-leptons cannot be measured directly at a collider, and can only be accessed through the angular distributions of their decay products. Here we only investigate the 1-prong decay mode \(\tau^{-}\to\pi^{-}\nu_{\tau}\), in which the momentum direction of the charged pion (or equivalently the neutrino) is correlated with the spin direction of the tau lepton. Thus, this decay mode has the largest spin-analyzing power among the tau decay modes. The decay amplitude of the process \(\tau^{-}\to\pi^{-}\nu_{\tau}\) in the rest frame of the mother particle can be written as
\[\mathcal{M}_{\tau}=\frac{1}{\sqrt{4\pi}}\big{(}S+P\vec{\sigma}_{\tau}\cdot\vec {a}\big{)}\,, \tag{8}\]
where \(\vec{a}\) is the unit vector along the \(\pi^{-}\) momentum direction in the rest frame of \(\tau^{-}\), and \(S\) and \(P\) are the \(S\)- and \(P\)-wave amplitudes, respectively. A similar expression is valid for the decay process \(\tau^{+}\to\pi^{+}\bar{\nu}_{\tau}\) as well. Then, the probability of having \(\pi^{-}\) flying along \(\vec{a}\) and \(\pi^{+}\) flying along \(\vec{b}\) (\(\vec{b}\) is the unit vector along the \(\pi^{+}\) momentum direction in the rest frame of \(\tau^{+}\)) becomes
\[\widetilde{P}(\vec{a},\vec{b})=\big{\langle}00\big{|}\rho_{\tau\bar{\tau}} \big{[}\mathcal{M}_{\tau}\mathcal{M}_{\bar{\tau}}\big{]}^{\dagger}\,\big{[} \mathcal{M}_{\tau}\mathcal{M}_{\bar{\tau}}\,\big{]}\,\big{|}00\big{\rangle}= \left[\frac{1}{4\pi}\Big{(}\big{|}S\big{|}^{2}+\big{|}P\big{|}^{2}\Big{)} \right]^{2}\Big{(}1+\alpha^{2}\vec{a}\cdot\vec{b}\Big{)}\,, \tag{9}\]
where
\[\alpha=-\frac{2\Re SP^{*}}{\left|S\right|^{2}+\left|P\right|^{2}}\approx 0.573\,. \tag{10}\]
The above value is obtained by fitting in our numerical simulation. One can see that \(\widetilde{P}(\vec{a},\vec{b})\) is a partial measurement of the spin states of the \(\tau\)-lepton pair. Its normalized value \(\widetilde{P}^{N}(\vec{a},\vec{b})\) is related to \(P(\vec{a},\vec{b})\) by the following relation
\[P(\vec{a},\vec{b})=\frac{1}{\alpha^{2}}\Big{[}1-\widetilde{P}^{N}(\vec{a},\vec {b})\Big{]}\,. \tag{11}\]
The normalized differential cross section is given as
\[\frac{1}{\sigma}\frac{d\sigma}{d\cos\theta_{ab}}=\frac{1}{2}\widetilde{P}^{N} (\vec{a},\vec{b})=\frac{1}{2}\left[1-\alpha^{2}\,P(\vec{a},\vec{b})\right]\,, \tag{12}\]
where \(\cos\theta_{ab}\equiv\vec{a}\cdot\vec{b}=-P(\vec{a},\vec{b})\). On the other hand, hidden variable theory predicts [15]
\[\left|P(\vec{a},\vec{b})\right|\leq 1-\frac{2}{\pi}\theta_{ab}\,,\ \ \theta_{ab}\in[0,\pi]\;. \tag{13}\]
Then we have the following classical region satisfying the Bell inequality
\[\begin{cases}\frac{1}{2}-\alpha^{2}\Big{(}\frac{1}{2}-\frac{ \theta_{ab}}{\pi}\Big{)}\leq\frac{1}{\sigma}\frac{d\sigma}{d\cos\theta_{ab}} \leq\frac{1}{2}+\alpha^{2}\Big{(}\frac{1}{2}-\frac{\theta_{ab}}{\pi}\Big{)}\,& \theta_{ab}\in[0,\pi/2]\\ \frac{1}{2}+\alpha^{2}\Big{(}\frac{1}{2}-\frac{\theta_{ab}}{\pi}\Big{)}\leq \frac{1}{\sigma}\frac{d\sigma}{d\cos\theta_{ab}}\leq\frac{1}{2}-\alpha^{2} \Big{(}\frac{1}{2}-\frac{\theta_{ab}}{\pi}\Big{)}\,&\theta_{ab}\in(\pi/2,\pi]\end{cases}\;. \tag{14}\]
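To make the comparison in Eq. (14) concrete, the short Python sketch below evaluates the QM prediction of Eq. (12) against the classical band of Eq. (14) over \(\theta_{ab}\in[0,\pi]\), using \(\alpha\approx 0.573\) from Eq. (10); the grid size is an arbitrary choice.

```python
import numpy as np

ALPHA2 = 0.573**2  # tau -> pi nu spin-analyzing power squared, Eq. (10)

theta = np.linspace(0.0, np.pi, 501)
qm = 0.5 * (1.0 + ALPHA2 * np.cos(theta))     # Eq. (12) with P = -cos(theta)
band = 0.5 + ALPHA2 * (0.5 - theta / np.pi)   # one edge of the band, Eq. (14)
lo = np.minimum(band, 1.0 - band)             # the other edge is 1 - band
hi = np.maximum(band, 1.0 - band)

violated = (qm < lo) | (qm > hi)
print(f"QM prediction leaves the LHVT band over "
      f"{violated.mean():.0%} of theta_ab in [0, pi]")
```

Except at the endpoints and at \(\theta_{ab}=\pi/2\), where the curves touch, the QM prediction lies outside the classical band; this is the shape the measured \(\cos\theta_{\pi\pi}\) distribution is tested against in Sec. IV.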
### Clauser-Horne-Shimony-Holt Inequality
Clauser, Horne, Shimony and Holt (CHSH) generalized the original Bell inequality Eq. (3) by considering general properties of the quantum density matrix of a system of two spin-1/2 particles [4]. The density matrix of a quantum state with two spin-1/2 particles can be expressed in general as
\[\rho=\frac{1}{4}\left[\mathbb{I}_{A}\otimes\mathbb{I}_{B}+A_{i}\cdot\big{(} \sigma_{A,i}\otimes\mathbb{I}_{B}\big{)}+B_{j}\cdot\big{(}\mathbb{I}_{A} \otimes\sigma_{B,j}\big{)}+C_{ij}\big{(}\sigma_{A,i}\otimes\sigma_{B,j}\big{)}\right] \tag{15}\]
where \(\sigma_{A(B),i}\) and \(\mathbb{I}_{A(B)}\) are Pauli matrices and the unit \(2\times 2\) matrix for the particle \(A\) (\(B\)), respectively. The Bell operator associated with the quantum CHSH inequality is defined as
\[\mathcal{B}_{\text{CHSH}}=\vec{a}\cdot\vec{\sigma}_{A}\otimes\big{(}\vec{b}+ \vec{b}^{\prime}\big{)}\cdot\vec{\sigma}_{B}+\vec{a}^{\prime}\cdot\vec{\sigma} _{A}\otimes\big{(}\vec{b}-\vec{b}^{\prime}\big{)}\cdot\vec{\sigma}_{B}\,, \tag{16}\]
where \(\vec{a}\), \(\vec{a}^{\prime}\), \(\vec{b}\), \(\vec{b}^{\prime}\) are unit vectors. Then the CHSH inequality is given by [39]
\[\left|\text{Tr}(\rho\mathcal{B}_{\text{CHSH}})\right|\leq 2\;. \tag{17}\]
Again, in practice it is hard to test the above inequality directly because of the challenge of measuring the spin directions \(\vec{a}\), \(\vec{a}^{\prime}\), \(\vec{b}\), \(\vec{b}^{\prime}\). Alternatively, the matrix with the following coefficients
\[C_{ij}=\text{Tr}\big{[}\rho\sigma_{i}\otimes\sigma_{j}\big{]}\:, \tag{18}\]
can provide an indirect inequality. It was shown that if the sum of the two largest eigenvalues of the matrix \(U=C^{T}C\) is larger than 1, then the CHSH inequality is violated [39].
At colliders, the density matrix can be estimated by angular distributions of the two spin-1/2 particles' decay products. The normalized differential cross section can be generally parameterized as [40]
\[\frac{\sigma^{-1}d\sigma}{d\cos\theta_{A,i}d\cos\theta_{B,j}}=\frac{1}{4}\Big{[} 1+A_{i}\cos\theta_{A,i}+B_{j}\cos\theta_{B,j}+C_{ij}\cos\theta_{A,i}\cos\theta _{B,j}\Big{]}\, \tag{19}\]
where \(\theta_{A(B),i(j)}\) are the polar angles of the charged particles \(A\) (\(B\)) from the decays of their mother particles, measured from the \(i(j)\)-th axis. In our case of \(h\rightarrow\tau^{+}\tau^{-}\rightarrow\pi^{+}\bar{\nu}_{\tau}\pi^{-}\nu_{\tau}\), the cosine quantities of the above polar angles are defined as
\[\cos\theta_{\pi^{+},i}=\hat{p}_{\pi^{+}}\cdot\hat{i}\,\quad\cos\theta_{\pi^{-},j} =\hat{p}_{\pi^{-}}\cdot\hat{j}\, \tag{20}\]
where the unit vectors \(\hat{i}\) and \(\hat{j}\) are defined in the rest frames of the \(\tau^{+}\) and \(\tau^{-}\), respectively. They belong to a chosen orthonormal basis \(\hat{j}\in\{\hat{k},\hat{r},\hat{n}\}\) and satisfy the relation \(\hat{i}=-\hat{j}\). More precisely, we define a unit vector \(\hat{k}\) as the direction of the \(\tau^{-}\) momentum in the rest frame of the Higgs boson. In the rest frame of the \(\tau^{-}\) lepton, we define a unit vector \(\hat{r}\) in the decay plane of the \(\tau^{-}\) lepton and perpendicular to \(\hat{k}\), and a unit vector \(\hat{n}=\hat{k}\times\hat{r}\). It was shown that the matrix \(C\) can be estimated as [40; 21]
\[C_{ij}=-9\int d\cos\theta_{A,i}d\cos\theta_{B,j}\frac{\sigma^{-1}d\sigma}{d \cos\theta_{A,i}d\cos\theta_{B,j}}\cos\theta_{A,i}\cos\theta_{B,j}. \tag{21}\]
Then, one can diagonalize the spin correlation matrix \(C^{T}C\) and find the two largest eigenvalues to test the CHSH inequality. Since the Higgs boson is considered to be on-shell and its spin is zero, there are no invariant-mass or orientation dependencies, in contrast to the \(t\bar{t}\) final states in Ref. [21].
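A minimal numerical sketch of this procedure follows: it implements the estimator of Eq. (21) and the eigenvalue test on \(U=C^{T}C\), and validates the estimator on a toy sample for a single axis pair drawn from a density \(\propto(1-c\,xy)\), chosen so that \(-9\langle xy\rangle\) recovers \(c\). The sign conventions follow the \(\hat{i}=-\hat{j}\) axis choice above; the toy density and sample size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate_C(cosA, cosB):
    """C_ij = -9 <cos(theta_A,i) cos(theta_B,j)>, Eq. (21); cosA, cosB are
    (N, 3) arrays of direction cosines in the {k, r, n} bases."""
    return -9.0 * (cosA.T @ cosB) / len(cosA)

def chsh_sum(C):
    """Sum of the two largest eigenvalues of U = C^T C;
    a value > 1 signals violation of the CHSH inequality."""
    eig = np.sort(np.linalg.eigvalsh(C.T @ C))[::-1]
    return eig[0] + eig[1]

# Toy validation on one axis pair: draw (x, y) from the density
# (1 - c*x*y)/4 on [-1, 1]^2 by accept-reject, so that -9<xy> -> c.
def sample_pair(c, n):
    x = rng.uniform(-1, 1, 4 * n)
    y = rng.uniform(-1, 1, 4 * n)
    keep = rng.uniform(0, 1, 4 * n) < (1 - c * x * y) / (1 + abs(c))
    return x[keep][:n], y[keep][:n]

x, y = sample_pair(0.8, 200_000)
print("recovered c =", -9.0 * np.mean(x * y))  # close to 0.8
```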
## III Measurements at Future Lepton Colliders
At \(e^{+}e^{-}\) colliders, the dominant production mode of the Higgs boson is the so-called Higgsstrahlung channel, \(e^{+}e^{-}\to Zh\). For our mode of interest, \(h\rightarrow\tau^{+}\tau^{-}\) with subsequent decay channels \(\tau^{\pm}\to\pi^{\pm}\nu\), two neutrinos appear in the final state. Hence kinematic reconstruction is necessary in order to measure quantum entanglement between \(\tau^{+}\) and \(\tau^{-}\). Next, we describe our numerical simulation and the implementation of the detector effects, and then the reconstruction approaches in both the leptonic and hadronic decay modes of the \(Z\) boson.
### Simulation and Detector Effects
Our numerical simulations are conducted using the MadGraph5_aMC@NLO[41] package, and the quantum entangled spin correlations in the tau-lepton decay are preserved by the TauDecay [42] package. For a realistic simulation, detector resolutions have to be included for the objects. Charged tracks can be precisely measured by the CEPC detector for the decay products in \(Z\to\ell^{+}\ell^{-}\) (\(\ell=e,\mu\)) or \(\tau^{\pm}\to\pi^{\pm}\nu\). Table 1 lists typical uncertainties of the azimuthal angle (\(\phi\)), rapidity (\(\eta\)) and magnitude of the transverse momentum (\(|\vec{p}_{T}|\)) at CEPC [37]. One can see that the CEPC uncertainties for tracks are quite small. We smear the tracks (both leptons and pions) by randomly sampling the azimuthal angle, pseudo-rapidity and transverse momentum according to Gaussian distributions with the standard deviations given in Table 1 [43; 44].
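A minimal sketch of this track smearing is shown below; it applies Gaussian smearing with the standard deviations of Table 1, taken at face value, to a single track (the numerical inputs are illustrative).

```python
import numpy as np

rng = np.random.default_rng(42)

def smear_track(pt, eta, phi):
    """Smear one charged track, using the CEPC resolutions of Table 1
    as the Gaussian standard deviations."""
    sigma_phi = 0.0002 * abs(eta) + 0.000022
    sigma_eta = 0.000016 * abs(eta) + 0.00000022
    sigma_pt = 0.036 * pt
    return (rng.normal(pt, sigma_pt),
            rng.normal(eta, sigma_eta),
            rng.normal(phi, sigma_phi))

print(smear_track(pt=25.0, eta=0.7, phi=1.2))  # illustrative track
```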
However, uncertainties of the jets from the \(Z\) boson's hadronic decay are relatively large. The measurement of a jet is smeared not only by the fragmentation of partons and the corresponding jet clustering processes, but also by the jet clustering of reconstructed objects after detector response and its matching to the jet at the generator level [45]. The results in Refs. [46; 47] indicate that the uncertainty induced by the jet clustering and matching can be as significant as that from the detector response, and becomes the dominant uncertainty especially for final states with more than two jets. Hence, a sophisticated jet clustering algorithm has to be
\begin{table}
\begin{tabular}{c c} \hline Observables & Uncertainties \\ \hline \(\phi\) & \(0.0002|\eta|\) + \(0.000022\) \\ \hline \(\eta\) & \(0.000016|\eta|\) + \(0.00000022\) \\ \hline \(|\vec{p}_{T}|\) & \(0.036|\vec{p}_{T}|\) \\ \hline \end{tabular}
\end{table}
Table 1: CEPC uncertainties for tracks
used [45]. The energy resolution of the jet from light quarks can be described as [37]
\[\sigma_{\rm jet}(E)=\frac{25.7\%}{\sqrt{E}}\,\oplus\,2.4\%\,. \tag{22}\]
Jets from charm and bottom quarks have slightly larger uncertainties because of neutrinos in their decays [37]. With this in mind, in this paper we also use a smearing algorithm to account for detector resolutions. Detector responses to the partons (for the channel \(Z\to q\bar{q}\)) are included by smearing the energy of the partons according to a Gaussian distribution with the standard deviation given in Eq. (22).
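For the partons from \(Z\to q\bar{q}\), the two terms of Eq. (22) are combined in quadrature; a sketch of that smearing step follows (the test energies are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(7)

def jet_sigma_rel(energy_gev):
    """Relative jet energy resolution of Eq. (22):
    sigma/E = 25.7%/sqrt(E) (+) 2.4%, added in quadrature (E in GeV)."""
    return np.hypot(0.257 / np.sqrt(energy_gev), 0.024)

def smear_jet_energy(energy_gev):
    """Gaussian smearing of a parton/jet energy with width sigma_rel * E."""
    return rng.normal(energy_gev, jet_sigma_rel(energy_gev) * energy_gev)

for e in (20.0, 50.0, 100.0):
    print(f"E = {e:5.1f} GeV: sigma/E = {jet_sigma_rel(e):.3f}, "
          f"smeared = {smear_jet_energy(e):.1f} GeV")
```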
To see the impact of the above uncertainties, in Fig. 1 we show the distributions of the difference between the real value and the smeared value of the transverse momentum \(p_{T}\), azimuthal angle \(\phi\) and rapidity \(\eta\) (defined as \(\Delta p_{T}\), \(\Delta\phi\) and \(\Delta\eta\)) for the objects in different decay modes of the \(Z\) boson. One can see that, due to the jet energy smearing, the \(p_{T}\) uncertainties of jets in the \(Z\) boson's hadronic decay are quite large compared to those in the leptonic mode. As a result, the \(Z\) boson decaying to a dijet is not well reconstructed, as shown in Fig. 2.
Furthermore, the detection efficiency is also affected by particle identification. For CEPC, the detector is designed to identify prompt leptons with high efficiency and high purity [37]. For leptons with energies above 5 GeV, the identification efficiency is higher than 99% and the misidentification rate is smaller than 2%. For the \(\tau\)-jet with visible energy between 20 and 80 GeV, the identification efficiency is above 80% with a purity close to 90% [37]; further improvement can be expected from optimizations. In our simulation, we ignore the momentum dependence and use a universal identification efficiency of 80% to estimate the experimental significance for the \(\tau\)-jet. For jets from the hadronic decay of the \(Z\) boson, b-jets can be tagged with an efficiency of 80% and a purity of 90%. Similarly, an efficiency of 60% and a purity of 60% can be achieved for c-jet tagging [37]. In our case, since the \(Z\) boson is treated inclusively, jet-tagging is irrelevant to our analysis. Therefore, we will use a factor of 0.8 to account for possible efficiency loss in reconstruction at the detector level.
### Reconstruction Method
For the decay of the Higgs boson, the number of degrees of freedom of the corresponding phase space is 8, and 6 of them can be measured thanks to the two charged pions. As a result, only 2 of them are undetermined. Considering that the decay width of the \(\tau\)-lepton is very small compared to
its mass, it is an excellent approximation to assume that both \(\tau\)s are on-shell. With the help of the on-shell conditions, the 8 kinematic degrees of freedom can be determined. In the following studies, we always assume that the \(Z\)-boson is reconstructed from its visible decay products, and the momentum of the Higgs boson is obtained by the energy-momentum conservation
Figure 2: Normalized number of events as a function of \(\Delta p_{T}\) (left), \(\Delta\phi\) (middle) and \(\Delta\eta\) (right) for the \(Z\) boson in leptonic (top) and hadronic (bottom) decay modes.
condition, _i.e._, \(p_{h}=p_{e^{+}}+p_{e^{-}}-p_{Z}\). In the approximation that \(P^{0}=\sqrt{s}\) and \(\vec{P}=0\), with \(P=p_{e^{+}}+p_{e^{-}}\), the invariant mass of the Higgs boson is given by \(p_{h}^{2}=s+p_{Z}^{2}-2E_{Z}\sqrt{s}\). Since the decay width of the Higgs boson is also expected to be very small, \(p_{h}^{2}\approx m_{h}^{2}\) is again an excellent approximation. In practice, \(p_{h}^{2}\equiv\hat{m}_{h}^{2}\) may deviate from \(m_{h}^{2}\) significantly due to experimental uncertainties in the measurement of the \(Z\) boson momentum.
Since the Higgs boson decays isotropically, the reconstruction is done in the rest frame of \(h\). Assuming that both \(\tau\)-leptons are on-shell, the energy and the magnitude of the \(\tau\)-lepton momentum in this reference frame can be obtained directly,
\[E_{\tau}^{\star}=\frac{1}{2}\hat{m}_{h}\,,\ \ \ p_{\tau}^{\star}=\frac{1}{2} \hat{m}_{h}\sqrt{1-\frac{4m_{\tau}^{2}}{\hat{m}_{h}^{2}}}\,. \tag{23}\]
The intersection angles between the momenta of the \(\tau^{\pm}\) and \(\pi^{\pm}\) in this frame are given as,
\[\cos\theta_{\pm}^{\star}=\frac{2E_{\tau}^{\star}E_{\pi^{\pm}}^{\star}-m_{\tau} ^{2}-m_{\pi^{\pm}}^{2}}{2p_{\tau}^{\star}p_{\pi^{\pm}}^{\star}} \tag{24}\]
Without loss of generality, we define the \(z\)-axis as the momentum direction of the negatively charged decay product, with the positively charged decay product lying in the \(x-z\) plane. In this reference frame, the momenta of the charged decay products can be written as,
\[p_{\pi^{-}}^{\star\mu} = E_{\pi^{-}}^{\star}\big{(}1,\,0,\,0,\,\beta_{\pi^{-}}^{\star} \big{)}\,, \tag{25}\] \[p_{\pi^{+}}^{\star\mu} = E_{\pi^{+}}^{\star}\big{(}1,\,\beta_{\pi^{+}}^{\star}\sin\theta_ {\pi^{+}}^{\star},\,0,\,\beta_{\pi^{+}}^{\star}\cos\theta_{\pi^{+}}^{\star} \big{)}\,. \tag{26}\]
Furthermore, assuming the azimuthal angle of the \(\tau^{-}\)-lepton is \(\phi_{-}^{\star}\), its momentum can be parameterized as,
\[p_{\tau^{-}}^{\star\mu}=E_{\tau}^{\star}\big{(}1,\,\beta_{\tau}^{\star}\sin \theta_{-}^{\star}\cos\phi_{-}^{\star},\,\beta_{\tau}^{\star}\sin\theta_{-}^{ \star}\sin\phi_{-}^{\star},\,\beta_{\tau}^{\star}\cos\theta_{-}^{\star}\big{)}\,, \tag{27}\]
where \(\beta_{\tau}^{\star}=\sqrt{1-4m_{\tau}^{2}/\hat{m}_{h}^{2}}\). It turns out that momentum of the \(\tau^{+}\)-lepton is given as
\[p_{\tau^{+}}^{\star\mu}=E_{\tau}^{\star}\big{(}1,\,-\beta_{\tau}^{\star}\sin \theta_{-}^{\star}\cos\phi_{-}^{\star},\,-\beta_{\tau}^{\star}\sin\theta_{-}^ {\star}\sin\phi_{-}^{\star},\,-\beta_{\tau}^{\star}\cos\theta_{-}^{\star}\big{)}\,. \tag{28}\]
Using the equation \(\vec{p}_{\tau^{+}}^{\star}\cdot\vec{p}_{\pi^{+}}^{\star}=\cos\theta_{+}^{\star} |\vec{p}_{\tau^{+}}^{\star}|\big{|}\vec{p}_{\pi^{+}}^{\star}\big{|}\,,\) one immediately obtains,
\[-\sin\theta_{\pi^{+}}^{\star}\sin\theta_{-}^{\star}\cos\big{(}\phi_{-}^{\star} \big{)}-\cos\theta_{\pi^{+}}^{\star}\cos\theta_{-}^{\star}=\cos\theta_{+}^{ \star}\,. \tag{29}\]
Then we obtain the following solutions,
\[\phi_{-}^{\star}=\pm\arccos\left[-\frac{\cos\theta_{\pi^{+}}^{\star}\cos \theta_{-}^{\star}+\cos\theta_{+}^{\star}}{\sin\theta_{\pi^{+}}^{\star}\sin \theta_{-}^{\star}}\right]\,. \tag{30}\]
Both solutions satisfy all the kinematic constraints, hence there is a two-fold ambiguity.
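A compact Python sketch of Eqs. (23)-(30) is given below. It assumes the pion four-momenta have already been boosted to the Higgs rest frame and returns the two-fold-ambiguous azimuth \(\phi_{-}^{\star}\); the numerical clipping guards against round-off outside \([-1,1]\) and is not part of the formulas.

```python
import numpy as np

M_TAU, M_PI = 1.77686, 0.13957  # masses in GeV

def reconstruct_phi(mh_hat, p_pim, p_pip):
    """Analytic tau reconstruction, Eqs. (23)-(30). p_pim, p_pip are the
    pi-/pi+ four-momenta (E, px, py, pz) in the Higgs rest frame."""
    e_tau = 0.5 * mh_hat                                        # Eq. (23)
    p_tau = e_tau * np.sqrt(1.0 - 4.0 * M_TAU**2 / mh_hat**2)

    def cos_open(p_pi):                                         # Eq. (24)
        e_pi, p3 = p_pi[0], np.linalg.norm(p_pi[1:])
        return (2*e_tau*e_pi - M_TAU**2 - M_PI**2) / (2*p_tau*p3)

    cth_m, cth_p = cos_open(p_pim), cos_open(p_pip)
    # angle of pi+ w.r.t. the z-axis (the pi- direction), Eqs. (25)-(26)
    u_m = p_pim[1:] / np.linalg.norm(p_pim[1:])
    u_p = p_pip[1:] / np.linalg.norm(p_pip[1:])
    cth_pip = float(np.dot(u_m, u_p))
    sth_pip = np.sqrt(1.0 - cth_pip**2)
    sth_m = np.sqrt(1.0 - cth_m**2)

    cos_phi = -(cth_pip * cth_m + cth_p) / (sth_pip * sth_m)    # Eq. (30)
    phi = np.arccos(np.clip(cos_phi, -1.0, 1.0))
    return +phi, -phi  # the two-fold ambiguity
```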
We then test the above analytical reconstruction method at parton level. Fig. 3 shows the longitudinal and transverse correlations as a result of the above analytical solutions for the leptonic decay mode of the \(Z\) boson. In the left panel of Fig. 3, the colored densities indicate the true values of \(\cos\theta_{\pi^{+}}\) versus \(\cos\theta_{\pi^{-}}\) and the black contours show the reconstructed values. The above two-fold ambiguity induces a reduction of the transverse spin correlation, which is described by the azimuthal angle difference of the two decay planes \(\delta\phi\), as shown in the right panel of Fig. 3. In the leptonic decay mode of \(Z\), one can see that the above analytical reconstruction method works well.
However, there are some drawbacks to this analytical reconstruction method. First of all, the two-fold ambiguity of the kinematic solutions exists as mentioned above, so one cannot determine the complete \(\phi_{-}^{\star}\) distribution event by event. More importantly, due to the energy uncertainty of jets, the \(Z\) boson cannot be well reconstructed in its hadronic decay mode. As a result, the uncertainties of the Higgs momentum given by the analytical reconstruction are very large in the hadronic decay mode of the \(Z\) boson. Quantum correlation effects are completely washed out, and hence it is nearly impossible to observe violation of the Bell inequality in the hadronic mode. Therefore, we adopt another reconstruction method using impact parameters in the next section.
Figure 3: Longitudinal (left) and transverse (right) correlations as a result of analytical reconstruction method for leptonic decay mode of \(Z\) boson at parton level.
### Reconstruction by using Impact Parameters
It was shown that the impact parameters of the charged \(\pi\)s are very useful to reconstruct the full decay kinematics [48; 49; 50]. The \(\tau\)-leptons emerging from the Higgs decay are strongly boosted. As a result, their typical decay length of \(\sim 3000\)\(\mu\)m is long enough to induce a sizable impact parameter for the charged decay product. The CMS group has used the impact parameter to study the CP property of the interaction between the Higgs and the tau pair [51]. Here we adopt a similar method proposed in Ref. [48]. Furthermore, an excellent impact parameter resolution can be achieved by the CEPC vertex detector: the main performance goal for the spatial resolution near the IP is better than 3 \(\mu\)m [52]. Here, the true impact parameters of the pions are smeared according to a Gaussian distribution with standard deviation \(\sigma_{\rm IP}=3\)\(\mu\)m.
We use the magnitudes of the \(\tau^{\pm}\) momenta, \(|\vec{p}_{\tau^{\pm}}|\), as the free parameters for finding the best fit. For a single tau decay, for instance \(\tau^{-}\rightarrow\pi^{-}\nu_{\tau}\), the opening angle between the \(\tau^{-}\) and \(\pi^{-}\) is
\[\cos\theta_{\tau^{-}\pi^{-}}=\frac{2E_{\tau^{-}}E_{\pi^{-}}-m_{\tau}^{2}-m_{ \pi^{-}}^{2}}{2|\vec{p}_{\tau^{-}}||\vec{p}_{\pi^{-}}|}\,, \tag{31}\]
where \(E_{\tau^{-}}=\sqrt{m_{\tau}^{2}+|\vec{p}_{\tau^{-}}|^{2}}\) is given by the on-shell condition. Then, the momentum of the \(\tau^{-}\) is given as
\[\vec{p}_{\tau^{-}}=|\vec{p}_{\tau^{-}}|\cdot\frac{\vec{b}_{\pi^{-}}+\frac{| \vec{b}_{\pi^{-}}|}{\tan\theta_{\tau^{-}\pi^{-}}}\frac{\vec{p}_{\pi^{-}}}{| \vec{p}_{\pi^{-}}|}}{\left|\vec{b}_{\pi^{-}}+\frac{|\vec{b}_{\pi^{-}}|}{\tan \theta_{\tau^{-}\pi^{-}}}\frac{\vec{p}_{\pi^{-}}}{|\vec{p}_{\pi^{-}}|}\right|}\,, \tag{32}\]
where \(\vec{b}_{\pi^{-}}\) is the impact parameter of the \(\pi^{-}\). The momentum of the neutrino can be obtained from the momentum conservation condition, \(p^{\mu}_{\nu_{\tau}}=p^{\mu}_{\tau^{-}}-p^{\mu}_{\pi^{-}}\). Similarly, one can obtain the momenta of the \(\tau^{+}\) and the anti-neutrino as functions of the parameter \(|\vec{p}_{\tau^{+}}|\) and the impact parameter \(\vec{b}_{\pi^{+}}\). The best values of the parameters \(|\vec{p}_{\tau^{\pm}}|\) are obtained by maximizing the following likelihood function
\[\rho=\rho_{BW}(\hat{m}_{\tau\tau},m_{Z},\Gamma_{Z})\cdot\rho_{G}(\hat{m}_{\tau \tau}-m_{Z},\Gamma_{Z})\cdot\prod_{\mu=0,1,2,3}\rho_{G}(\hat{p}^{\mu}_{Z}-p^{ \mu}_{Z},\sigma^{\mu}_{Z})\,, \tag{33}\]
where \(\rho_{BW}\) is the usual Breit-Wigner distribution for the resonant production of the \(Z\) boson, and \(\hat{m}_{\tau\tau}\) is the reconstructed invariant mass of the tau-lepton pair; \(\hat{p}^{\mu}_{Z}\) is the reconstructed momentum of the \(Z\) boson, and \(p^{\mu}_{Z}\) is the momentum obtained by summing the momenta of its decay products; \(\rho_{G}(x,y)\) is the Gaussian function with mean value \(x\) and variance \(y\). Here \(\sigma^{\mu}_{Z}\) is estimated by our numerical simulation.
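A sketch of this fit is given below for a single tau: it implements Eqs. (31)-(32) and scans \(|\vec{p}_{\tau}|\) against a user-supplied negative log-likelihood built from Eq. (33). In the actual analysis the two magnitudes \(|\vec{p}_{\tau^{\pm}}|\) are fitted jointly and the likelihood encodes the \(Z\) and Higgs constraints; the `nll` callable and the scan bounds below are assumptions of this sketch.

```python
import numpy as np
from scipy.optimize import minimize_scalar

M_TAU, M_PI = 1.77686, 0.13957  # masses in GeV

def tau_momentum(p_tau_mag, p_pi, b_pi):
    """Tau momentum from the pion momentum and its impact parameter,
    Eqs. (31)-(32). p_pi is the pion (E, px, py, pz), b_pi its
    impact-parameter vector; assumes a boosted tau (cos_op > 0)."""
    e_pi, p3 = p_pi[0], p_pi[1:]
    e_tau = np.sqrt(M_TAU**2 + p_tau_mag**2)
    cos_op = ((2*e_tau*e_pi - M_TAU**2 - M_PI**2)
              / (2*p_tau_mag*np.linalg.norm(p3)))           # Eq. (31)
    tan_op = np.sqrt(max(1.0 - cos_op**2, 0.0)) / cos_op
    d = b_pi + (np.linalg.norm(b_pi) / tan_op) * p3 / np.linalg.norm(p3)
    return p_tau_mag * d / np.linalg.norm(d)                # Eq. (32)

def fit_ptau(p_pi, b_pi, nll):
    """Pick |p_tau| minimizing a negative log-likelihood derived from
    Eq. (33); 'nll' maps a candidate tau momentum to -log(rho)."""
    res = minimize_scalar(lambda m: nll(tau_momentum(m, p_pi, b_pi)),
                          bounds=(1.0, 120.0), method="bounded")
    return res.x
```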
In Figs. 4 and 5, after using the method of impact parameters, we show the uncertainties of the observables for the reconstructed \(\tau\) lepton and the reconstructed \(Z\) mass in different decay modes of the \(Z\) boson, respectively. It turns out that the reconstruction in the hadronic mode is as good as that in the leptonic mode for the azimuthal angle, the rapidity and the reconstructed \(Z\) boson mass. The reconstructed transverse momentum \(p_{T}\) in the hadronic mode is still worse than that in the leptonic mode. We also display the reconstructed longitudinal and transverse correlations for the hadronic and leptonic decay modes of \(Z\) in Fig. 6. As seen in the bottom panel, the reconstructed transverse correlation agrees well with the true result at parton level. It turns out that the method of impact parameters does not suffer from the drawback of the two-fold ambiguity.
Finally, in Fig. 7, we show the distribution of \(\cos\theta_{\pi\pi}\) for hadronic (red) and leptonic (blue) decay modes of \(Z\) and compare with the Bell inequality. The LHVT holds between the two dashed lines (gray fitted region) given by the inequality in Eq. (14). The black solid
Figure 4: Normalized number of events as a function of \(\Delta p_{T}\) (left), \(\Delta\phi\) (middle) and \(\Delta\eta\) (right) for the reconstructed \(\tau\) lepton in leptonic (blue) and hadronic (red) decay modes of \(Z\) boson.
Figure 5: Normalized number of events as a function of reconstructed \(Z\) mass \(m_{ff}^{\rm Rec}\) in leptonic (red) and hadronic (blue) decay modes of \(Z\) boson.
line shows the normalized differential cross section in Eq. (12) and the result at parton level is shown as green dots. The simulation results after using the method of impact parameters are represented by the red and blue histograms for the hadronic and leptonic decay modes of the \(Z\) boson, respectively. As can be seen directly from the distributions, they lie outside the region satisfying the Bell inequality and agree with the QM/QFT prediction shown by the black solid line.
Fig. 8 shows the reconstructed angular distributions of the charged pions. In general, quantum entanglement is invisible in any single one of the six angular observables, because the quantum correlations between \(\pi^{+}\) and \(\pi^{-}\) are integrated out, as shown by the flat distributions in the middle panel for the \(\pi^{+}\) and the right panel for both \(\pi^{+}\) and \(\pi^{-}\). Since the \(x-z\) plane is by definition spanned by the momentum directions of the \(\tau^{-}\) and \(\pi^{-}\), the observable \(\cos\theta_{\pi^{-},n}\) can only be zero, as shown in the middle panel. The nontrivial distributions shown in the left panel of Fig. 8 are purely kinematic. Since \(d\sigma/d\cos\theta_{\pi^{-},k}\) is proportional to a constant and \(\theta_{\pi^{-},r}=\frac{\pi}{2}-\theta_{\pi^{-},k}\), we have
\[\frac{\sigma^{-1}d\sigma}{d\cos\theta_{\pi^{-},r}}=\frac{\sigma^{-1}d\sigma}{ d\sin\theta_{\pi^{-},k}}\propto\text{const.}\times\cot\theta_{\pi^{-},r}\,, \tag{34}\]
which is essentially reconstructed in our approach as shown by the red-solid and green-dashed lines in the left-panel of the Fig. 8. Similarly, the asymmetric distribution for the \(\pi^{+}\) is due
Figure 6: Reconstructed longitudinal (top) and transverse (bottom) correlations for hadronic (top left) and leptonic (top right) decay modes of \(Z\) boson.
to the fact that we have defined \(\cos\theta_{\pi-,r}>0\), which gives a nontrivial integration of the differential cross section \(d\sigma/d\cos\theta_{\pi-,r}d\cos\theta_{\pi+,r}\propto 1+C_{rr}\cos \theta_{\pi-,r}\cos\theta_{\pi+,r}\) (the integration region is limited to the range \(\cos\theta_{\pi-,r}\in[0,\,1]\)). The CHSH inequality is buried in the coefficients \(C_{ij}\).
## IV Sensitivity of CEPC to the Bell Inequality Violation
At the CEPC with \(\sqrt{s}=240\,\)GeV, the total cross section of the Higgsstrahlung process is
\[\sigma_{Zh}=196.2\,\text{fb}\,. \tag{35}\]
Figure 8: Reconstructed angular distributions of \(\pi^{\pm}\) for \(\cos\theta_{\pi,r}\) (left), \(\cos\theta_{\pi,n}\) (middle) and \(\cos\theta_{\pi,k}\) (right).
A large number of Higgs boson events will be produced with an expected integrated luminosity of \({\cal L}=5.6\) ab\({}^{-1}\)[37]. However, since both the branching ratios \({\cal B}(h\to\tau\tau)=6.32\%\) and \({\cal B}(\tau\to\pi\nu_{\tau})=10.82\%\) are small, only hundreds of events are available to test the Bell inequality. The following kinematic cuts are used to select well-reconstructed events and to match the real detector configuration
\[p_{T}(\ell/j)>10\,{\rm GeV},\ |\eta(\ell/j)|<3,\ \left|m_{ff}^{\rm Rec.}-m_{Z} \right|<10\,{\rm GeV}. \tag{36}\]
The efficiency of the above kinematic cuts is 0.645 for the decay mode \(Z\to\ell\ell\), and 0.648 for the hadronic decay mode \(Z\to jj\). Furthermore, as we have mentioned, a universal jet reconstruction efficiency of 0.8 will be used in our following estimation. The number of events is further reduced by the \(\tau\)-jet identification efficiency, which is assumed to be 0.8 for a purity close to 90% at the CEPC [37]. Table 2 gives the expected number of events at the CEPC.
The experimental sensitivity of Tornqvist's approach is studied by defining the following asymmetry observable
\[{\cal A}=\frac{N(\cos\theta_{\pi\pi}<0)-N(\cos\theta_{\pi\pi}>0)}{N(\cos\theta _{\pi\pi}<0)+N(\cos\theta_{\pi\pi}>0)}\,. \tag{37}\]
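A toy pseudo-experiment for this observable is sketched below: events are drawn from a \(\cos\theta_{\pi\pi}\) shape \(\propto 1-\alpha^{2}\cos\theta_{\pi\pi}\), which reproduces the quoted QM expectation \({\cal A}=\alpha^{2}/2\approx 0.164\). The sign convention, the sample size (taken from the \(Z\to jj\) row of Table 2) and the random seed are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(3)

def asymmetry(cos_theta):
    """A = [N(cos<0) - N(cos>0)] / [N(cos<0) + N(cos>0)], Eq. (37)."""
    n_neg = np.count_nonzero(cos_theta < 0)
    n_pos = np.count_nonzero(cos_theta > 0)
    return (n_neg - n_pos) / (n_neg + n_pos)

ALPHA2, N_EVT = 0.573**2, 97   # analyzing power squared; Z->jj yield
c = rng.uniform(-1, 1, 50 * N_EVT)
keep = rng.uniform(0, 1, c.size) < (1 - ALPHA2 * c) / (1 + ALPHA2)
c = c[keep][:N_EVT]            # accept-reject sample of cos(theta_pipi)
print("A =", asymmetry(c))     # scatters around ~0.164 for the QM shape
```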
The analytical prediction of the observable \({\cal A}\) is \({\cal A}=0.164\) in QM and \({\cal A}=0.119\) in LHVT. The experimental sensitivity at the CEPC is estimated by performing 10000 pseudo-experiments. We obtain \({\cal A}=0.155\pm 0.287\) for the \(Z\to\ell\ell\) channel and \({\cal A}=0.140\pm 0.098\) for the \(Z\to jj\) channel, respectively, as listed in Table 3. Both central values are closer to the prediction of QM. The channel from the hadronic decay mode of the \(Z\) boson produces more events and gives a more precise result with a smaller error. In the CHSH approach, the violation of the Bell inequality refers to the fact that the sum of the two largest eigenvalues of the matrix \(U=C^{T}C\) (denoted by \(m_{1}+m_{2}\)) is larger than 1. It turns out that both channels lead to
\begin{table}
\begin{tabular}{c c c} \hline CEPC (240 GeV, 5.6 ab\({}^{-1}\)) & \(Z\to\ell\ell\) & \(Z\to jj\) \\ \hline No. of Events & 55 & 568 \\ \hline Kin. Cuts and jet reconstruction & 22 & 151 \\ \hline \(\tau\)-identification & 14 & 97 \\ \hline \end{tabular}
\end{table}
Table 2: Number of events used to test the Bell inequality at the CEPC.
\(m_{1}+m_{2}>1\), as listed in Table 3. As we can see, for both Tornqvist's and the CHSH approaches, the Bell inequality can be tested at the CEPC at the \(1\sigma\) level. It is expected that the sensitivity can be further improved by using a sophisticated jet reconstruction method and an enhanced \(\tau\)-jet identification efficiency.
## V Summary
Since spin states cannot be directly measured at colliders, testing quantum entanglement and Bell-nonlocality in high-energy collider physics is a challenge. However, testing Bell-nonlocality in high-energy scattering processes is essentially important because it provides a unique way to address quantum entanglement at high energy scales. We investigate the testability of the Bell inequality through \(h\to\tau^{+}\tau^{-}\), which is an ideal system to observe the LHVT violation, at the future \(e^{+}e^{-}\) collider CEPC. We demonstrated how to use the angular distributions of the decay products of the spin-correlated \(\tau\)-pair to address the Bell-nonlocality. Future \(e^{+}e^{-}\) colliders can improve the measurement accuracy of the spin correlation of tau lepton pairs from Higgs boson decay. Two realistic methods of testing the Bell inequality, i.e., Tornqvist's method and the CHSH inequality, are studied in terms of the polarization correlation in the decay \(h\to\tau^{+}\tau^{-}\to\pi^{+}\bar{\nu}_{\tau}\pi^{-}\nu_{\tau}\).
We simulate the production of \(e^{+}e^{-}\to Zh\to Z\tau^{+}\tau^{-}\) as well as the \(Z\) boson's leptonic and hadronic decay modes. The detector effects of CEPC including uncertainties for tracks and jets from \(Z\) boson are taken into account. We also describe necessary reconstruction
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Channels & Obs. & QM & Clas. & Exp. @ CEPC \\ \hline \multirow{2}{*}{\(Z\to\ell\ell\)} & \(\mathcal{A}\) & 0.164 & 0.119 & \(0.155\pm 0.287\) \\ & \(m_{1}+m_{2}\) & \(>1\) & \(\leq 1\) & \(2.12\pm 1.11\) \\ \hline \multirow{2}{*}{\(Z\to jj\)} & \(\mathcal{A}\) & 0.164 & 0.119 & \(0.140\pm 0.098\) \\ & \(m_{1}+m_{2}\) & \(>1\) & \(\leq 1\) & \(1.20\pm 0.37\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: The results of observables testing the Bell inequality in Tornqvist's method and the CHSH approach. The experimental predictions are given for the CEPC with colliding energy \(\sqrt{s}=240\,\mathrm{GeV}\) and a total luminosity of \(5.6\,\,\mathrm{ab}^{-1}\).
approaches to measure quantum entanglement between \(\tau^{+}\) and \(\tau^{-}\). Finally, we find that for both Tornqvist's and the CHSH approaches, the Bell inequality can be tested at the CEPC at the \(1\sigma\) level. Further improvements are expected from employing a sophisticated jet reconstruction method and enhanced \(\tau\)-jet identification efficiency.
## Acknowledgements
T.L. would like to thank Xue-Qian Li for helpful discussion. T.L. is supported by the National Natural Science Foundation of China (Grants No. 11975129, 12035008) and "the Fundamental Research Funds for the Central Universities", Nankai University (Grant No. 63196013). K.M. was supported by the Innovation Capability Support Program of Shaanxi (Program No. 2021KJXX-47).
|
2309.07230 | ESRO: Experience Assisted Service Reliability against Outages | Modern cloud services are prone to failures due to their complex
architecture, making diagnosis a critical process. Site Reliability Engineers
(SREs) spend hours leveraging multiple sources of data, including the alerts,
error logs, and domain expertise through past experiences to locate the root
cause(s). These experiences are documented as natural language text in outage
reports for previous outages. However, utilizing the raw yet rich
semi-structured information in the reports systematically is time-consuming.
Structured information, on the other hand, such as alerts that are often used
during fault diagnosis, is voluminous and requires expert knowledge to discern.
Several strategies have been proposed to use each source of data separately for
root cause analysis. In this work, we build a diagnostic service called ESRO
that recommends root causes and remediation for failures by utilizing
structured as well as semi-structured sources of data systematically. ESRO
constructs a causal graph using alerts and a knowledge graph using outage
reports, and merges them in a novel way to form a unified graph during
training. A retrieval-based mechanism is then used to search the unified graph
and rank the likely root causes and remediation techniques based on the alerts
fired during an outage at inference time. Not only the individual alerts, but
their respective importance in predicting an outage group is taken into account
during recommendation. We evaluated our model on several cloud service outages
of a large SaaS enterprise over the course of ~2 years, and obtained an average
improvement of 27% in rouge scores after comparing the likely root causes
against the ground truth over state-of-the-art baselines. We further establish
the effectiveness of ESRO through qualitative analysis on multiple real outage
examples. | Sarthak Chakraborty, Shubham Agarwal, Shaddy Garg, Abhimanyu Sethia, Udit Narayan Pandey, Videh Aggarwal, Shiv Saini | 2023-09-13T18:04:52Z | http://arxiv.org/abs/2309.07230v1 | # _Esto_: Experience Assisted Service Reliability against Outages
###### Abstract
Modern cloud services are prone to failures due to their complex architecture, making diagnosis a critical process. Site Reliability Engineers (SREs) spend hours leveraging multiple sources of data, including the alerts, error logs, and domain expertise through past experiences to locate the root cause(s). These experiences are documented as natural language text in outage reports for previous outages. However, utilizing the raw yet rich semi-structured information in the reports systematically is time-consuming. Structured information, on the other hand, such as alerts that are often used during fault diagnosis, is voluminous and requires expert knowledge to discern. Several strategies have been proposed to use each source of data separately for root cause analysis. In this work, we build a diagnostic service called ESRO that recommends root causes and remediation for failures by utilizing structured as well as semi-structured sources of data systematically. ESRO constructs a causal graph using alerts and a knowledge graph using outage reports, and merges them in a novel way to form a unified graph during training. A retrieval-based mechanism is then used to search the unified graph and rank the likely root causes and remediation techniques based on the alerts fired during an outage at inference time. Not only the individual alerts, but their respective importance in predicting an outage group is taken into account during recommendation. We evaluated our model on several cloud service outages of a large SaaS enterprise over the course of \(\sim\)2 years, and obtained an average improvement of 27% in ROUGE scores after comparing the likely root causes against the ground truth over state-of-the-art baselines. We further establish the effectiveness of ESRO through qualitative analysis on multiple real outage examples.
System Monitoring, Cloud Services, Causal Graph, Knowledge Graph
## I Introduction
In recent years, software development and system design in organizations are moving away from traditional massive monoliths and towards a microservices-based design, resulting in a faster rate of development and release [1, 2]. These microservices are often deployed on the cloud and offered as Software-as-a-Service (SaaS) products to customers. However, this has raised concerns about maintaining the availability of these services since any production outage can negatively affect customers, resulting in significant financial losses for the enterprises [3, 4]. For example, 37 minutes of downtime on YouTube was estimated to cost Google US$1.7 million in ad revenue alone [5], while one hour of downtime could cost Amazon US$100 million on major shopping days [6]. Despite ongoing reliability efforts over the years, cloud services continue to face unavoidable severe incidents [7, 8, 9, 10] and outages [11]. As a result, there has been a surge of research in the field of AI Ops (AI for IT operations) [12].
In the traditional outage management workflow, Site Reliability Engineers (SREs) and On-Call Engineers (OCEs) manually investigate issues, leading to long investigation times and high resource wastage. This, in turn, increases both the mean time to detection (MTTD) and the mean time to remediation (MTTR), which are essential in maintaining service level agreements (SLAs) [13]. The current outage management process consists of five steps: (1) detecting outages through alerts, microservice traces, or performance metrics; (2) triaging the incident by communicating back and forth to assign the correct team for handling the issue; (3) identifying the root cause of the outage using multiple sources of data; (4) resolving the incident and finding a fix for the root cause; and (5) documenting the entire workflow as a natural language analysis report. This process is often inefficient and error-prone, requiring significant time and resources.
Diagnosing outages requires a significant amount of domain expertise, often gained from investigating past outages. However, manually searching through a large database of past outages is not feasible during the outage management process. As a result, additional resources are often required to communicate about any similar outages that have occurred in the past. Since most symptoms are not entirely unique, the expertise of SREs and the team responsible for a particular service is valuable throughout the entire process. Nevertheless, recent research studies, such as those presented in [14, 15, 16, 17], have explored data-driven techniques for predicting outages, performing root cause analysis, and triaging outages. These techniques can reduce the MTTD and MTTR while improving the overall On-Call Engineer (OCE) experience.
Several research studies, such as [14, 15, 18], use alerts to detect
and forecast outages, as well as suggest root cause alerts1. However, relying solely on alerts for root cause analysis can be inaccurate due to their volume and the presence of redundant and low-severity alerts. Additionally, some alerts may not have triggered on the root cause service, but only on the affected services due to a snowball effect. Though some works use performance metrics collected through system monitoring to predict the root cause service, we argue that a larger set of data points will be needed since metrics capture normal and less critical system behaviour as well. Alerts are typically intended to identify problems that can have a significant impact on the system or its users, which makes them more suitable for diagnosing system failures. Some studies have also used past outage reports to predict likely root causes [19, 20] by comparing symptom similarities. These works try to compute the similarity between the symptoms of the current outage and those described in previous outage reports. However, such a methodology will just map a recent incident to a past one, without considering any details regarding the pattern of the incident observed through alerts.
Footnote 1: We term alerts identified as "root cause alerts" in prior literature as "ancestral alerts" to avoid confusion with the terminology of the outage reports.
We conjecture that utilizing both the alerts and the outage reports information in a systematic way leads to a more informed outage diagnosis process by assisting the OCEs. Though alerts are voluminous, they are structured and capture the real-time information about the degradation in the services, as the detailed symptoms may not be available until some time has elapsed since the fault occurred. Outage reports, on the other hand, contain rich information about past incidents from multiple services, but are written in semi-structured natural language text. Hence, combining this information with structured data can help engineers navigate and diagnose outages more effectively.
In this paper, we build an Experience assisted Service Reliability system against Outages (ESRO) that retrieves similar outages based on a comparison of current alerts fired during an ongoing outage with symptoms of previous outages. It then recommends potential root causes and remediation techniques based on the retrieved outages. The novelty of our approach lies in integrating the structured information that is readily available in real-time during an incident along with the semi-structured historical outage reports to improve the experience of root cause analysis and performance diagnosis for the reliability engineers. The advantage that ESRO brings is the combination of outage-specific real-time information from the structured alerts data along with data-driven experience obtained from the past outages. We build a causal graph to represent the dependence relationships among the corresponding alerts, and a knowledge graph to represent the outage reports. The contribution of both sources of data is brought about by integrating the graphs in a way such that the alerts responsible for an outage are linked to the corresponding node represented in the knowledge graph. The linkage between the two graphs is accomplished through a temporal overlap of the outages with the alerts. To improve the prediction accuracy during inference, our approach also builds a predictor for an outage group (set of similar outages) using alerts, which ensures that only the alerts indicative of an outage are given more weightage in prediction. We have evaluated ESRO through real production outage data obtained from a large SaaS company, collected over a course of 2 years. We have observed at least 16% improvement in accuracy in recommending potential root causes over state-of-the-art baselines. We further demonstrate the efficacy of our approach through real outage examples in the production scenario.
The key contributions are summarized as follows:
* We develop a system that contributes to the development of a more effective performance diagnosis methodology by leveraging structured and real-time alerts data along with outage reports which contain semi-structured natural language text.
* We build a causal graph to represent the alerts and a knowledge graph to represent the information present in the outage reports succinctly. We then merge the two graphs in a novel way to form a linkage between the alerts in the causal graph to the outage symptoms in the knowledge graph.
* Inference during an ongoing outage makes use of only the available alerts to rank past outages with similar set of alerts and symptoms.
* Experiments on a real outage dataset show the advantages of ESRO over the baselines. We observed a 16% improvement in predicting root causes and a 38% improvement in predicting mitigation steps. We also present a qualitative review of a few real outages.
## II Related Work
Root Cause Analysis has been studied in the literature in the context of microservices and cloud services [16, 18, 19, 21, 22, 23, 24, 25]. Several works [16, 21, 23, 26, 27] have utilized time series KPI metrics data obtained from _Prometheus_ to predict the root cause metric and service. These works usually build a causal graph among the performance metrics using a causal discovery algorithm [28]; the graph is then traversed at inference time to locate root cause metrics. Qiu _et al._[29], on the other hand, use domain knowledge in the form of a knowledge graph to improve the causal graph learnt by the causal discovery algorithms and follow similar graph traversal algorithms to locate the root causes.
Works like AirAlert [14], eWarn [18] and Fog of War [15] use alerts from multiple services for performance diagnosis. These works either extract suitable features from alerts or build a dependency graph structure for performance diagnosis of cloud services, and report the root cause alerts. However, in some cases, the root cause service may not trigger any alerts, making it difficult to correctly identify the faulty service solely through alert-based methods. Moreover, there is a possibility that the root cause service alert was triggered outside the designated time window used for creating alert-based features. In such cases, outage reports containing historical information
provide more information about the root cause service and the remediation technique that needs to be followed, by comparing with similar past outages.
Works like [17, 19, 20, 30] mine information from the natural language text present in the outage reports for various performance diagnosis tasks. Saha _et al._[19] build a knowledge graph from the past semi-structured incident reports by extracting the symptom and root cause information from them using topic models [31] and language models [32]. They then run inference on the graph to yield the most probable root cause. Ahmed _et al._[20] used Large Language Models (LLMs) to understand the abilities of the outage report in predicting root causes and mitigation steps. Extensive experimentation with various language models suggests that outage reports are very useful in outage diagnosis. Another line of research uses the outage diagnosis data to learn correlations among outages using deep-learning techniques to perform outage triaging [17, 30, 33]. Liu _et al._[34] have attempted to correlate alerts with support tickets, but their design objectives differ from ours.
However, none of the above methods use any structured data available during a fault along with outage reports to recommend root cause and mitigation steps. Semi-structured information, such as outage analysis reports, is generated only after the fault has been mitigated or after a certain period of time following the occurrence of the outage. Thus, semi-structured data is unavailable during inference and hence we need to use structured alerts data which flows in real-time. The prior works do not address this issue.
## III Data Description
We give a brief overview of the different sources of data available to us for modelling ESRO. It uses structured data in the form of alerts obtained in real-time, along with the historical outage reports documented by the SREs after the mitigation of an outage.
1. **Alerts Data:** Alerts are fired by an alerting mechanism when the monitored metric values for a service component within a system exceed predetermined thresholds. These alerts can be of varying severity, and contain information such as the alert description, the condition that triggered the alert, the severity level, the service affected, and the timestamp at which the alert was generated. For instance, a microservice may trigger an alert if the buffer queue size surpasses a certain value. The alerts data provide critical insights into the system's performance and any potential issues that may arise. They are available in real-time and form the structured data.
2. **Outage Reports Data:** Outage reports are created by the SREs after an outage has been resolved, through a comprehensive analysis and summary of the possible causes. These documents capture the discussions during the outage and provide detailed insights into the resolution process. These reports provide a detailed description of the symptoms and their impact, as well as the root cause and remediation techniques employed. Additionally, the reports contain timestamps for when the outage was recorded and when it was resolved. Outage reports provide a duration during which the error in the system escalated, and hence the alerts generated during this period can be utilized to identify potential root causes. Moreover, by correlating outages with similar symptoms but for different system components, outage reports can suggest remediation techniques to mitigate similar occurrences in the future.
## IV Solution Overview
Our approach involves building a system called ESRO that is capable of predicting potential and likely root causes and remediation steps during an outage. A schematic overview of ESRO is shown in Figure 2. Our proposed solution combines structured alerts data and semi-structured outage reports to accurately group similar outages that had occurred in the past and predict the likely root causes and remediation strategies from the set of similar past outages. The structured alerts data provides recent information on the system performance, while the semi-structured outage reports data offers detailed and comprehensive information on past outages.
In order to capture the relevant information from structured alerts and outage reports data, we construct a causal graph (CG) and a knowledge graph (KG), respectively. The CG represents alerts as nodes and edges as causal relationships between alerts, while the KG summarizes the rich information present in the outage reports data. The training phase of ESRO merges the two graphs, creating a comprehensive understanding of system behaviour during an outage, which facilitates accurate prediction of root causes and remediation steps. This merging of the causal graph and the knowledge graph along with their use during inference time is a key novelty of ESRO, enhancing the effectiveness of our proposed solution.
Fig. 1: _Types of data available during an outage_
Our approach involves: (i) construction of the individual and merged graphs, and (ii) leveraging the graph for inference at the time of an outage. Each of these steps can be further divided into sub-steps, which are described in detail. Experiments on real-world data demonstrate the effectiveness of our approach, which has the potential to significantly improve the reliability of complex distributed systems by enabling faster and more accurate resolution of outages.
## V Graph Construction Phase
In this section, we present the methodology for representing the information present in the alerts and the outage reports via graphs to facilitate root cause and remediation steps prediction. This involves multiple sub-tasks, including information extraction from incident reports and alerts, constructing individual graphs and finally merging the two graphs. Next, we delve into a detailed description of each sub-step.
### _Information Extraction_
1. **Alerts:** The set of generated alerts is grouped into the nearest time window of duration \(t\) based on the timestamp at which each alert was fired. For our experiments, we set \(t\) to 15 minutes. This grouping yields a list of alerts fired in each time window. We then construct an indicator dataset, where each row holds the indicator function of every alert for a specific time window of \(t\) minutes; the number of columns equals the number of unique alerts fired, while the rows represent the \(t\)-minute time windows (see the first sketch after this list). The resulting dataset is considerable in size and potentially noisy, with certain columns exhibiting low variability and limited significance. Hence, we filter the number of time windows and the number of unique alerts to remove noise. Specifically, we remove the unique alert columns whose alerts fired fewer than 10 times in the entire data collection period of over 1.5 years, subject to the condition that no such alert fired during an outage. This ensures that only the relevant and frequent alerts are considered. We apply an additional filtering criterion and remove 95% of the time-window rows where no alerts fired even though there was an outage; these correspond to situations where the data does not show the relevant alerts.
2. **Outage Reports:** The available outage reports are parsed to create a JSON, where symptoms, root cause, and remediation steps are the corresponding topics with their descriptions. Instead of utilizing the report's lengthy description of these attributes, which contains various technical jargon, we aim to extract a summary that is more concise, comprehensive, and free of domain-specific terminology. To accomplish this, we use a pre-trained Bart-large summarization model [35] to extract a shorter summary of each outage's respective symptom, root cause, and remediation sections (see the second sketch after this list). We employ the abstractive summarization technique instead of the extractive summarization method for three reasons: (i) the original report contains reliability-specific jargon that makes it difficult for an extractive summary of an outage report to capture the details succinctly, (ii) abstractive summarization can interpret information from multiple sources, making it highly versatile in handling diverse and complex content, and (iii) it can condense lengthy and convoluted text, which was more effective for our specific use case.
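To make the preprocessing concrete, below is a minimal sketch of the indicator-dataset construction from step 1, assuming the raw alerts live in a pandas DataFrame; the column names and values are illustrative, not ESRO's actual schema.

```python
import pandas as pd

# Hypothetical raw alert log: one row per fired alert.
alerts = pd.DataFrame({
    "alert_name": ["high_cpu", "queue_full", "high_cpu"],
    "fired_at": pd.to_datetime(
        ["2021-03-01 10:07", "2021-03-01 10:12", "2021-03-01 10:31"]
    ),
})

# Snap each alert to its t = 15-minute window, then pivot into an indicator
# matrix: rows are time windows, columns are unique alerts, entries are 0/1.
alerts["window"] = alerts["fired_at"].dt.floor("15min")
indicator = pd.crosstab(alerts["window"], alerts["alert_name"]).clip(upper=1)

# Noise filtering: drop alert columns that fired fewer than 10 times overall
# (the paper additionally keeps such a column if it fired during an outage).
indicator = indicator.loc[:, indicator.sum(axis=0) >= 10]
```

The abstractive summarization in step 2 can be sketched with the Hugging Face pipeline API; `facebook/bart-large-cnn` is one publicly available Bart-large summarization checkpoint and only an assumption about the exact model [35] used.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

report = {  # illustrative report sections after JSON parsing
    "symptom": "Users in one region saw elevated error rates on login ...",
    "root_cause": "A misconfigured deployment exhausted the connection pool ...",
    "remediation": "Rolled back the deployment and recycled the pool ...",
}

# One concise, jargon-light summary per section of the outage report.
summaries = {
    section: summarizer(text, max_length=60, min_length=10)[0]["summary_text"]
    for section, text in report.items()
}
```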
### _Causal Graph (CG) Construction_
The indicator dataset of alert occurrences obtained in Section V-A forms the input to a standard PC algorithm [28], which yields the causal graph between the alerts. Here, the alerts are the nodes in the graph and an edge \(a\to b\) between two alerts \(a\) and \(b\) indicates that alert \(b\) was caused by alert \(a\).
The PC algorithm is a constraint-based causal discovery algorithm that identifies the dependence relationships between pairs of alerts in the alert occurrence data. It starts with a completely connected graph between the alerts, and iteratively computes the skeleton graph by removing relevant edges inferred through hypothesis testing using conditional independence (CI) tests. Here, we have used the \(\chi^{2}\) test since the data is discrete, that is, either the alert was triggered at a specified time or not. Specifically, the PC algorithm runs CI tests of the form \(p(y\perp\!\!\!\perp x|S)\), where \(x\) and \(y\) are two alerts
Fig. 2: _ESRO Pipeline consisting of two phases: (i) Graph Construction Phase - previous alerts and outage reports are utilized to construct the merged CK Graph and train an outage cluster predictor, (ii) Inference Phase - likely root causes and remediation steps are predicted from the real-time alerts for an outage_
under consideration while \(S\) is a set of alerts conditioned upon (also called separating set). The algorithm starts with an empty separating set (\(S=\phi\)) and increases the cardinality of \(S\). Once the probability \(p\) is greater than a confidence threshold \(\alpha\), the PC algorithm removes the edge between \(x\) and \(y\). After constructing the skeleton graph, it then orients the direction of the edges using a set of rules [36]. Hence, the result of employing the PC algorithm is a Completed Partially Directed Acyclic Graph (CPDAG), where the nodes correspond to distinct alerts. Within this graph, certain edges possess directed orientations, while others remain bidirectional. Bidirectional edges signify instances where the PC algorithm couldn't ascertain the causality direction between two nodes based on the available dataset.
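As a concrete illustration, the open-source causal-learn library [40] (which the implementation in Section VII uses) exposes the PC algorithm directly. The snippet below is a sketch on synthetic data; the `indep_test` naming and return-value conventions should be checked against the installed library version.

```python
import numpy as np
from causallearn.search.ConstraintBased.PC import pc

# Synthetic stand-in for the binary alert indicator matrix of Section V-A:
# rows are time windows, columns are alerts.
data = np.random.default_rng(0).integers(0, 2, size=(1000, 20))

# Chi-squared CI test for discrete data; alpha is the confidence threshold.
cg = pc(data, alpha=0.05, indep_test="chisq")

# cg.G is the resulting CPDAG over the 20 alert nodes: some edges are
# oriented, others stay undirected where causality could not be decided.
print(cg.G)
```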
Since the causal graph represents the causal dependence relationships among the alerts, an ancestral alert can be identified by traversing the graph. An ancestral alert is one that was recorded first due to a fault in the root cause service and that in turn resulted in the firing of other alerts in the impacted services. It should be noted that the service responsible for firing the ancestral alert might not be the root cause service. The causal graph can thus be used in real-time by traversing the alert nodes that were triggered during a fault.
### _Knowledge Graph (KG) Construction_
The outage reports provide rich historical information about the symptoms, the root causes, and how the symptoms were resolved. The entire information of an incident report can be represented through a knowledge graph with appropriate relations between the nodes. Hence, we construct a knowledge graph where the symptom, root cause and remediation steps for each outage are represented as individual nodes. Furthermore, we add has-root-cause and has-remediation edges between each symptom and, respectively, its corresponding root-cause node and remediation node extracted from the same outage report.
However, such a graph would have separate connected components for each outage, implying similar outages will not be grouped together. As a result, the knowledge graph construction step also groups similar outages into a cluster. Based on the sentence embeddings of the corresponding abstractive summary of the symptom nodes, root cause nodes, and remediation nodes, we cluster them individually. We first tokenize each node description (symptom, root cause and remediation summary), and then use pre-trained contextualized BERT embeddings [37] (transformer based masked-language model) for each token/word in the summary to get the word embeddings. The advantage of using pre-trained BERT embeddings is that the embedding of each word is generated based on the context in which it has been used. We finally compute the node embedding by averaging the word embeddings for all the tokens/words present in the summarized node description. We refrain from using Sentence-BERT [38] since it requires structural and semantic flow in the sentence to obtain a high quality embedding. Such a semantic flow might not be present in the abstractive summary of the symptom, root cause or remediation.
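A minimal sketch of this node-embedding step with the Hugging Face transformers API; `bert-base-uncased` is an assumed checkpoint, since the text only specifies pre-trained BERT embeddings [37].

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def node_embedding(summary: str) -> torch.Tensor:
    """Average the contextualized BERT token embeddings of a node summary."""
    inputs = tokenizer(summary, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)            # (768,)

emb = node_embedding("database connection pool exhausted by a slow query")
```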
For each individual node type (symptom, root cause and remediation), we perform separate Agglomerative Hierarchical Clustering [39] on the node sentence-description embeddings derived above. We then compute the optimal number of clusters \(K\) based on the Silhouette score, such that the score with \(K\) clusters is within 5% of the maximum score possible. This is done in order to reduce the overall number of clusters. Finally, we combine the individual clusters formed for symptoms, root causes and remediations in such a way that two outages (\(\mathcal{X}\), \(\mathcal{Y}\)) are grouped together in a cluster \(\mathcal{K}\) if they were in the same cluster due to either their symptom (\(\mathcal{K}_{symp}\)), root cause (\(\mathcal{K}_{root-cause}\)) or remediation (\(\mathcal{K}_{rem}\)).
\[(\mathcal{X},\mathcal{Y})\in\mathcal{K}\Leftrightarrow(\mathcal{X},\mathcal{Y })\in\{\mathcal{K}_{symp}\vee\mathcal{K}_{root-cause}\vee\mathcal{K}_{rem}\} \tag{1}\]
The intuition behind Equation 1 is that an outage can be related to another if their symptoms are similar, if the root causes behind the incidents are similar, or even if the remediation techniques used to mitigate the incidents were similar. The knowledge graph hence represents the rich semi-structured information obtained from the outage reports.
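The cluster selection and the merge of Equation 1 can be sketched as follows. Grouping outages from the pairwise relation in Equation 1 amounts to taking its transitive closure, which a union-find structure computes; `k_range` and the function names are illustrative.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import silhouette_score

def cluster_labels(emb: np.ndarray, k_range=range(2, 150)) -> np.ndarray:
    """Pick the smallest K whose silhouette score is within 5% of the best."""
    scores = {
        k: silhouette_score(emb, AgglomerativeClustering(n_clusters=k).fit_predict(emb))
        for k in k_range
    }
    best = max(scores.values())
    k_opt = min(k for k, s in scores.items() if s >= 0.95 * best)
    return AgglomerativeClustering(n_clusters=k_opt).fit_predict(emb)

def merge_clusterings(labelings) -> list:
    """Union-find over outages: i and j share a final cluster if any of the
    symptom/root-cause/remediation clusterings puts them together (Eq. 1)."""
    n = len(labelings[0])
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for labels in labelings:
        first_seen = {}
        for i, c in enumerate(labels):
            if c in first_seen:
                parent[find(i)] = find(first_seen[c])
            else:
                first_seen[c] = i
    return [find(i) for i in range(n)]
```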
### _Merged Graph Construction_
The individual graphs (causal graph and knowledge graph) constructed above form the basic components of the merged graph. Individually, they represent rich information but lack usability. The merging process combines the benefits of both individual graphs and forms a comprehensive model to locate similar outages and recommend potential root causes and remediation techniques for an impending outage. We implement a novel mechanism to merge these two graphs, in which we link the alerts in the causal graph to the symptoms in the knowledge graph. The idea behind this is that when a triggered alert in the system indicates some visible symptom/anomaly in the system, the edge between the alert and the symptom reflects this phenomenon. Since an outage report does not name the exact set of alerts that caused the outage, we use the timestamps of the outage and of the fired alerts to link the causal and knowledge graphs.
For each outage, we extract its start time and resolution time from the outage report. We then filter all the alerts that were triggered at least once during the occurrence of the outage or in the hour2 before its start. These alerts are indicative of the outage; hence, if a combination of these alerts is triggered again, there is a high probability that a similar outage has occurred. Thus, for each alert in the list of filtered alerts, we add a caused-outage edge between the alert and the corresponding outage symptom node(s). This gives us the merged graph (Figure 3), which we henceforth term the _CK graph_, and which combines the structured as well as the semi-structured data.
Footnote 2: We select a duration of one hour for the outages as empirically we observed that most catastrophic issues happen within 1 hour after the root cause
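A sketch of this temporal-overlap linking over a networkx representation of the graphs; the field and relation names are illustrative.

```python
import networkx as nx
from datetime import timedelta

def link_alerts_to_symptoms(ck: nx.DiGraph, outage: dict, alert_log: list):
    """Add a caused-outage edge from every alert fired during the outage,
    or up to one hour before its start, to the outage's symptom node(s)."""
    lo = outage["start_time"] - timedelta(hours=1)
    hi = outage["resolution_time"]
    fired = {a["name"] for a in alert_log if lo <= a["fired_at"] <= hi}
    for alert in fired:
        for symptom in outage["symptom_nodes"]:
            ck.add_edge(alert, symptom, relation="caused-outage")
```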
### _Outage Cluster Predictor_
We link the causal graph with the knowledge graph based on the temporal overlap of the alerts fired during the occurrence of an outage. However, some alerts may have been fired that were not related to the outage occurring at the time. As a result, forecasting a single prior outage that is likely to occur when the alerts are triggered during inference time results in significant noise. Thus, we forecast a group of past outages by modeling only the most indicative alerts. However, the outage reports we have are not inherently clustered based on the set of alerts fired or the type of symptoms. Hence, we use the clusters that were defined in the knowledge graph (Section V-C) as ground truth to train a predictor that predicts the cluster for an outage defined by a given set of alerts during inference. An additional advantage of using such a predictor is that it inherently assigns an outage prediction weight to each alert. Hence, not only the temporal overlap of the alerts with the outage is used, but also a weight is assigned to each alert, which helps the model further rank the linked symptoms and their root causes.
Similar to Section V-D, we find the set of alerts triggered during the outage and in the hour before the outage started, and create a dataset. Here, the corresponding ground-truth label that we want to predict is the outage cluster, which we computed in Section V-C. We then train a Random Forest classifier on this dataset that predicts the cluster to which an outage belongs, depending on the set of alerts fired. The maximum depth of the model is 25 while the number of estimators is 50. We utilize this model during the inference stage, which we describe below.
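A sketch of the predictor with scikit-learn, using the hyperparameters stated above; the matrices are synthetic stand-ins for the per-outage alert indicators and merged-cluster labels.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(182, 330))  # one alert-indicator row per outage
y = rng.integers(0, 53, size=182)        # merged-cluster label (Section V-C)

predictor = RandomForestClassifier(n_estimators=50, max_depth=25)
predictor.fit(X, y)

# predict_proba supplies the per-cluster scores used as Cluster_Rank_2 in
# Algorithm 1; predictor.classes_ maps probability columns to cluster ids.
cluster_probs = predictor.predict_proba(X[:1])
```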
## VI Inference Phase
The constructed CK graph represents the temporal dependencies between the alerts and the past outages, with the similar outages clustered based on their symptoms, root causes and their remediation techniques. The CK graph is then used by the inference pipeline to suggest possible root causes for a new outage based on the alerts triggered at the time, which is the only information available during inference. This realistic setting sets the approach apart from prior works that rely on symptom descriptions to infer potential root causes and remediation steps, which are not available until some time after the outage has started.
In this section, we describe multiple inference methodologies, which are compared in Section VIII. These methodologies serve as internal baselines, while Section VI-C describes our proposed inference method. Below, we illustrate and/or exemplify the inference methodologies only for recommending the root cause; analogous steps follow for the remediation techniques.
### _Path-based Inference (Path)_
The causal graph component of the CK graph is traversed using a path-based inference technique to find the collection of candidate ancestral alerts, with the starting nodes being the alerts triggered during an outage. It leverages the structural and causal information in the alerts to predict likely root causes and remediation steps. A traversal of the causal graph results in a collection of alert nodes that are directly linked to the symptom nodes in the knowledge graph component of the CK graph. We restrict our traversal to only those symptom nodes that are reachable in \(k\) (or fewer) hops, where \(k\) is a hyperparameter. The potential root causes are the corresponding root causes for the symptoms reached through the traversal strategy. For each potential root cause \(R\), a path-based score is defined.
\[Score_{path}(R)=\sum_{i}\frac{1}{d(path_{i}\to symptom(R))} \tag{2}\]
where \(d(path_{i})\) is the length of path \(i\) from the starting alert node to the root cause node. The inverse of the path length is summed over all paths leading to the same root cause node; Figure 4 demonstrates this. A higher root-cause score represents higher confidence in the prediction. This is based on two hypotheses: (i) the more paths there are from the triggered alerts to a certain root cause, the higher its likelihood of being the faulty service, and (ii) a shorter path represents a more direct correlation between the alert and the root cause's historical co-occurrence and should hence be given a higher weight. This inference method utilizes only the information present in the structured data, that is, the alerts fired and their causal relationships, to predict likely root causes.
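A sketch of Equation 2 over a networkx CK graph. Following Figure 4, the distance counts the hops to the symptom plus the final has-root-cause hop; node and attribute names are illustrative.

```python
import networkx as nx
from collections import defaultdict

def path_scores(ck: nx.DiGraph, fired_alerts, symptom_nodes, k: int) -> dict:
    """Sum 1/d over every path of at most k hops from a fired alert to a
    symptom node, crediting the symptom's linked root cause(s)."""
    scores = defaultdict(float)
    for alert in fired_alerts:
        if alert not in ck:
            continue
        for symptom in symptom_nodes:
            for path in nx.all_simple_paths(ck, alert, symptom, cutoff=k):
                d = len(path)  # hops to the symptom plus the root-cause hop
                for _, rc, attrs in ck.out_edges(symptom, data=True):
                    if attrs.get("relation") == "has-root-cause":
                        scores[rc] += 1.0 / d
    return dict(scores)
```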
### _Similarity-Based Inference (Sim)_
The similarity-based inference method utilizes language processing techniques on the symptoms of previous outages to predict likely root causes and remediation steps. It implicitly uses the information present in the semi-structured data, that is, the graph generated from the outage reports.
We extract the title description of each triggered alert and use BERT to identify contextual embeddings for each
Fig. 3: _CK Graph after merging causal graph and knowledge graph. Pink edges connect the alerts in causal graph to symptom nodes in knowledge graph_
word/token. The embeddings for all the words are averaged to compute the alert embedding. Finally, the average of the alert embeddings for all alerts triggered during the outage is used as the input query to the inference pipeline. We perform similar computations to obtain the symptom embeddings for each outage in the CK graph. Finally, we compute the cosine similarity of the input query embedding to each symptom embedding, and assign each root cause a score equal to the similarity score of its corresponding symptom.
\[Score_{Sim}(R)=cosine(emb_{alerts},emb_{symptom(R)}) \tag{3}\]
Here \(symptom(R)\) denotes the symptom corresponding to a root cause \(R\) in the CK graph. A higher score indicates a more similar symptom among the past outages and hence, likely, a more similar root cause.
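Equation 3 then reduces to a cosine similarity between the averaged alert-title embedding and each stored symptom embedding. This sketch reuses the hypothetical `node_embedding` helper from the earlier BERT sketch; `alert_titles` and `symptom_embeddings` are assumed inputs.

```python
import torch
import torch.nn.functional as F

# alert_titles: titles of the alerts fired during the ongoing outage.
# symptom_embeddings: {symptom_node: tensor} precomputed over the CK graph.
query = torch.stack([node_embedding(t) for t in alert_titles]).mean(dim=0)
scores = {
    s: F.cosine_similarity(query, emb, dim=0).item()
    for s, emb in symptom_embeddings.items()
}
best_symptom = max(scores, key=scores.get)
```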
### _Cluster based Inference (Clust)_
Our primary hypothesis is that combining the structured and current source of data (the alert time series, i.e., the causal graph) with the semi-structured and historical source of data (the outage reports, i.e., the knowledge graph) for inference will result in better predictions than using each separately. In _Clust_, ESRO utilizes the entire CK graph to predict the potential root causes and remediation techniques.
Similar to the path-based inference method, _Clust_ traverses the causal links between the alerts to identify the set of ancestral alerts that link to the symptom nodes in the knowledge graph. It identifies the set of all symptom nodes that may be reached in \(k\) or fewer hops from the triggered alert nodes. However, unlike _Path_, it reports a ranked list of the clusters to which the outages corresponding to the symptom nodes belong. The weight given to each cluster is proportional to the number of times the cluster was reached via graph traversal from the triggered alerts, normalized over all clusters (Algorithm 1).
The above set of clusters uses the historical temporal overlap between the alerts fired and the outages to narrow down a set of potential past outages to which the current outage is similar. However, to account for the importance of the alerts fired in predicting the outages, we additionally employ the outage cluster predictor described in Section V-E. We create a test instance with indicator variables over all potential alerts, using the alerts triggered at inference time, and run the outage cluster prediction model to predict the probability that the fired alerts correspond to each cluster.
We combine the two ranked lists of clusters by adding up the individual weights. Finally, from the top-\(L\) clusters in the combined ranking, we find the most similar symptoms using the NLP-based similarity technique described in Section VI-B, and report the corresponding root causes and remediation steps. Thus, _Clust_ utilizes the entire CK graph in predicting the potential root causes and their remediation steps. Algorithm 1 describes the inference method.
```
Input: CK Graph \(\mathcal{G}\), outage cluster predictor \(\mathcal{M}\), set of all alerts \(\mathcal{U}\), set of fired alerts \(\mathcal{A}\), \(k\), \(L\)
Output: Potential root causes and remediation steps
1.  \(Cluster\_Rank_{1}\leftarrow\{\}\)
2.  \(Cluster\_Rank_{2}\leftarrow\{\}\)
3.  \(\mathcal{C}\leftarrow\) set of all outage clusters in \(\mathcal{G}\)
4.  foreach alert \(a\in\mathcal{A}\) do
5.      traverse \(\mathcal{G}\) up to \(k\) hops to locate symptom nodes \(\mathcal{S}\)
6.      \(Cluster\_Rank_{1}[get\_cluster(\mathcal{S})]\) += 1
7.  normalize \(Cluster\_Rank_{1}\)
8.  \(X\leftarrow\mathds{1}_{\mathcal{U}}(a)\ \forall a\in\mathcal{A}\)
9.  \(Cluster\_Rank_{2}\leftarrow\mathcal{M}(X)\)
10. \(Cluster\_Rank[i]\leftarrow Cluster\_Rank_{1}[i]+Cluster\_Rank_{2}[i]\ \forall i\in\mathcal{C}\)
11. \(\mathcal{C}_{L}\leftarrow\) top-\(L\) clusters in \(Cluster\_Rank\)
12. \(\mathcal{E}\leftarrow create\_embedding(\mathcal{A})\)
13. ranked symptoms \(\mathcal{S}^{\prime}\leftarrow Sim(\mathcal{E},emb_{s})\ \forall s\in\mathcal{C}_{L}\)
14. \(\mathcal{R}\leftarrow\) root causes for \(\mathcal{S}^{\prime}\)
15. \(Rem\leftarrow\) remediation steps for \(\mathcal{S}^{\prime}\)
16. return \(\mathcal{R}, Rem\)
```
**Algorithm 1** Clust Inference Method
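A compact Python rendering of Algorithm 1; `symptoms_within_k_hops`, `cluster_of`, `rank_symptoms_by_similarity`, `root_cause_of`, and `remediation_of` are hypothetical wrappers around the pieces sketched in Section V.

```python
from collections import Counter

def clust_inference(ck, predictor, all_alerts, fired_alerts, k=9, L=3):
    # Rank 1: normalized counts of clusters reachable within k hops.
    rank1 = Counter()
    for a in fired_alerts:
        for s in symptoms_within_k_hops(ck, a, k):
            rank1[cluster_of(s)] += 1
    total = sum(rank1.values()) or 1
    rank1 = {c: v / total for c, v in rank1.items()}

    # Rank 2: outage-cluster-predictor probabilities on the alert indicator.
    x = [[int(a in fired_alerts) for a in all_alerts]]
    rank2 = dict(zip(predictor.classes_, predictor.predict_proba(x)[0]))

    # Combine by adding weights, keep the top-L clusters, then rank their
    # symptoms by text similarity (Equation 3) and read off the answers.
    combined = {c: rank1.get(c, 0.0) + rank2.get(c, 0.0)
                for c in set(rank1) | set(rank2)}
    top_clusters = sorted(combined, key=combined.get, reverse=True)[:L]
    ranked_symptoms = rank_symptoms_by_similarity(fired_alerts, top_clusters)
    return ([root_cause_of(s) for s in ranked_symptoms],
            [remediation_of(s) for s in ranked_symptoms])
```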
## VII Experimental Setup
In this section, we outline the experimental process and the setup we followed. We have implemented ESRO3 in _Python_ and used causal-learn4[40], an open-source library, to construct the causal graph between the alerts. The natural language models used to compute the summaries of the outage reports and to create sentence embeddings were obtained from the open-source implementation provided by Hugging Face [41]. We ran ESRO on a system with an Intel Xeon 8124M 3.0 GHz CPU with 72 cores.
Fig. 4: _The figure shows a demonstration of the path based inference approach, where Alert 1 is fired during an outage. The inference method reaches two root causes Root Cause 1 (RC1) and Root Cause 2 (RC2) from Alert 1. There is only one 2-length path to RC2 from Alert 1, while there are two paths to RC1, a 2-length path and a 4-length path. Hence, the score for RC1 is 0.75 while the score for RC2 is 0.5._
### _Data_
The research was performed on a dataset obtained from a large SaaS company that operates a large-scale cloud infrastructure with thousands of servers and multiple data centres globally. ESRO focused on a production service comprising over 40 microservices deployed via Kubernetes, serving millions of users daily. The dataset contained alerts as well as outage reports from October 2020 to June 2022, a period of almost two years.
1. **Alerts Data:** The dataset contained \(\sim 44,000\) alerts logged by 13 distinct monitors. During the entire time period, \(\sim 940\) unique alerts were fired. However, after filtering the alerts data according to Section V-A, only \(330\) unique alerts remained for consideration, indicating that many alerts occur only a few times with no predictive power for an outage in the same service.
2. **Outage Reports:** The production level service outage reports comprised a total of 182 reports, with each report containing information on the symptoms of the outage, its impact, the root cause, and the remediation strategies involved in mitigating the consequences of the outage. There is also information on the affected services, the duration of the outage, and related incidents. Over the period of two years, there were around 85 unique symptoms and 95 unique root causes5. Footnote 5: While the exact manifestation of symptoms and root causes varies, itβs worth noting that several instances may point towards analogous issues.
### _Evaluation Methodology_
We opt for a 'Leave-One-Out' strategy to test our model, which has been used in prior works [19]. While constructing the CK graph, we consider neither the outage report for the incident to be tested nor the alerts fired during the incident's duration. We retrieve the top-k root causes and remediation steps of historical incidents where the symptoms were similar. It needs to be noted that we do not suggest a new root cause or remediation step, but retrieve the most relevant root causes and remediation steps from past incidents. We evaluate how close the predicted root causes and remediation steps are to the ground-truth root cause and remediation steps respectively. Results in Section VIII show the evaluation metric's maximum value over the top-k (here, k=5) predictions for all the different inference methodologies. As in the evaluation of a retrieval task, success is essentially counted if there is a meaningful hit among the top-k predictions. We present an averaged evaluation over 50 randomly selected outages.
### _Evaluation Metrics_
We report Rouge (Recall-Oriented Understudy for Gisting Evaluation) [42] scores to measure the similarity of the predicted root causes and remediation steps to the ground-truth root causes and remediation steps respectively. Both of these are represented as text sentences. Rouge is used to compare a candidate text to a set of reference texts. Specifically, we choose the Rouge-L and Rouge-1 scores [42]. Rouge-L takes into account sentence-level structural similarity and identifies the longest co-occurring in-sequence n-grams based on the Longest Common Subsequence (LCS) [43], while Rouge-1 measures the number of matching 1-grams between the two texts. Since the inference algorithm outputs a text prediction of the root causes and the remediation steps (and not a topic) based on a retrieval task, it might not match the ground-truth root cause exactly, and hence a hit@top-k metric might not be meaningful. Instead, a Rouge score computes the closeness of the predicted root causes and remediation steps to the ground truth.
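For instance, with the `rouge-score` package (one common implementation; the paper does not name the library it used), the two scores can be computed as follows.

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
result = scorer.score(
    "database connection pool exhausted by a long-running query",  # reference
    "a long-running query exhausted the database connections",     # prediction
)
print(result["rouge1"].fmeasure, result["rougeL"].fmeasure)
```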
We compare the summarized text of the predicted root causes and remediation steps against the summarized text in the ground-truth report, where the summary was computed using the methodology described in Section V-A. The main reason for comparing the extracted summary to the ground truth is the presence of production-level jargon in the full text, which may affect the accuracy of our model. The extracted summary captures the crux of the root cause, thus allowing a more relevant comparison.
To assess the prediction quality of the Outage Cluster Predictor model, we compute the top-K precision of its predictions at multiple values of K, where we consider a prediction to be correct if the actual cluster is within the top-K predictions of the model.
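A sketch of this metric on top of the predictor's probability outputs; note that `predict_proba` columns are ordered by `classes_`, which is used here to map column indices back to cluster labels.

```python
import numpy as np

def top_k_precision(probs: np.ndarray, classes: np.ndarray,
                    y_true: np.ndarray, k: int) -> float:
    """Fraction of samples whose true cluster appears among the model's
    k most probable clusters."""
    top_idx = np.argsort(probs, axis=1)[:, -k:]
    return float(np.mean([y in classes[idx] for y, idx in zip(y_true, top_idx)]))
```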
### _Baselines_
To evaluate the efficacy of our model, we implemented two baselines, which are state-of-the-art approaches that utilize outage reports [19]. Since no public code was available, we implemented them to the best of our understanding. The baselines are described as follows:
1. **Incident Search (IS):** The symptom from the outage report of the test outage is used to search over the repository of all the remaining symptoms by comparing the similarity of pre-trained RoBERTa [32] embeddings; the top-5 similar symptoms are returned and evaluated. For this, we have used FAISS6[44], a library developed by Facebook for efficient similarity search. Footnote 6: [https://faiss.ai/](https://faiss.ai/)
2. **GCN:** Symptoms, root causes and the remediation steps in the incident reports are represented as individual nodes, with the initial feature vectors being the average GloVe embeddings [45] of the words in their respective sentences. Edges between the different nodes (intra-symptom, intra-root-cause, intra-remediation, and between symptoms and root causes) are adjusted as recommended by Saha _et al._[19]. We train a 2-layer Graph Convolution Network (GCN) [46] model with a 16-dimensional hidden layer followed by a Dense layer. We apply a contrastive loss \(\mathcal{L}\) using 10 randomly sampled false root causes and the true root cause. \[\mathcal{L}=-\log\frac{e^{sim(x_{s},x_{r})}}{e^{sim(x_{s},x_{r})}+\sum_{j=1}^{10}e^{sim(x_{s},x_{r,j})}}\] (4)
where \(x_{r}=\) the GCN representation of the root cause, \(x_{s}=\) the GCN representation of the symptom, and \(x_{r,j}=\) the GCN representation of the \(j\)-th false root cause. During inference, we compute the cosine similarity of the input symptom's GCN representation with all the symptom representations in the graph. The corresponding root causes and remediation steps for the symptom nodes having maximum cosine similarity are considered as output.
Both of the aforementioned baseline methods utilize the knowledge graph component to predict both root causes and remediation steps. In the case of Incident Search, RoBERTa embeddings are employed to represent symptoms, root causes, and remediation strategies. Meanwhile, GCN employs a graph convolution network to train embeddings for knowledge graph nodes, enabling the retrieval of similar symptom nodes for a given incident. These baselines are aligned with our study as they try to retrieve the most relevant root causes and remediation options. Given that outage reports exclusively provide the information about the actual root cause and the associated remedial actions for an outage, approaches that rely only on the causal graph of alerts are unsuitable as baselines. In these approaches [14, 47], only the root cause alert can be retrieved from the causal graph, but without information about the actual root cause and the associated remedial actions it is neither useful nor can it be evaluated against ground truth. Therefore, we opt not to utilize baselines [14, 47] that rely solely on alerts to predict root causes.
### _Design Choices and Hyperparameters_
* The optimal numbers of clusters from Section V-C are as follows: (i) \(\mathcal{K}_{symp}=63\), (ii) \(\mathcal{K}_{root-cause}=100\), (iii) \(\mathcal{K}_{rem}=112\), and (iv) \(\mathcal{K}=53\).
* \(k=9\) in Section VI-A, i.e., the path from the alert nodes to the symptom nodes of outages is of 9 or fewer hops, and hence the path to the respective root cause node is 1 additional hop.
* \(L=3\), that is, we choose top-3 clusters based on the combined ranking in the cluster-based inference method.
## VIII Evaluation Results
We evaluate our model with all the inference methods listed in Section VI along with the baselines listed above. The evaluation strategy will be presented in the following way:
1. Evaluating design choices in ESRO
2. Comparison of the chosen design for ESRO against the baselines
3. Measuring the quality of the outage cluster predictor
4. Examples using actual production outages
### _Design Choices Evaluation_
In this section, we evaluate two principal design choices. One is computing the optimal number of clusters for grouping the outages during the graph construction phase, while the second is choosing the most suitable inference method (Section VI) for ESRO.
#### VIII-A1 Number of Clusters
An important step in the graph construction pipeline for ESRO is the clustering of similar outages (Section V-C), which helps the inference method suggest related outages for a given alert or set of alerts. We mentioned that the choice of the optimal number of clusters for each node category (symptom nodes, root cause nodes and remediation nodes) is based on the Silhouette score computed after agglomerative clustering. We plot the variation of the silhouette scores against the number of clusters for each node category in Figure 5(a), (b) and (c). Alongside, the optimal number of clusters for each category (\(\mathcal{K}_{symp}\), \(\mathcal{K}_{root-cause}\) and \(\mathcal{K}_{rem}\)) is shown through a vertical line. The optimal number of clusters is chosen such that it is within 5% of the maximum silhouette score obtained.
With the optimal number of clusters fixed, we merge them and establish a new set of clusters such that two outages are in the same new cluster \(\mathcal{K}\) if either their symptoms, root causes or remediation steps were in the same respective optimal clusters. As a result, we create a new set of 53 clusters that groups all the outages. In Figure 5(d), we illustrate the t-SNE visualization of the symptom embeddings for the outages grouped into the top-6 largest clusters. These clusters constitute \(\sim 38\%\) of all the outages. We observe from the diagram that the clusters with a higher number of outages (cluster 10, cluster 6, etc.) form close groups, illustrating that a new incident with a similar symptom can be associated with other past incidents.
#### VIII-A2 Inference Method
In this section, we draw a comparison between the different inference methods described in Section VI, which utilize the different components of the CK graph. We show how using the causal and the knowledge
Figure 5: Figures (a), (b) and (c) plot silhouette scores against the number of clusters while clustering symptom nodes, root cause nodes and remediation nodes respectively. The vertical line represents the optimal number of clusters based on our condition (Section V-C). Figure (d) is the t-SNE visualization of the symptom embeddings for the top-6 most populated clusters.
graph together performs better than using them individually for predicting the root cause and the remediation steps. The comparisons are reported using average results over 50 randomly selected outages (described in Section VII-B). The comparison provides a quantitative justification for the most suitable inference method.
We observe from Table I that the clustering-based inference strategy (Clust), as described in Section VI-C, outperforms both the path-based (Path) and similarity-based (Sim) inference methods. Even though Path uses the causal graph to find the ancestral alerts that are directly linked to the symptom nodes, it still requires the knowledge graph component to produce the root cause and remediation predictions that are evaluated. Sim, in turn, uses only the knowledge graph to predict the root causes and the remediation steps. Thus, the causal graph cannot be evaluated individually, which is also why we do not evaluate any baselines that use only alerts to predict the root cause.
The evaluation using both Rouge-1 and Rouge-L scores demonstrates that the Clust method consistently outperforms the alternative techniques. The clustering-based inference method utilizes both sources of data to recommend potential root causes and remediation techniques. This broader scope is enabled by the utilization of the history of the outages including their diagnosis, the alerts that were triggered during an outage, and the predictive insight of each individual alert in relation to an outage scenario. We model the indicative power of an alert through the outage cluster predictor model, which we further elucidate in the next section.
### _Baseline Comparison_
In this section, we compare the cluster-based inference method, which we found above to be the best-performing inference method, against the state-of-the-art baselines selected in Section VII-D. In Table II, we draw this comparison and report the average results over the same 50 random outages that were used in Section VIII-A2.
The table shows that the cluster-based inference method outperforms the Incident Search and GCN baselines. Even though these baseline methods utilized the symptom description as input, a level of detail typically available only after post-mortem reports, they still do not achieve results superior to those of the _Clust_ approach.
_Clust_ on average exhibits 16% higher performance in terms of Rouge-1 scores over Incident Search for root cause recommendation, and around 38% higher for recommending remediations. Improvements in Rouge-L scores are similar. Meanwhile, GCN's performance is notably lower than that of Incident Search in the root cause identification and recommendation tasks. Overall, _Clust_ performs \(\sim\)27% better than the baselines on average in root cause recommendation and 39% better in remediation steps recommendation.
Both baselines use only the knowledge graph to predict the root causes and the remediation steps. Incident Search mainly uses text similarity to find similar symptoms, similar to Sim. In addition, we use contextualized BERT embeddings to represent the nodes of the knowledge graph, while Incident Search uses RoBERTa and GCN computes its own embeddings with GloVe initialization. None of these embeddings provides additional information. An interesting observation is that the simple approach of comparing text similarities works better than the more complex graph-based method (GCN). This might be because our data covers a wide range of different types of outages, resulting in individual connected components for each outage report present in the data. This diversity makes it hard for graph-based methods to extract layout-specific features and enrich the embedding computations.
### _Outage Cluster Predictor Performance_
To evaluate the performance of the Outage Cluster Predictor, we plot the top-K precision of the model predictions against varying K. We consider a prediction to be correct if the actual cluster is within the top-K predictions. The dataset containing all 182 outages was split into a 70%-30% train-test set for training the outage cluster predictor model. The total number of clusters in the entire dataset is 53, while the train set had only 43 unique clusters. A stratified split was not possible, since a few clusters contained only one outage (see Fig. 7).
TABLE II: _Comparison of cluster-based inference to the baselines. % Gain indicates the average improvement over the two baselines._

| Task | Metric | IS | GCN | Clust | % Gain |
| --- | --- | --- | --- | --- | --- |
| Root Cause | Rouge-1 | 0.207 | 0.176 | 0.242 | 27.2% |
| Root Cause | Rouge-L | 0.197 | 0.165 | 0.227 | 26.4% |
| Remediation | Rouge-1 | 0.157 | 0.162 | 0.219 | 37.3% |
| Remediation | Rouge-L | 0.143 | 0.147 | 0.205 | 41.4% |

TABLE I: _Comparing the performance of the various inference methods through the Rouge scores described in the paper._

| Task | Metric | Path | Sim | Clust |
| --- | --- | --- | --- | --- |
| Root Cause | Rouge-1 | 0.202 | 0.211 | 0.242 |
| Root Cause | Rouge-L | 0.188 | 0.194 | 0.227 |
| Remediation | Rouge-1 | 0.158 | 0.177 | 0.219 |
| Remediation | Rouge-L | 0.136 | 0.157 | 0.205 |
Fig. 6: _Test Set prediction performance of the outage cluster predictor as reported by top-K precision against varying K._
We observe in Figure 6 that the top-K precision of the Outage Cluster Predictor model is 72.7% with K=5 and over 78% with K=6. Even with K=1, that is, when we only use the top-1 prediction, the accuracy is \(\sim\)62%, which is significant given that there are 53 available clusters in total.
Figure 7 shows the number of outages that belong to each cluster. We see that the distribution is highly skewed, with 25% of the outages belonging to only 3 clusters and 50% of the outages belonging to only 11 clusters. Given such skewness in the data, a top-1 accuracy of 62% and a top-5 accuracy of 73% suggest that the outage cluster predictor model is powerful and capable of predicting the correct cluster given the alerts that were fired for an outage.
### _Illustration on Production Outages_
A quantitative evaluation captures the sentence similarity between the ground-truth root causes/remediations and the predicted root causes/remediations. In this section, we additionally present a qualitative evaluation of ESRO through manual validation of a few illustrative examples that demonstrated high Rouge scores in Section VII-D7. We present three outages that were flagged by the SREs and compare the root cause determined by the domain experts with the predicted output.
Footnote 7: Showcasing qualitative examples where a high Rouge score corresponds to a strong alignment between predictions and actual outcomes
#### VIII-D1 Outage example 1
This incident occurred in the email template microservice due to a deployment issue in a connected service and lasted for about 4 hours. It was caused by the service being deployed without proper configuration validation, which resulted in a fault. Running ESRO on the alerts fired during the time of the outage pointed to a similar symptom that occurred a year ago on the same email template service. It was ranked second on the list of possible prior outages. The past incident was the result of a migration of the internal deployment of the email service from one platform to another, which resulted in a change in configuration. Thus, even though there was no past outage with the same root cause among the outage reports, ESRO was able to find a fault that happened on the same email service due to a deployment issue.
#### VIII-D2 Outage example 2
In this example, users of the SaaS enterprise reported an outage due to the unavailability of services that lasted approximately 1.5 hours. As per the reports, the investigations pointed to a high load on the database connections for the services, with the root cause identified as an inefficiency in a MySQL query plan which resulted in a snowball effect. The impact was resolved by a rolling restart of the application servers as well as the deactivation of certain accounts that were the cause of the long-running database query. Executing ESRO with the alerts fired during this outage, it was able to pinpoint a similar outage from 9 months earlier, when consumers were unable to use the same service. The root cause of the past outage was a resource contention at a database tier, resulting in a database connection issue. ESRO ranked the outage in the top-3 among past outages. ESRO was thus able to find a similar symptom at the same service that was caused by a database issue.
#### VIII-D3 Outage example 3
In another case, an outage was reported because a certain service was unreachable due to a deployment in the moonbeam pipeline resulting in a version mismatch. After an hour, the impact was addressed with a deployment rollback to match the versions. ESRO detected a similar outage that occurred two years earlier, when the same service was unreachable for a segment of users. The root cause of the past incident was a version mismatch between the service and the other components with which it communicated. To discover version mismatch issues, a new alert was set up as a remediation strategy. Based on the similarities of the alerts fired and the previous symptoms, ESRO was able to identify this previous outage, ranking it first among similar outages.
## IX Conclusion
In this work, we introduce ESRO, a novel service for identifying root causes and recommending mitigation steps during outages. By analysing semi-structured natural language text from past outage reports, as well as real-time alerts, ESRO leverages a merged graph that combines causal and knowledge graph components. This enables the capture of causal relationships among alerts and information about symptoms, root causes, and remediation techniques from past incidents. Grouping similar outages into clusters based on shared characteristics further refines our approach. During the inference phase, we utilize the graph to discern probable root causes and potential remediation methods by drawing on insights from past outage incidents. Our cluster-based inference method employs only outage-specific alerts to uncover similar incidents and provide root cause solutions. Through qualitative analysis and quantitative evaluation on real outage examples, using two years of cloud service outage data, we showcase ESRO's effectiveness. We achieve more than a 26% enhancement in root cause prediction and over 37% improvement in remediation step prediction compared to baseline methods.
**Future Works:** We plan to extend our approach by conducting a broader comparative analysis, modifying state-of-the-art techniques for comparable evaluation and incorporating additional metrics for a comprehensive assessment. We also propose exploring hierarchical models to predict root causes at multiple hierarchies. To make the system more intuitive to use, investigating the utilization of LLMs for question answering over the merged CK graph holds promise.
Fig. 7: _The figure shows the number of outages in each cluster_ |
2309.03903 | Tracking Anything with Decoupled Video Segmentation | Training data for video segmentation are expensive to annotate. This impedes
extensions of end-to-end algorithms to new video segmentation tasks, especially
in large-vocabulary settings. To 'track anything' without training on video
data for every individual task, we develop a decoupled video segmentation
approach (DEVA), composed of task-specific image-level segmentation and
class/task-agnostic bi-directional temporal propagation. Due to this design, we
only need an image-level model for the target task (which is cheaper to train)
and a universal temporal propagation model which is trained once and
generalizes across tasks. To effectively combine these two modules, we use
bi-directional propagation for (semi-)online fusion of segmentation hypotheses
from different frames to generate a coherent segmentation. We show that this
decoupled formulation compares favorably to end-to-end approaches in several
data-scarce tasks including large-vocabulary video panoptic segmentation,
open-world video segmentation, referring video segmentation, and unsupervised
video object segmentation. Code is available at:
https://hkchengrex.github.io/Tracking-Anything-with-DEVA | Ho Kei Cheng, Seoung Wug Oh, Brian Price, Alexander Schwing, Joon-Young Lee | 2023-09-07T17:59:41Z | http://arxiv.org/abs/2309.03903v1 | # Tracking Anything with Decoupled Video Segmentation
###### Abstract
Training data for video segmentation are expensive to annotate. This impedes extensions of end-to-end algorithms to new video segmentation tasks, especially in large-vocabulary settings. To 'track anything' without training on video data for every individual task, we develop a **d**ecoupled video segmentation approach (**DEVA**), composed of task-specific image-level segmentation and class/task-agnostic bi-directional temporal propagation. Due to this design, we only need an image-level model for the target task (which is cheaper to train) and a universal temporal propagation model which is trained once and generalizes across tasks. To effectively combine these two modules, we use bi-directional propagation for (semi-)online fusion of segmentation hypotheses from different frames to generate a coherent segmentation. We show that this decoupled formulation compares favorably to end-to-end approaches in several data-scarce tasks including large-vocabulary video panoptic segmentation, open-world video segmentation, referring video segmentation, and unsupervised video object segmentation. Code is available at: hkchengrex.github.io/Tracking-Anything-with-DEVA.
## 1 Introduction
Video segmentation aims to segment and associate objects in a video. It is a fundamental task in computer vision and is crucial for many video understanding applications.
Most existing video segmentation approaches train end-to-end video-level networks on annotated video datasets. They have made significant strides on common benchmarks like YouTube-VIS [69] and Cityscape-VPS [27]. However,
these datasets have small vocabularies: YouTube-VIS contains 40 object categories, and Cityscape-VPS only has 19. It is questionable whether recent end-to-end paradigms are scalable to large-vocabulary, or even open-world video data. A recent larger vocabulary (124 classes) video segmentation dataset, VIPSeg [45], has been shown to be more difficult - using the same backbone, a recent method [34] achieves only 26.1 VPQ compared with 57.8 VPQ on Cityscape-VPS. To the best of our knowledge, recent video segmentation methods [2, 39] developed for the open-world setting (e.g., BURST [2]) are not end-to-end and are based on tracking of per-frame segmentation - further highlighting the difficulty of end-to-end training on large-vocabulary datasets. As the number of classes and scenarios in the dataset increases, it becomes more challenging to train and develop end-to-end video models to jointly solve segmentation and association, especially if annotations are scarce.
In this work, we aim to reduce reliance on the amount of target training data by leveraging external data _outside of the target domain_. For this, we propose to study _decoupled video segmentation_, which combines task-specific image-level segmentation and task-agnostic temporal propagation. Due to this design, we only need an image-level model for the target task (which is cheaper) and a universal temporal propagation model which is trained once and generalizes across tasks. Universal promptable image segmentation models like'segment anything' (SAM) [30] and others [76, 32, 24, 73, 74] have recently become available and serve as excellent candidates for the image-level model in a 'track anything' pipeline - Figure 1 shows some promising results of our integration with these methods.
Researchers have studied decoupled formulations before, as 'tracking-by-detection' [26, 58, 3]. However, these approaches often consider image-level detections immutable, while the temporal model only associates detected objects. This formulation depends heavily on the quality of per-image detections and is sensitive to image-level errors.
In contrast, we develop a (semi-)online bi-directional propagation algorithm to 1) denoise image-level segmentation with in-clip consensus (Section 3.2.1), and 2) combine results from temporal propagation and in-clip consensus gracefully (Section 3.2.2). This bi-directional propagation allows temporally more coherent and potentially better results than those of an image-level model (see Figure 2).
We do not aim to replace end-to-end video approaches. Indeed, we emphasize that specialized frameworks on video tasks with sufficient video-level training data (e.g., YouTubeVIS [69]) outperform the developed method. Instead, we show that our decoupled approach acts as a strong baseline when an image model is available but video data is scarce. This is in spirit similar to pretraining of large language models [52]: a _task-agnostic_ understanding of natural language is available before being finetuned on specific tasks - in our case, we learn propagation of segmentations of _class-agnostic_ objects in videos via a temporal propagation module and make technical strides in applying this knowledge to specific tasks. The proposed decoupled approach transfers well to large-scale or open-world datasets, and achieves state-of-the-art results in large-scale video panoptic segmentation (VIPSeg [45]) and open-world video segmentation (BURST [2]). It also performs competitively on referring video segmentation (Ref-YouTubeVOS [55], Ref-DAVIS [25]) and unsupervised video object segmentation (DAVIS-16/17[5]) without end-to-end training.
To summarize:
* We propose using decoupled video segmentation that leverages external data, which allows it to generalize better to target tasks with limited annotations than end-to-end video approaches and allows us to seamlessly incorporate existing universal image segmentation models like SAM [30].
* We develop bi-directional propagation that denoises image segmentations and merges image segmentations with temporally propagated segmentations gracefully.
* We empirically show that our approach achieves favorable results in several important tasks including large-scale video panoptic segmentation, open-world video segmentation, referring video segmentation, and unsupervised video object segmentation.
## 2 Related Works
End-to-End Video SegmentationRecent end-to-end video segmentation approaches [50, 23, 62, 4, 6, 14, 13] have made significant progress in tasks like Video Instance Segmentation (VIS) and Video Panoptic Segmentation (VPS), especially in closed and small vocabulary datasets like YouTube-VIS [69] and Cityscape-VPS [27].
Figure 2: We plot relative \(\overline{\text{VPQ}}\) increase of our decoupled approach over the end-to-end baseline when we vary the training data in the target domain (VIPSeg [45]). Common/rare classes are the top/bottom 50% most annotated object category in the training set. Our improvement is most significant (\(>\)60%) in rare classes when there is a small amount of training data. This is because our decoupling allows the use of external class-agnostic temporal propagation data β data that cannot be used by existing end-to-end baselines. Details in Section 4.5.1.
However, these methods require end-to-end training and their scalability to larger vocabularies, where video data and annotations are expensive, is questionable. MaskProp [4] uses mask propagation to provide temporal information, but still needs to be trained end-to-end on the target task. This is because their mask propagation is not class-agnostic. We circumvent this training requirement and instead decouple the task into image segmentation and temporal propagation, each of which is easier to train with image-only data and readily available class-agnostic mask propagation data respectively.
**Open-World Video Segmentation.** Recently, an open-world video segmentation dataset BURST [2] has been proposed. It contains 482 object classes in diverse scenarios and evaluates open-world performance by computing metrics for the common classes (78, overlap with COCO [37]) and uncommon classes (404) separately. The baseline in BURST [2] predicts a set of object proposals using an image instance segmentation model trained on COCO [37] and associates the proposals frame-by-frame using either box IoU or STCN [11]. OWTB [39] additionally associates proposals using optical flow and pre-trained Re-ID features. Differently, we use bi-directional propagation that generates segmentations instead of simply associating existing segmentations - this reduces sensitivity to image segmentation errors. UVO [18] is another open-world video segmentation dataset and focuses on human actions. We mainly evaluate on BURST [2] as it is much more diverse and allows separate evaluation for common/uncommon classes.
**Decoupled Video Segmentation.** 'Tracking-by-detection' approaches [26, 58, 3] often consider image-level detections immutable and use a short-term temporal tracking model to associate detected objects. This formulation depends heavily on the quality of per-image detections and is sensitive to image-level errors. Related long-term temporal propagation works exist [20, 19], but they consider a single task and do not filter the image-level segmentation. We instead propose a general framework, with a bi-directional propagation mechanism that denoises the image segmentations and allows our result to potentially perform better than the image-level model.
**Video Object Segmentation.** Semi-supervised Video Object Segmentation (VOS) aims to propagate an initial ground-truth segmentation through a video [47, 46, 70, 9]. However, it does not account for any errors in the initial segmentation, and cannot incorporate new segmentation given by the image model at later frames. SAM-PT [53] combines point tracking with SAM [12] to create a video object segmentation pipeline, while our method tracks masks directly. We find a recent VOS algorithm [9] works well for our temporal propagation model. Our proposed bi-directional propagation is essential for bringing image segmentation models and propagation models together as a unified video segmentation framework.
**Unified Video Segmentation.** Recent Video-K-Net [34] uses a unified framework for multiple video tasks but requires separate end-to-end training for each task. Unicorn [66], TarViS [1], and UNINEXT [67] share model parameters for different tasks, and train on all the target tasks end-to-end. They report lower tracking accuracy for objects that are not in the target tasks during training compared with class-agnostic VOS approaches, which might be caused by joint learning with class-specific features. In contrast, we only train an image segmentation model for the target task, while the temporal propagation model is always fully class-agnostic for generalization across tasks.
**Segmenting/Tracking Anything.** Concurrent to our work, Segment Anything (SAM) [30] demonstrates the effectiveness and generalizability of large-scale training for universal image segmentation, serving as an important foundation for open-world segmentation. Follow-up works [68, 12] extend SAM to video data by propagating the masks generated by SAM with video object segmentation algorithms. However, they rely on single-frame segmentation and lack the denoising capability of our proposed in-clip consensus approach.
## 3 Decoupled Video Segmentation
### Formulation
**Decoupled Video Segmentation.** Our decoupled video segmentation approach is driven by an image segmentation model and a universal temporal propagation model. The image model, trained specifically on the target task, provides task-specific image-level segmentation hypotheses. The temporal propagation model, trained on class-agnostic mask propagation datasets, associates and propagates these hypotheses to segment the whole video. This design separates the learning of task-specific segmentation and the learning of general video object segmentation, leading to a robust framework even when data in the target domain is scarce and insufficient for end-to-end learning.
**Notation.** Using \(t\) as the time index, we refer to the corresponding frame and its final segmentation as \(I_{t}\) and \(\mathbf{M}_{t}\) respectively. In this paper, we represent a segmentation as a set of non-overlapping per-object binary segments, i.e., \(\mathbf{M}_{t}=\{m_{i},0<i\leq|\mathbf{M}_{t}|\}\), where \(m_{i}\cap m_{j}=\emptyset\) if \(i\neq j\).
The image segmentation model \(\text{Seg}(I)\) takes an image \(I\) as input and outputs a segmentation. We denote its output segmentation at time \(t\) as \(\text{Seg}(I_{t})=\text{Seg}_{t}=\{s_{i},0<i\leq|\text{Seg}_{t}|\}\), which is also a set of non-overlapping binary segments. This segmentation model can be swapped for different target tasks, and users can be in the loop to correct the segmentation as we do not limit its internal architecture.
The temporal propagation model \(\text{Prop}(\mathbf{H},I)\) takes a collection of segmented frames (memory) \(\mathbf{H}\) and a query image \(I\) as input and segments the query frame with the objects in the memory. For instance, \(\text{Prop}\left(\{I_{1},\mathbf{M}_{1}\},I_{2}\right)\) propagates the segmentation \(\mathbf{M}_{1}\) from the first frame \(I_{1}\) to the second frame \(I_{2}\). Unless mentioned explicitly, the memory \(\mathbf{H}\) contains all past segmented frames.
**Overview.** Figure 3 illustrates the overall pipeline. At a high level, we aim to propagate segmentations discovered by the image segmentation model to the full video with temporal propagation. We mainly focus on the (semi-)online setting. Starting from the first frame, we use the image segmentation model for initialization. To denoise errors from single-frame segmentation, we look at a small clip of a few frames in the near future (in the online setting, we only look at the current frame) and reach an in-clip consensus (Section 3.2.1) as the output segmentation. Afterward, we use the temporal propagation model to propagate the segmentation to subsequent frames. We modify an off-the-shelf state-of-the-art video object segmentation XMem [9] as our temporal propagation model, with details given in the appendix. The propagation model itself cannot segment new objects that appear in the scene. Therefore, we periodically incorporate new image segmentation results using the same in-clip consensus as before and merge the consensus with the propagated result (Section 3.2.2). This pipeline combines the strong temporal consistency from the propagation model (past) and the new semantics from the image segmentation model (future), hence the name _bi-directional propagation_. Next, we will discuss the bi-directional propagation pipeline in detail.
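To make the control flow concrete, the following Python sketch outlines this semi-online loop. It is a minimal sketch, not the authors' implementation: `seg_model`, `prop_model`, `in_clip_consensus`, and `merge` are hypothetical callables standing in for Seg, Prop, and the operations of Sections 3.2.1 and 3.2.2.

```python
def run_deva(frames, seg_model, prop_model, in_clip_consensus, merge,
             clip_size=3, merge_every=5):
    """Semi-online bi-directional propagation loop (illustrative sketch)."""
    memory = []  # H: all past segmented frames (frame, segmentation) pairs
    # Initialize from an in-clip consensus over the first `clip_size` frames.
    segs = [seg_model(f) for f in frames[:clip_size]]
    M = in_clip_consensus(segs, frames[:clip_size], prop_model)
    memory.append((frames[0], M))
    results = [M]

    for t in range(1, len(frames)):
        # Forward propagation: segment frame t with the objects in memory.
        M = prop_model(memory, frames[t])
        if t % merge_every == 0 and t + clip_size <= len(frames):
            # Periodically incorporate new image segmentations (future clip)
            # to discover objects the propagation model cannot find by itself.
            clip = frames[t:t + clip_size]
            C = in_clip_consensus([seg_model(f) for f in clip], clip, prop_model)
            M = merge(M, C)  # Section 3.2.2
        memory.append((frames[t], M))
        results.append(M)
    return results
```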
### Bi-Directional Propagation
#### 3.2.1 In-clip Consensus
**Formulation.** In-clip consensus operates on the image segmentations of a small future clip of \(n\) frames (\(\text{Seg}_{t}\), \(\text{Seg}_{t+1}\),..., \(\text{Seg}_{t+n-1}\)) and outputs a denoised consensus \(\mathbf{C}_{t}\) for the current frame. In the online setting, \(n=1\) and \(\mathbf{C}_{t}=\text{Seg}_{t}\). In the subsequent discussion, we focus on the semi-online setting, as consensus computation in the online setting is straightforward. As an overview, we first obtain a set of _object proposals_ on the target frame \(t\) via spatial alignment, merge the object proposals into a combined representation in a second step, and optimize for an indicator variable to choose a subset of proposals as the output in an integer program. Figure 4 illustrates this in-clip consensus computation in a stylized way, and we provide details regarding each of the three aforementioned steps (spatial alignment, representation, and integer programming) next.

Figure 3: Overview of our framework. We first filter image-level segmentations with in-clip consensus (Section 3.2.1) and temporally propagate this result forward. To incorporate a new image segmentation at a later time step (for previously unseen objects, e.g., red box), we merge the propagated results with in-clip consensus as described in Section 3.2.2. Specifics of temporal propagation are in the appendix.

Figure 4: A simple illustration of in-clip consensus. The top three squares represent object proposals from three different frames aligned to time \(t\). The blue shape is the most supported by other object proposals and is selected as output. The yellow shape is not supported by any and is ruled out as noise. The remaining are not used due to significant overlap with the selected (blue) shape.
**Spatial Alignment.** As the segmentations (\(\text{Seg}_{t}\), \(\text{Seg}_{t+1}\),..., \(\text{Seg}_{t+n-1}\)) correspond to different time steps, they might be spatially misaligned. This misalignment complicates the computation of correspondences between segments. To align segmentations \(\text{Seg}_{t+i}\) with frame \(t\), techniques like optical flow warping are applicable. In this paper, we simply re-use the temporal propagation model to find the aligned segmentation \(\widehat{\text{Seg}}_{t+i}\) (note \(\widehat{\text{Seg}}_{t}=\text{Seg}_{t}\)) via
\[\widehat{\text{Seg}}_{t+i}=\text{Prop}\left(\{I_{t+i},\text{Seg}_{t+i}\},I_{t} \right),0<i<n. \tag{1}\]
Note, the propagation model here only uses one frame as memory at a time and this temporary memory \(\{I_{t+i},\text{Seg}_{t+i}\}\) is discarded immediately after alignment. It does not interact with the global memory \(\mathbf{H}\).
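As a rough illustration, the alignment step of Eq. (1) can be sketched as follows; `prop_model` is a hypothetical callable with the Prop(H, I) interface, and the clip's frames and segmentations are passed as Python lists starting at frame \(t\).

```python
def spatially_align(prop_model, frames, segs):
    """Align each Seg_{t+i} to frame t (Eq. 1) by running the propagation
    model with a single-frame temporary memory that is discarded right
    after alignment (it never touches the global memory H)."""
    aligned = [segs[0]]  # Seg-hat_t = Seg_t by definition
    for i in range(1, len(segs)):
        temp_memory = [(frames[i], segs[i])]  # {I_{t+i}, Seg_{t+i}} only
        aligned.append(prop_model(temp_memory, frames[0]))
    return aligned
```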
**Representation.** Recall that we represent a segmentation as a set of non-overlapping per-object binary segments. After aligning all the segmentations to frame \(t\), each segment is an _object proposal_ for frame \(I_{t}\). We refer to the union of all these proposals via \(\mathbf{P}\) (time index omitted for clarity):
\[\mathbf{P}=\bigcup_{i=0}^{n-1}\widehat{\text{Seg}}_{t+i}=\{p_{i},0<i\leq| \mathbf{P}|\}. \tag{2}\]
The output of consensus voting is represented by an indicator variable \(v^{*}\in\{0,1\}^{|\mathbf{P}|}\) that combines segments into the consensus output \(\mathbf{C}_{t}\):
\[\mathbf{C}_{t}=\{p_{i}|v^{*}_{i}=1\}=\{c_{i},0<i\leq|\mathbf{C}|\}. \tag{3}\]
We resolve overlapping segments \(c_{i}\) in \(\mathbf{C}_{t}\) by prioritizing smaller segments as they are more vulnerable to being majorly displaced by overlaps. This priority is implemented by sequentially rendering the segments \(c_{i}\) on an image in descending order of area. We optimize for \(v\) based on two simple criteria:
[MISSING_PAGE_POST]
**Maximizing Association IoU.** We find \(a_{ij}\) by maximizing the pairwise IoU of all associated pairs, with a minimum association IoU of \(0.5\). This is equivalent to a maximum bipartite matching problem, with \(r_{i}\) and \(c_{j}\) as vertices and edge weight \(e_{ij}\) given by
\[e_{ij}=\begin{cases}\text{IoU}(r_{i},c_{j}),&\text{if IoU}(r_{i},c_{j})>0.5\\ -1,&\text{otherwise}\end{cases}. \tag{10}\]
Requiring any matched pairs from two non-overlapping segmentations to have IoU \(>0.5\) leads to a unique matching, as shown in [29]. Therefore, a greedy solution of setting \(a_{ij}=1\) if \(e_{ij}>0\) and \(0\) otherwise suffices to obtain an optimal result.
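A minimal sketch of this greedy association, assuming segments are boolean NumPy masks (`props` for the propagated segments \(r_i\), `cons` for the consensus segments \(c_j\)):

```python
import numpy as np

def associate(props, cons, iou_thresh=0.5):
    """Greedy association between propagated and consensus segments.
    With IoU > 0.5 between two sets of non-overlapping segments the
    matching is unique, so setting a_ij = 1 whenever e_ij > 0 is optimal."""
    a = np.zeros((len(props), len(cons)), dtype=bool)
    for i, r in enumerate(props):
        for j, c in enumerate(cons):
            inter = np.logical_and(r, c).sum()
            union = np.logical_or(r, c).sum()
            if union > 0 and inter / union > iou_thresh:
                a[i, j] = True  # e_ij > 0  ->  a_ij = 1
    return a
```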
**Segment Deletion.** As an implementation detail, we delete inactive segments from the memory to reduce computational costs. We consider a segment \(r_{i}\) inactive when it fails to associate with any segment \(c_{j}\) from the consensus for \(L\) consecutive times. Such objects might have gone out of view or may have been misdetections. Concretely, we associate a counter \(\text{cnt}_{i}\) with each propagated segment \(r_{i}\), initialized as 0. When \(r_{i}\) is not associated with any segment \(c_{j}\) from the consensus, i.e., \(\forall_{j}a_{ij}=0\), we increment \(\text{cnt}_{i}\) by 1 and reset \(\text{cnt}_{i}\) to 0 otherwise. When \(\text{cnt}_{i}\) reaches the pre-defined threshold \(L\), the segment \(r_{i}\) is deleted from the memory. We set \(L=5\) in all our experiments.
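The bookkeeping can be sketched as follows, reusing the association matrix from the previous snippet; the names are illustrative only.

```python
def update_counters(counters, assoc, max_misses=5):
    """Increment cnt_i when r_i is unmatched in the consensus, reset it
    otherwise, and return the indices of segments whose counter reached
    the deletion threshold L (= max_misses)."""
    to_delete = []
    for i in range(assoc.shape[0]):
        if assoc[i].any():
            counters[i] = 0          # matched: segment is active again
        else:
            counters[i] += 1         # missed one more consensus
            if counters[i] >= max_misses:
                to_delete.append(i)  # delete r_i from the memory
    return to_delete
```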
## 4 Experiments
We first present our main results using a large-scale video panoptic segmentation dataset (VIPSeg [45]) and an open-world video segmentation dataset (BURST [2]). Next, we show that our method also works well for referring video object segmentation and unsupervised video object segmentation. We present additional results on the smaller-scale YouTubeVIS dataset in the appendix, but unsurprisingly recent end-to-end specialized approaches perform better because a sufficient amount of data is available in this case. Figure 1 visualizes some results of the integration of our approach with universal image segmentation models like SAM [30] or Grounding-Segment-Anything [38, 30]. By default, we merge in-clip consensus with temporal propagation every 5 frames with a clip size of \(n=3\) in the semi-online setting, and \(n=1\) in the online setting. We evaluate all our results using either official evaluation codebases or official servers. We use image models trained with standard training data for each task (using open-sourced models whenever available) and a universal temporal propagation module for all tasks unless otherwise specified.
The temporal propagation model is based on XMem [9], and is trained in a class-agnostic fashion with image segmentation datasets [56, 60, 72, 33, 8] and video object segmentation datasets [65, 47, 48]. With the long-term memory of XMem [9], our model can handle long videos with ease. We use top-k filtering [10] with \(k=30\) following [9]. The performance of our modified propagation model on common video object segmentation benchmarks (DAVIS [47], YouTubeVOS [65], and MOSE [16]) are listed in the appendix.
### Large-Scale Video Panoptic Segmentation
We are interested in addressing the large vocabulary setting. To the best of our knowledge, VIPSeg [45] is currently the largest-scale in-the-wild panoptic segmentation dataset, with 58 things classes and 66 stuff classes in 3,536 videos of 232 different scenes.
**Metrics.** To evaluate the quality of the result, we adopt the commonly used VPQ (Video Panoptic Quality) [27] and STQ (Segmentation and Tracking Quality) [63] metrics. VPQ extends image-based PQ (Panoptic Quality) [29] to video data by matching objects in sliding windows of \(k\) frames (denoted VPQ\({}^{k}\)). When \(k=1\), VPQ = PQ and associations of segments between frames are ignored. Correct long-range associations, which are crucial for object tracking and video editing tasks, are only evaluated with a large value of \(k\). For a more complete evaluation of VPS, we evaluate \(k\in\{1,2,4,6,8,10,\infty\}\). Note, VPQ\({}^{\infty}\) considers the entire video as a tube and requires global association. We additionally report \(\overline{\text{VPQ}}\), which is the average of VPQ\({}^{\infty}\) and the arithmetic mean of VPQ\({}^{\{1,2,4,6,8,10\}}\). This weights VPQ\({}^{\infty}\) higher as it represents video-level performance, while the other metrics only assess frame-level or clip-level results. STQ is proposed in STEP [63] and is the geometric mean of AQ (Association Quality) and SQ (Segmentation Quality). It evaluates pixel-level associations and semantic segmentation quality respectively. We refer readers to [27] and [63] for more details on VPQ and STQ.
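For concreteness, a small sketch of the \(\overline{\text{VPQ}}\) summary metric, assuming a dict `vpq` keyed by window size (with `'inf'` for VPQ\({}^{\infty}\)):

```python
def vpq_bar(vpq):
    """Average of VPQ^inf and the arithmetic mean of VPQ^{1,2,4,6,8,10},
    which weights the video-level VPQ^inf higher than clip-level scores."""
    clip_mean = sum(vpq[k] for k in (1, 2, 4, 6, 8, 10)) / 6.0
    return 0.5 * (vpq['inf'] + clip_mean)
```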
**Main Results.** Table 1 summarizes our findings. To assess generality, we study three models as image segmentation input (PanoFCN [35], Mask2Former [7], and Video-K-Net [34]) to our decoupled approach. The weights of these image models are initialized by pre-training on the COCO panoptic dataset [37] and subsequently fine-tuned on VIPSeg [45]. Our method outperforms both the baseline Clip-PanoFCN [45] and the state-of-the-art Video-K-Net [34] with the same backbone, especially when \(k\) is large, _i.e_., when long-term associations are more important. Figure 5 shows the performance trend with respect to \(k\). The gains for large values of \(k\) highlight the benefit of a decoupled formulation over end-to-end training: the latter eventually struggles with associations, as training sequences cannot be arbitrarily long. Without any changes to our generalized mask propagation module, using a better image backbone (_e.g_., Swin-B [40]) leads to noticeable improvements. Our method can likely be coupled with future advances in image segmentation for even better performance.

Figure 5: Performance trend comparison of Video-K-Net [34] and our decoupled approach with the same base model. Ours decreases more slowly with larger \(k\), indicating that the proposed decoupled method has better long-term propagation.
### Open-World Video Segmentation
Open-world video segmentation addresses the difficult problem of discovering, segmenting, and tracking objects in the wild. BURST [2] is a recently proposed dataset that evaluates open-world video segmentation. It contains diverse scenarios and 2,414 videos in its validation/test sets. There are a total of 482 object categories, 78 of which are 'common' classes while the rest are 'uncommon'.
**Metrics.** Following [2], we assess Open World Tracking Accuracy (OWTA), computed separately for 'all', 'common', and 'uncommon' classes. False positive tracks are not directly penalized in the metrics, as the ground-truth annotations are not exhaustive for all objects in the scene, but they are indirectly penalized by requiring the output masks to be mutually exclusive. We refer readers to [2, 42] for details.
**Main Results.** Table 2 summarizes our findings. We study two image segmentation models: Mask2Former [7] and EntitySeg [49], both of which are pretrained on the COCO [37] dataset. The Mask2Former weights are trained for the instance segmentation task, while EntitySeg is trained for 'entity segmentation', that is, to segment all visual entities without predicting class labels. We find EntitySeg works better for novel objects, as it is specifically trained to do so. Being able to plug and play the latest open-world image segmentation models without any fine-tuning is one of the major advantages of our formulation.
Table 1: Comparisons of end-to-end approaches (e.g., state-of-the-art Video-K-Net [34]) with our decoupled approach on the large-scale video panoptic segmentation dataset VIPSeg [45]. Our method scales with better image models and performs especially well with large \(k\), where long-term associations are considered. All baselines are reproduced using official codebases.

| Model | Backbone | Training | Setting | VPQ\({}^{1}\) | VPQ\({}^{2}\) | VPQ\({}^{4}\) | VPQ\({}^{6}\) | VPQ\({}^{8}\) | VPQ\({}^{10}\) | VPQ\({}^{\infty}\) | \(\overline{\text{VPQ}}\) | STQ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Clip-PanoFCN | - | end-to-end [45] | semi-online | 27.3 | 26.0 | 24.2 | 22.9 | 22.1 | 21.5 | 18.1 | 21.1 | 28.3 |
| Clip-PanoFCN | - | decoupled (ours) | online | 29.5 | 28.9 | 28.1 | 27.2 | 26.7 | 26.1 | 25.0 | 26.4 | 35.7 |
| Clip-PanoFCN | - | decoupled (ours) | semi-online | **31.3** | **30.8** | **30.1** | **29.4** | **28.8** | **28.3** | **27.1** | **28.4** | **35.8** |
| Video-K-Net | R50 | end-to-end [34] | online | 35.4 | 30.8 | 28.5 | 27.0 | 25.9 | 24.9 | 21.7 | 25.2 | 33.7 |
| Video-K-Net | R50 | decoupled (ours) | online | 35.8 | 35.2 | 34.5 | 33.6 | 33.1 | 32.6 | 30.5 | 32.3 | 38.4 |
| Video-K-Net | R50 | decoupled (ours) | semi-online | 37.1 | 36.5 | 35.8 | 35.1 | 34.7 | 34.3 | 32.3 | 33.9 | 38.6 |
| Mask2Former | R50 | decoupled (ours) | online | 41.0 | 40.2 | 39.3 | 38.4 | 37.9 | 37.3 | 33.8 | 36.4 | 41.1 |
| Mask2Former | R50 | decoupled (ours) | semi-online | **42.1** | **41.5** | **40.8** | **40.1** | **39.7** | **39.3** | **36.1** | **38.3** | **41.5** |
| Video-K-Net | Swin-B | end-to-end [34] | online | 49.8 | 45.2 | 42.4 | 40.5 | 39.1 | 37.9 | 32.6 | 37.5 | 45.2 |
| Video-K-Net | Swin-B | decoupled (ours) | online | 48.2 | 47.4 | 46.5 | 45.6 | 45.1 | 44.5 | 42.0 | 44.1 | 48.6 |
| Video-K-Net | Swin-B | decoupled (ours) | semi-online | 50.0 | 49.3 | 48.5 | 47.7 | 47.3 | 46.8 | 44.5 | 46.4 | 48.9 |
| Mask2Former | Swin-B | decoupled (ours) | online | 55.3 | 54.6 | 53.8 | 52.8 | 52.3 | 51.9 | 49.0 | 51.2 | **52.4** |
| Mask2Former | Swin-B | decoupled (ours) | semi-online | **56.0** | **55.4** | **54.6** | **53.9** | **53.5** | **53.1** | **50.0** | **52.2** | 52.2 |
Table 2: Comparison to baselines on the open-world video segmentation dataset BURST [2]. 'com' stands for 'common classes' and 'unc' for 'uncommon classes'. Our method performs better in both: in the common classes with the Mask2Former [7] image backbone, and in the uncommon classes with EntitySeg [49]. The agility to switch image backbones is one of the main advantages of our decoupled formulation. Baseline performances are transcribed from [2].

| Method | Val. OWTA\({}_{\text{all}}\) | Val. OWTA\({}_{\text{com}}\) | Val. OWTA\({}_{\text{unc}}\) | Test OWTA\({}_{\text{all}}\) | Test OWTA\({}_{\text{com}}\) | Test OWTA\({}_{\text{unc}}\) |
|---|---|---|---|---|---|---|
| Mask2Former w/ Box tracker [2] | 60.9 | 66.9 | 24.0 | 55.9 | 61.0 | 24.6 |
| Mask2Former w/ STCN tracker [2] | 64.6 | 71.0 | 25.0 | 57.5 | 62.9 | 23.9 |
| OWTB [39] | 55.8 | 59.8 | 38.8 | 56.0 | 59.9 | 38.3 |
| Mask2Former w/ ours, online | 69.5 | 74.6 | 42.3 | 70.1 | 75.0 | 44.1 |
| Mask2Former w/ ours, semi-online | **69.9** | **75.2** | 41.5 | **70.5** | **75.4** | 44.1 |
| EntitySeg w/ ours, online | 68.8 | 72.7 | 49.6 | 69.5 | 72.9 | 53.0 |
| EntitySeg w/ ours, semi-online | 69.5 | 73.3 | **50.5** | 69.8 | 73.1 | **53.3** |
Our approach outperforms the baselines, which all follow the 'tracking-by-detection' paradigm. In these baselines, segmentations are detected every frame, and a short-term temporal module is used to associate these segmentations between frames. This paradigm is sensitive to mis-detections in the image segmentation model. 'Box tracker' uses per-frame object IoU; 'STCN tracker' uses a pretrained STCN [11] mask propagation network; and OWTB [39] uses a combination of IoU, optical flow, and Re-ID features. We also make use of mask propagation, but we go beyond the setting of simply associating existing segmentations - our bi-directional propagation allows us to improve upon the image segmentations and enable long-term tracking. Figure 6 compares our results on one of the videos in BURST to OWTB [39].
### Referring Video Segmentation
Referring video segmentation takes a text description of an object as input and segments the target object. We experiment on Ref-DAVIS17 [25] and Ref-YouTubeVOS [55], which augment existing video object segmentation datasets [47, 65] with language expressions. Following [64], we assess \(\mathcal{J}\&\mathcal{F}\), the average of the Jaccard index (\(\mathcal{J}\)) and the boundary F1-score (\(\mathcal{F}\)).
Table 3 tabulates our results. We use an image-level ReferFormer [64] as the image segmentation model. We find that the quality of referring segmentation has a high variance across the video (e.g., the target object might be too small at the beginning of the video). As in all competing approaches [55, 64, 17], we opt for an offline setting to reduce this variance. Concretely, we perform the initial in-clip consensus by selecting 10 uniformly spaced frames in the video and using the frame with the highest confidence given by the image model as a 'key frame' for aligning the other frames. We then forward- and backward-propagate from the key frame without incorporating additional image segmentations. We give more details in the appendix. Our method outperforms other approaches.
### Unsupervised Video Object Segmentation
Unsupervised video object segmentation aims to find and segment salient target object(s) in a video. We evaluate on DAVIS-16 [47] (single-object) and DAVIS-17 [5] (multi-object). In the single-object setting, we use the image saliency model DIS [51] as the image model and employ an offline setting as in Section 4.3. In the multi-object setting, since the image saliency model only segments one object, we instead use EntitySeg [49] and follow our semi-online protocol on open-world video segmentation in Section 4.2. Table 4 summarizes our findings. Please refer to the appendix for details.
### Ablation Studies
#### 4.5.1 Varying Training Data
Here, we vary the amount of training data in the target domain (VIPSeg [45]) to measure the sensitivity of end-to-end approaches _vs_. our decoupled approach. We sub-sample different percentages of videos from the training set to train Video-K-Net-R50 [34] (all networks are still pretrained with COCO-panoptic [37]). We then compare end-to-end performances with our (semi-online) decoupled performances (the temporal propagation model is unchanged, as it does not use any data from the target domain). Figure 2 plots our findings: our model has a much higher relative \(\overline{\text{VPQ}}\) improvement over the baseline Video-K-Net for rare classes if little training data is available.
Table 4: \(\mathcal{J}\&\mathcal{F}\) comparisons on three unsupervised video object segmentation datasets: DAVIS-16 validation (D16-val), DAVIS-17 validation (D17-val), and DAVIS-17 test-dev (D17-td). Missing entries mean that the method did not report results on that dataset.

| Method | D16-val | D17-val | D17-td |
|---|---|---|---|
| RTNet [54] | 85.2 | - | - |
| PMN [31] | 85.9 | - | - |
| UnOVOST [43] | - | 67.9 | 58.0 |
| Propose-Reduce [36] | - | 70.4 | - |
| Ours | **88.9** | **73.4** | **62.1** |
Figure 6: An in-the-wild result in the BURST [2] dataset. Note, we can even track the small skateboarder (pink mask on the road).
#### 4.5.2 In-Clip Consensus
Here we explore hyperparameters and design choices in in-clip consensus. Table 5 tabulates our performances with different _clip sizes_, different _frequencies_ of merging in-clip consensus with temporal propagation, and whether to use _spatial alignment_ during in-clip consensus. Mask2Former-R50 is used as the backbone in all entries. For clip size \(n=2\), tie-breaking is ambiguous. A large clip is more computationally demanding and potentially leads to inaccurate spatial alignment as the appearance gap between frames in the clip increases. A high merging frequency reduces the delay between the appearance of a new object and its detection in our framework but requires more computation. By default, we use a clip size \(n=3\), merge consensus with temporal propagation every 5 frames, and enable spatial alignment for a balance between performance and speed.
#### 4.5.3 Using Temporal Propagation
Here, we compare different approaches for using temporal propagation in a decoupled setting. Tracking-by-detection approaches [26, 58, 3] typically detect segmentations at every frame and use temporal propagation to associate these per-frame segmentations. We test these short-term association approaches using 1) mask IoU between adjacent frames, 2) mask IoU of adjacent frames warped by optical flow from RAFT [59], and 3) query association [22] of query-based segmentation [7] between adjacent frames. We additionally compare with variants of our temporal propagation method: 4) 'ShortTrack', where we consider only short-term tracking by re-initializing the memory \(\mathbf{H}\) every frame, and 5) 'TrustImageSeg', where we explicitly trust the consensus given by the image segmentations over temporal propagation by discarding segments that are not associated with a segment in the consensus (i.e., dropping the middle term in Eq. (9)). Table 6 tabulates our findings. For all entries, we use Mask2Former-R50 [7] in the online setting on VIPSeg [45] for fair comparisons.
### Limitations
As the temporal propagation model is task-agnostic, it cannot detect new objects by itself. As shown by the red boxes in Figure 3, the new object in the scene is missing from \(\mathbf{M}_{k-1}\) and can only be detected in \(\mathbf{M}_{k}\) - this results in delayed detections relating to the frequency of merging with in-clip consensus. Secondly, we note that end-to-end approaches still work better when training data is sufficient, i.e., in smaller vocabulary settings like YouTubeVIS [69] as shown in the appendix. But we think decoupled methods are more promising in large-vocabulary/open-world settings.
## 5 Conclusion
We present **DEVA**, a decoupled video segmentation approach for 'tracking anything'. It uses a bi-directional propagation technique that effectively scales image segmentation methods to video data. Our approach critically leverages external task-agnostic data to reduce reliance on the target task, thus generalizing better to tasks with scarce data than end-to-end approaches. Combined with universal image segmentation models, our decoupled paradigm demonstrates state-of-the-art performance as a first step towards open-world large-vocabulary video segmentation.
**Acknowledgments**. Work supported in part by NSF grants 2008387, 2045586, 2106825, MRI 1725729 (HAL [28]), and NIFA award 2020-67021-32799.
Table 6: Performance of different temporal schemes on VIPSeg [45]. Our bi-directional propagation scheme is necessary for the final high performance.

| Temporal scheme | VPQ\({}^{1}\) | VPQ\({}^{4}\) | VPQ\({}^{10}\) | \(\overline{\text{VPQ}}\) | STQ |
|---|---|---|---|---|---|
| Mask IoU | 39.9 | 32.7 | 27.7 | 27.6 | 34.5 |
| Mask IoU+flow | 40.2 | 33.7 | 28.8 | 28.6 | 37.0 |
| Query assoc. | 40.4 | 33.1 | 28.1 | 28.0 | 35.8 |
| 'ShortTrack' | 40.6 | 33.3 | 28.3 | 28.2 | 37.2 |
| 'TrustImageSeg' | 40.3 | 37.5 | 33.7 | 33.2 | 37.9 |
| Ours, bi-directional | **41.0** | **39.3** | **37.3** | **36.4** | **41.1** |
Table 5: Performance of our method on VIPSeg [45] with different hyperparameters and design choices. By default, we use a clip size of \(n=3\) and a merge frequency of every 5 frames with spatial alignment for a balance between performance and speed.

**Varying clip size**

| Clip size | VPQ\({}^{1}\) | VPQ\({}^{10}\) | \(\overline{\text{VPQ}}\) | STQ | FPS |
|---|---|---|---|---|---|
| \(n=1\) | 41.0 | 37.3 | 36.4 | 41.1 | **10.3** |
| \(n=2\) | 40.4 | 37.2 | 36.3 | 39.0 | 9.8 |
| \(n=3\) | **42.1** | **39.3** | 38.3 | 41.5 | 7.8 |
| \(n=4\) | **42.1** | 39.1 | **38.5** | 42.3 | 6.6 |
| \(n=5\) | 41.7 | 38.9 | 38.3 | **42.8** | 5.6 |

**Varying merge frequency**

| Merge frequency | VPQ\({}^{1}\) | VPQ\({}^{10}\) | \(\overline{\text{VPQ}}\) | STQ | FPS |
|---|---|---|---|---|---|
| Every 3 frames | **42.2** | 39.2 | **38.4** | **42.6** | 5.2 |
| Every 5 frames | 42.1 | **39.3** | 38.3 | 41.5 | 7.8 |
| Every 7 frames | 41.5 | 39.0 | 35.7 | 40.5 | **8.4** |

**Spatial alignment?**

| Spatial align? | VPQ\({}^{1}\) | VPQ\({}^{10}\) | \(\overline{\text{VPQ}}\) | STQ | FPS |
|---|---|---|---|---|---|
| Yes | **42.1** | **39.3** | **38.3** | **41.5** | 7.8 |
| No | 36.7 | 33.9 | 32.8 | 33.7 | **9.2** |
2309.09590 | An Autonomous Vision-Based Algorithm for Interplanetary Navigation | The surge of deep-space probes makes it unsustainable to navigate them with standard radiometric tracking. Self-driving interplanetary satellites represent a solution to this problem. In this work, a full vision-based navigation algorithm is built by combining an orbit determination method with an image processing pipeline suitable for interplanetary transfers of autonomous platforms. To increase the computational efficiency of the algorithm, a non-dimensional extended Kalman filter is selected as state estimator, fed by the positions of the planets extracted from deep-space images. An enhancement of the estimation accuracy is performed by applying an optimal strategy to select the best pair of planets to track. Moreover, a novel analytical measurement model for deep-space navigation is developed providing a first-order approximation of the light-aberration and light-time effects. Algorithm performance is tested on a high-fidelity, Earth-Mars interplanetary transfer, showing the algorithm applicability for deep-space navigation. | Eleonora Andreis, Paolo Panicucci, Francesco Topputo | 2023-09-18T08:54:29Z | http://arxiv.org/abs/2309.09590v3 | # An Autonomous Vision-Based Algorithm for Interplanetary Navigation
transfer, showing the algorithm applicability for deep-space navigation. | Eleonora Andreis, Paolo Panicucci, Francesco Topputo | 2023-09-18T08:54:29Z | http://arxiv.org/abs/2309.09590v3 | # An Autonomous Vision-Based Algorithm for Interplanetary Navigation
###### Abstract
The surge of deep-space probes makes it unsustainable to navigate them with standard radiometric tracking. Self-driving interplanetary satellites represent a solution to this problem. In this work, a full vision-based navigation algorithm is built by combining an orbit determination method with an image processing pipeline suitable for interplanetary transfers of autonomous platforms. To increase the computational efficiency of the algorithm, a non-dimensional extended Kalman filter is selected as state estimator, fed by the positions of the planets extracted from deep-space images. An enhancement of the estimation accuracy is performed by applying an optimal strategy to select the best pair of planets to track. Moreover, a novel analytical measurement model for deep-space navigation is developed providing a first-order approximation of the light-aberration and light-time effects. Algorithm performance is tested on a high-fidelity, Earth-Mars interplanetary transfer, showing the algorithm applicability for deep-space navigation.
## I. Introduction
As a new era of deep-space exploration and exploitation is rapidly approaching, the adoption of efficient and sustainable navigation methods becomes increasingly crucial. Traditional ground-based radiometric tracking, while accurate and reliable, heavily depends on limited resources, such as ground stations and flight dynamics teams. This approach is unsustainable in the long term. There is then an urgent need to enhance the level of navigation autonomy for future interplanetary missions.
There are different alternatives that grant autonomous navigation capabilities: autonomous X-ray pulsar-based navigation [1, 2], semi-autonomous radio-based navigation [3], and autonomous vision-based navigation (VBN) [4, 5]. Among these, X-ray navigation requires large detectors and long integration times [6]. One-way radiometric tracking still relies on Earth-based infrastructure. In contrast, VBN is an economical and fully ground-independent solution: it enables determining the probe position by observing the movement of celestial bodies in optical images [6]. In addition, VBN is an approach compatible with all mission phases toward celestial bodies: cruise [4; 7; 8; 9], mid-range [10; 11; 12], and close proximity [13], including landing [5; 14]. Several VBN solutions for approach and close proximity have already been adopted by different missions, e.g., NASA's OSIRIS-REx [15]. Instead, VBN algorithms for interplanetary navigation have only gone through onboard testing without being directly used for probe operations. An example is the validation performed within the Deep-Space 1 (DS1) mission in 1998 [16]. Nevertheless, in recent years, there has been a growing interest in VBN solutions for deep-space exploration, applied in particular to CubeSat missions [6]. Worth mentioning is the Miniaturised Asteroid Remote Geophysical Observer (M-ARGO), which aims to execute an onboard autonomous navigation test during its interplanetary transfer [17].
In interplanetary VBN, previous research primarily focuses on implementing orbit determination (OD) algorithms to determine the probe state [8; 9; 18; 19; 20]. In [18], innovative angles-only Initial Orbit Determination algorithms are developed, whose output is then used within an extended Kalman filter (EKF) embedding light-effects corrections on the planet position in the measurement model. In [8], the feasibility of the M-ARGO autonomous deep-space navigation experiment is presented. In [9], an OD algorithm suited to be deployed on a miniaturized processor is developed by studying the most promising EKF implementations for onboard applications. Although the above works elaborate on autonomous OD, there is less literature focusing on developing a fully integrated pipeline embedding an image-processing (IP) procedure for extracting information from deep-space images. In [21], an IP technique to retrieve beacon information is qualitatively mentioned yet not implemented in a fully integrated simulation, and the effect of the measurement errors on the state estimation is not quantified through simulations. Whereas, [7] details the procedure adopted to process the deep-space images of DS1. Due to the long exposure time and high-speed slew rate of the mission, complex image patterns were produced for the point sources. Thus, to accurately retrieve the centroids of the bright objects and the beacon position in the image, computationally heavy multiple cross-correlations were applied, following the approach used for the Galileo mission [22]. In this work, an alternative and computationally lighter approach has been preferred, based only on geometrical evaluations under the assumption of slower slew rates.
This work develops an autonomous VBN algorithm intended for use during a deep-space transfer, where the estimation accuracy is improved by applying light-effect corrections and an optimal strategy to select the best pair of beacons to track. The contribution to the state-of-the-art is threefold. First, the extended Kalman filter adopted as OD algorithm [9] is integrated with an IP pipeline suited to deep-space navigation [23]. The literature in [9; 19] is extended by considering deep-space images as input. In this way, the measurements are the outcome of an IP procedure rather than a mere behavioral model, which yields a more realistic representation of the application case and a faithful reproduction of the state estimation error. Second, the VBN filter is developed for CubeSat applications; thus, particular attention has been paid to the computational demands of the navigation algorithm. Third, a novel analytical measurement model for deep-space navigation providing a first-order approximation of the light-effects correction on the beacon position is presented. The proposed model avoids correcting the raw camera measurement, thus decoupling the spacecraft prediction from the process noise, and prevents onboard optimization as in [9]. Moreover, the light-aberration correction is also applied to the star positions, since the attitude is determined from deep-space images.
The paper is structured as follows. In Sec. II the interplanetary navigation problem is described, paying particular attention to the definition of the optimal beacon selection method and the light-effects perturbations relevant in the deep-space environment. Sec. III details the IP procedure to extract observations from deep-space images. In Sec. IV, the developed VBN filter to be used during an interplanetary transfer is presented. Here, the dynamics and measurement models are described together with the chosen filtering scheme. Finally, the performance of the IP pipeline and of the VBN filter, tested on a high-fidelity interplanetary ballistic trajectory, is reported in Sec. V.
## II. Interplanetary Vision-Based Navigation Problem
### A. Problem Geometry
A probe can determine its location by acquiring information from the observation of celestial bodies through optical sensors. Since celestial objects are unresolved in deep space, i.e., they fall within a single pixel, their line-of-sight (LoS) direction or pixel position is the only available information that can be used to estimate the probe state. When two LoS directions associated with different beacons are obtained simultaneously, the kinematic celestial triangulation problem can be solved [4, 18, 24].
In this work, CubeSats applications are investigated. This brings us to enforce some constraints, which make the navigation problem even more challenging than for standard probes:
1. only one miniaturized optical sensor, e.g., star tracker or camera, is adopted;
2. only planets are tracked because of the limited performance of the optical sensor [25].

Note, however, that the algorithm can also be used for larger spacecraft, despite the CubeSat focus of this paper.

Since the kinematic celestial triangulation problem requires at least two synchronous observations of different beacons to be solved, and since it is improbable to detect several planets with a single instrument due to their sparsity in space, static celestial triangulation cannot be exploited in the CubeSat operational scenario. Therefore, dynamic estimators, e.g., Kalman filtering, are adopted as they can process asynchronous observations.
### B. Optimal Planets Selection
To reach the highest accuracy possible in the state estimation, the approach described in [26] is adopted to optimally select the planets to observe during the interplanetary transfer. The optimal planets pair is chosen among the observable ones by minimizing the figure of merit \(\mathcal{J}\), which is the trace of the position error covariance matrix when considering perturbed LoS directions. It is defined as follows:
\[\mathcal{J}=\sigma_{\text{str}}^{2}\ \frac{1+\cos^{2}\gamma}{\sin^{4}\gamma}\ \mathbf{d}^{\top}\left((\mathbf{I}_{3x3}-\hat{\mathbf{\rho}}_{i}\hat{\mathbf{\rho}}_{i}^{\top})+(\mathbf{I}_{3x3}-\hat{\mathbf{\rho}}_{j}\hat{\mathbf{\rho}}_{j}^{\top})\right)\mathbf{d} \tag{1}\]
where \(\hat{\mathbf{\rho}}_{i}\) and \(\hat{\mathbf{\rho}}_{j}\) are the unitary LoS vectors to the \(i\)-esimal and \(j\)-esimal planets, respectively, \(\sigma_{\text{str}}\) is the standard deviation of the LoS angular error, and \(\mathbf{I}_{3x3}\) is the three-by-three identity matrix. Whereas, \(\mathbf{d}\) and \(\gamma\) are defined as
\[\mathbf{d}=\mathbf{r}_{i}-\mathbf{r}_{j}\qquad\qquad\gamma=\arccos(\hat{\mathbf{\rho}}_{i}^{\top}\hat{\mathbf{\rho}}_{j}) \tag{2}\]
where \(\mathbf{r}_{i}\) and \(\mathbf{r}_{j}\) are the positions of the two planets, respectively. It is convenient to divide \(\mathbf{d}\) by 1 AU to keep \(\mathcal{J}\) non-dimensional.
The optimal planet pair is selected taking into account the planets' observability, which is preliminarily assessed by evaluating the planet apparent magnitude and the Solar Exclusion Angle (SEA). For more information, refer to [9].
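A possible sketch of this selection, under the assumption that observability (magnitude and SEA checks) has already filtered the candidate set; `planets` maps names to heliocentric positions in km, and `sigma_los` is the LoS angular error standard deviation in radians:

```python
import numpy as np
from itertools import combinations

AU = 1.495978707e8  # astronomical unit [km]

def figure_of_merit(r_i, r_j, r_sc, sigma_los):
    """Eq. (1): trace-of-covariance proxy for the triangulation error of a
    planet pair seen from the spacecraft position r_sc."""
    rho_i = (r_i - r_sc) / np.linalg.norm(r_i - r_sc)  # unit LoS to planet i
    rho_j = (r_j - r_sc) / np.linalg.norm(r_j - r_sc)  # unit LoS to planet j
    gamma = np.arccos(np.clip(rho_i @ rho_j, -1.0, 1.0))
    d = (r_i - r_j) / AU                               # non-dimensional baseline
    P_i = np.eye(3) - np.outer(rho_i, rho_i)           # projector orthogonal to rho_i
    P_j = np.eye(3) - np.outer(rho_j, rho_j)
    return (sigma_los**2 * (1 + np.cos(gamma)**2) / np.sin(gamma)**4
            * d @ (P_i + P_j) @ d)

def best_pair(planets, r_sc, sigma_los):
    """Pick the observable planet pair that minimizes J."""
    return min(combinations(planets, 2),
               key=lambda p: figure_of_merit(planets[p[0]], planets[p[1]],
                                             r_sc, sigma_los))
```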
### C. Light-Effects Perturbations
Another important aspect to consider for deep-space navigation is the impact of light effects, i.e., light time and light aberration [18], on the observations used to estimate the spacecraft state. On one hand, the light-time effect is caused by the fact that, given the large distances involved in deep space and the finite speed of light, the light detected at the camera was emitted by the target in the past. As a result, the celestial object is observed shifted with respect to its position at the instant of detection. The further the planet is from the spacecraft, the more significant the light-time effect is. On the other hand, the light-aberration effect is caused by the relative motion between the observer and the light source, and it becomes important when the spacecraft velocity is not negligible. Like the light-time effect, this causes a change in the planet position projection, which depends on the velocity intensity and direction relative to the LoS of the observed planet.
These two effects shall be corrected in the filter to avoid systematic errors in the estimation of the spacecraft state. Previous works consider these effects by applying corrections only to the planet LoS directions [9, 18]. Instead, in this work, since also the probe attitude is determined from deep-space images, the light-effect corrections need to be applied also to the computed stars LoS directions to avoid the evaluation of a biased attitude value. However, since stars are assumed to be fixed with respect to the Solar System, only
light aberration needs to be corrected.
## III. Image Processing Pipeline for Deep-Space Vision-Based Navigation
In deep space, the projection of the planet position in the 2D camera reference frame \(\mathbb{C}\), i.e., \({}^{\mathbb{C}}\mathbf{r}_{\text{pl}}\), or its associated LoS direction, is the only information available to support state estimation. An IP algorithm suited for deep-space navigation is adopted to extract this information from the image. The goal of the IP procedure is to recognize the planet projections in the image among the available centroids. The procedure goes through three steps: 1) the probe attitude is determined, 2) the light-aberration correction is applied to bright star centroids, and 3) the planets are identified. Note that the first step is needed to identify the portion of the sky the probe is observing and to recognize those bright spots that correspond to non-stellar objects in the image. Although the current implementation foresees the attitude determination from the image, note that the Attitude Determination and Control System can also provide this solution in an operative scenario. A graphical representation of the IP procedure is shown in Fig. 1.
### A. Attitude Determination
As the first step, the probe determines its attitude. To this aim, Niblack's thresholding method [27] is adopted to remove the background noise from portions of the image centered on bright pixels and delimited by squared windows with a margin of one pixel on each side. Hence, the centroid of each object is computed by applying an intensity-weighted center of gravity algorithm considering the pixels inside the associated squared window [28]. At this point, the registration problem, whose goal is to find the correct matching between the observed star asterism and the cataloged stars in the inertial frame, is solved. This last step is performed differently according to whether the planet is acquired for the first time or not.
In the former case, the selected lost-in-space (LIS) strategy is the search-less algorithm (SLA) introduced in [29]. In this work, the SLA has been preferred over the binary search technique [30] for its higher speed gain rate (from 10 to more than 50 times [31]) and for its robustness to spikes. To be adopted, the SLA requires computing on the ground a vector of integers, the k-vector, which contains information for the stars matching starting from the chosen stars invariant. In this work, the interstar angle is the invariant chosen to build the star catalog. To reduce the size of the catalog, only stars whose apparent magnitude is lower than 5.5 are considered for the generation of the invariant. Moreover, interstar angles greater than 35 deg are not taken into account. The objects identified by the SLA as spikes may be non-stellar objects (such as planets, asteroids, and cosmic rays) or stars not recognized due to errors in the centroid extraction. Yet, when a great number of spikes is present in the image, the star asterisms may not be recognized by the algorithm. In this work, to reduce the number of scenarios in which this failure occurs, a heuristic approach is considered. As faint stars are generally not stored in the onboard catalog and as the centroids extraction depends on
the thresholding procedure, when the attitude determination fails, the attitude determination procedure is iterated again by increasing the intensity threshold. One effect of this approach is to diminish the number of bright objects in the image, which can ultimately remove some spikes. The procedure is repeated until observed star asterisms are recognized or fewer than three stars are detected.
When the spacecraft is not in LIS mode, it has a rough estimate of its orientation. Therefore, a recursive registration method can be applied. Indeed, by knowing the previous attitude estimation, the LoS directions in the inertial reference frame \(\mathcal{N}\) of the four corners of the image are determined. At this point, a check is performed to identify which stars of the onboard catalog are contained inside the image Field of View (FoV). Thus, their position projections in the 2D camera reference frame are evaluated, and they are associated with the closest centroids of the bright objects extracted from the image.
When stars are identified, the probe attitude is determined by solving Wahba's problem [32] between the stars' LoS directions in the camera and inertial reference frames, exploiting the Singular Value Decomposition (SVD) method [32]. Moreover, the robustness of the solution to Wahba's problem is increased thanks to the adoption of a RANdom-SAmple Consensus (RANSAC) procedure [33, 34]. The RANSAC algorithm aims to detect the bright objects that have been misidentified by the star identification, which can thus lead to a wrong attitude determination. To detect these outliers, the attitude of the spacecraft is adopted as the mathematical model for the data fitting. The attitude is estimated \(n_{R}\) times by randomly selecting a group of 3 identified stars each time. The minimum set of stars needed for attitude determination is chosen to increase the probability of having a group made of different stars at each draw. Thus, the estimated \(n_{R}\) spacecraft orientations are compared to identify the best model, which is then adopted for the data fitting. The stars not respecting the best model are considered outliers and are labeled as spikes.
When the recursive attitude determination fails, the spacecraft orientation at the following image acquisition is determined again with the LIS method. Vice versa, when the LIS algorithm succeeds in determining the probe orientation, the recursive attitude determination algorithm is adopted at the following image acquisition.
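A compact sketch of the SVD solution to Wahba's problem wrapped in the described RANSAC loop; inputs are \(N\times 3\) arrays of unit LoS directions, and the angular tolerance is an assumed illustrative value:

```python
import numpy as np

def wahba_svd(rho_cam, rho_inertial):
    """SVD solution to Wahba's problem: attitude A minimizing
    sum_k ||rho_cam_k - A rho_inertial_k||^2 (rows are unit vectors)."""
    B = rho_cam.T @ rho_inertial
    U, _, Vt = np.linalg.svd(B)
    M = np.diag([1.0, 1.0, np.linalg.det(U) * np.linalg.det(Vt)])
    return U @ M @ Vt  # proper rotation matrix

def ransac_attitude(rho_cam, rho_inertial, n_iter=50, tol_rad=1e-3, rng=None):
    """Estimate the attitude n_R times from random 3-star subsets and keep
    the model with the largest inlier set; non-fitting stars are spikes."""
    rng = rng or np.random.default_rng()
    n = rho_cam.shape[0]
    best_inliers = np.zeros(n, dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(n, size=3, replace=False)   # minimal star set
        A = wahba_svd(rho_cam[idx], rho_inertial[idx])
        # Angular residual between measured and model-predicted directions.
        residuals = np.arccos(np.clip(
            np.sum(rho_cam * (rho_inertial @ A.T), axis=1), -1.0, 1.0))
        inliers = residuals < tol_rad
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on all inliers; the outliers are labeled as spikes.
    return wahba_svd(rho_cam[best_inliers], rho_inertial[best_inliers]), best_inliers
```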
### B. Light-Aberration Correction
After the first attitude determination, the centroids of the stars are corrected for the light-aberration effect, and the probe attitude is recomputed by taking into account the corrected stars LoS directions. The procedure adopted is described in [18]. At first, the observed stars LoS directions as seen by the spacecraft in the inertial reference frame \(\mathcal{N}\) are found:
\[{}^{N}\mathbf{\rho}_{s_{\text{obs}}}=(\mathbf{K}_{\text{cam}}\mathbf{A})^{-1}\,{}_{h}^{\mathbb{C}}\mathbf{r}_{s_{\text{obs}}} \tag{3}\]
where \({}_{h}^{\mathbb{C}}\mathbf{r}_{s_{\text{obs}}}\) are the observed stars position projections in \(\mathbb{C}\) in homogeneous coordinates (see [34] for homogeneous coordinates), and \(\mathbf{K}_{\text{cam}}\) is the camera calibration matrix. Then, the angle \(\theta_{\text{obs}}\) between the observed unitary stars LoS directions \({}^{N}\hat{\mathbf{\rho}}_{s_{\text{obs}}}\) and the estimated unitary velocity vector of the probe \(\hat{\mathbf{v}}_{\text{p}}\) is defined as
\[\tan\theta_{\text{obs}}=\frac{||{}^{N}\hat{\mathbf{\rho}}_{s_{\text{obs}}}\times\hat{\mathbf{v}}_{\text{p}}||}{{}^{N}\hat{\mathbf{\rho}}_{s_{\text{obs}}}^{\top}\hat{\mathbf{v}}_{\text{p}}} \tag{4}\]
Then, the aberration angle \(\varepsilon\) is evaluated:
\[\tan\varepsilon=\frac{(v_{\text{p}}/c)\sin\theta_{\text{obs}}}{1-(v_{\text{p }}/c)\cos\theta_{\text{obs}}} \tag{5}\]
where \(c\) is the speed of light. Thus, the corrected unitary stars LoS directions \({}^{N}\hat{\mathbf{\rho}}_{s_{\text{corr}}}\) can be retrieved as
\[{}^{N}\hat{\mathbf{\rho}}_{s_{\text{corr}}}=\frac{{}^{N}\hat{\mathbf{\rho}}_{s_{\text{obs}}}\sin\theta_{\text{corr}}-\hat{\mathbf{v}}_{\text{p}}\sin\varepsilon}{\sin\theta_{\text{obs}}} \tag{6}\]
with \(\theta_{\text{corr}}=\theta_{\text{obs}}+\varepsilon\). At this point, the attitude matrix of the probe is redetermined by solving Wahba's problem [32] considering \({}^{N}\hat{\mathbf{\rho}}_{s_{\text{corr}}}\). This corrected attitude matrix is labeled \(\mathbf{A}_{\text{corr}}\).
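Eqs. (4)-(6) translate into a short routine; a minimal sketch, assuming the velocity is expressed in km/s in the same inertial frame as the LoS direction:

```python
import numpy as np

C_LIGHT = 299792.458  # speed of light [km/s]

def correct_aberration(rho_obs, v_p):
    """First-order light-aberration correction of a unit LoS direction.
    rho_obs: observed unit LoS in the inertial frame (3,); v_p: estimated
    spacecraft velocity vector [km/s]."""
    v = np.linalg.norm(v_p)
    v_hat = v_p / v
    # Eq. (4): angle between observed LoS and velocity direction.
    theta_obs = np.arctan2(np.linalg.norm(np.cross(rho_obs, v_hat)),
                           rho_obs @ v_hat)
    # Eq. (5): aberration angle.
    eps = np.arctan2((v / C_LIGHT) * np.sin(theta_obs),
                     1.0 - (v / C_LIGHT) * np.cos(theta_obs))
    theta_corr = theta_obs + eps
    # Eq. (6): corrected LoS direction.
    rho_corr = (rho_obs * np.sin(theta_corr)
                - v_hat * np.sin(eps)) / np.sin(theta_obs)
    return rho_corr / np.linalg.norm(rho_corr)
```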
Fig. 1: Image Processing General Workflow

### C. Beacon Identification

At this point, the planet must be identified in the image and its projection \({}^{\mathbb{C}}\mathbf{r}_{\text{pl}}\) extracted. The identification is performed through the evaluation of the statistical moments associated with the planet position projection, which define the Gaussian probability of finding the planet in that portion of the image. At first, the expected position projection of the observed planet is evaluated as:
\[{}_{h}^{\mathbb{C}}\mathbf{r}_{\text{pl}_{0}}=\mathbf{K}_{\text{cam}}\mathbf{A}_{\text{corr}}({}^{N}\mathbf{r}_{\text{pl}}-{}^{N}\mathbf{r}_{\text{p}}) \tag{7}\]
where \({}^{N}\mathbf{r}_{\text{p}}\) is the predicted probe position. If \({}_{h}^{\mathbb{C}}\mathbf{r}_{\text{pl}_{0}}\) falls within the boundaries of the image, its associated uncertainty ellipse is computed. The latter depends on the uncertainties of the spacecraft pose and planet position and is centered in \({}_{h}^{\mathbb{C}}\mathbf{r}_{\text{pl}_{0}}\). The ellipse represents the area of the image where the planet is most likely to be found with a \(3\sigma\) probability. The spike contained in the \(3\sigma\) ellipse is identified as the planet position projection \({}^{\mathbb{C}}\mathbf{r}_{\text{pl}}\). If multiple spikes are located within this ellipse, the closest one to the expected planet position is identified as the planet, as it is most likely to be the true planet projection.
The covariance matrix of the beacon position projection \(\mathbf{P}\) due to the spacecraft pose and beacon position uncertainty is computed as
\[\mathbf{P}=\mathbf{G}\mathbf{S}\mathbf{G}^{\top} \tag{8}\]
where \(\mathbf{G}\) is the Jacobian matrix of the mapping between the beacon position projection \({}^{\mathbb{C}}\mathbf{r}_{\text{pl}}\) and the spacecraft pose and the beacon position, and \(\mathbf{S}\) is the uncertainty covariance matrix of the probe pose and beacon position. To evaluate \(\mathbf{G}\), the variation of \({}^{\mathbb{C}}\mathbf{r}_{\text{pl}}\) with respect to the variation of the spacecraft pose and the beacon position is computed. In particular, the quaternions \(\mathbf{q}=(q_{0},\mathbf{q}_{\text{v}})^{\top}\) are chosen to represent the probe attitude matrix. Eq. (9) gives the quaternion representation of the attitude matrix \(\mathbf{A}_{\text{corr}}\) [32]:
\[\mathbf{A}_{\text{corr}}=(q_{0}^{2}-\mathbf{q}_{\text{v}}^{\top}\mathbf{q}_{\text{v}})\bm {I}_{3x3}+2\mathbf{q}_{\text{v}}\mathbf{q}_{\text{v}}^{\top}-2q_{0}[\mathbf{q}_{\text{v}}] ^{\wedge} \tag{9}\]
where \([(\cdot)]^{\wedge}\) is the skew-symmetric matrix associated with the cross-product operation. Thus, the variation of \(\mathbb{\ }_{\mathbf{r}_{\text{pl}}}^{C}\) with respect to the variation of the spacecraft pose, i.e., \(\mathbf{A}_{\text{corr}}(\mathbf{q}_{C/N})\) and \({}^{N}\mathbf{r}\), and the beacon position \({}^{N}\mathbf{r}_{\text{pl}}\) can be defined as
\[\delta\,{}^{\mathbb{C}}\mathbf{r}_{\text{pl}}=\underbrace{\left[\frac{\partial\,{}^{\mathbb{C}}\mathbf{r}_{\text{pl}}}{\partial q_{0}}\quad\frac{\partial\,{}^{\mathbb{C}}\mathbf{r}_{\text{pl}}}{\partial\mathbf{q}_{\text{v}}}\quad\frac{\partial\,{}^{\mathbb{C}}\mathbf{r}_{\text{pl}}}{\partial{}^{N}\mathbf{r}}\quad\frac{\partial\,{}^{\mathbb{C}}\mathbf{r}_{\text{pl}}}{\partial{}^{N}\mathbf{r}_{\text{pl}}}\right]}_{\mathbf{G}}\begin{pmatrix}\delta q_{0}\\ \delta\mathbf{q}_{\text{v}}\\ \delta{}^{N}\mathbf{r}\\ \delta{}^{N}\mathbf{r}_{\text{pl}}\end{pmatrix} \tag{10}\]
The matrix \(\mathbf{G}\) has dimension \(2\times 10\) and it is defined as
\[\mathbf{G}=\begin{bmatrix}\frac{1}{{}_{h}^{\mathbb{C}}r_{\text{pl}_{3}}}&0&-\frac{{}_{h}^{\mathbb{C}}r_{\text{pl}_{1}}}{({}_{h}^{\mathbb{C}}r_{\text{pl}_{3}})^{2}}\\ 0&\frac{1}{{}_{h}^{\mathbb{C}}r_{\text{pl}_{3}}}&-\frac{{}_{h}^{\mathbb{C}}r_{\text{pl}_{2}}}{({}_{h}^{\mathbb{C}}r_{\text{pl}_{3}})^{2}}\end{bmatrix}\mathbf{K}_{\text{cam}}\begin{bmatrix}\frac{\partial(\mathbf{A}_{\text{corr}}\,{}^{N}\mathbf{\rho})}{\partial q_{0}}&\frac{\partial(\mathbf{A}_{\text{corr}}\,{}^{N}\mathbf{\rho})}{\partial\mathbf{q}_{\text{v}}}&\frac{\partial(\mathbf{A}_{\text{corr}}\,{}^{N}\mathbf{\rho})}{\partial{}^{N}\mathbf{r}}&\frac{\partial(\mathbf{A}_{\text{corr}}\,{}^{N}\mathbf{\rho})}{\partial{}^{N}\mathbf{r}_{\text{pl}}}\end{bmatrix} \tag{11}\]
where the partial derivatives of (\(\mathbf{A}_{\rm corr}\ ^{N}\mathbf{\rho}\)) with respect to the spacecraft pose and beacon position are
\[\frac{\partial(\mathbf{A}_{\text{corr}}\,{}^{N}\mathbf{\rho})}{\partial q_{0}}=2q_{0}\,{}^{N}\mathbf{\rho}-2[\mathbf{q}_{\text{v}}]^{\wedge}\,{}^{N}\mathbf{\rho} \tag{12}\]
\[\frac{\partial(\mathbf{A}_{\text{corr}}\,{}^{N}\mathbf{\rho})}{\partial\mathbf{q}_{\text{v}}}=-2\,{}^{N}\mathbf{\rho}\,\mathbf{q}_{\text{v}}^{\top}+2\,\mathbf{q}_{\text{v}}^{\top}\,{}^{N}\mathbf{\rho}\,\mathbf{I}_{3x3}+2\,\mathbf{q}_{\text{v}}\,{}^{N}\mathbf{\rho}^{\top}+2q_{0}[{}^{N}\mathbf{\rho}]^{\wedge} \tag{13}\]
\[\frac{\partial(\mathbf{A}_{\rm corr}\ ^{N}\mathbf{\rho})}{\partial^{N}\mathbf{r}}=-\mathbf{A}_{ \rm corr} \tag{14}\]
\[\frac{\partial(\mathbf{A}_{\rm corr}\ ^{N}\mathbf{\rho})}{\partial^{N}\mathbf{r}_{\rm pl}}= \mathbf{A}_{\rm corr} \tag{15}\]
A change of attitude representation is performed to define \(\mathbf{S}\). Since the uncertainty of the probe orientation is more clearly identified through Euler's principal rotation theorem, the quaternion variation is linked to the one relative to the principal angle \(\theta\), also known as pointing error, and principal axis \(\mathbf{e}\). Moreover, a reference attitude value is considered to be always present onboard. Therefore, the variation with respect to the nominal value is limited. Thus, the small-error-angles formulation can be adopted [32]:
\[\delta q_{0}=0\qquad\qquad\qquad\qquad\delta\mathbf{q}_{v}=\frac{1}{2}\delta( \theta\mathbf{e}) \tag{16}\]
Since \(\sigma_{q_{0}}^{2}=0\) under the small-angles assumption, \(\mathbf{S}\) can be written as:
\[\mathbf{S}=\text{diag}(\sigma_{\mathbf{q}_{\text{v}}}^{2}\mathbf{I}_{3x3},\sigma_{\mathbf{r}}^{2}\mathbf{I}_{3x3},\sigma_{\mathbf{r}_{\text{pl}}}^{2}\mathbf{I}_{3x3}) \tag{17}\]
where \(\sigma_{\mathbf{r}}\) and \(\sigma_{\mathbf{r}_{\text{pl}}}\) represent the standard deviations of the probe position and beacon position, respectively. Note that the cross-correlations are ignored for simplicity, yet in a more integrated solution, the pose could be coupled. Once the covariance matrix of the beacon position projection is assessed, the associated \(3\sigma\) uncertainty ellipse is computed. Let \(\lambda_{\rm max}\) and \(\lambda_{\rm min}\) be the largest and smallest eigenvalues of \(\mathbf{P}\), respectively, and \(\mathbf{v}_{\rm max}\), \(\mathbf{v}_{\rm min}\) their related eigenvectors. Note that \(\mathbf{P}\) has only two eigenvalues. The characteristics of the \(3\sigma\) covariance ellipse can be computed as:
\[a=\sqrt{11.8292\ \lambda_{\rm max}}\qquad\qquad b=\sqrt{11.8292\ \lambda_{\rm min}}\qquad\qquad\psi=\arctan\left(\frac{\mathbf{v}_{\max_{2}}}{\mathbf{v}_{\max_{1}}}\right) \tag{18}\]
where \(a\) is the \(3\sigma\) covariance ellipse semimajor axis, \(b\) the \(3\sigma\) covariance ellipse semiminor axis, \(\psi\) the \(3\sigma\) covariance ellipse orientation (i.e., the angle of the largest eigenvector towards the image axis \(\mathbf{C}_{1}\)), and \(\mathbf{v}_{\max_{2}}\), \(\mathbf{v}_{\max_{1}}\) the components of the eigenvector related to the maximum eigenvalue along the \(\mathbf{C}_{2}\) and \(\mathbf{C}_{1}\) directions, respectively. Note that the value \(11.8292\) is the inverse cumulative distribution function of the chi-square distribution with \(2\) degrees of freedom evaluated at \(0.9973\) (\(3\sigma\)). Eventually, the beacon is identified as the closest spike to the expected beacon position projection contained in the \(3\sigma\) ellipse.
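To make the ellipse construction concrete, the following sketch (illustrative Python, not the authors' code) forms \(\mathbf{P}=\mathbf{G}\mathbf{S}\mathbf{G}^{\top}\), extracts its eigenstructure, and evaluates Eq. (18); all names are hypothetical.

```python
import numpy as np

# Illustrative sketch: build the 2x2 projection covariance P = G S G^T
# and derive the 3-sigma ellipse parameters of Eq. (18). The value
# 11.8292 is the chi-square inverse CDF at 0.9973 with 2 DoF.
def ellipse_3sigma(G, S):
    P = G @ S @ G.T
    eigvals, eigvecs = np.linalg.eigh(P)   # eigenvalues in ascending order
    lam_min, lam_max = eigvals
    v_max = eigvecs[:, 1]                  # eigenvector of lambda_max
    a = np.sqrt(11.8292 * lam_max)         # semimajor axis [px]
    b = np.sqrt(11.8292 * lam_min)         # semiminor axis [px]
    psi = np.arctan2(v_max[1], v_max[0])   # orientation w.r.t. C1 (arctan2 variant of Eq. (18))
    return a, b, psi
```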
## IV. Non-dimensional Extended Kalman Filter Based on Planet Observations
In this section, the VBN filter is described. Firstly, the dynamics and measurement models adopted in the VBN filter are detailed. Subsequently, the chosen filtering scheme is presented. Note that the vectors specified in this section are always defined in the inertial reference frame \(\mathcal{N}\); thus, the superscript is indicated only for exceptions.
### A. Dynamics Model
The process state \(\mathbf{x}\) is defined as
\[\mathbf{x}(t)=\left[\mathbf{r}(t),\mathbf{v}(t),\mathbf{\eta}(t)\right]^{\top} \tag{19}\]
where \(\mathbf{r}\) and \(\mathbf{v}\) are the inertial probe position and velocity, respectively, and \(\mathbf{\eta}\) is a vector of Gauss-Markov (GM) processes accounting for unmodeled terms: a \(3\)-dimensional residual acceleration \(\mathbf{\eta}_{\text{R}}\) and the stochastic component of the Solar Radiation Pressure (SRP) \(\mathbf{\eta}_{\text{SRP}}\); that is, \(\mathbf{\eta}=[\mathbf{\eta}_{\text{R}},\mathbf{\eta}_{\text{SRP}}]^{\top}\) [35]. The process is modeled using the following equation of motion
\[\dot{\mathbf{x}}(t)=\mathbf{f}(\mathbf{x}(t),t)+\mathbf{w} \tag{20}\]
where \(\mathbf{f}\) is the vector field embedding the deterministic part, while \(\mathbf{w}\) is the process white noise:
\[\dot{\mathbf{x}}(t)=\underbrace{\left[\begin{array}{c}\mathbf{v}\\ \mathbf{a}_{\text{Sun}}+\mathbf{a}_{\text{SRP}}+\sum_{i}\mathbf{a}_{\mathbf{p}_{i}}\\ -\xi\mathbf{\eta}_{\text{R}}\\ -\xi\mathbf{\eta}_{\text{SRP}}\end{array}\right]}_{f}+\underbrace{\left[ \begin{array}{c}\mathbf{0}_{3x1}\\ \mathbf{\eta}_{\text{R}}+\mathbf{\eta}_{\text{SRP}}\\ \mathbf{w}_{\text{R}}\\ \mathbf{w}_{\text{SRP}}\end{array}\right]}_{\mathbf{w}} \tag{21}\]
\[\mathbf{a}_{\rm Sun}=-\mu_{\rm Sun}\frac{\mathbf{r}}{||\mathbf{r}||^{3}} \tag{22}\] \[\mathbf{a}_{\rm SRP}=C_{\rm R}\frac{P_{0}R_{0}^{2}}{c}\frac{A_{\rm s}}{ m_{\rm s}}\frac{\mathbf{r}}{||\mathbf{r}||^{3}}\] (23) \[\mathbf{a}_{\rm pl_{i}}=\mu_{\rm pl_{i}}\left(\frac{\mathbf{r}_{\rm pl_{i} }-\mathbf{r}}{||\mathbf{r}_{\rm pl_{i}}-\mathbf{r}||^{3}}-\frac{\mathbf{r}_{\rm pl_{i}}}{||\bm {r}_{\rm pl_{i}}||^{3}}\right) \tag{24}\]
The terms that describe the SRP are [36]: \(C_{\rm R}\) the coefficient of reflection, \(P_{0}\) the solar power, \(R_{0}\) the Sun radius, \(A_{\rm s}\) the cross-section area of the probe, and \(m_{\rm s}\) its mass. The third-body perturbations of the Earth-Moon barycenter, Mars, and Jupiter are included; \(\mu_{\rm pl_{i}}\) is the gravitational parameter of the \(i\)-th planet considered. In the Langevin equations that govern the GM processes, the coefficient \(\xi\) is the reciprocal of the correlation time, while \(\mathbf{w}_{\rm R}\) and \(\mathbf{w}_{\rm SRP}\) are the process noises of the GM parameters with standard deviations \(\sigma_{\rm R}\) and \(\sigma_{\rm SRP}\), respectively [35]. The process noise covariance matrix is \(\mathbf{Q}\):
\[\mathbf{Q}={\rm diag}(\mathbf{0}_{3x3},\mathbf{Q}_{\alpha},\mathbf{Q}_{\rm R},\mathbf{Q}_{\rm SRP}) \tag{25}\]
with \(\mathbf{Q}_{\rm R}=\sigma_{\rm R}^{2}\mathbf{I}_{3x3}\), \(\mathbf{Q}_{\rm SRP}=\sigma_{\rm SRP}^{2}\mathbf{I}_{3x3}\), and \(\mathbf{Q}_{\rm a}=(\mathbf{Q}_{\rm R}+\mathbf{Q}_{\rm SRP})/(2\xi)\).
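As a concrete reading of Eqs. (21)-(24), the sketch below (illustrative Python, not the flight software) assembles the full right-hand side of the state equation, including the GM accelerations; `srp_coeff`, `mu_pl`, and the ephemeris callables `r_pl_funcs` are hypothetical placeholders.

```python
import numpy as np

MU_SUN = 1.32712e11   # Sun gravitational parameter [km^3/s^2]

# Sketch of the right-hand side of Eq. (21). `srp_coeff` stands for the
# constant factor C_R * P0 * R0^2 / c * A_s / m_s of Eq. (23); `mu_pl`
# and `r_pl_funcs` are third-body parameters and ephemeris callables.
def state_derivative(x, t, srp_coeff, mu_pl, r_pl_funcs, xi):
    r, v = x[0:3], x[3:6]
    eta_R, eta_SRP = x[6:9], x[9:12]
    a = -MU_SUN * r / np.linalg.norm(r)**3          # Eq. (22), Sun gravity
    a += srp_coeff * r / np.linalg.norm(r)**3        # Eq. (23), mean SRP
    for mu_i, r_pl in zip(mu_pl, r_pl_funcs):        # Eq. (24), third bodies
        d = r_pl(t) - r
        a += mu_i * (d / np.linalg.norm(d)**3
                     - r_pl(t) / np.linalg.norm(r_pl(t))**3)
    # GM accelerations enter the velocity derivative; their own
    # derivatives follow the Langevin equations with coefficient xi.
    return np.concatenate([v, a + eta_R + eta_SRP, -xi * eta_R, -xi * eta_SRP])
```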
### B. Measurement Model
One of the contributions of the work is to present a novel measurement model for deep-space triangulation. In the state of the art, the LoS measurements have been modeled as azimuth and elevation observations [9, 18, 37]. Thus, the raw camera measurement is manipulated to obtain a derived quantity to be included in the filter, which makes the error distribution less Gaussian than that of the raw measurement. Moreover, the correction of the light-aberration effect is performed on the observation by exploiting the estimated velocity information, which couples the measurement and the process noise. On the contrary, the measurement model proposed hereafter expresses the observations in pixel coordinates in the camera plane. In addition, it embeds the light effects and their dependencies on the planet and spacecraft states. Therefore, the navigation filter takes these effects into account during the mean and covariance update, and the coupling between the measurement and the process noise is avoided.
#### 1. Evaluation of the time delay
To proceed with the implementation of the light-time correction, it is first necessary to evaluate the time delay \(\Delta t\). As the initial step, the equation representing the light-time effect can be written as follows [9]:
\[\mathfrak{L}:=c^{2}\left(t-\tau\right)^{2}-\left(\mathbf{r}_{\rm pl}(\tau)-\mathbf{r}( t)\right)^{\top}(\mathbf{r}_{\rm pl}(\tau)-\mathbf{r}(t))=0 \tag{26}\]
where \(\tau\) is the time at which the light is emitted by the planet and \(t\) is the time of measurement.
Eq. (26) is a constraint that links the spacecraft state with the planet position. Moreover, as the planet motion does not have an analytical solution, the value of \(\tau\) cannot be solved analytically to implicitly include this effect in the measurement. Yet, it is possible to linearize the constraint in Eq. (26) with respect to \(\Delta t\) under the assumption that the time delay \(\Delta t=t-\tau\) is small. The planet motion can be approximated linearly as:
\[\mathbf{r}_{\text{pl}}(\tau)\simeq\mathbf{r}_{\text{pl}}(t)+\left.\frac{\text{d}\mathbf{r}_{\text{pl}}}{\text{d}\tau}\right|_{\tau=t}(\tau-t)=\mathbf{r}_{\text{pl}}(t)-\left.\frac{\text{d}\mathbf{r}_{\text{pl}}}{\text{d}\tau}\right|_{\tau=t}\Delta t=\mathbf{r}_{\text{pl}}(t)-\dot{\mathbf{r}}_{\text{pl}}(t)\Delta t \tag{27}\]
Therefore, Eq. (26) can likewise be linearized as follows:
\[\begin{split}\mathfrak{L}&\simeq c^{2}\Delta t^{2}-\left(\mathbf{r}_{\text{pl}}(t)-\dot{\mathbf{r}}_{\text{pl}}(t)\Delta t-\mathbf{r}(t)\right)^{\top}\left(\mathbf{r}_{\text{pl}}(t)-\dot{\mathbf{r}}_{\text{pl}}(t)\Delta t-\mathbf{r}(t)\right)=\\ &=\left(c^{2}-\dot{\mathbf{r}}_{\text{pl}}(t)^{\top}\dot{\mathbf{r}}_{\text{pl}}(t)\right)\Delta t^{2}+2\dot{\mathbf{r}}_{\text{pl}}(t)^{\top}\mathbf{r}_{\text{pl/sc}}(t)\Delta t-\mathbf{r}_{\text{pl/sc}}(t)^{\top}\mathbf{r}_{\text{pl/sc}}(t)\end{split} \tag{28}\]
where \(\mathbf{r}_{\text{pl/sc}}(t)=\mathbf{r}_{\text{pl}}(t)-\mathbf{r}(t)\). Thus, the solution \(\Delta t\) is obtained by solving a second-order polynomial equation:
\[\begin{split}\Delta t=&\frac{1}{c^{2}-\dot{\mathbf{r}}_{\text{pl}}(t)^{\top}\dot{\mathbf{r}}_{\text{pl}}(t)}\left(-\mathbf{r}_{\text{pl/sc}}(t)^{\top}\dot{\mathbf{r}}_{\text{pl}}(t)\right.\\ &\left.\pm\sqrt{\mathbf{r}_{\text{pl/sc}}(t)^{\top}\mathbf{r}_{\text{pl/sc}}(t)\left(c^{2}-\dot{\mathbf{r}}_{\text{pl}}(t)^{\top}\dot{\mathbf{r}}_{\text{pl}}(t)\right)+\left(\mathbf{r}_{\text{pl/sc}}(t)^{\top}\dot{\mathbf{r}}_{\text{pl}}(t)\right)^{2}}\right)\end{split} \tag{29}\]
Eq. (29) shows that two solutions are possible, given the geometry between the planet and the spacecraft, and it is important to identify the correct one to uniquely solve for \(\Delta t\). By defining \(\boldsymbol{\beta}_{\text{pl}}(t)=\dfrac{\mathbf{v}_{\text{pl}}(t)}{c}\) and the planet flight path angle \(\epsilon\), the approximated solution for the light-time correction is:
\[\Delta t=\frac{-c\left\|\mathbf{r}_{\text{pl/sc}}(t)\right\|\left\|\boldsymbol{\beta}_{\text{pl}}(t)\right\|\cos\epsilon\pm c\left\|\mathbf{r}_{\text{pl/sc}}(t)\right\|\sqrt{\left\|\boldsymbol{\beta}_{\text{pl}}(t)\right\|^{2}\left(\cos^{2}\epsilon-1\right)+1}}{c^{2}\left(1-\left\|\boldsymbol{\beta}_{\text{pl}}(t)\right\|^{2}\right)} \tag{30}\]
Recall that the correct solution is the one providing \(\Delta t\geq 0\) as the light departs from the planet before arriving at the spacecraft camera. As a consequence, the solution with the plus sign is the one providing the correct time delay. Thus:
\[\Delta t=\frac{\left\|\mathbf{r}_{\text{pl/sc}}(t)\right\|}{c\left(1-\left\|\boldsymbol{\beta}_{\text{pl}}(t)\right\|^{2}\right)}\left(-\left\|\boldsymbol{\beta}_{\text{pl}}(t)\right\|\cos\epsilon+\sqrt{\left\|\boldsymbol{\beta}_{\text{pl}}(t)\right\|^{2}\left(\cos^{2}\epsilon-1\right)+1}\right) \tag{31}\]
It is worth noting that \(1-\left\|\boldsymbol{\beta}_{\text{pl}}(t)\right\|^{2}\geq 0\,\forall\boldsymbol{\beta}_{\text{pl}}(t)\) and \(\left\|\boldsymbol{\beta}_{\text{pl}}(t)\right\|^{2}\left(\cos^{2}\epsilon-1\right)+1\geq 0\,\forall\boldsymbol{\beta}_{\text{pl}}(t)\) and \(\forall\epsilon\), as \(\left\|\boldsymbol{\beta}_{\text{pl}}(t)\right\|\leq 1\). Note that \(\cos\epsilon\geq 0\;\forall\epsilon\) by the flight path angle definition. Eq. (31) provides a first-order analytical solution for the light-time delay, which can be exploited to include the light-time correction in the filter update.
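For illustration, Eq. (29) with the positive root reduces to a few lines of code; the sketch below (hedged, not the flight implementation) assumes as inputs \(\mathbf{r}_{\text{pl/sc}}(t)\) and the planet velocity at the measurement epoch.

```python
import numpy as np

C_LIGHT = 299792.458  # speed of light [km/s]

# Sketch of the first-order light-time delay, Eq. (29) with the '+'
# root so that Delta t >= 0; r_plsc = r_pl(t) - r(t), v_pl = planet
# velocity at the measurement time t (both 3-vectors in km, km/s).
def light_time_delay(r_plsc, v_pl):
    a = C_LIGHT**2 - v_pl @ v_pl              # always > 0 since |v_pl| < c
    b = r_plsc @ v_pl
    disc = (r_plsc @ r_plsc) * a + b**2       # discriminant, always >= 0
    return (-b + np.sqrt(disc)) / a
```

As a sanity check, for a static planet (`v_pl = 0`) the expression collapses to \(\left\|\mathbf{r}_{\text{pl/sc}}\right\|/c\), the geometric light travel time.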
#### 2. Definition of the measurement model equation
Once \(\Delta t\) is computed, the planet LoS can be expressed as the unit vector from the spacecraft position at time \(t\) to the planet position at time \(\tau\). Thus:
\[\mathbf{I}_{\text{pl/sc}}=\frac{\mathbf{r}_{\text{pl}}(t-\Delta t)-\mathbf{r}(t)}{\left\|\mathbf{r}_{\text{pl}}(t-\Delta t)-\mathbf{r}(t)\right\|} \tag{32}\]
This unit vector is warped by relativistic light aberration as the spacecraft is not fixed with respect to the inertial reference frame. At first order, this effect can be expressed as follows [38]:
\[\mathbf{I}_{\text{pl/sc}}^{\text{aberr}}=\mathbf{I}_{\text{pl/sc}}+\mathbf{I}_{\text{pl/sc }}\times\left(\mathbf{\beta}_{\text{sc}}\times\mathbf{I}_{\text{pl/sc}}\right) \tag{33}\]
where \(\boldsymbol{\beta}_{\text{sc}}=\frac{\mathbf{v}}{c}\). Note that higher orders are not detectable by the image processing pipeline, as they are orders of magnitude below 15 arcsec [39], which is the attitude determination performance.
Finally, the warped line of sight is projected in the camera:
\[\prescript{C}{h}\mathbf{r}_{\text{pl}}=\mathbf{K}_{\text{cam}}\,\mathbf{A}_{\text{corr}} \,\mathbf{I}_{\text{pl/sc}}^{\text{aberr}} \tag{34}\]
\[\prescript{C}{\mathbf{r}_{\text{pl}}}=\frac{1}{\prescript{C}{h}\mathbf{r}_{\text{pl,(3 )}}}\begin{pmatrix}\prescript{C}{\mathbf{r}_{\text{pl,(1)}}}\\ \prescript{C}{\mathbf{r}_{\text{pl,(2)}}}\\ \end{pmatrix} \tag{35}\]
where \(\prescript{C}{h}{\mathbf{r}}_{\text{pl}}\) is the projection of the planet line of sight in the image plane in homogeneous coordinates, \(\prescript{C}{}{\mathbf{r}}_{\text{pl}}\) is the same vector in non-homogeneous coordinates, \(\prescript{C}{h}{\mathbf{r}}_{\text{pl},(i)}\) is the \(i\)-th component of the vector \(\prescript{C}{h}{\mathbf{r}}_{\text{pl}}\), \(\mathbf{K}_{\text{cam}}\) is the camera intrinsic matrix, and \(\mathbf{A}_{\text{corr}}\) is the rotation matrix from the inertial reference frame \(\mathcal{N}\) to the camera reference frame \(\mathcal{C}\), corrected for the stars' light aberration.
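Putting Eqs. (32)-(35) together, the full measurement model can be sketched as follows (illustrative Python; `r_pl_func` is a hypothetical ephemeris callable, `K_cam` and `A_corr` are assumed given, and `dt` can come from the `light_time_delay` sketch above).

```python
import numpy as np

C_LIGHT = 299792.458  # [km/s]

# Sketch of the measurement model h: light-time corrected line of
# sight (Eq. (32)), first-order light aberration (Eq. (33)), and
# pinhole projection onto the image plane (Eqs. (34)-(35)).
def planet_pixel(r_sc, v_sc, r_pl_func, t, dt, K_cam, A_corr):
    d = r_pl_func(t - dt) - r_sc
    l = d / np.linalg.norm(d)                       # Eq. (32)
    beta_sc = v_sc / C_LIGHT
    l_ab = l + np.cross(l, np.cross(beta_sc, l))    # Eq. (33)
    p_h = K_cam @ (A_corr @ l_ab)                   # Eq. (34), homogeneous
    return p_h[:2] / p_h[2]                         # Eq. (35), pixel coords
```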
#### 3. Definition of the Jacobian of the measurement model
As the measurement model is analytic, its Jacobian can be easily computed by variational analysis. Thus:
\[\delta\,^{C}\mathbf{r}_{\text{pl}}=\begin{bmatrix}\frac{1}{\prescript{C}{h}{\mathbf{r}}_{\text{pl},(3)}}&0&-\frac{\prescript{C}{h}{\mathbf{r}}_{\text{pl},(1)}}{\left(\prescript{C}{h}{\mathbf{r}}_{\text{pl},(3)}\right)^{2}}\\ 0&\frac{1}{\prescript{C}{h}{\mathbf{r}}_{\text{pl},(3)}}&-\frac{\prescript{C}{h}{\mathbf{r}}_{\text{pl},(2)}}{\left(\prescript{C}{h}{\mathbf{r}}_{\text{pl},(3)}\right)^{2}}\end{bmatrix}\delta\,\prescript{C}{h}{\mathbf{r}}_{\text{pl}} \tag{36}\]
\[\delta\,\prescript{C}{h}{\mathbf{r}}_{\text{pl}}=\mathbf{K}_{\text{cam}}\mathbf{A}_{\text{corr}}\left(\delta\mathbf{I}_{\text{pl/sc}}^{\text{aberr}}+2\left[\mathbf{I}_{\text{pl/sc}}^{\text{aberr}}\right]^{\wedge}\delta\mathbf{q}_{C/N}^{(v)}\right) \tag{37}\]
where \(\mathbf{q}_{C/N}^{(v)}\) is the vectorial part of the quaternion representing the rotation from the inertial reference frame \(\mathcal{N}\) to the camera reference frame \(\mathcal{C}\).
The variation of the aberrated line of sight \(\mathbf{I}_{\text{pl/sc}}^{\text{aberr}}\) is computed by exploiting the triple vector product identity \(\mathbf{a}\times(\mathbf{b}\times\mathbf{c})=(\mathbf{a}\cdot\mathbf{c})\mathbf{b}-(\mathbf{a}\cdot\mathbf{b})\mathbf{c}\):
\[\begin{split}\delta\mathbf{I}_{\text{pl/sc}}^{\text{aberr}}=&\left(\mathbf{I}_{3x3}+2\,\boldsymbol{\beta}_{\text{sc}}\mathbf{I}_{\text{pl/sc}}^{\top}-\left(\mathbf{I}_{\text{pl/sc}}^{\top}\boldsymbol{\beta}_{\text{sc}}\right)\mathbf{I}_{3x3}-\mathbf{I}_{\text{pl/sc}}\boldsymbol{\beta}_{\text{sc}}^{\top}\right)\delta\mathbf{I}_{\text{pl/sc}}+\\ &+\left(\left(\mathbf{I}_{\text{pl/sc}}^{\top}\mathbf{I}_{\text{pl/sc}}\right)\mathbf{I}_{3x3}-\mathbf{I}_{\text{pl/sc}}\mathbf{I}_{\text{pl/sc}}^{\top}\right)\delta\boldsymbol{\beta}_{\text{sc}}\end{split} \tag{38}\]
where \(\delta\boldsymbol{\beta}_{\text{sc}}=\dfrac{\delta\boldsymbol{v}}{c}\) and \(\delta\mathbf{I}_{\text{pl/sc}}\) is:
\[\delta\mathbf{I}_{\text{pl/sc}}=\left(\dfrac{\mathbf{I}_{3x3}}{\left\|\mathbf{r}_{\text{pl}}(t-\Delta t)-\mathbf{r}(t)\right\|}-\dfrac{\left(\mathbf{r}_{\text{pl}}(t-\Delta t)-\mathbf{r}(t)\right)\left(\mathbf{r}_{\text{pl}}(t-\Delta t)-\mathbf{r}(t)\right)^{\top}}{\left\|\mathbf{r}_{\text{pl}}(t-\Delta t)-\mathbf{r}(t)\right\|^{3}}\right)\left(\delta\mathbf{r}_{\text{pl}}(t-\Delta t)-\delta\mathbf{r}(t)\right) \tag{39}\]
The variation of the light-time-corrected planet position, needed in Eq. (39), can be expanded as follows:
\[\delta\boldsymbol{r}_{\text{pl}}(t-\Delta t)=\delta\boldsymbol{r}_{\text{pl}}(t)-\Delta t\,\delta\boldsymbol{v}_{\text{pl}}(t)-\boldsymbol{v}_{\text{pl}}(t)\,\delta\Delta t \tag{40}\]
where
\[\delta\Delta t =\dfrac{1}{c^{2}-\boldsymbol{v}_{\text{pl}}(t)^{\top}\boldsymbol {v}_{\text{pl}}(t)}\left(\left(\dfrac{\left(c^{2}-\boldsymbol{v}_{\text{pl}}( t)^{\top}\boldsymbol{v}_{\text{pl}}(t)\right)\boldsymbol{r}_{\text{pl/sc}}(t)^{\top}+ \boldsymbol{r}_{\text{pl/sc}}(t)^{\top}\boldsymbol{v}_{\text{pl}}(t)\boldsymbol {v}_{\text{pl}}(t)^{\top}}{\sqrt{\boldsymbol{r}_{\text{pl/sc}}(t)^{\top} \boldsymbol{r}_{\text{pl/sc}}(t)\left(c^{2}-\boldsymbol{v}_{\text{pl}}(t)^{ \top}\boldsymbol{v}_{\text{pl}}(t)\right)+\left(\boldsymbol{r}_{\text{pl/sc}}( t)^{\top}\boldsymbol{v}_{\text{pl}}(t)\right)^{2}}}\right.\] \[-\boldsymbol{v}_{\text{pl}}^{\top}(t)\left(\delta\boldsymbol{r}_{ \text{pl}}(t)-\delta\boldsymbol{r}(t)\right)+\left(\dfrac{\boldsymbol{r}_{\text {pl/sc}}(t)^{\top}\boldsymbol{v}_{\text{pl}}(t)\boldsymbol{r}_{\text{pl/sc}}( t)^{\top}-\boldsymbol{r}_{\text{pl/sc}}(t)^{\top}\boldsymbol{r}_{\text{pl/sc}}(t) \boldsymbol{v}_{\text{pl}}(t)^{\top}}{\sqrt{\boldsymbol{r}_{\text{pl/sc}}(t)^{ \top}\boldsymbol{r}_{\text{pl/sc}}(t)\left(c^{2}-\boldsymbol{v}_{\text{pl}}(t)^ {\top}\boldsymbol{v}_{\text{pl}}(t)\right)+\left(\boldsymbol{r}_{\text{pl/sc}}( t)^{\top}\boldsymbol{v}_{\text{pl}}(t)\right)^{2}}}+\] \[+\dfrac{2\boldsymbol{v}_{\text{pl}}(t)^{\top}}{c^{2}-\boldsymbol{ v}_{\text{pl}}(t)^{\top}\boldsymbol{v}_{\text{pl}}(t)}-\boldsymbol{r}_{\text{pl/sc}}(t)^{ \top}\Bigg{)}\delta\boldsymbol{v}_{\text{pl}}(t)\Bigg{)} \tag{41}\]
Finally, by combining Eqs. (36)-(41), the linear mapping between the variation of \({}^{\mathbb{C}}\boldsymbol{r}_{\text{pl}}\) and the variation of \(\boldsymbol{r}_{\text{pl}}\), \(\boldsymbol{v}_{\text{pl}}\), \(\boldsymbol{r}\), and \(\boldsymbol{q}_{C/N}^{(v)}\) can be established:
\[\delta^{\mathbb{C}}\boldsymbol{r}_{\text{pl}}=\boldsymbol{\Pi}\begin{pmatrix} \delta\boldsymbol{r}(t)\\ \delta\boldsymbol{v}(t)\\ \delta\boldsymbol{q}_{C/N}^{(v)}\\ \delta\boldsymbol{r}_{\text{pl}}(t)\end{pmatrix} \tag{42}\]
Eq. (42) provides the linear mapping between the projection of the planet in the image plane and the spacecraft state and the planet motion in the solar system. In an operational scenario, the variation of the spacecraft state comes from the orbital and attitude filter performance, whereas the variation of the planet motion arises from the knowledge of the planet ephemerides or from the approximation errors induced by an onboard implementation of the planet motion. The Jacobian matrix of the measurement model exploited in the navigation filter takes into account the first six columns of \(\mathbf{\Pi}\), since only the probe position and velocity are contained in the state vector \(\mathbf{x}\).
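Since the mapping \(\mathbf{\Pi}\) is derived analytically, a numerical cross-check is straightforward; the sketch below (a generic central-difference routine, not part of the original work) can be used to validate Eqs. (36)-(41) against the full measurement model.

```python
import numpy as np

# Central-difference Jacobian of a measurement function h_fun that maps
# the stacked vector z = [r; v; q_v; r_pl] to the 2D pixel projection.
# Comparing the result with the analytical Pi validates Eqs. (36)-(41).
def numerical_jacobian(h_fun, z0, eps=1e-6):
    y0 = np.atleast_1d(h_fun(z0))
    J = np.zeros((y0.size, z0.size))
    for i in range(z0.size):
        dz = np.zeros_like(z0)
        dz[i] = eps
        J[:, i] = (h_fun(z0 + dz) - h_fun(z0 - dz)) / (2.0 * eps)
    return J
```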
The advantages of the proposed measurement model are threefold. Firstly, the proposed model avoids any operation on the raw measurement extracted from the deep-space images to correct the light-aberration effect. Secondly, the expression found in Eq. (31) is analytic and provides a first-order approximation of the light-time delay, which can be used to compute \(\Delta t\) efficiently on board without any optimization, contrary to [9]. Thirdly, the proposed equation models the light effects as functions of the spacecraft and planet states, implying that their uncertainty can be taken into account during filter updates.
In the following, two approaches to correct the light-aberration effect are analyzed. Case 1 exploits the complete measurement model developed in this section, whereas in case 2 only the light-time correction is included in the measurement model, as the relativistic light aberration is compensated when performing attitude determination on all the bright objects in the image (see Sec. III). This implies that Eqs. (33) and (38) are not used and that Eqs. (34) and (37) simply become:
\[\prescript{C}{h}{\mathbf{r}}_{\text{pl}}=\mathbf{K}_{\text{cam}}\,\mathbf{A}_{\text{corr}}\,\mathbf{I}_{\text{pl/sc}} \tag{43}\]
\[\delta\,\prescript{C}{h}{\mathbf{r}}_{\text{pl}}=\mathbf{K}_{\text{cam}}\,\mathbf{A}_{\text{corr}}\left(\delta\mathbf{I}_{\text{pl/sc}}+2\left[\mathbf{I}_{\text{pl/sc}}\right]^{\wedge}\delta\mathbf{q}_{C/N}^{(v)}\right) \tag{44}\]
### C. Selected Filtering Strategy for the Vision-Based Navigation Algorithm
A non-dimensionalized EKF is selected as the most appropriate filtering approach for the development of a VBN algorithm for CubeSat applications. The selection has been performed in [9], where the behavior of five different EKFs has been analyzed in terms of estimator numerical stability and computational performance. Indeed, it is important to remark that the autonomous VBN algorithm has to be deployed on a miniaturized processor characterized by limited computational capabilities, comparable to those of a Raspberry Pi. The implemented scheme is reported in Table 1, where all the terms are non-dimensionalized following the approach discussed in [9].
Here, \(\mathbf{x}_{p_{k}}\) is the predicted state vector with error covariance matrix \(\mathbf{P}_{p_{k}}\) at epoch \(k\), \(\mathbf{K}_{k}\) the Kalman gain, \(\mathbf{x}_{c_{k}}\) the corrected state vector with error covariance matrix \(\mathbf{P}_{c_{k}}\), \(\mathbf{F}\) the Jacobian of the dynamics model equation, \(\mathbf{h}\) the measurement model equation with Jacobian \(\mathbf{H}_{k}\), \(\mathbf{v}_{k}\) the measurement white noise, and \(\mathbf{y}_{k}\) the external measurement vector.
Moreover, two additional procedures are implemented in the navigation filter to face the errors of the IP algorithm: 1) when observations are not acquired due to an IP failure, the state vector and its error covariance matrix are simply propagated until the next step; 2) an innovation-based outlier detection method is applied to reject false positives [40]. In particular, when the absolute value of the innovation term (\(||\mathbf{y}_{k}-\mathbf{h}(\mathbf{x}_{p_{k}})||\)) is greater than \(k\sqrt{\mathbf{M}_{ii}}\), with \(\mathbf{M}=\mathbf{H}_{k}\mathbf{P}_{p_{k}}\mathbf{H}_{k}^{\top}+\mathbf{R}_{k}\) and \(k=3\), the innovation term is set to zero and the filter correction step is not performed. Indeed, it is preferred to keep an old but good prediction so as not to worsen the estimation.
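A minimal sketch of the correction block of Table 1, combined with the innovation-based gate just described (with \(k=3\) and the Joseph-form covariance update), could read as follows; all names are illustrative.

```python
import numpy as np

# Sketch of the EKF correction step of Table 1 with outlier rejection:
# if any innovation component exceeds k * sqrt(M_ii), the correction is
# skipped and the prediction is kept, as described in the text.
def ekf_correct(x_p, P_p, y, h_fun, H, R, k=3.0):
    nu = y - h_fun(x_p)                          # innovation
    M = H @ P_p @ H.T + R
    if np.any(np.abs(nu) > k * np.sqrt(np.diag(M))):
        return x_p, P_p                          # keep the (old but good) prediction
    K = P_p @ H.T @ np.linalg.inv(M)             # Kalman gain
    x_c = x_p + K @ nu
    I_KH = np.eye(len(x_p)) - K @ H
    P_c = I_KH @ P_p @ I_KH.T + K @ R @ K.T      # Joseph form, as in Table 1
    return x_c, P_c
```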
## V. Results
### A. Image Processing Performance
To validate the IP algorithm performance before adopting it inside the VBN filter, four Monte Carlo campaigns are carried out where the initial uncertainty of the probe position \(\sigma_{\mathbf{r}}\) is set to \(10^{4}\), \(10^{5}\), \(10^{6}\), and \(10^{7}\) km, respectively. In each campaign, the extraction of the beacon position projection is run for the 1031 scenarios, out of the 3000 analyzed, wherein at least one planet is present. In each scenario, the position of the spacecraft is selected by randomly sampling a Gaussian distribution with \(\sigma_{x}=\sigma_{y}=3\) AU and \(\sigma_{z}=0.07\) AU, centered at the origin of \(\mathcal{N}\). The \(z\)-component of the probe position is chosen in a narrower interval as the spacecraft is supposed to lie close to the ecliptic plane. Similarly, the orientation of the probe is assigned
\begin{table}
\begin{tabular}{l c} \hline \hline System State Space & \(\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x}(t),t)+\mathbf{w}\) \\ & \(\mathbf{y}_{k}=\mathbf{h}(\mathbf{x}_{k})+\mathbf{v}_{k}\) \\ & \(\dot{\mathbf{P}}=\mathbf{FP}+\mathbf{PF}^{\top}+\mathbf{Q}\) \\ \hline Propagation Block & \(\mathbf{x}_{p_{k}}=\mathbf{x}_{c_{k-1}}+\int_{t_{k-1}}^{t_{k}}\mathbf{f}(\mathbf{x}(t),t)\mathrm{ d}t\) & \(\mathbf{x}_{c_{0}}=E[\mathbf{x}_{0}]\) \\ & \(\mathbf{P}_{p_{k}}=\mathbf{P}_{c_{k-1}}+\int_{t_{k-1}}^{t_{k}}\dot{\mathbf{P}}\mathrm{d}t\) & \(\mathbf{P}_{c_{0}}=E[\mathbf{x}_{0}\mathbf{x}_{0}^{\top}]\) \\ \hline Correction Block & \(\mathbf{K}_{k}=\mathbf{P}_{p_{k}}\mathbf{H}_{k}^{\top}(\mathbf{H}_{k}\mathbf{P}_{p_{k}}\mathbf{H}_{k}^{ \top}+\mathbf{R}_{k})^{-1}\) \\ & \(\mathbf{x}_{c_{k}}=\mathbf{x}_{p_{k}}+\mathbf{K}_{k}[\mathbf{y}_{k}-\mathbf{h}(\mathbf{x}_{p_{k}})]\) \\ & \(\mathbf{P}_{c_{k}}=(\mathbf{I}-\mathbf{K}_{k}\mathbf{H}_{k})\mathbf{P}_{p_{k}}(\mathbf{I}-\mathbf{K}_{k}\mathbf{ H}_{k})^{\top}+\mathbf{K}_{k}\mathbf{R}_{k}\mathbf{K}_{k}^{\top}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Filtering Strategy
by randomly sampling normal distributions for \(\alpha\), \(\delta\), and \(\phi\) within the \(3\sigma\) intervals \([0,2\pi]\), \([-0.6,0.6]\), and \([0,2\pi]\), respectively. The declination \(\delta\) is chosen in a narrower interval as planets are distributed close to the ecliptic plane. As for the planet detection step, \(\sigma_{\mathbf{q}_{\text{v}}}\) is set equal to 20 arcsec as a result of a statistical analysis conducted on the error obtained in the attitude determination. The planet position uncertainty \(\sigma_{\mathbf{r}_{\text{pl}}}\) is assumed equal to zero because of the high accuracy with which the planet ephemerides are known.
The performance indexes adopted for the discussion are the angular error for the attitude determination and the beacon projection error for the planet detection step. The Probability Density Function (PDF), along with the \(3\sigma\) ellipse of the best-fit Gaussian distribution, for \(\sigma_{\mathbf{r}}=10^{4}\) km, \(\sigma_{\mathbf{r}}=10^{5}\) km, \(\sigma_{\mathbf{r}}=10^{6}\) km, and \(\sigma_{\mathbf{r}}=10^{7}\) km is shown in Figs. 2a, 2b, 2c, and 2d, respectively.
Note that the planet position projection is detected with a sub-pixel \(3\sigma\) accuracy for all the values of \(\sigma_{\mathbf{r}}\). In other terms, the error on the estimated planet position projection is not dependent on the probe position uncertainty but only on the attitude determination and centroids computation errors. When the probe position uncertainty increases, the scenarios where the beacon projection error is over 0.3 px seem to be filtered out. Indeed, in these cases, the IP algorithm may select a different wrong spike as the expected position projection becomes far from the real one. As a result,
Fig. 2: **Probability Density Function of the planet position projection errors with 3\(\sigma\) bounds.**
the error norm becomes greater than the threshold value set for the assessment of the IP performance. This case is considered to be a failure of the IP pipeline and it is not represented in the PDF. Note that this condition is only applied during the IP testing, therefore, it is not present when the filter is run. The \(3\sigma\) error ellipses in Fig. 2 are obtained from the mean and covariance values reported in Table 2.
The four covariance matrices are characterized by a similar determinant, which is proportional to the area of the ellipse, implying that the precision in the planet determination is not dependent on the uncertainty of the spacecraft position. This feature is one of the advantages of the proposed pipeline for planet detection in deep-space images. The results of the IP robustness and attitude determination error are shown in Table 3 for the 962 cases in which a correct or wrong attitude solution is found.
The percentage of off-nominal scenarios during planet identification greatly depends on the probe position uncertainty. Indeed, when \(\sigma_{\mathbf{r}}\) increases, the expected planet position projection is further from the real position projection, and its uncertainty ellipse is bigger, which leads to a higher probability of planet misidentification. Moreover, the percentage of off-nominal scenarios in planet detection also depends strictly on the success of the attitude determination. Indeed, when attitude determination provides the wrong solution, planet detection fails consequentially. In Table 3, the third column represents the total number of cases of wrong planet identification when the attitude determination converges to a right or wrong solution. Instead, the last column represents the number of cases of wrong planet identification when the attitude determination converges to the correct solution. The failure percentage of the beacon detection procedure when the probe attitude is correctly determined is lower than 1% with a probe position uncertainty up to \(10^{5}\) km.
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline \(\sigma_{\mathbf{r}}\) [km] & \(10^{4}\) & \(10^{5}\) & \(10^{6}\) & \(10^{7}\) \\ \hline \(\mathbf{P}_{\text{err}}\) [px\({}^{2}\)] & \(\begin{bmatrix}0.008&0.001\\ 0.001&0.007\end{bmatrix}\) & \(\begin{bmatrix}0.011&0.002\\ 0.002&0.007\end{bmatrix}\) & \(\begin{bmatrix}0.013&0.006\\ 0.006&0.011\end{bmatrix}\) & \(\begin{bmatrix}0.007&0.001\\ 0.001&0.005\end{bmatrix}\) \\ \(\mathbf{\mu}_{\text{err}}\) [px] & [0.0014; -0.0003] & [0.0001; -0.0034] & [-0.0005; -0.0045] & [-0.0005; -0.0041] \\ \(\text{det}(\mathbf{P})\) [px\({}^{4}\)] & 5.9e-05 & 7.05e-05 & 9.94e-05 & 7.05e-05 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Mean and covariance of the planet position projection errors for the four values of \(\sigma_{\mathbf{r}}\)
\begin{table}
\begin{tabular}{c|c c c} \hline \hline \(\sigma_{\mathbf{r}}\) [km] & \(\sigma_{\text{ErrRot}}\) [arcsec] & \% Wrong Beacon Detection (of 962 cases) & \% Wrong Beacon Detection with Right Attitude Determination (of 962 cases) \\ \hline \(10^{4}\) & 14.77 & 4.57 (44 cases) & 0.42 (4 cases) \\ \(10^{5}\) & 15.18 & 4.68 (45 cases) & 0.52 (5 cases) \\ \(10^{6}\) & 15.21 & 7.69 (74 cases) & 3.33 (32 cases) \\ \(10^{7}\) & 15.48 & 29.83 (287 cases) & 25.47 (245 cases) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Algorithm Performance
To sum up, the accuracy of the proposed method is independent of the probe position uncertainty and relies only on the centroid errors. In contrast, the robustness of the IP depends on the attitude determination performance and on the probe position uncertainty. If a more robust star identification algorithm is adopted, such as the pyramid one [41], or an attitude filter is implemented, the total percentage of failure in the beacon detection becomes remarkably lower. For completeness' sake, Fig. 3 shows some scenarios, found during the assessment of the IP performance, where the procedure fails in planet detection. In particular, in Figs. 3a, 3b, and 3c the planet is not found in the image, whereas in Fig. 3d the planet is wrongly determined. In the figures, + represents the real planet position projection, \(\times\) the expected planet position projection, and \(\square\) the found spikes, respectively.
### B. Filter Results
#### 1. Navigation Concept of Operations
In the study case, a CubeSat estimates its position and velocity by tracking visible planets over an interplanetary transfer. The spacecraft alternates observation windows, where an asynchronous tracking of the optimal pair of planets is performed, with only-propagation windows, where the filter only propagates the probe state as no external observations are acquired. The navigation CONOPS is shown in Fig. 4. The probe tracks the first planet of the optimal pair, which is selected at the beginning of the navigation cycle, then performs a slew maneuver to point to the second planet, during which no observations are acquired, and finally observes the latter. Eventually, the estimation is propagated until the following observation window starts.
At every time step of the planet observation, an image is generated using an improved version of the deep-space sky simulator in [42]. The simulator models the effects caused by the lights, i.e., light-time and light-aberration effects, on the centroids' positions, and the impact of cosmic rays hitting the sensor frame. The sky simulator renders the image by taking as input the true probe pose and velocity. Since the attitude control system is not simulated in this work, the true probe orientation is computed by evaluating the desired pointing direction needed to acquire the planet at the center of the image and adding a random perturbation to it, which simulates the spacecraft jitter effect and the attitude knowledge error. Since the probe position is known with a given uncertainty (up to \(10^{5}\) km in this work), the beacon projection will not be perfectly centered in the image but will still be contained in it, which is a sufficient condition to let the IP pipeline extract the planet observation.
Fig. 4: Navigation Concept of Operations
#### 2. Simulation Settings
The VBN filter proposed in this work is tested on an interplanetary high-fidelity ballistic trajectory between Earth-Mars [43]. The dynamics of the reference true trajectory include the SRP perturbations, the main attractor acceleration, third-body accelerations due to all the planets in the Solar System, and relativistic perturbations. Note that the dynamic model selected for the filter (Eq. 21) is a lower-fidelity one, implying that the unmodeled accelerations are captured by the GM processes. Fig. 5 shows the analyzed leg of the nominal probe trajectory.
Starting from \(t_{0}\), the estimation procedure begins. Each planet is tracked for an hour with a frequency
Figure 3: Scenarios in which the IP pipeline fails in the planet detection.
of 0.01 Hz, the slew maneuver lasts 30 minutes, and the window in which the state is only propagated is ten days. Therefore, only two hours every ten days are reserved for correcting the state estimate. Over the interplanetary trajectory, 10 navigation legs of 10 days, 2 hours, and 30 minutes each are repeated.
For image generation, the onboard camera is assumed to have the characteristics reported in Table 4, where F is the f-number, Qe is the quantum efficiency, Tlens is the lens transmission, \(\sigma_{d}\) is the defocus level, and \(n_{\text{CR}}\) is the number of single pixels that are turned on to simulate cosmic rays hitting the sensor. Figs. 6a and 6b are two of the rendered deep-space sky images adopted in the filtering procedure.
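For reference, the camera settings of Table 4 can be grouped into a single configuration; the sketch below simply mirrors the table, and the key names are assumptions rather than identifiers from the authors' simulator.

```python
# Illustrative configuration mirroring Table 4; key names are hypothetical.
CAMERA_CONFIG = {
    "fov_deg": 20,                    # field of view
    "f_number": 2.2,                  # F
    "T_ms": 400,
    "image_size_px": (1024, 1024),
    "focal_length_mm": 40,            # f
    "qe_times_tlens": 0.49,           # Qe x Tlens
    "sigma_d_px": 0.5,                # defocus level
    "n_cosmic_rays": 1,               # single pixels turned on per image
    "sae_deg": 20,                    # SAE (see Table 4)
}
```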
The initial standard deviations of the state adopted in Eq. 46 are reported in Table 5.
Note that the values are selected following a conservative approach, taking into account that in deep space the initial position and velocity are usually known with an accuracy better than \(10^{4}\) km and 0.1 km/s, respectively. However, even if the performance of the IP procedure degrades as the probe position uncertainty increases, it has been tested to work up to \(\sigma_{r}=10^{6}\) km with a success rate higher than 90% in planet
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c} \hline \hline FoV [deg] & F [-] & T [ms] & Image size [px] & f [mm] & Qe\(\times\) Tlens & \(\sigma_{d}\) [px] & \(n_{\text{CR}}\) & SAE [deg] \\
20 & 2.2 & 400 & 1024\(\times\) 1024 & 40 & 0.49 & 0.5 & 1 & 20 \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Onboard Camera Characteristics.**
\begin{table}
\begin{tabular}{c c c c} \hline \hline \(\sigma_{\text{r}}\) [km] & \(\sigma_{\text{v}}\) [km/s] & \(\sigma_{\text{SRP}}\) [km/\(s^{2}\)] & \(\sigma_{\text{R}}\) [km/\(s^{2}\)] \\ \hline \(10^{4}\) & \(10^{-1}\) & \(10^{-10}\) & \(10^{-10}\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: **Accuracy of the state components at \(t_{0}\)**
Figure 5: **Ballistic Interplanetary Reference Trajectory**
detection. The performance of the IP then worsens to 70% when \(\sigma_{r}=10^{7}\) km (see Sec. V.A) under the assumption of Gaussian errors in the measurement. As for the OD, the algorithm has been tested in [9] to work up to \(\sigma_{r}=10^{7}\) km; above this value, the OD algorithm is not able to select the optimal targets. Moreover, the standard deviation of the measurement error is set to \(\sigma_{\text{str}}=0.1\) px, considering the results of the Monte Carlo runs in the extraction of the planet centroid reported in Sec. V.A. Eventually, only planets whose apparent magnitude is lower than 7 and whose SAE is greater than \(20^{\circ}\) are assumed to be visible by the camera; therefore, they are the only ones considered available for the optimal beacon selection process.
#### 3. Filter Performance
A Monte Carlo simulation of 50 samples is performed. The initial probe state vector is perturbed by applying the \(3\sigma\) standard deviation rule:
\[\mathbf{x}_{0}=\mathbf{\tilde{x}}_{0}+3\sqrt{\mathbf{P}_{0}}\mathbf{k} \tag{45}\]
where \(\mathbf{\tilde{x}}_{0}\) is the probe nominal state, \(\mathbf{k}\) is a random vector with values within \([-1;1]\), and the square root operates on the elements of the initial error covariance matrix \(\mathbf{P}_{0}\), which is defined as:
\[\mathbf{P}_{0}=\text{diag}(\sigma_{\text{r}}^{2}\mathbf{I}_{3x3},\sigma_{\text{v}}^{2 }\mathbf{I}_{3x3},\sigma_{\text{R}}^{2}\mathbf{I}_{3x3},\sigma_{\text{SRP}}^{2}\mathbf{I}_ {3x3}) \tag{46}\]
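In code, the perturbation of Eqs. (45)-(46) amounts to scaling a uniform random vector by the initial standard deviations; a possible sketch (illustrative, with the state ordered as in Eq. (46)) follows.

```python
import numpy as np

# Sketch of Eq. (45): x0 = x_nom + 3 * sqrt(P0) * k, with sqrt(P0)
# acting element-wise on the diagonal of Eq. (46) and k uniform in [-1, 1].
def perturb_initial_state(x_nom, sigma_r, sigma_v, sigma_R, sigma_SRP, rng):
    sigmas = np.concatenate([np.full(3, sigma_r), np.full(3, sigma_v),
                             np.full(3, sigma_R), np.full(3, sigma_SRP)])
    k = rng.uniform(-1.0, 1.0, size=sigmas.size)
    return x_nom + 3.0 * sigmas * k

# Example: rng = np.random.default_rng(0)
#          x0 = perturb_initial_state(x_nom, 1e4, 1e-1, 1e-10, 1e-10, rng)
```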
The performance analyzed hereafter is relative to case 1 outlined in Sec. IV.B. Figures 7 and 8 show the position and velocity error profiles and \(3\sigma\) covariance bounds in the J2000 ecliptic reference frame on the studied trajectory leg. The sample error profile is displayed with blue solid lines, whereas the orange solid
lines and the dashed ones define the \(3\sigma\) covariance bounds of the samples and the filter, respectively.
At the end of the trajectory leg, the filter estimates the spacecraft position and velocity with a \(3\sigma\) accuracy of 1953 km and 0.41 m/s, respectively. The \(3\sigma\) sample and filter covariance profiles are mostly overlapped, which suggests that the filter and its covariance matrices, in particular \(\mathbf{R}\), are well tuned. This underlines that the planet centroids are extracted with a \(3\sigma\) accuracy lower than 0.3 px, as found in the Monte Carlo campaigns conducted in Sec. V.A. The outlier detection method has not rejected any planet determined by the IP. This is consistent with the results found in Sec. V.A, where the percentage of false positives is lower than 1% when the attitude is correctly determined and \(\sigma_{r}\) is below \(10^{5}\) km. The planets observed during the interplanetary transfer are Earth, Mars, and Venus. Their object-to-pixel ratio is checked to be below 1 over
Figure 8: Estimated errors for each velocity component with related \(3\sigma\) bounds
Figure 7: Estimated errors for each position component with related \(3\sigma\) bounds.
the entire tracking period to respect the assumption of navigation with unresolved planets. The performance of the filter detailed above (case 1) is compared with the other 4 cases:
* Case 2: when the light-aberration effect on the planet position is corrected inside the IP (Sec. III.B).
* Case 3: when neither effect is corrected.
* Case 4: when only the light-aberration effect is compensated.
* Case 5: when only the light-time effect is compensated.
The root-mean-square error (RMSE) is chosen as the performance index to measure the estimation accuracy of the different models on the selected dataset, which comprises 50 initial state vectors and 814 simulated images representing the scenarios observed during the interplanetary transfer. The filter and camera settings are unchanged with respect to case 1 for the sake of comparison.
The estimation is biased when the corrections of the light effects are not taken into account in the filter (case 3). Between the two effects, the greater deviation from the nominal value is obtained when the light-time effect is not compensated (case 4). It can be noted that the RMSE associated with case 4 is greater than that of case 3. This means that the deviation caused by the light-time effect is partially compensated by the light aberration. When the light-aberration correction is not applied to both the star and planet centroids, the probe state estimation is affected by a small deviation (case 5). This occurs because the measurement model and the external observation are both affected by the light-aberration effect. Indeed, on one side, the attitude adopted to evaluate the measurement model is determined from stars in the image whose positions are aberrated, so the measurement model is perturbed as well. On the other side, the external observation, i.e., the planet centroids, is not corrected for the light-aberration effect. Thus, the observation and the model are coherent.
In contrast, the performance of the filter when the light effects are both compensated in the measurement
Fig. 9: Position and Velocity RMSE for the five cases.
model (case 1) is analogous to the performance of the filter when the light aberration is corrected in the IP procedure (case 2). A slight discrepancy exists since the methodology followed to compensate for the light effects is different. In case 2, the light-aberration effect on the planet position is taken into account by correcting the observation, instead, in case 1, the light-aberration effect is compensated in the measurement model. Moreover, in case 1, the light-aberration effect is also taken into account in the derivation of \(\mathbf{H}\) (Eq. 42). Nevertheless, the similarity of the performance allows us to validate the innovative measurement model proposed in Sec. IV.B that provides an analytical first-order approximation of light effects corrections.
## VI. Conclusion
This paper develops an autonomous vision-based navigation algorithm for interplanetary transfers with an application to CubeSat missions. A non-dimensional extended Kalman filter adopting the planet position projection as external observation is chosen by considering the limited processing capabilities of a standard miniaturized processor. Moreover, the measurements exploited for the estimation correction are directly extracted from images generated with a deep-space rendering engine. This procedure allows for obtaining a more faithful value of the measurement error and its influence on the filter solution. At the end of the Earth-Mars trajectory, the filter estimates the spacecraft position and velocity with an accuracy of 2000 km and 0.5 m/s, respectively. Future analysis should examine the performance of the filter over low-thrust trajectories, which are desirable in CubeSat applications. Moreover, in this work, the probe attitude is determined from deep-space images with state-of-the-art star identification and attitude determination algorithms. The integration of the attitude filter in the proposed vision-based navigation algorithm and the adoption of a more robust star identification technique are objects of future investigation. Eventually, to validate the applicability of the vision-based navigation, it will be tested through hardware-in-the-loop simulations. Preliminary results have already been obtained through, first, the validation of the orbit determination algorithm with processor-in-the-loop simulations [9] and, second, the validation of the image processing pipeline with the optical facility in the loop [44, 45].
## Acknowledgments
This research is part of EXTREMA, a project that has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant Agreement No. 864697).
|
2309.08842 | MA-SAM: Modality-agnostic SAM Adaptation for 3D Medical Image
Segmentation | The Segment Anything Model (SAM), a foundation model for general image
segmentation, has demonstrated impressive zero-shot performance across numerous
natural image segmentation tasks. However, SAM's performance significantly
declines when applied to medical images, primarily due to the substantial
disparity between natural and medical image domains. To effectively adapt SAM
to medical images, it is important to incorporate critical third-dimensional
information, i.e., volumetric or temporal knowledge, during fine-tuning.
Simultaneously, we aim to harness SAM's pre-trained weights within its original
2D backbone to the fullest extent. In this paper, we introduce a
modality-agnostic SAM adaptation framework, named as MA-SAM, that is applicable
to various volumetric and video medical data. Our method roots in the
parameter-efficient fine-tuning strategy to update only a small portion of
weight increments while preserving the majority of SAM's pre-trained weights.
By injecting a series of 3D adapters into the transformer blocks of the image
encoder, our method enables the pre-trained 2D backbone to extract
third-dimensional information from input data. The effectiveness of our method
has been comprehensively evaluated on four medical image segmentation tasks, by
using 10 public datasets across CT, MRI, and surgical video data. Remarkably,
without using any prompt, our method consistently outperforms various
state-of-the-art 3D approaches, surpassing nnU-Net by 0.9%, 2.6%, and 9.9% in
Dice for CT multi-organ segmentation, MRI prostate segmentation, and surgical
scene segmentation respectively. Our model also demonstrates strong
generalization, and excels in challenging tumor segmentation when prompts are
used. Our code is available at: https://github.com/cchen-cc/MA-SAM. | Cheng Chen, Juzheng Miao, Dufan Wu, Zhiling Yan, Sekeun Kim, Jiang Hu, Aoxiao Zhong, Zhengliang Liu, Lichao Sun, Xiang Li, Tianming Liu, Pheng-Ann Heng, Quanzheng Li | 2023-09-16T02:41:53Z | http://arxiv.org/abs/2309.08842v1 | # MA-SAM: Modality-agnostic SAM Adaptation for 3D Medical Image Segmentation
###### Abstract
The Segment Anything Model (SAM), a foundation model for general image segmentation, has demonstrated impressive zero-shot performance across numerous natural image segmentation tasks. However, SAM's performance significantly declines when applied to medical images, primarily due to the substantial disparity between natural and medical image domains. To effectively adapt SAM to medical images, it is important to incorporate critical third-dimensional information, i.e., volumetric or temporal knowledge, during fine-tuning. Simultaneously, we aim to harness SAM's pre-trained weights within its original 2D backbone to the fullest extent. In this paper, we introduce a modality-agnostic SAM adaptation framework, named as MA-SAM, that is applicable to various volumetric and video medical data. Our method roots in the parameter-efficient fine-tuning strategy to update only a small portion of weight increments while preserving the majority of SAM's pre-trained weights. By injecting a series of 3D adapters into the transformer blocks of the image encoder, our method enables the pre-trained 2D backbone to extract third-dimensional information from input data. The effectiveness of our method has been comprehensively evaluated on four medical image segmentation tasks, by using 10 public datasets across CT, MRI, and surgical video data. Remarkably, without using any prompt, our method consistently outperforms various state-of-the-art 3D approaches, surpassing nnU-Net by 0.9%, 2.6%, and 9.9% in Dice for CT multi-organ segmentation, MRI prostate segmentation, and surgical scene segmentation respectively. Our model also demonstrates strong generalization, and excels in challenging tumor segmentation when prompts are used. Our code is available at: [https://github.com/cchen-cc/MA-SAM](https://github.com/cchen-cc/MA-SAM).
## 1 Introduction
The rise of foundation models (Bommasani et al., 2021) that are trained on vast and diverse datasets has catalyzed a paradigm shift in intelligent model development. Driven by their remarkable generalization and few-shot learning capability, it has become increasingly appealing to adapt a pre-trained large model to a diversity of downstream tasks, as opposed to the traditional approach of crafting and training distinct task-specific models from scratch. The Segment Anything Model (SAM) (Kirillov et al., 2023) is a recently developed visual foundation model for promptable image segmentation, pre-trained over 1 billion masks on 11 million natural images. Thanks to its large-scale training data and general model architecture, SAM has demonstrated impressive zero-shot performance on various tasks in the context of natural images. Given these merits, a natural question arises: can SAM be directly extended to address the critical medical image segmentation tasks, a domain that has been struggling with limited availability of high-quality images and labels essential for training deep models? However, due to the significant domain gap between natural images and medical images, the latest works on evaluating SAM on medical images have shown that SAM's zero-shot capability, regardless of whether prompts are employed, falls short for direct deployment on medical images (Huang et al., 2023; He et al., 2023; Wald et al., 2023). In these assessments, SAM obtains inferior performance when compared to state-of-the-art (SOTA) medical image segmentation models, and even encounters complete failure in some challenging tasks.
Based on these evaluations, it becomes evident that fine-tuning is an essential step for applying SAM to medical images. But why are we inclined to adapt SAM for medical image tasks? This can be attributed to three potential advantages associated with SAM. Firstly, SAM's training dataset consists of an extensive collection of images. Acquiring a similarly large-scale training dataset in the context of medical applications is extremely challenging. Although SAM's training data only comprises natural images, it is not restricted to any specific medical imaging modality. If SAM fine-tuning proves effective for one type of medical imaging, there is a good chance that the same approach could be applicable to other modalities as well. Secondly, after fine-tuning, SAM as a pre-trained large model may possess potential for robust generalization, which is of great importance for effectively deploying intelligent models in critical medical applications. Thirdly, SAM's prompt design provides a convenient solution for semi-automatic segmentation in tackling difficult tasks, such as tumor segmentation. In these aspects, SAM provides a general-purpose foundation model with the potential to be adapted across diverse medical imaging modalities, offering good generalization capability for both fully-automatic and semi-automatic segmentation.
Efforts to adapt SAM for medical applications are rapidly
growing, with the majority of these approaches relying on SAM's prompt design (Cheng et al., 2023; Wu et al., 2023; Deng et al., 2023; Dai et al., 2023). However, providing suitable prompts for segmenting each object within medical data is non-trivial. For example, consider an abdominal CT volume containing multiple organs: even providing a basic point prompt for each organ in every slice demands substantial effort. Moreover, in cases where segmentation objects present relatively regular shapes and locations, automatic segmentation methods already obtain encouraging results, obviating the need for prompts in semi-automatic segmentation. In the context of SAM adaptation for automatic medical image segmentation, some recent studies employ parameter-efficient transfer learning (PETL) techniques, such as LoRA (Hu et al., 2021) or Adapters (Houlsby et al., 2019), showing promising performance in automatic segmentation (Zhang and Liu, 2023; Wang et al., 2023). However, these methods focus on pure 2D adaptation, overlooking the valuable third-dimensional information inherently present in medical images. This includes the crucial 3D spatial information in medical volumetric data and the temporal information in medical video data.
In this paper, we propose a modality-agnostic SAM adaptation method for medical image segmentation, named MA-SAM, which efficiently and effectively captures the volumetric or temporal information in medical data. For the fine-tuning of the image encoder, we leverage the PETL technique called FacT (Jie and Deng, 2023), which is based on tensorization-decomposition to enhance the tuning efficiency. Such a fine-tuning approach retains the pre-trained weights to a large extent and only updates lightweight weight increments, ensuring the preservation of general knowledge necessary for object segmentation and reducing the number of parameters that need to be adjusted. To bridge the gap between 2D natural images and volumetric or video medical data, we further incorporate a set of 3D adapters into each transformer block of the image encoder to extract the valuable third-dimensional information. For the adaptation of the lightweight mask decoder, we employ full fine-tuning and modify its original architecture with a simple yet effective progressive up-sampling mechanism to recover the prediction resolution. We demonstrate the efficacy of our SAM adaptation framework on multiple medical imaging modalities in tackling various segmentation tasks. By comparing with multiple SOTA methods, our automatic segmentation demonstrates superior performance and remarkable generalization capability. Our main contributions are highlighted as follows:
* We propose a parameter-efficient fine-tuning method to adapt SAM to volumetric and video medical data. Our method effectively incorporates the essential third-dimensional information from medical images into the 2D network backbone via lightweight 3D adapters.
* We demonstrate that our SAM adaptation can be applied to various medical imaging modalities, including CT, MRI, and surgical video data, for anatomy, surgical scene, and tumor segmentation. Without using any prompt, our automatic segmentation consistently outperforms competitive SOTA methods by a large margin.
* We validate that after fine-tuning on medical images, the obtained models present outstanding generalization capability, showing even superior performance to SOTA domain generalization approaches.
* We show that by further leveraging prompts, our method achieves impressive results in challenging tumor segmentation task, surpassing nnU-Net by 38.7% in Dice score.
## 2 Related work
### Vision foundation models
Foundation models have recently been actively developed in computer vision, although to a lesser extent compared to their prevalence in natural language processing. Pioneering vision foundation models learn directly from vast image-text pairs sourced from the web in a self-supervised manner. Representative works CLIP (Radford et al., 2021) and ALIGN (Jia et al., 2021) leverage contrastive learning techniques to train both text and image encoders. However, these models primarily excel in tasks that involve mapping images to text, such as classification. Later on, Florence (Yuan et al., 2021) incorporates universal visual-language representations, showing adaptability to more diverse computer vision tasks. One of the latest developments is SAM (Kirillov et al., 2023), a vision foundation model for general-purpose image segmentation. By pre-training on 1 billion masks, SAM demonstrates impressive zero-shot capability across numerous image segmentation tasks. Concurrently, SegGPT (Wang et al., 2023) and SEEM (Zou et al., 2023) have also emerged for general image segmentation, but are pre-trained on relatively smaller datasets compared to SAM.
### Parameter-efficient transfer learning
With the remarkable performance exhibited by large models, the paradigm of pre-training large foundation models and subsequently fine-tuning for specific downstream tasks has gained increasing popularity. As pre-trained large models continue to grow in scale, research on PETL has emerged to achieve effective and efficient adaptation by optimizing only a small subset of model parameters while keeping a substantial amount of parameters fixed. PETL techniques were originally proposed in natural language processing and can be categorized into three main groups (Lialin et al., 2023): additive methods, selective methods, and reparameterization-based methods. Additive methods, such as Adapters (Houlsby et al., 2019), aim to augment the existing pre-trained model by introducing additional parameters or layers, and then fine-tuning only these newly introduced components (He et al., 2022; Liu et al., 2023). Selective methods focus on updating a few selected influential layers or internal structures within the model (Gheini et al., 2021; Zaken et al., 2022). Reparameterization-based methods, such as LoRA (Hu et al., 2021) and FacT (Jie and Deng, 2023), leverage low-rank representations to minimize the number of trainable parameters, demonstrating robust and SOTA performance across various PETL tasks. Recently, PETL has also been actively studied in computer vision, enabling the effective adaptation of vision foundation models to a
wide range of downstream tasks (Zhou et al., 2022; Jia et al., 2022; Pan et al., 2022; Wang et al., 2023).
### Adapting SAM in medical imaging
Attracted by SAM's outstanding zero-shot performance in natural images, a plethora of evaluation studies quickly emerged in various medical image segmentation tasks (Huang et al., 2023; He et al., 2023; Wald et al., 2023; Zhou et al., 2023; Deng et al., 2023; Hu and Li, 2023; Cheng et al., 2023; Zhang et al., 2023). However, due to the large domain gap between natural and medical images, directly applying SAM to medical applications typically resulted in unsatisfactory performance. For example, He et al. (He et al., 2023) assessed SAM's segmentation accuracy on 12 medical imaging datasets and observed that SAM's zero-shot performance lagged significantly behind models trained on domain-specific medical images, with a performance gap as large as 70% in Dice in some tasks. Similar observations were reported in (Huang et al., 2023), even when using different types of prompts. These findings suggest the necessity of task-specific fine-tuning to adapt SAM to medical images for better segmentation performance.
Subsequently, attention has shifted from evaluation to adaptation of SAM to medical images (Zhang and Liu, 2023; Biswas, 2023; Wu et al., 2023; Li et al., 2023; Feng et al., 2023). Driven by the improvements observed with the use of prompts, a majority of works leverage SAM's prompt design during fine-tuning (Cheng et al., 2023; Deng et al., 2023; Dai et al., 2023; Yue et al., 2023). For instance, SAM-Med2D (Cheng et al., 2023) adopted more comprehensive prompts involving points, bounding boxes, and masks to tailor SAM for 2D medical images, and conducted comprehensive evaluations. MSA (Wu et al., 2023) employed point prompts and the Adapter technique to integrate medical domain knowledge into the SAM model. However, creating prompts for each 2D slice of 3D medical data is labor-intensive. In the case of SAM adaptation for fully automatic medical image segmentation (Hu et al., 2023; Paranjape et al., 2023), SAMed (Zhang and Liu, 2023) and Wang et al. (Wang et al., 2023) adopted LoRA for fine-tuning, showing superior performance to multiple 2D medical image segmentation methods. However, these methods do not take into account the critical 3D volumetric or temporal information, which is well-known to be valuable for enhancing medical image segmentation performance.
## 3 Methodology
In this section, we first briefly introduce the overview of SAM architecture, then introduce our method for the parameter-efficient fine-tuning of image encoder, the incorporation of volumetric or temporal information, and the adaptation of mask decoder, respectively. An overview of our framework for effective SAM adaptation is illustrated in Fig. 1.
### Overview of SAM
SAM is a promptable segmentation architecture consisting of three main components, i.e., the image encoder, the prompt encoder, and the mask decoder. The image encoder employs the Vision Transformer (ViT) (Dosovitskiy et al., 2020) as the backbone, extracting essential features of the images with a set of transformer blocks. The prompt encoder takes in various types of prompts, including points, boxes, or texts, and encodes these inputs into prompt embeddings to facilitate the segmentation task. The mask decoder is designed to be lightweight; it computes the cross-attention between embeddings of image and prompts, and utilizes transposed convolutional layers and a multi-layer perceptron to generate segmentation masks. When applied to medical images, the model's performance largely degrades since medical images present textures and objects distinct from those in natural images. This highlights the necessity of task-specific fine-tuning of SAM to address such challenges.
### Parameter-efficient fine-tuning of image encoder
In order to effectively extract image features, SAM's image encoder comprises a substantial portion of the network parameters. Fine-tuning all these weights is computationally intensive. Previous research has shown that PETL techniques can achieve adaptation performance similar to full fine-tuning while updating significantly fewer network parameters (Hu et al., 2021; Pan et al., 2022). In this regard, we adopt FacT (Jie and Deng, 2023), a SOTA PETL technique that can obtain comparable or superior performance compared to other PETL methods while introducing a smaller number of trainable parameters.
Based on the common observation that transformer-based models tend to be redundant in rank, FacT assumes the dense weight increment matrices \(\Delta\mathbf{W}\) used for fine-tuning can be approximated by a set of low-rank factors with cross-layer weight sharing. Following the tensor decomposition in FacT, we decompose the weight increments \(\Delta\mathbf{W}\) for each layer into three factors \(\mathbf{U}\in\mathbb{R}^{d\times r}\), \(\mathbf{V}\in\mathbb{R}^{d\times r}\), and \(\mathbf{\Sigma}\in\mathbb{R}^{r\times r}\), where \(d\) denotes the feature dimension in ViT and \(r\) stands for the rank of these factors with \(r\ll d\). It is worth noting that the two factors, \(\mathbf{U}\) and \(\mathbf{V}\), are shared across all layers, while the factor \(\mathbf{\Sigma}\) is unique for each layer. The weight increments can then be calculated using the following equation:
\[\Delta\mathbf{W}_{ij}=s\cdot\sum_{t_{1}=1}^{r}\sum_{t_{2}=1}^{r}\mathbf{\Sigma}_{t_{1},t_{2}}\mathbf{U}_{i,t_{1}}\mathbf{V}_{j,t_{2}}, \tag{1}\]
where \(s\) denotes a hyper-parameter for adjusting the learning rate of factors. We fix \(s\) as 1 in our experiments and tune the overall learning rate with the optimizer to achieve a similar scaling effect. The FacT weight increments are applied to the query and value transformations within each transformer block, while all the other weights initialized from SAM remain frozen, as empirically there were no obvious improvements observed when applying FacT to other layers. With the FacT weight increments, the query and value transformations become:
\[\mathbf{W}_{q/v}=\mathbf{W}_{0}+s\cdot\mathbf{U}\mathbf{\Sigma}_{q/v}\mathbf{V}^{T}, \tag{2}\]
where \(\mathbf{W}_{q/v}\) denotes the query or value transformation after fine-tuning, \(\mathbf{W}_{0}\) represents the SAM pre-trained weights.
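To make the factorization concrete, the following is a minimal PyTorch sketch of how FacT-style increments could be attached to a frozen query or value projection. The class, variable names, and initialization scale are our own illustration rather than SAM's or FacT's actual API.

```python
import torch
import torch.nn as nn

class FacTLinear(nn.Module):
    """A frozen linear layer augmented with a FacT-style low-rank increment:
    W = W_0 + s * U @ Sigma @ V^T. U and V (d x r) are shared across layers,
    Sigma (r x r) is layer-specific; only the factors are trained."""

    def __init__(self, base: nn.Linear, U: nn.Parameter, V: nn.Parameter,
                 r: int = 32, s: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # keep the pre-trained weights frozen
            p.requires_grad = False
        self.U, self.V = U, V              # shared factors, shape (d, r)
        self.sigma = nn.Parameter(torch.zeros(r, r))  # layer-specific, zero-init
        self.s = s

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # low-rank increment x @ (U Sigma V^T), computed factor-wise in O(d*r)
        delta = ((x @ self.U) @ self.sigma) @ self.V.t()
        return self.base(x) + self.s * delta

d, r = 1280, 32                             # e.g. ViT_H embedding dim, rank 32
U = nn.Parameter(torch.randn(d, r) * 0.02)  # shared across all blocks
V = nn.Parameter(torch.randn(d, r) * 0.02)
q_proj = FacTLinear(nn.Linear(d, d), U, V, r=r)
out = q_proj(torch.randn(2, 196, d))        # works on token sequences
```

Initializing \(\mathbf{\Sigma}\) to zero makes the increment start at exactly zero, so the adapted model initially reproduces the pre-trained SAM behavior.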
### Incorporating volumetric or temporal information
SAM is initially pre-trained on 2D images, yet medical imaging typically involves more than two dimensions. For example,
volumetric CT and MRI data contain crucial 3D spatial information for depicting anatomical structures or lesions, and surgical video data possesses valuable temporal relations between frames. Incorporating this volumetric or temporal knowledge inherent in medical imaging data is pivotal for the successful transfer learning of SAM in medical applications. To address this key challenge, we propose to integrate a series of 3D adapters into the 2D transformer blocks within the SAM architecture. These adapters serve the purpose of extracting the essential volumetric or temporal insights needed for medical image analysis. By incorporating these adapters, we bridge the gap between the inherent complexities of medical imaging data and SAM's pre-trained 2D backbone, enabling it to effectively handle multidimensional medical data.
Specifically, as shown in Fig. 1, each 3D adapter consists of a normalization layer, a linear down-projection layer, a 3D convolutional layer followed by an activation layer, and a linear up-projection layer. The core extraction of volumetric or temporal information primarily resides within the 3D convolutional layer. The purpose of the down-projection layer is to reduce the dimensionality of the original \(d\)-dimensional features into a more compact \(c\)-dimensional representation, so as to control the number of newly introduced parameters. Conversely, the up-projection layer restores the feature dimensions. With \(\mathbf{M}\) denoting feature maps, the 3D adapter can be expressed as:
\[\text{3DAdapter}(\mathbf{M})=\mathbf{M}+\sigma(\text{Conv3D}(\text{Norm}(\mathbf{M})\mathbf{W}_{\text{down}}))\mathbf{W}_{\text{up}}, \tag{3}\]
where Norm denotes the layer normalization, \(\sigma\) denotes the activation function, \(\mathbf{W}_{\text{down}}\in\mathbb{R}^{d\times c}\) and \(\mathbf{W}_{\text{up}}\in\mathbb{R}^{c\times d}\) denote the linear down- and up-projection layer respectively, and Conv3D denotes the 3D convolutional layer with a kernel size of \(3\times 1\times 1\) to specifically extract the third dimensional information.
To make the 3D adapters compatible with the 2D SAM backbone, for the network inputs, we extract a set of adjacent slices \(\mathbf{x}=\{x_{i-\lfloor N/2\rfloor},...,x_{i},...,x_{i+\lfloor N/2\rfloor}\}\), \(\mathbf{x}\in\mathbb{R}^{B\times N\times H\times W}\). Here, \(B\) denotes the batch size, \(N\) denotes the number of adjacent slices, and \(H\times W\) denotes the slice dimensions. Before the inputs are passed into the SAM backbone, a reshape operation is applied to transform \(\mathbf{x}\in\mathbb{R}^{B\times N\times H\times W}\) into \(\mathbf{x}\in\mathbb{R}^{BN\times H\times W}\) by merging the adjacent slices into the batch dimension. Then, prior to feeding the feature maps into the 3D convolutional layer of a 3D adapter, they are reshaped from \([BN,H/16,W/16,c]\) to \([B,c,N,H/16,W/16]\). Here \(H/16\) and \(W/16\) denote the spatial dimensions of the feature maps, which are down-sampled by 16 times because of the patch embedding process in the transformer. After the 3D convolutional operation, the shape of the feature maps is changed back again. In this way, the volumetric or temporal information can be effectively extracted within a 2D network backbone. For each transformer block, we incorporate two 3D adapters before and after the attention layers, as empirically superior performance can be obtained with such a design.
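The following is a minimal PyTorch sketch of such a 3D adapter, including the reshape operations described above; the module names and the choice of GELU activation are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class Adapter3D(nn.Module):
    """Sketch of the 3D adapter: Norm -> down-projection -> 3D convolution over
    the slice axis -> activation -> up-projection, with a residual connection."""

    def __init__(self, d: int, c: int, n_slices: int):
        super().__init__()
        self.norm = nn.LayerNorm(d)
        self.down = nn.Linear(d, c)                    # d -> c, limits new params
        # kernel 3x1x1: mixes information only along the third (slice) dimension
        self.conv3d = nn.Conv3d(c, c, kernel_size=(3, 1, 1), padding=(1, 0, 0))
        self.act = nn.GELU()
        self.up = nn.Linear(c, d)                      # restore feature dimension
        self.n = n_slices

    def forward(self, m: torch.Tensor) -> torch.Tensor:
        # m: [B*N, H', W', d] feature maps from the 2D transformer block
        bn, hp, wp, _ = m.shape
        b = bn // self.n
        h = self.down(self.norm(m))                                # [B*N, H', W', c]
        h = h.reshape(b, self.n, hp, wp, -1).permute(0, 4, 1, 2, 3)  # [B, c, N, H', W']
        h = self.act(self.conv3d(h))
        h = h.permute(0, 2, 3, 4, 1).reshape(bn, hp, wp, -1)       # back to 2D layout
        return m + self.up(h)                                      # residual connection
```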
### Adapting mask decoder
The mask decoder within the original SAM comprises only two transformer layers, two transposed convolutional layers, and a single multilayer perceptron. Considering its lightweight architecture, it is feasible to apply full fine-tuning on the complete mask decoder for effective adaptation to medical images. During the patch embedding process of the transformer backbone within SAM's image encoder, each \(16\times 16\) patch is embedded as a feature vector, leading to \(16\times\) down-sampling of the inputs in each spatial dimension. The SAM mask decoder utilizes two consecutive transposed convolutional layers to up-sample the feature maps by 4 times, yet the final predictions generated by SAM remain 4 times lower in resolution than the original shapes. Nevertheless, since many anatomical structures or lesions in medical images are quite small, achieving a higher resolution is often necessary to ensure improved discrimination in the context of medical imaging (Ronneberger et al., 2015).
Fig. 1: The overview of our proposed modality-agnostic SAM adaptation framework (MA-SAM) for medical image segmentation. The image encoder is updated through a parameter-efficient fine-tuning strategy with FacT. The volumetric or temporal information is effectively incorporated via a set of 3D adapters. The mask decoder is fully fine-tuned and modified to recover the prediction resolution. Reshape operations are used to make 3D operations compatible with the 2D backbone.

To address this issue, we explore two approaches to tailor the mask decoder for enhanced suitability in medical image segmentation. For the first approach, termed "progressive up-sampling", we introduce modest adjustments to the SAM decoder by integrating two additional transposed convolutional operations. With each layer up-sampling the feature maps by a factor of 2, the four transposed convolutional layers progressively restore the feature maps to their original input resolution. The second approach, termed "multi-scale fusion", entails creating a design resembling a "U-shaped" network (Ronneberger et al., 2015). This involves connecting the multi-scale feature maps of the image encoder with corresponding stages of the mask decoder using skip connections, a concept akin to that of U-Net. To achieve this, we uniformly divide the image encoder into four stages, establishing connections between the feature maps of each stage and those of the decoder through a series of up-sampling and convolutional operations. In our experiments, we have observed that the gradual up-sampling mechanism yields superior outcomes compared to multi-layer feature aggregation, showing the efficacy and simplicity of the progressive up-sampling approach.
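As a concrete illustration, the sketch below shows how a progressive up-sampling head with four stride-2 transposed convolutions could recover the 16x patch-embedding down-sampling; the intermediate channel widths and activation are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ProgressiveUpsampler(nn.Module):
    """Four stride-2 transposed convolutions (2^4 = 16x) restore predictions
    to the input resolution before the final per-class 1x1 convolution."""

    def __init__(self, in_ch: int = 256, n_classes: int = 13):
        super().__init__()
        chs = [in_ch, 128, 64, 32, 16]      # illustrative channel schedule
        layers = []
        for ci, co in zip(chs[:-1], chs[1:]):
            # each stage doubles the spatial resolution of the feature maps
            layers += [nn.ConvTranspose2d(ci, co, kernel_size=2, stride=2),
                       nn.GELU()]
        self.up = nn.Sequential(*layers)
        self.head = nn.Conv2d(chs[-1], n_classes, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: [B, 256, H/16, W/16] decoder feature maps
        return self.head(self.up(feats))    # [B, n_classes, H, W]

x = torch.randn(1, 256, 32, 32)             # e.g. features of a 512x512 input
print(ProgressiveUpsampler()(x).shape)      # torch.Size([1, 13, 512, 512])
```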
## 4 Experiments
We extensively evaluate our method on four medical image segmentation tasks, covering three types of medical imaging modalities across 10 datasets: abdominal multi-organ segmentation in CT, prostate segmentation in MRI, and surgical scene segmentation in surgical video. We first conduct comparisons with SOTA medical image segmentation methods and SAM fine-tuning methods, and then provide generalization evaluation and in-depth ablation studies to analyze our method.
### Datasets and evaluation metrics
**Task1:** The Beyond the Cranial Vault (BTCV) challenge dataset (Landman et al., 2015) contains 30 CT volumes with manual annotations for 13 abdominal organs. Each CT scan contains 55 to 198 slices, with the slice thickness varying from 2.5 \(mm\) to 5.0 \(mm\). The axial size is \(512\times 512\) for all scans, but with in-plane resolution ranging from \(0.54\times 0.54\)\(mm^{2}\) to \(0.98\times 0.98\)\(mm^{2}\). We use the same data split as Tang et al. (2022), which contains 24 cases for training and 6 cases for testing.
**Task2:** We perform prostate segmentation on 6 MRI data sources (Liu et al., 2020), i.e., Site A to F, that were collected from the NCI-ISBI13 (Bloch et al., 2015), I2CVB (Lemaitre et al., 2015), and PROMISE12 (Litjens et al., 2014) datasets. The case numbers for the six sites are 30, 30, 19, 13, 12, and 12, respectively, each randomly divided into 80% for training and 20% for testing. These MRI scans from different sites were acquired with varying imaging protocols and present heterogeneous data distributions, and were thus commonly used in previous domain generalization studies (Liu et al., 2022).
**Task3:** The 2018 MICCAI Robotic Scene Segmentation Challenge (EndoVis18) dataset (Allan et al., 2020) comprises 19 sequences, captured using the da Vinci X or Xi system. Each sequence contains either 149, 249, or 250 frames at a resolution of \(1280\times 1024\). The dataset encompasses the surgical scene, with 12 classes annotated for various anatomical structures and robotic instruments. The dataset is officially split into 15 sequences for training and 4 sequences for testing.
**Task4:** The Pancreas Tumor Segmentation task within the 2018 MICCAI Medical Segmentation Decathlon Challenge (MSD-Pancreas) dataset (Antonelli et al., 2022) contains 281 CT scans with annotations for pancreas and tumor. Each scan comprises 37 to 751 slices with an axial size of \(512\times 512\). We follow Gong et al. (2023) to utilize only tumor labels in our experiments and employ the same data split as in their work.
In addition, we use the Multi-Modality Abdominal Multi-Organ Segmentation Challenge (AMOS 22) dataset (Ji et al., 2022) for the evaluation of model generalization. This dataset contains abdominal CT and MRI data that were acquired from different patients. Each scan was annotated with 15 organs, but we focus on the 12 organs that overlap with the BTCV dataset. 300 CT scans and 60 MRI scans in the training and validation sets of AMOS 22 are used for our generalization evaluation.
For data pre-processing, the intensity values of each CT scan in BTCV and MSD-Pancreas datasets were truncated within the range of [-200, 250] Hounsfield Units (HU) and [-50, 200] HU respectively. The intensity of each MRI scan was truncated at the 99th percentile. Each CT or MRI scan was normalized to zero mean and unit variance. For surgical video data, each frame was normalized to [0, 1] range. We resized all images to \(512\times 512\) for the axial plane of CT and MRI data, as well as for each frame of surgical video sequences. For model evaluation, we employ the common metrics, i.e., the Dice score and Hausdorff Distance (HD) to assess pixel-wise segmentation accuracy and the segmentation boundary quality respectively. We also report the mean intersection-over-union (mIoU) for the EndoVis18 dataset and the normalized surface distance (NSD) for the MSD-Pancreas dataset to align with previous studies.
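For reference, the following sketch shows how these two metrics could be computed for binary masks; the brute-force surface-distance computation is for illustration only and would be slow for large volumes.

```python
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial.distance import cdist

def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice = 2|P intersect G| / (|P| + |G|) for binary masks."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0

def hausdorff_distance(pred: np.ndarray, gt: np.ndarray) -> float:
    """Symmetric Hausdorff distance between mask surfaces, in voxel units
    (multiply by the scan spacing to obtain mm)."""
    surf = lambda m: np.argwhere(m & ~binary_erosion(m))  # boundary voxels
    p, g = surf(pred.astype(bool)), surf(gt.astype(bool))
    if len(p) == 0 or len(g) == 0:
        return float("inf")
    d = cdist(p, g)                      # pairwise distances between surfaces
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```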
### Implementation details
The fine-tuning process was supervised using a hybrid segmentation loss, which combines the cross-entropy loss and Dice loss as: \(\mathcal{L}_{\text{seg}}=\alpha\mathcal{L}_{\text{ce}}+\beta\mathcal{L}_{\text{Dice}}\). The weighting factors \(\alpha\) and \(\beta\) were set as 0.2 and 0.8 following Zhang and Liu (2023), except for the surgical video data, for which only the Dice loss was utilized. Every five consecutive slices were taken as the network inputs. For data augmentation, we applied a range of transformations including random rotation, flip, erasing, shearing, scaling, translation, posterization, contrast adjustment, brightness modification, and sharpness enhancement. Our model was trained using the Adam optimizer with a batch size of 24. As in Zhang and Liu (2023), we adopted a warmup strategy to increase the learning rate linearly to a specified value and then exponentially decrease it towards the end of training to stabilize the training. We employed ViT_H as the backbone of the image encoder and conducted a total of 400 epochs of training, ensuring that the model converged effectively. Our framework was implemented in PyTorch 2.0 using 8 NVIDIA A100 GPUs.
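A minimal sketch of this hybrid loss in PyTorch is given below; the soft-Dice formulation and the smoothing constant are common choices that we assume here rather than details taken from the paper.

```python
import torch
import torch.nn.functional as F

def hybrid_seg_loss(logits: torch.Tensor, target: torch.Tensor,
                    alpha: float = 0.2, beta: float = 0.8,
                    eps: float = 1e-5) -> torch.Tensor:
    """L_seg = alpha * CE + beta * Dice.

    logits: [B, C, H, W] raw network outputs; target: [B, H, W] class indices.
    """
    ce = F.cross_entropy(logits, target)
    probs = torch.softmax(logits, dim=1)
    one_hot = F.one_hot(target, num_classes=logits.shape[1]) \
                .permute(0, 3, 1, 2).float()
    inter = (probs * one_hot).sum(dim=(0, 2, 3))          # per-class overlap
    denom = probs.sum(dim=(0, 2, 3)) + one_hot.sum(dim=(0, 2, 3))
    dice = 1.0 - ((2 * inter + eps) / (denom + eps)).mean()
    return alpha * ce + beta * dice
```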
### Comparison with SOTA methods
For CT and MRI datasets, we extensively compare our method with various SOTA 3D medical image segmentation methods, including the CNN-based approaches **nnU-Net** (Isensee et al., 2021), a U-Net (Ronneberger et al., 2015) based self-configuring framework showing robust performance in various medical image segmentation competitions, and **3D UX-Net** (Lee et al., 2023), a very recent large-kernel volumetric ConvNet for 3D medical image segmentation, as well as the transformer-based methods **SwinUNETR** (Tang et al., 2022b), a 3D transformer-based model with a hierarchical encoder, and **nnFormer** (Zhou et al., 2023a), a model combining local and global volume-based self-attention mechanisms. We also compare our method with the most recent SAM adaptation methods **SAMed_h** (Zhang and Liu, 2023), an automatic 2D medical image segmentation model for organ segmentation, and **3DSAM-adapter** (Gong et al., 2023), a promptable 3D medical image segmentation model for tumor segmentation. For surgical video data, we compare our method with SOTA surgical scene segmentation methods: **NCT** (Shvets et al., 2018), **UNC** (Ren et al., 2020), and **OTH** (Chen et al., 2018), the top-three approaches reported in the challenge; **Noisy-LSTM** (Wang et al., 2021), which uses ConvLSTM to learn temporal cues; **STswinCL** (Jin et al., 2022), a transformer-based model capturing intra- and inter-video relations; and **nnU-Net**. For all comparison experiments, the dataset splits remain consistent across all the methods.
Dice [%] ↑

| Methods | Spleen | R.Kd | L.Kd | GB | Eso. | Liver | Stomach | Aorta | IVC | Veins | Pancreas | AG | Average |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| nnU-Net (Isensee et al., 2021) | **97.0** | **95.3** | 95.3 | 63.5 | 77.5 | **97.4** | 89.1 | 90.1 | **88.5** | 79.0 | **87.1** | **75.2** | 86.3 |
| 3D UX-Net (Lee et al., 2023) | 94.6 | 94.2 | 94.3 | 59.3 | 72.2 | 96.4 | 73.4 | 87.2 | 84.9 | 72.2 | 80.9 | 67.1 | 81.4 |
| SwinUNETR (Tang et al., 2022b) | 95.6 | 94.2 | 94.3 | 63.6 | 75.5 | 96.6 | 79.2 | 89.9 | 83.7 | 75.0 | 82.2 | 67.3 | 83.1 |
| nnFormer (Zhou et al., 2023a) | 93.5 | 94.9 | 95.0 | 64.1 | 79.5 | 96.8 | 90.1 | 89.7 | 85.9 | 77.8 | 85.6 | 73.9 | 85.6 |
| SAMed_h (Zhang and Liu, 2023) | 95.3 | 92.1 | 92.9 | 62.1 | 75.3 | 96.4 | 90.2 | 87.6 | 79.8 | 74.2 | 77.9 | 61.0 | 82.1 |
| MA-SAM (Ours) | 96.7 | 95.1 | **95.4** | **68.2** | **82.1** | 96.9 | **92.8** | **91.1** | 87.5 | **79.8** | 86.6 | 73.9 | **87.2** |

HD [mm] ↓

| Methods | Spleen | R.Kd | L.Kd | GB | Eso. | Liver | Stomach | Aorta | IVC | Veins | Pancreas | AG | Average |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| nnU-Net (Isensee et al., 2021) | 1.07 | **1.19** | 1.19 | 7.49 | 8.56 | **1.14** | 4.84 | 14.11 | **2.87** | 5.67 | **2.31** | **2.23** | 4.39 |
| 3D UX-Net (Lee et al., 2023) | 3.17 | 1.59 | 1.26 | 4.53 | 13.92 | 1.75 | 19.72 | 12.53 | 3.47 | 9.99 | 3.70 | 4.11 | 6.68 |
| SwinUNETR (Tang et al., 2022b) | 1.21 | 1.41 | 1.37 | 2.25 | 5.82 | 1.70 | 13.75 | 5.92 | 4.46 | 7.58 | 3.53 | 3.40 | 4.37 |
| nnFormer (Zhou et al., 2023a) | 78.03 | 1.41 | 1.43 | 3.00 | 4.92 | 1.38 | 4.24 | 7.53 | 4.02 | 6.53 | 2.96 | 2.76 | 9.95 |
| SAMed_h (Zhang and Liu, 2023) | 1.37 | 33.53 | 1.84 | 6.27 | 4.84 | 1.77 | 7.49 | **4.97** | 7.28 | 6.87 | 10.00 | 6.49 | 7.73 |
| MA-SAM (Ours) | **1.00** | **1.19** | **1.07** | **1.59** | **3.77** | 1.36 | **3.87** | 5.29 | 3.12 | **3.25** | 3.93 | 2.57 | **2.67** |

Table 1: Comparison of abdominal multi-organ segmentation results generated from our MA-SAM method and other state-of-the-art methods on the BTCV dataset. SAMed_h: ViT_H version of SAMed. Best values in bold.
Dice [%] ↑

| Methods | Site A | Site B | Site C | Site D | Site E | Site F | Average |
|---|---|---|---|---|---|---|---|
| nnU-Net (Isensee et al., 2021) | 93.3 | 89.2 | 89.5 | 86.5 | 91.0 | 90.2 | 90.0 |
| 3D UX-Net (Lee et al., 2023) | 91.8 | 86.0 | 88.3 | 70.4 | 85.9 | 88.4 | 85.1 |
| SwinUNETR (Tang et al., 2022b) | 88.7 | 88.0 | 88.4 | 71.5 | 84.7 | 84.6 | 84.3 |
| nnFormer (Zhou et al., 2023a) | 93.6 | 90.1 | 89.5 | 86.8 | 91.9 | 90.6 | 90.4 |
| SAMed_h (Zhang and Liu, 2023) | 94.6 | 89.5 | 88.6 | 87.9 | **92.7** | 91.3 | 90.8 |
| MA-SAM (Ours) | **95.3** | **92.7** | **90.4** | **91.3** | **92.7** | **93.1** | **92.6** |

HD [mm] ↓

| Methods | Site A | Site B | Site C | Site D | Site E | Site F | Average |
|---|---|---|---|---|---|---|---|
| nnU-Net (Isensee et al., 2021) | 1.74 | 2.34 | 3.61 | 2.98 | 2.74 | 1.80 | 2.54 |
| 3D UX-Net (Lee et al., 2023) | 1.95 | 3.20 | 4.37 | 9.61 | 5.07 | 2.67 | 4.48 |
| SwinUNETR (Tang et al., 2022b) | 3.27 | 3.02 | 4.37 | 8.59 | 5.24 | 2.82 | 4.55 |
| nnFormer (Zhou et al., 2023a) | 1.73 | 2.11 | 3.54 | 2.93 | 2.75 | 2.08 | 2.52 |
| SAMed_h (Zhang and Liu, 2023) | 1.14 | 3.90 | **3.10** | 3.00 | 2.61 | 1.67 | 2.57 |
| MA-SAM (Ours) | **1.00** | **1.54** | 3.29 | **1.80** | **2.56** | **1.47** | **1.94** |

Table 2: Comparison of prostate segmentation results generated from our MA-SAM method and other state-of-the-art methods on six prostate MRI datasets. SAMed_h: ViT_H version of SAMed. Best values in bold.
Fig. 2: Qualitative visualization of segmentation results generated from our MA-SAM method and other state-of-the-art methods on BTCV dataset. Abdominal organs are denoted in different colors as shown in the corresponding color bar.
Table 1 to Table 4 present comparative results for the four different tasks: abdominal multi-organ segmentation in CT data, prostate MRI segmentation across 6 sites, scene segmentation in surgical video, and tumor segmentation in CT data, respectively. When prompts are not specified, all methods generate results automatically without using any prompt. With our dedicatedly designed fine-tuning strategy for SAM, our method consistently and significantly outperforms other comparison approaches across all four tasks. In terms of fully automatic segmentation for the first three tasks, our method improves the Dice score by 0.9%, 2.6%, and 5% compared to the second-best performing approach, respectively. Notably, nnU-Net proves to be a strong competitor, showing robust segmentation performance across CT and MRI datasets. However, in surgical scene segmentation, nnU-Net obtains lower results compared to methods specifically tailored for processing surgical videos. Our method demonstrates strong performance across both volumetric and video medical data, indicating the potential of unifying the network architecture in these two domains of medical imaging, where previous methods were developed separately. When comparing with the pure 2D SAM fine-tuning method SAMed_h, which employs the same network backbone as ours, our method also achieves significantly better results, demonstrating the benefits of incorporating volumetric or temporal information for 3D medical image segmentation. The visual comparison results are presented in Fig. 2 to Fig. 4.
segmentation. This observation might indicate that SAM fine-tuning might be less effective for objects with ill-defined margins and small sizes, as these characteristics differ from the natural images on which SAM was originally trained.
### Generalization evaluation
One of the most appealing advantages of foundation models lies in their impressive generalization capability. To investigate the generalization of our models adapted from SAM, we first compare the zero-shot and few-shot capability of nnU-Net and our method by applying models trained on the BTCV CT scans to the AMOS22 CT and MRI scans. In Fig. 6, "nnU-Net 0 shot" and "MA-SAM 0 shot" denote that the models trained on BTCV data are directly used to perform inference on AMOS22 images, while "nnU-Net 5 shot" and "MA-SAM 5 shot" denote the models further fine-tuned with 5 additional training cases from the AMOS22 dataset. From the results, we can see that our method exhibits better zero-shot and few-shot segmentation performance on AMOS22 CT and MRI images, demonstrating higher generalization capability. Especially for MRI images, nnU-Net encounters complete failure in the zero-shot context, obtaining only a 10.9% Dice score, while our model still retains a 60.4% Dice score. In the five-shot context, our method also shows a 9% improvement over nnU-Net, further underscoring its generalization advantages.
We also compare the model generalization on prostate MRI segmentation. In Table 5, the results of nnU-Net and our models are obtained by directly applying the models fine-tuned on Site A to make predictions for Site B to F. We include two recent SOTA test-time domain generalization methods in the comparison, i.e., TTST (Karani et al., 2021) and TASD (Liu et al., 2022), which employ additional domain generalization techniques when performing predictions on each specific site. The results demonstrate that our method not only outperforms nnU-Net by a large margin when generalizing to different sites, but also achieves superior performance to SOTA domain generalization approaches. All these results on the AMOS22 dataset and different prostate MRI datasets underscore the impressive generalization capability of our method, which is an important characteristic for critical medical applications.
importance of incorporating the third-dimensional information for medical image segmentation.
#### 4.2.2 Effect of mask decoder design
We compare the performance of different mask decoder designs, including the original SAM mask decoder, the progressive up-sampling strategy, and the multi-scale fusion strategy. Table 6 shows that the straightforward progressive up-sampling strategy yields superior results, validating its simplicity and effectiveness. These results demonstrate the importance of recovering prediction resolution for medical images, which often contain small objects. However, no significant improvements were observed with the multi-scale fusion strategy. This might be because of the extensive modifications it introduces to the original SAM decoder, resulting in less effective utilization of the pre-trained SAM weights.
#### 4.2.3 Influence of network backbone
We conduct experiments with different network backbones, i.e., ViT_B, ViT_L, and ViT_H, to assess their impact on the performance of our method. As can be observed in Table 7, there is a noticeable improvement in Dice performance as the model size increases from ViT_B to ViT_H, signifying the advantage of a larger model size for overall performance.
#### 4.2.4 Choice of location for 3D adapters
We perform ablation experiments to investigate the placement of 3D adapters within our model. Specifically, we compare the performance when incorporating a 3D adapter in one of three locations: before the multi-head self-attention block (MHSA), after MHSA, or in both of these positions. As demonstrated in Table 8, the configuration with two 3D adapters positioned both before and after MHSA yields superior performance for our final model.
#### 4.2.5 Choice of rank
We investigate how the model's performance changes with varying decomposition rank \(r\), considering rank values from the set {4, 8, 16, 32, 64}. As expected, Table 10 shows that with an increase in rank, there is a corresponding improvement in average Dice performance, but the performance tends to saturate when \(r\geq 32\). We thus set \(r=32\) in our experiments to seek a balance between performance gains and the number of parameters introduced.
## 5 Discussion
Foundation models, like the Segment Anything Model (SAM), have revolutionized intelligent model development by offering robust generalization and few-shot learning capabilities. SAM has demonstrated impressive zero-shot performance for natural image tasks. However, applying SAM directly to medical image segmentation has proven ineffective due to the substantial domain differences. To address this problem, in this work, we propose a parameter-efficient fine-tuning method to adapt SAM to various medical imaging modalities. Our method leverages FacT to efficiently update a small portion of weight increments and injects a set of designed 3D adapters to extract crucial volumetric or temporal knowledge from medical images during fine-tuning. The general applicability and effectiveness of our method have been validated on four medical image segmentation tasks across three imaging modalities. Our model also demonstrates outstanding generalization capability, as well as a significant advantage in particularly challenging tumor segmentation when prompts are used.
One significant motivation for adapting SAM to medical images is its pre-training on a vast and diverse dataset, which is difficult to achieve in the field of medical imaging. This makes SAM's adaptation generally applicable to various medical imaging modalities. In medical applications, there are recent efforts trying to pre-train modality-specific foundation models. However, these models are often constrained to a specific medical imaging modality and challenging to extend to others. For example, models pre-trained with chest x-ray data may face difficulties when applied to MRI data. By leveraging SAM's pre-trained weights, we are able to train a large-scale segmentation network, such as ViT_H, for medical image segmentation, even when limited data, such as just 5 imaging scans, are available. Our experiments have demonstrated the benefits of increasing the model size, raising the intriguing question of how performance evolves with further increases in model size. Can we achieve improved accuracy or enhanced generalization with larger models for medical images? Exploring these possibilities holds great interest.
Using promptable segmentation is less meaningful for tasks that can already achieve satisfactory results with SOTA medical image segmentation methods. Prompts prove particularly beneficial and valuable when dealing with challenging tumor segmentation tasks, as demonstrated in our experiments as well
| Decoder design | Dice [%] |
|---|---|
| SAM mask decoder | 84.4 |
| Progressive up-sampling | 85.1 |
| Multi-scale fusion | 84.5 |

Table 6: Comparison of model performance with different mask decoder designs.
| Backbone | Dice [%] |
|---|---|
| ViT_B | 82.5 |
| ViT_L | 84.1 |
| ViT_H | 85.1 |

Table 7: Comparison of model performance with different network backbones.
| | r = 4 | r = 8 | r = 16 | r = 32 | r = 64 |
|---|---|---|---|---|---|
| MA-SAM | 81.4 | 82.7 | 84.6 | 85.1 | 85.3 |

Table 10: The change of Dice score for our method with different ranks.
| SAM weights | Full FT | FacT | 3D Adapters | Dice [%] ↑ |
|---|---|---|---|---|
| ∘ | • | ∘ | ∘ | 72.2 |
| • | ∘ | ∘ | ∘ | 70.4 |
| • | • | ∘ | ∘ | 85.3 |
| • | ∘ | • | ∘ | 85.1 |
| • | ∘ | ∘ | • | 86.4 |
| • | ∘ | • | • | 87.2 |

Table 9: Ablation on each key component in our method. The markers • and ∘ denote whether a specific component is used or not.
Table 8: Comparison of model performance with different positions of 3D adapters.
as other SAM fine-tuning works. However, crafting effective prompts demands a substantial amount of effort. As shown in Table 4, the performance of promptable segmentation drops as the quality of prompts declines. Given the challenges associated with manual prompt creation, there is considerable room for future exploration in automating this process. It would be interesting and valuable to investigate methods for generating suitable prompts automatically or to study how to train an accurate segmentation model with noisy or imperfect prompts. This would enhance the practicality of promptable segmentation in scenarios where manual prompt creation is challenging.
## 6 Conclusion
We present an effective SAM adaptation framework that is general and can be applied to diverse medical image segmentation tasks across different modalities. Our method roots in the parameter-efficient fine-tuning strategy and successfully incorporates the volumetric or temporal information of medical images during fine-tuning. Without using any prompt, our method with automatic segmentation outperforms various SOTA 3D medical image segmentation methods by a large margin. Our model has demonstrated outstanding generalization capability, which is crucial for the successful deployment of intelligent models across medical datasets. We have also shown the substantial advantage of the prompt mode, which is particularly valuable in tackling the challenging tumor segmentation task. Our method holds significant promise as a general segmentation framework that can be applied to various medical imaging modalities for both fully automatic and promptable segmentation.
|
2309.09584 | Integrating Vertidrome Management Tasks into U-space | U-space as defined by the European Commission is a set of new services
relying on a high level of digitalization and automation of functions and
specific procedures, designed to provide safe, efficient and secure access to
airspace for large numbers of unmanned aircraft, operating automatically and
even beyond visual line of sight. This kind of concepts of operations (ConOps)
of airspace integration for drones and air taxis are also called UTM (Unmanned
aircraft system Traffic Management) systems, being U-space the UTM ConOps
agreed for Europe. U-space services are under development but commercially not
available yet. For demonstration purposes, in the project HorizonUAM, a central
U-space cloud service is simulated through a local messaging server using the
protocol MQTT (Message Queuing Telemetry Transport). A prototypical vertidrome
management tool was created to demonstrate the scheduling and sequencing of air
taxi flights. The vertidrome manager is fully integrated within U-space and
receives real-time information on flight plans, including requests for start
and landing and emergency notifications. Additional information coming from
other U-space services (e.g. weather information) can be accessed on request.
The integration was demonstrated in a scaled flight test environment with
multicopters (<15 kg) representing passenger carrying air taxis. | Bianca I. Schuchardt, Aditya Devta, Andreas Volkert | 2023-09-18T08:47:51Z | http://arxiv.org/abs/2309.09584v1 | # Integrating Vertidrome Management Tasks into U-Space
###### Abstract
U-space as defined by the European Commission is a set of new services relying on a high level of digitalization and automation of functions and specific procedures, designed to provide safe, efficient and secure access to airspace for large numbers of unmanned aircraft, operating automatically and even beyond visual line of sight. This kind of concepts of operations (ConOps) of airspace integration for drones and air taxis are also called UTM (Unmanned aircraft system Traffic Management) systems, being U-space the UTM ConOps agreed for Europe. U-space services are under development but commercially not available yet. For demonstration purposes, in the project HorizonUAM, a central U-space cloud service is simulated through a local messaging server using the protocol MQTT (Message Queuing Telemetry Transport). A prototypical vertidrome management tool was created to demonstrate the scheduling and sequencing of air taxi flights. The vertidrome manager is fully integrated within U-space and receives real-time information on flight plans, including requests for start and landing and emergency notifications. Additional information coming from other U-space services (e.g. weather information) can be accessed on request. The integration was demonstrated in a scaled flight test environment with multicopters (<15 kg) representing passenger carrying air taxis.
Urban air mobility, U-space, air taxi, vertidrome, vertiport
## 1 Introduction
"By 2030 drones and their required eco-system will have become an accepted part of the life of EU citizens" as stated by the European Commission in the European Drone Strategy 2.0 [1]. These new airspace users could be smaller drones used for various missions such as for surveillance or cargo delivery. Also air taxis (first piloted, later remotely piloted and finally fully autonomous) could enter the airspace for passenger transport in urban environments. The project HorizonUAM [2] brought together researchers from various disciplines at the German Aerospace Center (DLR) to investigate this Urban Air Mobility (UAM) eco-system holistically with the focus on aerial passenger transport in urban environments. More than 200 cities worldwide have a high potential for the implementation of UAM by the year 2050 according to demand estimations conducted within HorizonUAM [3]. In urban environments, air taxis will most probably not be taking off and landing from conventional runways at existing airports due to capacity limits [4]. It is envisioned that a new type of infrastructure, so called vertiformes or vertiports will be erected [5, 6]. Effective management of these vertiformes is key for catering high-density air taxi traffic.
## 2 U-Space for Air Traffic Management
The rapid growth of Unmanned Aircraft Systems (UAS) has introduced new challenges to traditional Air Traffic Management (ATM) systems. To address these challenges, the concept of U-space has emerged, providing a framework for the safe and efficient integration of UAS into airspace systems [7]. While U-space is the European approach, there are other comparable initiatives globally for UAS traffic management (UTM). The FAA UTM ConOps [8] stands out here. It has many parallels to the U-space approach but remains rather high-level in technical aspects, such as the detailed description of specific UTM services, as elaborated in [9].
This section aims to explore the implementation of U-space for air traffic management based on the project CORUS-XUAM (Concept of operations for European UTM systems - Extension for urban air mobility [10]), focusing on its key components, benefits, and potential impact on the aviation industry.
### _U-space: A Conceptual Framework_
U-space refers to a set of technologies, procedures, and regulations that enable safe and efficient operations of drones in low-altitude airspace [11]. It encompasses various aspects of UAS operations, including registration, flight planning, communication, surveillance, and conflict resolution [12]. The concept aims to ensure the integration of UAS into the existing aviation ecosystem, promoting safety, security, and scalability [13]. The key components of U-space are explained in the next section.
### _Key Components of U-space for Air Traffic Management_
The implementation of U-space involves several key components. They are as follows.
#### 2.2.1 UAS Registration and Tracking
UAS operators are required to register their aircraft and obtain a unique identification number [14]. This registration facilitates accountability and traceability, enabling authorities to identify operators and address safety concerns effectively. Moreover, according to EU regulation 2021/664 [14, 15], all aircraft flying in the U-space airspace shall regularly communicate their current position to U-space. As a result, this will form the basis for monitoring, traffic information, tactical conflict prediction and surveillance data exchange [16].
#### 2.2.2 Flight Planning and Management
U-space provides UAS operators with tools for flight planning, including access to airspace information, route optimization, and geofencing capabilities [11]. Through automated systems, operators can submit flight plans and receive real-time information on airspace restrictions and potential conflicts.
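As a simple illustration of the geofencing aspect, the following sketch implements a standard ray-casting point-in-polygon test that a flight-planning service could use to flag waypoints falling inside a restricted area; the coordinates, zone, and function names are invented for this example and are not part of any U-space specification.

```python
from typing import List, Tuple

Point = Tuple[float, float]  # (longitude, latitude), illustrative

def inside_geofence(p: Point, polygon: List[Point]) -> bool:
    """Ray-casting point-in-polygon test for a geofenced no-fly zone."""
    x, y = p
    inside = False
    for (x1, y1), (x2, y2) in zip(polygon, polygon[1:] + polygon[:1]):
        # does the horizontal ray cast from p cross this polygon edge?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

no_fly_zone = [(8.55, 50.03), (8.60, 50.03), (8.60, 50.06), (8.55, 50.06)]
print(inside_geofence((8.57, 50.04), no_fly_zone))  # True: waypoint violates zone
```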
#### 2.2.3 Communication and Surveillance
U-space employs a range of communication technologies, including satellite-based systems, to ensure reliable and safe data exchange between UAS, UTM and ground control stations. Surveillance mechanisms, such as ADS-B (Automatic Dependent Surveillance-Broadcast), enable real-time tracking and monitoring of UAS positions, enhancing situational awareness [16].
#### 2.2.4 Conflict Prediction and Resolution
U-space incorporates four methods, namely strategic conflict prediction, strategic conflict resolution, tactical conflict prediction and tactical conflict resolution [16], for conflict detection and resolution. Strategic deconfliction is achieved by the flight authorisation service at the time of approving a submitted flight plan, which generally happens before the flight [13]. Tactical deconfliction, on the other hand, is performed during the flight. In tactical conflict prediction, alerts are provided to the pilots and the U-space service providers based on current motion and possible intent. Tactical conflict resolution utilises sensors and onboard collision-avoidance algorithms to prevent potential conflicts between UAS and other aircraft [11]. These systems support the development of cooperative and non-cooperative collision avoidance capabilities.
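A minimal sketch of tactical conflict prediction under a straight-line extrapolation assumption is shown below; operational U-space services would additionally use intent information and vertical separation, which are omitted here, and all thresholds are illustrative.

```python
from dataclasses import dataclass
import math

@dataclass
class Track:
    x: float; y: float      # position in metres (local frame, illustrative)
    vx: float; vy: float    # velocity in m/s

def predict_conflict(a: Track, b: Track, horizon_s: float = 60.0,
                     min_sep_m: float = 50.0, dt: float = 1.0) -> bool:
    """Extrapolate both tracks along straight lines and alert if separation
    falls below the threshold within the look-ahead horizon."""
    t = 0.0
    while t <= horizon_s:
        dx = (a.x + a.vx * t) - (b.x + b.vx * t)
        dy = (a.y + a.vy * t) - (b.y + b.vy * t)
        if math.hypot(dx, dy) < min_sep_m:
            return True
        t += dt
    return False

drone = Track(0.0, 0.0, 10.0, 0.0)
air_taxi = Track(500.0, 40.0, -10.0, 0.0)
print(predict_conflict(drone, air_taxi))  # True: closest approach ~40 m at t = 25 s
```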
### _Benefits of U-space Implementation_
The implementation of U-space offers several benefits to the aviation industry:
* Enhanced Safety and Risk Mitigation: U-space provides a structured framework for UAS operations, ensuring safety through comprehensive registration, flight planning, and surveillance mechanisms [13]. By reducing the risk of mid-air collisions and unauthorized UAS activities, U-space enhances overall airspace safety.
* Increased Efficiency and Scalability: U-space streamlines UAS operations, optimizing flight routes and minimizing delays. The automation of flight planning and management processes improves operational efficiency, allowing for the scalable integration of a growing number of UAS into the airspace [16].
* Facilitation of New Applications and Services: U-space creates opportunities for the development of new UAS applications and services. From drone deliveries to aerial inspections, the implementation of U-space encourages innovation and unlocks the full potential of UAS technology [11].
## 3 Vertidrome Management
The rapid urbanization and increasing population density in cities around the world have led to ever-growing challenges in transportation infrastructure [17]. Traditional ground-based transportation systems are often plagued by congestion, leading to prolonged commute times, increased carbon emissions, and decreased overall quality of life. To address these issues and embrace the future of transportation, the concept of vertidromes has emerged as a promising solution for UAM. Vertidromes are structures designed to provide spaces for the take-off, landing and maintenance of electric Vertical Take-off and Landing (eVTOL) aircraft [18]. As such, they will always provide at least one Touch-down and Lift-off (TLOF) surface, usually a pad, which may additionally be used as a parking space. Figure 1 shows an exemplary layout of a vertidrome and explains its surface features.
The pads are enveloped by the zones for Final Approach and Take-off (FATO), which are extended areas around the pad within which the final approach and initial climb need to be conducted. To provide an additional layer of safety, these are surrounded by the FATO Safety Area (FSA). TLOF surfaces and parking positions may be connected by taxiways (if the TLOF surface does not also act as a parking space), which have a width that depends on the size of the vehicles expected to operate on the vertiport as well as the type of ground operation (ground taxi vs. hover taxi). As vertidromes emerge as critical infrastructure for UAM, efficient and effective vertidrome management becomes paramount to ensure the safe and seamless operation of eVTOL aircraft within urban environments. Vertidrome management involves efficient management of take-off and landing procedures, passenger flow, vehicle charging, and maintenance schedules to minimize waiting times and ensure optimal vehicle utilization [19]. Implementing intelligent algorithms and data-driven analytics for vehicle routing and scheduling can optimize the utilization of vertidrome resources and reduce operational inefficiencies. The information flow for a conceptual Vertidrome Air Traffic Management System (VATMS) is described in the next section.
### _Information Flow_
The exchange of data for a conceptual VATMS, developed in the HorizonUAM project, is illustrated in Figure 2.
Entities of the VATMS are represented with boxes; information supplied is contained in hexagons, with arrows pointing in the direction of information flow. The inputs to the system are depicted on the top and left of Figure 2, while the outputs of the system are illustrated on the bottom and right. Additionally, the entities belonging to the area of the vertidrome are represented on the top and bottom of the figure, while the U-space cloud services are shown on the left and right.
#### 3.1.1 Inputs to the VATMS
On the input side, the human key component, the Vertidrome System Operator (VSO), who is responsible for overseeing the VATMS, configures operational constraints, such as wind speed limits, as well as (within a prescribed range) thresholds for notifications and alerts of the individual vertiport, so as to dynamically adjust the operator's workload (e.g. suppress notifications of low urgency in emergency situations or during peak hours). The VSO is also authorized to override decisions by the system (e.g. block pads, reclassify a hazard), which also includes decision-making in situations where the automation fails to perform the critical tasks.

In Figure 2, the right side of the VSO depicts the avian and drone radar, which provide the location, size and possible trajectories of bird flocks as well as uncooperative vehicles, such as small drones for recreational flying. Those uncooperative airspace users are a risk in the vicinity of airports or vertidromes, but also for UAM in general due to the relatively low altitude of operation [20]. Additionally, the Hazard Detection system is illustrated, which relies on a set of sensors, e.g. cameras and image recognition software, to supply data, more specifically the location, size and movement, of foreign objects and other (non-environmental) hazards, like ground personnel in FATO vicinity. The left side of the VSO illustrates the local surveillance system, which provides the locations of drones within the vertidrome area of operations in real time. Additionally, a systems monitoring service tracks sensor integrity and infrastructure health, in this case the serviceability of pads and other equipment directly affecting the capability of the vertidrome to accept arrivals and departures. Information from this service would be shared with and possibly supplemented by ground (maintenance) personnel at the vertidrome itself.

The system also receives input from the Ground Flow Management Unit (GFMU), in the form of a pad preference for an arriving or departing vehicle, derived from assigned parking and taxi routes. Arriving and departing aircraft will also transmit their status information to the VATMS and may, in the event of an emergency or loss of connection to the U-space services, establish direct communications with the VATMS and the VSO. Finally, just like at conventional airports, local weather sensors, but also (if necessary) specialized sensors able to detect surrounding urban micro-weather, provide current meteorological data, while forecasts and weather for the greater area are provided by a U-space meteorological service.

The U-space services communicate with the VATMS through the U-space Cloud Services. The main inputs from the U-space side are provided by the Fleet Managers of the respective vehicles, which are in charge of handling each individual flight. The initial input of the Fleet Manager for a flight is the request for a departure and an arrival slot at the respective vertidromes. During the flight itself (activation of the flight plan until end-of-flight reporting), the Fleet Manager provides the flight plan, possible amendments, as well as flight and aircraft status to the vertidrome. The U-space Surveillance Service tracks the flight and supplies information about its location, while the Adherence Monitor (AM) verifies the flight's adherence to its flight plan in both time and space. Should the AM detect a discrepancy, e.g. a delay which makes a reserved slot unachievable, it notifies the VATMS and the Fleet Manager. The latter
Fig. 1: Concept of a vertidrome [5]
would then have to renegotiate a new slot for the affected vehicle. The remaining input is the U-space Emergency Management Service (EMS). The EMS can, similar to the Fleet Manager, make reservations at a vertiport through the VATMS. However, due to the nature of flights handled by the EMS, these have the highest priority and have to be accommodated by the VATMS, which is why the requests by the EMS are called 'demands' in Figure 2.
#### 3.1.2 Outputs of the VATMS
The VATMS generates a wide variety of outputs. The EMS, AM and Fleet Manager receive these outputs through the U-space cloud. The EMS receives confirmation for pad reservations as well as information on available resources and infrastructure that may be utilized to aid a flight in distress. The positions of aircraft are transmitted to the Adherence Monitor, which primarily serves to confirm the punctuality of departing flights and their status regarding entry into their flight corridor. The Fleet Manager receives the confirmed slot times for the planned operation, information on pad availability and serviceability, as well as information on serviceable resources and infrastructure to assist the aircraft on its approach or departure. The VATMS also reports observed take-off and arrival times, local weather, risks and hazards to the Fleet Manager and submits advisories and alerts about the aforementioned observations.

Where the environment creates the necessity for better response times, the hazard and risk information, including response advisories and alerts, may be directly transmitted to the vehicle. This, however, is supposed to be an optional feed, as indicated by the dotted line. Additionally, the vehicle receives emergency communications (dashed line) in the event the vehicle loses its U-space connection on approach or departure. This would include confirmation that a certain pad is reserved for landing, the previously mentioned optional services, as well as establishing voice communication with the VSO in the case of a human pilot. For clarification, the flow of information and clearances from the Fleet Manager to the vehicle is also illustrated. Under normal conditions, the VATMS does not provide these to the vehicle.

After an arrival or departure slot has been confirmed or changed, the information is also sent to the Ground Flow Management Unit (GFMU) for planning purposes. Finally, the VSO receives an operational overview and forecast, which includes current and future arrivals and departures, weather information and pad usability. The VSO is also supplied with information on all (potential) risks and hazards and may be prompted to classify a hazard when the system receives inconclusive or conflicting inputs. The VSO also receives information on sensor integrity and accuracy and overall infrastructure health and availability, which in Figure 2 is summarized as infrastructure health. After explaining vertidrome management and the information exchange of a conceptual VATMS, the next section addresses the prototypical implementation of such a system for a flight demonstration in a scaled urban scenario in the research project HorizonUAM.
## 4 Prototypical Implementation
This section describes the prototypical implementation of the vertidrome manager, the communication protocol and the Fleet Manager.
### _Vertidrome Manager_
Vertidrome Manager is a software prototype in the VATMS ecosystem which is responsible for scheduling, controlling and managing air traffic at a vertidrome. This development is a part of the HorizonUAM project, which researches ways for an efficient, safe and sustainable initial implementation of the U-space SESAR joint undertaking [2]. An overview of the software, which has been fully developed and tested in MATLAB, the data processing and workflows, the interface and individual functions, as well as customisation and manual input options are presented in this section. Vertidrome Manager comprises four major subsystems: the Weather Data Processing Unit, the Adherence Monitoring Unit, the Risk Management Unit and finally the Pad Management and Scheduling Unit. The data flow between these subsystems is shown in Figure 3. The individual subsystems of the software are explained in the subsequent sections.
#### 4.1.1 Weather Data Processing Unit
The Weather Data Processing Unit receives the local weather data from sensors on the vertidrome, as well as the area weather and the forecast for the vertidrome from the Weather Information Service. From this data, the system derives a plan for the pad usability for the Pad Management and Scheduling Unit, including which pads may be used for arrival, departure or potentially both operations, but also whether local weather phenomena require slot times to be extended to allow for safe operations.
#### 4.1.2 Adherence Monitoring Unit
The Adherence Monitoring Unit receives two kinds of input. First, the VSO defines the adherence criteria, which may include tolerances in space and time, as well as paths that should be followed or areas that shall be avoided for any reason. To be able to ensure that all vehicles comply with these regulations, the system receives high-frequency surveillance data from the appropriate local system. The reason for this data coming from a local source is the requirement for reduced lag and more frequent updates. Within the vertidrome control volume, all aspects of the 4D trajectory of a vehicle are monitored, and in the event of deviations, the Risk Management Unit is supplied with the ID and type of deviation of the respective vehicle. Additionally, the VSO is also alerted about the deviating vehicle to enable a quick response and to re-establish compliance/adherence. A minimal sketch of such a 4D adherence check is given below.
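The following sketch illustrates one way such an adherence check could look: an observed 4D fix is compared against the planned trajectory, and a deviation record for the Risk Management Unit is produced when spatial or temporal tolerances are exceeded. The data layout, tolerance values and function names are illustrative assumptions, not part of the actual MATLAB implementation.

```python
import numpy as np

def check_adherence(planned, observed, pos_tol=5.0, time_tol=10.0):
    """Compare an observed 4D fix against a planned 4D trajectory.

    planned  : array of shape (n, 4) with rows (t, x, y, z)
    observed : a single fix (t, x, y, z)
    Returns None if compliant, otherwise a deviation record for the
    Risk Management Unit (tolerance values are illustrative).
    """
    planned = np.asarray(planned, float)
    t, xyz = observed[0], np.asarray(observed[1:], float)
    # Planned position at the observed time (linear interpolation).
    ref = np.array([np.interp(t, planned[:, 0], planned[:, k])
                    for k in (1, 2, 3)])
    pos_err = float(np.linalg.norm(xyz - ref))
    if pos_err > pos_tol:
        return {"type": "lateral/vertical", "error_m": pos_err, "t": t}
    # Temporal adherence: how early or late is the vehicle along track?
    t_nearest = planned[np.argmin(np.linalg.norm(planned[:, 1:] - xyz,
                                                 axis=1)), 0]
    if abs(t - t_nearest) > time_tol:
        return {"type": "temporal", "error_s": abs(t - t_nearest), "t": t}
    return None

plan = [(0, 0, 0, 30), (60, 600, 0, 30), (120, 600, 600, 30)]
print(check_adherence(plan, (30, 310, 0, 30)))  # ~10 m off track
```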
#### 4.1.3 Risk Management Unit
The Risk Management Unit is the central system of the VATMS to detect, classify and track any threats to the operation, to alert vehicles of those threats, advise on potential responses and employ mitigation strategies. The Risk Management Unit receives the information about
the serviceability of the landing pads, taxiways, landing assistance systems and any infrastructure which may directly or indirectly affect the capability of the vertidrome to safely handle a flight. In case of a non-nominal situation, the Risk Management Unit will alert the VSO and notify the U-space stakeholders. Data on foreign objects, personnel within the FATO area or in close proximity, and other hazards is also supplied to the unit by the Hazard Detection System. Eventually, the Risk Management Unit processes all the previously mentioned inputs into a set of operational constraints for the Pad Management and Scheduling Unit.
#### 4.1.4 Pad Management and Scheduling Unit
The Pad Management and Scheduling Unit performs the task of assigning FATOs/pads to aircraft requesting operation in and out of the respective vertidrome. Based on the operational constraints imposed by the Risk Management Unit and the Weather Data Processing Unit, the unit confirms a slot for an aircraft requesting landing or departure through a flight plan; a minimal sketch of such a slot confirmation is given below. All information on planned and in-progress flights in and out of the vertidrome, as well as information on the availability and serviceability of pads, is compiled into an operational overview and forecast for the VSO. The next section presents the user interface of the Vertidrome Manager.
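As an illustration of the slot-confirmation logic described above, the sketch below greedily books the first usable, non-conflicting pad, preferring the pad suggested by the GFMU. The data structures, pad names and greedy strategy are assumptions for illustration; the actual unit operates on the constraints supplied by the Risk Management and Weather Data Processing Units.

```python
from dataclasses import dataclass, field

@dataclass
class Pad:
    name: str
    usable: bool = True            # from Risk Mgmt / Weather units
    slots: list = field(default_factory=list)  # (start, end) tuples

def confirm_slot(pads, start, end, preferred=None):
    """Greedy slot confirmation: try the GFMU-preferred pad first,
    then any usable pad with no overlapping reservation."""
    ordered = sorted(pads, key=lambda p: p.name != preferred)
    for pad in ordered:
        if not pad.usable:
            continue
        if all(end <= s or start >= e for s, e in pad.slots):
            pad.slots.append((start, end))
            return pad.name
    return None  # reject: Fleet Manager must renegotiate

pads = [Pad("A"), Pad("B", usable=False)]
print(confirm_slot(pads, 600, 660, preferred="B"))  # falls back to 'A'
print(confirm_slot(pads, 630, 690))                 # overlap -> None
```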
#### 4.1.5 Vertidrome Manager User Interface
The Vertidrome Manager User Interface is illustrated in Figure 4.
In the top left of the figure, a table shows the positions of the aircraft in the pad sectors. The relative heading, distance from the pad and the relative altitude of the incoming or outgoing flight are shown along with its assigned pad. To the right of that is the pad status display, which also shows possible changes to the pad usability. Moving to the right, the system clock along with current weather information, including wind direction and speed, is displayed. Below the weather tab, the pad control panel is shown, containing displays on pad closures due to foreign objects (right) or orders by the operator (left), as well as a means to create a new close order. The last element on the top is the slot reassignment panel, which allows flights to be reassigned to empty slots. Finally, on the bottom, an operational forecast is shown, previewing the currently scheduled flights for landing at or departure from the vertidrome. Additionally, the receipt of a new flight request from the U-space server is also depicted in Figure 4. As soon as the Vertidrome Manager receives a new flight plan, it notifies the VSO about the flight information as well as about the flight approval or flight rejection through temporary pop-up windows. These pop-up windows also demand an acknowledgement from the VSO in order to ensure the successful exchange of information. After presenting the operator interface of the Vertidrome Manager, the next section explains the communication protocol of the Vertidrome Manager with the U-space server and the Ground Control Station.
### _Vertidrome Manager Communication Protocol_
The Vertidrome Manager uses the MQTT protocol for exchanging data with the U-space server as well as the associated Ground Control Station. MQTT stands for Message Queuing Telemetry Transport. It is a lightweight and widely used messaging protocol designed for efficient communication between devices and applications in the Internet of Things (IoT) and other resource-constrained environments [21]. MQTT was invented by Stanford-Clark and Nipper in the late 1990s [21].
The key features and concepts of MQTT include the following:
* Publish/Subscribe Model: MQTT follows a publish/subscribe messaging pattern, where devices (publishers) send messages to a central broker, and other devices (subscribers) can receive those messages by subscribing to specific topics [22]. Topics act as message channels that organize the communication.
* Lightweight: MQTT is designed to be extremely lightweight, making it suitable for low-bandwidth and high-latency networks [21]. The protocol's minimal overhead reduces data transmission requirements, making it ideal for IoT devices with limited processing power and memory.
* Quality of Service (QoS) Levels: MQTT supports three QoS levels for message delivery. QoS 0 means the message is sent once, and the sender does not care whether it is received or not. QoS 1 means the message is guaranteed to be delivered at least once to the receiver, possibly with duplicates. QoS 2 means the message is guaranteed to be delivered exactly once to the receiver, without duplicates [23].
* Retained Messages: MQTT allows publishers to set a "retained" flag on messages. When a message is retained, it will be stored on the broker and sent to any new subscribers that join a topic with that retained message [22].
* Last Will and Testament (LWT): Clients can specify a "last will" message that the broker will publish on their behalf if the client unexpectedly disconnects. This feature is useful for notifying others when a device goes offline [23].
* TCP/IP-based: MQTT operates over TCP/IP, but it can also be implemented on top of other transport protocols, such as WebSockets [21].
* Security: MQTT can work with TLS/SSL encryption to ensure secure communication between clients and brokers [21].
The application of the MQTT protocol in the case of the Vertidrome Manager is as follows. Initially, the Vertidrome Manager receives any new flight requests or flight plans from the U-space server over MQTT. According to the calculated results from the Weather Data Processing Unit and the Risk Management Unit, the Vertidrome Manager evaluates the requested pad usability and sends the approval or rejection of the flight plan back to the U-space server over MQTT. The exchange of information over MQTT is also displayed as pop-up windows in the Vertidrome Manager user interface, which is illustrated in Figure 4.
Additionally, it can be seen in Figure 4 that as soon as the flight request gets approved, the planned flight is added to the operational forecast tab. The Vertidrome Manager also receives aircraft position coordinates from the U-space server over MQTT. As the aircraft approaches the vertidrome and enters the defined sector, the Vertidrome Manager starts displaying aircraft position information, such as distance to the vertidrome, relative altitude and heading, to the VSO. The snapshot is depicted in Figure 5.
The Vertidrome Manager also receives information over MQTT about change in pad usability because of detection of personnel or foreign objects on the pad. Reacting to this information, the Vertidrome Manager closes the affected pad for flight operations by generating a close order and displaying the details in the user interface. Figure 6 depicts such a scenario where pad A is closed because of detection of a person.
After closing the pad, the Vertidrome Manager also forwards this information to the U-space server and the associated Ground Control Station for further steps. As soon as the vertidrome operator clears the pad for operation, this information is again transmitted over MQTT to the relevant stakeholders.
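To illustrate this exchange, the sketch below shows how a flight-plan approval loop could be wired up with the widely used paho-mqtt Python client. The broker address, topic names, JSON payload fields and the `pad_available` stub are illustrative assumptions; they do not reproduce the actual MATLAB implementation or its message formats.

```python
import json
import paho.mqtt.client as mqtt  # paho-mqtt 1.x call signatures

BROKER = "localhost"                      # U-space server (assumed)
REQ_TOPIC = "uspace/flightplan/request"   # illustrative topic names
RESP_TOPIC = "uspace/flightplan/response"

def pad_available(plan):
    # Stand-in for the Weather Data Processing and Risk Management
    # evaluations described above.
    return plan.get("pad") in {"A", "B"}

def on_message(client, userdata, msg):
    plan = json.loads(msg.payload)
    verdict = "APPROVED" if pad_available(plan) else "REJECTED"
    client.publish(RESP_TOPIC,
                   json.dumps({"flight_id": plan["flight_id"],
                               "verdict": verdict}),
                   qos=1)  # QoS 1: delivered at least once

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER, 1883, 60)
client.subscribe(REQ_TOPIC, qos=1)
client.loop_forever()
```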
### _Fleet Manager_
U-Fly is an in-house developed ground control station that has been used for a variety of UAS studies and flight test campaigns, such as reported in [24, 25, 26]. It is used for flight path planning and execution, as well as operations monitoring. It can be used for the operation of a single UAS or of multiple UAS simultaneously. Figure 7 shows the ground operator's view of U-Fly as used for the evaluation of the vertidrome manager. The red areas are no-fly zones, so-called geo-fences. The blue circles show the available landing pads. The designated areas for approach or departure are marked as grey arcs. Waypoints (blue dots) can be adjusted by the UAS operator as needed.
Regarding the services introduced in Section 2.2, U-Fly as Fleet Manager assumes the U-space services of UAS registration and tracking for displaying the flight paths of registered UAS. Flight path planning and management is directly addressed via U-Fly. Communication is established through MQTT as described earlier. The UAS, as introduced in the following section, are currently connected to U-Fly via a 433 MHz telemetry datalink. Strategic conflict prediction and resolution can be performed within U-Fly. For tactical conflict detection and resolution, further services onboard the UAS would be required, such as those described in [27]. This was not part of the vertidrome manager demonstration described in this paper.
## 5 Evaluation
The vertidrome manager and its integration into a prototypical U-space environment were tested in live demonstrations conducted between May and July 2023 at the National Experimental Test Center for UAS located at Cochstedt, Germany. A model city on a scale of 1:4
Fig. 5: Aircraft position information received by the Vertidrome Manager over MQTT
Fig. 6: Pad closure information received over MQTT
Fig. 7: U-Fly Fleet Manager
was erected from shipping containers [28], see Figure 8. Vertidrome landing pads were marked on the ground.
### _Scenario_
An airport shuttle use-case was selected for demonstration, similar to a scenario that previously had been evaluated in a virtual reality passenger study [29]. Air taxis are operated between Hamburg 'Airport' and Hamburg city center with a vertidrome named 'Binnenalster'. In the Cochstedt demonstration, an additional landing pad at Hamburg 'Main Station' was included as shown in Figure 9. The first demonstration showed the nominal case with one air taxi flying from the North of Hamburg (airport area, behind the model city) above an urban canyon (model city) towards the vertidrome Binnenalster (marked on the ground).
For the nominal procedure the sequence of events was as follows:
1. Fleet manager plans flight path from Airport to Binnenalster within U-Fly
2. Flight plan is sent to U-space for approval
3. Vertidrome manager receives landing request through U-space and accepts it
4. Fleet manager receives approval and initiates take-off at Airport
5. Fleet manager and vertidrome manager can track the air taxi during automated en-route flight
6. Air taxi lands on time at Binnenalster and flight plan is concluded
The second demonstration included a rerouting to the vertidrome Main Station due to a blocked landing pad at Binnenalster. Another UAS detects a passenger on the landing pad Binnenalster, aided by a runtime-monitored machine learning algorithm for the detection of persons in image data [30]. For this rerouting scenario the sequence of events is the same as above until step 5 and then continues with:
6. UAS reports detected person on landing pad as emergency through U-space
7. Vertidrome manager receives emergency message and closes pad at Binnenalster
8. Fleet manager receives information on pad closure and activates an alternative flight plan to Main Station
9. Vertidrome manager accepts landing request at Main Station
10. Alternative flight plan is approved through U-space and air taxi continues en-route flight
11. Air taxi lands safely at Main Station and flight plan is concluded
Further scaled UAM demonstrations were conducted for the evaluation of drone to drone communication and multisensor navigation. Those results are reported in [31, 32].
### _Air taxis at scale_
For the demonstration within the model city smaller UAS were used to simulate passenger carrying air taxis. An overview of the three drones used for the above scenario shall be provided.
Fig. 8: Urban canyon on a scale of 1:4, erected from shipping containers
Fig. 9: Urban air mobility scenario within Hamburg
For most of the flight trials the EVO X8 heavy [Fig 10] was used, since it best fits the 1:4 scale of the model city for simulating an air taxi. The EVO X8 [Fig 11] is basically the same drone as the EVO X8 heavy, but with smaller propellers, motors and overall smaller dimensions, and thus less empty weight. The HolyBro S500 [Fig 12] was mainly used for preliminary tests and high-risk operations. This small drone is easy and cheap to repair in case of an accident, but offers the same autopilot features as the bigger drones.
### _Results_
Figures 13 and 14 show an exemplary flight path from the scaled rerouting scenario, where the incoming air taxi has to divert to an alternative vertidrome. The orange track is the initial flight path, originally aiming at vertidrome Binnenalster (rectangular pad marking). The green track shows the rerouted flight path to Main Station (round pad marking).
The small UAS could successfully be used for scaled demonstrations instead of full-sized air taxis in the trials. In the future, the basic assumed U-space services and functionalities are expected to be the same for all UAM vehicles, whether passenger-carrying or not. The U-Fly interface had previously been used for various experiments and was therefore easily usable also in the air taxi scenarios. The vertidrome manager interface was completely new. It was not yet optimized for usability, but could be used for the intended procedures and commands. Communication via MQTT was successful within the local wireless network. Most difficult to realize in the demonstration was the right timing of events within the short duration of flight. The average nominal scenario had a duration of only 3 minutes with the UAS flying at 2 m/s. Wind was a limiting factor in the tests. During the final runs in July 2023, demonstrations had to be terminated when wind speeds of more than 11 m/s occurred.
## 6 Discussion
The demonstrated vertidrome management tool relies on a human controller to manage incoming requests. Future developments envision a higher degree of automation on the vehicle side but also on the controller side. In future works also the integration at existing airports and the interface with conventional air traffic management will be investigated.
The chosen MQTT protocol proved to be simple to use and effective in mimicking a functional U-space environment. Alternatively, Fas-Millan et al. [33] report the use of a REST-API (Representational State Transfer - Application Program Interface) and JSON (JavaScript Object Notation) format messages in a U-space prototype. Fas-Millan et al. also state that this format allows the easy integration of the different ground control stations and drone platforms, since most programming languages
Fig. 11: EVO X8
Fig. 12: HolyBro S500 V2
Fig. 10: EVO X8 heavy
have libraries to manage it, and that it provides great flexibility to allow changes in the message specifications without necessarily having to impact the existing code. For future ConOps development, an agreed definition of the minimum and optional commands and the information exchanged between the UTM and the UAS would be beneficial [33]. Living labs such as the Air Space Research Area (AREA) U-space [34], currently being implemented in Cochstedt, Germany, will provide a holistic framework for future flight testing within U-space.
The vertidrome manager, as conceptualized in this paper, addresses additional U-space services that are not yet described in the CORUS-XUAM ConOps [10]. Required services such as for providing status information on the pad availability or for assistance in emergency procedures will be further investigated in follow-on projects such as EUREKA [35].
## 7 Conclusion
This paper reported on the conceptualization, implementation and demonstration of a vertidrome manager. The tool was integrated into a prototypical U-space environment and tested in scaled flight trials with smaller UAS representing air taxis. For this purpose a model city was erected, consisting of an urban canyon and several landing pads. The scaled demonstrations proved to be very effective, especially as prototypes of full-sized passenger-carrying air taxis were still rarely available at the time this paper was written. Specifically, in terms of functionality and information displayed, the interface proved to be a good first approach to the software that would be required in a first implementation phase, in which a person-in-the-loop is required to approve or cancel air taxi operations. Furthermore, the handling of the eventualities, even with the requirement of a person involved, was quick enough for the drone in the scaled experiment to maneuver in time and execute the rerouting with no risk.
U-space services will be implemented in the near future in Europe and will be made available for new airspace users such as UAS, but also for remotely piloted or autonomous air taxis. Specific U-space services as needed for vertidrome operations were successfully tested in the reported scaled flight demonstrations. In future research, more emphasis should be placed on the actual vertidrome layout and the approach procedures. Capacity and efficiency of different designs should be carefully considered in simulation as well as in flight testing.
## Acknowledgment
The authors acknowledge the contribution of Michael Rudolph and Cornelius Lehners in coding the software tools as well as the support of the HorizonUAM flight test team in the validation campaign.
## Competing Interests
B.I. Schuchardt is also guest editor for the CEAS Aeronautical Journal for the special issue on the HorizonUAM project but has not been involved in the review of this manuscript. The other co-authors have no competing interests to declare that are relevant to the content of this article.
Fig. 14: Flight path of the rerouting scenario: Top view
Fig. 13: Flight path of the rerouting scenario: Low view as seen from take-off point |
2309.15813 | Fractal-like star-mesh transformations using graphene quantum Hall
arrays | A mathematical approach is adopted for optimizing the number of total device
elements required for obtaining high effective quantized resistances in
graphene-based quantum Hall array devices. This work explores an analytical
extension to the use of star-mesh transformations such that fractal-like, or
recursive, device designs can yield high enough resistances (like 1 E{\Omega},
arguably the highest resistance with meaningful applicability) while still
being feasible to build with modern fabrication techniques. Epitaxial graphene
elements are tested, whose quantized Hall resistance at the nu=2 plateau (R_H =
12906.4 {\Omega}) becomes the building block for larger effective, quantized
resistances. It is demonstrated that, mathematically, one would not need more
than 200 elements to achieve the highest pertinent resistances | Dominick S. Scaletta, Swapnil M. Mhatre, Ngoc Thanh Mai Tran, Cheng-Hsueh Yang, Heather M. Hill, Yanfei Yang, Linli Meng, Alireza R. Panna, Shamith U. Payagala, Randolph E. Elmquist, Dean G. Jarrett, David B. Newell, Albert F. Rigosi | 2023-09-27T17:38:01Z | http://arxiv.org/abs/2309.15813v1 | # Fractal-like star-mesh transformations using graphene quantum Hall arrays
###### Abstract
A mathematical approach is adopted for optimizing the number of total device elements required for obtaining high effective quantized resistances in graphene-based quantum Hall array devices. This work explores an analytical extension to the use of star-mesh transformations such that fractal-like, or recursive, device designs can yield high enough resistances (like 1 E\(\Omega\), arguably the highest resistance with meaningful applicability) while still being feasible to build with modern fabrication techniques. Epitaxial graphene elements are tested, whose quantized Hall resistance at the \(\nu=2\) plateau (\(R_{\rm H}\approx 12906.4\ \Omega\)) becomes the building block for larger effective, quantized resistances. It is demonstrated that, mathematically, one would not need more than 200 elements to achieve the highest pertinent resistances. |
2302.14857 | Variational deep learning of equilibrium transition path ensembles | We present a time dependent variational method to learn the mechanisms of
equilibrium reactive processes and efficiently evaluate their rates within a
transition path ensemble. This approach builds off variational path sampling
methodology by approximating the time dependent commitment probability within a
neural network ansatz. The reaction mechanisms inferred through this approach
are elucidated by a novel decomposition of the rate in terms of the components
of a stochastic path action conditioned on a transition. This decomposition
affords an ability to resolve the typical contribution of each reactive mode
and their couplings to the rare event. The associated rate evaluation is
variational and systematically improvable through the development of a cumulant
expansion. We demonstrate this method in both over- and under-damped stochastic
equations of motion, in low-dimensional model systems and the isomerization of
solvated alanine dipeptide. In all examples, we find that we can obtain
quantitatively accurate estimates of the rates of the reactive events with
minimal trajectory statistics, and gain unique insight into the transitions
through the analysis of their commitment probability. | Aditya N. Singh, David T. Limmer | 2023-02-28T18:55:58Z | http://arxiv.org/abs/2302.14857v2 | # Variational deep learning of equilibrium transition path ensembles
###### Abstract
We present a time dependent variational method to learn the mechanisms of equilibrium reactive processes and efficiently evaluate their rates within a transition path ensemble. This approach builds off of the variational path sampling methodology by approximating the time dependent commitment probability within a neural network ansatz. The reaction mechanisms inferred through this approach are elucidated by a novel decomposition of the rate in terms of the components of a stochastic path action conditioned on a transition. This decomposition affords an ability to resolve the typical contribution of each reactive mode and their couplings to the rare event. The associated rate evaluation is variational and systematically improvable through the development of a cumulant expansion. We demonstrate this method in both over- and under-damped stochastic equations of motion, in low-dimensional model systems and the isomerization of solvated alanine dipeptide. In all examples, we find that we can obtain quantitatively accurate estimates of the rates of the reactive events with minimal trajectory statistics, and gain unique insight into the transitions through the analysis of their commitment probability.
## Introduction
In complex systems, understanding the mechanism of a transition between long-lived metastable states is hampered by the general collective nature of the dynamics and the difficulty of observing these rare but important events.[1] While methods like transition path sampling[2] exist to harvest rare events computationally, their distillation into mechanistic descriptions is cumbersome, and the conversion of that description into quantitative statements of their rate is challenging.[3; 4] Here, we present a method that uses a neural-network ansatz with a variational optimization procedure to compute the time dependent commitment probability from a reactive trajectory ensemble. The method involves learning a unique policy, in the form of an optimal external control force, that reweights a reactive conditioned path ensemble to an unconditioned ensemble that reacts autonomously. The optimal force is simply related to the commitment probability,[5; 6] and serves as an ideal descriptor of the reaction. The reweighting principle developed within the framework of variational path sampling[7] is expressed in terms of the stochastic action, which allows us to decompose the rate into additive contributions from different degrees of freedom, including collective coordinates that describe molecular transitions. This decomposition provides a means of identifying relevant order parameters without making _a-priori_ assumptions. The combination of the mechanistic insight afforded by an interpretable representation of the reaction and the validation through a variational evaluation of the rate, provides a robust method for distilling features of equilibrium transition path ensembles.
The investigation of reactive events requires access to timescales that are considerably longer than the local relaxation time of the system. The canonical approach to investigate these processes has leveraged physically intuitive low-rank descriptions of the system to infer mechanistic insight, and bridge the timescales through reactive flux calculations or importance sampling.[8; 9; 10; 11; 12] The notion of an ideal reaction coordinate capable of providing a complete description of the reactive event dates back to Onsager,[13] and was formalized within the context of chemical physics as the committor: a map between the phase space position of a system and the likelihood of it reacting.[34; 35; 14; 15; 16] Learning this high dimensional function has attracted interest from a diversity of fields, and significant advancements have been made through methods that employ importance sampling and machine learning.[16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28] Some notable approaches have leveraged the confinement of the transition region to compute it using string methods,[16; 17; 29] coarse-grained the phase-space to approximate it through diffusion maps,[19; 28; 30; 31] and parameterized neural networks by either fitting the committor directly[18; 21] or solving the variational form of the steady-state backward Kolmogorov equation[22] by combining it with importance sampling methods.[23; 24; 25] While the learning procedures applied previously have been successful in fitting high dimensional representations of the reaction coordinate or committors, their nonlinearity has largely resulted in a difficulty in interpreting the relative importance of physically distinct descriptors and converting those descriptors into a robust measure of the rate. Earlier developments of methods based on likelihood maximization[18; 32; 33] have offered linear ways to make this analysis tractable to complex processes.[34; 35; 36; 37; 38] However, these approaches have overwhelmingly relied on physical intuition to express likelihood functions.[37]
The method that we present builds off of variational path sampling[6; 7; 39; 40; 41] that has provided an alternative approach for sampling rare events. Specifically, a recent method[6] has detailed how to express a low-rank ansatz for an optimal control force to drive rare events and estimate their rates. Our work exploits the fact that the optimization of this control force, or policy, is related to the time dependent committor. We find that in equilibrium systems, where path sampling methods afford a way to generate a reference reactive trajectory
ensemble, the optimization of this committor becomes straightforward, and allows the use of a neural-network (NN) ansatz to solve the time-dependent backward Kolmogorov equation,[42] providing a time dependent and probabilistic representation of the reaction. While the method computes a nonlinear function, the form of the optimized loss is given by the difference in stochastic actions that quantifies the distance between a conditioned and a reference trajectory ensemble. For systems in which we saturate the variational bound, this quantity is unique and linearly decomposable on a per-coordinate basis, and can be understood as a measure of the importance of each coordinate to conditioning a trajectory to be reactive. This metric is purely based on the intrinsic mechanism of the reaction, and can be extended to collective coordinates, allowing us to identify the relevant reaction descriptors without making _a-priori_ assumptions.
This paper is organized as follows. First, we review the variational path sampling formalism to discuss the theory behind this method. Next, we validate this method by applying it to a couple of low dimensional systems where numerically exact results are possible. We probe the sensitivity of this method to limited statistics as well as the applicability to systems integrated with underdamped equations of motion. Then, we illustrate how the per-coordinate stochastic action encodes the relevance of a coordinate to the reaction. Finally, we apply this method to study the isomerization of alanine dipeptide in implicit and explicit solvent. In both of these cases, we show how the method can be used to infer a mechanistic picture of the reaction, and identify important reaction descriptors among a redundant set of internal coordinates.
## I Variational path sampling formalism
For simplicity, we consider a system evolving under an overdamped Langevin equation of the form,
\[\gamma_{i}\dot{\mathbf{r}}_{i}(t)=\mathbf{F}_{i}\left(\mathbf{r}^{N}\right)+ \boldsymbol{\eta}_{i}(t) \tag{1}\]
where \(\dot{\mathbf{r}}_{i}\) is the rate of change of the \(i\)th particle's position at time \(t\) in \(d\) dimensions, \(\gamma_{i}\) is the friction coefficient, and \(\boldsymbol{\eta}_{i}(t)\) denotes a Gaussian random force with mean \(\left\langle\boldsymbol{\eta}_{i}(t)\right\rangle=0\) and variance \(\left\langle\boldsymbol{\eta}_{i}(t)\otimes\boldsymbol{\eta}_{j}(t^{\prime}) \right\rangle=2\gamma_{i}k_{\mathrm{B}}T\delta_{ij}\mathbf{1}\delta(t-t^{ \prime})\) where \(k_{\mathrm{B}}T\) is Boltzmann's constant times the temperature. The conservative force \(\mathbf{F}_{i}\left(\mathbf{r}^{N}\right)=-\nabla_{i}V\left(\mathbf{r}^{N}\right)\) is given by the gradient of the potential \(V\left(\mathbf{r}^{N}\right)\) with \(\mathbf{r}^{N}\) the full \(N\)-particle configuration. We are interested in investigating reactive events, so we consider potentials that exhibit metastability.
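For concreteness, the following minimal sketch integrates Eq. (1) with a first-order Euler-Maruyama scheme in reduced units; the double-well force is an illustrative stand-in for a metastable potential and is not one of the systems studied here. Returning the noise realization alongside the new position is convenient, because the action differences introduced below require the noise history along each trajectory.

```python
import numpy as np

def overdamped_step(r, force, dt, gamma=1.0, kT=1.0, rng=None):
    """One Euler-Maruyama step of Eq. (1): gamma*dr/dt = F + eta,
    with <eta eta> = 2*gamma*kT*delta(t-t'). Returns the new
    position and the integrated noise increment over the step."""
    rng = rng or np.random.default_rng()
    noise = np.sqrt(2.0 * gamma * kT * dt) * rng.standard_normal(r.shape)
    return r + (force(r) * dt + noise) / gamma, noise

# Double-well in x as a minimal metastable example (illustrative).
def force(r):
    x, y = r
    return np.array([-4.0 * x * (x**2 - 1.0), -2.0 * y])

rng = np.random.default_rng(0)
r = np.array([-1.0, 0.0])     # start in state A (x < 0)
for _ in range(10_000):
    r, _ = overdamped_step(r, force, dt=1e-3, rng=rng)
print(r)
```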
We consider transitions between two metastable states, \(A\) and \(B\), which in general are collections of configurations defined through the indicator functions \(h_{A}[\mathbf{r}^{N}(t)]\) and \(h_{B}[\mathbf{r}^{N}(t)]\), where
\[h_{X}[\mathbf{r}^{N}(t)]=\begin{cases}1&\mathbf{r}^{N}(t)\in X\\ 0&\mathbf{r}^{N}(t)\notin X\end{cases} \tag{2}\]
for \(X=\{A,B\}\). The rate for transitioning \(A\to B\) can be defined by the time derivative of the side-side correlation function,[9]
\[k=\frac{d}{dt}\frac{\langle h_{A}(0)h_{B}(t)\rangle}{\langle h_{A}\rangle}= \frac{d}{dt}\langle h_{B|A}(t)\rangle \tag{3}\]
where \(\langle\cdots\rangle\) denotes an average computed over a stationary distribution and \(h_{B|A}\) is the conditional probability of starting in \(A\) and ending in \(B\) at \(t\). Provided a separation of timescales between the local relaxation time within a state, \(\tau_{\mathrm{mol}}\), and \(1/k\), the rate is given by the path integral
\[kt_{f}=\int\mathcal{D}[\mathbf{X}]h_{B|A}(t_{f})P[\mathbf{X}] \tag{4}\]
where, when \(t_{f}\) is in the range \(\tau_{\mathrm{mol}}<t_{f}\ll 1/k\), the probability to transition grows linearly with time. The path integral sums over all trajectories \(\mathbf{X}=\{\mathbf{r}^{N}(0),\ldots,\mathbf{r}^{N}(t_{f})\}\), or the timeseries of the state of the system evolved for time \(t_{f}\), weighted by the likelihood of observing a trajectory \(P[\mathbf{X}]\). This path integral is a trajectory partition function associated with reactive paths,[43] and is equal to the transition probability from \(A\) to \(B\) in time \(t_{f}\).
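As a sketch of how Eq. (4) is used in practice, the log transition probability can be estimated from an ensemble of unbiased trajectories as the log fraction of those starting in \(A\) that end in \(B\) at \(t_{f}\). The indicator thresholds mirror Eq. (18) below, while the synthetic random-walk data is purely an illustrative stand-in.

```python
import numpy as np

def ln_k_tf(trajectories, in_A, in_B):
    """Estimate ln(k * t_f) as the log fraction of trajectories that
    start in A and end in B at t_f; `trajectories` has shape
    (n_traj, n_steps, dim)."""
    started_A = in_A(trajectories[:, 0])
    reacted = started_A & in_B(trajectories[:, -1])
    return np.log(reacted.sum() / started_A.sum())

in_A = lambda x: x[:, 0] < -0.85   # indicator h_A, as in Eq. (18)
in_B = lambda x: x[:, 0] > 0.85    # indicator h_B

# Synthetic 1D random walks starting near -1 (illustrative only).
rng = np.random.default_rng(0)
trajs = np.cumsum(0.05 * rng.standard_normal((5000, 400, 1)), axis=1) - 1.0
print(ln_k_tf(trajs, in_A, in_B))
```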
Variational path sampling uses the path partition function representation of the rate together with a dynamical reweighting approach[44] to extract reactive paths effectively,[39] evaluate rates accurately,[6] and, as we show here, provide detailed mechanistic information concerning the rare event. Variational path sampling does this by considering the system as before, but under the action of an additional time-dependent drift \(\boldsymbol{\lambda}_{i}(\mathbf{r}^{N},t)\), which enters the equation of motion as
\[\gamma_{i}\dot{\mathbf{r}}_{i}=\mathbf{F}_{i}(\mathbf{r}^{N})+\boldsymbol{ \lambda}_{i}(\mathbf{r}^{N},t)+\boldsymbol{\eta}_{i}(t) \tag{5}\]
where the conservative force, noise and friction are the same as the reference system without \(\boldsymbol{\lambda}_{i}(\mathbf{r}^{N},t)\). For this driven system, the rate \(k_{\lambda}\) between the same two metastable states \(A\) and \(B\) is given by an analogous relation as in the reference system
\[k_{\lambda}t_{f}=\int\mathcal{D}[\mathbf{X}]h_{B|A}(t)P_{\lambda}[\mathbf{X}] \tag{6}\]
where \(P_{\lambda}[\mathbf{X}]\) denotes the probability of observing a trajectory \(\mathbf{X}\) integrated using Eq. 5. By virtue of the Girsanov transformation, these two rate expressions can be related to each other. Specifically, using the Radon-Nikodym derivative to define the change in stochastic action, \(\Delta U_{\lambda}[\mathbf{X}]=\ln P_{\lambda}[\mathbf{X}]/P[\mathbf{X}]\), the rate in the driven system can be rewritten as[45]
\[\ln k_{\lambda}t_{f} =\ln\int D[\mathbf{X}]P[\mathbf{X}]h_{B|A}(t_{f})e^{\Delta U_{ \lambda}}\] \[=\ln kt_{f}+\ln\left\langle e^{\Delta U_{\lambda}}\right\rangle_{ B|A} \tag{7}\]
where we have employed \(\langle\ldots\rangle_{B|A}=\langle h_{B|A}\ldots\rangle/\langle h_{B|A}\rangle\) as a conditional average over a reference reactive ensemble to relate the two rates. For the case of the overdamped Langevin equation, the change in stochastic action is given by a difference of Onsager-Machlup actions[46]
\[\Delta U_{\lambda}[\mathbf{X}]=-\sum_{i=1}^{N}\frac{1}{4\gamma_{i}k_{\mathrm{B}}T}\int_{0}^{t_{f}}dt\,\left(\boldsymbol{\lambda}_{i}^{2}-2\,\boldsymbol{\eta}_{i}\cdot\boldsymbol{\lambda}_{i}\right) \tag{8}\]
where we have used the fact that the average is performed in the reference system to replace the difference between the time derivative of the position and the conservative force with the noise, \(\gamma_{i}\dot{\mathbf{r}}_{i}-\mathbf{F}_{i}=\boldsymbol{\eta}_{i}\).
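In discrete time, Eq. (8) can be accumulated along a stored trajectory from the control force and the saved noise increments; a minimal sketch, assuming the integrated-noise convention of the integrator sketched earlier, is:

```python
import numpy as np

def delta_U(lam, noise, dt, gamma=1.0, kT=1.0):
    """Discretized Eq. (8) for one trajectory. `lam` holds the control
    force at each step, shape (n_steps, dim); `noise` holds the
    integrated noise increments sqrt(2*gamma*kT*dt)*xi saved during
    integration (e.g. the second return value of the integrator
    sketched earlier)."""
    lam = np.asarray(lam, float)
    noise = np.asarray(noise, float)
    return -((lam**2).sum() * dt
             - 2.0 * (lam * noise).sum()) / (4.0 * gamma * kT)

# Random stand-in data, purely to show the call signature.
rng = np.random.default_rng(0)
lam = rng.standard_normal((500, 2))
noise = np.sqrt(2.0 * 4e-3) * rng.standard_normal((500, 2))
print(delta_U(lam, noise, dt=4e-3))
```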
The relationship between rates in Eq. 7 is exact for any time-dependent drift \(\mathbf{\lambda}_{i}(\mathbf{r}^{N},t)\). It is distinct from that employed previously,[6] which related the reference and driven rates to an expectation value in the driven system. In variational path sampling, we consider a class of \(\mathbf{\lambda}_{i}(\mathbf{r}^{N},t)\) which enforce the transition to occur with probability 1. In such a case, provided access to a reactive path ensemble in which to evaluate the expectation values, the rate in the reference system can be obtained directly as an exponential average,
\[\ln kt_{f} =-\ln\left\langle e^{\Delta U_{\lambda}}\right\rangle_{B|A} \tag{9}\] \[=-\langle\Delta U_{\lambda}\rangle_{B|A}-\sum_{n=2}^{\infty}\frac {1}{n!}\mathcal{C}_{B|A}^{(n)}(\Delta U_{\lambda}) \tag{10}\]
or a cumulant expansion, where \(\mathcal{C}_{B|A}^{(n)}(\Delta U_{\lambda})\) denotes the \(n\)'th cumulant of \(\Delta U_{\lambda}\) averaged in the reactive ensemble. We will refer to these different estimators as \(k^{(\exp)}\) for the exponential average and \(k^{(n)}\) for the cumulant expansion where \(n\) will denote where the sum was truncated.
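Given samples of \(\Delta U_{\lambda}\) over the reactive ensemble, the exponential and cumulant estimators can be evaluated directly; a minimal sketch using unbiased k-statistics for the second cumulant is given below. The Gaussian stand-in data is illustrative, chosen so that the exponential and second-order estimates coincide.

```python
import numpy as np
from scipy.stats import kstat  # unbiased cumulant (k-statistic) estimates

def rate_estimators(dU):
    """ln(k*t_f) from samples of Delta U_lambda over the reactive
    ensemble: the exponential estimator of Eq. (9) and the cumulant
    truncations of Eq. (10) at first and second order."""
    dU = np.asarray(dU, float)
    ln_k_exp = -np.log(np.mean(np.exp(dU)))
    ln_k_1 = -np.mean(dU)               # the n = 1 variational bound
    ln_k_2 = ln_k_1 - 0.5 * kstat(dU, 2)
    return ln_k_exp, ln_k_1, ln_k_2

# Gaussian stand-in: exponential and second-order estimates agree.
dU = np.random.default_rng(1).normal(6.0, 0.5, size=400)
print(rate_estimators(dU))
```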
Truncation of the cumulant expansion for \(n=1\) provides a variational bound of the rate. This is seen by applying Jensen's inequality to Eq. 7,
\[\ln kt_{f}\leq-\langle\Delta U_{\lambda}\rangle_{B|A} \tag{11}\]
where the log rate in the reference system is bounded from above by minus the mean change in action, which at the optimum is minus the Kullback-Leibler (KL) divergence between the driven and reference path ensembles. In equilibrium systems, this relation is similar to the variational structure of transition state theory, which also provides an upper bound to the rate.[9] However, this expression is also closely related to the reversible work theorem in equilibrium thermodynamics,[47] as it relates the smallest change required to transform one ensemble to another.[48; 49] In this case, the transformation is between an unconditioned path ensemble and a reactive path ensemble. Just as the minimum amount of work done on a physical system is given by its reversible limit, which reflects the way in which a system would naturally transform, so too we find the minimum driving force to ensure a reaction is related to the way in which a system would naturally react.[50] This is shown by noting that the force that saturates this bound in Eq. 11 is the Doob force, denoted \(\mathbf{\lambda}^{*}(\mathbf{r}^{N},t)\), which is related to the solution of the backward Kolmogorov equation.[5; 6; 51; 52] For an overdamped Langevin dynamics this is,
\[\partial_{t}q(\mathbf{r}^{N},t)=-\sum_{i=1}^{N}\left[\frac{\mathbf{F}_{i}\left(\mathbf{r}^{N}\right)}{\gamma_{i}}\cdot\nabla_{i}q(\mathbf{r}^{N},t)+\frac{k_{\mathrm{B}}T}{\gamma_{i}}\nabla_{i}^{2}q(\mathbf{r}^{N},t)\right] \tag{12}\]
with boundary conditions \(q(\mathbf{r}^{N},t_{f})=h_{B}(t_{f})\) and \(q(\mathbf{r}^{N},0)=h_{A}(0)\). The function that solves this expression, \(q(\mathbf{r}^{N},t)\), is the time-dependent committor function,[16] or the probability of reaching state \(B\) at \(t_{f}\) given a position \(\mathbf{r}^{N}\) at time \(t\). In the stationary limit, where the separation of timescales prohibits multiple transitions, \(q(\mathbf{r}^{N},t)\) reduces to the time independent committor function of transition path theory.[53; 16] The explicit relation between the Doob force \(\mathbf{\lambda}^{*}(\mathbf{r}^{N},t)\) and \(q(\mathbf{r}^{N},t)\) is,
\[\mathbf{\lambda}^{*}(\mathbf{r}^{N},t)=2k_{\mathrm{B}}T\nabla\ln q(\mathbf{r}^{N},t) \tag{13}\]
where by construction this force makes all trajectories reactive, and the reactions occur as they would in the original system. This force uniquely saturates the inequality in Eq. 11, thus providing a unique description of the reaction in a complex system.[6]
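As a concrete check of Eq. (13), for free diffusion (\(\mathbf{F}=0\)) conditioned to reach a point \(b\) at \(t_{f}\), the time dependent committor is Gaussian and the Doob force reduces to the classic Brownian-bridge drift, \(\lambda^{*}(x,t)=\gamma(b-x)/(t_{f}-t)\). The sketch below, in reduced units with illustrative parameters, verifies numerically that driving with this force makes every trajectory reach \(b\).

```python
import numpy as np

gamma, kT, dt, t_f, b = 1.0, 1.0, 1e-3, 1.0, 1.0
rng = np.random.default_rng(0)

x = np.zeros(1000)                       # 1000 walkers starting at x = 0
for step in range(int(t_f / dt) - 1):
    t = step * dt
    lam = gamma * (b - x) / (t_f - t)    # Doob force from Eq. (13)
    eta = np.sqrt(2.0 * gamma * kT * dt) * rng.standard_normal(x.shape)
    x += (lam * dt + eta) / gamma
print(x.mean(), x.std())                 # all walkers end near b = 1
```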
This formalism allows us to compute both the time-dependent committor \(q(\mathbf{r}^{N},t)\) and the rate \(k\) from a reactive trajectory ensemble by parameterizing the external force \(\mathbf{\lambda}\) and optimizing it by maximizing the expectation value of the change in action averaged within the reactive trajectory ensemble. In this work, we will consider parameterizing \(\mathbf{\lambda}\) with both linear functional forms as well as a non-linear form provided by a neural network. The optimization of either is done by defining a loss function, \(\mathcal{L}_{\lambda}\), as
\[\mathcal{L}_{\lambda}=\bigg{\langle}\int_{0}^{t_{f}}dt\sum_{i=1}^{N}\frac{\boldsymbol{\lambda}_{i}^{2}-2\,\boldsymbol{\lambda}_{i}\cdot\boldsymbol{\eta}_{i}}{4\gamma_{i}k_{\mathrm{B}}T}\bigg{\rangle}_{B|A} \tag{14}\]
where the sum is over each independent component of the noise, and \(\boldsymbol{\lambda}_{i}\) is the component of the driving force on the degree of freedom associated with the noise. This loss function is just the negative of the mean change in stochastic action, i.e. the upper bound on \(\ln kt_{f}\) in Eq. 11, so in minimizing it with respect to \(\boldsymbol{\lambda}_{i}\) we are simultaneously optimizing our estimate of the rate. This optimization occurs over \(n_{\mathrm{int}}\) iterations, and requires averages within the reactive trajectory ensemble, which we will generate with standard path sampling tools like transition path sampling. In all cases presented, we have ensured that the optimized force satisfies \(k_{\lambda}t_{f}\approx 1\).
## II Choice of ansatz and convergence
The accuracy of the rate estimate, and the mechanistic information afforded by the evaluation of \(q(\mathbf{r}^{N},t)\), depends on the fidelity with which the function can be represented. This depends on the ansatz used to expand it, and in particular, its expressibility. It also depends on the ease by which the function is learned, as inevitably the reactive path ensemble needed to train \(q(\mathbf{r}^{N},t)\) will be computationally expensive to generate. In this section, we consider the relative merits of expanding the driving force in both linear and nonlinear bases, and assess their accuracy and data efficiency.
### Linear Function Ansatz
We first consider the case of linear function approximations. A linear functional for \(\mathbf{\lambda}(\mathbf{r}^{N},t)=2k_{\mathrm{B}}T\nabla\ln q(\mathbf{r}^{N},t)\) can generically be expressed as its associated potential,
\[\ln q(\mathbf{r}^{N},t)=\sum_{n=1}^{n_{\mathrm{b}}}c_{n}\varphi_{n}(\mathbf{r}^ {N},t) \tag{15}\]
where \(c_{n}\) and \(\varphi_{n}(\mathbf{r}^{N},t)\) denote the \(n\)'th coefficient and basis function, and \(n_{\mathrm{b}}\) denotes the total number of basis
functions. This can be written compactly as \(\mathbf{\lambda}(\mathbf{\mathrm{r}}^{N},t)=\nabla[\mathbf{\mathrm{c}}\cdot\mathbf{\Phi}(\mathbf{ \mathrm{r}}^{N},t)]\), where \(\mathbf{\mathrm{c}}\) is the \(n_{\mathrm{b}}\) length vector of coefficients and \(\mathbf{\Phi}(\mathbf{\mathrm{r}}^{N},t)\) is the vector of basis functions. For a linear functional expansion, the optimal set of coefficients \(\mathbf{\mathrm{c}}^{*}\) has a closed form that can be computed by taking the derivative of the loss function in Eq. 14 and setting it to 0. Computing the coefficients reduces to solving a \(n_{\mathrm{b}}\times n_{\mathrm{b}}\) set of linear equations, whose solution is
\[\mathbf{\mathrm{c}}^{*}=\left[\left\langle\,\int_{0}^{t_{f}}dt\;\nabla\mathbf{\mathrm{ \Phi}}\otimes\nabla\mathbf{\Phi}\right\rangle_{B|A}\right]^{-1}\left\langle\,\int_ {0}^{t_{f}}dt\;\mathbf{\mathrm{\eta}}\cdot\nabla\mathbf{\Phi}\right\rangle_{B|A} \tag{16}\]
where \(\otimes\) denotes an outer product. For an orthonormal basis, the optimal coefficients are simply related to the average noise-weighted basis function,[54] but in general, the functions are not expected to be orthonormal. Because of this simplicity in training, linear bases are particularly efficient to employ. In cases where the reaction coordinate can be described well by a limited set of coordinates or order parameters, they can also be accurate.[39, 6, 7]
To understand the utility of a linear functional approximation, we consider a particle evolving in a two-dimensional external potential with two reactive channels visualized in Fig. 1 (A). The potential \(V(x,y)\) is
\[V(x,y)/k_{\mathrm{B}}T=2[6+4x^{4}-6y^{2}+3y^{4}+10x^{2}(y^{2}-1)] \tag{17}\]
where \(x\) and \(y\) are dimensionless coordinates and we have worked in a reduced unit system determined by \(k_{\mathrm{B}}T=\gamma_{x}=\gamma_{y}=1\), and employed a first order Euler integrator with timestep equal to \(0.004\;t^{*}\) with \(t^{*}=k_{\mathrm{B}}T/\gamma_{x}\) as our reduced time unit. We considered transitions defined by the indicator functions
\[h_{A}(x)=\Theta(-x+0.85)\qquad h_{B}(x)=\Theta(x-0.85) \tag{18}\]
where \(\Theta\) denotes the Heaviside step function. A reactive path ensemble was generated by running brute force trajectories in order to sample 400 reactions, and the rate was evaluated by computing the side-side correlation function. We found that \(t_{f}/t^{*}=2\) was a sufficient observation time to be in the linear growth regime for the transition probability with \(\ln kt_{f}=-6.1\pm 0.1\).
The linear approximation used were localized Gaussian basis functions of the form
\[\varphi_{n}(x,y,t)=e^{-a_{x}(x-x_{n})^{2}}e^{-a_{y}(y-y_{n})^{2}}e^{-a_{t}(t-t _{n})^{2}} \tag{19}\]
where the Gaussian centers \(\{x_{n},y_{n},t_{n}\}\) were equally spaced on a grid within the range of \(x=[-1.5,1.3]\), \(y=[-1.6,1.6]\) and \(t=[0,2]\), and the Gaussian widths were chosen such that \(\{a_{x}=1.4/(n_{\mathrm{b}}^{1/3}-1),a_{y}=1.6/(n_{\mathrm{b}}^{1/3}-1),a_{t}=1/(n_{\mathrm{b}}^{1/3}-1)\}\). The expansion coefficients, \(\mathbf{\mathrm{c}}^{*}\), were computed using Eq. 16 averaged over the path ensemble consisting of the 400 reactive trajectories. The optimization was done in one step, \(n_{\mathrm{int}}=1\), where we found the loss function immediately converged to the brute force estimate of the rate for \(n_{\mathrm{b}}=16^{3}\) as shown in Fig. 1 (B). The dependence of the rate estimate with the size of the basis is shown in Fig. 1 (C), where the loss decays slowly, obtaining a value of the rate statistically indistinguishable from the brute force estimation of the rate for \(n_{\mathrm{b}}=16^{3}\). This slow decay could be mitigated somewhat by fine tuning the basis, but we do not explore that here.
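A sketch of the closed-form solve in Eq. (16) is given below; for uniform friction, the friction-dependent prefactors cancel from both sides, which is assumed here. The array shapes, the use of integrated noise increments, and the random stand-in data are assumptions about how the trajectory data might be stored.

```python
import numpy as np

def optimal_coeffs(grad_phi, noise, dt):
    """Closed-form solution of Eq. (16) for a linear ansatz.

    grad_phi : (n_traj, n_steps, n_b, dim) basis-function gradients
               along each reactive trajectory
    noise    : (n_traj, n_steps, dim) integrated noise increments
               (the dt is already absorbed in the increments)
    """
    # <int dt grad_Phi (x) grad_Phi>, averaged over trajectories
    A = np.einsum('tsid,tsjd->ij', grad_phi, grad_phi) * dt
    # <int eta . grad_Phi>
    b = np.einsum('tsd,tsid->i', noise, grad_phi)
    n_traj = grad_phi.shape[0]
    return np.linalg.solve(A / n_traj, b / n_traj)

rng = np.random.default_rng(0)
gp = rng.standard_normal((10, 50, 8, 2))    # 10 trajs, 8 basis fns, 2D
ns = np.sqrt(2.0 * 0.004) * rng.standard_normal((10, 50, 2))
print(optimal_coeffs(gp, ns, dt=0.004).shape)   # (8,)
```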
### Neural Network Function Ansatz
Since the form of the force is rapidly varying and nonlinear, saturation of the inequality in Eq. 11 requires a large number of basis functions. If we express the linear ansatz in the full configuration space, the number of basis set coefficients grows exponentially with the degrees of freedom, making it intractable to converge the loss to the rate for complex systems. To tackle these shortcomings, we consider employing a neural network (NN) ansatz to compute the time dependent committor, associated Doob force through automatic differentiation, and through optimization evaluate the rate. The input
Figure 1: Functional ansatz testing. (A) Potential energy surface of the simple 2D model where the spacing between lines denotes 2 \(k_{\mathrm{B}}T\). (B) Convergence of the loss function using the linear (\(\mathcal{L}_{\mathrm{LB}}\)) ansatz and neural-net (\(\mathcal{L}_{\mathrm{NN}}\)) ansatz. (C) Convergence of the linear basis with basis set size. Errors denote one standard error computed from 3 independent trials.
comprises the features selected for expressing the force and is connected to two hidden layers. For the two hidden layers, the Swish activation function [55] is used as its derivatives are free from discontinuities, while also being exempt from the weight decay problem [56]. The penultimate layer only contains a single unit, with a sigmoid activation function. The output of this layer is the model's estimate of the time-dependent committor, \(q(\mathbf{r}^{N},t)\). The final layer is a _lambda_ layer, which simply computes the log of the committor. The output of this layer represents the many-body potential, \(\ln q(\mathbf{r}^{N},t)\), and the forces can be computed by taking a derivative of the output with respect to the input coordinates via autodifferentiation. While one can simply parametrize the forces instead of the committor, this architecture automatically enforces the conservativeness of the potential and offers a simple way to obtain the committor without the need to perform a multidimensional integration.
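A sketch of this architecture in PyTorch is given below. The hidden-layer width is our own assumption (the text does not specify one), `SiLU` is PyTorch's implementation of Swish, and the prefactor on the force corresponds to overdamped dynamics with the noise convention used here; this is an illustration of the described design, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CommittorNet(nn.Module):
    """ln q(r, t): two Swish hidden layers, a single sigmoid unit, then a log 'lambda' layer."""

    def __init__(self, n_in, n_hidden=64):          # width is an illustrative assumption
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_in, n_hidden), nn.SiLU(),   # SiLU == Swish activation
            nn.Linear(n_hidden, n_hidden), nn.SiLU(),
            nn.Linear(n_hidden, 1), nn.Sigmoid(),   # penultimate layer: q in (0, 1)
        )

    def forward(self, z):
        q = self.body(z).clamp_min(1e-12)           # clamp guards against log(0)
        return torch.log(q)                         # the 'lambda' layer: ln q

def doob_force(model, z, gamma=1.0, kBT=1.0):
    """Doob force 2*kBT*grad_r ln q via autodifferentiation.

    z stacks the spatial coordinates with time as the final column, so the
    time component of the gradient is dropped from the returned force.
    """
    z = z.clone().requires_grad_(True)
    lnq = model(z).sum()
    (grad,) = torch.autograd.grad(lnq, z, create_graph=True)
    return 2.0 * kBT * grad[:, :-1]
```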
For the NN ansatz applied to the same 2D system as above, \(x\), \(y\) and \(t\) were used as the input features and optimization was performed using the RMSprop optimizer [57] on 200 reactive trajectories. The learning rate was chosen to be 0.001, and for each iteration the loss function and associated gradients were evaluated over half of the trajectories drawn randomly from the ensemble. The training curve plotted in Fig. 1 (B) shows that the loss function plateaus to \(\ln kt_{f}\) within \(n_{\text{int}}=20\) indicating that this ansatz was successful in learning the exact time-dependent committor quickly. While the training required multiple iterations, the number of parameters used to converge to the brute force rate was around 500 without specific optimization, far fewer than required in the naive linear function approximation. The flexibility of the NN ansatz and the relatively swift training suggest it as a viable means of approximating the time dependent committor. As a consequence, in the remainder of the manuscript, we consider only the performance of the NN ansatz.
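The corresponding optimization loop might look as follows. It assumes the `reactive` ensemble and the `doob_force` helper of the previous sketches; the scalar being minimized is the trajectory-averaged relative action, whose converged value estimates \(\ln kt_{f}\).

```python
import random
import torch

model = CommittorNet(n_in=3)                 # features: x, y, t
opt = torch.optim.RMSprop(model.parameters(), lr=1e-3)
gamma, kBT, dt = 1.0, 1.0, 0.004

for it in range(200):
    batch = random.sample(reactive, len(reactive) // 2)   # half the ensemble per step
    loss = torch.tensor(0.0)
    for xs, ys, noises in batch:
        t = torch.arange(len(noises), dtype=torch.float32) * dt
        z = torch.stack([torch.tensor(xs[:-1], dtype=torch.float32),
                         torch.tensor(ys[:-1], dtype=torch.float32), t], dim=1)
        lam = doob_force(model, z, gamma, kBT)            # (nsteps, 2)
        eta = torch.tensor(noises, dtype=torch.float32)   # recorded noises, (nsteps, 2)
        # Relative-action integrand (1/4*gamma*kBT) ∫dt (lambda^2 - 2 eta·lambda);
        # its ensemble average is the variational estimate of ln(k tf).
        loss = loss + (dt / (4.0 * gamma * kBT)) * (lam**2 - 2.0 * eta * lam).sum()
    loss = loss / len(batch)
    opt.zero_grad()
    loss.backward()
    opt.step()
```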
### Convergence with Limited Statistics
To illustrate the efficiency of this method, we tested the convergence of the NN ansatz with the statistics used to compute the rate and time dependent committor within the previously introduced two-dimensional model potential. Specifically, we tested the convergence of the NN ansatz with the number of reactive trajectories used in training, as well as the time lag between configurations along a reactive trajectory. For both cases, we use two estimators, one which probes how close the restricted trajectory ensemble is to the full trajectory ensemble, and a second which indicates how well a model trained on an approximated trajectory ensemble performs on the original trajectory ensemble. We denote the error from each of these approximate estimates as \(\Delta\mathcal{L}\).
For the first case, we vary the number of trajectories \(N_{\text{t}}\) used for training the model. The model trained on this limited trajectory ensemble, with force denoted by \(\lambda^{N_{\text{t}}}\), is then used to compute the first cumulant for the original trajectory ensemble comprised of the full 200 trajectories, \(\left\langle\Delta U_{\lambda^{N_{\text{t}}}}\right\rangle_{B|A}\). This is compared to the first cumulant obtained by training the model on the original trajectory ensemble with an optimal estimate of the rate. The difference of these two values, plotted in Fig. 2 (A), is an indicator of how close the committor trained on the restricted ensemble is to the actual committor. This plot shows that the estimator converges quickly with \(N_{\text{t}}\), and suggests that about 50 trajectories are sufficient to learn the time-dependent committor for this specific system. Another way of probing how the restricted trajectory ensemble compares to the original trajectory ensemble is to perform both training and averaging in the restricted trajectory ensemble, and compare that estimate to the true rate. This difference between the action averaged in a restricted ensemble \(\left\langle\Delta U_{\lambda^{N_{\text{t}}}}\right\rangle_{B|A,N_{\text{t}}}\), also shown in Fig. 2 (A), is observed to be negative for \(N_{\text{t}}=10\), indicating overfitting of the model to the restricted trajectory ensemble. However, this error vanishes quickly, and plateaus to 0 for \(N_{\text{t}}=50\). This is a reflection of the transition path ensemble and the similarity of different reactive trajectories. The error bars for all of these cases are obtained by training 5 different models on \(N_{\text{t}}\) randomly selected trajectories from the trajectory ensemble.
For the second case, we approximate the trajectory ensemble by storing only every \(d_{c}\)-th configuration, corresponding to a time lag of \(d_{c}\Delta t\), where \(\Delta t\) is the timestep used to integrate the trajectory. The original reactive trajectory ensemble comprises 200 trajectories with 500 discrete timesteps, and the number of configurations used per trajectory is obtained by dividing 500 by \(d_{c}\). We train the model in
Figure 2: Convergence of rate estimates with respect to the accuracy of the reactive trajectory ensemble. (A) Error estimators for the loss function of the NN ansatz as a function of the number of reactive trajectories \(N_{\text{t}}\) used for training. (B) Error estimators for the loss function of the NN ansatz as a function of the number of configurations used per trajectory. \(d_{c}=1\) corresponds to the original ensemble, where every configuration is used for training. Errorbars denote one standard error computed from 5 independent trials.
this approximated trajectory ensemble and compute the same error estimates. The first estimate compares the first cumulant averaged in the original trajectory ensemble (\(d_{c}=1\)) with the model trained on the approximated trajectory ensemble to the loss computed by performing both averaging and training in the original trajectory ensemble, \(\langle\Delta U_{\lambda_{d_{c}}}\rangle_{B|A}\). In this case, this estimate probes how well the NN ansatz is able to extrapolate the forces for timesteps that it has not been trained on. The difference plotted in Fig. 2 (B) indicates that this extrapolation fails quickly. The second estimate probes the loss obtained by performing both averaging and training in the approximated trajectory ensemble, \(\langle\Delta U_{\lambda_{d_{c}}}\rangle_{B|A,d_{c}}\). To get this estimate, the variance in Eq. 14 had to be scaled by a factor of \(d_{c}^{-1}\) to account for the change in the effective timestep \(\Delta t\). This difference plotted in Fig. 2 (B) shows that this approximation only works well for \(d_{c}\leq 5\). The poor scaling with \(d_{c}\) reflects the fact that stochastic diffusions with different variances have no overlap in the continuum limit.[58]
## III Rate decomposition and feature selection
From an information theoretic point of view, the rate is a ratio of a conditioned and an unconditioned trajectory partition function.[2; 43; 59; 60] Our optimization directly minimizes the KL-divergence between a trajectory ensemble driven with force \(\mathbf{\lambda}\) and the undriven reactive trajectory ensemble. As the KL-divergence is expressible by the change in stochastic action along a trajectory, it involves a sum over all the degrees of freedom that the noises act on. For a suboptimal force, the rate is given by the average of the exponential of this quantity, coupling the noises from different degrees of freedom. However, when the variational bound is saturated and the rate is given by a simple mean, the accompanying change of action is linearly decomposable. This decomposition provides mechanistic insight, and affords a means of optimizing the features that form the representation of \(\mathbf{\lambda}\). We generally find a NN ansatz can saturate the bound in Eq. 11, which allows us in this section to explore a variety of featurizations and their corresponding contributions to the rate. Specifically, we consider networks with Cartesian and collective coordinates, as well as those integrated with underdamped equations of motion.
### Cartesian coordinates
Using an NN ansatz allows us to compute the exact time dependent committor and associated Doob force. When Eq. 11 is saturated, the rate is given by the first cumulant of the change in action. This allows us to decompose the rate into independent contributions,
\[kt_{f}=\exp\left[-\sum_{i=1}^{Nd}\langle\Delta U_{\lambda^{*}}^{i}\rangle_{B|A}\right] \tag{20}\]
where
\[\Delta U_{\lambda^{*}}^{i}=\int_{0}^{t_{f}}dt\frac{[\lambda_{i}^{*}(t)]^{2}}{ 4\gamma_{i}k_{\mathrm{B}}T} \tag{21}\]
is the contribution to the rate per stochastic degree of freedom. The change in action, \(\Delta U_{\lambda^{*}}^{i}\), is strictly positive, indicative of the transition probability being less than 1, and results from functional minimization of Eq. 14. The stochastic action for the Langevin equation is a sum of Gaussian random variables for each degree of freedom at each timeslice, and a change in stochastic action is a difference of Gaussian random variables. Given this, and recognizing the quadratic dependence on \(\lambda_{i}^{*}(t)\) for the change in action, we observe that \(\lambda_{i}^{*}(t)\) is essentially fitting the bias in the Gaussian noises generated when conditioning the stochastic process to react. Therefore, only degrees of freedom that require activation, or a rare sequence of noises, will accumulate a significant change in stochastic action or contribute significantly to the rate. Degrees of freedom that are uncorrelated with the reaction will not contribute to the rate, as their noises will remain unbiased.
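Given an optimized model, the per-coordinate averages of Eq. 21 can be accumulated directly from the stored ensemble, as in the sketch below; it reuses the `doob_force` helper and trajectory format of the earlier snippets, which are our own illustrative conventions.

```python
import numpy as np
import torch

def rate_contributions(model, reactive, dt=0.004, gamma=1.0, kBT=1.0):
    """Per-coordinate <Delta U^i>_{B|A} of Eq. 21 for an optimized model."""
    total = np.zeros(2)                                 # one entry per stochastic coordinate
    for xs, ys, noises in reactive:
        t = np.arange(len(noises)) * dt
        z = torch.tensor(np.stack([xs[:-1], ys[:-1], t], axis=1), dtype=torch.float32)
        lam = doob_force(model, z, gamma, kBT).detach().numpy()
        total += dt * (lam**2).sum(axis=0) / (4.0 * gamma * kBT)
    total /= len(reactive)
    return total    # when the bound is saturated, ln(k tf) ≈ -total.sum()
```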
To illustrate how this decomposition can be used to identify the relevance of coordinates, we consider the same 2D system visualized in Fig. 1 (A) and perform a decomposition of the rate. The two stochastic coordinates, \(x\) and \(y\), are fed into the neural network ansatz and optimized. The resultant distributions for the individual components of the stochastic action, \(P[\Delta U_{\lambda^{*}}^{\alpha}]\), for \(\alpha=\{x,y\}\), defined as
\[P[\Delta U_{\lambda^{*}}^{\alpha}]=\left\langle\delta\!\left(\Delta U_{\lambda^{*}}^{\alpha}-\Delta U_{\lambda^{*}}^{\alpha}[\mathbf{X}]\right)\right\rangle_{B|A}\]
are shown in Fig. 3 (A). Neither of the two distributions shows a complete overlap with the distribution of the total rate, \(P[\Delta U_{\lambda^{*}}]\), indicating that both \(x\) and \(y\) are important in describing the reaction coordinate. However \(P[\Delta U_{\lambda^{*}}^{x}]\) is shifted towards larger values, and the expectation value \(\langle\Delta U_{\lambda^{*}}^{x}\rangle_{B|A}\) is found to be larger than \(\langle\Delta U_{\lambda^{*}}^{y}\rangle_{B|A}\), allowing us to quantitatively assert that the coordinate \(x\) is more important to the reaction than \(y\), as it encodes more information about the conditioned path ensemble. However \(y\) is still relevant, in agreement with intuition from the geometry of the potential.
### Collective coordinates
While the decomposition above can quantify the relevance of a coordinate to a reactive event, its contributions are expressed in the bare Cartesian coordinates that enter into the equation of motion. As such, their utility is diminished in many-particle systems which are translationally and rotationally invariant, and for which the number of degrees of freedom is large. A canonical approach in the study of rare events in complex systems is to employ collective coordinates, which are nonlinear combinations of the original Cartesian coordinates and may encode the expected symmetries of the system. In order to extend the formalism into this regime, we consider the transformation between Cartesian and collective coordinates, \(\mathbf{r}\rightarrow\tilde{\mathbf{r}}\), and its subsequent impact on the rate decomposition. The Jacobian of the transformation \(\mathbf{J}_{\mathbf{r}}(\tilde{\mathbf{r}})\) is,
\[\mathbf{J}_{\mathbf{r}}(\tilde{\mathbf{r}})=\left[\nabla_{\mathbf{r}}\tilde{\mathbf{r} }_{1}~{}\cdots~{}\nabla_{\mathbf{r}}\tilde{\mathbf{r}}_{\tilde{n}}\right]\]
which is a matrix of \(Nd\times\tilde{N}\) partial derivatives where \(\tilde{N}\) is the size of the collective variable function space. Under
this transformation, the original forces \(\mathbf{\lambda}(\mathbf{r}^{N},t)\) and the transformed forces \(\mathbf{\tilde{\lambda}}(\mathbf{\tilde{r}},t)\) are related by
\[\mathbf{\lambda}(\mathbf{r}^{N},t)=\mathbf{J}_{\mathbf{r}}^{T}(\mathbf{\tilde{r}}) \cdot\mathbf{\tilde{\lambda}}(\mathbf{\tilde{r}},t) \tag{22}\]
where the force acting on the original coordinate \(\mathbf{r}_{i}\) due to the force \(\mathbf{\tilde{\lambda}}_{j}\) which depends on the collective coordinate \(\mathbf{\tilde{r}}_{j}\) is given by a product of \(\mathbf{\tilde{\lambda}}_{j}\) and the Jacobian element \(J_{ij}\). Inserting this into the expression for the optimal stochastic action, in Eq. 20, we obtain
\[\Delta U_{\lambda^{*}}=\sum_{j,k}^{\bar{N}}\Delta\tilde{U}_{\lambda^{*}}^{jk} \tag{23}\]
with
\[\Delta\tilde{U}_{\lambda^{*}}^{jk}=\int_{0}^{t_{f}}dt\frac{\tilde{\lambda}_{ j}^{*}\tilde{\lambda}_{k}^{*}}{4k_{\rm B}T}\Gamma_{jk}^{-1} \tag{24}\]
where the initially linearly independent factors from each Cartesian coordinate, indexed by \(i\), are expressed as a pair of contributions from the collective coordinates, indexed by \(j\) and \(k\). From this form it is evident that the contributions to the rate incurred from the transformed coordinates \(\mathbf{\tilde{r}}\) are not necessarily independent of each other. Zero coupling between \(\tilde{r}_{j}\) and \(\tilde{r}_{k}\) is obtained when the effective friction \(\Gamma_{jk}^{-1}=\sum_{i=1}^{Nd}J_{ij}J_{ik}/\gamma_{i}=\delta_{jk}/\gamma\), with \(\gamma_{i}=\gamma\), a condition that requires the friction weighted transformed coordinates to be orthogonal.
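A sketch of this pairwise decomposition for a single trajectory is given below; the array names and shapes are our own conventions, and per-coordinate contributions follow from a row sum of the resulting symmetric matrix, as used later in the text.

```python
import numpy as np

def coupled_contributions(lam_tilde, J, dt, gammas, kBT=1.0):
    """Pairwise contributions Delta U~^{jk} of Eq. 24 for one trajectory.

    lam_tilde: (nsteps, Ntilde) optimal forces on the collective coordinates
    J:         (nsteps, Nd, Ntilde) Jacobian d r~ / d r along the trajectory
    gammas:    (Nd,) friction of each Cartesian coordinate
    """
    # Effective inverse friction Gamma^{-1}_{jk} = sum_i J_ij J_ik / gamma_i
    gamma_inv = np.einsum('tij,tik,i->tjk', J, J, 1.0 / gammas)
    pair = np.einsum('tj,tk,tjk->jk', lam_tilde, lam_tilde, gamma_inv)
    return dt * pair / (4.0 * kBT)

# Averaging this matrix over the reactive ensemble gives a decomposition like
# Fig. 6 (A); summing over one index yields per-coordinate contributions.
```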
As an illustration of the decomposition under a change of coordinates, we consider the same 2D system as before, but rather than parameterizing \(\mathbf{\lambda}_{i}\) on \(x\) and \(y\), we transform into polar coordinates \((x,y)\rightarrow(r,\theta)\), where \(r=\sqrt{x^{2}+y^{2}}\) and \(\tan\theta=y/x\). We quantify the contributions to the rate from the polar coordinates by training the NN ansatz on the polar coordinates. The partials are prepared ahead of time and are passed into the loss function along with the noises. The relative action distributions in the transformed coordinates, \(P[\Delta\tilde{U}_{\lambda^{*}}^{\alpha,\alpha^{\prime}}]\) for \(\alpha,\alpha^{\prime}=\{r,\theta\}\) computed using Eq. 24 are shown in Fig. 3 (B). Since polar coordinates are orthogonal and \(\gamma_{x}=\gamma_{y}\), the coupling term \(\Delta\tilde{U}_{\lambda^{*}}^{\alpha,\alpha^{\prime}}=0\) for \(\alpha\neq\alpha^{\prime}\). We observe that the distribution corresponding to the coordinate \(\theta\) almost perfectly overlaps with the total action distribution, indicating that \(\theta\) is an excellent descriptor of the reaction coordinate. The distribution for \(r\) is centered around 0 and narrow, illustrating it is unbiased by conditioning on a reaction and thus contributes little to the rate. This decomposition of the rate in collective coordinates provides a simple metric to identify the relevance of physically meaningful descriptors to a reactive process, without making any _a-priori_ assumptions about the reaction. The form of this metric is purely based on the physical mechanism of the reaction, as it quantifies how conditioning a trajectory ensemble to be reactive shifts the noise distributions per-coordinate. This allows us to do hypothesis testing for the relevance of collective coordinates, and to deduce, from the size of their contributions to the rate, which coordinates are gating the rare event and which are uncorrelated with barrier crossing. This hypothesis testing requires the saturation of the variational bound, which if not achieved points to the lack of relevant features in the NN ansatz.
### Importance of velocity
In a general molecular system, motion is not overdamped and as a consequence the full phase space spanned by both configurational coordinates as well as their conjugate velocities is required to specify a reactive trajectory. In order to understand the importance of including velocity degrees of freedom in a parameterization of \(\mathbf{\lambda}\), we consider formally when it can be neglected. For concreteness, we consider an underdamped Langevin equation of the form
\[m\dot{\mathbf{v}}_{i}=-\gamma\mathbf{v}_{i}+\mathbf{F}_{i}(\mathbf{r}^{N})+ \mathbf{\eta}_{i}\qquad\dot{\mathbf{r}}_{i}=\mathbf{v}_{i} \tag{25}\]
where \(\mathbf{v}_{i}\) is the velocity of particle \(i\) and the rest of the quantities are defined in the same way as in Eq. 1. For simplicity we take the mass, \(m\), and friction \(\gamma\) to be independent of particle index, though generalizations are straightforward. We start by noting that the backward Kolmogorov equation takes the form,
\[\partial_{t}q=-\sum_{i=1}^{N}\left[\mathbf{v}_{i}\cdot\nabla_{\mathbf{r}_{i}}q+\frac{\mathbf{F}_{i}-\gamma\mathbf{v}_{i}}{m}\cdot\nabla_{\mathbf{v}_{i}}q+\frac{\gamma k_{\rm B}T}{m^{2}}\nabla_{\mathbf{v}_{i}}^{2}q\right] \tag{26}\]
Figure 3: Decomposition of the rate into contributions from different degrees of freedom. (A) The distribution of the relative action for the two Cartesian coordinates \(x\) and \(y\). (B) The decomposition of the rate performed for the polar coordinates \(r\) and \(\theta\), using Eq. 24. Due to the orthogonality of the transformation, and the isotropy of the diffusivities, the coupling term is zero. The perfect overlap between \(\Delta\tilde{U}_{\lambda}^{\theta\theta}\) and the total relative action identifies \(\theta\) as an excellent descriptor of the reaction coordinate.
which when solved with the same boundary conditions as Eq. 12 yields the time dependent committor function \(q(\mathbf{r}^{N},\mathbf{v}^{N},t)\) whose arguments we suppress above for ease of notation. Since the noise acts only on the velocities, the Doob force is given by the gradient of the committor with respect to the velocities rather than the positions,
\[\boldsymbol{\lambda}_{i}^{*}(\mathbf{r}^{N},\mathbf{v}^{N},t)=\frac{2\gamma k_{ \mathrm{B}}T}{m}\nabla_{\mathbf{v}_{i}}\ln q(\mathbf{r}^{N},\mathbf{v}^{N},t) \tag{27}\]
thus naively it would seem that parameterizing a velocity dependence is crucial whenever an underdamped equation is used. However, in the limit that \(\gamma^{-1}\to 0\), we find that the velocity dependence can be safely ignored.
This can be understood via application of perturbation theory, where \(q(\mathbf{r}^{N},\mathbf{v}^{N},t)\) is expanded in orders of \(\gamma^{-1}\).[29, 61] To first order, \(q(\mathbf{r}^{N},\mathbf{v}^{N},t)\) becomes
\[q(\mathbf{r}^{N},\mathbf{v}^{N},t)=q_{0}(\mathbf{r}^{N},t)+\frac{m\mathbf{v} }{\gamma}\nabla_{\mathbf{r}}q_{0}(\mathbf{r}^{N},t)+\mathcal{O}(\gamma^{-2}) \tag{28}\]
where \(q_{0}\) is independent of the velocity. Substituting the approximated form of \(q\) into the underdamped backward Kolmogorov equation, we find,
\[\partial_{t}q\approx-\sum_{i=1}^{N}\frac{\mathbf{F}_{i}}{\gamma}\nabla_{\mathbf{r}_{i}}q_{0}-\frac{m\mathbf{v}_{i}^{2}}{\gamma}\nabla_{\mathbf{r}_{i}}^{2}q_{0}+\mathcal{O}(\gamma^{-2}) \tag{29}\]
which when averaged over the Maxwell-Boltzmann distribution, yields
\[\partial_{t}q=-\sum_{i=1}^{N}\frac{\mathbf{F}_{i}\left(\mathbf{r}^{N}\right)}{ \gamma}\nabla_{\mathbf{r}_{i}}q_{0}-\frac{k_{\mathrm{B}}T}{\gamma}\nabla_{ \mathbf{r}_{i}}^{2}q_{0}+\mathcal{O}(\gamma^{-2}) \tag{30}\]
which to first order in \(1/\gamma\) is identical to Eq. 12, the overdamped backward Kolmogorov equation, with \(q(\mathbf{r}^{N},\mathbf{v}^{N},t)\approx q_{0}(\mathbf{r}^{N},t)\). As a consequence, the committor in the overdamped limit becomes a function solely of \(\mathbf{r}^{N}\) and the Doob force is given by a gradient with respect to position. In Appendix A, we show that to \(\mathcal{O}(\gamma^{-2})\) this approximation also saturates the variational inequality for the rate expression.
In order to gain intuition for when the higher order terms in Eq. 28 become negligible, we consider the reaction of a particle in a simple double well potential of the form
\[V(x)/k_{\mathrm{B}}T=\frac{1}{64}(x-4)^{2}(x+4)^{2} \tag{31}\]
where \(x\) is a dimensionless coordinate and we take \(k_{\mathrm{B}}T=m=1\), which determines a dimensionless time unit \(t^{*}=\sqrt{m/k_{\mathrm{B}}T}\). We considered transitions between states defined by the indicator functions
\[h_{A}(x)=\Theta(-x+3.6)\qquad h_{B}(x)=\Theta(x-3.6) \tag{32}\]
and obtain 400 reactive trajectories each of length \(t_{f}/t^{*}=5\) using a timestep of 0.01 \(t^{*}\) and a first-order integrator. We studied this system over a range of \(\gamma/\gamma^{*}\) between 0.1 and 1 with \(\gamma^{*}=m/t^{*}\). We trained a NN ansatz only on the positions and time, and compared the optimized value of the loss function to the brute-force rates evaluated from a direct mean first passage time calculation. Figure 4 (A) shows the difference between the two estimates, along with the brute-force rate as a function of \(\gamma/\gamma^{*}\). The reaction rates show a Kramers' turnover[10] at \(\gamma/\gamma^{*}\approx 0.3\), and the optimized loss is consistently off by a factor of 1.5 for \(\gamma/\gamma^{*}<0.3\). After the turnover, the error in the rate estimate decreases monotonically, until it completely vanishes for \(\gamma/\gamma^{*}=1\). It is surprising that this relatively small friction is already consistent with the overdamped limit.
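The underdamped trajectories used here can be generated with a first-order discretization of Eq. 25, as sketched below; the initialization and the Maxwell-Boltzmann velocity draw are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
kBT, m, dt, tf = 1.0, 1.0, 0.01, 5.0

def force(x):
    # F = -dV/dx for V/kBT = (x - 4)^2 (x + 4)^2 / 64
    return -x * (x**2 - 16.0) / 16.0

def underdamped_trajectory(x, v, gamma):
    """First-order integration of the underdamped Langevin equation, Eq. 25."""
    xs = [x]
    for _ in range(int(round(tf / dt))):
        eta = rng.normal(0.0, np.sqrt(2.0 * gamma * kBT / dt))
        v += dt * (-gamma * v + force(x) + eta) / m
        x += dt * v
        xs.append(x)
    return np.array(xs)

# A segment is reactive if it starts in A (x < 3.6) and ends in B (x > 3.6).
x0, v0 = -4.0, rng.normal(0.0, np.sqrt(kBT / m))   # Maxwell-Boltzmann velocity
traj = underdamped_trajectory(x0, v0, gamma=0.3)
is_reactive = traj[0] < 3.6 and traj[-1] > 3.6
```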
To further understand the importance of velocity in the time dependent committor, we train a model to optimize for the velocity-dependent committor for \(\gamma/\gamma^{*}=0.1\) and \(\gamma/\gamma^{*}=1\). For both of these cases, the optimized value of the loss was within a standard error of the true rate. The plot of the optimized committor evaluated at time
Figure 4: Computation of the committor for reactive processes that are integrated using the underdamped equations of motion. (A) The reaction rate and the error estimate in the loss function for optimizing the velocity-independent committor, \(\mathcal{L}_{x}\), as a function of the friction coefficient \(\gamma\). (B) and (C) show the optimized position and velocity dependent committor of the reaction between the metastable wells depicted by the potential energy surface \(V(x)\) in red, for friction coefficients \(\gamma=1.0\) and \(\gamma=0.1\), respectively. For (B) and (C) the dot-dashed, solid and dashed lines denote slices of the committors at constant velocity, \(v=-1,0\) and \(1\), respectively.
\(t=t_{f}/2\) as a function of velocity and position is shown in Figs. 4 (B) and (C). The functional dependence on time is not strong away from \(t=0\) and \(t=t_{f}\). For \(\gamma/\gamma^{*}=1.0\), \(q(x,v,t)\) depends weakly on \(v\), with the dependence being captured by a linear shift along \(x\) to an otherwise simple sigmoidal dependence on \(x\). This is precisely the dependence expected from the expansion in Eq. 28. However, for \(\gamma/\gamma^{*}=0.1\) the committor is strongly sensitive to the velocity. For \(v=0\) the large-\(x\) behavior of \(q(x,v,t)\) slowly converges to 1, reflecting the possibility that the particle fails to react even at large values of \(x\). For negative velocities, the inflection point of \(q(x,v,t)\) is shifted to positive values of \(x\), consistent with positions where the potential is far enough below the barrier that the particle is trapped. Correspondingly, for positive velocities, \(q(x,v,t)\) is shifted to negative values of \(x\), reflecting the high likelihood of reacting even for positions not quite at the top of the barrier. This behavior is not reproducible by scaling a spatially dependent committor by a simple constant. Hence, featurization of the velocity, or expansion of the committor to higher orders in \(\gamma^{-1}\), is required to accurately encode the time-dependent committor in this low-friction regime.
## IV Application to alanine dipeptide
To examine the efficacy of this method for a complex molecular system, we investigate the isomerization of alanine dipeptide. Alanine dipeptide has two metastable conformations. It can transition between these two states via the rotation of the Ramachandran angles \(\phi\) and \(\psi\). A multitude of path sampling methods have focused on this model due to the collective nature of this transition in the gas phase and in solution. While the transition can be tracked using \(\phi\) and \(\psi\), they serve only as order parameters and are not sufficient in describing the complete reaction coordinate or committor.[14; 18] Significant advancements in methods to parameterize the time independent committor have been made by resolving this model along physically motivated, predetermined order parameters.[21; 23; 28; 62; 63; 64; 65; 66; 67; 23; 68] As we show, choosing among a large number of internal coordinates without consideration of their correlation or coupling risks neglecting important aspects of the transition path ensemble. This is because internal coordinates do not form an orthogonal set of coordinates, and collective motions such as the rotations of a single dihedral angle can be coupled with the motions of angles and other dihedrals. Below we first consider isomerization of alanine dipeptide in implicit solvent, and then in explicit solvent. For both we parameterize \(\ln q(\mathbf{r}^{N},t)\) using the NN ansatz.
### Isomerization in implicit solvent
In implicit solvent we consider isomerization of alanine dipeptide between its \(C_{\text{eq}}\) and \(C_{\text{ax}}\) conformations, as visualized in the inset in Fig. 5. To investigate this reaction, we first generated a reactive trajectory ensemble. Simulations were performed in OpenMM[69] and the AMBER ff14SB forcefield[70] was used for parametrizing the dipeptide interactions. A Langevin thermostat with the leap-frog discretization was used as the integrator.[71] The timestep was chosen to be 1 fs, \(\gamma\) was set to 10 ps\({}^{-1}\) and the transition path length \(t_{f}\) was set as 1 ps. The indicator functions identifying the metastable wells \(C_{\text{ax}}\) and \(C_{\text{eq}}\) were defined using the Ramachandran angle \(\phi\),
\[h_{A}(\phi)=\Theta(\phi-\pi/4)\qquad h_{B}(\phi)=\Theta(-\phi+\pi/4) \tag{33}\]
and the first trajectory was generated by running forward and backward simulations from the top of the saddle point along the dihedral \(\phi=0\). Transition path sampling[2] was used to obtain a reactive trajectory ensemble and the shooting from the top method[23] was used to generate new trajectories. This method offers a way to decrease correlations between the trajectories as well as increase the acceptance rate for new trajectories by performing shooting moves within a restricted region near the saddle point, which for this case was chosen as \(-\pi/6\leq\phi\leq\pi/6\). A total of 1000 trial trajectories were generated, and the acceptance rate came out to be approximately 0.4. Every 5th trajectory in the ensemble was saved and used for analysis for a total of 200 trajectories. For the choice of reaction descriptors, we used all the internal coordinates that did not involve the hydrogen atoms. This set consists of 9 bonds, 11 angles and 12 dihedrals, the latter two of which contain 9 total redundant coordinates. These internal coordinates along with the Jacobians are computed for the 200 saved trajectories, and saved to be used for training.
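Automatic differentiation makes assembling such Jacobians straightforward; the sketch below shows the pattern for a single torsion, with the atom positions left as placeholders and all function names being our own.

```python
import torch

def dihedral(r):
    """Dihedral angle from four atom positions, r of shape (4, 3)."""
    b1, b2, b3 = r[1] - r[0], r[2] - r[1], r[3] - r[2]
    n1 = torch.linalg.cross(b1, b2)
    n2 = torch.linalg.cross(b2, b3)
    m1 = torch.linalg.cross(n1, b2 / b2.norm())
    return torch.atan2((m1 * n2).sum(), (n1 * n2).sum())

def coordinate_and_jacobian(fn, positions):
    """Value of a collective coordinate and its Jacobian d r~ / d r via autodiff."""
    positions = positions.clone().requires_grad_(True)
    value = fn(positions)
    (jac,) = torch.autograd.grad(value, positions)
    return value.item(), jac          # jac has the same shape as `positions`

# Hypothetical example: one torsion evaluated on four placeholder atom positions.
atoms = torch.randn(4, 3, dtype=torch.float64)
phi, J_phi = coordinate_and_jacobian(dihedral, atoms)
```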
We use the overdamped approximation of the committor discussed in Sec. III.3, Eq. 28, which exempts us from including the velocities as part of the feature set. The loss function is modified accordingly for the action of the Langevin leap-frog integrator[72] implemented in OpenMM, as shown in Appendix B. The RMSProp optimizer with a learning rate of 0.001 was used to train the model, and training was performed for 3000 steps in a manner analogous to the other examples. Figure 5 shows the value of the loss function, along with the reaction rate obtained by computing the mean first passage time of 400 reactive trajectories generated independently. The loss function plateaus around the 2500th step, with the value being within one standard error of the true rate. The stopping criterion was taken to be where the change in the mean
Figure 5: Convergence of the loss function for isomerization in implicit solvent along with a representative snapshot of the two metastable conformations \(C_{\text{ax}}\) (left) and \(C_{\text{eq}}\) (right). Solid black line denotes the true rate, \(\ln kt_{f}\), with shading denoting one standard error.
loss function over 500 iterations was less than a threshold of 1.
To gain mechanistic insight into the reaction, we performed the decomposition of the relative action in terms of the internal coordinates. Following Eq. 24, we computed the \(\langle\Delta\tilde{U}^{jk}_{\lambda^{*}}\rangle_{B|A}\) matrix and visualize it in Fig. 6 (A). The angles and dihedrals are represented using the letters \(a\) and \(d\) respectively, and the numbers in the subscripts are defined in Appendix C. We observe that the matrix of contributions to the rate is sparse, with only a few select coordinates and their couplings obtaining a significant value. The contributions from the distances have been removed from the plot as their combined value was calculated to be statistically indistinguishable from zero. The effective decoupling of the bond vibrations from the angles and dihedrals provides evidence that the NN-based ansatz is not overfitting redundant features from the limited input dataset, consistent with physical intuition for stiff bonds.
We also observe a strong coupling among the dihedrals and the angles. Internal coordinates do not form an orthogonal set of coordinates, and the off-diagonal terms in the matrix indicate that the reaction is mediated by the coupling between these internal degrees of freedom. Moreover, the change in action is delocalized between sets of internal coordinates that have been ignored in previous studies. We note that the off-diagonal elements of the matrix are negative, while the diagonal terms are positive. This prevents us from breaking down the rate in terms of additive contributions from different degrees of freedom. However, the matrix is symmetric so we can sum over the rows of the matrix and define the contribution from a single collective coordinate \(j\) as
\[\Delta\tilde{U}^{j}_{\lambda^{*}}=\sum_{k}^{\tilde{N}}\Delta\tilde{U}^{jk}_{ \lambda^{*}} \tag{34}\]
where \(\Delta\tilde{U}^{jk}_{\lambda^{*}}\) is defined the same way as in Eq. 24. Plotted in Fig. 6 (B), this decomposition is found to be positive for almost all the internal coordinates except for two. These negative values are within the standard error. This allows us to extract the leading contributors to the \(C_{\text{ax}}\to C_{\text{eq}}\) reaction. The Ramachandran angle \(\phi\) (\(d_{4}\)) is found to incur the largest contribution. This is a remarkable result as no _a-priori_ information of the reaction coordinate was passed into the model for training. While the indicator functions that were used to define the boundaries of the metastable wells were defined using \(\phi\), the optimization scheme itself did not require any description of the indicator functions. Yet, this method automatically finds the Ramachandran angle \(\phi\) to contribute the most to isomerization, out of 32 internal coordinates, 9 of which are redundant.
This decomposition reveals other leading contributors to the reaction and highlights other order parameters that are activated. The C-N-C\({}_{\alpha}\)-C\({}_{\beta}\) (\(d_{3}\)) and the C\({}_{\beta}\)-C\({}_{\alpha}\)-C-N (\(d_{8}\)) torsions are found to be the next two leading contributors, suggesting that rotation of the Ramachandran angle \(\phi\) is strongly coupled to the orientation of the alkyl bond. Some other important internal coordinates that are selected by this method include the O-C-N angle (\(a_{3}\)), the C-O-C-N improper torsion (\(d_{11}\)) and the C\({}_{\beta}\)-C\({}_{\alpha}\)-C-O (\(d_{7}\)) torsion. These internal coordinates also emphasize the importance of the relative orientation of the O-C bond and the methyl bond. Our final observation is that the contribution from the other Ramachandran angle \(\psi\) (\(d_{6}\)) is found to be effectively zero. This is another significant result, as \(\psi\) has long been used as the second order parameter to explore the isomerization of alanine dipeptide due to the topology of the free energy surface. Our observation that \(\psi\) does not contribute to the reaction coordinate directly contradicts the importance of this dihedral that is implicit in these studies.
### Isomerization in Explicit Solvent
Finally, to demonstrate the ability to tackle very high dimensional systems, we explore the conformational isomerization of alanine dipeptide in explicit solvent. As the potential energy landscape along the Ramachandran angles is modified due to solvent interactions [14, 73], we consider the isomerization between the \(\beta\) and \(\alpha_{L}\) states, visualized in Fig. 7. The equations of motion and forcefields
Figure 6: Decomposition of the rate of isomerization of alanine dipeptide in implicit solvent. (A) The \(\Delta\tilde{U}^{jk}_{\lambda}\) computed using Eq. 24 as a function of internal coordinates. (B) Decomposition of the rate in terms of contributions from internal degrees of freedom, computed by summing up the rows of the matrix in (A).
for the peptide are the same as in the implicit solvent study, and the TIP3P forcefield [74] is used for parameterizing the water molecules. Lorentz-Berthelot mixing rules are used for the peptide-water interactions. The box volume is 27 nm\({}^{3}\) with 862 water molecules; periodic boundary conditions are used, and long-ranged interactions are treated with an Ewald summation. The basin definitions for \(\alpha_{L}\) and \(\beta\) are the same as those of the \(C_{\mathrm{ax}}\) and \(C_{\mathrm{eq}}\) states, respectively. The same method as before is used for obtaining a reactive trajectory ensemble, with an ensemble of 200 reactive trajectories used for learning the time dependent committor.
For this reaction, we restrict the input features to the internal coordinates of the peptide. Noting that bond vibrations are decoupled from rotations of the dihedrals and that water interactions are mediated through hydrogen bonds, our input feature set comprises all 36 angles and 45 dihedrals and contains 42 redundancies. While the solvent degrees of freedom can be parameterized using symmetry functions,[75, 76, 77] our goal is to illustrate how our method can provide quantitative insight into the reaction mechanism even when it does not have access to the full phase space.
Optimization of the NN ansatz is performed using the RMSProp optimizer for 2500 steps, and the results are shown in Fig. 7 (B). The loss function plateaus to a value 2 higher than \(\ln kt_{f}\). This is expected, as the feature set excludes the relevant solvent degrees of freedom. Regardless, we are able to obtain the correct rate from this method by computing the second cumulant and exponential average, the forms of which are given in Eqs. 9 and 10. Both these estimators are plotted, and are observed to converge to the rate computed independently. The agreement between the exponential estimator and second cumulant is only expected when the loss function is perturbatively close to the true value; otherwise additional cumulants would be needed. Note that even in the case where the Doob force is not fully optimal, driven trajectories are almost surely reactive.
Since the variational bound is not saturated, the sum given in Eq. 20 does not equal the rate. This means that the description of the time dependent committor is not exact. However, because we are perturbatively close, we can still use the method to extract the relative importance of degrees of freedom as before. First, we perform the same decomposition as Eq. 34. While not visualized, the \(\Delta\tilde{U}_{\lambda}^{jk}\) matrix is found to be less sparse for the reaction in solvent, due to the renormalization of the solvent effects into the peptide degrees of freedom. To gain quantitative insight, we sum over the rows of the \(\Delta\tilde{U}_{\lambda}^{jk}\) matrix as before, and plot the contributions from the 20 leading features. Plotted in Fig. 8 (A), this decomposition shows that the motions of angles involving hydrogen atoms become more important than the internal rotations of the peptide dihedrals. Most of the leading contributors are angles that involve one or two hydrogen atoms, re-emphasizing the effect of solvent in mediating this reaction. This is in accord with the findings of previous papers on the isomerization of solvated alanine dipeptide.[18, 21, 78, 79]
Figure 8: Decomposition of the rate of isomerization of alanine dipeptide in explicit solvent. (A) Decomposition of the rate in terms of contributions from the top twenty internal degrees of freedom, computed using Eq. 34. (B) Decomposition of the rate in terms of contributions from all atoms, computed using Eq. 20. The atom indices are labelled in the snapshot of the peptide in Appendix C.
Figure 7: Investigation of the conformational isomerization of alanine dipeptide in explicit solvent. (A) Representative snapshot of the two metastable conformations, \(\alpha_{L}\) (left) and \(\beta\) (right). (B) Value of the loss function along with the two rate estimators \(k^{(2)}\) and \(k^{(\mathrm{exp})}\) during training. The rate can be recovered by the two estimators even though the loss function does not converge to \(\ln kt_{f}\). Solid black line denotes the true rate, \(\ln kt_{f}\), with shading denoting one standard error.
However, what is striking is that no single mode is dominant, with no internal coordinate accounting for more than 5% of the rate.
To confirm the role of the hydrogen atoms, we plot a decomposition in terms of the individual atoms in Fig. 8 (B) using the action expressed in the bare coordinates. The plot reveals that the methyl carbon contributes the most to the rate, followed by the acetyl carbonyl oxygen atom. However, the combined importance of the hydrogens far outweighs both. This finding also illuminates why the addition of solvent transforms the reactive mechanism. Both these atoms strongly interact with water molecules via hydrophilic and hydrophobic effects that are mediated through hydrogen-bonding and volume exclusion, respectively.[18; 21; 78; 79] We find that this method is able to provide a rate estimate and quantify the renormalized contributions from different degrees of freedom even when it does not have access to the full phase space. This feature can be particularly useful for more complex systems, where a complete description of the system is not tractable due to computational or memory bottlenecks.
## Conclusion
We have detailed a novel method that can be used to evaluate the time-dependent committor and the rate from a reactive trajectory ensemble. The method employs an ansatz for parameterizing a many-body potential that is related to the time-dependent committor, and can be optimized by variationally solving the backward Kolmogorov equation, as expressed through a trajectory reweighting theory used within variational path sampling. For reactive processes in equilibrium, where the cost of obtaining a reactive trajectory ensemble is independent of the rarity of the reaction, this method provides a simple procedure to compute the rate and distill mechanistic information.
Combining this optimization scheme with a neural network ansatz for the time dependent committor allows us to saturate the variational rate bound, and gives us a complete description of the transition path ensemble. Specifically, we have described how to decompose the rate in terms of additive contributions from different degrees of freedom. This procedure of quantifying contributions can be applied to collective coordinates and order parameters that are used for characterizing reactions of complex molecular systems. We showcase this decomposition by investigating the reaction of Brownian particles in simple potentials in underdamped and overdamped regimes. We have shown how to apply this procedure to conformational changes in solution, leveraging insightful information about the reactive event even when the full phase space is not provided as training data. In cases where the variational bound is not saturated, the rate can still be computed using other estimators.
The formalism employed casts the time dependent committor as an optimal control force, naturally making this model generative. Specifically, when the variational bound is saturated, a time dependent control force is produced that generates reactive trajectories in an unbiased manner. While not used as such here, this procedure can be employed to glean higher order statistics of the reactions over and above the rate.[6] When the variational bound is not saturated, the control force can still be applied to generate unbiased transition path statistics through ensemble reweighting.[44] One could envision an iterative procedure in cases where path sampling is difficult, for example in cases of long diffusive trajectories, where initial control forces are gradually optimized through alternating cycles of training and reactive ensemble generation.
As the method is based on ensembles of trajectories and path reweighting, there is no formal restriction to equilibrium systems. Indeed, variational path sampling was initially applied to systems whose dynamics break detailed balance. As such, the procedures developed here for NN based function approximations and rate decompositions transfer over directly to rare transitions in nonequilibrium steady-states. However, traditional path sampling techniques that render the generation of a path ensemble simple in equilibrium are typically not as effective away from equilibrium. The iterative procedure alluded to above is likely a robust means of extending this methodology to study phase transitions in active matter and driven assembly.
## Acknowledgements
We would like to thank Dr. Avishek Das and Dr. Jorge Rosas-Raíces for illuminating discussions regarding variational path sampling methods. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research and Office of Basic Energy Sciences, via the Scientific Discovery through Advanced Computing (SciDAC) program.
## Appendix A Saturation of variational bound
In Section III.3 we detailed how the time dependent committor can be approximated for a formally underdamped system evolving in an overdamped regime. Here we demonstrate that the approximate form of the time dependent committor saturates the variational rate bound up to order \(\mathcal{O}(\gamma^{-2})\). Using the approximation \(q(\mathbf{r}^{N},\mathbf{v}^{N},t)=q_{0}(\mathbf{r}^{N},t)+m\mathbf{v}\nabla _{\mathbf{r}}q_{0}(\mathbf{r}^{N},t)/\gamma+\mathcal{O}(\gamma^{-2})\) we consider the log transform, \(Q=\ln q\), which to equivalent order in perturbation theory is
\[Q(\mathbf{r}^{N},\mathbf{v}^{N},t) \approx\ln\left(q_{0}+\frac{m\mathbf{v}}{\gamma}\nabla_{ \mathbf{r}}q_{0}+\mathcal{O}(\gamma^{-2})\right)\] \[=\ln q_{0}+\frac{m\mathbf{v}}{\gamma}\nabla_{\mathbf{r}}\ln q_{ 0}+\mathcal{O}(\gamma^{-2}) \tag{10}\]
where we have defined \(Q_{0}=\ln q_{0}\). For an underdamped equation of motion the relative action, \(\Delta U_{\lambda}\), is given by
\[\Delta U_{\lambda}[\mathbf{X}] =-\sum_{i=1}^{N}\frac{1}{4\gamma k_{\mathrm{B}}T}\int_{0}^{t_{f}}dt\,\left[\mathbf{\lambda}_{i}^{2}-2\mathbf{\eta}_{i}\cdot\mathbf{\lambda}_{i}\right] \tag{10}\] \[=-\sum_{i=1}^{N}\frac{1}{4\gamma k_{\mathrm{B}}T}\int_{0}^{t_{f}}dt\,\left[\mathbf{\lambda}_{i}^{2}\right.\] \[\left.-2\mathbf{\lambda}_{i}\cdot\left(m\dot{\mathbf{v}}_{i}+\gamma\mathbf{v}_{i}-\mathbf{F}_{i}(\mathbf{r}^{N})\right)\right]\]
just as in the overdamped case. In the second line we have eliminated the noise by using the equation of motion. Substituting the underdamped Doob force,
\[\mathbf{\lambda}_{i}^{*}=\frac{2\gamma k_{\mathrm{B}}T}{m}\nabla_{\mathbf{v}_{i}}Q (\mathbf{r}^{N},\mathbf{v}^{N},t) \tag{11}\]
into \(\Delta U_{\lambda}[\mathbf{X}]\) we find,
\[\Delta U_{\lambda^{*}} =\sum_{i}^{N}\int_{0}^{t_{f}}dt\left[\dot{\mathbf{v}}_{i}\cdot\nabla_{\mathbf{v}_{i}}Q+\frac{\gamma}{m}\mathbf{v}_{i}\cdot\nabla_{\mathbf{v}_{i}}Q\right. \tag{12}\] \[\left.-\frac{\mathbf{F}_{i}}{m}\cdot\nabla_{\mathbf{v}_{i}}Q-\frac{\gamma k_{\mathrm{B}}T}{m^{2}}(\nabla_{\mathbf{v}_{i}}Q)^{2}\right]\]
The first term can be resolved using Ito's Lemma
\[\dot{Q}=\partial_{t}Q+\sum_{i}^{N}\dot{\mathbf{v}}_{i}\nabla_{\mathbf{v}_{i}}Q +\mathbf{v}_{i}\nabla_{\mathbf{r}_{i}}Q+\frac{\gamma k_{\mathrm{B}}T}{m^{2}} \nabla_{\mathbf{v}_{i}}^{2}Q \tag{13}\]
Substituting this back to the relative action, we get:
\[\Delta U_{\lambda^{*}} =-\int_{0}^{t_{f}}dt\left\{\sum_{i}^{N}\left[\frac{\gamma k_{\mathrm{B}}T}{m^{2}}(\nabla_{\mathbf{v}_{i}}Q)^{2}+\mathbf{v}_{i}\cdot\nabla_{\mathbf{r}_{i}}Q\right.\right. \tag{14}\] \[\left.\left.+\frac{\gamma k_{\mathrm{B}}T}{m^{2}}\nabla_{\mathbf{v}_{i}}^{2}Q-\frac{\gamma}{m}\mathbf{v}_{i}\cdot\nabla_{\mathbf{v}_{i}}Q+\frac{\mathbf{F}_{i}}{m}\cdot\nabla_{\mathbf{v}_{i}}Q\right]-\dot{Q}+\partial_{t}Q\right\}\]
Finally, using the perturbative approximation of \(Q\) in Eq. 10, and substituting the approximated form of the backward Kolmogorov equation in Eq. 29 yields
\[\Delta U_{\lambda^{*}} =-\int_{0}^{t_{f}}dt\,\Big{\{}\sum_{i}^{N}\Big{[}\frac{k_{\mathrm{B}}T}{\gamma}(\nabla_{\mathbf{r}_{i}}Q_{0})^{2}+\frac{m\mathbf{v}_{i}^{2}}{\gamma}\nabla_{\mathbf{r}_{i}}^{2}Q_{0}\] \[\qquad\qquad\qquad+\frac{\mathbf{F}_{i}}{\gamma}\cdot\nabla_{\mathbf{r}_{i}}Q_{0}\Big{]}-\dot{Q}+\partial_{t}Q\Big{\}}\] \[=\int_{0}^{t_{f}}dt\,\dot{Q}=-\ln q \tag{15}\]
Hence, \(\Delta U_{\lambda^{*}}\) quantifies the transition probability between the states \(A\) and \(B\) over time \(t_{f}\) when averaged over an initial distribution in \(A\).
## Appendix B Relative action for Langevin leap-frog integrator
The equations of motion for the Langevin leap-frog integrator are given by [69; 72]
\[\mathbf{v}_{i}[t+\Delta t/2] =\alpha\mathbf{v}_{i}[t-\Delta t/2]+\frac{1-\alpha}{\gamma m_{i}} \mathbf{F}_{i}[t]+\mathbf{\eta}_{i}[t] \tag{16}\] \[\mathbf{r}_{i}[t+\Delta t] =\mathbf{r}_{i}[t]+\mathbf{v}_{i}[t+\Delta t/2]\Delta t\]
where the definitions of \(\mathbf{v}_{i}\), \(\mathbf{r}_{i}\) and \(\mathbf{F}_{i}\) are the same as in Eq. 25, \(m_{i}\) is the mass of particle \(i\), \(\gamma\) is the friction coefficient, \(\Delta t\) is the timestep, and \(\alpha=\exp[-\gamma\Delta t]\). The noise, \(\mathbf{\eta}_{i}\), is a Gaussian random variable with mean \(\langle\mathbf{\eta}_{i}(t)\rangle=0\) and variance \(\langle\mathbf{\eta}_{i}(t)\otimes\mathbf{\eta}_{j}(t^{\prime})\rangle=k_{\mathrm{B}}T(1-\alpha^{2})m_{i}^{-1}\delta_{ij}\mathbf{1}(t-t^{\prime})\). For this discretization, the relative stochastic action is
\[\Delta U_{\lambda}=\sum_{n}^{t_{f}/\Delta t}\sum_{i}^{N}\frac{(1-\alpha)\mathbf{ \lambda_{i}}^{2}[n\Delta t]}{2(1+\alpha)m_{i}\gamma^{2}k_{\mathrm{B}}T}-\frac {\mathbf{\lambda_{i}}[n\Delta t]\mathbf{\eta}_{i}[n\Delta t]}{\gamma k_{\mathrm{B}}T(1 +\alpha)} \tag{17}\]
which is the same general form as in the overdamped case.
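In code, the accumulated relative action for this discretization might read as follows; the array shapes are assumptions, and the recorded noises are those drawn during the leap-frog velocity update.

```python
import numpy as np

def leapfrog_relative_action(lam, eta, masses, gamma, dt, kBT=1.0):
    """Relative stochastic action of Eq. B2 for the Langevin leap-frog scheme.

    lam, eta: (nsteps, N, 3) control forces and recorded noises
    masses:   (N,) particle masses
    """
    alpha = np.exp(-gamma * dt)
    quad = ((1.0 - alpha) * (lam**2).sum(axis=2)
            / (2.0 * (1.0 + alpha) * masses[None, :] * gamma**2 * kBT))
    cross = (lam * eta).sum(axis=2) / (gamma * kBT * (1.0 + alpha))
    return (quad - cross).sum()
```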
## Appendix C Internal coordinates for alanine dipeptide
In the studies on alanine dipeptide we parameterized our NN ansatz for the time dependent committor based on a set of internal coordinates. In Tables 1 and 2 we define each of the angles and dihedrals referred to in the main text based on the atom numbering in Fig. 9.
Table 1: Contribution to rate in explicit solvent

| Label | Type | Description | Contribution |
| --- | --- | --- | --- |
| \(a_{13}\) | Angle | 0 - 1 - 3 | 0.035 |
| \(a_{41}\) | Angle | 17 - 16 - 18 | 0.035 |
| \(d_{4}\) | Dihedral | 4 - 6 - 8 - 14 | 0.034 |
| \(a_{23}\) | Angle | 7 - 6 - 8 | 0.032 |
| \(a_{33}\) | Angle | 11 - 10 - 12 | 0.032 |
| \(a_{12}\) | Angle | 0 - 1 - 2 | 0.031 |
| \(a_{15}\) | Angle | 2 - 1 - 3 | 0.030 |
| \(a_{34}\) | Angle | 11 - 10 - 13 | 0.029 |
| \(d_{21}\) | Dihedral | 5 - 4 - 6 - 7 | 0.026 |
| \(a_{47}\) | Angle | 20 - 18 - 21 | 0.025 |
| \(d_{7}\) | Dihedral | 10 - 8 - 14 - 15 | 0.024 |
| \(d_{3}\) | Dihedral | 4 - 6 - 8 - 10 | 0.023 |
| \(d_{11}\) | Dihedral | 1 - 6 - 4 - 5 | 0.023 |
| \(a_{27}\) | Angle | 9 - 8 - 10 | 0.021 |
| \(d_{23}\) | Dihedral | 4 - 6 - 8 - 9 | 0.019 |
| \(a_{35}\) | Angle | 12 - 10 - 13 | 0.018 |
| \(a_{44}\) | Angle | 16 - 18 - 21 | 0.017 |
| \(d_{33}\) | Dihedral | 9 - 8 - 10 - 12 | 0.015 |
| \(a_{42}\) | Angle | 16 - 18 - 19 | 0.015 |
| \(d_{17}\) | Dihedral | 3 - 1 - 4 - 5 | 0.015 |
Figure 9: Atom indices for alanine peptide. |
2309.11485 | Decision-Directed Hybrid RIS Channel Estimation with Minimal Pilot
Overhead | To reap the benefits of reconfigurable intelligent surfaces (RIS), channel
state information (CSI) is generally required. However, CSI acquisition in RIS
systems is challenging and often results in very large pilot overhead,
especially in unstructured channel environments. Consequently, the RIS channel
estimation problem has attracted a lot of interest and also been a subject of
intense study in recent years. In this paper, we propose a decision-directed
RIS channel estimation framework for general unstructured channel models. The
employed RIS contains some hybrid elements that can simultaneously reflect and
sense the incoming signal. We show that with the help of the hybrid RIS
elements, it is possible to accurately recover the CSI with a pilot overhead
proportional to the number of users. Therefore, the proposed framework
substantially improves the system spectral efficiency compared to systems with
passive RIS arrays since the pilot overhead in passive RIS systems is
proportional to the number of RIS elements times the number of users. We also
perform a detailed spectral efficiency analysis for both the pilot-directed and
decision-directed frameworks. Our analysis takes into account both the channel
estimation and data detection errors at both the RIS and the BS. Finally, we
present numerous simulation results to verify the accuracy of the analysis as
well as to show the benefits of the proposed decision-directed framework. | Ly V. Nguyen, A. Lee Swindlehurst | 2023-09-20T17:29:30Z | http://arxiv.org/abs/2309.11485v1 | # Decision-Directed Hybrid RIS Channel Estimation with Minimal Pilot Overhead
###### Abstract
To reap the benefits of reconfigurable intelligent surfaces (RIS), channel state information (CSI) is generally required. However, CSI acquisition in RIS systems is challenging and often results in very large pilot overhead, especially in unstructured channel environments. Consequently, the RIS channel estimation problem has attracted a lot of interest and also been a subject of intense study in recent years. In this paper, we propose a decision-directed RIS channel estimation framework for general unstructured channel models. The employed RIS contains some hybrid elements that can simultaneously reflect and sense the incoming signal. We show that with the help of the hybrid RIS elements, it is possible to accurately recover the CSI with a pilot overhead proportional to the number of users. Therefore, the proposed framework substantially improves the system spectral efficiency compared to systems with passive RIS arrays since the pilot overhead in passive RIS systems is proportional to the number of RIS elements times the number of users. We also perform a detailed spectral efficiency analysis for both the pilot-directed and decision-directed frameworks. Our analysis takes into account both the channel estimation and data detection errors at both the RIS and the BS. Finally, we present numerous simulation results to verify the accuracy of the analysis as well as to show the benefits of the proposed decision-directed framework.
Reconfigurable intelligent surfaces, channel estimation, sensing, decision-directed, spectral efficiency analysis.
## I Introduction
Reconfigurable intelligent surfaces (RIS) are a novel technology that has changed the conventional long-standing perspective that wireless channels are an uncontrollable part of the environment. RISs are planar arrays composed of elements whose electromagnetic reflection coefficients can be adaptively configured to shape the wireless channel in beneficial ways. As such, they can be deployed to improve the system throughput, network coverage, or energy efficiency [1, 2]. However, the exploitation of this channel-shaping ability generally requires RIS-related channel state information (CSI), which is challenging to obtain since the number of RIS elements can be very large, and the RIS elements are often constructed as passive devices without active radio-frequency (RF) chains or computational resources. Therefore, the RIS channel estimation problem has been a subject of intense study in the last few years [3]. The literature on RIS channel estimation can be divided into two categories including structured and unstructured channel estimations. While structured channel estimation considers models that are parameterized by the angles of arrival (AoAs), angles of departure (AoDs), and complex gains of the propagation paths, unstructured channel estimation methods assume more generic channels described by arbitrary complex coefficients.
Numerous results on structured RIS channel estimation have been reported, for example in [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21], where the sparsity property of high-frequency (e.g., millimeter-wave, or "mmWave") channels is exploited to reduce the pilot overhead. For example, the studies in [4, 5, 6, 7, 8, 9, 10, 11, 12] formulated the cascaded mmWave channel estimation problem as a sparse signal recovery problem so that various compressive sensing techniques can be exploited to recover the channel parameters, e.g., distributed orthogonal matching pursuit (OMP) [4], iterative atom pruning based subspace pursuit (IAP-SP) [5], atomic norm minimization [7], Newtonized orthogonal matching pursuit [8], alternating direction method of multipliers (ADMM) [10], and the hybrid multi-objective evolutionary paradigm [12]. Several other system scenarios and designs were investigated in [13, 14, 15, 17, 18]. More specifically, the work in [13] considers low-precision analog-to-digital converters (ADCs) at the BS and derives a linear channel estimator. The authors in [15] exploited the sparse structure of mmWave channels to derive a Cramer-Rao lower bound (CRB) for the channel parameters, which is then optimized to design an RIS reflection pattern. The effect of beam squint was taken into account in [18] and a twin-stage orthogonal matching pursuit (TS-OMP) algorithm was developed to estimate the channel parameters. The double-structured angular sparsity of cascaded channels was exploited in [14, 17] to both reduce the pilot overhead and improve the estimation performance. The work in [19] developed a maximum likelihood (ML) channel estimation framework for estimating the line-of-sight (LoS) user-RIS channel. Exploiting the fact that the channel angles vary much slower than the channel gains, the authors in [16] proposed a two-timescale parametric estimation strategy which estimates all the channel angles and gains in the first coherence block, and then only re-estimates the channel gains in the remaining coherence blocks. Joint channel estimation and data detection taking into account the sparsity and low-rank structure of mmWave channels was studied in [20, 21], where Taylor series expansion and Gaussian approximation were used in [20] and a two-stage fitting algorithm was derived in [21].
Unlike the aforementioned works where all the RIS elements are assumed to be passive, some other structured channel estimation studies in [22, 23, 24, 25, 26, 27] assume that the RIS contains a small number of active elements that can operate in
sensing mode to estimate partial CSI, which is then exploited together with the sparsity structure of mmWave channels to reconstruct the full CSI. While compressed sensing methods were used in [22, 23], some other techniques were employed in [24, 25, 26], e.g., estimation of signal parameters via rotational invariance techniques (ESPRIT) and multiple signal classification (MUSIC) in [24, 26] and deep residual networks in [25]. Unlike the methods in [22, 23, 24, 25, 26] that require both uplink and downlink training signals, the work in [27] developed a variational inference-sparse Bayesian learning channel estimator that uses only the uplink training signals and exploits the received signals at both the RIS and the BS.
On the other hand, unstructured RIS channel estimation has also been rigorously investigated, e.g., in [28, 29, 30]. For single-user systems, the works in [28, 29] used a binary reflection strategy where only one reflecting element is turned on in each time slot. It was then shown in [30, 31] that turning on all the RIS elements at the same time and using a discrete Fourier transform (DFT) matrix as the reflecting pattern provides better performance than the binary reflection strategy. Similar results were also reported for the case of multiple users in [32]. Additionally, the study in [33] examines the reflecting pattern design problem under the restriction that the phase shifts are limited to a finite set of discrete values. For multi-user systems, the work in [34] exploits known spatial correlation at both the BS and the RIS as well as other statistical characteristics of multi-specular fading to derive Bayesian channel estimators. The work in [38] assumes a low-rank RIS-BS channel and develops a two-stage algorithm based on matrix factorization and matrix completion. Other methods such as matrix-calibration-based factorization and parallel factor tensor decomposition were used in [36] and [35, 37], respectively. More general channel models were considered in [39, 40], where two- and three-phase estimation approaches were proposed, respectively. While both of these approaches require the same pilot overhead, the two-phase approach outperforms the other thanks to the alleviation of error propagation. Joint channel estimation and data detection for unstructured channels was also studied in [41, 42]. Expectation-maximization was exploited in [41] for a single-user system, and an on/off reflection strategy was used in [42] for a multiuser system.
In all of the above unstructured channel estimation methods, passive RIS arrays were used. In this paper, we consider a recent hybrid RIS structure [43, 44, 45] in which the RIS elements can simultaneously reflect and sense the incoming signal, and develop a decision-directed (DD) channel estimator that can be used for unstructured channels where AoA/AoD information cannot be exploited. The novelty of the approach lies in the application of hybrid RIS for unstructured channel estimation, and the use of DD to reduce the pilot overhead. The contributions of this paper are summarized as follows:
* Based on the hybrid RIS structure, we first develop a two-phase pilot-directed (PD) channel estimation approach. The estimation strategy is similar to that in [40] but we show that the pilot overhead is lower for multiuser systems.
* Next, we propose a two-phase DD channel estimation framework and we show that with the help of the hybrid RIS elements, it is possible to accurately recover the CSI with a pilot overhead only proportional to the number of users. Therefore, the proposed DD framework substantially improves the system spectral efficiency (SE). More specifically, in the channel estimation stage, the users transmit a sequence including both pilot and data symbols where the number of pilot symbols is the same as the number of users. The RIS uses some sensor elements with RF chains to recover the data symbols, and then forwards the detected data symbols to the BS for cascaded channel estimation. For the BS to accurately estimate the CSI, the RIS phase shifts must be varied. We point out that changing the RIS phase shifts does not affect data detection by the sensing RIS elements, and thus both data recovery at the RIS and channel estimation at the BS are guaranteed. We also explain why accurate CSI recovery is not guaranteed when the DD approach is applied at the BS and the RIS has no sensing elements.
* We then perform a detailed spectral efficiency (SE) analysis for both the PD and DD frameworks for single-user systems. Our analysis takes into account both the channel estimation and data detection errors at the RIS and the BS, and thus accurately reflects the uncertainty of RIS-assisted data detection in the DD framework. It is observed that there is often a crossing point at which the DD framework outperforms the PD one, and so the analysis can be used to decide when the PD or DD approach should be used. Finally, we present numerous simulation results to verify the accuracy of the SE analysis as well as to show the benefits of the proposed DD framework.
The rest of this paper is organized as follows: Section II presents the considered system model. The pilot-directed and decision-directed channel estimation frameworks are presented in Section III and Section IV, respectively. We perform the spectral efficiency analysis in Section V. Section VI shows simulation results, and finally Section VII concludes the paper.
_Notation_: Upper-case and lower-case boldface letters denote matrices and column vectors, respectively. Scalars \(x_{ij}\) and \([\mathbf{X}]_{ij}\) both denote the element at the \(i\)th row and \(j\)th column of a matrix \(\mathbf{X}\). Vectors \(\mathbf{x}_{i}\) and \(\mathbf{X}_{:,i}\) both denote the \(i\)th column of a matrix \(\mathbf{X}\), while \(\mathbf{X}_{k,:}\) denotes the \(k\)-th row of \(\mathbf{X}\). The notation \(\mathbf{X}_{i:j,k:\ell}\) represents the sub-matrix of \(\mathbf{X}\) that includes rows \(i\) to \(j\) and columns \(k\) to \(\ell\). The expectation, variance, and covariance of random quantities are denoted by \(\mathbb{E}[\cdot]\), \(\mathrm{Var}[\cdot]\), and \(\mathrm{Cov}[\cdot]\), respectively. Depending on the context, the operator \(|\cdot|\) is used to denote the absolute value of a number or the cardinality of a set. The \(\ell_{2}\)-norm of a vector is represented by \(\|\cdot\|\). The transpose and conjugate transpose are denoted by \([\cdot]^{T}\) and \([\cdot]^{H}\), respectively; \(j\) is the unit imaginary number satisfying \(j^{2}=-1\); and \(\mathcal{N}(\cdot,\cdot)\) and \(\mathcal{CN}(\cdot,\cdot)\) represent the real and complex normal distributions, respectively, where the first argument is the mean and the second argument is the variance or the covariance matrix. The \(i\)-th element of the set \(\mathcal{A}\) is indicated by \(\mathcal{A}(i)\). The Q-function that quantifies the tail distribution of a standard normal random variable is given
by \(Q(\cdot)\).
## II System Model
We consider an uplink RIS-assisted MIMO system in which a BS with \(M\) antennas serves \(K\) single-antenna users under the assistance of an \(N\)-element RIS. Let \(\mathbf{H}_{\mathrm{d}}\in\mathbb{C}^{M\times K}\), \(\mathbf{H}\in\mathbb{C}^{M\times N}\), and \(\mathbf{G}\in\mathbb{C}^{N\times K}\) denote the direct channel from the users to the BS, the channel from the RIS to the BS, and the channel from the users to the RIS, respectively. The RIS contains a number of sensing elements equipped with radio-frequency (RF) chains as illustrated in Fig. 1. These sensing elements are able to simultaneously reflect and sense the impinging signal. Let \(\mathcal{A}\) denote the index set of the sensing elements, so that \(\mathcal{A}\subset\{1,\,\ldots,\,N\}\), and let \(N_{\mathcal{A}}\) be the number of sensing elements, i.e., \(N_{\mathcal{A}}=|\mathcal{A}|\), where it is assumed that \(K\leq N_{\mathcal{A}}\ll N\).
Define the channel matrices \(\mathbf{H}_{\mathrm{d}}\triangleq[\mathbf{h}_{\mathrm{d},1},\,\ldots,\, \mathbf{h}_{\mathrm{d},K}]\) and \(\mathbf{G}\triangleq[\mathbf{g}_{1},\,\ldots,\,\mathbf{g}_{K}]\), so that the received signal at the BS is modeled as
\[\mathbf{y}^{\mathtt{BS}} =\sqrt{P}\sum_{k=1}^{K}(\mathbf{h}_{\mathrm{d},k}+\mathbf{H} \operatorname{diag}\left(\mathbf{g}_{k}\right)\operatorname{diag}\left( \boldsymbol{\rho}\right)\boldsymbol{\phi})s_{k}+\mathbf{n}^{\mathtt{BS}} \tag{1}\] \[=\sqrt{P}\sum_{k=1}^{K}\mathbf{H}_{\mathrm{c},k}\operatorname{ diag}\left(\begin{bmatrix}1\\ \boldsymbol{\rho}\end{bmatrix}\right)\begin{bmatrix}1\\ \boldsymbol{\phi}\end{bmatrix}s_{k}+\mathbf{n}^{\mathtt{BS}} \tag{2}\]
where \(\boldsymbol{\phi}=[\phi_{1},\,\ldots,\,\phi_{N}]^{T}\) is the phase shift vector of the RIS, \(\mathbf{H}_{\mathrm{c},k}=[\mathbf{h}_{\mathrm{d},k},\,\mathbf{H} \operatorname{diag}\left(\mathbf{g}_{k}\right)]\in\mathbb{C}^{M\times(N+1)}\) is the cascaded channel of the \(k\)-th user, \(P\) is the transmit power, and \(\boldsymbol{\rho}\triangleq[\rho_{1},\,\ldots,\,\rho_{N}]^{T}\) with \(0\leq\rho_{n}\leq 1\) if \(n\in\mathcal{A}\), otherwise \(\rho_{n}=1\). Hence, \(\rho_{n}^{2}\) is the portion of the power of the impinging signal that is reflected by the \(n\)-th RIS element. For convenience, we use the notation \(\boldsymbol{\rho}^{\mathcal{A}}=[\rho_{1}^{\mathcal{A}},\,\ldots,\,\rho_{N_{ \mathcal{A}}}^{\mathcal{A}}]^{T}\) where \(\rho_{i}^{\mathcal{A}}\triangleq\rho_{\mathcal{A}(i)}\) for \(i=1,\,\ldots,\,N_{\mathcal{A}}\), and \(\boldsymbol{\eta}^{\mathcal{A}}=[\eta_{1}^{\mathcal{A}},\,\ldots,\,\eta_{N_{ \mathcal{A}}}^{\mathcal{A}}]^{T}\) where \(\eta_{i}^{\mathcal{A}}=\sqrt{1-(\rho_{i}^{\mathcal{A}})^{2}}\). Hence, \((\eta_{i}^{\mathcal{A}})^{2}\) represents the amount of signal power absorbed by the RIS element \(\mathcal{A}(i)\).
With \(N_{\mathcal{A}}\) sensing elements at the RIS, the received signal at the RIS is given as
\[\mathbf{y}^{\mathtt{RIS}}=\sqrt{P}\operatorname{diag}\left( \boldsymbol{\eta}^{\mathcal{A}}\right)\operatorname{diag}\left(\boldsymbol{ \phi}^{\mathcal{A}}\right)\sum_{k=1}^{K}\mathbf{g}_{k}^{\mathcal{A}}s_{k}+ \mathbf{n}^{\mathtt{RIS}}\, \tag{3}\]
where \(\mathbf{g}_{k}^{\mathcal{A}}=[g_{k,1}^{\mathcal{A}},\,\ldots,\,g_{k,N_{\mathcal{A}}}^{\mathcal{A}}]^{T}\) with \(g_{k,i}^{\mathcal{A}}\triangleq g_{k,\mathcal{A}(i)}\). Throughout this paper, the superscripts \((\cdot)^{\mathcal{A}}\) and \((\cdot)^{\mathcal{B}}\) denote variables associated with the sensing and purely reflecting RIS elements, respectively, where \(\mathcal{B}\triangleq\{1,\,\ldots,\,N\}\setminus\mathcal{A}\) and \(N_{\mathcal{B}}\triangleq N-N_{\mathcal{A}}\).
We assume an uplink communication protocol with two stages: a channel estimation stage followed by a data transmission stage. After the channel estimation stage, the RIS phase shifts are optimized and configured before the data transmission stage begins. It should be noted that data detection occurs at the RIS during the channel estimation stage, since the users transmit both pilot and data symbols during this stage. During the data transmission stage, in order to minimize power consumption at the RIS, the sensing function of the hybrid RIS elements is turned off and the incoming signal is completely reflected.
## III Pilot-Directed Channel Estimation
In this section, we present a two-phase pilot-directed approach for estimating the cascaded channel matrices \(\mathbf{H}_{\mathrm{c},1},\,\ldots,\,\mathbf{H}_{\mathrm{c},K}\). Since all the users experience the same RIS-BS channel, i.e., the same channel matrix \(\mathbf{H}\), the total number of channel elements to be estimated is \(M(K+N)+N(K-1)\)[39, 40]. Let \(\mathbf{A}_{k}=\mathbf{H}\operatorname{diag}\left(\mathbf{g}_{k}\right)\); then \(\mathbf{A}_{k}=\mathbf{A}_{1}\operatorname{diag}\left(\boldsymbol{\lambda}_{k}\right)\), where \(\lambda_{k,n}=g_{k,n}/g_{1,n}\) for \(k=2,\,\ldots,\,K\) and \(n=1,\,\ldots,\,N\). Note that \(\boldsymbol{\lambda}_{1}=\mathbf{1}_{N}\). Therefore, it suffices to estimate \(\mathbf{H}_{\mathrm{d}}\), \(\mathbf{A}_{1}\), and \(\boldsymbol{\lambda}_{2},\,\ldots,\,\boldsymbol{\lambda}_{K}\).
Our two-phase estimation strategy is similar to that in [40] where \(\mathbf{H}_{\mathrm{c},1}=[\mathbf{h}_{\mathrm{d},1},\,\mathbf{A}_{1}]\) is estimated in phase 1 and \(\mathbf{h}_{\mathrm{d},2},\,\ldots,\,\mathbf{h}_{\mathrm{d},K}\) and \(\boldsymbol{\lambda}_{2},\,\ldots,\,\boldsymbol{\lambda}_{K}\) are estimated in phase 2. However, unlike the work in [40], which considers an RIS with passive elements only, our work here considers a hybrid RIS structure as presented above. In this section, we assume that only pilot signals are used for the channel estimation. For notational convenience, let \(\mathcal{T}_{1}=\{1,\,\ldots,\,\tau_{1}\}\) and \(\mathcal{T}_{2}=\{\tau_{1}+1,\,\ldots,\,\tau_{1}+\tau_{2}\}\) where \(\tau_{1}\) and \(\tau_{2}\) are the lengths of phase 1 and phase 2, respectively.
### _Phase 1_
In this phase, we estimate \(\mathbf{H}_{\mathrm{c},1}=[\mathbf{h}_{\mathrm{d},1},\,\mathbf{A}_{1}]\). One selected user transmits a pilot vector of length \(N+1\), while the other users remain idle. Without loss of generality, we set the index of the typical user to 1. The received signal at the BS in this phase is given as
\[\mathbf{y}_{t}^{\mathtt{BS}}=\sqrt{P}\mathbf{H}_{\mathrm{c},1} \operatorname{diag}\left(\begin{bmatrix}1\\ \boldsymbol{\rho}\end{bmatrix}\right)\begin{bmatrix}1\\ \boldsymbol{\phi}_{t}\end{bmatrix}s_{1,t}+\mathbf{n}_{t}^{\mathtt{BS}}. \tag{4}\]
Since \(\mathbf{H}_{\mathrm{c},1}\) contains \(N+1\) columns, we need at least \(\tau_{1}=N+1\) time slots to accurately estimate \(\mathbf{H}_{\mathrm{c},1}\). For simplicity, we can set the pilot vector as \(\mathbf{S}_{1,\mathcal{T}_{1}}=\mathbf{1}_{\tau_{1}}^{T}\) and the RIS phase shift matrix \(\boldsymbol{\Phi}\) is chosen so that \([\mathbf{1}_{\tau_{1}},\boldsymbol{\Phi}^{T}]^{T}=\mathbf{V}_{\tau_{1}}\) where \(\mathbf{V}_{\tau_{1}}\) is the DFT matrix of size \(\tau_{1}\times\tau_{1}\). This means \([1,\boldsymbol{\phi}_{t}^{T}]^{T}\) is the \(t\)-th column of \(\mathbf{V}_{\tau_{1}}\). Then, the cascaded channel \(\mathbf{H}_{\mathrm{c},1}\) can be estimated via standard methods, such as for example the least-squares (LS):
\[\widehat{\mathbf{H}}_{\mathrm{c},1} =\frac{1}{\sqrt{P}\tau_{1}}\mathbf{Y}_{:,\mathcal{T}_{1}}^{\mathtt{BS}}\boldsymbol{\Phi}_{\tau_{1}}^{H}\operatorname{diag}\left(\begin{bmatrix}1\\ \boldsymbol{\rho}\end{bmatrix}\right)^{-1}\] \[=\mathbf{H}_{\mathrm{c},1}+\frac{1}{\sqrt{P}\tau_{1}}\mathbf{N}_{:,\mathcal{T}_{1}}^{\mathtt{BS}}\boldsymbol{\Phi}_{\tau_{1}}^{H}\operatorname{diag}\left(\begin{bmatrix}1\\ \boldsymbol{\rho}\end{bmatrix}\right)^{-1}. \tag{5}\]
Fig. 1: Sensing-RIS-assisted multi-user MIMO system.
The received signal at the sensing elements of the RIS is
\[\mathbf{y}_{t}^{\mathtt{RIS}}=\sqrt{P}\operatorname{diag}\left(\mathbf{\eta}^{\mathcal{ A}}\right)\operatorname{diag}\left(\mathbf{\phi}_{t}^{\mathcal{A}}\right)\mathbf{g}_{1}^{ \mathcal{A}}+\mathbf{n}_{t}^{\mathtt{RIS}} \tag{6}\]
and so the sensed portion of the channel \(\mathbf{g}_{1}^{\mathcal{A}}\) can be estimated as
\[\mathbf{\hat{g}}_{1}^{\mathcal{A}}=\frac{1}{\sqrt{P}\tau_{1}} \operatorname{diag}\left(\mathbf{\eta}^{\mathcal{A}}\right)^{-1}\sum_{t=1}^{\tau_ {1}}\mathbf{\psi}_{t}. \tag{7}\]
where \(\mathbf{\psi}_{t}=\operatorname{diag}\left(\mathbf{\phi}_{t}^{\mathcal{A}}\right)^{-1 }\mathbf{y}_{t}^{\mathtt{RIS}}\).
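As a concrete illustration of phase 1, the following Python sketch simulates the pilot transmission in (4) with a DFT reflection pattern and applies the LS estimators (5) and (7). All sizes, the sensing set, and the noise level are illustrative assumptions rather than values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, P = 4, 16, 1.0                  # BS antennas, RIS elements, transmit power (assumed)
A = np.array([0, 1])                  # hypothetical index set of the sensing elements
rho = np.ones(N); rho[A] = 0.5        # power-splitting coefficients rho_n
eta = np.sqrt(1 - rho[A] ** 2)        # sensed-power coefficients eta_i
tau1 = N + 1
sigma_n = 1e-2

# Ground-truth cascaded channel H_c1 = [h_d1, H diag(g1)] and user-RIS channel g1
H_c1 = (rng.standard_normal((M, N + 1)) + 1j * rng.standard_normal((M, N + 1))) / np.sqrt(2)
g1 = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

# DFT reflection pattern: [1; phi_t] is the t-th column of V_{tau1}
V = np.fft.fft(np.eye(tau1))
D = np.concatenate(([1.0], rho))      # diag([1; rho])

# Received pilots at the BS over tau1 slots (all pilot symbols equal to 1), eq. (4)
noise = sigma_n * (rng.standard_normal((M, tau1)) + 1j * rng.standard_normal((M, tau1))) / np.sqrt(2)
Y = np.sqrt(P) * H_c1 @ np.diag(D) @ V + noise

# LS estimate at the BS, eq. (5); V V^H = tau1 * I for the unnormalized DFT matrix
H_hat = (Y @ V.conj().T) / (np.sqrt(P) * tau1) @ np.diag(1.0 / D)
print("BS NMSE :", np.linalg.norm(H_hat - H_c1) ** 2 / np.linalg.norm(H_c1) ** 2)

# RIS-side estimate of g1^A, eqs. (6)-(7)
Phi = V[1:, :]                        # phi_t stacked as columns
psi_sum = np.zeros(len(A), dtype=complex)
for t in range(tau1):
    n_ris = sigma_n * (rng.standard_normal(len(A)) + 1j * rng.standard_normal(len(A))) / np.sqrt(2)
    y_ris = np.sqrt(P) * eta * Phi[A, t] * g1[A] + n_ris
    psi_sum += y_ris / Phi[A, t]      # psi_t = diag(phi_t^A)^{-1} y_t^RIS
g1A_hat = psi_sum / (np.sqrt(P) * tau1 * eta)
print("RIS NMSE:", np.linalg.norm(g1A_hat - g1[A]) ** 2 / np.linalg.norm(g1[A]) ** 2)
```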
### _Phase 2_
In this phase, the typical user remains idle while the other users transmit pilot sequences. Since \(\mathbf{h}_{\mathrm{d},1}\) and \(\mathbf{A}_{1}\) have been estimated in phase 1, we will estimate \(\mathbf{h}_{\mathrm{d},2},\,\ldots,\,\mathbf{h}_{\mathrm{d},K}\) and \(\mathbf{\lambda}_{2},\,\ldots,\,\mathbf{\lambda}_{K}\) during phase 2. The received signal at the BS can be decomposed as
\[\mathbf{y}_{t}^{\mathtt{BS}}=\sqrt{P}\sum_{k=2}^{K}\left(\mathbf{h}_{\mathrm{d},k}+\mathbf{A}_{1}^{\mathcal{B}}\operatorname{diag}\left(\boldsymbol{\phi}_{t}^{\mathcal{B}}\right)\boldsymbol{\lambda}_{k}^{\mathcal{B}}+\right.\] \[\left.\mathbf{A}_{1}^{\mathcal{A}}\operatorname{diag}\left(\boldsymbol{\rho}^{\mathcal{A}}\right)\operatorname{diag}\left(\boldsymbol{\phi}_{t}^{\mathcal{A}}\right)\boldsymbol{\lambda}_{k}^{\mathcal{A}}\right)s_{k,t}+\mathbf{n}_{t}^{\mathtt{BS}} \tag{8}\]
where \(\mathbf{A}_{1}^{\mathcal{A}}\) and \(\mathbf{A}_{1}^{\mathcal{B}}\) are matrices whose columns are drawn from \(\mathbf{A}_{1}\) with indices in \(\mathcal{A}\) and \(\mathcal{B}\), respectively, and \(\boldsymbol{\lambda}_{k}^{\mathcal{A}}=[\lambda_{k,1}^{\mathcal{A}},\,\ldots,\,\lambda_{k,N_{\mathcal{A}}}^{\mathcal{A}}]^{T}\) and \(\boldsymbol{\lambda}_{k}^{\mathcal{B}}=[\lambda_{k,1}^{\mathcal{B}},\,\ldots,\,\lambda_{k,N_{\mathcal{B}}}^{\mathcal{B}}]^{T}\) where \(\lambda_{k,i}^{\mathcal{A}}\equiv\lambda_{k,\mathcal{A}(i)}\) and \(\lambda_{k,i}^{\mathcal{B}}\equiv\lambda_{k,\mathcal{B}(i)}\).
#### Iii-B1 Estimating \(\mathbf{\lambda}_{2}^{\mathcal{A}},\,\ldots,\,\mathbf{\lambda}_{K}^{\mathcal{A}}\)
This is done at the RIS. Since \(\lambda_{k,i}^{\mathcal{A}}=g_{k,i}^{\mathcal{A}}/g_{1,i}^{\mathcal{A}}\), the parameters \(\mathbf{\lambda}_{2}^{\mathcal{A}},\,\ldots,\,\mathbf{\lambda}_{K}^{\mathcal{A}}\) can be computed once \(\mathbf{G}^{\mathcal{A}}=[\mathbf{g}_{1}^{\mathcal{A}},\,\ldots,\,\mathbf{g}_{K}^{\mathcal{A}}]\) is known. Note that the first column of \(\mathbf{G}^{\mathcal{A}}\) has been estimated using (7) in phase 1. The signal received at the RIS in phase 2 is given as
\[\mathbf{y}_{t}^{\mathtt{RIS}}=\sqrt{P}\operatorname{diag}\left(\mathbf{\eta}^{ \mathcal{A}}\right)\operatorname{diag}\left(\mathbf{\phi}_{t}^{\mathcal{A}} \right)\mathbf{G}_{:,2:K}^{\mathcal{A}}\mathbf{S}_{2:K,t}+\mathbf{n}_{t}^{ \mathtt{RIS}}. \tag{9}\]
The sub-matrix \(\mathbf{G}_{:,2:K}^{\mathcal{A}}\) can be estimated by the RIS as follows:
\[\mathbf{\hat{G}}_{:,2:K}^{\mathcal{A}}=\frac{1}{\sqrt{P}}\operatorname{diag} \left(\mathbf{\eta}^{\mathcal{A}}\right)^{-1}\mathbf{\Psi}_{\mathcal{T}_{2}}\mathbf{S} _{2:K,\mathcal{T}_{2}}^{H}(\mathbf{S}_{2:K,\mathcal{T}_{2}}\mathbf{S}_{2:K, \mathcal{T}_{2}}^{H})^{-1} \tag{10}\]
where \(\mathbf{\Psi}_{\mathcal{T}_{2}}=[\mathbf{\psi}_{\tau_{1}+1},\,\ldots,\,\mathbf{\psi}_{\tau_{1}+\tau_{2}}]\). Thus, an estimate of \(\lambda_{k,i}^{\mathcal{A}}\) can be obtained as \(\hat{\lambda}_{k,i}^{\mathcal{A}}=\hat{g}_{k,i}^{\mathcal{A}}/\hat{g}_{1,i}^{\mathcal{A}}\).
#### Iii-B2 Estimating \(\mathbf{h}_{\mathrm{d},2},\,\ldots,\,\mathbf{h}_{\mathrm{d},K}\) and \(\mathbf{\lambda}_{2}^{\mathcal{B}},\,\ldots,\,\mathbf{\lambda}_{K}^{\mathcal{B}}\)
This is accomplished at the BS. Let
\[\mathbf{B}_{t} =[\mathbf{I}_{M},\,\,\mathbf{A}_{1}^{\mathcal{B}}\operatorname{diag}\left(\boldsymbol{\phi}_{t}^{\mathcal{B}}\right)],\] \[\boldsymbol{\upsilon}_{k} =[\mathbf{h}_{\mathrm{d},k}^{T},\,(\boldsymbol{\lambda}_{k}^{\mathcal{B}})^{T}]^{T},\] \[\mathbf{f}_{k,t}^{\mathcal{A}} =\mathbf{A}_{1}^{\mathcal{A}}\operatorname{diag}\left(\boldsymbol{\rho}^{\mathcal{A}}\right)\operatorname{diag}\left(\boldsymbol{\phi}_{t}^{\mathcal{A}}\right)\boldsymbol{\lambda}_{k}^{\mathcal{A}},\]
then the received signal at the BS in (8) can be written in the following form:
\[\mathbf{y}_{t}^{\mathtt{BS}} =\sqrt{P}\sum_{k=2}^{K}(\mathbf{B}_{t}\boldsymbol{\upsilon}_{k}+\mathbf{f}_{k,t}^{\mathcal{A}})s_{k,t}+\mathbf{n}_{t}^{\mathtt{BS}}\] \[=\sqrt{P}((\mathbf{S}_{2:K,t}^{T}\otimes\mathbf{B}_{t})\boldsymbol{\upsilon}+\mathbf{F}_{t}^{\mathcal{A}}\mathbf{S}_{2:K,t})+\mathbf{n}_{t}^{\mathtt{BS}}\] \[=\sqrt{P}(\mathbf{Q}_{t}\boldsymbol{\upsilon}+\mathbf{\tilde{y}}_{t}^{\mathtt{BS}})+\mathbf{n}_{t}^{\mathtt{BS}} \tag{11}\]
where \(\boldsymbol{\upsilon}=[\boldsymbol{\upsilon}_{2}^{T},\,\ldots,\,\boldsymbol{\upsilon}_{K}^{T}]^{T}\), \(\mathbf{Q}_{t}=\mathbf{S}_{2:K,t}^{T}\otimes\mathbf{B}_{t}\), \(\mathbf{F}_{t}^{\mathcal{A}}=[\mathbf{f}_{2,t}^{\mathcal{A}},\,\ldots,\,\mathbf{f}_{K,t}^{\mathcal{A}}]\), and \(\mathbf{\tilde{y}}_{t}^{\mathtt{BS}}=\mathbf{F}_{t}^{\mathcal{A}}\mathbf{S}_{2:K,t}\). Stacking the received signals \(\{\mathbf{y}_{t}^{\mathtt{BS}}\}\) in (11) with \(t\in\mathcal{T}_{2}\) on top of each other, we have the following
\[\operatorname{vec}\left(\mathbf{Y}_{:,\,\mathcal{T}_{2}}^{ \mathtt{BS}}\right)-\sqrt{P}\operatorname{vec}\left(\mathbf{\tilde{Y}}_{:, \,\mathcal{T}_{2}}^{\mathtt{BS}}\right)=\sqrt{P}\mathbf{Q}\mathbf{\upsilon}+ \mathbf{n}^{\mathtt{BS}} \tag{12}\]
where \(\mathbf{Q}=[\mathbf{Q}_{\tau_{1}+1}^{T},\,\ldots,\,\mathbf{Q}_{\tau_{1}+\tau_{2}}^{T}]^{T}\). Note that \(\boldsymbol{\upsilon}\) is the vector we need to estimate and the size of \(\mathbf{Q}\) is \(M\tau_{2}\times(K-1)(M+N-N_{\mathcal{A}})\). Therefore, in order to accurately recover \(\boldsymbol{\upsilon}\), two conditions should be satisfied: \(M\tau_{2}\geq(K-1)(M+N-N_{\mathcal{A}})\) and \(\operatorname{rank}(\mathbf{Q})=(K-1)(M+N-N_{\mathcal{A}})\). An estimate of \(\boldsymbol{\upsilon}\) can then be obtained as
\[\hat{\mathbf{\upsilon}}=\frac{1}{\sqrt{P}}\mathbf{Q}^{\dagger}\left( \operatorname{vec}\left(\mathbf{Y}_{:,\,\mathcal{T}_{2}}^{\mathtt{BS}}\right)- \sqrt{P}\operatorname{vec}\left(\mathbf{\tilde{Y}}_{:,\,\mathcal{T}_{2}}^{ \mathtt{BS}}\right)\right). \tag{13}\]
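A minimal numerical sketch of the stacked LS recovery in (11)-(13) is given below. It assumes the phase-1 estimates of \(\mathbf{A}_{1}\) and the sensed ratios \(\boldsymbol{\lambda}_{k}^{\mathcal{A}}\) are perfect, so that the known contribution \(\mathbf{F}_{t}^{\mathcal{A}}\mathbf{S}_{2:K,t}\) can be subtracted exactly; all dimensions and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, K = 8, 16, 3                        # illustrative sizes
A = np.arange(2); B = np.arange(2, N)     # sensing / reflecting index sets
NA = len(A)
rhoA = 0.5 * np.ones(NA)
P, sigma_n = 1.0, 1e-3

# Ground truth: A1, per-user ratios lambda_k (lambda_1 = 1_N), direct channels
A1 = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
lam = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)
lam[0] = np.ones(N)                       # lambda_1 = 1_N by definition
h_d = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)

tau2 = (K - 1) + int(np.ceil((K - 1) * (N - NA) / M))   # minimum phase-2 length
S = np.exp(1j * 2 * np.pi * rng.integers(0, 4, (K - 1, tau2)) / 4)  # QPSK pilots, users 2..K

rows, rhs = [], []
for t in range(tau2):
    phi = np.exp(1j * 2 * np.pi * rng.random(N))        # fresh RIS pattern each slot
    B_t = np.hstack([np.eye(M), A1[:, B] @ np.diag(phi[B])])
    Q_t = np.kron(S[:, t], B_t)                         # S_{2:K,t}^T kron B_t
    # Sensed contribution f_{k,t}^A, assumed known from the phase-1 / RIS estimates
    F_t = A1[:, A] @ np.diag(rhoA * phi[A]) @ lam[1:, A].T
    y_t = np.sqrt(P) * sum(
        (B_t @ np.r_[h_d[:, k + 1], lam[k + 1, B]] + F_t[:, k]) * S[k, t]
        for k in range(K - 1))
    y_t += sigma_n * (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
    rows.append(Q_t)
    rhs.append(y_t / np.sqrt(P) - F_t @ S[:, t])        # subtract the known term

Q = np.vstack(rows)
ups_hat = np.linalg.lstsq(Q, np.concatenate(rhs), rcond=None)[0]
ups_true = np.concatenate([np.r_[h_d[:, k], lam[k, B]] for k in range(1, K)])
print("NMSE:", np.linalg.norm(ups_hat - ups_true) ** 2 / np.linalg.norm(ups_true) ** 2)
```

Random per-slot phase patterns and pilots make \(\mathbf{Q}\) full column rank with probability one, matching the two recovery conditions stated above.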
If \(M\geq N-N_{\mathcal{A}}\), we need at least \(\tau_{2}=2(K-1)\) time slots to recover \(\boldsymbol{\upsilon}\), and if \(M<N-N_{\mathcal{A}}\), we need at least \(\tau_{2}=K-1+\left\lceil\frac{(K-1)(N-N_{\mathcal{A}})}{M}\right\rceil\) time slots. The overall pilot overhead of the PD approach is therefore \(\tau_{\mathrm{p}}=\tau_{1}+\tau_{2}=N+1+\tau_{2}\).
## IV Decision-Directed Channel Estimation

In this section, we present the proposed two-phase DD channel estimation framework, in which the sensing elements of the RIS detect the users' data symbols and forward them to the BS so that they can serve in place of pilots.

### _Phase 1_
#### Iv-A1 Estimating \(\mathbf{g}_{1}^{\mathcal{A}}\)
The received signal at the RIS in the first time slot with the pilot symbol \(s_{1,1}=1\) is given in (6), and so an estimate of \(\mathbf{g}_{1}^{\mathcal{A}}\) can be obtained as
\[\mathbf{\hat{g}}_{1}^{\mathcal{A}}=\frac{1}{\sqrt{P}}\operatorname{diag}\left( \boldsymbol{\eta}^{\mathcal{A}}\right)^{-1}\operatorname{diag}\left(\boldsymbol {\phi}_{1}^{\mathcal{A}}\right)^{-1}\mathbf{y}_{1}^{\text{RIS}}. \tag{16}\]
#### Iv-A2 Data Detection for user 1 in time slots \(2\),..., \(\tau_{1}\)
The received signal is \(\mathbf{y}_{t}^{\text{RIS}}=\sqrt{P}\operatorname{diag}\left(\boldsymbol{\eta}^{\mathcal{A}}\right)\operatorname{diag}\left(\boldsymbol{\phi}_{t}^{\mathcal{A}}\right)\mathbf{g}_{1}^{\mathcal{A}}s_{1,t}+\mathbf{n}_{t}^{\text{RIS}}\). Therefore, the data symbols \(s_{1,t}\) can be detected by the RIS using \(\mathbf{\hat{g}}_{1}^{\mathcal{A}}\) from (16) as follows:
\[\hat{s}_{1,t}=\operatorname*{arg\,min}_{s\in\mathcal{S}}\left\|\mathbf{y}_{t} ^{\text{RIS}}-\sqrt{P}\operatorname{diag}\left(\boldsymbol{\eta}^{\mathcal{A} }\right)\operatorname{diag}\left(\boldsymbol{\phi}_{t}^{\mathcal{A}}\right) \mathbf{\hat{g}}_{1}^{\mathcal{A}}s\right\|^{2}. \tag{17}\]
It can be seen that even when the phase shift vector \(\boldsymbol{\phi}_{t}^{\mathcal{A}}\) varies in different time slots, it is still possible for the RIS to accurately recover the data symbols since the effect of \(\boldsymbol{\phi}_{t}^{\mathcal{A}}\) is merely a phase rotation of the noiseless received signal which can be easily taken into account as in (17).
#### Iv-A3 Estimating \(\mathbf{H}_{\text{c},1}\)
The detected data symbols \(\{\hat{s}_{1,t}\}\) in (17) will be forwarded by the RIS to the BS to estimate the cascaded channel matrix \(\mathbf{H}_{\text{c},1}\) as follows:
\[\mathbf{\hat{H}}_{\text{c},1}=\frac{1}{\sqrt{P}\tau_{1}}\mathbf{Y}_{:,\tau_{1} }^{\text{BS}}\operatorname{diag}\left(\mathbf{\hat{s}}_{\tau_{1}}\right)^{-1 }\boldsymbol{\Phi}_{\tau_{1}}^{H}\operatorname{diag}\left(\begin{bmatrix}1 \\ \boldsymbol{\rho}\end{bmatrix}\right)^{-1}\!. \tag{18}\]
where \(\mathbf{\hat{s}}_{\tau_{1}}=[1,\,\hat{s}_{1,2},\,\ldots,\,\hat{s}_{1,\tau_{1}}]\). Thus, in phase 1, with the help of the sensing elements, only the first of the \(\tau_{1}=N+1\) time slots is used for pilot signalling, while the other \(N\) time slots are used for data transmission. The BS is still able to accurately recover the cascaded channel matrix \(\mathbf{H}_{\text{c},1}\) as long as the data symbols are correctly detected by the RIS.
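The following sketch walks through the three phase-1 steps (16)-(18) end to end: the RIS estimates \(\mathbf{g}_{1}^{\mathcal{A}}\) from the single pilot slot, detects the PSK data symbols in the remaining slots, and the BS then forms the LS estimate with the detected symbols in place of pilots. All parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
M, N, P, sigma_n = 4, 16, 1.0, 1e-2
A = np.array([0, 1]); NA = len(A)        # hypothetical sensing set
rho = np.ones(N); rho[A] = 0.5
eta = np.sqrt(1 - rho[A] ** 2)
tau1 = N + 1
Dpsk = 4
const = np.exp(1j * np.pi * (2 * np.arange(Dpsk) + 1) / Dpsk)   # D-PSK alphabet

H_c1 = (rng.standard_normal((M, N + 1)) + 1j * rng.standard_normal((M, N + 1))) / np.sqrt(2)
g1A = (rng.standard_normal(NA) + 1j * rng.standard_normal(NA)) / np.sqrt(2)

V = np.fft.fft(np.eye(tau1))             # [1; phi_t] = t-th column of V
s = np.r_[1.0 + 0j, const[rng.integers(0, Dpsk, tau1 - 1)]]     # 1 pilot + N data symbols
Dvec = np.concatenate(([1.0], rho))

# Slot 1: RIS estimates g1^A from the pilot, eq. (16)
phiA = V[1 + A, 0]
n1 = sigma_n * (rng.standard_normal(NA) + 1j * rng.standard_normal(NA)) / np.sqrt(2)
g1A_hat = (np.sqrt(P) * eta * phiA * g1A + n1) / (np.sqrt(P) * eta * phiA)

# Slots 2..tau1: nearest-point PSK detection at the RIS, eq. (17)
s_hat = np.empty(tau1, dtype=complex); s_hat[0] = 1.0
for t in range(1, tau1):
    phiA = V[1 + A, t]
    n_t = sigma_n * (rng.standard_normal(NA) + 1j * rng.standard_normal(NA)) / np.sqrt(2)
    y_t = np.sqrt(P) * eta * phiA * g1A * s[t] + n_t
    dists = [np.linalg.norm(y_t - np.sqrt(P) * eta * phiA * g1A_hat * c) for c in const]
    s_hat[t] = const[int(np.argmin(dists))]
print("RIS symbol errors:", int(np.sum(s_hat[1:] != s[1:])))

# BS-side LS with the detected symbols, eq. (18); degrades if the RIS misdetects
noise = sigma_n * (rng.standard_normal((M, tau1)) + 1j * rng.standard_normal((M, tau1))) / np.sqrt(2)
Y = np.sqrt(P) * H_c1 @ np.diag(Dvec) @ V @ np.diag(s) + noise
H_hat = Y @ np.diag(1.0 / s_hat) @ V.conj().T @ np.diag(1.0 / Dvec) / (np.sqrt(P) * tau1)
print("NMSE:", np.linalg.norm(H_hat - H_c1) ** 2 / np.linalg.norm(H_c1) ** 2)
```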
### _Phase 2_
We divide phase 2 into two sub-phases that we refer to as 2a and 2b. Sub-phase 2a is associated with the time frame \(\mathcal{T}_{\text{2a}}=\{\tau_{1}+1,\,\ldots,\,\tau_{1}+K-1\}\) where user 1 is idle and users \(2\) through \(K\) transmit their pilot signals. Sub-phase 2b is associated with the time frame \(\mathcal{T}_{\text{2b}}=\{\tau_{1}+K,\,\ldots,\,\tau_{1}+\tau_{2}\}\) where all the users transmit data symbols.
#### Iv-B1 Estimating \(\mathbf{g}_{2}^{\mathcal{A}},\,\ldots,\,\mathbf{g}_{K}^{\mathcal{A}}\) and \(\boldsymbol{\lambda}_{2}^{\mathcal{A}},\,\ldots,\,\boldsymbol{\lambda}_{K}^{\mathcal{A}}\)
Pilot signals are transmitted in the first \(K-1\) time slots of phase 2, from \(\tau_{1}+1\) to \(\tau_{1}+K-1\), so the sub-matrix \(\mathbf{G}_{2:K}^{\mathcal{A}}=[\mathbf{g}_{2}^{\mathcal{A}},\,\ldots,\, \mathbf{g}_{K}^{\mathcal{A}}]\) can be estimated by the RIS as follows:
\[\mathbf{\hat{G}}_{:,2:K}^{\mathcal{A}}=\frac{\operatorname{diag}\left( \boldsymbol{\eta}^{\mathcal{A}}\right)^{-1}\boldsymbol{\Psi}_{\mathcal{T}_{ \text{2a}}}\mathbf{S}_{2:K,\mathcal{T}_{\text{2a}}}^{H}(\mathbf{S}_{2:K, \mathcal{T}_{\text{2a}}}\mathbf{S}_{2:K,\mathcal{T}_{\text{2a}}}^{H})^{-1}}{ \sqrt{P}}, \tag{19}\]
where \(\boldsymbol{\Psi}_{\mathcal{T}_{\text{2a}}}=[\boldsymbol{\psi}_{\tau_{1}+1},\,\ldots,\,\boldsymbol{\psi}_{\tau_{1}+K-1}]\). Furthermore, an estimate of \(\lambda_{k,i}^{\mathcal{A}}\) can be obtained as \(\hat{\lambda}_{k,i}^{\mathcal{A}}=\hat{g}_{k,i}^{\mathcal{A}}/\hat{g}_{1,i}^{\mathcal{A}}\).
#### Iv-B2 Detecting data
For the remaining time slots from \(\tau_{1}+K\) to \(\tau_{1}+\tau_{2}\) in sub-phase 2b, all \(K\) users can transmit data, and the received signal at the RIS is
\[\mathbf{y}_{t}^{\text{RIS}}=\sqrt{P}\operatorname{diag}\left(\boldsymbol{\eta }^{\mathcal{A}}\right)\operatorname{diag}\left(\boldsymbol{\phi}_{t}^{\mathcal{ A}}\right)\mathbf{G}^{\mathcal{A}}\mathbf{s}_{t}+\mathbf{n}_{t}^{\text{ RIS}}. \tag{20}\]
The RIS can use \(\mathbf{y}_{t}^{\text{RIS}}\) and \(\mathbf{\hat{G}}^{\mathcal{A}}=[\mathbf{\hat{g}}_{1}^{\mathcal{A}},\,\ldots,\,\mathbf{\hat{g}}_{K}^{\mathcal{A}}]\) to detect the users' transmitted data \(\mathbf{s}_{t}\), which is a conventional MIMO data detection problem. Similarly, the effect of \(\boldsymbol{\phi}_{t}^{\mathcal{A}}\) is merely a phase rotation of the noiseless received signal, and so it is feasible for the RIS to accurately recover the data symbols as \(\boldsymbol{\phi}_{t}^{\mathcal{A}}\) varies in time.
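A minimal sketch of this RIS-side multiuser detection step is given below, using a ZF detector (the choice also used in the numerical results in Section VI). The sensed channel \(\mathbf{G}^{\mathcal{A}}\) is assumed perfectly known here, whereas in practice \(\mathbf{\hat{G}}^{\mathcal{A}}\) would be used; the sizes and QPSK signalling are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
K, NA, P, sigma_n = 4, 8, 1.0, 1e-2
GA = (rng.standard_normal((NA, K)) + 1j * rng.standard_normal((NA, K))) / np.sqrt(2)
eta = np.sqrt(1 - 0.5 ** 2) * np.ones(NA)         # rho_i^A = 0.5 assumed
phiA = np.exp(1j * 2 * np.pi * rng.random(NA))    # current phase-shift pattern
const = np.exp(1j * np.pi * (2 * np.arange(4) + 1) / 4)   # QPSK
s = const[rng.integers(0, 4, K)]

n = sigma_n * (rng.standard_normal(NA) + 1j * rng.standard_normal(NA)) / np.sqrt(2)
y = np.sqrt(P) * (eta * phiA)[:, None] * GA @ s + n       # eq. (20)

# ZF: invert the effective channel diag(eta) diag(phi^A) G^A, then slice to the alphabet
Heff = np.sqrt(P) * (eta * phiA)[:, None] * GA
s_zf = np.linalg.pinv(Heff) @ y
s_hat = const[np.argmin(np.abs(s_zf[:, None] - const[None, :]), axis=1)]
print("symbol errors:", int(np.sum(s_hat != s)))
```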
#### Iv-B3 Estimating \(\mathbf{h}_{\text{d},2},\,\ldots,\,\mathbf{h}_{\text{d},K}\) and \(\boldsymbol{\lambda}_{2}^{\mathcal{B}},\,\ldots,\,\boldsymbol{\lambda}_{K}^{\mathcal{B}}\)
Since the typical user also transmits data during sub-phase 2b, the received signal at the BS can be re-written in the following form:
\[\mathbf{y}_{t}^{\text{BS}} -\sqrt{P}\,\mathbf{H}_{\text{c},1}\operatorname{diag}\left(\begin{bmatrix}1\\ \boldsymbol{\rho}\end{bmatrix}\right)\begin{bmatrix}1\\ \boldsymbol{\phi}_{t}\end{bmatrix}s_{1,t}\] \[=\sqrt{P}\sum_{k=2}^{K}(\mathbf{B}_{t}\boldsymbol{\upsilon}_{k}+\mathbf{f}_{k,t}^{\mathcal{A}})s_{k,t}+\mathbf{n}_{t}^{\text{BS}}\] \[=\sqrt{P}((\mathbf{S}_{2:K,t}^{T}\otimes\mathbf{B}_{t})\boldsymbol{\upsilon}+\mathbf{F}_{t}^{\mathcal{A}}\mathbf{S}_{2:K,t})+\mathbf{n}_{t}^{\text{BS}}\] \[=\sqrt{P}(\mathbf{Q}_{t}\boldsymbol{\upsilon}+\mathbf{\tilde{y}}_{t}^{\text{BS}})+\mathbf{n}_{t}^{\text{BS}}, \tag{21}\]
where \(t\in\mathcal{T}_{2}\). Note that \(s_{1,t}=0\) for \(t\in\mathcal{T}_{\text{2a}}\) since user 1 is idle during sub-phase 2a. Then, we can use a similar technique as in the PD approach for estimating \(\boldsymbol{\upsilon}\), but we need to replace \(s_{k,t}\) with \(\hat{s}_{k,t}\) for \(t\in\mathcal{T}_{\text{2b}}\).
### _Overall Training Overhead_
The overall training overhead for the proposed decision-directed approach is \(\tau_{p}=K\) since phase 1 and phase 2 require only 1 and \(K-1\) time slots for pilot signalling, respectively. The BS is guaranteed to accurately recover the channel matrices as long as the data symbols are correctly detected by the RIS. Although only the typical user transmits in phase 1 and thus the spectral efficiency will not be as large as if all the users were transmitting, we will show in the numerical results that the proposed DD approach can still result in an increase in the spectral efficiency compared with the PD approach.
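The overhead comparison can be made concrete with a short helper. The PD formula below combines \(\tau_{1}=N+1\) with the minimum \(\tau_{2}\) derived in Section III; the parameter values in the loop are illustrative.

```python
from math import ceil

def pd_overhead(M: int, N: int, K: int, NA: int) -> int:
    """Minimum pilot overhead of the two-phase PD scheme: tau1 = N + 1 plus tau2."""
    if M >= N - NA:
        tau2 = 2 * (K - 1)
    else:
        tau2 = (K - 1) + ceil((K - 1) * (N - NA) / M)
    return (N + 1) + tau2

def dd_overhead(K: int) -> int:
    """Pilot overhead of the DD scheme: 1 slot in phase 1 plus K-1 in sub-phase 2a."""
    return K

for N in (50, 100, 200):
    print(f"N={N}: PD={pd_overhead(M=8, N=N, K=4, NA=4)}, DD={dd_overhead(K=4)}")
```

The PD overhead grows linearly in \(N\), while the DD overhead is independent of \(N\), which is the source of the SE gains reported in Section VI.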
### _Comparison with a Passive RIS and DD at the BS_
In the proposed DD method, the BS can accurately recover the cascaded channel matrices when the data symbols are correctly detected by the RIS. Here, we explain why an alternative scenario in which the RIS has no sensing elements and the DD strategy is applied at the BS cannot guarantee accurate CSI estimation. To show this, it is enough to consider the case with only one user, where the received signal at the BS is given as
\[\mathbf{y}_{t}^{\text{BS}}=\mathbf{H}_{\text{c},1}\boldsymbol{\phi}_{t}s_{t}+ \mathbf{n}_{t}^{\text{BS}}. \tag{22}\]
To accurately recover \(\mathbf{H}_{\text{c},1}\), the phase shift vector \(\boldsymbol{\phi}_{t}\) must vary for different time slots \(t\) in order to make \(\boldsymbol{\Phi}_{\mathcal{T}_{1}}=[\boldsymbol{\phi}_{1},\,\ldots,\,\boldsymbol{\phi}_{\tau_{1}}]\) full-rank. However, if we change \(\boldsymbol{\phi}_{t}\), the effective channel \(\mathbf{f}_{t}=\mathbf{H}_{\text{c},1}\boldsymbol{\phi}_{t}\) changes in every time slot, so the BS cannot coherently detect the data symbols without already knowing \(\mathbf{H}_{\text{c},1}\); if instead \(\boldsymbol{\phi}_{t}\) is kept fixed, \(\boldsymbol{\Phi}_{\mathcal{T}_{1}}\) is rank-one and \(\mathbf{H}_{\text{c},1}\) is not identifiable. Hence, accurate CSI recovery cannot be guaranteed when DD is applied at the BS with a purely passive RIS.
## V Spectral Efficiency Analysis
In this section, we perform a spectral efficiency analysis for both the PD and DD approaches. We consider an RIS-aided system where the BS has one antenna serving one user without a direct channel. We assume \(D\)-PSK data signalling, i.e., \(s\in\mathcal{S}=\{\exp\left(j\pi\frac{2\ell+1}{D}\right)\}\) for \(\ell\in\{0,\,\ldots,\,D-1\}\) with the Gray code mapping data bits to data symbols. We also assume that the data symbols are equally likely. Let \(a_{i}=h_{i}g_{i}\) be the cascaded channel where \(g_{i}\) and \(h_{i}\) are the channels from the user and the BS to the \(i\)-th element of the RIS, respectively. It is assumed that \(g_{i}\sim\mathcal{CN}(0,\sigma_{g}^{2})\) and \(h_{i}\sim\mathcal{CN}(0,\sigma_{h}^{2})\) are independent of each other. Let \(\mathbf{a}=[a_{1},\,\,\ldots,\,\,a_{N}]^{T}\) so that the received signal at the BS can be written as
\[y^{\mathtt{BS}}=\sqrt{P}\mathbf{a}^{H}\boldsymbol{\phi}s+n^{\mathtt{BS}}. \tag{23}\]
Let \(\mathbf{\hat{a}}=\mathbf{a}+\boldsymbol{\epsilon}\) be the estimated cascaded channel, where \(\boldsymbol{\epsilon}=[\epsilon_{1},\,\ldots,\,\epsilon_{N}]^{T}\) is the channel estimation error. Given \(\mathbf{\hat{a}}\), the RIS coefficients \(\boldsymbol{\phi}\) in the data transmission phase are chosen to maximize the effective channel strength, i.e.,
\[\underset{\{\boldsymbol{\phi}\}}{\mathrm{maximize}}\,\,|\mathbf{\hat{a}}^{H} \boldsymbol{\phi}|^{2}\,\,\mathrm{subject}\,\,\,\mathrm{to}\,\,|\phi_{i}|\leq 1 \,\,\forall i=1,\,\ldots,\,N,\]
which has the optimal solution \(\phi_{i}^{\star}=e^{j\angle\hat{a}_{i}}\). The received signal at the BS in the data transmission phase will then be
\[y^{\mathtt{BS}}=\sqrt{P}\sum_{i=1}^{N}a_{i}^{*}e^{j\angle(a_{i}+\epsilon_{i})}s+n^{\mathtt{BS}}=\sqrt{P}\sum_{i=1}^{N}z_{i}s+n^{\mathtt{BS}}\]
where \(z_{i}\stackrel{{\Delta}}{{=}}a_{i}^{*}e^{j\angle(a_{i}+\epsilon_{i})}\). Thus we have
\[z_{i,\mathtt{R}} \stackrel{{\Delta}}{{=}}\Re\{z_{i}\}=\frac{|a_{i}|^{2 }+\Re\{a_{i}\epsilon_{i}^{\star}\}}{\sqrt{|a_{i}|^{2}+2\Re\{a_{i}\epsilon_{i}^ {\star}\}+|\epsilon_{i}|^{2}}},\] \[z_{i,\Im} \stackrel{{\Delta}}{{=}}\Im\{z_{i}\}=\frac{\Im\{a_{i} \epsilon_{i}^{\star}\}}{\sqrt{|a_{i}|^{2}+2\Re\{a_{i}\epsilon_{i}^{\star}\}+| \epsilon_{i}|^{2}}}.\]
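A two-line numerical check of the closed-form reflection design above is given below: aligning each \(\phi_{i}\) with the phase of \(\hat{a}_{i}\) turns \(\mathbf{\hat{a}}^{H}\boldsymbol{\phi}\) into \(\sum_{i}|\hat{a}_{i}|\). The channel realization is an arbitrary illustrative draw.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 50
a_hat = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

# Closed-form maximizer of |a_hat^H phi|^2 subject to |phi_i| <= 1
phi_star = np.exp(1j * np.angle(a_hat))
gain = np.abs(a_hat.conj() @ phi_star) ** 2
# Phase alignment makes every term real and positive, so the gain is (sum_i |a_hat_i|)^2
assert np.isclose(gain, np.sum(np.abs(a_hat)) ** 2)
print("aligned channel gain:", gain)
```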
### _Pilot-Directed_
For the pilot-directed method, the SE is given as
\[\mathrm{SE}_{\mathtt{PD}}=\frac{\tau_{\mathrm{c}}-\tau_{\mathrm{p}}}{\tau_{ \mathrm{c}}}\left(1-\mathrm{BER}_{\mathtt{PD}}\right)\log_{2}(D), \tag{24}\]
where \(\tau_{\mathrm{c}}\) and \(\tau_{\mathrm{p}}\) are the lengths of the coherence block and the pilot sequence, respectively. In the PD approach, the first \(N\) time slots are used for channel estimation (i.e., \(\tau_{\mathrm{p}}=N\)), and we assume without loss of generality that the pilot signal is an all-ones vector. Assuming a DFT matrix of size \(N\) is used to configure the RIS phase shifts during the channel estimation phase, we have \(\epsilon_{i}\sim\mathcal{CN}\big{(}0,\frac{N_{0}}{PN}\big{)}\). The CSI errors \(\{\epsilon_{i}\}\) are also i.i.d. and uncorrelated with \(a_{i}\). We will compute the PD bit error rate \(\mathrm{BER}_{\mathtt{PD}}\), which requires the distribution of the effective channel \(f=\sum_{i=1}^{N}z_{i}\).
We first obtain the following approximate means
\[\mu_{z_{i,\mathtt{R}}}\stackrel{{\Delta}}{{=}}\mathbb{E}[z_{i, \mathtt{R}}] \approx\frac{\mathbb{E}\big{[}|a_{i}|^{2}\big{]}+\Re\{\mathbb{E}[a_{i} \epsilon_{i}^{\star}]\}}{\sqrt{\mathbb{E}[|a_{i}|^{2}]+2\Re\{\mathbb{E}[a_{i} \epsilon_{i}^{\star}]\}+\mathbb{E}[|\epsilon_{i}|^{2}]}}\] \[=\frac{\sigma_{a}^{2}}{\sqrt{\sigma_{a}^{2}+\sigma_{e}^{2}}} \tag{25}\] \[\mu_{z_{i,\Im}}\stackrel{{\Delta}}{{=}}\mathbb{E}[z_ {i,\Im}] \approx\frac{\Im\{\mathbb{E}[a_{i}\epsilon_{i}^{\star}]\}}{\sqrt{ \mathbb{E}[|a_{i}|^{2}]+2\Re\{\mathbb{E}[a_{i}\epsilon_{i}^{\star}]\}+\mathbb{E} [|\epsilon_{i}|^{2}]}}=0 \tag{26}\]
and variances
\[\sigma_{z_{i,\mathtt{R}}}^{2}\stackrel{{\Delta}}{{=}} \mathrm{Var}[z_{i,\mathtt{R}}]=\mathbb{E}[z_{i,\mathtt{R}}^{2}]-\mathbb{E}[z_ {i,\mathtt{R}}]^{2} \approx\frac{7\sigma_{a}^{4}+\sigma_{a}^{2}\sigma_{e}^{2}}{2(\sigma_{a}^{ 2}+\sigma_{e}^{2})} \tag{27}\] \[\sigma_{z_{i,\Im}}^{2}\stackrel{{\Delta}}{{=}} \mathrm{Var}[z_{i,\Im}]=\mathbb{E}[z_{i,\Im}^{2}]-\mathbb{E}[z_{i,\Im} ]^{2} \approx\frac{\sigma_{a}^{2}\sigma_{e}^{2}}{2(\sigma_{a}^{2}+\sigma_{e}^{2})} \tag{28}\]
where \(\sigma_{a}^{2}\stackrel{{\Delta}}{{=}}\sigma_{g}^{2}\sigma_{h}^{2}\) and we have used the following results:
\[\mathbb{E}\big{[}a_{i,\mathtt{R}}^{2}\big{]} =\mathbb{E}\big{[}(h_{i,\mathtt{R}}g_{i,\mathtt{R}}-h_{i,\Im}g_{i,\Im})^{2}\big{]}=\frac{1}{2}\sigma_{h}^{2}\sigma_{g}^{2}=\frac{1}{2}\sigma_{a}^{2},\] \[\mathbb{E}\big{[}a_{i,\mathtt{R}}^{4}\big{]} =\mathbb{E}\big{[}(h_{i,\mathtt{R}}g_{i,\mathtt{R}}-h_{i,\Im}g_{i,\Im})^{4}\big{]}=\frac{3}{2}\sigma_{h}^{4}\sigma_{g}^{4}=\frac{3}{2}\sigma_{a}^{4}.\]
It can be seen that the means and variances \(\mu_{z_{i,\mathtt{R}}}\), \(\mu_{z_{i,\Im}}\), \(\sigma_{z_{i,\mathtt{R}}}^{2}\), and \(\sigma_{z_{i,\Im}}^{2}\) above are the same for different indices \(i\). Therefore, for convenience in the rest of the PD-SE analysis, we drop the index \(i\) for these values.
Since the \(\{a_{i}\}\) and \(\{\epsilon_{i}\}\) are i.i.d., the \(\{z_{i}\}\) are also i.i.d. Using the central-limit theorem, for large \(N\) we have \(f_{\mathtt{R}}=\sum_{i=1}^{N}z_{i,\mathtt{R}}\sim\mathcal{N}(N\mu_{z_{\mathtt{R}}},N\sigma_{z_{\mathtt{R}}}^{2})\) and \(f_{\Im}=\sum_{i=1}^{N}z_{i,\Im}\sim\mathcal{N}(N\mu_{z_{\Im}},N\sigma_{z_{\Im}}^{2})\). Note that \(\mathrm{Cov}[f_{\mathtt{R}},f_{\Im}]=0\) since the \(\{z_{i}\}\) are i.i.d. and \(\mathrm{Cov}[z_{i,\mathtt{R}},z_{i,\Im}]=0\).
Let \(\tilde{y}=y^{\mathtt{BS}}s^{*}=\sqrt{P}f+\tilde{n}\) be the rotated received signal, and define \(r_{\tilde{y}}=\sqrt{\tilde{y}_{\mathtt{R}}^{2}+\tilde{y}_{\Im}^{2}}\) and \(\theta_{\tilde{y}}=\angle\tilde{y}=\arctan(\tilde{y}_{\Im}/\tilde{y}_{\mathtt{R}})\). Then the joint pdf of \(r_{\tilde{y}}\) and \(\theta_{\tilde{y}}\) is given as
\[p(r_{\tilde{y}},\theta_{\tilde{y}})=\frac{r_{\tilde{y}}}{2\pi\sqrt{(PN\sigma_{z_{\mathtt{R}}}^{2}+N_{0}/2)(PN\sigma_{z_{\Im}}^{2}+N_{0}/2)}}\times\] \[\exp\left\{-\frac{1}{2}\left[\frac{(r_{\tilde{y}}\cos\theta_{\tilde{y}}-\sqrt{P}N\mu_{z_{\mathtt{R}}})^{2}}{PN\sigma_{z_{\mathtt{R}}}^{2}+N_{0}/2}+\frac{(r_{\tilde{y}}\sin\theta_{\tilde{y}})^{2}}{PN\sigma_{z_{\Im}}^{2}+N_{0}/2}\right]\right\}\]
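The central-limit argument used above can be checked numerically. The sketch below verifies the i.i.d. structure (the variance of \(f\) scales as \(N\) times the per-element variance, and the real and imaginary parts are uncorrelated) without relying on the approximate closed-form moments; the CSI-error variance is an assumed value.

```python
import numpy as np

rng = np.random.default_rng(5)
N, trials, sigma_e2 = 50, 20000, 0.1     # sigma_e2: assumed CSI-error variance

h = (rng.standard_normal((trials, N)) + 1j * rng.standard_normal((trials, N))) / np.sqrt(2)
g = (rng.standard_normal((trials, N)) + 1j * rng.standard_normal((trials, N))) / np.sqrt(2)
a = h * g                                 # cascaded coefficients, sigma_a^2 = 1
eps = np.sqrt(sigma_e2 / 2) * (rng.standard_normal((trials, N)) + 1j * rng.standard_normal((trials, N)))

z = np.conj(a) * np.exp(1j * np.angle(a + eps))
f = z.sum(axis=1)                         # effective channel f = sum_i z_i

# i.i.d. structure: Var[f_R] = N Var[z_R], and f_R, f_I are uncorrelated
print("Var[f_R]:", f.real.var(), "  N*Var[z_R]:", N * z.real.var())
print("Cov[f_R, f_I]:", np.cov(f.real, f.imag)[0, 1])
```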
### _Decision-Directed_
The SE of the decision-directed approach is given as
\[\mathrm{SE}_{\texttt{DD}}=\left(\frac{\tau_{\mathrm{d},1}(1-\mathrm{BER}_{ \texttt{DD1}})+\tau_{\mathrm{d},2}(1-\mathrm{BER}_{\texttt{DD2}})}{\tau_{\mathrm{ c}}}\right)\log_{2}(D) \tag{33}\]
where \(\tau_{\mathrm{d},1}\) and \(\mathrm{BER}_{\texttt{DD1}}\) are the data transmission length and the BER in the channel estimation stage. Similarly, \(\tau_{\mathrm{d},2}\) and \(\mathrm{BER}_{\texttt{DD2}}\) are the data transmission length and the BER in the data transmission stage. Thus, for the DD spectral efficiency analysis, we need to compute \(\mathrm{BER}_{\texttt{DD1}}\) and \(\mathrm{BER}_{\texttt{DD2}}\) to obtain \(\mathrm{SE}_{\texttt{DD}}\). While \(\mathrm{BER}_{\texttt{DD1}}\) is simple to obtain and can be computed exactly, obtaining an exact value of \(\mathrm{BER}_{\texttt{DD2}}\) is much more challenging, and thus we provide an accurate approximation.
To simplify the analysis, we assume the RIS has only one active receiver element, which we take to be element \(N\). We further assume that this element completely absorbs the incoming signal power during the channel estimation stage and is then turned off in the data transmission stage. In the first time slot, a pilot signal \(s_{1}=1\) is transmitted to generate the following received signal at the RIS: \(y_{1}^{\texttt{RIS}}=\sqrt{P}g_{N}+n_{1}^{\texttt{RIS}}\), and so an estimate of \(g_{N}\) can be obtained as \(\hat{g}_{N}=y_{1}^{\texttt{RIS}}/\sqrt{P}=g_{N}+\frac{1}{\sqrt{P}}n_{1}^{\texttt{RIS}}\). From the second time slot to the \((N-1)\)-th time slot, data symbols are transmitted and the received signal at the RIS is \(y_{t}^{\texttt{RIS}}=\sqrt{P}g_{N}s_{t}+n_{t}^{\texttt{RIS}},\;t=2,\;\ldots,\;N-1\). An equalizer based on \(\hat{g}_{N}\) is used by the RIS to detect the symbols as
\[\tilde{s}_{t}=\frac{y_{t}^{\texttt{RIS}}}{\sqrt{P}\hat{g}_{N}}=\frac{\sqrt{P}g _{N}s_{t}+n_{t}^{\texttt{RIS}}}{\sqrt{P}g_{N}+n_{1}^{\texttt{RIS}}},\;t=2,\; \ldots,\;N-1 \tag{34}\]
To evaluate the BER of (34), we consider the rotated signal
\[\bar{s}_{t}=\tilde{s}_{t}s_{t}^{*}=\frac{\sqrt{P}g_{N}+\tilde{n}_{t}^{\texttt{ RIS}}}{\sqrt{P}g_{N}+n_{1}^{\texttt{RIS}}}. \tag{35}\]
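Before turning to the analysis of (35), the equalizer (34) is easy to validate by Monte-Carlo simulation. The sketch below measures the RIS-side symbol error rate for illustrative QPSK signalling and an assumed noise level, with Rayleigh-faded \(g_{N}\); at high SNR the BER follows as \(\mathrm{SER}/\log_{2}D\) under Gray coding.

```python
import numpy as np

rng = np.random.default_rng(6)
D, P, sigma_ris, trials = 4, 1.0, 0.05, 200000
const = np.exp(1j * np.pi * (2 * np.arange(D) + 1) / D)   # D-PSK alphabet

cn = lambda n: (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
gN = cn(trials)                                           # sensed channel g_N
s = const[rng.integers(0, D, trials)]

g_hat = gN + sigma_ris * cn(trials) / np.sqrt(P)          # one-shot pilot estimate
s_eq = (np.sqrt(P) * gN * s + sigma_ris * cn(trials)) / (np.sqrt(P) * g_hat)   # eq. (34)
s_det = const[np.argmin(np.abs(s_eq[:, None] - const[None, :]), axis=1)]
print("simulated SER at the RIS:", np.mean(s_det != s))
```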
Since the error rates are the same for different time indices \(t\), we drop the subscript \(t\) for convenience. Let \(r_{\bar{s}}=\sqrt{\bar{s}_{\mathtt{R}}^{2}+\bar{s}_{\Im}^{2}}\) and \(\theta_{\bar{s}}=\angle\bar{s}\); the joint pdf of \(r_{\bar{s}}\) and \(\theta_{\bar{s}}\) follows in the same manner as in the PD analysis, from which \(\mathrm{BER}_{\texttt{DD1}}\) is obtained. For \(\mathrm{BER}_{\texttt{DD2}}\), let \(\xi_{t}\stackrel{{\Delta}}{{=}}1-s_{t}\hat{s}_{t}^{*}\) denote the decision-error variable and decompose the cascaded channel estimation error as \(\epsilon_{i}=\delta_{i}+\tilde{n}_{i}\), where \(\delta_{i}\) collects the contribution of the decision errors and \(\tilde{n}_{i}\) that of the noise. The resulting approximate means \(\mathbb{E}[z_{i,\mathtt{R}}]\) and \(\mathbb{E}[z_{i,\Im}]\) are given in (45) in terms of \(\sigma_{a}^{2}\) and \(\sigma_{\epsilon}^{2}\),
where \(\sigma_{a}^{2}\stackrel{{\Delta}}{{=}}\mathbb{E}\big{[}|a|^{2}\big{]}\) and \(\sigma_{\epsilon}^{2}\stackrel{{\Delta}}{{=}}\mathbb{E}\big{[}|\epsilon_{i}|^{2}\big{]}=\sigma_{\delta_{i}}^{2}+\sigma_{\tilde{n}_{i}}^{2}\). The variances of \(\delta_{i}\) and \(\tilde{n}_{i}\), denoted \(\sigma_{\delta_{i}}^{2}\) and \(\sigma_{\tilde{n}_{i}}^{2}\), are given as follows:
\[\sigma_{\delta_{i}}^{2} =\frac{\sigma_{a}^{2}}{(N-1)}\operatorname{tr}\Big{(}\mathbb{E} \Big{[}\mathrm{diag}\left(\mathbf{\xi}\right)\mathbf{\Phi}_{i,:}^{H}\mathbf{\Phi}_{i,:} \mathrm{diag}\left(\mathbf{\xi}^{*}\right)\Big{]}\Big{)}\] \[=\frac{\sigma_{a}^{2}}{(N-1)}\sum_{t=2}^{N-1}\mathbb{E}\big{[}| \xi_{t}|^{2}\big{]}=\frac{N-2}{(N-1)}\sigma_{a}^{2}\mu_{|\xi|^{2}} \tag{46}\]
and \(\sigma_{\tilde{n}_{i}}^{2}=\frac{N_{0}^{\mathtt{BS}}}{P(N-1)}\). Hence, the variance of \(\epsilon_{i}\) is
\[\sigma_{\epsilon}^{2}=\frac{P(N-2)\sigma_{a}^{2}\mu_{|\xi|^{2}}+N_{0}^{\mathtt{BS}}}{P(N-1)}. \tag{47}\]
#### V-B2 Compute the variances \(\sigma_{f_{\mathtt{R}}}^{2}\) and \(\sigma_{f_{\Im}}^{2}\)
The variances of \(f_{\mathtt{R}}\) and \(f_{\Im}\) are
\[\sigma_{f_{\mathtt{R}}}^{2} =\sum_{i=1}^{N-1}\operatorname{Var}[z_{i,\Re}]+2\sum_{i<t}\operatorname{Cov}[z_{i,\Re},z_{t,\Re}], \tag{48}\] \[\sigma_{f_{\Im}}^{2} =\sum_{i=1}^{N-1}\operatorname{Var}[z_{i,\Im}]+2\sum_{i<t}\operatorname{Cov}[z_{i,\Im},z_{t,\Im}]. \tag{49}\]
Since \(\operatorname{Var}[z_{i,\Re}]=\mathbb{E}[z_{i,\Re}^{2}]-\mathbb{E}[z_{i,\Re}]^{2}\) and \(\operatorname{Var}[z_{i,\Im}]=\mathbb{E}[z_{i,\Im}^{2}]-\mathbb{E}[z_{i,\Im}]^{2}\), and since \(\mathbb{E}[z_{i,\Re}]\) and \(\mathbb{E}[z_{i,\Im}]\) are already given in (45), we now need to obtain \(\mathbb{E}[z_{i,\Re}^{2}]\) and \(\mathbb{E}[z_{i,\Im}^{2}]\), which can be approximated as follows:
\[\mathbb{E}\big{[}z_{i,\Re}^{2}\big{]}\approx\frac{\mathbb{E}\big{[}|a_{i}|^{4}\big{]}+2\mathbb{E}\big{[}|a_{i}|^{2}\Re\{a_{i}\epsilon_{i}^{*}\}\big{]}+\mathbb{E}\big{[}\Re\{a_{i}\epsilon_{i}^{*}\}^{2}\big{]}}{\mathbb{E}[|a_{i}|^{2}]+2\Re\{\mathbb{E}[a_{i}\epsilon_{i}^{*}]\}+\mathbb{E}[|\epsilon_{i}|^{2}]}, \tag{50}\]
\[\mathbb{E}\big{[}z_{i,\Im}^{2}\big{]}\approx\frac{\mathbb{E}\big{[}\Im\{a_{i}\epsilon_{i}^{*}\}^{2}\big{]}}{\mathbb{E}[|a_{i}|^{2}]+2\Re\{\mathbb{E}[a_{i}\epsilon_{i}^{*}]\}+\mathbb{E}[|\epsilon_{i}|^{2}]}. \tag{51}\]
The first term in the numerator of (50) is \(\mathbb{E}\big{[}|a_{i}|^{4}\big{]}=4\sigma_{a}^{4}\). The second term in the numerator of (50) can be obtained by computing \(\mathbb{E}\big{[}|a_{i}|^{2}a_{i}\epsilon_{i}^{*}\big{]}\) since \(\mathbb{E}\big{[}|a_{i}|^{2}\Re\{a_{i}\epsilon_{i}^{*}\}\big{]}=\Re\{\mathbb{E}\big{[}|a_{i}|^{2}a_{i}\epsilon_{i}^{*}\big{]}\}\). We have
\[\mathbb{E}\big{[}|a_{i}|^{2}a_{i}\epsilon_{i}^{*}\big{]} =\mathbb{E}\big{[}|a_{i}|^{2}a_{i}\delta_{i}^{*}\big{]}\] \[=\frac{1}{1-N}\mathbb{E}\Bigg{[}\sum_{n=1}^{N-1}|a_{i}|^{2}a_{i}a_ {n}^{*}\mathbf{\Phi}_{n,:}\mathrm{diag}\left(\mathbf{\xi}\right)\mathbf{\Phi}_{i,:}^{H} \Bigg{]}\] \[=\frac{1}{1-N}\mathbb{E}\Big{[}|a_{i}|^{4}\mathbf{\Phi}_{i,:}\mathrm{ diag}\left(\mathbf{\xi}\right)\mathbf{\Phi}_{i,:}^{H}\Big{]}\] \[=\frac{1}{1-N}4\sigma_{a}^{4}\sum_{t=2}^{N-1}\mathbb{E}[\xi_{t}]= \frac{N-2}{1-N}4\sigma_{a}^{4}\mu_{\xi}, \tag{52}\]
which is a real number, and so \(\mathbb{E}\big{[}|a_{i}|^{2}\Re\{a_{i}\epsilon_{i}^{*}\}\big{]}=\frac{N-2}{1-N} 4\sigma_{a}^{4}\mu_{\xi}\).
We exploit the following relations:
\[|a_{i}\epsilon_{i}^{*}|^{2} =\Re\{a_{i}\epsilon_{i}^{*}\}^{2}+\Im\{a_{i}\epsilon_{i}^{*}\}^{2}, \tag{53}\] \[\Re\{(a_{i}\epsilon_{i}^{*})^{2}\} =\Re\{a_{i}\epsilon_{i}^{*}\}^{2}-\Im\{a_{i}\epsilon_{i}^{*}\}^{2}, \tag{54}\]
to obtain \(\mathbb{E}\big{[}\Re\{a_{i}\epsilon_{i}^{*}\}^{2}\big{]}\) (the last term in the numerator of (50)) and \(\mathbb{E}\big{[}\Im\{a_{i}\epsilon_{i}^{*}\}^{2}\big{]}\) (the numerator of (51)) as follows:
\[\mathbb{E}\big{[}\Re\{a_{i}\epsilon_{i}^{*}\}^{2}\big{]} =\frac{\mathbb{E}\big{[}|a_{i}\epsilon_{i}^{*}|^{2}\big{]}+\Re\{\mathbb{E}\big{[}(a_{i}\epsilon_{i}^{*})^{2}\big{]}\}}{2}, \tag{55}\] \[\mathbb{E}\big{[}\Im\{a_{i}\epsilon_{i}^{*}\}^{2}\big{]} =\frac{\mathbb{E}\big{[}|a_{i}\epsilon_{i}^{*}|^{2}\big{]}-\Re\{\mathbb{E}\big{[}(a_{i}\epsilon_{i}^{*})^{2}\big{]}\}}{2}. \tag{56}\]
We need to find \(\mathbb{E}\big{[}|a_{i}\epsilon_{i}^{*}|^{2}\big{]}\) and \(\mathbb{E}\big{[}(a_{i}\epsilon_{i}^{*})^{2}\big{]}\) to retrieve \(\mathbb{E}\big{[}\Re\{a_{i}\epsilon_{i}^{*}\}^{2}\big{]}\) and \(\mathbb{E}\big{[}\Im\{a_{i}\epsilon_{i}^{*}\}^{2}\big{]}\). The term \(\mathbb{E}\big{[}|a_{i}\epsilon_{i}^{*}|^{2}\big{]}\) is given by
\[\mathbb{E}\big{[}|a_{i}\epsilon_{i}^{*}|^{2}\big{]} =\mathbb{E}\big{[}|a_{i}\delta_{i}^{*}|^{2}\big{]}+\mathbb{E}\big{[}|a_{i}\tilde{n}_{i}^{*}|^{2}\big{]}\] \[=\mathbb{E}\big{[}|a_{i}\delta_{i}^{*}|^{2}\big{]}+\frac{\sigma_{a}^{2}N_{0}^{\mathtt{BS}}}{P(N-1)} \tag{57}\]
where
\[\mathbb{E}\big{[}|a_{i}\delta_{i}^{*}|^{2}\big{]} =\frac{\mathbb{E}\Big{[}\big{|}a_{i}\mathbf{a}^{H}\boldsymbol{\Phi}\operatorname{diag}\left(\boldsymbol{\xi}\right)\boldsymbol{\Phi}_{i,:}^{H}\big{|}^{2}\Big{]}}{(N-1)^{2}}\] \[=\frac{\mathbb{E}\Big{[}\operatorname{tr}\left\{|a_{i}|^{2}\boldsymbol{\Phi}^{H}\mathbf{a}\mathbf{a}^{H}\boldsymbol{\Phi}\operatorname{diag}\left(\boldsymbol{\Phi}_{i,:}^{H}\right)\boldsymbol{\xi}\boldsymbol{\xi}^{H}\operatorname{diag}\left(\boldsymbol{\Phi}_{i,:}\right)\right\}\Big{]}}{(N-1)^{2}}\] \[=\frac{\operatorname{tr}\left\{\boldsymbol{\Phi}^{H}\operatorname{diag}\left(\boldsymbol{\alpha}_{i}\right)\boldsymbol{\Phi}\operatorname{diag}\left(\boldsymbol{\Phi}_{i,:}^{H}\right)\mathbf{R}_{\boldsymbol{\xi}\boldsymbol{\xi}^{H}}\operatorname{diag}\left(\boldsymbol{\Phi}_{i,:}\right)\right\}}{(N-1)^{2}} \tag{58}\]
where \(\boldsymbol{\alpha}_{i}\) is the vector with entries \([\boldsymbol{\alpha}_{i}]_{n}=\mathbb{E}\big{[}|a_{i}|^{2}|a_{n}|^{2}\big{]}\) and \(\mathbf{R}_{\boldsymbol{\xi}\boldsymbol{\xi}^{H}}\stackrel{{\Delta}}{{=}}\mathbb{E}\big{[}\boldsymbol{\xi}\boldsymbol{\xi}^{H}\big{]}\). For the covariance terms in (48) and (49), we use the approximations
\[\mathbb{E}[z_{i,\Re}z_{t,\Re}]\approx\frac{\mathbb{E}\big{[}\big{(}|a_{i}|^{2}+\Re\{a_{i}\epsilon_{i}^{*}\}\big{)}\big{(}|a_{t}|^{2}+\Re\{a_{t}\epsilon_{t}^{*}\}\big{)}\big{]}}{\sqrt{\mathbb{E}[\varkappa_{i,t}]}} \tag{60}\]
and
\[\mathbb{E}[z_{i,\Im}z_{t,\Im}]\approx\frac{\mathbb{E}[\Im\{a_{i}\epsilon_{i}^{*}\}\Im\{a_{t}\epsilon_{t}^{*}\}]}{\sqrt{\mathbb{E}[\varkappa_{i,t}]}}, \tag{61}\]
where \(\varkappa_{i,t}\) is given in (62). Since the expansion of \(\varkappa_{i,t}\) also includes the numerator terms in (60) and (61), to obtain \(\mathbb{E}[z_{i,\Re}z_{t,\Re}]\) and \(\mathbb{E}[z_{i,\Im}z_{t,\Im}]\), it suffices to compute the expectation of all the terms in (62).
First, we have \(\mathbb{E}\big{[}|a_{i}|^{2}|a_{t}|^{2}\big{]}=\sigma_{a}^{4}\). Using the same approach as in (52), we obtain
\[\mathbb{E}\big{[}|a_{i}|^{2}a_{t}\epsilon_{t}^{*}\big{]}=\mathbb{E}\big{[}|a_{t}|^{2}a_{i}\epsilon_{i}^{*}\big{]}=\frac{N-2}{1-N}\sigma_{a}^{4}\mu_{\xi}. \tag{63}\]
The two terms \(\mathbb{E}\big{[}|a_{i}\epsilon_{t}|^{2}\big{]}\) and \(\mathbb{E}\big{[}|a_{t}\epsilon_{i}|^{2}\big{]}\) are also equal and given by
\[\mathbb{E}\big{[}|a_{i}\epsilon_{t}|^{2}\big{]} =\mathbb{E}\big{[}|a_{i}\delta_{t}|^{2}\big{]}+\mathbb{E}\big{[} |a_{i}\tilde{n}_{t}|^{2}\big{]}\] \[=\mathbb{E}\big{[}|a_{i}\delta_{t}|^{2}\big{]}+\frac{\sigma_{a}^ {2}N_{0}^{\text{RS}}}{P(N-1)} \tag{64}\]
where
\[\mathbb{E}\big{[}|a_{i}\delta_{t}|^{2}\big{]} =\frac{\mathbb{E}\Big{[}\big{|}a_{i}\mathbf{a}^{H}\boldsymbol{\Phi}\operatorname{diag}\left(\boldsymbol{\xi}\right)\boldsymbol{\Phi}_{t,:}^{H}\big{|}^{2}\Big{]}}{(N-1)^{2}}\] \[=\frac{\mathbb{E}\Big{[}\operatorname{tr}\left\{|a_{i}|^{2}\boldsymbol{\Phi}^{H}\mathbf{a}\mathbf{a}^{H}\boldsymbol{\Phi}\operatorname{diag}\left(\boldsymbol{\Phi}_{t,:}^{H}\right)\boldsymbol{\xi}\boldsymbol{\xi}^{H}\operatorname{diag}\left(\boldsymbol{\Phi}_{t,:}\right)\right\}\Big{]}}{(N-1)^{2}}\] \[=\frac{\operatorname{tr}\left\{\boldsymbol{\Phi}^{H}\operatorname{diag}\left(\boldsymbol{\alpha}_{i}\right)\boldsymbol{\Phi}\operatorname{diag}\left(\boldsymbol{\Phi}_{t,:}^{H}\right)\mathbf{R}_{\boldsymbol{\xi}\boldsymbol{\xi}^{H}}\operatorname{diag}\left(\boldsymbol{\Phi}_{t,:}\right)\right\}}{(N-1)^{2}}.\]
The terms \(\mathbb{E}[a_{i}\epsilon_{i}^{*}a_{t}\epsilon_{t}^{*}]\) and \(\mathbb{E}[a_{i}\epsilon_{i}^{*}a_{t}^{*}\epsilon_{t}]\) are obtained as follows:
\[\mathbb{E}[a_{i}\epsilon_{i}^{*}a_{t}\epsilon_{t}^{*}]=\mathbb{E}[a_{i}a_{t}\delta_{i}^{*}\delta_{t}^{*}]\] \[=\frac{\mathbb{E}\big{[}a_{i}a_{t}\mathbf{a}^{H}\boldsymbol{\Phi}\operatorname{diag}\left(\boldsymbol{\xi}\right)\boldsymbol{\Phi}_{i,:}^{H}\mathbf{a}^{H}\boldsymbol{\Phi}\operatorname{diag}\left(\boldsymbol{\xi}\right)\boldsymbol{\Phi}_{t,:}^{H}\big{]}}{(N-1)^{2}}\] \[=\frac{\mathbb{E}\Big{[}\operatorname{tr}\left\{\boldsymbol{\Phi}^{T}a_{i}a_{t}\mathbf{a}^{*}\mathbf{a}^{H}\boldsymbol{\Phi}\operatorname{diag}\left(\boldsymbol{\xi}\right)\boldsymbol{\Phi}_{i,:}^{H}\boldsymbol{\Phi}_{t,:}^{*}\operatorname{diag}\left(\boldsymbol{\xi}\right)\right\}\Big{]}}{(N-1)^{2}}\] \[=\frac{\operatorname{tr}\left\{\boldsymbol{\Phi}^{T}\boldsymbol{\Sigma}\boldsymbol{\Phi}\operatorname{diag}\left(\boldsymbol{\Phi}_{i,:}^{H}\right)\mathbf{R}_{\boldsymbol{\xi}\boldsymbol{\xi}^{T}}\operatorname{diag}\left(\boldsymbol{\Phi}_{t,:}^{H}\right)\right\}}{(N-1)^{2}} \tag{65}\]
where \(\mathbf{\Sigma}\) is a matrix with \(\mathbf{\Sigma}_{i,t}=\mathbf{\Sigma}_{t,i}=\sigma_{a}^{4}\), and zeroes elsewhere, and
\[\mathbb{E}[a_{i}\epsilon_{i}^{*}a_{t}^{*}\epsilon_{t}]=\mathbb{E}[a_{i}a_{t}^{*}\delta_{i}^{*}\delta_{t}]\] \[=\frac{\mathbb{E}\big{[}a_{i}a_{t}^{*}\mathbf{a}^{H}\boldsymbol{\Phi}\operatorname{diag}\left(\boldsymbol{\xi}\right)\boldsymbol{\Phi}_{i,:}^{H}\mathbf{a}^{T}\boldsymbol{\Phi}^{*}\operatorname{diag}\left(\boldsymbol{\xi}^{*}\right)\boldsymbol{\Phi}_{t,:}^{T}\big{]}}{(N-1)^{2}}\] \[=\frac{\mathbb{E}\Big{[}\operatorname{tr}\left\{\boldsymbol{\Phi}^{H}a_{i}a_{t}^{*}\mathbf{a}\mathbf{a}^{H}\boldsymbol{\Phi}\operatorname{diag}\left(\boldsymbol{\xi}\right)\boldsymbol{\Phi}_{i,:}^{H}\boldsymbol{\Phi}_{t,:}\operatorname{diag}\left(\boldsymbol{\xi}^{*}\right)\right\}\Big{]}}{(N-1)^{2}}\] \[=\frac{\operatorname{tr}\left\{\boldsymbol{\Phi}^{H}\boldsymbol{\Omega}\boldsymbol{\Phi}\operatorname{diag}\left(\boldsymbol{\Phi}_{i,:}^{H}\right)\mathbf{R}_{\boldsymbol{\xi}\boldsymbol{\xi}^{H}}\operatorname{diag}\left(\boldsymbol{\Phi}_{t,:}\right)\right\}}{(N-1)^{2}} \tag{66}\]
where \(\mathbf{\Omega}\) is a matrix with \(\mathbf{\Omega}_{t,i}=\sigma_{a}^{4}\), and zeroes elsewhere.
The two terms \(\mathbb{E}\big{[}a_{i}\epsilon_{i}^{*}|\epsilon_{t}|^{2}\big{]}\) and \(\mathbb{E}\big{[}a_{t}\epsilon_{t}^{*}|\epsilon_{i}|^{2}\big{]}\) are also equal and given by
\[\mathbb{E}\big{[}a_{i}\epsilon_{i}^{*}|\epsilon_{t}|^{2}\big{]} =\mathbb{E}\big{[}a_{i}\delta_{i}^{*}|\delta_{t}|^{2}\big{]}+\mathbb{E}\big{[}a_{i}\delta_{i}^{*}|\tilde{n}_{t}|^{2}\big{]}\] \[\approx\mathbb{E}\big{[}a_{i}\delta_{i}^{*}|\tilde{n}_{t}|^{2}\big{]}=-\frac{(N-2)N_{0}^{\mathtt{BS}}}{P(N-1)^{2}}\sigma_{a}^{2}\mu_{\xi}. \tag{67}\]
Finally, the term \(\mathbb{E}\big{[}|\epsilon_{i}|^{2}|\epsilon_{t}|^{2}\big{]}\) is approximated as
\[\mathbb{E}\big{[}|\epsilon_{i}|^{2}|\epsilon_{t}|^{2}\big{]} =\mathbb{E}\big{[}|\delta_{i}|^{2}|\delta_{t}|^{2}\big{]}+2\mathbb{E}\big{[}|\delta_{i}|^{2}|\tilde{n}_{t}|^{2}\big{]}+\mathbb{E}\big{[}|\tilde{n}_{i}|^{2}|\tilde{n}_{t}|^{2}\big{]}\] \[\approx 2\mathbb{E}\big{[}|\delta_{i}|^{2}|\tilde{n}_{t}|^{2}\big{]}+\mathbb{E}\big{[}|\tilde{n}_{i}|^{2}|\tilde{n}_{t}|^{2}\big{]}\] \[=\frac{2(N-2)N_{0}^{\mathtt{BS}}}{P(N-1)^{2}}\sigma_{a}^{2}\mu_{|\xi|^{2}}+\frac{(N_{0}^{\mathtt{BS}})^{2}}{P^{2}(N-1)^{2}}. \tag{68}\]
The quantities \(\mathbb{E}\big{[}a_{i}\delta_{i}^{*}|\delta_{t}|^{2}\big{]}\) in (67) and \(\mathbb{E}\big{[}|\delta_{i}|^{2}|\delta_{t}|^{2}\big{]}\) in (68) are ignored in our approximation since they make a negligible contribution to the result.
We approximate the distribution of \(f_{\Re}\) and \(f_{\Im}\) as Gaussian with the above approximate means and variances, and so \(\operatorname{BER}_{\mathrm{DD2}}\) can be obtained in the same manner as in (29) - (31).
## VI Numerical Results
In this section, we present various numerical results to verify our SE analysis and to show the benefits of the proposed DD channel estimation framework. We use a general channel model with \(\mathbf{h}_{\mathrm{d},k}=\sqrt{\beta_{k}^{\mathrm{UB}}}\tilde{\mathbf{h}}_{\mathrm{d},k}\), \(\mathbf{g}_{k}=\sqrt{\beta_{k}^{\mathrm{UR}}}\tilde{\mathbf{g}}_{k}\), and \(\mathbf{H}=\sqrt{\beta^{\mathrm{RB}}}\tilde{\mathbf{H}}\), where \(\tilde{\mathbf{h}}_{\mathrm{d},k}\sim\mathcal{CN}(\mathbf{0},\boldsymbol{\Sigma}_{k}^{\mathrm{UB}})\), the small-scale terms \(\tilde{\mathbf{g}}_{k}\) and \(\tilde{\mathbf{H}}\) are defined analogously, and \(\beta_{k}^{\mathrm{UB}}\), \(\beta_{k}^{\mathrm{UR}}\), and \(\beta^{\mathrm{RB}}\) denote the large-scale fading coefficients of the user-BS, user-RIS, and RIS-BS links, respectively. Fig. 2 compares the simulated and analytical SE of the PD and DD approaches for \(K=M=1\) and \(N=50\). The analytical results closely match the simulations and
accurately predict the performance crossover point. Thus the analytical SE can be used in the system design to determine the crossing point and decide whether the PD or DD approach should be used.
We evaluate the SE of the PD and DD approaches as the number of RIS elements \(N\) increases in Fig. 3, where the transmit power is fixed at 5 dBm. It is interesting to observe that increasing the number of RIS elements can actually lead to a reduction in SE for the PD framework, since the pilot overhead of the PD approach grows proportionally with \(N\), leading to a reduction in the number of time slots available for data transmission. On the other hand, the pilot overhead of the DD framework does not depend on the number of RIS elements, and thus the DD approach does not suffer from SE reduction as \(N\) increases. Again, our analysis accurately predicts the performance crossover point, which is an important factor for the system design.
In Figs. 4 and 5, we also consider a single-user scenario, but the BS is equipped with multiple antennas. After the channel estimation stage, the phase shift vector \(\boldsymbol{\phi}\) of the RIS is optimized to maximize the effective channel strength \(\|\hat{\mathbf{h}}_{\mathrm{d},1}+\hat{\mathbf{A}}_{1}\boldsymbol{\phi}\|^{2}\), which is solved by semi-definite relaxation (SDR). Fig. 4 shows the channel estimation and spectral efficiency performance for \(M=8\), \(N=50\), \(\rho_{i}^{\mathcal{A}}=0.5\), and 16-PSK signalling with different noise power levels and user transmit powers. The normalized mean-squared error (NMSE) is computed as \(\mathbb{E}\Big{[}\|\hat{\mathbf{H}}_{\mathrm{c},1}-\mathbf{H}_{\mathrm{c},1}\|_{F}^{2}/\|\mathbf{H}_{\mathrm{c},1}\|_{F}^{2}\Big{]}\). The results in Fig. 4(a) show that the PD method achieves a better channel estimate than DD, but this gain is offset by its increased training overhead once the transmit power is high enough or the noise level low enough that the DD method can reliably decode the data at the RIS.
Fig. 5 illustrates that there is a trade-off in the choice of the power-splitting coefficient \(\rho_{i}^{\mathcal{A}}\), which determines the fraction \((\rho_{i}^{\mathcal{A}})^{2}\) of the incident power that is reflected by the RIS elements with active receivers. A larger \(\rho_{i}^{\mathcal{A}}\) means more signal power is reflected and less is sensed by the RIS. When the amount of signal power sensed by the RIS is too small, the noise at the RIS may dominate the received signal and cause data detection errors, which in turn leads to lower channel estimation accuracy and SE. On the other hand, if the amount of signal power sensed by the RIS is large enough to reliably recover the data symbols at the RIS, the signals reflected from the RIS to the BS will be weaker, which can lead to less accurate channel estimation at the BS and a reduction in SE as well. The trade-off is mild at low noise levels, but becomes more pronounced as the SNR decreases. For the cases considered in this example, a relatively small value such as \(\rho_{i}^{\mathcal{A}}=0.2\) appears to provide the best system performance.
To study the case of multiple users, we position the RIS and the BS at the locations \((x,y)=(50,50)\) and \((x,y)=(100,0)\), respectively, and we locate the users randomly within a square whose side length is \(20\)m and which is centered at the origin. Simulation results for a scenario with \(K=4\), \(M=8\), \(N=200\), \(\rho_{i}^{\mathcal{A}}=0.5\), and 16-PSK signalling are given in Fig. 6. In sub-phase 2b, we employ the conventional zero-forcing (ZF) detector for recovering the data symbols at the RIS. To configure the RIS phase shifts after the channel estimation stage, we find the \(\boldsymbol{\phi}\) that maximizes the minimum signal-to-interference-plus-noise ratio (SINR) using the SDR approach as in [47]. It is seen that while the typical user and the other users have the same SE for the PD approach, the SE of the typical user is much higher than the SE of the other users when the DD approach is used, since only the typical user transmits during the data slots of phase 1, while all users transmit data in phase 2. This creates a fairness issue that can be addressed in a number of ways. For example, the red curve shows the result obtained by rotating the role of the typical
Fig. 3: SE comparison with \(K=M=1\), \(N\) varies, and \(P=5\) dBm.
Fig. 2: SE comparison with \(K=M=1\) and \(N=50\).
user among all the users over different coherence blocks. In this approach, the average SE of all the users will be the same, and an improvement compared with the unbalanced case is obtained. In particular, the fair DD approach yields approximately a 60% improvement in SE compared to the PD approach.
Finally, we study the effect of the number of sensing elements \(N_{\mathcal{A}}\) on the spectral efficiency in Fig. 7. The noise power at the RIS is set to \(-120\) dBm/Hz and the transmit power \(P\) is 10 dBm. Interestingly, increasing the number of sensing elements \(N_{\mathcal{A}}\) only slightly improves the spectral efficiency of the DD approach, indicating that very few sensing elements at the RIS are necessary to achieve the benefit of decision direction. The SE improves more with increasing \(N_{\mathcal{A}}\) for the PD approach, since unlike DD, increasing \(N_{\mathcal{A}}\) results in a reduction in the pilot overhead of PD.
## VII Conclusion
In this paper, we have proposed a decision-directed channel estimation framework for general unstructured RIS channel models. It has been shown that with the help of some RIS elements with active receivers, it is possible to accurately estimate the CSI with a pilot overhead only proportional to the number of users, and thus significantly improve the spectral efficiency compared to systems with passive RIS arrays. We also performed an extensive spectral efficiency analysis to verify the efficiency of the proposed DD framework. Our analysis takes into account the channel estimation and data detection errors at both the RIS and the BS, and thus accurately reflects the data detection uncertainty inherent in the decision-directed approach.
## Appendix A Proof of Theorem 1.
The symbol error rate (SER) can be approximated as
\[\mathrm{SER}\approx\mathbb{P}[\tilde{y}_{\Re}\tan\theta-\tilde{y}_{\Im}\leq 0 ]+\mathbb{P}[\tilde{y}_{\Re}\tan\theta+\tilde{y}_{\Im}\leq 0] \tag{69}\]
where \(\tilde{y}_{\Re}\tan\theta-\tilde{y}_{\Im}=0\) and \(\tilde{y}_{\Re}\tan\theta+\tilde{y}_{\Im}=0\) define the rotated decision boundaries. We have \((\tilde{y}_{\Re}\tan\theta-\tilde{y}_{\Im})\sim\mathcal{N}(\tilde{\mu},\tilde{\sigma}^{2})\) and \((\tilde{y}_{\Re}\tan\theta+\tilde{y}_{\Im})\sim\mathcal{N}(\tilde{\mu},\tilde{\sigma}^{2})\), where
\[\tilde{\mu} =\sqrt{P}N\mu_{z_{\Re}}\tan\theta,\] \[\tilde{\sigma}^{2} =\left(PN\sigma_{z_{\Re}}^{2}+\frac{N_{0}}{2}\right)\tan^{2}\theta+PN\sigma_{z_{\Im}}^{2}+\frac{N_{0}}{2}.\]
Therefore,
\[\mathbb{P}[\tilde{y}_{\Re}\tan\theta-\tilde{y}_{\Im}\leq 0]=\mathbb{P}[\tilde{y}_{\Re}\tan\theta+\tilde{y}_{\Im}\leq 0]=Q\left(\frac{\sqrt{P}N\mu_{z_{\Re}}\tan\theta}{\sqrt{\left(PN\sigma_{z_{\Re}}^{2}+\frac{N_{0}}{2}\right)\tan^{2}\theta+PN\sigma_{z_{\Im}}^{2}+\frac{N_{0}}{2}}}\right), \tag{70}\]
which means the SER can be approximated as
\[\mathrm{SER}\approx 2Q\left(\frac{\sqrt{P}N\mu_{z_{\Re}}\tan\theta}{\sqrt{\left(PN\sigma_{z_{\Re}}^{2}+\frac{N_{0}}{2}\right)\tan^{2}\theta+PN\sigma_{z_{\Im}}^{2}+\frac{N_{0}}{2}}}\right). \tag{71}\]
At high SNRs, \(\epsilon\) is small, and we have
\[\mu_{z_{\Re}} =\mathbb{E}[z_{i,\Re}]\approx\mathbb{E}[|a_{i}|]=\mathbb{E}[|h_{i}|]\,\mathbb{E}[|g_{i}|]=\frac{\pi}{4}\sigma_{a}, \tag{72}\] \[\sigma_{z_{\Re}}^{2} =\mathrm{Var}[z_{i,\Re}]=\mathbb{E}[z_{i,\Re}^{2}]-|\mathbb{E}[z_{i,\Re}]|^{2}\approx\mathbb{E}[|a_{i}|^{2}]-\mathbb{E}[|a_{i}|]^{2}=\left(1-\frac{\pi^{2}}{16}\right)\sigma_{a}^{2}. \tag{73}\]
Substituting (72) and (73) into (71) and using the result that \(\mathrm{BER}\approx\mathrm{SER}/\log_{2}(D)\) for a Gray code at high SNRs gives us the approximated BER in (32).
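The resulting approximation is easy to evaluate numerically. Below is a sketch under two assumptions of ours: \(\theta=\pi/D\) for \(D\)-PSK, and the imaginary-part variance taken equal to the real-part variance (73), since only the latter is derived above:

```python
from math import erfc, log2, pi, sqrt, tan

def q_func(x):
    """Gaussian tail function Q(x)."""
    return 0.5 * erfc(x / sqrt(2.0))

def ber_dpsk_at_ris(P, N, N0, D, sigma_a2):
    """High-SNR BER approximation built from (71)-(73).
    Assumptions (ours): theta = pi/D, and sigma_{z,Im}^2 = sigma_{z,Re}^2."""
    theta = pi / D
    mu = (pi / 4.0) * sqrt(sigma_a2)            # (72)
    var = (1.0 - pi ** 2 / 16.0) * sigma_a2     # (73)
    num = sqrt(P) * N * mu * tan(theta)
    den = sqrt((P * N * var + N0 / 2.0) * tan(theta) ** 2
               + P * N * var + N0 / 2.0)
    ser = 2.0 * q_func(num / den)               # (71)
    return ser / log2(D)                        # Gray-code BER
```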
## Appendix B Proof of Lemma 1.
We have
\[\mathbb{E}[\xi_{t}] =1-\mathbb{E}[s_{t}\hat{s}_{t}^{*}], \tag{74}\] \[\mathbb{E}\big{[}\xi_{t}^{2}\big{]} =1-2\mathbb{E}[s_{t}\hat{s}_{t}^{*}]+\mathbb{E}\big{[}(s_{t}\hat{ s}_{t}^{*})^{2}\big{]}. \tag{75}\]
Thus, to obtain \(\mathbb{E}[\xi_{t}]\) and \(\mathbb{E}\big{[}\xi_{t}^{2}\big{]}\), we need to compute \(\mathbb{E}[s_{t}\hat{s}_{t}^{*}]\) and \(\mathbb{E}\big{[}(s_{t}\hat{s}_{t}^{*})^{2}\big{]}\), which are given as follows:
\[\mathbb{E}[s_{t}\hat{s}_{t}^{*}] =\sum_{d=0}^{D-1}p_{d}^{\texttt{DD1}}\mathcal{S}(0)\mathcal{S}(d)^{*}=p_{0}^{\texttt{DD1}}-p_{\frac{D}{2}}^{\texttt{DD1}}+2\sum_{d=1}^{\frac{D}{2}-1}p_{d}^{\texttt{DD1}}\cos\left(\frac{2\pi d}{D}\right), \tag{76}\] \[\mathbb{E}\big{[}(s_{t}\hat{s}_{t}^{*})^{2}\big{]} =\sum_{d=0}^{D-1}p_{d}^{\texttt{DD1}}(\mathcal{S}(0)\mathcal{S}(d)^{*})^{2}=p_{0}^{\texttt{DD1}}+p_{\frac{D}{2}}^{\texttt{DD1}}+2\sum_{d=1}^{\frac{D}{2}-1}p_{d}^{\texttt{DD1}}\cos\left(\frac{4\pi d}{D}\right). \tag{77}\]
Substituting (76) and (77) into (74) and (75), we obtain \(\mu_{\xi}\) and \(\mu_{\xi^{2}}\) as in (40) and (41), respectively. The expectation of \(\left|\xi_{t}\right|^{2}\) is given as
\[\mathbb{E}\big{[}\big{|}\xi_{t}\big{|}^{2}\big{]} =\mathbb{E}\big{[}\big{|}\hat{s}_{t}-s_{t}\big{|}^{2}\big{]}=\sum_{d=0}^{D-1}\left|\mathcal{S}(d)-\mathcal{S}(0)\right|^{2}\,p_{d}^{\texttt{DD1}}=4p_{\frac{D}{2}}^{\texttt{DD1}}+4\sum_{d=1}^{\frac{D}{2}-1}\left[1-\cos\left(\frac{2\pi d}{D}\right)\right]p_{d}^{\texttt{DD1}}. \tag{78}\]
Note that in (76), (77), and (78) we have used the following results, valid for \(d=1,\,\ldots,\,D/2-1\): \(p_{d}^{\texttt{DD1}}=p_{D-d}^{\texttt{DD1}}\), \(\mathcal{S}(0)\mathcal{S}(d)^{*}+\mathcal{S}(0)\mathcal{S}(D-d)^{*}=2\cos(2\pi d/D)\), \((\mathcal{S}(0)\mathcal{S}(d)^{*})^{2}+(\mathcal{S}(0)\mathcal{S}(D-d)^{*})^{2}=2\cos(4\pi d/D)\), and \(\left|\mathcal{S}(d)-\mathcal{S}(0)\right|^{2}=\left|\mathcal{S}(D-d)-\mathcal{S}(0)\right|^{2}=2(1-\cos\left(2\pi d/D\right))\).
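The moments (74)-(78) depend only on the symbol-error probabilities \(p_{d}^{\texttt{DD1}}\). Below is a small sketch evaluating these sums for an even constellation size \(D\) (the probability vector passed in is a hypothetical input):

```python
from math import cos, pi

def dd_error_moments(p):
    """Evaluate (74)-(78) from D-PSK error probabilities p[0..D-1],
    where p[d] is the probability of detecting S(d) when S(0) was sent.
    Requires D even and the symmetry p[d] = p[D-d]."""
    D = len(p)
    e_s = p[0] - p[D // 2] + 2 * sum(
        p[d] * cos(2 * pi * d / D) for d in range(1, D // 2))        # (76)
    e_s2 = p[0] + p[D // 2] + 2 * sum(
        p[d] * cos(4 * pi * d / D) for d in range(1, D // 2))        # (77)
    e_abs2 = 4 * p[D // 2] + 4 * sum(
        (1 - cos(2 * pi * d / D)) * p[d] for d in range(1, D // 2))  # (78)
    mu_xi = 1 - e_s               # (74)
    mu_xi2 = 1 - 2 * e_s + e_s2   # (75)
    return mu_xi, mu_xi2, e_abs2

# Perfect detection (p[0] = 1) gives vanishing error moments:
assert dd_error_moments([1] + [0] * 15) == (0, 0, 0)
```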
|
2307.16646 | Constraining a companion of the galactic center black hole, Sgr A* | We use 23 years of astrometric and radial velocity data on the orbit of the
star S0-2 to constrain a hypothetical intermediate-mass black hole orbiting the
massive black hole Sgr A* at the Galactic center. The data place upper limits
on variations of the orientation of the stellar orbit (inclination, nodal
angle, and pericenter) at levels between 0.02 and 0.07 degrees per year. We use
a combination of analytic estimates and full numerical integrations of the
orbit of S0-2 in the presence of a black-hole binary. For a companion IMBH
whose semi-major axis $a_c$ is larger than that of S0-2 (1020 a.u.), we find
that in the region between 1000 and 4000 a.u., a companion black hole with mass
$m_c$ between $10^3$ and $10^5 M_\odot$ is excluded, with a boundary behaving
as $a_c \sim m_c^{1/3}$. For a companion with $a_c < 1020$ a.u., we find that a
black hole with mass between $10^3$ and $10^5 \, M_\odot$ is again excluded,
with a boundary behaving as $a_c \sim m_c^{-1/2}$. These bounds arise from
quadrupolar perturbations of the orbit of S0-2. However, significantly stronger
bounds on the mass of an inner companion arise from the fact that the location
of S0-2 is measured relative to the bright emission of Sgr A*. As a
consequence, that separation is perturbed by the ``wobble'' of Sgr A* about the
center of mass between it and the companion, leading to ``apparent''
perturbations of S0-2's orbit that also include a dipole component. The result
is a set of bounds as small as $400 \, M_\odot$ at 200 a.u.; the numerical
simulations suggest a bound from these effects varying as $a_c \sim m_c^{-1}$.
We compare and contrast our results with those from a recent analysis by the
GRAVITY collaboration. | Clifford M. Will, Smadar Naoz, AurΓ©lien Hees, Alexandria Tucker, Eric Zhang, Tuan Do, Andrea Ghez | 2023-07-31T13:26:44Z | http://arxiv.org/abs/2307.16646v2 | # Constraining a companion of the Galactic center black hole, Sgr A*
###### Abstract
We use 23 years of astrometric and radial velocity data on the orbit of the star S0-2 to constrain a hypothetical intermediate-mass black hole orbiting the massive black hole Sgr A* at the Galactic center. The data place upper limits on variations of the orientation of the stellar orbit (inclination, nodal angle, and pericenter) at levels between 0.02 and 0.07 degrees per year. We use a combination of analytic estimates and full numerical integrations of the orbit of S0-2 in the presence of a black-hole binary. For a companion IMBH whose semi-major axis \(a_{c}\) is larger than that of S0-2 (1020 a.u.), we find that in the region between 1000 and 4000 a.u., a companion black hole with mass \(m_{c}\) between \(10^{3}\) and \(10^{5}\,M_{\odot}\) is excluded, with a boundary behaving as \(a_{c}\sim m_{c}^{1/3}\). For a companion with \(a_{c}<1020\) a.u., we find that a black hole with mass between \(10^{3}\) and \(10^{5}\,M_{\odot}\) is again excluded, with a boundary behaving as \(a_{c}\sim m_{c}^{-1/2}\). These bounds arise from quadrupolar perturbations of the orbit of S0-2. However, significantly stronger bounds on the mass of an inner companion arise from the fact that the location of S0-2 is measured relative to the bright emission of Sgr A*. As a consequence, that separation is perturbed by the "wobble" of Sgr A* about the center of mass between it and the companion, leading to "apparent" perturbations of S0-2's orbit that also include a dipole component. The result is a set of bounds as small as \(400\,M_{\odot}\) at 200 a.u.; the numerical simulations suggest a bound from these effects varying as \(a_{c}\sim m_{c}^{-1}\). We compare and contrast our results with those from a recent analysis by the GRAVITY collaboration.
## 1. Introduction
Sagittarius A* (Sgr A*) is a compact, bright radio source at the center of the Milky Way. Recent technological advances, such as the advent of adaptive optics (AO), have made it possible to observe stars orbiting this source. The results imply that this is the likely location of a supermassive black hole (SMBH) of about 4 million solar masses (e.g., Ghez et al., 2000, 2008; Gillessen et al., 2009), surrounded by a cluster of stars (e.g., Ghez et al., 2003; Gillessen et al., 2009; Lu et al., 2013). Combined infrared (e.g., Keck observations, Witzel et al., 2018), radio and X-ray observations (e.g., JVLA and Chandra observations, Dibi et al., 2016; Capellupo et al., 2017) have revealed hot emission from gas near the event horizon of Sgr A*. Further observations by the Event Horizon Telescope collaboration may reveal additional information about the behavior of gas and light close to the black hole (Event Horizon Telescope Collaboration et al., 2019; Wielgus et al., 2022). Thus, the proximity of the Milky Way's galactic center provides a unique laboratory for addressing issues in the fundamental physics of supermassive black holes, their impact on the central regions of galaxies, and their role in galaxy formation and evolution.
Since almost every galaxy harbors a SMBH at its center, the hierarchical nature of the galaxy formation paradigm suggests that galaxy mergers may result in the formation of _binaries_ of SMBH (e.g., Di Matteo et al., 2005; Hopkins et al., 2006; Robertson et al., 2006; Callegari et al., 2009). While observations of SMBH binaries are challenging, there exist several confirmed binary candidates with sub-parsec to hundreds of parsec separations (e.g., Sillanpaa et al., 1988; Rodriguez et al., 2006; Komossa et al., 2008; Bogdanovic et al., 2009; Boroson & Lauer, 2009; Dotti et al., 2009; Batcheldor et al., 2010; Deane et al., 2014; Liu et al., 2014; Liu et al., 2016; Li et al., 2016; Bansal et al., 2017; Kharb et al., 2017; Runnoe et al., 2017; Pesce et al., 2018). Additionally, observations of dual active galactic nuclei with kpc-scale separations have been suggested as SMBH binary candidates (e.g., Komossa et al., 2003; Bianchi et al., 2008; Comerford et al., 2009; Liu et al., 2010; Green et al., 2010; Smith et al., 2010; Comerford et al., 2018; Stemo et al., 2020).
Recent observations by the LIGO/Virgo/Kagra collaboration have now confirmed the existence of intermediate-mass black holes (IMBH) (e.g., GW190521 The LIGO Scientific Collaboration et al., 2020a,b). Our galactic center may harbor IMBH as a result of a possible minor merger with a low-mass or dwarf galaxy or even with a globular cluster. Such a scenario was considered by Rashkov & Madau (2013), who suggested that if IMBH serve as the seeds of SMBH in the center of galaxies, hierarchical galaxy evolution could yield many IMBH in our galaxy. Additionally, a combination of theoretical and observational arguments suggests that IMBH
2309.10943 | Recent advances in algorithmic problems for semigroups | In this article we survey recent progress in the algorithmic theory of matrix
semigroups. The main objective in this area of study is to construct algorithms
that decide various properties of finitely generated subsemigroups of an
infinite group $G$, often represented as a matrix group. Such problems might
not be decidable in general. In fact, they gave rise to some of the earliest
undecidability results in algorithmic theory. However, the situation changes
when the group $G$ satisfies additional constraints. In this survey, we give an
overview of the decidability and the complexity of several algorithmic problems
in the cases where $G$ is a low-dimensional matrix group, or a group with
additional structures such as commutativity, nilpotency and solvability. | Ruiwen Dong | 2023-09-19T21:51:45Z | http://arxiv.org/abs/2309.10943v1 | # Recent advances in algorithmic problems for semigroups
###### Abstract
In this article we survey recent progress in the algorithmic theory of matrix semigroups. The main objective in this area of study is to construct algorithms that decide various properties of finitely generated subsemigroups of an infinite group \(G\), often represented as a matrix group. Such problems might not be decidable in general. In fact, they gave rise to some of the earliest undecidability results in algorithmic theory. However, the situation changes when the group \(G\) satisfies additional constraints. In this survey, we give an overview of the decidability and the complexity of several algorithmic problems in the cases where \(G\) is a low-dimensional matrix group, or a group with additional structures such as commutativity, nilpotency and solvability.
## 1 Introduction
It has been known since the work of Church and Turing from the 1930s that certain decision problems in mathematical logic do not admit algorithmic solutions. For a long period of time, all examples of undecidable problems were derived directly from mathematical logic or the theory of computing, and the notion of undecidability seemed intangible for most mathematicians. It was not until the late 1940s that the Soviet mathematician Andrey Markov produced a concrete undecidable problem using linear algebra. In his seminal work _"On certain insoluble problems concerning matrices"_ [Markov 1947], Markov studied the following decision problem. Its input is a finite set of square matrices \(\mathcal{G}=\{A_{1},\ldots,A_{K}\}\) and a matrix \(T\), and the problem is whether or not there exist an integer \(p\geq 1\) and a sequence \(A_{i_{1}},\ldots,A_{i_{p}}\) of matrices in \(\mathcal{G}\) such that \(T=A_{i_{1}}A_{i_{2}}\cdots A_{i_{p}}\). Markov showed this problem to be undecidable for integer matrices of dimension at least six, thus marking the first undecidability result obtained outside of mathematical logic and the theory of computing.
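The positive side of Markov's problem can always be verified by brute force: enumerating products of increasing length is a semi-decision procedure, and his theorem says precisely that the negative side admits no algorithm. A naive sketch in Python (the cut-off `max_len` is an arbitrary illustration, not part of any decision procedure):

```python
def mat_mul(A, B):
    """Product of two square integer matrices given as tuples of tuples."""
    n = len(A)
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(n))
                       for j in range(n)) for i in range(n))

def semigroup_member_semi(gens, T, max_len=10):
    """Search for T among products of up to max_len generators.
    Only a semi-decision procedure: a hit proves membership, but by
    Markov's theorem no bound can certify non-membership in general."""
    gens = [tuple(map(tuple, A)) for A in gens]
    T = tuple(map(tuple, T))
    frontier = set(gens)
    for length in range(1, max_len + 1):
        if T in frontier:
            return length        # T is a product of `length` generators
        frontier = {mat_mul(M, A) for M in frontier for A in gens}
    return None                  # inconclusive within the search bound
```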
Markov's work falls into the area of computational group theory, which is one of the oldest and most well-developed parts of computational algebra. The "official" start of computational group theory dates back to 1911, when Max Dehn formulated three basic problems that would become its foundation. Given a finite presentation of a group \(G\), it is asked whether there are algorithms that solve the _Word Problem_ (whether an element is the neutral element), the _Conjugacy Problem_ (whether two elements are conjugate in \(G\)), and the _Isomorphism Problem_ (whether \(G\) is isomorphic to another finitely presented group). It was not until the 1950s that the three problems were shown to be undecidable in general groups [23, 1].
Using the language of computational group theory, Markov's problem can be reformulated as deciding _Semigroup Membership_ in a matrix (semi)group. For a finite subset \(\mathcal{G}\) of a group \(G\), denote by \(\langle\mathcal{G}\rangle\) the subsemigroup generated by \(\mathcal{G}\). Then the Semigroup Membership problem can be formulated as follows.
1. _(Semigroup Membership)_ given a finite set \(\mathcal{G}\) and an element \(T\) in \(G\), decide whether \(T\in\langle\mathcal{G}\rangle\).
Markov's undecidability result as well as the subsequent undecidability results for Dehn's problems generated a surge of research interest in computational group theory. In the 1960s, Mikhailova introduced the group version of Semigroup Membership. For a finite subset \(\mathcal{G}\) of a group \(G\), denote by \(\langle\mathcal{G}\rangle_{grp}\) the subgroup generated by \(\mathcal{G}\).
2. _(Group Membership)_ given a finite set \(\mathcal{G}\) and an element \(T\) in \(G\), decide whether \(T\in\langle\mathcal{G}\rangle_{grp}\).
[Mikhailova 1966] showed undecidability of Group Membership when \(G\) is the group \(\mathsf{SL}(4,\mathbb{Z})\) of \(4\times 4\) integer matrices with determinant one. One may note that undecidability of Group Membership subsumes that of Semigroup Membership by including the inverse of the elements in \(\mathcal{G}\).
There has been a steady growth in research intensity for Group and Semigroup Membership problems as they establish important connections between algebra and logic. These problems now play an essential role in analysing system dynamics and program termination, and have numerous applications in automata theory, complexity theory, and interactive proof systems [Beals and Babai 1993; Blondel et al. 2005; Derksen et al. 2005; Hrushovski et al. 2018]. It is worth noting that membership problems are in fact decidable for many classes of groups, such as abelian groups and low dimensional matrix groups [Babai et al. 1996; Choffrut and Karhumaki 2005]. For example, in the matrix group \(\mathsf{SL}(2,\mathbb{Z})\), Semigroup Membership is decidable by a classic result of [Choffrut and Karhumaki 2005], and Group Membership is decidable in polynomial time (PTIME) by a recent result of [Lohrey 2023]. Our interest in computational group theory is two-fold. From an application point of view, we are interested in developing practical algorithms for specific classes of groups. From a theory point of view, we aim to close the gap between decidability and undecidability.
For most classes of groups, Group Membership is much more tractable than Semigroup Membership. For example, Group Membership is decidable in the class of _polycyclic groups_[Kopytov 1968]; whereas Semigroup Membership is undecidable even in the subclass of _nilpotent groups_[Roman'kov 2022]. This gap motivated the introduction of two intermediate problems in [Choffrut and Karhumaki 2005]:
1. _(Identity Problem)_ given a finite subset \(\mathcal{G}\) of \(G\), decide whether \(\langle\mathcal{G}\rangle\) contains the neutral element of \(G\).
2. _(Group Problem)_ given a finite subset \(\mathcal{G}\) of \(G\), decide whether \(\langle\mathcal{G}\rangle=\langle\mathcal{G}\rangle_{grp}\).
In other words, the Identity Problem asks whether there exists a non-empty sequence of elements from \(\mathcal{G}\) whose product is the neutral element; and the Group Problem asks whether the semigroup \(\langle\mathcal{G}\rangle\) is a group.
The Group Problem is crucial in determining structural properties of a semigroup. For example, given a decision procedure for the Group Problem, one can compute a generating set for the _group of units_ of a finitely generated semigroup \(\langle\mathcal{G}\rangle\). We also point out that there are significantly more available algorithms for groups than there are for semigroups. Therefore performing preliminary checks using the Group Problem can help decide Semigroup Membership in many special cases. Using the Group Problem, one can also decide the lesser known _Inverse Problem_: given a finite set \(\mathcal{G}\) and an element \(a\in\mathcal{G}\), decide whether \(a^{-1}\in\langle\mathcal{G}\rangle\). The solution for the Identity Problem is usually the most essential special case on the way to building an algorithm for Semigroup Membership.
Unfortunately, decidability of these two intermediate problems remains open for most classes of groups, even in cases where decidability of Group and Semigroup Membership already have definitive answers. Notable examples include nilpotent groups, polycyclic groups and metabelian groups.
Beyond the membership problems and the intermediate problems, another classic problem is _Semigroup Intersection_:
1. _(Semigroup Intersection)_ given two finite subsets \(\mathcal{G},\mathcal{H}\) of \(G\), decide whether \(\langle\mathcal{G}\rangle\cap\langle\mathcal{H}\rangle=\emptyset\).
In the seminal paper where Markov demonstrated undecidability of Semigroup Membership, he also showed undecidability of Semigroup Intersection for integer matrix groups of dimension at least four [Markov 1947]. Markov's idea is to encode the famous
_Post Correspondence Problem_, which can be reformulated as Semigroup Intersection in a direct product of two free monoids.
We may note that some algorithmic problems are intrinsically more difficult than others. For example, Semigroup Membership subsumes Group Membership by including the inverses of the generators. In terms of decidability, both Semigroup Membership and Semigroup Intersection subsume the Group Problem, which itself subsumes the Identity Problem [Bell and Potapov 2010]. We also point out that all five problems are special cases of the more general _Rational Subset Membership_ problem, which asks whether an element \(T\) is contained in a given _rational subset_ of \(G\) (a subset defined by a rational expression over the generators of \(G\)). Rational Subset Membership is beyond the scope of this article, and we refer interested readers to [Lohrey 2013] for a survey on this problem. See also [Lohrey et al. 2015; Chistikov and Haase 2016; Cadilhac et al. 2020] for recent advances. Reductions in terms of decidability between the different algorithmic problems are summarized in Figure 1.
### Example: the case of \(\mathbb{Z}^{d}\)
We end this introduction with an example to illustrate why the above algorithmic problems can be considered fundamental in computational group theory.
Let the ambient group \(G\) be the abelian group \(\mathbb{Z}^{d}\) for some \(d\in\mathbb{N}\), where the group law "\(+\)" is coordinate-wise addition: \((a_{1},\ldots,a_{d})+(b_{1},\ldots,b_{d})=(a_{1}+b_{1},\ldots,a_{d}+b_{d})\). We show that the above algorithmic problems in the group \(\mathbb{Z}^{d}\) correspond to several classic problems in computer science.
Consider Semigroup Membership in \(\mathbb{Z}^{d}\). Let \(\mathcal{G}\coloneqq\{(a_{11},\ldots,a_{1d}),\ldots,(a_{K1},\ldots,a_{Kd})\}\) and \(T\coloneqq(b_{1},\ldots,b_{d})\) be the input of Semigroup Membership. Recall that we want to decide whether \(T\) is contained in the semigroup \(\langle\mathcal{G}\rangle\). Since \(G=\mathbb{Z}^{d}\) is abelian, the semigroup \(\langle\mathcal{G}\rangle\) can be explicitly described as follows:
\[\langle\mathcal{G}\rangle=\left\{n_{1}\cdot(a_{11},\ldots,a_{1d})+\cdots+n_{K }\cdot(a_{K1},\ldots,a_{Kd})\;\big{|}\;(n_{1},\ldots,n_{K})\in\mathbb{N}^{K} \setminus\{0^{K}\}\right\}. \tag{1}\]
Then, deciding whether \(T\in\langle\mathcal{G}\rangle\) corresponds to deciding whether the system of linear equations
\[n_{1}a_{11}+\cdots+n_{K}a_{K1} =b_{1},\] \[n_{1}a_{12}+\cdots+n_{K}a_{K2} =b_{2},\] \[\vdots\] \[n_{1}a_{1d}+\cdots+n_{K}a_{Kd} =b_{d},\]

admits a non-negative solution \((n_{1},\ldots,n_{K})\in\mathbb{N}^{K}\setminus\{0^{K}\}\). This is equivalent to the fundamental problem of _Integer Programming_ and is therefore NP-complete.

Fig. 1: Reductions between different algorithmic problems.
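Both this semigroup-membership check and the group-membership variant discussed next reduce to integer linear constraints that any ILP-capable solver handles. A sketch using the z3 SMT solver's Python API (assuming the `z3-solver` package; the function name is ours):

```python
from z3 import Ints, Solver, sat  # assumes the z3-solver package

def membership_in_Zd(gens, target, group_version=False):
    """Membership in Z^d as integer linear constraints.
    Semigroup version: n in N^K with sum(n) >= 1; group version: n in Z^K."""
    K, d = len(gens), len(target)
    n = Ints(' '.join(f'n{i}' for i in range(K)))
    s = Solver()
    for j in range(d):
        s.add(sum(n[i] * gens[i][j] for i in range(K)) == target[j])
    if not group_version:
        for x in n:
            s.add(x >= 0)
        s.add(sum(n) >= 1)
    return s.check() == sat

# Example: (2,3) = 2*(1,0) + 3*(0,1), so this returns True
# membership_in_Zd([(1, 0), (0, 1)], (2, 3))
```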
Similarly, consider the problem of Group Membership in \(\mathbb{Z}^{d}\), that is, whether \(T\) is contained in the group \(\langle\mathcal{G}\rangle_{grp}\) generated by the set \(\mathcal{G}\). The group \(\langle\mathcal{G}\rangle_{grp}\) can be explicitly described as follows:
\[\langle\mathcal{G}\rangle_{grp}=\left\{n_{1}\cdot(a_{11},\ldots,a_{1d})+ \cdots+n_{K}\cdot(a_{K1},\ldots,a_{Kd})\,\big{|}\,\,(n_{1},\ldots,n_{K})\in \mathbb{Z}^{K}\right\}. \tag{2}\]
The difference of the _group_\(\langle\mathcal{G}\rangle_{grp}\) from the _semigroup_\(\langle\mathcal{G}\rangle\) is that we allow inverses of the generators to appear in the sum, hence the tuple \((n_{1},\ldots,n_{K})\) now takes value in \(\mathbb{Z}^{K}\) instead of \(\mathbb{N}^{K}\setminus\{0^{K}\}\). Therefore, Group Membership is equivalent to deciding whether the system of linear equations
\[n_{1}a_{11}+\cdots+n_{K}a_{K1} =b_{1},\] \[n_{1}a_{12}+\cdots+n_{K}a_{K2} =b_{2},\] \[\vdots\] \[n_{1}a_{1d}+\cdots+n_{K}a_{Kd} =b_{d},\]
admits an integer solution \((n_{1},\ldots,n_{K})\in\mathbb{Z}^{K}\). This becomes the classic problem of solving linear equations over \(\mathbb{Z}\), and is decidable in polynomial (less than cubic) time.
Consider now the Identity Problem in \(\mathbb{Z}^{d}\). Specializing Semigroup Membership by taking \(T\) to be the neutral element \((0,\ldots,0)\), the Identity Problem is equivalent to deciding whether the system of _homogeneous_ linear equations
\[n_{1}a_{11}+\cdots+n_{K}a_{K1} =0,\] \[n_{1}a_{12}+\cdots+n_{K}a_{K2} =0,\] \[\vdots\] \[n_{1}a_{1d}+\cdots+n_{K}a_{Kd} =0,\]
admits a non-trivial solution \((n_{1},\ldots,n_{K})\in\mathbb{N}^{K}\setminus\{0^{K}\}\). By scaling the solution, this is equivalent to deciding whether the system admits a non-trivial solution over the non-negative _rationals_ \((n_{1},\ldots,n_{K})\in\mathbb{Q}_{\geq 0}^{K}\setminus\{0^{K}\}\). Since the system has integer coefficients, it admits a non-trivial solution over the non-negative rationals if and only if it admits a non-trivial solution over the non-negative _reals_. Hence, the Identity Problem becomes the famous _Linear Programming_ problem, which is known to be decidable in polynomial time.
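A sketch of this LP feasibility check, assuming SciPy is available. The normalization \(\sum_{i}n_{i}=1\) stands in for non-triviality, which is valid because solutions can be scaled:

```python
import numpy as np
from scipy.optimize import linprog  # assumes SciPy

def identity_problem_Zd(gens):
    """LP feasibility for the Identity Problem in Z^d: is there n >= 0
    with A n = 0 and (normalizing the ray) sum(n) = 1?"""
    A = np.array(gens, dtype=float).T            # d x K: columns are generators
    d, K = A.shape
    A_eq = np.vstack([A, np.ones((1, K))])       # A n = 0 and 1^T n = 1
    b_eq = np.concatenate([np.zeros(d), [1.0]])
    res = linprog(c=np.zeros(K), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * K)
    return res.success

# {(1,-1), (-1,1)}: (1,-1) + (-1,1) = (0,0), so the answer is True
assert identity_problem_Zd([(1, -1), (-1, 1)])
```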
The Group Problem is perhaps less intuitive in terms of systems of linear equations. However it becomes clearer from a geometric point of view. Indeed, the semigroup \(\langle\mathcal{G}\rangle\) can be understood as the cone generated by the elements \(\mathcal{G}\) in the space \(\mathbb{Z}^{d}\), whereas the group \(\langle\mathcal{G}\rangle_{grp}\) can be understood as the linear space generated by the elements \(\mathcal{G}\). Therefore the Group Problem can be reformulated as deciding whether a finitely generated cone is actually a linear space.
For Semigroup Intersection, write \(\mathcal{G}\coloneqq\{(a_{11},\ldots,a_{1d}),\ldots,(a_{K1},\ldots,a_{Kd})\}\) and \(\mathcal{H}\coloneqq\{(c_{11},\ldots,c_{1d}),\ldots,(c_{M1},\ldots,c_{Md})\}\). Then deciding whether \(\langle\mathcal{G}\rangle\cap\langle\mathcal{H}\rangle=\emptyset\) is equivalent to deciding whether the system of homogeneous linear equations
\[n_{1}a_{11}+\cdots+n_{K}a_{K1} =\ell_{1}c_{11}+\cdots+\ell_{M}c_{M1},\] \[n_{1}a_{12}+\cdots+n_{K}a_{K2} =\ell_{1}c_{12}+\cdots+\ell_{M}c_{M2},\]
\[\vdots\] \[n_{1}a_{1d}+\cdots+n_{K}a_{Kd}=\ell_{1}c_{1d}+\cdots+\ell_{M}c_{Md},\]
admits a pair of non-trivial solutions \((n_{1},\ldots,n_{K})\in\mathbb{N}^{K}\setminus\{0^{K}\},(\ell_{1},\ldots,\ell_{ M})\in\mathbb{N}^{M}\setminus\{0^{M}\}\). This is again solvable by Linear Programming by scaling the solutions to \(\mathbb{Q}_{\geq 0}\). In particular, since we are working in the abelian group \(\mathbb{Z}^{d}\), commutativity allows us to freely permute elements in the sum, and move terms between the two sides of an equation. Hence Semigroup Intersection becomes similar to the Identity Problem. However, when we work in groups with more complex structures (namely non-abelian groups), this equivalence will fail, and Semigroup Intersection can become much more difficult than the Identity Problem.
Lastly, we mention that Rational Subset Membership in \(\mathbb{Z}^{d}\) corresponds to the notion of computing _semilinear sets_. See [Chistikov and Haase 2016] for a proof of decidability and its complexity analysis.
## 2 Notes for the reader
It is clear from the introduction that the complexity and decidability of semigroup algorithmic problems greatly depend on the ambient group \(G\). As shown in the last example, the case where \(G=\mathbb{Z}^{d}\) is relatively simple. On the contrary, when the group \(G\) is an arbitrary (non-commutative) group, the algorithmic problems we consider become much more complex and even undecidable. To achieve decidability, one must assume additional structure on the group \(G\). The aim of this article is to give an overview of recent advances in the decidability and complexity of the aforementioned algorithmic problems in different classes of groups \(G\), as well as point out various open problems. Section 3 is about low-dimensional matrix groups as well as their subgroups and extensions, whereas in Section 4 we consider groups with additional structures such as nilpotency and being metabelian. In Section 5 we mention some other related algorithmic problems in groups and semigroups.
For the most part of this article, the group \(G\) is explicitly given as a matrix group (such as \(\mathsf{SL}(n,\mathbb{Z})\)). In this case, the elements of \(G\) are represented as matrices with _binary encoded entries_. Nevertheless, many of the results stated in this article can be extended to the case where \(G\) is given as an abstract group, which is usually embeddable in some well-studied matrix group.
## 3 Low-dimensional matrix groups
This section contains results for low-dimensional matrix groups. In particular, we are interested in the _special linear groups_\(\mathsf{SL}(n,\mathbb{Z})\) as well as their subgroups and extensions.
**Definition 3.1** (_Special linear groups_).: Let \(n\in\mathbb{N}\). The _special linear group_\(\mathsf{SL}(n,\mathbb{Z})\) is the group of \(n\times n\) integer matrices with determinant one:
\[\mathsf{SL}(n,\mathbb{Z})\coloneqq\{A\in\mathbb{Z}^{n\times n}\mid\det(A)=1\}.\]
Semigroups are closely related to the notion of _words_ over an alphabet:
**Definition 3.2** (_Words over an alphabet_).: By an _alphabet_, we mean a finite set \(\Sigma\). Elements of an alphabet are called _letters_. A _word_ over an alphabet \(\Sigma\) is a finite string of letters, possibly empty. In particular, the empty string is called the _empty word_, usually denoted by \(\epsilon\). Given any alphabet \(\Sigma\), we denote by \(\Sigma^{*}\) the set of words over \(\Sigma\):
\[\Sigma^{*}\coloneqq\left\{a_{1}a_{2}\cdots a_{m}\mid m\geq 0,a_{1},\ldots a_{m }\in\Sigma\right\}.\]
For two words \(v,w\in\Sigma^{*}\), we denote by \(v\cdot w\) or simply \(vw\) the concatenation of \(v\) and \(w\), it is again a word over \(\Sigma\). The operation of concatenation gives the sets \(\Sigma^{*}\) and \(\Sigma^{*}\setminus\{\epsilon\}\) the structure of semigroups.
For example, Semigroup Membership can now be reformulated as asking whether there exists a word \(w\in\mathcal{G}^{*}\setminus\{\epsilon\}\), whose product in \(G\) is equal to \(T\); and Semigroup Intersection can be reformulated as asking whether there exists two words \(w\in\mathcal{G}^{*}\setminus\{\epsilon\},v\in\mathcal{H}^{*}\setminus\{\epsilon\}\), such that the products of \(w\) and \(v\) in \(G\) are equal.
### Dimension two
The group \(\mathsf{SL}(2,\mathbb{Z})\) has been a classic object of study. It is closely related to various areas in mathematics such as hyperbolic geometry, modular forms, and dynamical systems [1, 10, 11].
The key to solving algorithmic problems in \(\mathsf{SL}(2,\mathbb{Z})\) is the fact that \(\mathsf{SL}(2,\mathbb{Z})\) is _virtually free_, meaning it contains a _free_ subgroup of finite index.
**Definition 3.3** (_Free group_).: Given an alphabet \(\Sigma\), define the corresponding group alphabet \(\Sigma^{\pm}\coloneqq\Sigma\cup\{a^{-1}\mid a\in\Sigma\}\), where \(a^{-1}\) is a new letter for each \(a\in\Sigma\). There is a natural involution \((\cdot)^{-1}\) over the set of words \((\Sigma^{\pm})^{*}\) defined by \((a^{-1})^{-1}=a\) and \((a_{1}a_{2}\cdots a_{m})^{-1}=a_{m}^{-1}\cdots a_{2}^{-1}a_{1}^{-1}\). A word over the alphabet \(\Sigma^{\pm}\) is called _reduced_ if it does not contain consecutive letters \(aa^{-1}\) or \(a^{-1}a\), \(a\in\Sigma\). For a word \(w\), define \(\mathrm{red}(w)\) to be the reduced word obtained by iteratively replacing consecutive letters \(aa^{-1}\) and \(a^{-1}a\) with the empty string. The _free group_\(F(\Sigma)\) over \(\Sigma\) is then defined as the set of reduced words in \(\left(\Sigma^{\pm}\right)^{*}\), where multiplication is given by \(v\cdot w=\mathrm{red}(vw)\), inversion is given by the involution \((\cdot)^{-1}\), and the neutral element is the empty word \(\epsilon\).
If we take \(\Sigma=\{a,b\}\), then the group \(F(\Sigma)\) is the free group over two generators, and is denoted by \(F_{2}\). For example, in \(F(\{a,b\})\) we have \((aaba^{-1})^{-1}=ab^{-1}a^{-1}a^{-1}\), and \(aaba^{-1}\cdot ab=\mathrm{red}(aaba^{-1}ab)=aabb\).
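A minimal sketch of free reduction, using the illustrative encoding that an upper-case letter stands for the inverse of its lower-case form. A stack suffices because adjacent cancellations are confluent:

```python
def red(word):
    """Free reduction with a stack; 'A' encodes a^{-1}, 'B' encodes b^{-1}."""
    stack = []
    for x in word:
        if stack and stack[-1] == x.swapcase():
            stack.pop()    # cancel x x^{-1} or x^{-1} x
        else:
            stack.append(x)
    return ''.join(stack)

def inv(word):
    """Inverse of a word: reverse it and invert every letter."""
    return word[::-1].swapcase()

assert inv('aabA') == 'aBAA'          # (aaba^{-1})^{-1} = ab^{-1}a^{-1}a^{-1}
assert red('aabA' + 'ab') == 'aabb'   # aaba^{-1} . ab = aabb
```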
In particular, the subgroup of \(\mathsf{SL}(2,\mathbb{Z})\) generated by \(A\coloneqq\begin{pmatrix}1&2\\ 0&1\end{pmatrix}\) and \(B\coloneqq\begin{pmatrix}1&0\\ 2&1\end{pmatrix}\) is a free group [1, Example 7.63]. It is of index 12, meaning the quotient \(\mathsf{SL}(2,\mathbb{Z})/\langle A,B\rangle_{grp}\) is a set with 12 elements [10].
Algorithmic problems in free groups have been extensively studied since the 1950s [10]. A classic tool for solving algorithmic problem in free groups is the automata-inspired _Stallings foldings_. The idea of using automata-based approaches to deal with free groups can be traced back to [1] who showed decidability of Rational Subset Membership in finitely generated free groups. We recommend [11] for an introduction on Stallings foldings as well as their applications in algorithmic problems in free groups.
Although Stallings foldings were initially proposed for free groups, they generalize easily to virtually free groups, notably \(\mathsf{SL}(2,\mathbb{Z})\). This is the foundation of several results concerning decidability and complexity of algorithmic problems in \(\mathsf{SL}(2,\mathbb{Z})\):
**Theorem 3.4**.: _In the group \(\mathsf{SL}(2,\mathbb{Z})\):_
1. [Choffrut and Karhumaki 2005] _Rational Subset Membership is decidable._
2. [Bell et al. 2017; Bell et al. 2023] _Semigroup Membership, the Identity Problem and the Group Problem are decidable and NP-complete._
3. [Lohrey 2023] _Group Membership is decidable in PTIME._
**Example: Semigroup Membership in the free group \(F_{2}\)**
We give an example of the decision procedure for Semigroup Membership in the free group \(F_{2}\) to show some of the techniques involved. This is very similar to the decision procedure for \(\mathsf{SL}(2,\mathbb{Z})\) (which contains \(F_{2}\) as a finite index subgroup), but is easier to describe.
Working in the group \(F_{2}=F(\{a,b\})\), suppose we want to decide whether \(T\coloneqq a\) is in the semigroup generated by the set \(\mathcal{G}\coloneqq\{aab,b^{-1}a^{-1}\}\). In other words, we want to decide whether there is an expression of the form \(w=w_{1}w_{2}\cdots w_{p},p\geq 1,w_{1},\ldots,w_{p}\in\{aab,b^{-1}a^{-1}\}\), such that \(\mathrm{red}(w)=a\) (recall the definition of \(\mathrm{red}(\cdot)\) from Definition 3.3). Since \(a\neq\epsilon\), we can without loss of generality include the case \(p=0\). Therefore, deciding whether \(a\in\langle aab,b^{-1}a^{-1}\rangle\) is equivalent to deciding whether there exists an expression of the form \(v=a^{-1}w_{1}w_{2}\cdots w_{p},\ p\geq 0,w_{1},\ldots,w_{p}\in\{aab,b^{-1}a^{-1}\}\), such that \(\mathrm{red}(v)=\epsilon\).
We can construct an automaton \(\mathcal{A}\) that recognizes the set of words
\[\mathcal{L}\coloneqq\big{\{}a^{-1}w_{1}w_{2}\cdots w_{p}\bigm{|}p\geq 0,w_{1}, \ldots,w_{p}\in\{aab,b^{-1}a^{-1}\}\big{\}}.\]
See Figure 2 for an illustration of the automaton \(\mathcal{A}\). In particular, every path starting from the initial state \(q_{I}\) and ending at the accepting state \(q_{F}\) represents a word in the language \(\mathcal{L}\); and every word in \(\mathcal{L}\) can be represented as a path from \(q_{I}\) to \(q_{F}\). For example, the word \(a^{-1}\cdot aab\cdot b^{-1}a^{-1}\in\mathcal{L}\) corresponds to the path \(q_{I}\to q_{F}\to q_{11}\to q_{12}\to q_{F}\to q_{21}\to q_{F}\).
However, it is not clear from the automaton \(\mathcal{A}\) whether \(\mathcal{L}\) contains a word \(v\) such that \(\mathrm{red}(v)=\epsilon\). The idea is to now "saturate" the automaton \(\mathcal{A}\) to include words obtained from reducing words in \(\mathcal{L}\). To be precise, we want to construct an automaton \(\mathcal{A}_{0}\) that recognizes a language \(\mathcal{L}_{0}\), such that the reduced words in \(\mathcal{L}_{0}\) are exactly the set \(\{\mathrm{red}(v)\mid v\in\mathcal{L}\}\).
A construction of the automaton \(\mathcal{A}_{0}\) is illustrated in Figure 3. Starting with \(\mathcal{A}\), we add the edge \(q_{12}\xrightarrow{\epsilon}q_{21}\) because any path containing the subpath \(q_{12}\xrightarrow{b}q_{F}\xrightarrow{b^{-1}}q_{21}\) results in \(bb^{-1}\) which can be reduced to \(\epsilon\). Similarly, we add the edge \(q_{11}\xrightarrow{\epsilon}q_{F}\) thanks to the path \(q_{11}\xrightarrow{a}q_{12}\xrightarrow{\epsilon}q_{21}\xrightarrow{a^{-1}}q_ {F}\) resulting in \(aa^{-1}\); we add the edge \(q_{21}\xrightarrow{\epsilon}q_{11}\) thanks to the path \(q_{21}\xrightarrow{a^{-1}}q_{F}\xrightarrow{a}q_{11}\); and we add the edge \(q_{I}\xrightarrow{\epsilon}q_{11}\) thanks to the path
\(q_{I}\xrightarrow{a^{-1}}q_{F}\xrightarrow{a}q_{11}\). Finally, we add the edge \(q_{I}\xrightarrow{\epsilon}q_{F}\) thanks to the path \(q_{I}\xrightarrow{\epsilon}q_{11}\xrightarrow{\epsilon}q_{F}\). As a conclusion, we can pass from \(q_{I}\) to \(q_{F}\) using a path which reduces to \(\epsilon\). Therefore \(T\coloneqq a\) is indeed contained in the semigroup generated by the set \(\mathcal{G}\coloneqq\{aab,b^{-1}a^{-1}\}\).
Note that this process of adding \(\epsilon\)-transitions always finishes in finitely many steps because there are only a finite number of states in \(\mathcal{A}\). In general, if after the process there is an \(\epsilon\)-transition from \(q_{I}\) to \(q_{F}\) then we conclude that \(T\in\langle\mathcal{G}\rangle\); otherwise we conclude \(T\notin\langle\mathcal{G}\rangle\). Note that the termination of this process relies on the fact that \(\operatorname{red}(\cdot)\) is _deterministic_, meaning for every \(v\) the image \(\operatorname{red}(v)\) is unique. This feature grants free groups their relatively simple structure as well as decidability of their various algorithmic problems.
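The saturation loop just described is short to implement. Below is a sketch that reruns the example of Figure 2, with the same upper-case encoding of inverses (state names and helper functions are ours):

```python
from itertools import product

def inv(x):
    return x.swapcase()                  # 'a' <-> 'A' encodes inversion

def saturate(states, edges):
    """Add eps-edges (p, q) until fixpoint: an eps-edge means some path
    from p to q carries a word that freely reduces to the empty word."""
    eps = {(s, s) for s in states}       # the empty path at each state
    changed = True
    while changed:
        changed = False
        new = set()
        for (p, q), (r, s) in product(eps, eps):    # transitivity
            if q == r and (p, s) not in eps:
                new.add((p, s))
        for (p, x, q), (r, y, s) in product(edges, edges):
            # p --x--> q ~eps~> r --x^{-1}--> s  yields  p ~eps~> s
            if y == inv(x) and (q, r) in eps and (p, s) not in eps:
                new.add((p, s))
        if new:
            eps |= new
            changed = True
    return eps

# The automaton A of Figure 2: the a^{-1} prefix, the aab cycle,
# and the b^{-1}a^{-1} cycle, all passing through q_F.
states = {'qI', 'qF', 'q11', 'q12', 'q21'}
edges = {('qI', 'A', 'qF'),
         ('qF', 'a', 'q11'), ('q11', 'a', 'q12'), ('q12', 'b', 'qF'),
         ('qF', 'B', 'q21'), ('q21', 'A', 'qF')}
assert ('qI', 'qF') in saturate(states, edges)  # a is in <aab, b^{-1}a^{-1}>
```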
The group \(\operatorname{SL}(2,\mathbb{Z})\) is not far from being a free group, and a similar approach to the above automata construction can also be applied to \(\operatorname{SL}(2,\mathbb{Z})\). We refer readers to [10] for a detailed account of the method for \(\operatorname{SL}(2,\mathbb{Z})\).
### Semigroup Membership in extensions of \(\operatorname{SL}(2,\mathbb{Z})\)
Note that one can naturally generalize the definition of Semigroup Membership to the case where the ambient group \(G\) is a _semigroup_ instead of a group. A natural follow-up question to \(\operatorname{SL}(2,\mathbb{Z})\) is whether Semigroup Membership remains decidable when \(G\) is the semigroup \(\mathbb{Z}^{2\times 2}\) of \(2\times 2\) integer matrices. Compared to \(\operatorname{SL}(2,\mathbb{Z})\), the semigroup \(\mathbb{Z}^{2\times 2}\) contains matrices with determinant \(0\) and \(\pm d,d\geq 1\), which add an extra layer of difficulty. Nevertheless, several important partial results have been obtained via extensions of the solution for \(\operatorname{SL}(2,\mathbb{Z})\):
**Theorem 3.5**.: _Semigroup Membership is decidable in:_
1. [Potapov and Semukhin 2017] _the set of_ \(2\times 2\) _integer matrices with non-zero determinants._
2. [Potapov and Semukhin 2017] _the set of_ \(2\times 2\) _integer matrices with determinants_ \(0,-1\) _and_ \(1\)_._
Decidability of Semigroup Membership remains an open problem for the set of _all_\(2\times 2\) integer matrices.
Another interesting extension of \(\operatorname{SL}(2,\mathbb{Z})\) is the group \(\operatorname{GL}(2,\mathbb{Q})\) of \(2\times 2\) invertible matrices with _rational_ entries. All algorithmic problems listed in Figure 1 remain open for the group \(\operatorname{GL}(2,\mathbb{Q})\), with very limited partial results currently known [11, 1].
### Dimension three
One of the most outstanding open problems in computational group theory is the decidability of Group Membership in \(\operatorname{SL}(3,\mathbb{Z})\). In fact, the decidability status of all algorithmic problems listed in Figure 1 remains open for \(\operatorname{SL}(3,\mathbb{Z})\). This is notably due to a lack of understanding for the structure of subgroups in \(\operatorname{SL}(3,\mathbb{Z})\). Currently, the closest undecidability result is Semigroup Membership in the set \(\mathbb{Z}^{3\times 3}\) of \(3\times 3\) integer matrices, by [Paterson 1970]:
**Theorem 3.6** (Mortality Problem [Paterson 1970]).: _The following problem is undecidable: given a finite set \(\mathcal{G}\) of \(3\times 3\) integer matrices, decide whether the zero matrix \(0^{3\times 3}\) is in the semigroup \(\langle\mathcal{G}\rangle\)._
Although the decidability of algorithmic problems in \(\operatorname{SL}(3,\mathbb{Z})\) seems out of reach for the moment, positive results have been obtained for several important subgroups of \(\operatorname{SL}(3,\mathbb{Z})\).
The Heisenberg group \(\mathrm{H}_{3}(\mathbb{Z})\)
We consider the _integer Heisenberg group_\(\mathrm{H}_{3}(\mathbb{Z})\), which is the group of \(3\times 3\) upper-triangular integer matrices with ones on the diagonal.
\[\mathrm{H}_{3}(\mathbb{Z})\coloneqq\left\{\begin{pmatrix}1&*&*\\ 0&1&*\\ 0&0&1\end{pmatrix},\text{ where }*\text{ are entries in }\mathbb{Z}\right\}.\]
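As a quick computational illustration (a sketch; the parametrization \(H(a,b,c)\) and helper names are ours): the commutator of the two standard generators of \(\mathrm{H}_{3}(\mathbb{Z})\) is central, so iterated commutators vanish and the group is nilpotent of class 2:

```python
import numpy as np

def H(a, b, c):
    """The element of H_3(Z) with rows (1,a,c), (0,1,b), (0,0,1)."""
    return np.array([[1, a, c], [0, 1, b], [0, 0, 1]], dtype=object)

def H_inv(M):
    """Exact inverse: H(a,b,c)^{-1} = H(-a,-b,ab-c)."""
    a, b, c = M[0, 1], M[1, 2], M[0, 2]
    return H(-a, -b, a * b - c)

def comm(X, Y):
    """Group commutator X Y X^{-1} Y^{-1}."""
    return X.dot(Y).dot(H_inv(X)).dot(H_inv(Y))

x, y = H(1, 0, 0), H(0, 1, 0)
z = comm(x, y)
assert np.array_equal(z, H(0, 0, 1))           # [x, y] is the central element
assert np.array_equal(comm(x, z), H(0, 0, 0))  # [x, [x, y]] = identity: class 2
```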
The Heisenberg groups play an important role in many branches of mathematics, physics and computer science. They first arose in the description of one-dimensional quantum mechanical systems [23, 24], and have now become an important mathematical object connecting areas like representation theory, Fourier analysis and quantum algorithms [13, 14, 15]. From a computational point of view, the Heisenberg group received much attention in the past ten years because it is one of the simplest non-commutative matrix groups.
**Theorem 3.7**.: _In the Heisenberg group \(\mathrm{H}_{3}(\mathbb{Z})\):_
1. [1] _Semigroup Membership is decidable._
2. [1] _The Identity Problem is decidable in PTIME._
3. [1] _The Group Problem is decidable in PTIME._
4. [1] _Semigroup Intersection is decidable in PTIME._
It follows from Theorem 3.7(i) that Group Membership is also decidable in \(\mathrm{H}_{3}(\mathbb{Z})\). However we are not aware of any complexity analysis on this problem. The main idea for proving Theorem 3.7(i) is to use the _Baker-Campbell-Hausdorff formula_[1, 19, 18] from Lie algebra, as well as to incorporate Semigroup Membership in a Parikh automaton. The main ideas behind Theorem 3.7(ii)-(iv) are reductions to word combinatorics problems. Note that the decidability of Rational Subset Membership in \(\mathrm{H}_{3}(\mathbb{Z})\) remains an intricate open problem.
The special affine group \(\mathrm{SA}(2,\mathbb{Z})\)
Compared to \(\mathrm{H}_{3}(\mathbb{Z})\), a larger subgroup of \(\mathsf{SL}(3,\mathbb{Z})\) is the _special affine group_\(\mathrm{SA}(2,\mathbb{Z})\). The group \(\mathrm{SA}(2,\mathbb{Z})\) consists of affine transformations of \(\mathbb{Z}^{2}\) that preserve orientation, and can be considered as an intermediate group between \(\mathsf{SL}(2,\mathbb{Z})\) and \(\mathsf{SL}(3,\mathbb{Z})\). Written as matrices, elements of \(\mathrm{SA}(2,\mathbb{Z})\) are \(3\times 3\) integer matrices of the following form.
\[\mathrm{SA}(2,\mathbb{Z})\coloneqq\left\{M=\begin{pmatrix}*&*&*\\ *&*&*\\ 0&0&1\end{pmatrix}\right|\det(M)=1\text{, where }*\text{ are entries in }\mathbb{Z}\right\}.\]
Like \(\mathsf{SL}(2,\mathbb{Z})\), the special affine group is important in the context of many fundamental problems, such as Lie groups [24], polyhedral geometry [17], dynamical systems [1], and computer vision [13]. Apart from the intrinsic interest to study \(\mathsf{SA}(2,\mathbb{Z})\), we also point out that the Special Affine group has tight connections to various reachability problems in automata theory. Some of the central questions in automated verification include reachability problems in _Affine Vector Addition Systems_ and _Affine Vector Addition Systems with states (Affine VASS)_ over the integers [16]. The study of these reachability problems in dimension two necessitates the understanding of subsemigroups of \(\mathsf{SA}(2,\mathbb{Z})\).
Every element in \(\mathsf{SA}(2,\mathbb{Z})\) can be written as a pair \((A,\mathbf{a})\), where \(A\in\mathsf{SL}(2,\mathbb{Z}),\mathbf{a}\in\mathbb{Z}^{2}\). This pair represents the element \(\begin{pmatrix}A&\mathbf{a}\\ 0&1\end{pmatrix}\). Multiplication in \(\mathsf{SA}(2,\mathbb{Z})\) is then given by

\[(A,\mathbf{a})\cdot(B,\mathbf{b})=(AB,A\mathbf{b}+\mathbf{a}).\]
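A sketch of this pair representation (helper names ours); object-dtype arrays keep the arithmetic exact over \(\mathbb{Z}\):

```python
import numpy as np

def sa_mul(g, h):
    """(A, a) . (B, b) = (AB, Ab + a) in SA(2, Z)."""
    (A, a), (B, b) = g, h
    return (A.dot(B), A.dot(b) + a)

def sa_inv(g):
    """(A, a)^{-1} = (A^{-1}, -A^{-1} a); for det 1, the inverse of
    [[p, q], [r, s]] is [[s, -q], [-r, p]]."""
    A, a = g
    A_inv = np.array([[A[1, 1], -A[0, 1]], [-A[1, 0], A[0, 0]]], dtype=object)
    return (A_inv, -A_inv.dot(a))

g = (np.array([[1, 2], [0, 1]], dtype=object), np.array([3, 4], dtype=object))
I2, zero = sa_mul(g, sa_inv(g))
assert np.array_equal(I2, np.eye(2, dtype=object))   # g * g^{-1} is the
assert np.array_equal(zero, np.array([0, 0]))        # neutral element (I, 0)
```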
Naturally, \(\mathsf{SA}(2,\mathbb{Z})\) has a subgroup \(\{(A,\mathbf{0})\mid A\in\mathsf{SL}(2,\mathbb{Z})\}\) isomorphic to \(\mathsf{SL}(2,\mathbb{Z})\). In the language of group theory, this means that \(\mathsf{SA}(2,\mathbb{Z})\) is a _semidirect product_ of the groups \(\mathsf{SL}(2,\mathbb{Z})\) and \(\mathbb{Z}^{2}\). This structure is crucial to the resolution of several algorithmic problems in \(\mathsf{SA}(2,\mathbb{Z})\).
**Theorem 3.8**: _In the group \(\mathsf{SA}(2,\mathbb{Z})\):_
1. [Kapovich et al. 2005; Delgado Rodriguez 2017] _Group Membership is decidable._
2. [Dong 2023b] _The Identity Problem and the Group Problem are decidable and NP-complete._
The decidability of Semigroup Membership and Semigroup Intersection in \(\mathsf{SA}(2,\mathbb{Z})\) remains open. See [Dong 2023b] for a discussion of current obstacles to solving these problems. Theorem 3.8(i) comes from the classic idea that \(\mathsf{SA}(2,\mathbb{Z})\) can be realized as a _graph of groups_. Graph of groups is a central object in the area of geometric group theory, and Group Membership has been solved for a large class of graph of groups using generalizations of Stallings foldings [Kapovich et al. 2005]. However, such methods generally fail for semigroup problems, and the proof of Theorem 3.8(ii) employs an algebraic approach instead of an automata-based approach.
### Dimension four
While the algorithmic problems of Figure 1 are decidable in \(\mathsf{SL}(2,\mathbb{Z})\) and open in \(\mathsf{SL}(3,\mathbb{Z})\), they all become undecidable for \(\mathsf{SL}(4,\mathbb{Z})\). Note that \(\mathsf{SL}(4,\mathbb{Z})\) naturally contains as a subgroup the direct product \(\mathsf{SL}(2,\mathbb{Z})\times\mathsf{SL}(2,\mathbb{Z})\), and that \(\mathsf{SL}(2,\mathbb{Z})\) contains the free subgroup \(F_{2}\). Therefore \(\mathsf{SL}(4,\mathbb{Z})\) contains the subgroup \(F_{2}\times F_{2}\), which constitutes the key to proving all undecidability results in \(\mathsf{SL}(4,\mathbb{Z})\). If a problem is undecidable in \(F_{2}\times F_{2}\), then it must also be undecidable in \(\mathsf{SL}(4,\mathbb{Z})\).
**Theorem 3.9**: _In the groups \(F_{2}\times F_{2}\) and \(\mathsf{SL}(4,\mathbb{Z})\):_
1. [Mikhailova 1966] _Group Membership is undecidable._
2. [Bell and Potapov 2010] _The Identity Problem is undecidable._
As a result of Theorem 3.9, all algorithmic problems listed in Figure 1 are undecidable in \(F_{2}\times F_{2}\) and in \(\mathsf{SL}(4,\mathbb{Z})\). Mikhailova's idea of proving Theorem 3.9(i) is an embedding of the _Word Problem_, which is one of the three fundamental problems of computational group theory.
**Theorem 3.10** (The Word Problem [Novikov 1955]): _The following problem is undecidable: given an alphabet \(\Sigma\) as well as elements \(v_{1},\ldots,v_{n},w\) in the free group \(F(\Sigma)\), decide whether \(w\) is in the normal subgroup \(\langle\langle v_{1},\ldots,v_{n}\rangle\rangle\) generated by \(v_{1},\ldots,v_{n}\). In other words, it is undecidable whether a word \(w\in(\Sigma^{\pm})^{*}\) is equal to the neutral element in the quotient \(F(\Sigma)/\langle\langle v_{1},\ldots,v_{n}\rangle\rangle\)._
The subgroup \(\langle\langle v_{1},\ldots,v_{n}\rangle\rangle\) is defined as the smallest _normal_ subgroup containing the elements \(v_{1},\ldots,v_{n}\). It is in general different from \(\langle v_{1},\ldots,v_{n}\rangle_{grp}\). This constitutes the difference between the Word Problem and Group Membership. For this reason, Group Membership is sometimes referred to in literature as the _Generalized Word Problem_. Mikhailova showed that one can embed an instance of the Word Problem into the Group Membership problem in \(F_{2}\times F_{2}\), thus proving its undecidability.
Bell and Potapov's idea of proving Theorem 3.9(ii) is to embed a variation of the famous _Post Correspondence Problem_.
**Theorem 3.11** (Post Correspondence Problem [Post 1946]).: _The following problem is undecidable: given a set of word pairs \(S=\{(v_{1},w_{1}),\ldots,(v_{K},w_{K})\}\) in \(\{a,b\}^{*}\times\{a,b\}^{*}\), decide whether there exist \(p\geq 1\) and a sequence of indices \(i_{1},\ldots,i_{p}\in\{1,\ldots,K\}\) such that \(v_{i_{1}}v_{i_{2}}\cdots v_{i_{p}}=w_{i_{1}}w_{i_{2}}\cdots w_{i_{p}}\)._
The Post Correspondence Problem is one of the first known undecidability results in algorithmic theory, and it remains a staple tool for proving undecidability of various combinatorial problems. Bell and Potapov showed that one can embed an instance of the Post Correspondence Problem into the Identity Problem in \(F_{2}\times F_{2}\), thus proving its undecidability.
_Remark 3.12_.: The Post Correspondence Problem can be considered as a special case of Semigroup Intersection in the semigroup \(\{a,b\}^{*}\times\{a,b\}^{*}\). Indeed, the Post Correspondence Problem can be reformulated as deciding whether the semigroup \(\langle(v_{1},w_{1}),\ldots,(v_{K},w_{K})\rangle\subseteq\{a,b\}^{*}\times\{ a,b\}^{*}\) intersects the semigroup of diagonals \(\langle(a,a),(b,b)\rangle\). Therefore, if a group \(G\) contains \(\{a,b\}^{*}\times\{a,b\}^{*}\) as a subsemigroup, then \(G\) has undecidable Semigroup Intersection. Since the group \(F_{2}=F(\{a,b\})\) naturally contains the subsemigroup \(\{a,b\}^{*}\), we immediately obtain undecidability of Semigroup Intersection in \(F_{2}\times F_{2}\). Bell and Potapov's main contribution for Theorem 3.9(ii) is the strengthening of the undecidability result from Semigroup Intersection to the Identity Problem.
It is worth pointing out that the same method cannot be applied to the group \(\mathsf{SL}(3,\mathbb{Z})\). Indeed, it has been shown in [Ko et al. 2018, Theorem 11] that \(\mathsf{SL}(3,\mathbb{Z})\) does not contain any subsemigroup isomorphic to \(\{a,b\}^{*}\times\{a,b\}^{*}\). This added an extra layer of mystery to the decidability status of problems in \(\mathsf{SL}(3,\mathbb{Z})\).
## 4 Groups with additional structure
In Section 3 we showed that most algorithmic problems that are undecidable for general matrix groups become decidable when restricted to low dimensions. In this section, instead of the restriction on dimension, we consider restrictions on the _structure_ of the matrix group. An iconic result in this direction comes from Babai, Beals, Cai, Ivanyos and Luks [Babai et al. 1996], who showed the NP-completeness of Semigroup Membership and PTIME decidability of Group Membership in _commutative_ matrix groups. From the introduction (Section 1), we also know that the problems in Figure 1 all have classic interpretations in the commutative group \(\mathbb{Z}^{d}\). In this section we consider various relaxations of the commutativity requirement.
Recall from Subsection 3.3 that all problems in Figure 1 become undecidable when the group \(G\) contains the direct product \(F_{2}\times F_{2}\) of two free groups. Most natural classes of groups are stable under direct product (for example, the direct product of two commutative groups is still commutative). This motivates us to consider classes of groups that do not contain \(F_{2}\) as subgroup, as these are the only natural candidates for positive decidability results.
By the following celebrated result of Tits, a matrix group that does not contain the subgroup \(F_{2}\) must be _virtually solvable_, meaning it contains a _solvable_ subgroup of finite index.
**Theorem 4.1** (Tits alternative [17]).: _Fix any \(n\in\mathbb{N}\) and let \(\mathbb{K}\) be any field. Recall that \(\mathsf{GL}(n,\mathbb{K})\) denotes the group of \(n\times n\) invertible matrices with entries in \(\mathbb{K}\). For every finitely generated subgroup \(H\) of \(\mathsf{GL}(n,\mathbb{K})\), exactly one of the following is true:_
1. \(H\) _contains_ \(F_{2}\) _as a subgroup._
2. \(H\) _contains a solvable subgroup of finite index._
We therefore concentrate our study on the class of solvable groups. Its exact definition is given as follows.
_Definition 4.2_ (_Solvable groups_).: Given a group \(G\) and its subgroup \(H\), define the commutator \([G,H]\) to be the group generated by the elements \(\{ghg^{-1}h^{-1}\mid g\in G,h\in H\}\). The _derived series_ of a group \(G\) is the inductively defined descending sequence of subgroups
\[G=G^{(0)}\geq G^{(1)}\geq G^{(2)}\geq\cdots,\]
in which \(G^{(k)}=[G^{(k-1)},G^{(k-1)}]\). A group \(G\) is called _solvable_ if its derived series terminates with \(G^{(d)}\) being the trivial group for some finite \(d\). In this case, the smallest such \(d\) is called the _derived length_ of \(G\). For example, an abelian group is solvable of derived length one.
We refer interested readers to [Drutu and Kapovich 2018, Chapter 13] for a comprehensive introduction of solvable groups. It is well known that all subgroups, quotients and direct products of solvable groups are solvable [Drutu and Kapovich 2018, Proposition 13.91].
_Example 4.3_.: Denote by \(\mathsf{T}(n,\mathbb{Q})\) the group of \(n\times n\) invertible upper triangular matrices with rational entries:
\[\mathsf{T}(n,\mathbb{Q})\coloneqq\left\{\begin{pmatrix}a_{1}&*&\cdots&*&*\\ 0&a_{2}&\cdots&*&*\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&\cdots&a_{n-1}&*\\ 0&0&\cdots&0&a_{n}\end{pmatrix},\text{ where }a_{i}\text{ and }*\text{ are entries in }\mathbb{Q}\:,a_{1}a_{2}\cdots a_{n}\neq 0 \right\}.\]
Then the group \(\mathsf{T}(n,\mathbb{Q})\) is solvable.
Unfortunately, without additional constraints, algorithmic problems in solvable groups remain highly intractable. For example, [Kharlampovich 1981] famously constructed a solvable group of derived length three with undecidable Word Problem1. Other problems such as Semigroup Membership and Semigroup Intersection are also undecidable in general solvable groups (see Subsections 4.1 and 4.2). Therefore, in order to obtain decidability results, we want to consider solvable groups with additional constraints. In what follows, we consider two important special classes of solvable groups: _nilpotent groups_ and _metabelian groups_.
Footnote 1: To be precise, this means that the undecidability result in Theorem 3.10 still holds when \(\Sigma,v_{1},\ldots,v_{n}\) are fixed so that the quotient \(F(\Sigma)/\langle\langle v_{1},\ldots,v_{n}\rangle\rangle\) is a solvable group of derived length three.
### Nilpotent groups
In this subsection we consider nilpotent groups, which are usually considered as an immediate generalization of abelian groups.
_Definition 4.4_ (_Nilpotent groups_).: The _lower central series_ of a group \(G\) is the inductively defined descending sequence of subgroups
\[G=G_{0}\geq G_{1}\geq G_{2}\geq\cdots,\]
in which \(G_{k}=[G,G_{k-1}]\). A group \(G\) is called _nilpotent_ if its lower central series terminates with \(G_{d}\) being the trivial group for some finite \(d\). In this case, the smallest such \(d\) is called the _nilpotency class_ of \(G\).
In particular, abelian groups are nilpotent of class one. One can easily show \(G_{k}\geq G^{(k)}\) by induction on \(k\), therefore nilpotent groups are also solvable (but the converse is not true). It is well known that all subgroups, quotients and direct products of nilpotent groups are nilpotent [11, Lemma 13.56, Theorem 13.57].
**The unitriangular matrix groups \(\mathsf{UT}(n,\mathbb{Q})\)**
One of the most important examples of nilpotent groups is the group of unitriangular matrices:
**Definition 4.5** (_Unitriangular matrix groups_).: Denote by \(\mathsf{UT}(n,\mathbb{Q})\) the group of \(n\times n\) upper triangular rational matrices with ones on the diagonal:
\[\mathsf{UT}(n,\mathbb{Q})\coloneqq\left\{\begin{pmatrix}1&*&\cdots&*&*\\ 0&1&\cdots&*&*\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&\cdots&1&*\\ 0&0&\cdots&0&1\end{pmatrix},\text{ where }*\text{ are entries in }\mathbb{Q}\right\}.\]
Then \(\mathsf{UT}(n,\mathbb{Q})\) is a nilpotent group of class \(n-1\)[11, Examples 13.36]. Similarly, one can define the groups \(\mathsf{UT}(n,\mathbb{Z})\) by changing the entries from rationals to integers.
The group \(\mathsf{UT}(n,\mathbb{Z})\) (and hence \(\mathsf{UT}(n,\mathbb{Q})\)) play an important role in the study of finitely generated nilpotent groups by the following fact.
**Theorem 4.6** ([11]).: _Every finitely generated nilpotent group is isomorphic to a finite extension of a subgroup of \(\mathsf{UT}(n,\mathbb{Z})\) for some \(n\in\mathbb{N}\)._
We have already encountered the group \(\mathrm{H}_{3}(\mathbb{Z})=\mathsf{UT}(3,\mathbb{Z})\) in Subsection 3.2. Therefore the results in this subsection can be seen as generalizations of Theorem 3.7.
**Theorem 4.7**.: _In subgroups of \(\mathsf{UT}(n,\mathbb{Q})\), we have the following results._

1. _Group Membership is decidable in_ \(\mathsf{UT}(n,\mathbb{Q}),n\in\mathbb{N}\)_._
2. _There exists a large enough_ \(k\in\mathbb{N}\)_, such that Semigroup Membership is undecidable in the direct product_ \(\mathsf{UT}(3,\mathbb{Z})^{k}\)_._
3. _Let_ \(G\) _be a class-2 nilpotent subgroup of_ \(\mathsf{UT}(n,\mathbb{Q})\) _for some_ \(n\)_. Then Semigroup Intersection is PTIME decidable in_ \(G\)_._
4. _Fix_ \(d\leq 10\)_. Let_ \(G\) _be a class-_\(d\) _nilpotent subgroup of_ \(\mathsf{UT}(n,\mathbb{Q})\) _for some_ \(n\)_. Then the Identity Problem and the Group Problem are PTIME decidable in_ \(G\)_._
For example, for any \(k\in\mathbb{N}\), the direct product \(\mathsf{UT}(3,\mathbb{Z})^{k}\) is a class-two nilpotent subgroup of \(\mathsf{UT}(3k,\mathbb{Q})\). Therefore Theorem 4.7(iii) shows that \(\mathsf{UT}(3,\mathbb{Z})^{k}\) admits decidable (PTIME) Semigroup Intersection. However, for large enough \(k\), it admits undecidable Semigroup Membership (Theorem 4.7(ii)).
It is worth pointing out that Theorem 4.7(i) follows from a more general result of Mal'cev on finitely generated nilpotent groups2 (see Theorem 4.9(i)). Theorem 4.7(ii) is proven through an embedding of _Hilbert's tenth problem_, whose undecidability follows from a celebrated result of Matiyasevich:
Footnote 2: While \(\mathsf{UT}(n,\mathbb{Q})\) is not finitely generated, for Group Membership we are always working in finitely generated subgroups of \(\mathsf{UT}(n,\mathbb{Q})\).
**Theorem 4.8** (Hilbert's tenth problem [Matiyasevich 1970]).: _The following problem is undecidable: given as input a polynomial \(f\in\mathbb{Z}[X_{1},\ldots,X_{n}]\), decide whether there exist integers \(z_{1},\ldots,z_{n}\) such that \(f(z_{1},\ldots,z_{n})=0\)._
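Although no algorithm decides the existence of integer roots, the "yes" instances are recursively enumerable: a brute-force search over integer tuples halts exactly on solvable inputs. The following is a minimal sketch of such a bounded search (the helper name and the Pell-equation example are ours, not from the survey); iterating the bound to infinity yields a semi-decision procedure.

```
# A bounded brute-force search for integer roots of f in Z[X_1, ..., X_n].
from itertools import product

def find_root_up_to(f, n_vars, bound):
    # Exhaustively test all integer assignments with |z_i| <= bound.
    for z in product(range(-bound, bound + 1), repeat=n_vars):
        if f(*z) == 0:
            return z
    return None

# Example: the Pell equation x^2 - 2y^2 = 1 has integer solutions.
print(find_root_up_to(lambda x, y: x**2 - 2*y**2 - 1, 2, 5))  # -> (-3, -2)
```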
On the other hand, Theorem 4.7(iii) and (iv) are proven using techniques from Lie algebra and convex geometry. In fact, [11] proposed a conjecture (depending on the nilpotency class \(d\)), subject to which Theorem 4.7(iv) still holds for \(d>10\). The conjecture is only verified up to \(d=10\) using computer algebra software.
### Arbitrary nilpotent groups
The close connection between \(\mathsf{UT}(n,\mathbb{Q})\) and nilpotent groups allows one to generalize the decidability results in Theorem 4.7 to arbitrary finitely generated nilpotent groups.
**Theorem 4.9**.: _In finitely generated nilpotent groups3, we have the following results._
Footnote 3: As a convention in computational group theory, a finitely generated nilpotent group is usually represented by its _finite presentation_, meaning it is written as a quotient \(F(\Sigma)/(\langle v_{1},\ldots,v_{n}\rangle)\) for some alphabet \(\Sigma\) and elements \(v_{1},\ldots,v_{n}\in\left(\Sigma^{\pm}\right)^{*}\), similar to the setup of the Word Problem. Every finitely generated nilpotent group admits a finite presentation [10, Proposition 13.84].
[MISSING_PAGE_POST]
### Example: the wreath product \(\mathbb{Z}\wr\mathbb{Z}\)
An important example of metabelian groups is the _wreath product_\(\mathbb{Z}\wr\mathbb{Z}\). The wreath product is a fundamental construction in group and semigroup theory, and it also plays an important role in the algebraic theory of automata. The Krohn-Rhodes theorem [Krohn and Rhodes 1965] states that every finite semigroup (and correspondingly, every finite automaton) can be decomposed into elementary components using wreath products.
Following the work of [Magnus 1939], it became clear that solving algorithmic problems in arbitrary metabelian groups usually boils down to solving them in wreath products of abelian groups [Baumslag 1973; Baumslag et al. 1994]. Therefore, studying the group \(\mathbb{Z}\wr\mathbb{Z}\) is usually the first step towards understanding general metabelian groups.
The wreath product \(\mathbb{Z}\wr\mathbb{Z}\) is most easily defined as a matrix group over the Laurent polynomial ring \(\mathbb{Z}[X,X^{-1}]\):
\[\mathbb{Z}\wr\mathbb{Z}=\left\{\begin{pmatrix}X^{b}&y\\ 0&1\end{pmatrix}\biggm{|}y\in\mathbb{Z}[X,X^{-1}],b\in\mathbb{Z}\right\}. \tag{3}\]
To see that \(\mathbb{Z}\wr\mathbb{Z}\) is indeed metabelian, take its abelian normal subgroup
\[A\coloneqq\left\{\begin{pmatrix}1&y\\ 0&1\end{pmatrix}\biggm{|}y\in\mathbb{Z}[X,X^{-1}]\right\}\cong\mathbb{Z}[X,X^ {-1}],\]
then the quotient \((\mathbb{Z}\wr\mathbb{Z})/A\cong\mathbb{Z}\) is also abelian.
The group \(\mathbb{Z}\wr\mathbb{Z}\) admits an interesting interpretation as an infinite-state machine. Each element of \(\mathbb{Z}\wr\mathbb{Z}\) can be seen as a configuration of a "Turing machine". The machine is composed of a bi-infinite band with cells indexed by \(\mathbb{Z}\), along with a _head_ positioned at one of its cells. Each cell contains an integer. The element \(\begin{pmatrix}X^{b}&\sum_{i=p}^{q}a_{i}X^{i}\\ 0&1\end{pmatrix}\) corresponds to a configuration where the cells contain the integers \(\cdots,0,a_{p},a_{p+1},\ldots,a_{q},0,\cdots\), with \(a_{i}\) being on the \(i\)-th cell. The head of the machine is placed on the \(b\)-th cell. See Figure 4 for an illustration.
Multiplication in \(\mathbb{Z}\wr\mathbb{Z}\) also admits a simple interpretation using this machine representation. To multiply two elements under the machine representation, it suffices to align the \(\downarrow\) arrow of the first element with the \(\uparrow\) arrow of the second element, then add up the integers in the corresponding cells. For the arrows, we keep the position of \(\uparrow\) in the first element and the position of \(\downarrow\) in the second element. See Figure 5 for an illustration.
Using the machine representation, every element in \(\mathbb{Z}\wr\mathbb{Z}\) can be seen as an instruction on the machine by its effect of right-multiplication. For example, the element \(\begin{pmatrix}X&0\\ 0&1\end{pmatrix}\) corresponds to the instruction "move the head one cell to the right",
Figure 4: **Interpretation of an element in \(\mathbb{Z}\wr\mathbb{Z}\) as a βTuring machineβ. The upward arrow \(\uparrow\) marks the \(0\)-th cell, and the downward arrow \(\downarrow\) marks the position of the head. In this example, \(\downarrow\) marks the cell at position \(2\), meaning \(b=2\).**
which is the effect of right-multiplying it to a configuration. Similarly, the element \(\begin{pmatrix}1&1\\ 0&1\end{pmatrix}\) corresponds to the instruction "in the cell marked by the head, increase the number by one". Hence, Semigroup Membership in \(\mathbb{Z}\wr\mathbb{Z}\) can be reformulated as the problem: given a configuration \(T\) and a set of instructions \(\mathcal{G}\), can one reach \(T\) using a sequence of instructions from \(\mathcal{G}\)? Similarly, the Identity Problem can be reformulated as: given a set of instructions \(\mathcal{G}\), can one reach the initial configuration using a (non-empty) sequence of instructions from \(\mathcal{G}\)? For \(\mathbb{Z}\wr\mathbb{Z}\), we have the following results.
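To make the machine representation concrete, here is a minimal sketch (our own illustration, with hypothetical class and variable names) that stores an element \(\begin{pmatrix}X^{b}&y\\ 0&1\end{pmatrix}\) as a head position \(b\) and a sparse tape for \(y\); multiplication follows the matrix law \((X^{b_{1}},y_{1})(X^{b_{2}},y_{2})=(X^{b_{1}+b_{2}},\,y_{1}+X^{b_{1}}y_{2})\).

```
# Elements of Z wr Z as "Turing machine" configurations: a head position b and
# a sparse tape {cell: integer} encoding the Laurent polynomial y.
class WreathElem:
    def __init__(self, head=0, tape=None):
        self.head = head                      # the exponent b (head position)
        self.tape = dict(tape or {})          # y as a sparse {cell: integer} map

    def __mul__(self, other):
        tape = dict(self.tape)
        for cell, val in other.tape.items():  # y1 + X^{b1} * y2: shift by self.head
            tape[cell + self.head] = tape.get(cell + self.head, 0) + val
        tape = {c: v for c, v in tape.items() if v != 0}
        return WreathElem(self.head + other.head, tape)

move_right = WreathElem(head=1)               # "move the head one cell right"
increment  = WreathElem(tape={0: 1})          # "add 1 in the cell under the head"

# Right-multiplying a configuration by instructions, as described above:
cfg = WreathElem() * increment * move_right * increment * increment
print(cfg.head, cfg.tape)                     # -> 1 {0: 1, 1: 2}
```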
**Theorem 4.13**.: _In the group \(\mathbb{Z}\wr\mathbb{Z}\):_

1. _[Lohrey et al. 2015] Semigroup Membership is undecidable._
2. _[Romanovskii 1974] Group Membership is decidable._
3. _[Dong 2023a] The Identity Problem and the Group Problem are decidable._
Given the machine-like structure of \(\mathbb{Z}\wr\mathbb{Z}\), the undecidability result in Theorem 4.13(i) may not come as a surprise. Indeed, [Lohrey et al. 2015] showed that Semigroup Membership in \(\mathbb{Z}\wr\mathbb{Z}\) can simulate the halting problem in _2-counter machines_ (also known as _Minsky machines_), which is equivalent to the halting problem in arbitrary Turing machines [Minsky 1967]. As for Theorem 4.13(ii), it follows from a more general result of Romanovskii on metabelian groups (see Theorem 4.15(ii)). The most unexpected result seems to be the decidability of the Identity Problem in Theorem 4.13(iii). This shows that by restricting the target to the neutral element in Semigroup Membership, the problem becomes much more tractable (the same phenomenon can be observed in Theorem 4.9). The proof of Theorem 4.13(iii) uses a combination of graph theory techniques and deep tools from algebraic geometry.
**Remark 4.14**.: _The decidability of Semigroup Intersection in \(\mathbb{Z}\wr\mathbb{Z}\) remains an open problem at the moment. It can be shown that \(\mathbb{Z}\wr\mathbb{Z}\) does not contain \(\{a,b\}^{*}\times\{a,b\}^{*}\) as a subsemigroup. This excludes the possibility of proving undecidability by directly embedding the Post Correspondence Problem. However, \(\mathbb{Z}\wr\mathbb{Z}\) contains \(\{a,b\}^{*}\) as a subsemigroup [Baumslag and Roitberg 1982]. Therefore as in Remark 3.12, Semigroup Intersection is undecidable in the direct product \((\mathbb{Z}\wr\mathbb{Z})\times(\mathbb{Z}\wr\mathbb{Z})\)._
### Arbitrary metabelian groups and beyond
We now consider arbitrary finitely generated metabelian groups.
Figure 5: Example of multiplying two elements in \(\mathbb{Z}\wr\mathbb{Z}\) using the matrix representation and using the machine representation.
**Theorem 4.15**: _In finitely generated metabelian groups5, we have the following results._
Footnote 5: As a convention in computational group theory, a finitely generated metabelian group is usually represented by a _finite \(\mathscr{A}^{2}\)-presentation_. Let \(\Sigma\) be an alphabet and \(F(\Sigma)\) be the free group over \(\Sigma\). The quotient \(M(\Sigma)\coloneqq F(\Sigma)/[[F(\Sigma),F(\Sigma)],[F(\Sigma),F(\Sigma)]]\) is metabelian, and is called the _free metabelian group_ over \(\Sigma\). A finite \(\mathscr{A}^{2}\)-presentation of a metabelian group \(G\) is the writing of \(G\) as a quotient \(M(\Sigma)/\langle\langle v_{1},\ldots,v_{n}\rangle\rangle\) for some alphabet \(\Sigma\) as well as elements \(v_{1},\ldots,v_{n}\in M(\Sigma)\). Here, \(\langle\langle v_{1},\ldots,v_{n}\rangle\rangle\) denotes the normal subgroup of \(M(\Sigma)\) generated by \(v_{1},\ldots,v_{n}\). Every finitely generated metabelian group admits a finite \(\mathscr{A}^{2}\)-presentation [Hall 1954, Corollary 1] [Baumslag et al. 1994, p.629].
1. _[Lohrey et al. 2015] There exists a finitely generated metabelian group, namely \(\mathbb{Z}\wr\mathbb{Z}\), where Semigroup Membership is undecidable._
2. _[Romanovskii 1974] Group Membership is decidable in all finitely generated metabelian groups._
3. _[Dong 2023c] The Identity Problem and the Group Problem are decidable in all finitely generated metabelian groups._
4. _(See Remark 4.14) There exists a finitely generated metabelian group, namely \((\mathbb{Z}\wr\mathbb{Z})\times(\mathbb{Z}\wr\mathbb{Z})\), where Semigroup Intersection is undecidable._

In particular, Theorem 4.15(iii) generalizes Theorem 4.13(iii), while Theorem 4.13(ii) is in fact a corollary of Theorem 4.15(ii).

Beyond nilpotent groups and metabelian groups, it would be interesting to consider algorithmic problems in larger classes of solvable groups. A natural candidate is the class of _polycyclic groups_ [Drutu and Kapovich 2018, Chapter 13, p.463], which generalizes finitely generated nilpotent groups. By a classic result of [Kopytov 1968], polycyclic groups admit decidable Group Membership. However, they are not yet well-studied in the context of semigroup algorithmic problems. Another interesting class to consider is the so-called _center-by-metabelian groups_ [Groves 1978], which are a relatively tractable subclass of solvable groups of derived length three.
## 5 Other algorithmic problems for groups and semigroups
To end this survey we mention some other interesting algorithmic problems in infinite groups and matrix groups. These problems are less directly related to semigroup theory, but have nevertheless tight connections with many other areas in the theory of computing. Let \(G\) be a group. The following list is non-exhaustive.
1. (Knapsack Problem [Myasnikov et al. 2015, Konig et al. 2016]) Given as input the elements \(A_{1},\ldots,A_{K},T\in G\), decide whether there exist \(n_{1},\ldots,n_{K}\in\mathbb{N}\) such that \(A_{1}^{n_{1}}\cdots A_{K}^{n_{K}}=T\).
2. (Freeness Problem [Cassaigne et al. 1999, Cassaigne and Nicolas 2012]) Given as input the elements \(A_{1},\ldots,A_{K}\in G\), decide whether there exist two different sequences \((i_{1},\ldots,i_{p})\) and \((j_{1},\ldots,j_{q})\) such that \(A_{i_{1}}\cdots A_{i_{p}}=A_{j_{1}}\cdots A_{j_{q}}\).
3. (Subgroup Intersection [Howson 1954, Baumslag et al. 2010]) Given two finite sets \(\mathcal{G},\mathcal{H}\subseteq G\), decide whether \(\langle\mathcal{G}\rangle_{grp}\cap\langle\mathcal{H}\rangle_{grp}\) is the trivial group.
4. (Coset Intersection [Babai 2010, Macdonald et al. 2019]) Given two finite sets \(\mathcal{G},\mathcal{H}\subseteq G\), as well as elements \(g,h\in G\), decide whether \(g\cdot\langle\mathcal{G}\rangle_{grp}\cap h\cdot\langle\mathcal{H}\rangle_{grp}\) is empty.
5. (Vector Reachability [Bell and Potapov 2008, Potapov and Semukhin 2019]) Given \(K\) square matrices \(A_{1},\ldots,A_{K}\) of dimension \(d\), as well as two vectors \(v,w\) of dimension \(d\), decide whether there exists \(T\in\langle A_{1},\ldots,A_{K}\rangle\) such that \(Tv=w\). In particular, if one only considers invertible matrices, then Vector Reachability can be reformulated as a generalization of Coset Intersection. Let \(T_{0}\) be an arbitrary matrix such that \(T_{0}v=w\), and denote by \(H\) the matrix group \(\{A\mid Aw=w\}\). Then Vector Reachability is equivalent to deciding whether the semigroup \(\langle A_{1},\ldots,A_{K}\rangle\) has non-empty intersection with the coset \(H\cdot T_{0}\).
6. (Skolem Problem [Skolem 1933; Ouaknine and Worrell 2015]) Given a square matrix \(A\) of dimension \(d\), as well as two vectors \(v,w\) of dimension \(d\), decide whether there exists \(n\in\mathbb{N}\) such that \(v^{\top}A^{n}w=0\) (a semi-decision sketch for the "yes" direction is given after this list).
7. (Positivity Problem [Soittola 1975; Ouaknine and Worrell 2014]) Given a square matrix \(A\) of dimension \(d\), as well as two vectors \(v,w\) of dimension \(d\), decide whether \(v^{\top}A^{n}w\geq 0\) for all \(n\in\mathbb{N}\).
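For the Skolem Problem, the "yes" direction is semi-decidable: one simply evaluates \(v^{\top}A^{n}w\) for increasing \(n\) and halts on a zero. A minimal sketch with exact integer arithmetic follows (the helper name and the example recurrence are ours, not from the literature).

```
# Check v^T A^n w = 0 for n = 0, ..., bound with exact integer arithmetic.
import numpy as np

def skolem_witness_up_to(A, v, w, bound):
    A = np.array(A, dtype=object)  # object dtype keeps Python integers exact
    v = np.array(v, dtype=object)
    x = np.array(w, dtype=object)
    for n in range(bound + 1):
        if v.dot(x) == 0:
            return n               # witness: v^T A^n w = 0
        x = A.dot(x)               # advance to A^{n+1} w
    return None

# Example: u_n = n - 3 via the recurrence u_{n+1} = 2u_n - u_{n-1}; zero at n = 3.
print(skolem_witness_up_to([[0, 1], [-1, 2]], [1, 0], [-3, -2], 10))  # -> 3
```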
|
2309.04088 | Data-driven classification of low-power communication signals by an
unauthenticated user using a software-defined radio | Many large-scale distributed multi-agent systems exchange information over
low-power communication networks. In particular, agents intermittently
communicate state and control signals in robotic network applications, often
with limited power over an unlicensed spectrum, prone to eavesdropping and
denial-of-service attacks. In this paper, we argue that a widely popular
low-power communication protocol known as LoRa is vulnerable to
denial-of-service attacks by an unauthenticated attacker if it can successfully
identify a target signal's bandwidth and spreading factor. Leveraging a
structural pattern in the LoRa signal's instantaneous frequency representation,
we relate the problem of jointly inferring the two unknown parameters to a
classification problem, which can be efficiently implemented using neural
networks. | Tarun Rao Keshabhoina, Marcos M. Vasconcelos | 2023-09-08T02:57:38Z | http://arxiv.org/abs/2309.04088v1 | Data-driven classification of low-power communication signals by an unauthenticated user using a software-defined radio
###### Abstract
Many large-scale distributed multi-agent systems exchange information over low-power communication networks. In particular, agents intermittently communicate state and control signals in robotic network applications, often with limited power over an unlicensed spectrum, prone to eavesdropping and denial-of-service attacks. In this paper, we argue that a widely popular low-power communication protocol known as _LoRa_ is vulnerable to denial-of-service attacks by an unauthenticated attacker if it can successfully identify a target signal's bandwidth and spreading factor. Leveraging a structural pattern in the LoRa signal's instantaneous frequency representation, we relate the problem of jointly inferring the two unknown parameters to a classification problem, which can be efficiently implemented using neural networks.
## I Introduction
Multi-agent robotic systems are used in various modern applications, including industrial automation, agriculture, and environmental monitoring [1, 2]. In these systems, autonomous robots work together to accomplish a common goal, such as monitoring an environment or cooperatively completing a task. In such systems, coordination, and communication among the robots are critical to their success. Each robot must be aware of the state and actions of the other robots in the system to coordinate their actions and achieve their goals. For example, in an agricultural monitoring system, each robot may be responsible for monitoring a different field area, and they must coordinate their movements to ensure that the entire field is covered. Therefore, communication among the robots must be reliable, even in challenging scenarios such as remote or outdoor environments, which are subject to disruption by obstacles or malicious interference. Protecting such networks against denial-of-service attacks is of paramount importance to prevent service disruption and economic loss.
LoRaWAN (Long Range Wide Area Network), a Low Power Wide Area Network (LPWAN) protocol, offers long-range and low-power communication capabilities well-suited to multi-agent robotic systems [3]. Additionally, LoRaWAN supports creating large-scale networks with multiple nodes, making it an ideal solution for coordinating the activities of large groups of robots communicating intermittently. While LoRaWAN is one of the most robust and resilient low-power communication protocols, it is still vulnerable to a class of denial-of-service attacks known as _jamming_.
A jamming attack follows the diagram in Fig. 1: a transmitting agent, Tx, sends a signal to a receiving agent, Rx; the transmitted signal is intercepted by an attacker using a software-defined radio unit; The attacker then infers two private parameters used for communication between Tx and Rx, and subsequently sends a jamming signal to interfere with the transmitted signal at the receiver.
### _Related Work_
Wireless communication protocols transmit over the air, which makes them vulnerable to interference from any radio transmitter within their vicinity. This fundamental aspect of shared media in wireless networks has paved the way for extensive research in the wireless jamming domain [4, 5, 6]. Energy-constrained jamming methodologies attempt to block the channel in reaction to transmission activity to save power. Herein, we discuss such a reactive jamming strategy for LoRa PHY. Securing communication systems and improving performance in the presence of _intelligent_ jammers [7, 8] is the motivation for this work.
Numerous studies have examined the throughput and performance of ultra-narrow band (UNB) and spread spectrum-based technologies in the unlicensed Industrial, Scientific, and Medical (ISM) band [9, 10]. Amongst these, a comprehensive study of PHY layer vulnerabilities, countermeasures and security features of LoRaWAN is presented in [11], and its authors also provide a brief overview of jamming methodologies for LoRa. Long-range transmissions on LoRa are susceptible to several attack strategies such as replay attacks, wormhole attacks, and compromising network key information, in addition to jamming [11].
LoRa's medium access control (MAC) layer design introduces many configurable parameters that affect its service reliability. An in-depth explanation of such parameters and
Fig. 1: Block diagram for the communication scenario herein: two legitimate agents communicate a signal represented by \(X\), an attacker observes a correlated signal \(\tilde{X}\), with the intent to emit a jamming signal \(J\).
their resulting performance tradeoffs is presented in [12]. Choices of these parameters, driven by service requirements, also play a role in the PHY layer encoding of signals, having implications for the approaches adopted by intelligent jammers.
When the signals from one packet are \(6\mathrm{dB}\) stronger than those of another, the stronger packet is demodulated while the weaker packet is discarded (this is the so-called _channel capture effect_) [13]. Building on this concept, the authors of [14] have shown that LoRa can be jammed using commercially available hardware. Herein, they induce collisions on the channel by flooding it with numerous packets of identical parameter choices. A more advanced technique, targeting the symbol demodulation process in LoRa, was explored in [15], introducing the idea of jamming chirps. They revealed that LoRa receivers cannot distinguish between a well-synchronized jamming chirp and a legitimate chirp.
LoRa was found vulnerable to interference when two packets employ the same configuration of two parameters known as the Bandwidth (\(BW\)) and Spreading Factor (\(SF\)). The symbol demodulation process in LoRa involves two steps: first, dechirping, and then an FFT (Fast Fourier Transform). Symbols are determined by identifying peaks within the FFT. When interfering packets utilize the same \(BW\) and \(SF\), this can cause multiple indiscernible peaks in the FFT, leading to symbol errors [16].
Contemporary work in LoRa jamming exploits this property, and an empirical analysis of the approach is discussed in [17]. While they prove the effectiveness of this strategy, they also make a strong assumption: that the jammer has a priori knowledge of the target signal's \(BW\) and \(SF\) choices, necessary for generating the jamming chirps. However, these parameters are generally not available to adversarial agents, which are unauthenticated users of the network.
In this paper, we take a step further, exploring how an adversary may employ a simple neural network implementation to estimate this information and jam LoRa signals reactively, without such assumed knowledge. We provide numerical results on the detection and identification of the \(BW\) and \(SF\) parameters from observed signals. Then, we quantify the robustness of our model by evaluating it across a wide range of signal-to-noise ratio (SNR) levels.
The rest of this paper is organized as follows. Section II introduces LoRa PHY and the chirp spread spectrum. Section III describes system architecture. Section IV describes our proposed feature extraction technique. Section V describes the architecture of the neural network classifier. Section VI presents our simulation results and discusses our system's performance. Finally, Section VII concludes the paper and outlines future research directions.
## II Signal description
LoRa PHY is a passband modulation technique that uses chirp spread spectrum (CSS) to modulate digital information onto a carrier wave. In CSS, a chirp is a signal whose instantaneous frequency increases or decreases linearly as a function of time.
In LoRa, each transmitted symbol is mapped into a chirp. The bandwidth (\(BW\)) and spreading factor (\(SF\)) are the most critical parameters defining a LoRa chirp. The \(BW\) corresponds to the range of frequencies of the channel occupied by the chirp, and the \(SF\) determines the number of bits transmitted in a symbol. Each symbol carries \(SF\) bits (i.e., values ranging from \(0\) to \(2^{SF}-1\)). The joint choice of \(SF\) and \(BW\) determines the data rate of the communication link. Following [18], in this section, we describe the CSS modulation.
A fundamental characteristic of the LoRa chirp is its cyclically shifted frequency: the frequency rises incrementally from the initial frequency in discrete steps; upon reaching the highest frequency, it wraps around to the lowest frequency and continues its ascent until it cycles back to the initial frequency. The chirp encodes information by adjusting its starting frequency according to its symbol value, \(s_{n}\).
Consider the transmission of a sequence of symbols \(\mathbf{s}:=\{s_{n}\}\). Each symbol carries \(SF\) bits, denoted by a vector \(\mathbf{w}_{n}=(w_{n,0},\ldots,w_{n,SF-1})\), where \(w_{n,b}\in\{0,1\}\), \(b\in\{0,\ldots,SF-1\}\). A new symbol is transmitted every \(T_{s}\) seconds, corresponding to a chirp signal's duration in time. The value of the symbol \(s_{n}\) is given by
\[s_{n}=\sum_{b=0}^{SF-1}w_{n,b}\times 2^{b}. \tag{1}\]
Since \(s_{n}\) can take on \(2^{SF}\) distinct values, the channel bandwidth is divided into \(2^{SF}\) discrete levels. Each of these levels signifies the starting frequency for a specific symbol value.
Therefore, the chirp completes \(2^{SF}\) discrete steps throughout its duration, in cycling back to its initial frequency. For a chosen bandwidth, \(BW\), each step lasts for a duration of \(T=1/BW\) seconds, adding up to the entire symbol duration \(T_{s}\). Thus, \(SF\) determines the number of steps, and \(BW\) determines the time period of each step, collectively defining the symbol duration, \(T_{s}=2^{SF}/BW\).
Let \(f_{c}\) denote the channel's center frequency. The \(n\)-th transmitted symbol, \(s_{n}\), is mapped into a chirp signal \(c_{n}(t)\in\mathbb{C}\) given by
\[c_{n}(t)=\frac{1}{\sqrt{2^{SF}}}\exp\big{\{}j\big{(}2\pi f_{n}(t)\big{)}t \big{\}},\ \ t\in[0,T_{s}] \tag{2}\]
where,
\[f_{n}(t)=f_{c}+\mathrm{mod}\big{(}s_{n}+t\times BW,2^{SF}\big{)}\times\frac{ BW}{2^{SF}}-\frac{BW}{2}, \tag{3}\]
and \(\mathrm{mod}(\xi,2^{SF})\) is the remainder of the division of \(\xi\) by \(2^{SF}\).
In LoRa \(SF\in\{7,8,9,10,11,12\}\). It is customary to represent a chirp in discrete-time using \(2^{SF}\times f_{s}/BW\) samples indexed by \(k\), where \(f_{s}\) is the sampling frequency and \(f_{s}/BW\) is the oversampling factor. Letting \(t=k/f_{s}\), we obtain:
\[c_{n}(k)=\frac{1}{\sqrt{2^{SF}}}\exp\bigg{\{}j2\pi\times\Big{(}f_ {c}+\\ \mathrm{mod}\ \big{(}s_{n}+k\times\frac{BW}{f_{s}},2^{SF}\big{)} \times\frac{BW}{2^{SF}}-\frac{BW}{2}\Big{)}\bigg{\}}\times k, \tag{4}\]
where \(k=\{0,1,2,\ldots,(2^{SF}\times f_{s}/BW)-1\}\). Figure 2 shows a chirp in continuous time, in discrete time and in its time-frequency representation.
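As an illustration, the following is a minimal `numpy` sketch of Eq. (4), generating the discrete-time baseband chirp for one symbol (we take \(f_{c}=0\) at baseband; the function name is ours):

```
# Discrete-time LoRa chirp per Eq. (4); f_c = 0 gives the baseband signal.
import numpy as np

def lora_chirp(symbol, SF=7, BW=125e3, fs=1e6, fc=0.0):
    n_samples = int(2**SF * fs / BW)       # 2^SF steps, oversampled by fs/BW
    k = np.arange(n_samples)
    f = fc + np.mod(symbol + k * BW / fs, 2**SF) * BW / 2**SF - BW / 2
    return np.exp(1j * 2 * np.pi * f * k / fs) / np.sqrt(2**SF)

iq = lora_chirp(symbol=42)
print(iq.shape)                            # -> (1024,): 2^7 steps, oversampling 8
```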
## III System description
Traditionally, jamming in the physical layer corresponds to adding white Gaussian noise (AWGN) to the transmitted signal. Such naive strategies are ineffective in LoRa communications. Due to this resilience to AWGN, LoRa has also been referred to as a secure communication protocol. However, it has been shown by [17] that LoRa is vulnerable to jamming using a chirp-type waveform. Generating the chirp-type waveform to cause destructive interference requires the knowledge of \(BW\) and \(SF\).
The LoRaWAN specification fixes the choice of these parameters to a finite set of \(18\) combinations (\(BW\in\{125\mathrm{kHz},\ 250\mathrm{kHz},\ 500\mathrm{kHz}\}\) and \(SF\in\{7,8,9,10,11,12\}\)). These parameters are agreed upon by the legitimate communicating parties, but are not readily available to a jamming adversary. Hence, the jammer needs to estimate this information from an observed signal.
Figure 3 shows the block diagram of the data pipeline used by a reactive LoRa jammer. Each component of this system is described in the following subsections.
### _Data batch preprocessing block_
The SDR captures signals in real time and outputs a stream of In-phase and Quadrature (IQ) samples of indefinite length. On the other hand, our neural network classifier operates on data batches of finite size. The _preprocessor_ block collects data flowing in from the SDR into a matrix of appropriate size for processing in the subsequent blocks.
The SDR is tuned to the channel of interest and configured to a sampling rate of \(1\mathrm{MHz}\). By the Shannon-Nyquist theorem, a minimum sampling rate of \(1\mathrm{MHz}\) is required since the maximum \(BW\) in LoRa is \(500\mathrm{KHz}\). A lower sampling rate might result in distortion from aliasing, and higher rates imply higher demand for computational resources. Therefore, the SDR generates a noisy IQ stream \(\tilde{X}\) of discrete-time samples to the host PC. The preprocessor block parses this stream of complex values into smaller signal blocks and reshapes them into a matrix of dimensions \(B\times M\), where \(B\) represents the batch size and \(M\) represents the length of the signal segment.
Determining the proper block length \(M\) is crucial, as it must contain enough samples to distinguish the LoRa configurations reliably. If the block length is too small, the signal is truncated and information is lost. If the block length is too large, the neural network processing introduces latency. Hence it must be as small as possible yet carry enough signal information.
We have empirically determined that the ideal block length must span two LoRa symbols for the longest configuration. The longest configuration in LoRa is \(BW=125\mathrm{KHz}\) and \(SF=12\), resulting in a symbol duration of \(T_{s}=2^{12}/125000\) seconds, so two symbols span \(2\times 2^{12}/125000\) seconds. For a sampling frequency of \(1\mathrm{MHz}\), we obtain an over-sampling factor of \(8\), resulting in \(2\times 2^{12}\times 8=65,536\) samples. Therefore, we fix the block length to \(M=65,600\).
### _Feature Extraction_
The _feature extraction_ block employs an algorithm based on the instantaneous frequency (IF), which leads to a compact representation of LoRa signal sequences. Such a representation accentuates features related to the identification of \(BW\) and \(SF\). The algorithm first transforms the signal vectors from the
Fig. 3: Block diagram for a reactive jammer in a communication system that uses CSS modulation.
Fig. 2: A chirp signal with BW = 125 KHz, and SF = 7 in continuous time (left), discrete time (middle), and its spectrogram (right).
time domain to the frequency domain and tracks the instantaneous frequency of the signal over time. In the frequency domain, any pair of LoRa signals corresponding to different configurations appears distinctly different. The algorithm takes in a batch of signal blocks from the preprocessor block, \(V\), and applies the algorithm described in Section IV to produce a matrix \(F\) of IF vectors.
Our goal is to infer the parameters \(SF\) and \(BW\). One influences the duration of the chirp, and the other affects both the duration and frequency sweep range in the chirp. Our approach to feature extraction here is to characterize the instantaneous frequency of the signal, describing the evolution of the frequency in the signal with time. Through this representation, we can observe both the range of the frequencies swept and the time elapsed for each sweep, enabling simultaneous estimation of \(SF\) and \(BW\).
### _Chirp classifier_
The _chirp classifier_ block uses a neural network (NN) to identify the transmitted chirp signal. Our model is trained using a dataset of IF vectors labeled with their corresponding \(BW\) and \(SF\) configurations. Once trained, this block receives an IF vector and performs a _soft-decision_ classification of \(BW\) and \(SF\) in a vector \(C\) of probabilities for each of the \(18\) possible signal configurations. This information is passed to the chirp-generator block.
In the context of classifying LoRa signals based on their features, it is important to note that the relationship between these features and their respective classifications is non-linear. NNs can learn complex relationships and patterns in data, making them suitable for tasks like classifying signals with intricate or non-linear relationships between their features and categories. With proper training and a sufficiently rich architecture, NNs can provide accurate signal classification even at extremely low levels of SNR. We will discuss the NN architecture in more detail in Section V.
### _Chirp generator_
The _chirp generator_ block is responsible for utilizing the inferred \(BW\) and \(SF\) to generate a stream of discrete-time IQ values for the jamming chirps, denoted by \(J\). The IQ stream is sent to the SDR, whose Digital to Analog Converter (DAC) converts it from discrete-time samples to a corresponding continuous-time signal. Once converted to analog, the SDR can adjust the signal to the channel's center frequency for transmission. The resulting signal represents a chirp with the same \(BW\) and \(SF\) as the target signal, leading to interference at the receiver.
LoRa uses a two-step demodulation procedure: the first step is known as _dechirping_, followed by an FFT. The dechirping operation multiplies the sampled signal with a base down chirp of the same \(BW\) and \(SF\). The resulting signal has a constant frequency, which matches the chirp's initial frequency. Then, from its FFT, we identify the bin index of this frequency, determining the encoded symbol's value. Under this demodulation scheme, when two signals of the same \(BW\) and \(SF\) configuration interfere at the receiver, they result in multiple indiscernible peaks in the FFT step. Such interference deceives the receiver into misidentifying the original symbol. This misidentification leads to symbol demodulation errors, resulting in packet drops, effectively jamming the signal.
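A minimal sketch of this two-step demodulation, reusing the `lora_chirp` helper sketched after Eq. (4) (with \(f_{s}=BW\), i.e., no oversampling, for clarity):

```
# Dechirp (multiply by the conjugate of the base upchirp, i.e., a base
# downchirp), then take the FFT; the peak bin index recovers the symbol value.
import numpy as np

def demod_symbol(rx, SF=7, BW=125e3):
    base_up = lora_chirp(0, SF=SF, BW=BW, fs=BW)
    dechirped = rx * np.conj(base_up)
    return int(np.argmax(np.abs(np.fft.fft(dechirped))))

tx = lora_chirp(42, SF=7, BW=125e3, fs=125e3)
print(demod_symbol(tx))                    # -> 42
```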
With the knowledge of \(BW\) and \(SF\) we can generate chirp signals using Eq. (4). However, the chirp's polarity (_upchirp_ or _downchirp_), the symbol value, and the arrival time influence the effectiveness of interference with the target signal. Considering these factors, the authors of [17] introduced three effective methods to jam LoRa signals when \(BW\) and \(SF\) are known, which can be implemented in the chirp generator block, summarized as follows:
* **Identical chirps:** A simple approach is to continuously repeat the same symbol in sequence. By transmitting continuously, we avoid sudden shifts across demodulation windows. Any delays and time offsets only affect the initial frequency of the chirp and still result in demodulation errors. This method is lightweight because it does not require strict time synchronization.
* **Consecutive downchirps:** This method targets the Start Frame Delimiter (SFD) symbol of LoRa packets, which is a base downchirp that marks the beginning of the packet header. By transmitting base downchirps consecutively, the receiver is tricked into making errors in identifying the legitimate SFD, resulting in incorrect packet parsing and leading to packet drops.
* **Synchronized chirps:** This method is considered to be the most effective jamming strategy in LoRa [15, 17]. It involves transmitting random symbols that perfectly align with the demodulation window at a receiver. This is made possible by estimating and compensating the Carrier Frequency Offset (CFO) and the Sampling Time Offset (STO), as in a legitimate LoRa demodulator. The _synchronized chirps_ method requires strict synchronization and additional computing, however, it is the most effective and difficult to detect method known to date.
In conjunction with the inferred parameters, the chosen method defines the sequence of jamming chirps to be transmitted. The IQ values corresponding to this sequence are streamed from the chirp generator block to the SDR at a fixed rate. Consequently, the SDR transmits this waveform over the air to jam the target signal at the receiver. This strategy shows that it is possible to jam LoRa signals of unknown \(BW\) and \(SF\) configurations by an unauthenticated agent.
## IV Feature extraction
In this section, we identify a pattern in the data, also known as a feature, that aids in distinguishing one category from another. To that end, we compute the _instantaneous frequency_ of the signal. Through this feature, we simultaneously retain information about the range of frequencies swept and their sweep rate, which are directly related to our two parameters of interest, \(BW\) and \(SF\).
Here, we follow a two-step procedure for computing the instantaneous frequency: a Short Term Fourier Transform (STFT) followed by Instantaneous Frequency (IF) estimation.
### _Short Term Fourier Transform (STFT)_
Given the inherent time-varying nature of frequency in a chirp signal, we employ the STFT on each input signal segment [19]. A given signal segment is further subdivided into overlapping windows, each consisting of \(W=128\) samples, with an overlap of \(L=64\) samples. Subsequently, an FFT is executed on these windows. This operation obtains the power distribution across all the frequencies in the channel bandwidth \(BW\) as the signal evolves in time, as follows:
\[Q[k,m]=\sum_{n=0}^{W-1}x[n+mL]w[n]e^{-j2\pi nk/W}, \tag{5}\]
where \(Q[k,m]\) is the STFT coefficient at frequency bin \(k\) and time index \(m\), \(x\) is the input signal segment, and \(w[n]\) is the Hann window function [20].
### _Instantaneous Frequency Estimation_
Unlike stationary signals where the spectral properties are constant, the frequency of a chirp signal varies linearly with time [21]. For such signals, we must compute the _instantaneous frequency_ instead of frequency. The instantaneous frequency is a time-varying parameter related to the average of the frequencies present in the signal as it evolves in time [22].
From the STFT operation in Eq. (5), we obtain the energy distribution over all frequency bins for every time-step. We use this energy distribution to compute a weighted average of the frequencies at each time-step, obtaining the instantaneous frequency of the signal, as follows:
\[f_{inst}(m)=\frac{\sum_{k=1}^{K}P(k,m)f(k,m)}{\sum_{k=1}^{K}P(k,m)}, \tag{6}\]
where \(f_{inst}(m)\) is the instantaneous frequency at the time index \(m\), \(f(k,m)\) is the peak frequency at frequency index \(k\) and time index \(m\), and \(P(k,m)\) is the power spectral density, computed as \(P(k,m)=|Q[k,m]|^{2}\).
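A minimal sketch of this two-step feature extraction (Eqs. (5)-(6)), assuming `scipy` is available; the window length \(W=128\) and overlap \(L=64\) follow the text, and the function name is ours:

```
# STFT with a Hann window, then a power-weighted mean frequency per time frame.
import numpy as np
from scipy.signal import stft

def instantaneous_frequency(x, fs=1e6, W=128, L=64):
    f, t, Q = stft(x, fs=fs, window="hann", nperseg=W, noverlap=L,
                   return_onesided=False)  # complex IQ input: keep both sides
    P = np.abs(Q)**2                       # power spectral density, per Eq. (6)
    return (P * f[:, None]).sum(axis=0) / P.sum(axis=0)
```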
## V Neural network architecture
LoRa nodes operate under power constraints (typically from \(10\mathrm{dBm}\) to \(20\mathrm{dBm}\)) and often transmit over long communication distances (typically from \(10^{3}\)m to \(10^{4}\)m). As a result, LoRa signals are often received at low SNR, sometimes even below the noise floor. Identifying and distinguishing such signals reliably demand a classifier model with high noise tolerance and discriminative power.
Neural networks have been extensively used for signal classification in wireless communications, spanning applications such as channel sensing, interference detection and spectrum management [23, 24, 25]. Central to their efficacy in these applications is their inherent ability to model non-linear relationships between parameters and noisy data [26].
Figure 4 illustrates the feature representations corresponding to different \(BW\) and \(SF\) configurations. The first three subfigures show the case of fixed \(BW\), and the last three illustrate the case of fixed \(SF\). These graphs indicate that changes in \(BW\) and \(SF\) result in clearly distinct waveforms. Additionally, the characterization based on the IF of these waveforms makes the task of distinguishing signals of different configurations much simpler by converting the problem of estimating \(BW\) and \(SF\) into a signal classification problem.
For this classification task, we use a feed-forward neural network as illustrated in Figure 5. The model features two hidden layers with 16 neurons each, and an output layer of 18 neurons, as specified in Table I. The input is the IF vector, indexed by time. To classify an IF vector into one of the \(18\) categories, we use a softmax function in the output layer to obtain a probability distribution on the likelihood of each class given the observed data. Given an output of the final layer, \(Z=[z_{1},z_{2},\ldots,z_{18}]\), of \(18\) real numbers, the softmax function \(S(\cdot)\) is defined as:
\[S(z_{i})=\frac{e^{z_{i}}}{\sum_{j=1}^{18}e^{z_{j}}},\ \ i=1,\ldots,18. \tag{7}\]
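A minimal PyTorch sketch of this classifier follows; the input length and the ReLU activations are placeholders of ours, since the text fixes only the layer widths and the 18-way softmax output:

```
# Feed-forward classifier: two hidden layers of 16 neurons, 18 output classes.
import torch.nn as nn

class ChirpClassifier(nn.Module):
    def __init__(self, input_len=1024):     # hypothetical IF-vector length
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_len, 16), nn.ReLU(),
            nn.Linear(16, 16), nn.ReLU(),
            nn.Linear(16, 18),               # logits for the 18 (BW, SF) classes
        )

    def forward(self, x):
        return self.net(x).softmax(dim=-1)   # probability distribution, Eq. (7)
```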
## VI Simulation Results
We use synthetic datasets of LoRa signals, creating separate datasets for training and validation. Here, the noisy signals
Fig. 4: Feature representations for various LoRa Configurations
are generated according to an Additive White Gaussian Noise (AWGN) model producing signal data at diverse SNR levels. Our training dataset has \(10\) SNR levels, ranging from \(0\) to \(20\mathrm{dB}\). For each of the \(18\) configurations, we have generated \(50\) signal files. Thus, the training dataset contains a total of \(9000\) entries. Our validation dataset has a broader SNR range, from \(-15\) to \(20\mathrm{dB}\), leading to \(18\) SNR levels in total. Here, we have generated \(20\) signal files for each case, leading to a total of \(6480\) entries. We found that if the training dataset included signals with SNR below zero, the classification performance of the NN was severely degraded.
Consider a clean signal, denoted by \(X\), subjected to AWGN denoted by \(Z\) as follows:
\[\tilde{X}=X+Z, \tag{8}\]

where \(\tilde{X}\) is the resulting noisy signal. The power level of \(Z\) is determined by the desired SNR level.
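A minimal sketch of generating such noisy signals at a target SNR per Eq. (8) (the helper name is ours):

```
# Add complex AWGN to a clean signal x at a desired SNR (in dB).
import numpy as np

def add_awgn(x, snr_db, rng=None):
    rng = rng or np.random.default_rng(0)
    p_signal = np.mean(np.abs(x)**2)
    p_noise = p_signal / 10**(snr_db / 10)   # noise power from the SNR target
    z = np.sqrt(p_noise / 2) * (rng.standard_normal(x.shape)
                                + 1j * rng.standard_normal(x.shape))
    return x + z
```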
We obtain confidence intervals on the results by repeating the experiment \(30\) times. Additionally, we experiment with fixed \(BW\) and \(SF\) choices, observing their influence on classification performance. Figure 6 (left) illustrates the classifier's overall accuracy as a function of SNR. Classification accuracy starts at around 12% at \(-15\mathrm{dB}\) SNR, then increases sharply and saturates at \(-5\mathrm{dB}\) SNR. Figure 6 (middle) illustrates the classifier's accuracy as a function of SNR for the different possible \(SF\) configurations. The curves differ from one another in their saturation points and the accuracy levels they reach. The curve for \(SF\) 12 reaches saturation the earliest and at the highest accuracy level, succeeded by \(SF\) 11, with subsequent configurations following in descending order. We observe that for a fixed SNR level, higher \(SF\) choices yield consistently higher accuracy scores. The mean classification accuracy improved with an increase in \(SF\) from 7 to 12. This trend results from LoRa's spreading waveform, where \(SF\) determines the sweep rate of the chirp. A higher \(SF\) leads to a longer chirp duration, resulting in a more elongated and discernible frequency trajectory over time. With more samples constituting the waveform, identification becomes more precise, improving classification accuracy.
Figure 6 (right) illustrates the classifier's accuracy as a function of SNR for three \(BW\) configurations: \(125\mathrm{KHz}\), \(250\mathrm{KHz}\), and \(500\mathrm{KHz}\). Before reaching saturation, the \(125\mathrm{KHz}\) curve shows a higher accuracy compared to the other two. Meanwhile, the classifier's accuracy for the \(250\mathrm{KHz}\) curve is consistently higher than for \(500\mathrm{KHz}\).
After reaching saturation, the \(500\mathrm{KHz}\) curve exhibits higher accuracy over the other two. However, beyond this point, all three configurations deliver high classification accuracy. Thus, despite the high accuracy of the \(500\mathrm{KHz}\) curve post saturation, the real differentiator lies in their points of saturation. The earlier the saturation, the lower the minimum SNR needed to classify the signal reliably. Thus, the order in which the curves saturate imply that lower \(BW\) configurations yield better detection.
The mean classification accuracy saturates later for higher \(BW\) choices from \(125\mathrm{KHz}\) to \(500\mathrm{KHz}\). With a wider \(BW\), the signal's frequency changes on a broader range in a reduced period. This rapid shifting causes the instantaneous frequency vectors to become too closely spaced, making it more challenging for the classifier to distinguish them.
The choice of \(BW\) and \(SF\) in LoRa is motivated by the application's quality of service requirements. However, in practice, nodes switch between several parameter choices to save power and optimize throughput. Therefore, when jamming or extensive interference is a concern, legitimate nodes must consider switching to faster \(BW\) and \(SF\) choices. Our results indicate that, to avoid detection by unauthorized agents, legitimate LoRa nodes must opt for lower \(SF\) choices and higher \(BW\) choices whenever possible.
## VII Conclusions and future work
Many large-scale multi-agent systems rely on LPWAN protocols. Amongst these, LoRaWAN has found widespread adoption, due to its energy efficiency, long range, and use of unlicensed spectrum. However, it is susceptible to cyber-attacks, including eavesdropping and jamming. In this paper, we explored the vulnerability of LoRa to signal jamming.
A survey of the related literature revealed that LoRa is vulnerable to jamming with a particular chirp-type signal. However, generating such signals requires knowledge of the bandwidth and spreading factor of the target LoRa signal. We argue that this information is shared amongst legitimate parties but unavailable to an unauthenticated adversarial agent. In this work, we presented the high-level design of a practical jammer that estimates these parameters with a neural network classifier by eavesdropping, and reactively emits jamming chirps.
Fig. 5: Neural network architecture used in our system.
Leveraging a structural pattern in LoRa's signal waveform, we relate the problem of estimating these parameters to a signal classification task. To that end, we proposed a feature extraction method that computes the instantaneous frequency of signals, enhancing features pertinent to identifying \(BW\) and \(SF\) configurations. Then we trained a feedforward neural network classifier on a dataset of LoRa signals to learn these characteristics for predictive analysis. Our results indicate that the classifier begins to reliably estimate these parameters for signals stronger than \(-5\mathrm{dB}\) SNR. Additionally, we analyzed detection performance at various configurations of \(BW\) and \(SF\), ultimately revealing that, to hinder such detection, legitimate users of LoRa must use lower \(SF\) and higher \(BW\).
Directions for future work include evaluating this classifier on a dataset of real signals captured using a software radio to provide real-world validation of this analysis, and an end-to-end implementation of the proposed jammer to explore the real-time performance of the design.
|
2310.00399 | Empirical Study on Transformer-based Techniques for Software Engineering | Many Transformer-based pre-trained models for code have been developed and
applied to code-related tasks. In this paper, we review the existing
literature, examine the suitability of model architectures for different tasks,
and look at the generalization ability of models on different datasets, and
their resource consumption.
We examine three very representative pre-trained models for code: CodeBERT,
CodeGPT, and CodeT5, and conduct experiments on the top-4 most targeted
software engineering tasks that we found in our literature survey: Code
Summarization, Bug Fixing, Bug Detection, and Code Search. In our study, we
showcase the capability of decoder-only models (CodeGPT) for specific
generation tasks under state-of-the-art evaluation metrics and contest the
common belief that the encoder-decoder architecture is optimal for
general-purpose coding tasks. Additionally, we found that the most frequently
used models are not necessarily the most suitable for certain applications and
the developers' needs are not adequately addressed by current research. As
well, we found that the benchmark and frequent dataset for Bug Fixing and Code
Summarization both fail to enable models to generalize onto other datasets for
the same task (the frequent dataset refers to the dataset with the highest
frequency used in literature other than the benchmark). We use statistical
testing to support our conclusions from experiments. Finally, CodeBERT is
highly efficient for understanding tasks, whereas CodeT5's efficiency for
generation tasks is in doubt, as the highest resource consumption does not
guarantee a consistent better performance on different metrics. We also discuss
the numerous practical issues in advancing future research on transformer-based
models for code-related tasks. | Yan Xiao, Xinyue Zuo, Lei Xue, Kailong Wang, Jin Song Dong, Ivan Beschastnikh | 2023-09-30T14:45:22Z | http://arxiv.org/abs/2310.00399v1 | # Empirical Study on Transformer-based Techniques for Software Engineering
###### Abstract
Many Transformer-based pre-trained models for code have been developed and applied to code-related tasks. In this paper, we review the existing literature, examine the suitability of model architectures for different tasks, and look at the generalization ability of models on different datasets, and their resource consumption.
We examine three very representative pre-trained models for code: CodeBERT, CodeGPT, and CodeT5, and conduct experiments on the top-4 most targeted software engineering tasks that we found in our literature survey: Code Summarization, Bug Fixing, Bug Detection, and Code Search. In our study, we showcase the capability of decoder-only models (CodeGPT) for specific generation tasks under state-of-the-art evaluation metrics and contest the common belief that the encoder-decoder architecture is optimal for general-purpose coding tasks. Additionally, we found that the most frequently used models are not necessarily the most suitable for certain applications and the developers' needs are not adequately addressed by current research. As well, we found that the benchmark and frequent dataset for Bug Fixing and Code Summarization both fail to enable models to generalize onto other datasets for the same task (the frequent dataset refers to the dataset with the highest frequency used in literature other than the benchmark). We use statistical testing to support our conclusions from experiments. Finally, CodeBERT is highly efficient for understanding tasks, whereas CodeT5's efficiency for generation tasks is in doubt, as the highest resource consumption does not guarantee consistently better performance on different metrics. We also discuss the numerous practical issues in advancing future research on transformer-based models for code-related tasks.
transformer-based pre-trained models, CodeBERT, CodeGPT, CodeT5, promises and perils
## I Introduction
The availability of large natural language corpora and advances in ML have led recent models to achieve extraordinary performance on Natural Language Processing (NLP) tasks. Transformer-based architectures [1], first introduced by Vaswani et al. in 2017, are among the most successful model variants in this field. Transformer-based models, like BERT (Bidirectional Encoder Representations from Transformers) and GPT (Generative Pre-trained Transformer), have revolutionized NLP tasks, including text classification, sentiment analysis, and language generation.
Given the abundance of large software code corpora, Transformer-based models have also rapidly gained traction in software engineering (SE) research [2], with hundreds of transformer-related papers published in top-tier SE conferences and journals in the past five years. In many instances, these works have reported state-of-the-art performance on a variety of SE tasks. Some example applications of transformer-based techniques include automated program repair [3, 4, 5], merge conflict resolution [6, 7], requirements engineering [8, 9, 10], code and comment generation [11, 12], code and machine translation [11, 13, 14], and more [2, 15]. Model structures, like encoder-only, decoder-only, and encoder-decoder [11], together with the different pre-training objectives, such as generative objectives and denoising objectives, also add to the diversity of work in this space.
The excitement around these transformer-based models, however, must be tempered with a careful assessment of their advantages and pitfalls. This is the focus of this paper.
In this paper, we take a step back and reflect on the copious amount of work that has been published in this area thus far. We study 282 papers published at 27 top conferences and journals during 2017-2022. We consider which models are being used in these papers, which SE applications they target, benchmarks that they use, and other key characteristics of this quickly growing body of work. We then closely look at the performance of the top models from the literature on the most popular applications and review the corresponding model generalizability and computational complexity. Throughout, we frame our discussion in terms of _promises_ and _perils_ to help position SE research that relies on transformer-based models on a firmer footing.
The closest related empirical studies of this rich research space have considered a fixed applications set [16, 17, 18], different evaluation measures [19] and interpretability [20] of BERT-related variants for Code Summarization, and a specific research object like Copilot [21]. Different from these studies, our review is more comprehensive, covering a wider range of papers published over a longer period. We identified three very representative pre-trained transformer-based models for code and the top-4 most popular applications. We re-implemented all three models across all four applications and managed to contest certain beliefs regarding model architectures in the literature with their performance measured using up-to-date evaluation metrics. In addition to performance and architecture analysis, we also analyzed models' generalizability on different datasets for each application, as well as their time
consumption. We additionally support our conclusions with statistical testing. Our study provides a more holistic view of the capabilities and limitations of transformer-based models in SE research. We have made all of our related code and data open-source1.
Footnote 1: [https://anonymous.4open.science/r/Transformer-Empirical-SE-63ED](https://anonymous.4open.science/r/Transformer-Empirical-SE-63ED)
In summary, our work makes the following contributions:
* We perform a comprehensive review of transformer-based research published during 2017-2022 and report on their key characteristics. For example, we find that the SE community lags other domains in adopting the latest techniques. And, we find that the four most popular applications of transformer-based models in SE are Code Summarization, Bug Fixing, Bug Detection, and Code Search, with 33, 29, 18, and 16 papers respectively. However, we also find that the current research overlooks some of the most critical applications that developers need.
* We find that CodeBERT is best suited for understanding tasks as evidenced by the increase in evaluation metrics as well as the lowest resource consumption. Besides, we also demonstrate that CodeGPT has promising performance for specific generation tasks under the state-of-the-art metric, which refutes the claims of the incapability of decoder-only models in the literature. Also, we argue that previous studies overlook the capability of encoder-only models (CodeBERT) to complete code generation tasks, and the belief that encoder-decoder architecture is optimal for general-purpose coding tasks may not be valid. We also discuss how SE researchers can go about selecting models based on their suitability for specific tasks and some non-compliance.
* We fill in a gap in understanding the generalizability of transformer-based models for SE. Our findings suggest that models trained on the benchmark and frequent dataset for Code Summarization and Bug Fixing generalize poorly to the other dataset. Nevertheless, with the experiments on mixing data from both datasets, we suggest the exploration of dataset pruning/selection techniques for future improvement.
* We consider the resource consumption of models and find that CodeBERT is highly efficient for understanding tasks, achieving the highest performance with the lowest resource. We question CodeT5's efficiency for generation tasks, as the highest resource consumption does not guarantee consistently better performance on different metrics. Therefore, when selecting transformer-based models, researchers and practitioners should carefully consider the performance and time complexity trade-offs for their specific application.
## II Transformer-based Models
The most important components of Transformer architecture are the encoder-decoder structure and attention mechanism, which resides in the Transformer blocks.
The encoder aims to extract important information from the input, and outputs the encoded representation. The encoded representation is then taken in as input by the decoder, and the decoder generates the output in an autoregressive manner [1]. Some variants of the Transformer model may contain only an encoder or only a decoder.
A Transformer has multiple layers, which are called Transformer blocks, and they serve as the building blocks for the encoder and decoder. The core component of a Transformer block is the attention mechanism, which is used to process the input to each Transformer block. Through the attention mechanism, a Transformer provides context for different tokens in the input sequence.
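To make this concrete, the following minimal sketch implements single-head scaled dot-product attention in PyTorch; it is our own illustration (the function and tensor names are ours), not code taken from any of the studied models.

```python
import math
import torch

def scaled_dot_product_attention(q, k, v, mask=None):
    # q, k, v: (batch, seq_len, d_model); mask: (seq_len, seq_len) of 0/1, or None.
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    if mask is not None:
        # Disallowed positions get -inf so that softmax assigns them zero weight.
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = torch.softmax(scores, dim=-1)  # how strongly each token attends to the others
    return weights @ v

q = k = v = torch.randn(1, 5, 64)  # toy input: one sequence of 5 tokens
print(scaled_dot_product_attention(q, k, v).shape)  # torch.Size([1, 5, 64])
```

Multi-head attention runs several such projections in parallel and concatenates the results.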
### _Pipeline for Transformer-based Pre-trained Models_
There have been many Transformer-based pre-trained models proposed over the past 5 years. They are large Transformer-based models trained on extensive amounts of unlabeled data with unsupervised objectives. The intention for developing pre-trained models is to obtain models with general and transferable knowledge in a certain domain [11], such as programming languages. This section presents the pipeline and variations of Transformer-based pre-trained models.
**Input:** The inputs of Transformers vary across different models. For natural language models like BERT [22], the input starts with a special token [CLS], and sentences are separated by the special separator token [SEP]. As illustrated in Figure 1, \(w_{1}\), \(w_{2}\),..., \(w_{i}\) represents the first sentence and \(w_{j}\), \(w_{j+1}\),..., \(w_{k}\) represents the second sentence.
Language models for code (the focus of this paper), such as CodeBERT [23], CodeGPT [16], CodeT5 [24], accept bimodal data instead, i.e., both natural language and code. \(w_{1}\), \(w_{2}\),..., \(w_{n}\) represents the natural language, and \(c_{1}\), \(c_{2}\),..., \(c_{m}\) represents the corresponding code. The two segments are separated by the [SEP] token, and [EOS] token denotes the end of input.
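As a concrete illustration of this bimodal format, the sketch below encodes an NL/code pair with the publicly released CodeBERT tokenizer (assuming the Hugging Face `transformers` library is installed; the example strings are ours). CodeBERT uses the RoBERTa-style equivalents of the [CLS]/[SEP]/[EOS] special tokens.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")

nl = "Calculates the area of a rectangle"
code = "def calculate_area(length, width): return length * width"

# The tokenizer inserts the special tokens automatically:
# <s> NL </s></s> code </s>
enc = tokenizer(nl, code)
print(tokenizer.convert_ids_to_tokens(enc["input_ids"]))
```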
There exist variants of language models for code which contain more information in their input, e.g., GraphCode
Fig. 1: Pipeline and Variations for Transformer-based Pre-trained Models
BERT [25] additionally includes variables in the input program, denoted by \(x_{1}\), \(x_{2}\),..., \(x_{k}\). Additional input combined with corresponding pre-training objectives will enhance models' knowledge regarding a certain aspect, e.g., code structure in the case of GraphCodeBERT.
```
# Calculates the area of a rectangle
def calculate_area(length, width):
    area = length * width
    return area
```
**Input Embedding:** Inputs in the correct format are used to generate input embeddings that are fed into the Transformer blocks. An input embedding typically consists of three parts: token embedding, segment embedding, and position embedding. For token embedding, there exist three different options: word embedding, subword embedding, and character embedding. Among the three, subword embedding is the most frequently used technique in Transformer-based models since it can deal with out-of-vocabulary (OOV) issues. There are two common subword tokenization methods, WordPiece [26] and Byte Pair Encoding [27]. WordPiece is used in BERT and CodeBERT, while Byte Pair Encoding is used in CodeGPT and CodeT5.
Segment embedding is used to distinguish which segment of input a specific token belongs to. For example, in BERT, segment embedding could be 0 for words in the first sentence, and 1 for words in the second sentence.
Lastly, position embedding encodes the positional information of words. There are different options available. A traditional one is sequential embedding, for example, using _sine_ and _cosine_ functions to capture the positional information [28]. To capture structure-related positional information, tree positional embedding [29] can be used. Sequential relative positional embedding [30] and tree relative positional embedding [31] capture the relative position of the input elements instead of the absolute position, which is more effective in sequence-based tasks.
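For illustration, a minimal sketch of the traditional sequential (sine/cosine) embedding [28] is shown below; the function name and the toy dimensions are our own choices.

```python
import torch

def sinusoidal_positions(seq_len: int, d_model: int) -> torch.Tensor:
    # Even dimensions use sine and odd dimensions use cosine, with wavelengths
    # forming a geometric progression, as in the classic sequential encoding.
    pos = torch.arange(seq_len, dtype=torch.float32).unsqueeze(1)
    i = torch.arange(0, d_model, 2, dtype=torch.float32)
    angles = pos / torch.pow(10000.0, i / d_model)
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(angles)
    pe[:, 1::2] = torch.cos(angles)
    return pe

print(sinusoidal_positions(seq_len=128, d_model=512).shape)  # torch.Size([128, 512])
```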
**Model Architecture:** There are three different model structures for Transformer-based models: encoder-only model, decoder-only model, and encoder-decoder model. The most representative pre-trained language models with such structures are BERT, GPT, and T5 for natural language, and CodeBERT, CodeGPT, and CodeT5 for programming languages.
BERT [22], Bidirectional Encoder Representations from Transformers, is an encoder-only model pre-trained on BookCorpus and Wikipedia, with the objective of masked language modeling and next sentence prediction [22]. There are multiple variants of BERT used for SE-related tasks, such as CodeBERT [23] and GraphCodeBERT [25], for type inference, automated program repair, etc.
GPT [32], Generative Pre-trained Transformer, is a decoder-only autoregressive language model pre-trained on BookCorpus, with the generative objective of predicting the next word given some previous words [32]. GPT variants that are applied to SE-related tasks, such as code generation, include GPT-C [33] and CodeGPT [16].
T5 [34] is an encoder-decoder model pre-trained on the C4 (Colossal Clean Crawled Corpus) dataset with a "span corruption" objective [34]. T5's variants include CodeT5 [24] and CooIt5 [4] which are suitable for SE tasks such as code translation and code summarization.
CodeBERT, CodeGPT, and CodeT5 have the same model architectures as BERT, GPT, and T5, respectively (see Figure 1). Apart from the encoder and decoder components, the differences between the model architectures lie in the multi-head attention mechanism and the input/output. For CodeBERT and CodeT5, the attention mechanism is bidirectional, allowing the model to capture context from both directions and to better capture long-range dependencies. In contrast, the attention mechanism in CodeGPT is unidirectional: the model only attends to past inputs in the sequence, avoiding potential leakage of future information.
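The bidirectional/unidirectional distinction boils down to the attention mask. A sketch (ours) of the two mask types for a five-token sequence:

```python
import torch

seq_len = 5
# Bidirectional (CodeBERT, CodeT5 encoder): every position may attend to all positions.
bidirectional = torch.ones(seq_len, seq_len)
# Unidirectional/causal (CodeGPT): position i attends only to positions <= i,
# so no information from future tokens can leak in during generation.
causal = torch.tril(torch.ones(seq_len, seq_len))
print(causal)
```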
The output of CodeBERT is a contextualized representation of the input, which can be utilized to perform, e.g., classification tasks. For CodeT5, after the encoder has generated the contextualized representation of an input, the decoder combines it with the output generated at previous steps to produce the next token. CodeGPT generates output directly from the input combined with the output generated at previous steps. For both CodeT5 and CodeGPT, the outputs are token probabilities that can be converted into the corresponding generated sequence.
**Pre-training Objectives:** A pre-trained model for code may have multiple objectives, which constitute a hybrid objective function and contribute to better code understanding [23, 24]. The three pre-trained models that we investigate in this paper have different pre-training objectives. However, Transformer-based pre-trained models for code are mostly pre-trained on different subsets of the same dataset, CodeSearchNet [35], unlike the pre-trained models for natural language.
_CodeBERT:_ CodeBERT is pre-trained with the objectives of Masked Language Modeling (MLM) and Replaced Token Detection (RTD) [23]. MLM aims to predict the masked out token in both NL and PL sections of the program. Figure 3(a) is an example of MLM. The objective is to predict the original token for [MASK]. RTD aims to determine whether a token is the original one or a replaced one. For example, if the generator mutates the original program (Figure 2), to Figure 3(b), the discriminator should recognize that the "length" is a replaced token instead of the original token.
_CodeGPT:_ CodeGPT is pre-trained with the objective to predict the next token, given previous context. Figure 4(a) is the illustration of the pre-training objective.
_CodeT5:_ CodeT5 has four pre-training objectives. The first one is Masked Span Prediction (MSP). It can be viewed as a variation of MLM, and it allows masking of multiple consecutive tokens. Besides MSP, CodeT5 has also introduced two additional tasks: Identifier Tagging (IT) and Masked Identifier Prediction (MIP) to enable the model to learn code-specific
Fig. 2: Original Program.
structural information. IT aims to determine whether a token is an identifier and MIP performs obfuscation on the PL part of the program and aims to predict the masked out identifiers, as shown in Figure 5(a) and 5(b). The last pre-training objective of CodeT5 is bimodal dual generation, which aims to perform NL\(\rightarrow\)PL generation and PL\(\rightarrow\)NL generation simultaneously, as illustrated in Figure 5(c).
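To illustrate what such a corruption objective looks like in practice, the sketch below produces an MLM-style training example from a tokenized program; the helper function, the 15% masking rate (the value used by BERT [22]), and the toy input are our own illustrative choices.

```python
import random

def mask_tokens(tokens, mask_rate=0.15, mask_token="[MASK]"):
    # Returns the corrupted sequence plus the positions/tokens the model must recover.
    corrupted, targets = [], {}
    for i, tok in enumerate(tokens):
        if random.random() < mask_rate:
            corrupted.append(mask_token)
            targets[i] = tok  # prediction target at position i
        else:
            corrupted.append(tok)
    return corrupted, targets

tokens = "def calculate_area ( length , width ) : return length * width".split()
corrupted, targets = mask_tokens(tokens)
print(corrupted, targets)
```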
The different pre-training objectives, together with different model architectures, enable the models to be suitable for different tasks.
**Fine-tuning Techniques:** Fine-tuning is a widely used technique in Transformer-based SE research. It tunes a pre-trained model, which has been trained with unsupervised objectives, on a labeled dataset to achieve high performance on downstream tasks. Benefiting from the general knowledge learned during the pre-training phase, fine-tuning requires much less data than training a model from scratch.
Despite the prevalence of fine-tuning, there exist some variants, including zero-shot, few-shot learning, and prompt tuning. Zero-shot learning applies pre-trained models without any fine-tuning and few-shot learning simulates data scarcity by providing very limited data examples. Prompt tuning transforms the downstream tasks into a similar format as the pre-training tasks using prompts. We summarize the characteristics and respective advantages of the different tuning methods in Table I.
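As a minimal sketch of the standard fine-tuning recipe (using the Hugging Face `transformers` API; the labels, batch, and learning rate are illustrative placeholders, not the settings of our experiments):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/codebert-base", num_labels=2)  # e.g., buggy vs. non-buggy

batch = tokenizer(["return a + b", "while True: pass"],
                  padding=True, return_tensors="pt")
labels = torch.tensor([0, 1])  # toy labels for illustration only

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss = model(**batch, labels=labels).loss  # supervised loss on the labeled data
loss.backward()
optimizer.step()
print(float(loss))
```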
## III Methodology
Our main objective in this paper is to provide a comprehensive review of the transformer-based techniques proposed to tackle SE problems and published in top SE venues. We then summarize the promises and potential pitfalls of these techniques to help researchers and practitioners better understand their capabilities and limitations.
### _Key Words for Literature Review_
To generate keywords for our comprehensive literature review, we first examined papers from the top four software engineering conferences between 2019 and 2022: 1) ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE), 2) International Conference on Software Engineering (ICSE), 3) International Conference on Automated Software Engineering (ASE), and 4) International Symposium on Software Testing and Analysis (ISSTA). Multiple types of pre-trained models are mentioned in these papers, including the Vanilla Transformer, BERT, GPT, T5, and their variants. We recorded all the models mentioned in the papers, as well as their popular variants, resulting in a comprehensive list of 17 keywords: Transformer, BERT, GPT, T5, CodeBERT, CodeBERTa, GraphCodeBERT, RoBERTa, CuBERT, C-BERT, BERTOverflow, GPT-C, CodeGPT, PLBART, BART, IntelliCode, and CodeT5.
### _Identify Related Literature_
We identify 27 relevant SE conferences and journals with core ranking A* or A, as shown in Table II. Using the
TABLE I: Fine-tuning and Variations

| Tuning Methods | Characteristics | Advantage |
| --- | --- | --- |
| Fine-tuning | Requires high-quality labelled dataset | Task-specific & flexible |
| Zero/Few-shot Learning | Simulates data scarcity | Better generalization ability [36] |
| Prompt Tuning | Augments input with prompts | Fully utilizes pre-trained models [15, 37] |
Fig. 4: Objectives of CodeGPT.
Fig. 5: Objectives of CodeT5.
Fig. 3: Objectives of CodeBERT.
keywords mentioned above, we conducted an extensive search for Transformer-based papers in those 27 conferences and journals published between 2017 and December 2022 2.
Footnote 2: We excluded ICSE'23 since the PDFs of most accepted papers were unavailable in December.
We locate the keywords in the papers to confirm that they are about applying Transformer-based pre-trained models to the SE domain, and exclude papers in which the keywords appear only by coincidence, e.g., a mathematical transformer, a Java library, or author names. Note that Transformer-related papers are unlikely to be missed using the keyword list, as papers usually mention or reference the models in the list even when they use variants of them. Finally, we identified 282 relevant papers and 57 different applications.
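A minimal sketch of this keyword screening (the keyword list is abbreviated and the example texts are invented):

```python
import re

# Abbreviated keyword list; the full list contains 17 entries (Section III-A).
KEYWORDS = ["Transformer", "BERT", "GPT", "T5", "CodeBERT", "CodeT5"]

def mentions_keyword(paper_text: str) -> bool:
    # Word-boundary matching avoids hits inside longer identifiers.
    pattern = r"\b(" + "|".join(map(re.escape, KEYWORDS)) + r")\b"
    return re.search(pattern, paper_text) is not None

print(mentions_keyword("We fine-tune CodeBERT for bug detection."))  # True
# Coincidental hits like the one below still require manual exclusion.
print(mentions_keyword("The Transformer class in this Java library maps XML."))  # True
```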
To extract information from the selected papers, we primarily focused on the pipelines used by the authors to incorporate transformers into their research. This involved studying the applications, datasets, pre-processing, input, architecture, training, and output of the transformers used. We also searched through all 282 papers using the 57 summarized application names and recorded the number of papers for each application.
## IV Experiments
We aim to answer the following research questions in our empirical study.
### _Research Questions_
**RQ1. Literature, Popular Applications, and Developers' Needs:**_What are the characteristics of papers utilizing Transformer models, such as the yearly publication trends and the extent to which different applications have been explored?_
To summarize the publication characteristics, we have conducted a thorough review of 282 papers mentioned in Section III. Based on these papers, we have analyzed and summarized all the applications related to transformer-based models. We have examined why certain applications are more popular than others and whether they are relevant to the needs of SE developers. Through this analysis, we aim to shed light on the most important applications for SE research, and help the community focus on the areas that are most relevant to developers' needs.
**RQ2. Applications' Performance:**_What is the performance of the three base models used for the top-4 popular applications?_
To investigate the performance of various models on different tasks, we conducted a comparative analysis of three representative models: CodeBERT, CodeGPT, and CodeT5. Specifically, we evaluated their effectiveness on the top-4 most popular applications: Code Summarization, Bug Fixing, Bug Detection, and Code Search. We found that previous claims about model performance and architectures can be misleading when up-to-date metrics for specific tasks are not used, and we support our conclusions about model suitability with the non-parametric Wilcoxon signed-rank test. We also identified the most commonly used models for each task by reviewing the literature and checked for consistency with our findings.
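A sketch of how such a paired, non-parametric comparison can be run with SciPy (the per-example scores below are placeholders, not our measurements):

```python
from scipy.stats import wilcoxon

# Paired per-example scores of two models on the same test set (placeholders).
scores_model_a = [31.2, 28.7, 35.1, 30.4, 29.8, 33.0]
scores_model_b = [29.9, 27.5, 34.8, 28.9, 29.1, 31.2]

stat, p_value = wilcoxon(scores_model_a, scores_model_b)
print(f"statistic={stat:.2f}, p={p_value:.4f}")  # reject H0 if p < 0.05
```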
**RQ3. Generalization:**_How well do the base models trained on commonly-used benchmark datasets for each application generalize to frequent datasets, and conversely, how well do models trained on frequent datasets perform on the benchmark datasets?_
To evaluate the generalization ability of the models, we conducted a search for the datasets used in all papers related to the top-4 applications. For these applications, we adopt the dataset from CodeXGLUE [16] as the benchmark dataset, as CodeXGLUE is widely used by Transformer-based techniques for SE applications. The frequent dataset is defined as the dataset with the highest _frequency_ of use other than the benchmark dataset. We follow the common train/test splits used in the literature. We found that the benchmark datasets were the most commonly used in all applications except for Bug Fixing, where more researchers utilized datasets such as Defects4J [38] than the provided benchmark datasets (BFPsmall and BFPmedium [39]). However, for Code Search and Bug Detection, only a few researchers used non-benchmark datasets, which makes it less meaningful to investigate generalization on those datasets. Therefore, we focused on the benchmark and frequent datasets for Code Summarization and Bug Fixing. We then evaluate the models trained on the benchmark dataset on the test set of the frequent dataset and vice versa to examine their generalization capabilities. Statistical testing is performed to safeguard our conclusions.
**RQ4. Resource Consumption:**_What is the resource consumption of inference for each base model and application?_
We answer this question by comparing and analyzing the average inference time and memory usage of various models
TABLE II: Statistics for Papers Published in Top-tier Venues.

| Venues | 2019 | 2020 | 2021 | 2022 | Sum |
| --- | --- | --- | --- | --- | --- |
| ESEC/FSE | 0 | 6 | 14 | 16 | 36 |
| ICSE | 0 | 4 | 15 | 21 | 40 |
| ASE | 1 | 4 | 13 | 15 | 33 |
| ISSTA | 0 | 3 | 2 | 7 | 12 |
| TSE | 0 | 1 | 9 | 12 | 22 |
| TOSEM | 0 | 0 | 1 | 8 | 9 |
| ESE | 0 | 0 | 2 | 6 | 8 |
| PLDI | 0 | 0 | 7 | 0 | 7 |
| OOPSLA | 0 | 1 | 0 | 0 | 1 |
| ISSRE | 0 | 2 | 7 | 0 | 9 |
| ESEM | 1 | 1 | 0 | 0 | 2 |
| SANER | 0 | 0 | 4 | 8 | 12 |
| EASE | 0 | 0 | 1 | 1 | 2 |
| IST | 0 | 0 | 4 | 13 | 17 |
| JSS | 0 | 1 | 1 | 8 | 10 |
| ICPC | 0 | 0 | 3 | 7 | 10 |
| RE | 0 | 4 | 13 | 2 | 19 |
| CAiSE | 0 | 1 | 2 | 1 | 4 |
| ICSME | 1 | 4 | 11 | 0 | 16 |
| IST | 0 | 0 | 1 | 1 | 2 |
| MSR | 0 | 1 | 4 | 4 | 9 |
| ICSA | 0 | 0 | 0 | 1 | 1 |
| ECSA | 0 | 1 | 0 | 0 | 1 |
| **Sum** | 3 | 34 | 114 | 131 | 282 |

Note that POPL, SEAMS, TOPLAS, and FM had 0 papers for all years.
on all 4 applications across different datasets.
### _Experimental Setup_
**Pre-trained Models.** We choose CodeBERT, CodeGPT, and CodeT5 as the pre-trained models with the consideration of representativeness in model architectures and their wide presence in the literature.
_CodeBERT_ is an encoder-only model which has the same model architecture as RoBERTa [40]. It is pre-trained on CodeSearchNet and is capable of processing both source code and natural language text. The model we use is CodeBERT-base, which has 125M parameters3.
Footnote 3: [https://huggingface.co/microsoft/codebert-base](https://huggingface.co/microsoft/codebert-base)
_CodeGPT_ is a decoder-only model which has the same model architecture as GPT-2 [32]. The model we use in our paper is CodeGPT-small-java-adaptedGPT24 with 124M parameters, which is pre-trained on the Java corpora from the CodeSearchNet dataset and uses the GPT-2 model as the starting point.
Footnote 4: [https://huggingface.co/microsoft/CodeGPT-small-java-adaptedGPT2](https://huggingface.co/microsoft/CodeGPT-small-java-adaptedGPT2)
_CodeT5_ is a variant of T5 [34], and achieves state-of-the-art performance for many code intelligence tasks. It views all tasks through a sequence-to-sequence paradigm, and can solve both code understanding and generation tasks. CodeT5 is pre-trained on CodeSearchNet and an additional C/C# dataset [24]. We use CodeT5-base5 that has 220M parameters.
Footnote 5: [https://huggingface.co/Salesforce/codet5-base](https://huggingface.co/Salesforce/codet5-base)
**Datasets.** Table III shows the benchmark datasets we use in the experiments for the four applications.
_Code Summarization:_ CodeSearchNet [35] is a dataset which consists of \(<code,comment>\) pairs from open source projects. Code refers to the code snippet for a method, and comment refers to the description of the code, for example in Javadoc format.
_Bug Fixing:_ This dataset is provided by Tufano et al. [39]. It contains method-level pairs of the buggy and corresponding fixed code from thousands of GitHub Java repositories. The pairs are called bug-fix pairs, BFPs in short. Based on the code length, Tufano et al. provided two datasets: BFPsmall, which has a code length below 50; and BFPmedium, which has a length between 50 and 100.
_Bug Detection:_ This dataset is provided by Zhou et al. [41]. It contains 27k+ C code snippets from two open-source projects, FFmpeg and QEMU, of which 45% are defective.
_Code Search:_ This dataset is the Python version of the CodeSearchNet [35] dataset.
**Evaluation Metrics.** Different metrics are used for the four applications studied in this paper.
_Code Summarization:_ Following previous works [42, 23], we use BLEU [43], METEOR [44], and ROUGE-L [45] to measure the quality of the summary generated in the Code Summarization task. Each of them considers different aspects.
BLEU is based on the n-gram precision between the generated summary and the reference [43]:
\[BLEU=BP\cdot\exp\Big(\sum_{n=1}^{N}w_{n}\log p_{n}\Big) \tag{1}\]
where _BP_ is the brevity penalty that penalizes short summaries, \(p_{n}\) refers to the n-gram precision, and \(w_{n}\) is the corresponding weight.
METEOR focuses on the harmonic mean of unigram precision and recall [44]:
\[METEOR=(1-P)\cdot F_{mean} \tag{2}\]
where \(P\) is a penalty for differences in word order between the generated summary and the reference, and more weight is put on recall when calculating the harmonic mean \(F_{mean}\). METEOR allows exact, stem, and synonym matches.
ROUGE-L is based on the longest common sub-sequence (LCS) between the generated summary and the reference [45]:
\[ROUGE-L=\frac{(1+\beta^{2})R_{lcs}P_{lcs}}{R_{lcs}+\beta^{2}P_{lcs}} \tag{3}\]
where \(R_{lcs}\) measures the proportion of the LCS length relative to the length of the reference and \(P_{lcs}\) measures that to the length of the generated summary, and ROUGE-L calculates the harmonic mean of them.
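As a concrete illustration, these metrics can be computed with common Python packages; the sketch below assumes the `nltk` and `rouge-score` packages and uses toy summaries of ours (METEOR is also available via `nltk.translate.meteor_score`, but it needs WordNet data, so we omit the call here).

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer

reference = "calculates the area of a rectangle".split()
candidate = "computes the area of a rectangle".split()

# BLEU: n-gram precision with a brevity penalty; smoothing avoids zero scores
# when higher-order n-gram matches are missing in short summaries.
bleu = sentence_bleu([reference], candidate,
                     smoothing_function=SmoothingFunction().method1)

# ROUGE-L: F-measure over the longest common subsequence.
scorer = rouge_scorer.RougeScorer(["rougeL"])
rouge_l = scorer.score(" ".join(reference), " ".join(candidate))["rougeL"].fmeasure

print(f"BLEU = {bleu:.3f}, ROUGE-L = {rouge_l:.3f}")
```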
_Bug Fixing:_ We use BLEU-4, Accuracy, and CodeBLEU to measure the quality of the repaired code, where accuracy considers only exact matches, and CodeBLEU [46] additionally considers aspects such as code structure:
\[CodeBLEU=\alpha\cdot BLEU+\beta\cdot BLEU_{weight}+\gamma\cdot Match_{ast}+\delta\cdot Match_{df} \tag{4}\]
which involves n-gram, weighted n-gram, AST, and data-flow matches.
The definition of accuracy adopted here is the same as Eq. (5) below, except that \(y_{i}\) and \(\hat{y}_{i}\) refer to the buggy and fixed code instead.
_Bug Detection:_ We use Accuracy as the evaluation metric for bug detection, following the work of CodeT5 [24]. Accuracy helps to measure the ability of the model to distinguish buggy code from normal code [37],
\[Accuracy=\frac{\sum_{i=1}^{|D|}1(y_{i}==\hat{y}_{i})}{|D|} \tag{5}\]
where \(D\) refers to the dataset, and \(|D|\) refers to its size, \(y_{i}\) and \(\hat{y}_{i}\) refer to the ground truth label and predicted label, respectively. The function in the numerator is \(1\) if the two labels are equal, and 0 otherwise.
_Code Search:_ We use Mean Reciprocal Rank (MRR) to measure the ability of the model to retrieve relevant code given a natural language query. It calculates the multiplicative
TABLE III: Datasets.

| Task | Fine-tuning Dataset | Train/Valid/Test |
| --- | --- | --- |
| Code Summarization | Java subset of CodeSearchNet [35] | 164,932 / 5,183 / 10,955 |
| Bug Fixing | BFPsmall [39] | 46,680 / 5,835 / 5,835 |
| Bug Fixing | BFPmedium [39] | 52,964 / 5,835 / 5,835 |
| Bug Detection | Zhou et al. [41] | 21,854 / 2,732 / 2,732 |
| Code Search | Python subset of CodeSearchNet [35] | 251,820 / 9,604 / 19,210 |
inverse of the rank of the correctly retrieved code snippet and is defined as below [47].
\[MRR=\frac{1}{|Q|}\sum_{i=1}^{|Q|}\frac{1}{rank_{i}} \tag{6}\]
where \(rank_{i}\) refers to the rank of the first correctly retrieved code snippet and \(|Q|\) represents the number of queries.
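MRR is straightforward to compute once the rank of the first correct result is known for each query; a minimal sketch (the ranks are illustrative):

```python
def mean_reciprocal_rank(ranks):
    # ranks[i] is the 1-based rank of the first correctly retrieved snippet for query i.
    return sum(1.0 / r for r in ranks) / len(ranks)

print(mean_reciprocal_rank([1, 3, 2, 1]))  # 0.7083...
```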
**Configurations.** We conducted all experiments on an NVIDIA RTX A4000 GPU with CUDA version 11.6 and 16GB of VRAM. Implementations of all models are based on the PyTorch framework.
## V Results
In this section, we aim to answer the research questions mentioned in Section IV by discussing the promises and perils of using transformer-based models for SE research.
### _RQ1. Literature, Popular Applications, and Developers' Needs_
**On Papers Published.** Table II presents the number of papers on Transformer-based techniques published in top-tier SE conferences or journals during 2019-2022. Through our search described in Section III, we found a total of 3 papers published in 2019, 34 papers in 2020, 114 papers in 2021, and 131 papers in 2022. The increasing trend in the numbers indicates a growing interest in using Transformer-based techniques to solve SE-related problems.
Although the Transformer architecture was proposed in 2017 [1], it was rarely used by SE researchers until 2020. Besides, even though transformer models are more widely studied over time, there is still room for improvement. For example, some SE studies [48, 49] only compare their models to Vanilla Transformers instead of including additional and more advanced models such as T5, nearly three years after the proposal of these new models. This shows that the SE research community needs to explore breakthrough techniques from other domains and investigate their application to SE-related tasks. Additionally, researchers should stay informed and attentive to the latest methodologies, such as T5, CodeT5 and so on.
**Peril 1: SE researchers using transformer-based models should pay attention to recent breakthroughs in domains like NLP and stay attentive to the latest methodologies.**
**On Applications.**
We collected an exhaustive list of applications explored by Transformer-related papers in ICSE/FSE/ASE/ISSTA by 2022. We further examined the frequency of these applications across the papers from the venues in Table II. Part of the application statistics is shown in Figure 6. Out of a total of 282 papers, we found that Code Summarization appeared 33 times, Bug Fixing 29 times, Bug Detection 18 times, and Code Search 16 times, making them the top four most popular applications. Since they represent the community's research focus, we chose to conduct experiments on these four applications for our research questions.
_Code Summarization_[50]: This task generates a natural language summary for a code snippet to aid programmers' understanding.
_Bug Fixing_[39]: This task aims to fix buggy programs automatically.
_Bug Detection_[41]: This task performs binary classification - determining if a program is buggy or not.
_Code Search_[35]: This task searches for relevant code snippets given their natural language descriptions.
The four tasks belong to different categories (Table IV), making them representative of SE tasks researchers study and appropriate for our empirical study. Furthermore, due to their specific task implementation, bug detection and code search tend to be more focused on understanding tasks, while code summarization and bug fixing are more generation-oriented. Thus, the optimal performance on the tasks is achieved by different types of models, as discussed in RQ2.
Apart from experimenting with the most representative and popular applications, we have also identified the least-studied applications. Some of the applications are too specific, e.g., taint propagation detection and defect inheritance reduction, leading to fewer researchers studying them. Some are related to the more popular applications, e.g., algorithm classification
TABLE IV: Top-4 Popular Applications

| Task Name | Nature | Category |
| --- | --- | --- |
| Code Summarization | Code \(\rightarrow\) NL | Generation |
| Bug Fixing | Code \(\rightarrow\) Code | Generation |
| Bug Detection | Code \(\rightarrow\) Class (0/1) | Understanding |
| Code Search | NL \(\rightarrow\) Code | Understanding |
Fig. 6: Frequency of Applications. We only list applications with frequency \(\geq\) 5.
can be seen as a derivative of code summarization, as the core idea of both applications is to understand code intent. One potential reason for other applications being less well-studied is that there are no commonly used benchmarks.
### _RQ2. Applications' Performance_

Table VI summarizes the
most frequently used pre-trained model, as well as the best-performing model for each task.
We can see that the most frequently used model is also the best-performing model for Bug Detection and Code Search, and partially for Bug Fixing. However, this is not the case for Code Summarization.
For Code Summarization, CodeT5 is a more advanced pre-trained model with an encoder-decoder architecture, just like the Vanilla Transformer. CodeGPT is a decoder-only model with a different architecture. Previous studies [16, 24] collectively demonstrate that CodeT5 significantly outperforms the Vanilla Transformer on this task, and our results show that CodeGPT is competitive with CodeT5 under the state-of-the-art evaluation metric. Thus, the community should watch for ML and SE advancements and integrate advanced models to achieve optimal results, instead of defaulting to the earliest or most well-known models.
For Bug Fixing, it is worth noting that the literature commonly believes that the encoder-decoder models are more suitable for code generation tasks [34, 59, 60], yet many papers have opted for encoder-only models (CodeBERT) without proper justification or comparison to encoder-decoder models.
**Peril 4:**_Encoder-only models (such as CodeBERT) are more suitable for understanding tasks, and decoder-only models (such as CodeGPT) and encoder-only models (such as CodeBERT) can also outperform encoder-decoder models (such as CodeT5) under different metrics for generation tasks. Previous claims regarding the incapability of decoder-only models and the optimality of encoder-decoder models do not hold. Also, the most frequently used model for generation tasks is the Vanilla Transformer (18 out of 33), indicating that the community should take care to select the most suitable model on a task-by-task basis._
### _RQ3. Generalization_
In this research question, we explore how well the base models trained on the commonly used benchmark dataset for each application generalize to the frequent dataset, and conversely, how well models trained on the frequent datasets perform on the benchmark datasets. Table VII presents the benchmark and frequent datasets used in our study. For Code Summarization, the frequent dataset was used 6 times (compared to 11 times for the benchmark dataset), while for Bug Fixing, the frequent dataset was used 5 times (compared to 4 times for the benchmark dataset). To ensure a valid comparison, we pre-processed the frequent datasets into the same format as the benchmark datasets. For example, patches in Defects4J were processed into bug-fix pairs.
To assess the generalization ability of the models trained on the benchmark and frequent datasets, we conducted experiments where we trained each model on the benchmark dataset and then evaluated it on the test set of the frequent dataset. This enabled us to evaluate whether the knowledge that the models learned from the benchmark dataset is transferable to other datasets or if it is only useful for the benchmark dataset. We also evaluated the performance of the model trained on the frequent dataset on the test set of the benchmark dataset and compared it to the performance of the model trained on the benchmark dataset itself.
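Schematically, the cross-dataset protocol looks as follows; `train` and `evaluate` are placeholders for the task-specific pipelines, not real APIs:

```python
def train(model_name, train_split):
    # Placeholder: fine-tune `model_name` on `train_split` and return the model.
    return (model_name, train_split)

def evaluate(model, test_split):
    # Placeholder: return an evaluation score of `model` on `test_split`.
    return 0.0

datasets = {"benchmark": "CodeXGLUE", "frequent": "most-used non-benchmark"}
for src, src_name in datasets.items():
    model = train("CodeBERT", src_name)
    for tgt, tgt_name in datasets.items():
        print(f"{src} model on {tgt}:", evaluate(model, tgt_name))
```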
**Code Summarization.** Table VIII presents the performance of various models on different datasets and settings. Note that "B" refers to the benchmark dataset, "F" refers to the frequent dataset, and "Mix-Bench" consists of the benchmark dataset combined with randomly sampled data from the frequent dataset (the sampled data has roughly the same size as the benchmark dataset). "B model on B" refers to the performance of the model trained on the benchmark dataset evaluated on the corresponding testing set, "F model on B" refers to the performance of the model trained on the frequent dataset evaluated on the testing set of the benchmark dataset, etc.
When evaluating the models trained on either the benchmark or the frequent dataset on the test set of the other dataset, we found that neither generalizes well (refer to columns "B model on B", "F model on B", and "B model on F" in Table VIII). For instance, when evaluating CodeBERT trained on the benchmark dataset on the frequent dataset, its BLEU-4 score is only 17.61, which is significantly lower than the BLEU-4 score of 32.20 achieved by the model trained on the frequent dataset itself. Across all models and evaluation metrics, the B model on B significantly outperforms the F model on B, and the F model on F significantly outperforms the B model on F. Thus, both the benchmark and the frequent dataset fail to enable models to generalize to other datasets.
Therefore, we investigated if combining data from both datasets could enable the model to learn more diverse knowledge.
The performance of the models trained on the Mix-Bench dataset and evaluated on the benchmark dataset is shown in the "Mix-Bench model on B" column of Table VIII. Through statistical testing, we found that only CodeBERT's BLEU-4 and ROUGE-L scores improved significantly. The improve
TABLE VI: Most Frequently Used Model vs. Best Performing Model.

| Task Name | Most Frequent Model | Best Performing Model |
| --- | --- | --- |
| Code Summarization | Vanilla Transformer (18/33) | CodeGPT/CodeT5 |
| Bug Fixing | CodeBERT (13/29) | CodeBERT/CodeT5 |
| Bug Detection | CodeBERT (8/18) | CodeBERT |
| Code Search | CodeBERT (7/16) | CodeBERT |
TABLE VII: Frequency of Datasets.

| Task Name | Benchmark Dataset | Frequent Dataset |
| --- | --- | --- |
| Code Summarization | CodeSearchNet (11) | LeClair et al. [61] (6) |
| Bug Fixing | BFPmedium (4) | Defects4J, etc. (5) |
ments in CodeBERT's METEOR and CodeT5's BLEU-4 are insignificant. Moreover, the additional training data led to a significant performance drop for CodeGPT across all metrics, and for CodeT5 on METEOR. Therefore, we can conclude that including more diverse data in the training dataset does not necessarily increase model generalization ability. We thus suggest that future work look into utilizing dataset pruning/selection techniques to improve dataset quality, which in turn will increase model generalization ability.
We observed that adding information from the frequent dataset improved the performance of CodeBERT on the benchmark dataset. However, since the mixed dataset is twice as large as the benchmark dataset, the comparison may not be fair. To address this, we created the "Mix-Half" dataset, which has approximately the same size as the benchmark dataset. Its performance is presented in the last two columns of Table VIII.
**Bug Fixing.** Table IX presents the results of evaluating models trained on the Benchmark/Frequent datasets on the Frequent/Benchmark test sets for Bug Fixing. With statistical testing, we found that the benchmark model's performance on the benchmark dataset is significantly higher than the frequent model's performance on the benchmark dataset, across all three models and different metrics. The same holds true for the frequent model's performance on the frequent dataset versus the benchmark model's performance on the frequent dataset. Thus, we conclude that the benchmark and frequent datasets for Bug Fixing also fail to enable models to generalize to other datasets.
### _RQ4. Resource Consumption_
In this research question, we investigate the resource consumption of different models across tasks and datasets. Resource consumption becomes important if a model is deployed and concurrently used by many users. We focus on resource consumption during the inference phase for this reason. Table X presents the average inference time and memory consumption for the three models across tasks and datasets.
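For reference, a sketch of how average inference time and peak GPU memory can be measured in PyTorch (the model is a stand-in for a fine-tuned model, and the `torch.cuda` calls assume a CUDA device):

```python
import time
import torch

model = torch.nn.Linear(768, 2).cuda().eval()  # stand-in for a fine-tuned model
x = torch.randn(1, 768, device="cuda")

torch.cuda.reset_peak_memory_stats()
torch.cuda.synchronize()
start = time.perf_counter()
with torch.no_grad():
    for _ in range(100):
        model(x)
torch.cuda.synchronize()  # wait for queued GPU kernels before stopping the clock
print("avg inference time (s):", (time.perf_counter() - start) / 100)
print("peak memory (MB):", torch.cuda.max_memory_allocated() / 2**20)
```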
We conclude that CodeBERT is highly efficient for understanding tasks like Bug Detection and Code Search, achieving the highest performance while consuming the least resources. Additionally, significantly more resources are required for CodeBERT to perform generation tasks than understanding tasks.
For generation tasks like Code Summarization and Bug Fixing, while CodeT5 demonstrates superior performance in certain metrics, it also exhibits an increase in resource consumption attributable to the model's complexity. This raises questions regarding the efficiency of CodeT5 in generation tasks, especially given its higher resource consumption and subpar performance observed in some experiments employing more targeted and contemporary metrics, such as CodeBLEU and METEOR. Additionally, CodeT5 consistently utilizes more memory compared to CodeBERT and CodeGPT across all tasks, a consequence of its larger parameter size, standing at 220M.
Besides, contrasting the time consumption on Benchmark (S) and Benchmark (M) for Bug Fixing, which differ in code length, we can clearly see that the time complexity of a model on a given task increases as the input complexity increases.
_Promise 2: CodeBERT is the most efficient model for code understanding tasks, achieving the highest performance with the least resources._
TABLE IX: Generalization Performance of Models on Bug Fixing (BLEU-4/Accuracy/CodeBLEU).

| Bug Fixing | Benchmark (M) | Frequent | B model on F | F model on B |
| --- | --- | --- | --- | --- |
| CodeBERT | 88.60/10.10/88.40 | 97.61/92.58/92.73 | 13.29/18.96 | 88.04/05/81.45 |
| CodeGPT | 88.56/12.40/88.09 | 97.16/92.59/92.4 | 14.55/16.97 | 8.41/100/88.91 |
| CodeT5 | 88.56/13.65/86.24 | 90.04/05.03/97.21 | 81.60/32.84 | 67.91/0.00/76.35 |

Note: the zeroes in the table indicate that there are no exact matches; however, partial matches exist and are reflected by the BLEU-4/CodeBLEU scores.
TABLE VIII: Generalization Performance of Models on Code Summarization (BLEU-4/METEOR/ROUGE-L).

| Code Summarization | B model on B | F model on B | Mix-Bench model on B | F model on F | B model on F |
| --- | --- | --- | --- | --- | --- |
| CodeBERT | 18.74/13.16/35.03 | 8.70/5.29/14.46 | 19.31/13.25/35.73 | 32.20/20.52/42.17 | 17.61/12.87/25.75 |
| CodeGPT | 14.15/15.89/33.23 | 2.70/1.91/7.81 | 13.92/15.41/32.80 | 31.97/20.19/41.83 | 12.85/13.19/22.89 |
| CodeT5 | 20.20/15.06/38.2 | 12.10/8.92/21.53 | 20.52/14.51/38.08 | 32.94/22.05/44.33 | 19.24/14.25/28.35 |
_Peril 6: CodeT5's efficiency for generation tasks is in doubt, as its highest resource consumption, due to model complexity, does not guarantee consistently better performance on different metrics._
## VI Related Work
For our related work, we focus on the pre-trained language models, the four applications - code summarization, bug fixing, bug detection, and code search, and related empirical studies of transformers.
### _Pre-trained Language Models_
Different pre-trained language models have been developed and demonstrated to achieve high performance on many NLP tasks [22, 32, 40, 60]. With the success of pre-trained language models in the NLP domain, researchers have been exploring and applying these models to code-related tasks [62, 63, 64]. Many pre-trained models for code have been developed. CodeBERT [23] is one of the earliest models that has been specifically trained for code-related tasks. Subsequently, models like GraphCodeBERT [25] were proposed to improve over CodeBERT by incorporating additional information, such as data flow. Similarly, CodeGPT [16] and CodeT5 [24] are built on the GPT and T5 architectures, but pre-trained on a code-related corpus with additional pre-training objectives to better understand code. Our experiments are conducted on these three representative pre-trained models for code - CodeBERT, CodeGPT, and CodeT5. Currently, many revolutionary large language models are being developed and applied to different domains, e.g., GPT-4 [65] and LLaMA [66]; they are excluded from our study due to their commercial nature.
### _Applications_
We study four applications in this paper - code summarization, bug fixing, bug detection, and code search.
#### Vi-B1 Code Summarization
Code summarization is one of the most popular SE tasks tackled with deep learning. Fernandes et al. [67] combined RNNs/Transformers with GGNNs. Ahmad et al. [68] applied the Transformer to code summarization and showed the advantage of sequential relative attention over positional encoding.
#### Vi-B2 Bug Fixing
There has been a lot of work in the domain of Transformer-based models that focuses on bug fixing. A majority of this work fine-tunes a pre-trained model for code. For example, CURE [69] fine-tuned a GPT model to generate patches for buggy code. SPT-Code [57], which has similar model structure as CodeBERT, also used fine-tuning to perform code refinement.
#### Vi-B3 Bug Detection
Many approaches have been explored in the field of bug detection. Over the past decades, developer information has been utilized to predict bugs [70, 71, 72]. With the development of deep learning techniques, Yang et al. [73] leveraged deep learning to generate new features from traditional features using a Deep Belief Network (DBN). Later, Li et al. [74] generated new features from processing Abstract Syntax Trees (ASTs) of program through Convolutional Neural Network, and combined them with traditional hand-crafted features to perform bug prediction. There are also deep learning algorithms that specialize in bug detection, e.g., DeepJIT [75] and CC2Vec [76].
#### Vi-B4 Code Search
In the deep learning domain for code search, Sachdev et al. [77] developed the tool NCS to learn embeddings of code without supervision. Gu et al. [78] proposed CODEnn to learn code representations through three encoded individual channels. With the outstanding performance of Transformer-based pre-trained models, many works [23, 59, 80, 29] have looked into their application to code search and achieved satisfactory results.
### _Empirical Study_
There are some empirical studies on transformers in the literature. For example, Zeng et al. [17] has studied the suitability and robustness of different pre-trained models for code understanding and generation tasks. In [28], Chirkova and Troshin investigated the capabilities of Transformers to leverage syntactic information in different tasks. Wan et al. [20] analyzed the interpretability of pre-trained models from different aspects including attention mechanism, word embedding, and syntax tree. Shi et al. [42] assessed and improved 4 benchmark datasets widely used in code summarization.
Compared to these previous works, our work additionally reviews the literature, summarizes the most widely studied applications, relates them to developers' needs, contests certain beliefs, draws conclusions about the generalization ability of models trained on different datasets, and investigates the resource consumption of different models.
## VII Conclusion
In this empirical study, we comprehensively review the literature on transformer-based pre-trained models used in SE research published from 2017-2022. We focus on three highly representative and widely studied models - CodeBERT, CodeGPT, and CodeT5 - and evaluate their performance on the four most popular code-related tasks - Code Summarization, Bug Fixing, Bug Detection, and Code Search. We examine existing literature and developers' needs, contest current beliefs about model architectures, evaluate the models' generalization ability on different datasets, and consider the resource consumption of the models. Significantly, we also summarize the promises and perils of using transformer-based models for SE research.
This study highlights several practical issues that need to be addressed in future research on transformer-based models for SE tasks. First, it is crucial to prioritize applications that are most relevant to developers' needs, such as communication and collaboration, deprecated features, module dependencies, and commit issues. Second, the encoder-decoder architecture's optimality for general-purpose coding tasks needs to be re-examined. Third, it is important to carefully select the most suitable model for each specific task. Fourth, the commonly used benchmark datasets should be improved to enhance their
generalization ability, making the trained models applicable to other datasets for the same task. Finally, it would be worthwhile to investigate the potential benefits of pruning techniques on trained models to reduce time and space complexity by using a smaller model for inference instead of the original larger model.
## VIII Data Availability
Our implementation is open-source7 and the datasets we used are publicly available; access links are also provided in the above repository.
Footnote 7: [https://anonymous.4open.science/r/Transformer-Empirical-SE-63ED](https://anonymous.4open.science/r/Transformer-Empirical-SE-63ED)
|
2309.07595 | Stable nanofacets in [111] tilt grain boundaries of face-centered cubic
metals | Grain boundaries can dissociate into facets if that reduces their excess
energy. This, however, introduces line defects at the facet junctions, which
present a driving force to grow the facets in order to reduce the total number
of junctions and thus the system's energy. Often, micrometer-sized facet
lengths are observed and facet growth only arrests for kinetic reasons. So far,
energetically stable, finite-sized facets have not been observed, even though
theoretical stability conditions have already been proposed. Here, we show a
case where nanometer-sized facets are indeed stable compared to longer facets
in [111] tilt grain boundaries in Cu by atomistic simulation and transmission
electron microscopy. The facet junctions lack a Burgers vector component, which
is unusual, but which removes the main energy cost of facet junctions. Only
attractive interactions via line forces remain, which result from a
discontinuity of grain boundary excess stress at the junction. Atomistic
simulations predict that the same phenomenon also occurs in at least Al and Ag. | Tobias Brink, Lena Langenohl, Swetha Pemma, Christian H. Liebscher, Gerhard Dehm | 2023-09-14T10:55:06Z | http://arxiv.org/abs/2309.07595v2 | # Stable Nanofacets in [111] Tilt Grain Boundaries of Face-Centered Cubic Metals
###### Abstract
Grain boundaries can dissociate into facets if that reduces their excess energy. This, however, introduces line defects at the facet junctions, which present a driving force to grow the facets in order to reduce the total number of junctions and thus the system's energy. Often, micrometer-sized facet lengths are observed and facet growth only arrests for kinetic reasons. So far, energetically stable, finite-sized facets have not been observed, even though theoretical stability conditions have already been proposed. Here, we show a case where nanometer-sized facets are indeed stable compared to longer facets in [11\(\overline{1}\)] tilt grain boundaries in Cu by atomistic simulation and transmission electron microscopy. The facet junctions lack a Burgers vector component, which is unusual, but which leads to attractive interactions via line forces. Atomistic simulations predict that the same phenomenon also occurs in at least Al and Ag.
Grain boundaries (GBs) are known to decompose into different facets when they can thereby reduce their (free) energy. Typically, asymmetric GBs split into symmetric facets [1; 2], but transitions from one symmetric GB plane into facets of different symmetric GB planes are also known, most famously for \(\Sigma 3\) GBs with \(\{011\}\) habit planes in fcc metals [3; 4; 5; 6; 7]. Faceting/defaceting transitions as a function of temperature were reported [3; 6; 8]. However, once facets appear, there is a driving force for their growth [5]. This is because the line separating different facets--the facet junction--is a defect [9; 10; 11] and the system can reduce its energy by reducing the number of junctions [5].
The \(\Sigma 19\)b [11\(\overline{1}\)] \(\{178\}\) and \(\Sigma 37\)c [11\(\overline{1}\)] \(\{1\,10\,11\}\) GBs in Cu and Al are close to the \(\Sigma 3\)[11\(\overline{1}\)] \(\{011\}\) GB, but do not exhibit macroscopic facets [12; 13; 14]. Instead, an ordered GB structure termed domino was found [12; 13; 14; 15]. In Cu, an additional pearl structure was found [12; 13; 15], meaning two GB phases [16; 17; 18; 19; 20] exist and can transition into one another based on the thermodynamic conditions [12; 13; 15]. On the second symmetric plane, inclined by \(30^{\circ}\), only a single GB phase (termed zipper) was reported for \(\Sigma 19\)b [11\(\overline{1}\)] \(\{235\}\) and \(\Sigma 37\)c [11\(\overline{1}\)] \(\{347\}\) GBs [21; 14; 22]. In this Letter, we demonstrate that the domino phase does in fact consist of zipper structure facets and that its facet junctions have an attractive interaction, leading to stable nanofacets.
## Methods
We used lammps [23; 24] for molecular statics and molecular dynamics (MD) simulations of [11\(\overline{1}\)] tilt GBs using embedded atom method (EAM) potentials for Cu [25], Al [26], and Ag [27]. Bicrystals were constructed by joining two crystallites in the desired orientation, sampling different relative displacements, and minimizing atomic positions with regard to the energy until the desired GB phases were found (\(T=0\) K). Details are provided in previous publications [15; 22]. GB planes and \(\Sigma\) values as a function of the misorientation were found with the code from Ref. [28]. MD simulations with 2 fs timesteps and temperature control via Nosé-Hoover thermostats were performed to confirm our findings.
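The GB energies discussed below are obtained by subtracting the cohesive energy of the same number of bulk atoms from the total energy of the relaxed bicrystal and normalizing by the GB area, \(\gamma=(E_{\rm GB}-Ne_{\rm coh})/A\). A minimal Python sketch of this bookkeeping (the numerical values are placeholders, not results of our simulations):

```python
def gb_energy(e_total, n_atoms, e_coh, gb_area):
    # gamma = (E_GB - N * e_coh) / A at T = 0 and zero applied stress.
    return (e_total - n_atoms * e_coh) / gb_area

# Placeholder values for a Cu bicrystal (energies in eV, area in Angstrom^2).
print(gb_energy(e_total=-42337.0, n_atoms=12000, e_coh=-3.54, gb_area=2000.0))
```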
We additionally conducted experiments on Cu thin films that were epitaxially grown on \(\langle 0001\rangle\) Al\({}_{2}\)O\({}_{3}\) wafers. The films were subsequently annealed for at least 1 h at 673 to 723 K [13; 22]. TEM lamellae containing GBs with different misorientation angles and GB planes were extracted using the focused Ga\({}^{+}\) ion beam of a Thermo Fisher Scientific Scios2HiVac dual-beam secondary electron microscope. They were then investigated in a FEI Titan Themis 80-300 (Thermo Fisher Scientific) scanning transmission electron microscope (STEM) at a voltage of 300 kV, beam currents of 70 to 80 pA, and a convergence angle of 17.8 mrad. A high-angle annular dark field (HAADF) STEM detector (Fischione Instruments Model 3000) with a collection angle of 78 to 200 mrad was used for the registration of all images. The interested reader is referred to Refs. [13; 22] for more details on the experimental methods.
Raw (meta-)data for simulation and experiment is provided in the companion dataset [29].
## Stable Nanofaceting
The \(\Sigma 37\)c [11\(\overline{1}\)] \(\{1\,10\,11\}\) domino phase from atomistic simulations of Cu is shown in Fig. 1(a). Red lines highlight an arbitrary choice of atomic motifs--which we simply call "squares"--serving to guide the eye. It appears that the two motifs are each inclined by \(\pm 30^{\circ}\) towards the GB plane, coinciding with the indicated crystal directions of the inset coordinate systems. They correspond to the habit planes of the zipper structure in the \(\Sigma 37\)c [11\(\overline{1}\)] \(\{347\}\) GB [Figs. 1(b)-(c)]. We can find the same square motifs from the domino phase in the zipper phase. By rotation of the zipper structures around [11\(\overline{1}\)] by \(\pm 30^{\circ}\), we can combine the zipper structures into the domino phase without introducing long-ranged lattice distortions
(Fig. 2(a) and Supplemental Fig. S6 in Ref. [30]). The same is found for other misorientation angles we investigated (\(\theta=46.83^{\circ}\) to \(55.59^{\circ}\), see Supplemental Figs. S1-S15). This suggests that the domino phase can be considered as a nanofaceted variant of the zipper structure. The structures in Al and Ag are the same [15] and this result consequently also applies to these metals. In previous experiments [12; 13; 14] and simulations [12; 13; 15], however, facet growth was never observed for domino. This is atypical, since--for example--the related \(\Sigma 3\) [\(11\overline{1}\)] {011} GB is prone to facet formation and growth (at least at low temperatures) [3; 4; 5; 6; 7; 14]. Consequently, it is also conceivable that the domino phase is not faceted at all, but simply structurally related to the zipper phase.
To clarify this, we investigated the GB energies when artificially constructing longer facets for different misorientation angles. Since the combination of two rotated zipper structures does not imply any additional strain and fits seamlessly at the joints, we can easily construct different facet lengths consisting of multiple zipper units. This allows us to compute the GB energies \(\gamma=(E_{\rm GB}-Ne_{\rm coh})/A\) for \(T=0\) and without externally-applied stress (\(\sigma=0\)),
Figure 1: Motifs of \(\Sigma 37\)c GBs in Cu. (a) Domino consists of a (b) left and (c) right zipper motif. Atom color indicates ABC stacking. Red lines are used to highlight the square motifs, blue and pink lines each highlight one trapezoidal unit. Black lines in (b) follow {112} planes, which is the GB plane of the corresponding \(\Sigma 3\) GB, while black/gray lines in (c) indicate Burgers circuits. As detailed in Supplemental Fig. S25, we obtain \(\mathbf{b}=a/6\langle 112\rangle\) for the black circuit and \(\mathbf{0}\) for the gray one.
Figure 2: Facets in Cu. (a) Joining two \(\Sigma 37\)c zippers without further relaxation. The atoms coming from the left zipper are indicated by circles filled on the left half and vice versa for the right zipper. In the center region, atoms from both structures are shown and overlap. Thus there is no Burgers vector. The inset shows how the trapezoidal units indicated in Fig. 1 overlap seamlessly. The yellow lines are shared between both trapezoidal units. (b) For \(\Sigma 3\), however, the indicated Burgers vector remains, here seen as a shift in the bottom crystal. (c) GB energies of GBs with an average plane corresponding to domino but different facet lengths. For \(\Sigma 19\)b, longer facets became unstable even in static minimization, reverting partially to domino, and are not shown here.
with \(E_{\rm GB}\) being the energy of a region consisting of \(N\) atoms containing the GB, \(e_{\rm coh}\) being the cohesive energy per atom of the bulk fcc phase (defined here as a negative number), and \(A\) being the GB area. Figure 2(c) shows that the minimum GB energy always occurs at the smallest facet length for our samples with \(46.83^{\circ}\leq\theta\leq 55.59^{\circ}\), opposite to the effects observed in many other faceted GBs [5; 6; 7] and in our \(\Sigma\)3 GB [\(\theta=60^{\circ}\), dark gray line in Fig. 2(c)]. In fact, energies converge asymptotically to the value of zipper facets with zero junction energy (infinite facet length), showing that nanofacets are even stable compared to microscale facets. The same results are obtained for Al and Ag (Supplemental Fig. S17). We performed MD simulations with longer facets and found that they shrink to the minimum size (Supplemental Fig. S18), supporting the hypothesis that the domino phase can be regarded as a stable, nanofaceted zipper variant. It remains to explore why this phenomenon occurs in our [\(11\overline{1}\)] tilt GBs for \(\theta<60^{\circ}\).
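The energy comparison in Fig. 2(c) reduces to repeated evaluations of \(\gamma=(E_{\rm GB}-Ne_{\rm coh})/A\). As a minimal sketch of such an evaluation in Python (the file name, column layout, slab half-width, and cell cross-section below are illustrative assumptions, not our actual analysis scripts; the cohesive energy is a typical EAM value for fcc Cu):

```python
import numpy as np

# gamma = (E_GB - N * e_coh) / A, cf. the definition in the text.
# Assumption: "dump.txt" holds two columns, atomic z-position (Angstrom)
# and per-atom potential energy (eV), from a minimized bicrystal.
EV_PER_A2_TO_J_PER_M2 = 16.0218   # unit conversion: 1 eV/A^2 = 16.0218 J/m^2

e_coh = -3.54        # cohesive energy per atom of fcc Cu (eV), typical EAM value
area = 40.0 * 35.0   # GB cross-section A (A^2); hypothetical cell dimensions

z, pe = np.loadtxt("dump.txt", unpack=True)
region = np.abs(z - z.mean()) < 15.0   # slab of atoms containing the GB
N = np.count_nonzero(region)
E_gb = pe[region].sum()                # total energy of the GB region

gamma = (E_gb - N * e_coh) / area * EV_PER_A2_TO_J_PER_M2
print(f"gamma = {gamma:.3f} J/m^2")
```

Repeating this for each constructed facet length gives one point on the curves of Fig. 2(c).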
In Eq. 5 of Ref. [5], a criterion was derived for energetic stabilization of finite facet lengths from interactions between dislocation content and line forces at the junctions. The facet junction energy consists of contributions from the core energy, the dislocation interaction (due to the junction Burgers vector \(\mathbf{b}\)), and line forces due to the GB excess stress [\(\tau\)]. For \([\tau]>bC\), with \(C\) being a positive constant, facet growth is predicted to be suppressed. A Burgers circuit around a junction in our faceted \(\Sigma\)37c GB reveals that the Burgers vector content of the junction with reference to the zipper phase is zero (Supplemental Fig. S20). This is consistent with the combination of left and right zipper leading to an undistorted domino phase motif [Fig. 2(a)]. Facet junctions in \(\Sigma\)3 GBs, in contrast, contain a finite dislocation content [Fig. 2(b) and Supplemental Fig. S19]. To fulfill the criterion from Ref. [5], we therefore only require a positive [\(\tau\)]. More exactly, the GB excess stress is a tensorial quantity, defined at \(T=0\) and \(\sigma=0\) as
\[[\tau_{ij}]=\frac{\overline{\sigma}_{ij}V}{A}, \tag{1}\]
where \(\overline{\sigma}_{ij}\) is the average residual stress in a region of volume \(V\) around the GB [31; 32]. Indices \(i,j=1\) correspond to the tilt axis, \(i,j=2\) to its orthogonal direction in the GB plane, and \(i,j=3\) to the GB normal. Excess properties of our GBs for Cu, Al, and Ag are listed in Supplemental Tables S-I and S-II and Supplemental Figs. S22-S23. Notably, the values of \([\tau_{12}]\) for the two zipper structures that make up the domino structures have opposite signs (\(\pm 0.15\,\)J/m\({}^{2}\) for \(\Sigma 37\)c in Cu), while all other components are the same. The line forces can be computed as the difference of the two stress vectors of the zipper facets as
\[f_{j}=\pm[\tau_{ij}^{\rm left}]\cdot v_{i}\mp[\tau_{ij}^{\rm right}]\cdot v_{i },\text{ with }\mathbf{v}=(0,1,0) \tag{2}\]
representing the \([\overline{4}73]\) direction (coordinate system of lower crystallite) [33; 34]. The sign convention is arbitrary. Due to the opposite sign of \([\tau_{12}]\) in both facets, this yields a nonzero value of \(f_{1}\) with alternating signs at each junction. The line force thus contributes an energy proportional to \(-f_{1}^{2}\ln L\), where \(L\) is the facet length [33; 34]. In other words, in our case there is no Burgers vector but only opposite and therefore attractive line forces at the junctions and the criterion for facet shrinkage is trivially fulfilled. Similar formalisms have been used for surface phase coexistence [35; 36; 37; 38]. This elastic interaction can also be visualized by computing the lattice strains (per-atom strain with reference to the perfect fcc lattice) as calculated by the polyhedral template matching method [39] in ovito [40]: No long-range volumetric strains exist at the GB and the shear strains of the facets compensate each other when the junctions are closely-spaced (Supplemental Fig. S21). For the \(\Sigma\)3 GB, we find in contrast \([\tau_{12}]=0\) and \(\mathbf{b}\neq\mathbf{0}\), leading to a repulsion of the junctions and facet growth [Fig. 2(c)].
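Equation (2) can be evaluated directly from the tabulated excess stresses. A minimal sketch, using the \([\tau_{12}]=\pm 0.15\) J/m\(^2\) values quoted above for \(\Sigma 37\)c in Cu and zeros for the components stated to be equal between the two facets (so that they cancel in the difference):

```python
import numpy as np

# Line force at a facet junction, Eq. (2): f_j = [tau_ij^left - tau_ij^right] v_i.
# Only [tau_12] differs in sign between the left and right zipper facets;
# equal components cancel in the difference and are set to zero here for clarity.
tau_left = np.zeros((3, 3))
tau_left[0, 1] = tau_left[1, 0] = +0.15   # [tau_12] in J/m^2 (Sigma-37c, Cu)
tau_right = -tau_left                     # opposite sign on the other facet

v = np.array([0.0, 1.0, 0.0])             # in-plane direction, cf. Eq. (2)
f = tau_left @ v - tau_right @ v          # sign convention arbitrary
print(f)   # -> [0.3, 0., 0.]: nonzero f_1, alternating in sign from junction to junction
```

The attractive interaction proportional to \(-f_{1}^{2}\ln L\) then follows from standard line-force elasticity [33; 34].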
## Zipper structure and facet lengths
Regarding the zipper structure, the motifs in the \(\Sigma\)3 \(\{112\}\) GB and close-by GBs with lower misorientation angles \(\theta\) are similar [Fig. 3(f)-(g) and Supplemental Figs. S2, S5, S8, S11, S14, S16]: All GBs contain the squares indicated in red. Additionally, GBs with \(\theta<60^{\circ}\) contain another motif, indicated by the edge dislocation symbol in Fig. 1(b)-(c). We call this the trapezoidal unit [22]. Inspection of the \(\{112\}\) planes in Fig. 1(b) reveals that these motifs do indeed resemble edge dislocations: It appears that \(\{112\}\) planes terminate at the trapezoidal unit. An analysis of the local stress fields (Supplemental Fig. S24) and a Burgers circuit analysis [Fig. 1(c) with details in Supplemental Fig. S25] confirm that these units can be regarded as virtual \(a/6\langle 112\rangle\) dislocations when compared to the reference state of the \(\Sigma\)3 GB [22]. It is important to note that these are not real dislocations (in principle the dislocation content of \(\Sigma\)37c at \(\theta=50.57^{\circ}\) is lower than for \(\Sigma\)3), but that they behave similarly. We thus term them "virtual dislocations". The energy of the zipper GB can therefore be written as a combination of the \(\Sigma\)3 GB energy plus the energy of a low-angle GB (Supplemental Fig. S26).
This is relevant to the faceting because the distance between trapezoidal units can be written as
\[L_{\rm min}(\theta)=\frac{b}{2\sin\frac{60^{\circ}-\theta}{2}} \tag{3}\]
via a reversal of Read and Shockley's equation [41], as validated in Supplemental Fig. S26(a)-(c). Figure 2(a)-(b) shows that the zipper facets join seamlessly at the trapezoidal unit, and cannot join seamlessly if it is absent (\(\Sigma\)3). Thus, \(L_{\rm min}\) is also the minimal facet length of the domino structure, as confirmed in Fig. 3(a). Figures 3(b)-(e) show STEM images of domino phases with different misorientation angles \(\theta\). The simulations match the experimental observations where comparison is possible. For larger \(\theta\) values it becomes obvious that the domino structure is faceted. The experimental facet lengths correspond to the predictions [Fig. 3(a)]. We never observed longer facets in our data, despite annealing after deposition for at least \(1\,\mathrm{h}\) at around \(T\approx 700\,\mathrm{K}\approx 0.5\,T_{\mathrm{melt}}\). Faceting/defaceting transitions in \(\Sigma 3\) {011} GBs in Al were previously observed on the micrometer scale over holding times of \(40\,\mathrm{min}\) at similar homologous temperatures [3], showing that the facet-growth kinetics should be sufficiently fast to observe nanofacet growth if there were an energetic driving force. The absence of long facets at atomic resolution in our experiment thus provides proof for the stabilization of nanofacets.
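Equation (3) can be evaluated numerically; a minimal sketch, assuming the conventional Cu lattice constant \(a=3.615\) Å (an input assumption here) and \(b=|a/6\,[112]|=a/\sqrt{6}\):

```python
import numpy as np

# Minimal facet length from Eq. (3): L_min = b / (2 sin((60 deg - theta)/2)).
a = 3.615                 # fcc Cu lattice constant (Angstrom), assumed value
b = a / np.sqrt(6.0)      # |a/6 [112]| = a*sqrt(6)/6

def L_min(theta_deg):
    """Predicted spacing of trapezoidal units / minimal facet length (Angstrom)."""
    return b / (2.0 * np.sin(np.radians(60.0 - theta_deg) / 2.0))

for theta in (46.83, 50.57, 55.59):   # misorientations discussed in the text
    print(f"theta = {theta:5.2f} deg -> L_min = {L_min(theta) / 10:.2f} nm")
```

The resulting nanometer-scale lengths, which grow as \(\theta\to 60^{\circ}\), correspond to the trend in Fig. 3(a).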
## Asymmetric GBs
Combination of zipper structures into domino structures allows for a simple method to produce asymmetric GBs in between the zipper and domino inclinations by increasing the length of one of the facets. We constructed different inclinations \(\phi\) for \(\Sigma 37\)c GBs and calculated their GB energies and excess properties using molecular statics. Figure 4(a) shows that it is thus possible to gradually transition from domino to zipper and vice versa by changing the GB plane inclination. The combination of zipper structures with opposite values of [\(\tau_{12}\)] into domino with [\(\tau_{12}\)] = 0 is also evident. We repeated these simulations for Al and Ag, yielding equivalent results (Supplemental Fig. S17). The energy landscape suggests that domino and zipper can in principle transition into each other. MD simulations confirm that inclinations change towards
Figure 4: (a) Changing inclination for the \(\Sigma 37\)c GB from the left zipper via domino to the right zipper by changing the length of one side of the facets. Inclination angle \(\phi\) is relative to the domino plane. (b)-(c) STEM images of asymmetric \(\Sigma 37\)c GBs. The measured inclination angle has an uncertainty of \(\pm 0.5^{\circ}\).
Figure 3: (a) Predicted facet lengths in domino assuming that the zippers are joined at the trapezoidal unit and that the trapezoidal unit is a dislocation with \(\mathbf{b}=a/6\langle 112\rangle\). Simulations represent the relevant domino phase, constructed by joining the left and right zipper structures. Experimental orientations \(\theta\) and facet lengths \(L\) are measured from the STEM images in (b)-(e) and from Fig. 1 in Ref. [12] for \(\Sigma 19\)b. Data in (b) is from Ref. [13] and data in (f) and (g) from Ref. [22]. The measurement error of \(\theta\) in all experimental images is \(\pm 0.5^{\circ}\).
the zipper plane for constant GB length (Supplemental Fig. S27).
Other types of structural defects could also compensate for asymmetric inclinations and we do not claim that lengthening facets is the only way to construct asymmetric domino/zipper GBs. Nevertheless, Figs. 4(b)-(c) show STEM images of two different inclinations whose atomic structures indeed consist of longer and shorter facets, highlighting that low-energy configurations can occur this way.
## Conclusion
This is the first finding of GB nanofacets that are energetically stable compared to longer facets in pure metals. Atomic-resolution imaging is required to resolve these nanofaceted domino structures experimentally. Zipper-type facets of \([11\overline{1}]\) tilt GBs with misorientation \(\theta<60^{\circ}\) can be joined seamlessly into domino structures. Their facet junction occurs at the site of the zipper structure's trapezoidal motif--which behaves like an \(a/6\langle 112\rangle\) edge dislocation--and contains no additional Burgers vector. In agreement with theory [5], junction attraction due to line forces in the absence of a junction Burgers vector is sufficient to stabilize nanofacets energetically. Because the trapezoidal unit is not present in the more well-studied \(\Sigma 3\) GBs, these GBs exhibit a junction Burgers vector and thus a driving force for facet growth. The faceting/defaceting in \(\Sigma 3\)[3; 6; 8] is thus likely a sign of a GB phase transition (change away from the domino/zipper structures) and should be investigated in the future.
We thank Gunther Richter and his team from the Max Planck Institute for Intelligent Systems for producing the Cu thin film by molecular beam epitaxy. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 787446; GB-CORRELATE).
T.B., G.D., and C.H.L. each conceptualized parts of the study. T.B. performed and analyzed the atomistic computer simulations, while S.P. analyzed the elastic interactions due to Burgers vectors. L.L. performed the experimental sample preparation, HAADF-STEM investigations, and analysis of the corresponding datasets. The project was supervised by C.H.L. and G.D., who also contributed to discussions. G.D. secured funding for T.B., L.L., and S.P. via the ERC grant GB-CORRELATE. T.B. and L.L. prepared the initial manuscript draft and all authors contributed to the preparation of the final manuscript.
|
2309.06853 | Spin-orbital correlations from complex orbital order in MgV$_{2}$O$_{4}$ | MgV$_{2}$O$_{4}$ is a spinel based on magnetic V$^{3+}$ ions which host both
spin ($S=1$) and orbital ($l_{eff}=1$) moments. Owing to the underlying
pyrochlore coordination of the magnetic sites, the spins in MgV$_{2}$O$_{4}$
only antiferromagnetically order once the frustrating interactions imposed by
the $Fd\overline{3}m$ lattice are broken through an orbitally-driven structural
distortion at T$_{S}$ $\simeq$ 60 K. Consequently, a N\'eel transition occurs
at T$_{N}$ $\simeq$ 40 K. Low temperature spatial ordering of the electronic
orbitals is fundamental to both the structural and magnetic properties;
however, whether it is best described by complex or real orbital ordering has
remained ambiguous. We apply neutron spectroscopy to resolve the
nature of the orbital ground state and characterize hysteretic spin-orbital
correlations using x-ray and neutron diffraction. Neutron spectroscopy finds
multiple excitation bands and we parameterize these in terms of a multi-level
(or excitonic) theory based on the orbitally degenerate ground state.
Meaningful for the orbital ground state, we report an "optical-like" mode at
high energies that we attribute to a crystal-field-like excitation from the
spin-orbital $j_{eff}$=2 ground state manifold to an excited $j_{eff}$=1 energy
level. We parameterize the magnetic excitations in terms of a Hamiltonian with
spin-orbit coupling and local crystalline electric field distortions resulting
from deviations from perfect octahedra surrounding the V$^{3+}$ ions. We
suggest that this provides compelling evidence for complex orbital order in
MgV$_{2}$O$_{4}$. We then apply the consequences of this model to understand
hysteretic effects in the magnetic diffuse scattering where we propose that
MgV$_{2}$O$_{4}$ displays a high temperature orbital memory of the low
temperature spin order. | H. Lane, P. M. Sarte, K. Guratinder, A. M. Arevalo-Lopez, R. S. Perry, E. C. Hunter, T. Weber, B. Roessli, A. Stunault, Y. Su, R. A. Ewings, S. D. Wilson, P. BΓΆni, J. P. Attfield, C. Stock | 2023-09-13T09:59:59Z | http://arxiv.org/abs/2309.06853v1 | # Spin-orbital correlations from complex orbital order in MgV\({}_{2}\)O\({}_{4}\)
###### Abstract
MgV\({}_{2}\)O\({}_{4}\) is a spinel based on magnetic V\({}^{3+}\) ions which host both spin (\(S=1\)) and orbital (\(l_{eff}=1\)) moments. Owing to the underlying pyrochlore coordination of the magnetic sites, the spins in MgV\({}_{2}\)O\({}_{4}\) only antiferromagnetically order once the frustrating interactions imposed by the \(Fd\overline{3}m\) lattice are broken through an orbitally-driven structural distortion at T\({}_{S}\simeq 60\) K. Consequently, a Néel transition occurs at T\({}_{N}\simeq 40\) K. Low temperature spatial ordering of the electronic orbitals is fundamental to both the structural and magnetic properties; however, whether it is best described by complex or real orbital ordering has remained ambiguous. We apply neutron spectroscopy to resolve the nature of the orbital ground state and characterize hysteretic spin-orbital correlations using x-ray and neutron diffraction. Neutron spectroscopy finds multiple excitation bands and we parameterize these in terms of a multi-level (or excitonic) theory based on the orbitally degenerate ground state. Importantly for the orbital ground state, we report an "optical-like" mode at high energies that we attribute to a crystal-field-like excitation from the spin-orbital \(j_{eff}\)=2 ground state manifold to an excited \(j_{eff}\)=1 energy level. We parameterize the magnetic excitations in terms of a Hamiltonian with spin-orbit coupling and local crystalline electric field distortions resulting from deviations from perfect octahedra surrounding the V\({}^{3+}\) ions. We suggest that this provides compelling evidence for complex orbital order in MgV\({}_{2}\)O\({}_{4}\). We then apply the consequences of this model to understand hysteretic effects in the magnetic diffuse scattering, where we propose that MgV\({}_{2}\)O\({}_{4}\) displays a high temperature orbital memory of the low temperature spin order.
pacs: 75.40.-a, 75.40.-b
## I Introduction
Atomic orbitals of magnetic ions provide a link between the crystallographic structure and local magnetic moments and hence an avenue to couple structural and magnetic degrees of freedom. [1; 2; 3; 4; 5; 6; 7] A central parameter controlling orbital order in materials is spin-orbit coupling, which exists in magnetic ions with an inherent single-ion orbital degeneracy. Given that spin-orbit coupling in multielectron atoms scales with the atomic number squared (\(\lambda\propto Z^{2}\))[8], first row transition metals provide an opportunity to study magnetism when spin-orbit coupling is of a similar energy scale to the magnetic exchange and to the energy scales of local structural distortions away from a perfect (for example, octahedral) environment. These energy scales have consequences for the nature of the real-space magnetic orbitals which spatially order at low temperatures, breaking any degeneracy. Of particular interest is the distinction between complex orbitals and real orbitals, illustrated in Fig. 1. In this paper we investigate the interplay of spin-orbital physics in MgV\({}_{2}\)O\({}_{4}\) and apply neutron spectroscopy to obtain information on the underlying orbital order, pointing to ordering of complex orbitals.
The properties of the \(A\)V\({}_{2}\)O\({}_{4}\) series fall into two broadly different classes, with the \(A\)-site either being magnetic (as is the case for (Mn,Fe,Co)V\({}_{2}\)O\({}_{4}\)) or nonmagnetic (for example in (Mg,Zn)V\({}_{2}\)O\({}_{4}\)). [12] These two classes of materials display some notable differences, with the ordered moment on the V\({}^{3+}\) site reportedly larger when the \(A\)-site of the spinel structure is magnetic [13]. Also, magnetic \(A\)-site vanadate spinels display noncollinear magnetic order in contrast to their nonmagnetic counterparts. [14] For simplicity in investigating the orbital ground state of the V\({}^{3+}\) ion in this class of compounds, we discuss in this paper the case where the \(A\)-site is nonmagnetic. Given the orbital degeneracy on the V\({}^{3+}\) site, (Mg,Zn)V\({}_{2}\)O\({}_{4}\) compounds exhibit a Jahn-Teller distortion with a contraction along the \(c\)-axis accompanying a cubic to tetragonal structural phase transition at \(T_{S}\). This distortion relieves the underlying geometric magnetic frustration and hence is followed by the formation of long-range collinear antiferromagnetic order at \(T_{N}<T_{S}\). The ordered moments in the Néel phase are reduced from the expected spin-only moment of \(gS=2\)\(\mu_{B}\), with values of, for example, \(\simeq 0.5\)\(\mu_{B}\) reported for MgV\({}_{2}\)O\({}_{4}\). [15]
The specific case of MgV\({}_{2}\)O\({}_{4}\) exhibits an orbitally driven Jahn-Teller structural transition at T\({}_{S}\simeq 60\) K and a Néel transition at T\({}_{N}\simeq 40\) K. The exact nature of the low temperature orbital order is ambiguous: two types of low-temperature orbital order have been proposed for this compound. The first, termed Real Orbital Ordering (ROO), corresponds to orbital ordering of the real \(t_{2g}\) and \(e_{g}\) orbitals, which are eigenstates in the limit that the crystal field imposed on the V\({}^{3+}\) ion from the surrounding oxygen octahedra is large. This model has been advocated based on x-ray and electron beam diffraction data. [16] An alternate suggestion [17] has been made for ordering of the complex basis states of the \(\hat{L}_{z}\) observable, which are complex linear combinations of the real \(d\)-orbitals. This is denoted as Complex Orbital Ordering (COO) and is the basis used in the weak-intermediate crystal field limit for transition metal ions. Based on powder neutron spectroscopy and diffraction, it has also been suggested that the orbital ordering could be intermediate between the two ROO and COO extremes. [15] A graphical representation of the two different extremes is illustrated in Fig. 1.
A theoretical study outlined in Ref. [18] suggested neutron spectroscopy as a means of distinguishing between COO and ROO orders at low temperatures. In particular the study noted the importance of spin-orbit coupling with COO giving multiple magnetic branches in the neutron response and also a larger magnetic zone-center gap than would be expected in the ROO model. Motivated by this, we investigate the magnetic neutron response in single crystals of MgV\({}_{2}\)O\({}_{4}\). We study the hysteretic critical dynamics using diffraction, measure the dynamic response, and develop a theory to describe the magnetic excitations from the spin-orbital ground state. Given the spatially diffuse and correlated nature of the spin-orbital states used for our theory, we term this an excitonic approach.
We apply this excitonic approach to model our neutron spectroscopy results using the single-ion states, treating the single-ion Hamiltonian (including spin-orbit coupling) and the exchange energetics on an equal footing. [19; 20; 21; 22] This model combined with the data favors the ordering of complex orbitals (COO) in MgV\({}_{2}\)O\({}_{4}\) and illustrates the importance of using a complex orbital basis for understanding the properties of first-row transition metal ions.
This paper is divided into four sections. First, we outline the materials preparation methodology and experimental techniques used to probe the magnetic fluctuations and critical scattering. Second, we outline the spectroscopic experimental results of both the low-energy spin-wave excitations and a higher energy optic-like mode. Third, we present a theory for the magnetic excitations including spin-orbit coupling and the orbitally degenerate ground state. Finally, we investigate the hysteretic effects resulting from the orbital degeneracy using energy integrated neutron diffuse scattering.
## II Experimental information
_Materials preparation:_ Single crystals (Fig. 2\(b\)) of MgV\({}_{2}\)O\({}_{4}\) were grown using the floating zone technique with details provided in the Appendix. Given the extreme sensitivity of the magnetic properties to stoichiometry and chemical order in MgV\({}_{2}\)O\({}_{4}\)[23], we have characterized our single crystals using both thermodynamic and scattering probes with neutrons and x-rays. The diffraction results are summarized in Fig. 2\((c)\), with high temperature structural (T\({}_{S}\simeq 60\) K) and magnetic (T\({}_{N}\simeq 40\) K from the magnetic \(\vec{Q}\)=(1,1,0) Bragg peak) transitions
Figure 1: Graphical representation of the real \(|d_{\alpha\beta}\rangle\) and imaginary \(|\hat{l}_{z}=\pm 1\rangle\) orbital wavefunctions. The surface represents the absolute magnitude of the wavefunction whereas the color represents the phase.
consistent with the published literature [15; 24].
_Characterization:_ Figure 2 (\(c\)) illustrates powder diffraction measurements using synchrotron x-rays (Diamond, I11) and neutrons (PSI, RITA-II). Powder synchrotron data show a first-order transition at T\({}_{S}\simeq\) 60 K from a high temperature cubic to a low temperature tetragonal phase. This is confirmed by single crystal neutron diffraction data following the \(\vec{Q}\)=(0,0,2) Bragg peak on warming and cooling. A large hysteresis in the neutron intensity is observed between warming and cooling over the same temperature range where synchrotron x-rays measure a coexistence of tetragonal and cubic phases. This is further discussed in the Appendix, where this region was found to extend from \(\sim\) 55-60 K. We have further confirmed that the structural properties of single crystals are consistent with the published structural phases through single crystal neutron diffraction data (D9, ILL). The refinement results are summarized in Fig. 2 (\(d\)) in both the cubic (T=140 K) and tetragonal (T=50 K) phases with the structural parameters listed in the Appendix. Fig. 2 (\(d\)) illustrates the calculated Bragg peak intensities (\(\propto F^{2}\)) as a function of the measured intensities, with a straight line indicating that the models describe the results well. The slope of the line is a calibration factor scaling between experimental and calculated values. We have confirmed the magnetic structure to be consistent with that outlined in the literature for ZnV\({}_{2}\)O\({}_{4}\)[25] using polarized neutrons (DNS, MLZ). The results of this are schematically shown in Fig. 2 (\(a\)) with the V\({}^{3+}\) magnetic moments pointing along the tetragonal \(c\)-axis.
_Critical magnetic fluctuations:_ In this paper we discuss energy-integrated polarized diffuse scattering measurements sensitive to the magnetic critical scattering and neutron inelastic data probing the spin-orbital dynamics. To investigate the magnetic critical scattering as a function of temperature, we studied the polarized neutron cross section using an \(xyz\) geometry provided by the DNS diffractometer (MLZ, Munich) [26]. Further details of the polarized beam measurements and analysis are provided in the Appendix.
_Neutron Spectroscopy:_ To probe the magnetic dynamics sensitive to the spin-orbital ground state, neutron spectroscopy was performed on four different spectrometers. To study the high-energy spin-orbital excitations we used the MAPS spectrometer.
Figure 2: (\(a\)) The low temperature nuclear and magnetic structure of MgV\({}_{2}\)O\({}_{4}\) obtained from our neutron diffraction results. (\(b\)) Our coaligned single crystals. (\(c\)) Comparison of powder x-ray and neutron results illustrating the structural and magnetic transitions. The shaded region indicates where both tetragonal and cubic phases coexist. (\(d\)) Single crystal neutron diffraction refinements of the nuclear structure in the high temperature T=140 K cubic phase and low temperature T=50 K tetragonal phases. The calculated Bragg peak structure factor squared (\(F_{calc}^{2}\)) is plotted against the measured structure factor squared (\(F_{obs}^{2}\)) as measured on D9 (ILL).
The sample was mounted so that \(\vec{k}_{i}\) was along the \(c^{*}\) axis and the intensity along \(L\) was integrated, as is typically done with time-of-flight direct-geometry spectrometers to measure one- or two-dimensional spin excitations. To obtain an overview of the low-energy and low-dimensional dispersive dynamics, the multi-rep-rate option on MERLIN [27] was exploited in combination with sample rotation. Further data were taken on the EIGER triple-axis spectrometer [28] to characterize weakly three-dimensional dispersive excitations sensitive to interactions between chains. Finally, to probe the low-energy gapped excitations, sensitive to local single-ion anisotropy, we used cold neutrons on the RITA-II spectrometer [29].
## III Experimental results
### Low-energy dispersive dynamics
The magnetic excitations characterizing the underlying spin-orbital ground state below the Néel temperature T\({}_{N}\) are summarized in Fig. 3. The overall dispersion is displayed in Fig. 3 (\(a\)), taken on the MERLIN spectrometer with E\({}_{i}\)=49 meV. This momentum-energy slice illustrates a strong band of magnetic excitations which extends up to \(\sim\) 35 meV. Perpendicular to this direction, a much more weakly dispersive mode is illustrated in Fig. 3 (\(b\)). This reflects the underlying strongly one-dimensional coordination [30] of the V\({}^{3+}\) ions. The dispersion is further studied through a series of constant momentum scans using the EIGER thermal triple-axis spectrometer in Fig. 3 (\(d-f\)). Analysis of the low-energy fluctuations with the cold triple-axis spectrometer RITA-II [Fig. 3 (\(c\))] reveals an energy gap of \(\sim\) 6 meV in the fluctuation spectrum. These results demonstrate the presence of strongly one-dimensional magnetic fluctuations with an energetic gap due to crystalline anisotropy. We discuss the origin of these features below, where we present an excitonic model for the spin-orbital excitations and discuss the relative magnetic exchange energies.
### Higher energy gapped mode
One of the distinctions between COO and ROO orbital ordering scenarios is the presence of an optical-like excitation at higher energies [18] corresponding to a transition between the \(j_{eff}=2\) and \(j_{eff}=1\) manifolds (as theoretically outlined below). A higher energy band of magnetic excitations is investigated using the MAPS spectrometer. The results are summarized in Fig. 4. Fig. 4 (\(a\)) displays a constant energy slice showing weakly correlated excitations near \(\vec{Q}\)=(2,0,0) and equivalent positions. A constant momentum cut, corrected using a background taken from detectors at large momentum transfer, is shown in Fig. 4 (\(b\)), displaying a well-defined peak at \(\sim\) 50 meV. We discuss the origin of this additional excitation and connect it with the low-energy response below by investigating the spin-orbital neutron response theoretically.
## IV Theory
In MgV\({}_{2}\)O\({}_{4}\) the magnetic V\({}^{3+}\) (\(3d^{2}\)) ions form a pyrochlore lattice with the V\({}^{3+}\) ions on the spinel B sites surrounded by an octahedral coordination of oxygen O\({}^{2-}\) (Fig. 5 \(b,c\)) which determines the orbital ground state. Neighboring V\({}^{3+}\) ions occupy edge-sharing octahedra with non-magnetic tetrahedrally-coordinated Mg filling the voids between VO\({}_{6}\) octahedra. The magnetic inter-ion interactions are likely governed by a combination of direct \(d\)-\(d\) orbital overlap and oxygen mediated superexchange via the \(\sim 90^{\circ}\) V-O-V bonds.
In the rare earth pyrochlores [33], where the B site is occupied by the heavier \(4f\) ions, spin-orbit coupling (\(\lambda\propto Z^{2}\)) [8] is the dominant energy scale and motivates the projection onto a ground state manifold \(J\equiv L+S\). The crystalline electric field further splits the ground state with the resulting energy distribution dependent on the species of ion and local environment. In the case that the ground state is a Kramers doublet, a projection onto an effective \(S=1/2\) pseudospin [34; 35; 36] has been utilized and provided the motivation for seeking out quantum spin liquid [37] behavior in these materials.
The physics of the \(3d\) pyrochlores is unlike that of their rare earth cousins owing to a different hierarchy of single-ion energy scales. Particularly, in \(3d\) ions, the spin-orbit coupling (\(\lambda\)) is smaller than the crystalline electric field (\(Dq\)) [38; 39; 32]. With typical energy scales of \(Dq\sim 0.1\) eV and \(\lambda\sim 10\) meV, spin-orbit coupling is a perturbation on the crystalline electric field Hamiltonian and the ground state is defined by the orbital angular momentum \(L\). Within this ground state manifold, the spin-orbit coupling, magnetic exchange and Jahn-Teller energy scales are comparable. In the \(3d\) transition metal compounds it is therefore necessary to consider both single-ion spin-orbital energy scales (\(\mathcal{H}_{SI}\)) and the corresponding magnetic exchange interactions (\(\mathcal{H}_{exch}\))[38]
\[\mathcal{H}=\mathcal{H}_{SI}+\mathcal{H}_{exch}.\]
We will begin by discussing the single ion physics of the \(3d^{2}\) ions to understand the magnetic ground state before considering the magnetic inter-ion exchange in vanadium spinels. Using a Green's function formalism, the dynamical structure factor measured with neutrons will then be calculated using the random phase approximation (RPA) applied to MgV\({}_{2}\)O\({}_{4}\), treating explicitly the single-ion Hamiltonian, which produces a quantized multilevel ground state.
### \(\mathcal{H}_{SI}\) - Single-ion physics
The single-ion physics discussed in this section, which determines the quantized spin-orbital ground state of the magnetic V\({}^{3+}\) ions, is shown schematically in Fig. 5. We consider first the free-ion V\({}^{3+}\) ground state, followed by the effects of the crystalline electric field, spin-orbit coupling, distortions from an octahedral environment, and finally the Zeeman-like molecular field originating from magnetic ordering in the low temperature Néel phase.
#### \({}^{3}F\) - Free ion ground state
For the case of a free V\({}^{3+}\) ion with 2 electrons in the five degenerate \(d\) orbitals, the ground state is determined by Hund's rules, which define \(L=3\) and \(S=1\). The orbital ground state of the free V\({}^{3+}\) ion is therefore \({}^{3}F\) in spectroscopic notation.
#### \(\mathcal{H}_{CEF}\) - Crystalline electric field
As discussed above, the dominant energy scale for magnetic \(3d\) ions is the crystalline electric field originating from the O\({}^{2-}\) ions forming an octahedron around the V\({}^{3+}\) ion. Application of an octahedral crystalline electric field expressed in terms of Stevens operators \(O_{l}^{m}\) gives the following
Figure 4: (\(a\)) displays a constant energy slice integrated over E=[50,55] meV taken using the MAPS spectrometer. (\(b\)) illustrates a background-corrected constant momentum cut showing a well-defined peak in energy indicative of a second higher energy magnetic band. The red line shows a fitted Gaussian on an exponentially decaying background peaked at 51 meV with a FWHM of 3 meV.
\[\mathcal{H}_{CEF}=B_{4}\left(O_{4}^{0}+5O_{4}^{4}\right)\]
where \(B_{4}<0\) for \(d^{2}\) ions [32]. The energy spectrum resulting from this crystalline electric field Hamiltonian is schematically displayed in Fig. 5 (\(a\)) (Tanabe-Sugano diagram) and parameterized in terms of the crystal field strength (\(Dq\)) and the Racah parameter \(B\), which physically corresponds to the energy cost associated with the Coulomb repulsion. The limit \(Dq/B\to 0\) corresponds to the weak crystal field limit where Hund's rules across all \(d\)-orbitals apply. In the large \(Dq/B\) limit, the crystal field energy scale is dominant and in some transition metal ions (such as Co\({}^{2+}\)) can result in spin transitions.
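This multiplet structure is easily checked numerically by diagonalizing \(\mathcal{H}_{CEF}\) in the seven-dimensional \(|L=3,m\rangle\) basis. A minimal sketch using the standard Stevens operator definitions (\(B_{4}=-1\) is an arbitrary unit here, not a fitted value):

```python
import numpy as np

# Diagonalize H_CEF = B4*(O4^0 + 5*O4^4) for L = 3 and recover the
# 3T1 (ground, for B4 < 0) + 3T2 + 3A2 multiplets with an 8Dq = 480|B4| gap.
L = 3
m = np.arange(L, -L - 1, -1).astype(float)        # basis |m> = |3>, ..., |-3>
Lz = np.diag(m)
Lp = np.diag(np.sqrt(L * (L + 1) - m[1:] * (m[1:] + 1)), k=1)  # raising operator

X = L * (L + 1)
O40 = 35 * np.linalg.matrix_power(Lz, 4) - (30 * X - 25) * Lz @ Lz \
      + (3 * X**2 - 6 * X) * np.eye(7)
Lp4 = np.linalg.matrix_power(Lp, 4)
O44 = 0.5 * (Lp4 + Lp4.T)                          # (L+^4 + L-^4)/2

B4 = -1.0                                          # arbitrary units, B4 < 0 for d^2
E = np.linalg.eigvalsh(B4 * (O40 + 5 * O44))
print(np.round(E, 3))   # -> -360 (x3, 3T1), 120 (x3, 3T2), 720 (x1, 3A2)
```

The triplet-triplet gap is \(480|B_{4}|\), consistent with the \(8Dq\) splitting quoted above.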
Figure 5: (\(a\)) Tanabe-Sugano diagram for a \(d^{2}\) ion in an octahedral crystal field. The Racah parameters have been chosen such that \(C/B=4.43\)[31]. For the discussion of the crystal field parameters, we have followed Ref. [32] (Table 7.3) and used \(B\)=0.11 eV and \(Dq\)=0.22 eV, indicated by the dashed line. The VO\({}_{6}\) octahedron as viewed from (\(b\)) an isometric viewpoint and (\(c\)) the \(c\)-axis. The octahedron is tetragonally distorted, as evidenced by the shortened bond lengths along \(c\). A further trigonal distortion is present along [111], leading to two different bond lengths in the \(a\)-\(b\) plane. (\(d\)) Eigenvalues of the single-ion Hamiltonian for an octahedrally coordinated V\({}^{3+}\) ion (\(S=1\), \(l=1\)) subject to a crystallographic distortion as described in Sect. IV.1. Red lines indicate the ground state and excited states for which dipole-allowed transitions exist.
Figure 5 (\(a\)) shows that the octahedral crystal field splits the orbital \({}^{3}\)F ground state into two triplets (\({}^{3}T_{1,2}\)) and a singlet (\({}^{3}A_{2}\)), with the \({}^{3}T_{1}\) triplet being the orbital ground state. The splitting between \({}^{3}T_{1}\) and the first excited triplet is \(8Dq=480|B_{4}|\approx 1.8\) eV [32] (in the limit of small \(Dq\)). Each of the multiplets under the octahedral crystal field forms an irreducible representation of the octahedral double group, \(O^{\prime}\)[40]. By the orthogonality of different irreps of the same group, for an operator that is invariant under all octahedral symmetry operations, matrix elements between multiplets are zero [40]. Assuming that further symmetry-lowering terms away from an octahedral field are small, and given the large energy separation in comparison to the temperatures of interest, one can justifiably work in the ground state \({}^{3}T_{1}\) multiplet, neglecting the excited states.
Considering energy levels beyond the \({}^{3}F\) manifold in Fig. 5 (\(a\)), the octahedral field mixes in the excited \({}^{3}P\) state (Fig. 5 \(a\) when \(Dq/B\to 0\)). The lowest energy orbital state of the \({}^{3}P\) manifold has the same symmetry, being a \({}^{3}T_{1}\) triplet [32], and therefore the overall ground state is a linear combination of the two triplet states from the \({}^{3}F\) and \({}^{3}P\) manifolds
\[\ket{\psi(^{3}T_{1})}=\epsilon\ket{{}^{3}F(^{3}T_{1})}+\tau\ket{{}^{3}P(^{3}T _{1})} \tag{1}\]
where \(\epsilon^{2}+\tau^{2}=1\)[32]. One can represent the ground state orbital triplet using the fictitious orbital angular momentum operator, \(\hat{\mathbf{l}}\) via the transformation \(\hat{\mathbf{L}}=\alpha\hat{\mathbf{l}}\), where the projection factor \(\alpha<0\) can be read off the block describing the ground state manifold after projecting the matrices \(\hat{L}_{\alpha}\) onto the space spanned by the eigenvectors of \(\mathcal{H}_{CEF}\). For the \(\ket{{}^{3}F(^{3}T_{1})}\) block, \(\alpha=-\frac{3}{2}\)[41], whilst for the \(\ket{{}^{3}P(^{3}T_{1})}\) state, \(\alpha=+1\). The admixture of these two states leads to an effective projection factor that varies between these two extremal values, \(\alpha=\frac{5}{2}\tau^{2}-\frac{3}{2}\).
It is instructive to discuss the orbital triplet ground state in terms of the electronic orbital basis states, which have the advantage over the eigenstates of the observables \(L^{2}\) and \(L_{z}\) of being real and therefore give a greater connection to the microscopic picture of electron hopping through chemical bonding. In the \(3d\) ions, the outer electrons partially fill \(d\)-orbitals, which are split in an octahedral crystal field into the triply degenerate \(t_{2g}\) level \((d_{xy},d_{yz},d_{xz})\) and the doubly degenerate \(e_{g}\) level \((d_{x^{2}-y^{2}},d_{3z^{2}-r^{2}})\). The energy splitting between \(t_{2g}\) and \(e_{g}\) levels is set by \(10Dq\), and therefore this parameter is key in distinguishing the weak (small \(Dq\)) from the strong (large \(Dq\)) crystal field limits. For weak crystal fields it is more natural to use a complex orbital basis whose states are eigenstates of the observables \(L^{2}\) and \(L_{z}\). In the large crystal field limit, the real \(t_{2g}\) and \(e_{g}\) states are the convenient basis.
Neglecting the spin-orbit coupling and further distortions, the ground state for 2 electrons in the \(3d\) orbitals is triply degenerate, with each member of the ground state manifold having two of the \(t_{2g}\) levels occupied. In terms of these real basis states one can represent the ground state triplet with the admixture caused by the crystal field (Eq. 1) equivalently as
\[\ket{\psi(^{3}T_{1})}=\text{cos}\theta\ket{t_{2g},t_{2g}}-\text{sin}\theta \ket{t_{2g},e_{g}}. \tag{2}\]
The first of these two basis states has two occupied \(t_{2g}\) levels as expected in an octahedral crystal field, and the second has an occupied \(e_{g}\) level but has the same symmetry as \(\ket{t_{2g},t_{2g}}\). By diagonalizing the energy matrix for a \(d^{2}\) ion with both a Coulomb term and an octahedral crystal field [32] (Fig. 5\(a\)) one finds \(\text{tan}2\theta=12B/(9B+10Dq)\) and using the correspondence between the two descriptions (Eqs. 1 and 2) one can quantify the fraction of the \({}^{3}P\) level in the ground state orbital triplet, \(\tau=\frac{1}{\sqrt{5}}(\text{cos}\theta-2\text{sin}\theta)\). With \(B=0.11\) eV and \(Dq=0.22\) eV, as expected for a free V\({}^{3+}\) ion, we have \(\tau\approx 0.27\). We therefore expect a projection factor of \(\alpha\approx-1.32\) in comparison to a value of -1.5 in the absence of mixing. We discuss this parameter later in context of the reported ordered magnetic moment and the possibility for quantum fluctuations.
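These numbers are straightforward to verify; a minimal sketch reproducing the quoted admixture and projection factor:

```python
import numpy as np

# 3P admixture into the 3T1 ground state: tan(2 theta) = 12B / (9B + 10Dq),
# tau = (cos(theta) - 2 sin(theta))/sqrt(5), alpha = (5/2) tau^2 - 3/2.
B, Dq = 0.11, 0.22                     # eV, values used in the text
theta = 0.5 * np.arctan(12 * B / (9 * B + 10 * Dq))
tau = (np.cos(theta) - 2 * np.sin(theta)) / np.sqrt(5)
alpha = 2.5 * tau**2 - 1.5
mu = alpha * 1 + 2 * 1                 # mu/mu_B = alpha*l_z + 2*S_z for (S=1, l=1)
print(f"tau = {tau:.3f}, alpha = {alpha:.3f}, mu = {mu:.2f} mu_B")
# -> tau ~ 0.26-0.27, alpha ~ -1.32, mu ~ 0.68 mu_B, as quoted in the text
```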
#### \(\mathcal{H}_{SO}\) - Spin-orbit coupling
The next largest term (illustrated in Fig. 5 \(d\)) in the single-ion Hamiltonian for \(3d\) ions is the spin-orbit coupling
\[\mathcal{H}_{SO}=\lambda\hat{\mathbf{L}}\cdot\hat{\mathbf{S}}=\alpha\lambda \hat{\mathbf{l}}\cdot\hat{\mathbf{S}}\]
where \(\lambda>0\) for \(d^{2}\) systems [42]. This term splits the ninefold spin-orbital ground state manifold into three \(\hat{\mathbf{j}}=\hat{\mathbf{l}}+\hat{\mathbf{S}}\) levels according to the Landé interval rule with \(j_{eff}\)=0, 1, 2 (Fig. 5 \(d\)). In \(d^{2}\) ions, the ground state is the \(j_{eff}=2\) level, separated from the \(j_{eff}=1\) level by \(|2\alpha\lambda|\)[32]. In the language of the strong crystal field \(t_{2g}\) levels, the effect of the inclusion of spin-orbit coupling is to mix the \(d_{xz}\) and \(d_{yz}\) orbitals in complex combinations such that the three basis states in which the Hamiltonian is diagonal become \(\ket{l_{z}=0}=\ket{d_{xy}}\) and \(\ket{l_{z}=\pm 1}=\frac{1}{\sqrt{2}}(\ket{d_{xz}}\pm i\ket{d_{yz}})\)[43].
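For completeness, the interval rule energies follow from a one-line operator identity; with \(l=1\) and \(S=1\),

\[\hat{\mathbf{l}}\cdot\hat{\mathbf{S}}=\frac{1}{2}\left[j_{eff}(j_{eff}+1)-l(l+1)-S(S+1)\right]\]

so that \(E(j_{eff})=\frac{\alpha\lambda}{2}\left[j_{eff}(j_{eff}+1)-4\right]\), giving \(E(2)=\alpha\lambda\), \(E(1)=-\alpha\lambda\), and \(E(0)=-2\alpha\lambda\). Since \(\alpha<0\) and \(\lambda>0\), the \(j_{eff}=2\) level lies lowest, with a gap of \(|2\alpha\lambda|\) to \(j_{eff}=1\), as stated above.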
#### \(\mathcal{H}_{dis}\) - Distortion
In \(\text{MgV}_{2}\text{O}_{4}\) at the low temperatures of interest here, the symmetry of the local VO\({}_{6}\) octahedra is not \(O_{h}\), since a Jahn-Teller distortion, originating from the orbital degeneracy of the \(d^{2}\) vanadium ion, occurs on the transition from the high temperature cubic phase to the low temperature tetragonal phase [15]. This distortion results in the octahedra being subtly compressed along the fourfold \(\hat{z}\)-axis (illustrated in Fig. 5 \(b\)). The distortion can be modeled by the following term in the Hamiltonian
\[\mathcal{H}_{dis}=\Gamma\left(\hat{l}_{z}^{2}-\frac{2}{3}\right) \tag{3}\]
where \(\Gamma<0\) for a compression. As displayed in Fig. 5 (\(d\)), the effect of this term is to break the fivefold degeneracy of the \(j_{eff}=2\) ground state into three levels: a ground state doublet, an excited state doublet, and an excited state singlet.
Considering the effect of this distortion only on the orbitals (without spin or spin-orbit coupling, and in the limit of \(|\Gamma|\gg|\lambda|\)) and in terms of the strong crystal field basis of real orbitals, the effect of this axial distortion is to break the ground state \(t_{2g}\) orbital triplet degeneracy, yielding, for the two-electron configuration, a ground state orbital doublet and an excited singlet. The distortion lowers the energy of the \(d_{xy}\) orbital relative to the \(d_{xz}\) and \(d_{yz}\) orbitals. If we populate these levels with two \(d\) electrons, applying the Pauli exclusion principle, this results in a doubly degenerate ground state with a hole in either the \(d_{xz}\) or \(d_{yz}\) orbital.
We note that in addition to the primary tetragonal compression driven by orbital degeneracy, the VO\({}_{6}\) octahedra are trigonally distorted (Fig. 5 \(c\)) - a compression along the threefold [111] axis (even within the high temperature cubic phase). Within the manifold of the ground state \(t_{2g}\) orbitals, the multiplet structure under this distortion is the same as for the tetragonal distortion (Eq. 3) [43]. In this paper, we will take advantage of the projection onto the ground state triplet, \(\hat{\mathbf{L}}=\alpha\hat{\mathbf{l}}\), and treat the mixing of the \(e_{g}\) levels as a small perturbation, and so we can collect all contributions to the distortion Hamiltonian into a single distortion parameter, \(\Gamma\). Under a dominant trigonal distortion, the orbital spectrum has the form \(\left|a_{1g}\right\rangle=\frac{1}{\sqrt{3}}(\left|d_{xy}\right\rangle+\left|d_{xz}\right\rangle+\left|d_{yz}\right\rangle)\) with the excited doublet \(\left|e_{g}^{\pi\pm}\right\rangle=\frac{1}{\sqrt{3}}(\left|d_{xy}\right\rangle+e^{\pm 2\pi i/3}\left|d_{xz}\right\rangle+e^{\mp 2\pi i/3}\left|d_{yz}\right\rangle)\)[43]. From crystallographic considerations, the low temperature tetragonal distortion discussed above is expected to be dominant; however, one might expect a small difference in orbital coefficients arising due to the subleading trigonal distortion. Such a situation may be supported by the small \(\sim 8^{\circ}\) canting of the spin towards the apical oxygen reported in some diffraction studies [15].
The orbital state for each electron can be written in general as
\[\left|\psi\right\rangle=\alpha\left|d_{xy}\right\rangle+\beta e^{i\theta} \left|d_{xz}\right\rangle+\gamma e^{i\phi}\left|d_{yz}\right\rangle \tag{4}\]
where \(\alpha^{2}+\beta^{2}+\gamma^{2}=1\) ensures normalization. Based on the single-ion physics of the V\({}^{3+}\) ion (\(3d^{2}\)) alone, the orbital order in MgV\({}_{2}\)O\({}_{4}\) is expected to be intermediate between the regimes of validity of real orbital order (ROO), which occurs when the tetragonal distortion dominates, and complex orbital order (COO), where the spin-orbit coupling dominates, since the energy scales of the distortion and the spin-orbit coupling in the \(3d\) ions are typically nearly comparable [41; 19; 44]. The weightings of the \(d_{\alpha\beta}\) orbitals for each electron within each regime are summarized in Table 1. We will discuss the nature of the orbital order further later in the paper based on the insight gained through our analysis.
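As a consistency check on Table 1, the COO coefficients for electron II in Eq. 4 reproduce the complex orbital \(\ket{l_{z}=+1}\) introduced above. A minimal sketch in the \((d_{xy},d_{xz},d_{yz})\) basis:

```python
import numpy as np

# Eq. (4) in the ordered basis (d_xy, d_xz, d_yz).
def psi(alpha, beta, gamma, theta, phi):
    return np.array([alpha, beta * np.exp(1j * theta), gamma * np.exp(1j * phi)])

# COO, electron II (Table 1): alpha = 0, beta = gamma = 1/sqrt(2), phi = pi/2.
coo_II = psi(0.0, 1 / np.sqrt(2), 1 / np.sqrt(2), 0.0, np.pi / 2)
lz_p1 = np.array([0.0, 1.0, 1.0j]) / np.sqrt(2)  # |l_z=+1> = (|d_xz> + i|d_yz>)/sqrt(2)

print(np.isclose(abs(np.vdot(lz_p1, coo_II)), 1.0))   # True: identical states
print(np.isclose(np.linalg.norm(coo_II), 1.0))        # normalization of Eq. (4)
```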
#### \(\mathcal{H}_{MF}\) - Molecular field below T\({}_{N}\)
The final term present in the single-ion Hamiltonian is the molecular field. This results from a mean field decoupling of the exchange interaction between coupled ions, and is required such that the single ion ground state about which one expands is correct. This term breaks time reversal symmetry
\[\mathcal{H}_{MF}=h_{MF}\hat{S}_{z}\]
as is consistent with the establishment of long-range magnetic order. The magnitude of \(h_{MF}\) will be discussed further once the inter-ion spin Hamiltonian, \(\mathcal{H}_{exch}\), has been introduced.
In terms of the single-ion Hamiltonian, the ground state order is expected to be \((S=1,l=+1)\) with positive \(+l\) (instead of \(-l\)) selected due to the negative spin-orbit coupling constant in \(d^{2}\) ions which promotes alignment of the spin and orbital moments. This expected ground state gives rise to a magnetic moment \(\mu=\mu_{B}(\hat{\mathbf{L}}+2\hat{\mathbf{S}})=\mu_{B}(-1.32+2)=0.68\mu_{B}\) which agrees well with the observed (reduced) magnetic moment, \(\mu=0.47\mu_{B}\). [15] The discrepancy between the observed and calculated magnetic moment suggests the presence of quantum fluctuations, \(S-\Delta S\approx 0.9\), as is to be expected given the small value of \(S\) and reduced dimensionality originating from the frustrated lattice. This reduction in the ordered moment in turn contributes to a reduction in the molecular mean field over what would be expected for the full spin value.
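Collecting the terms of this section, the level scheme of Fig. 5 (\(d\)) follows from diagonalizing \(\mathcal{H}_{SI}\) on the nine \((l=1)\otimes(S=1)\) product states. A minimal sketch; the numerical values of \(\lambda\), \(\Gamma\), and \(h_{MF}\) below are illustrative placeholders rather than fitted parameters:

```python
import numpy as np

# H_SI = alpha*lambda (l.S) + Gamma*(l_z^2 - 2/3) + h_MF * S_z on the
# nine states (l=1) x (S=1); basis ordered as m_l, m_S = +1, 0, -1.
def spin1():
    sz = np.diag([1.0, 0.0, -1.0])
    sp = np.sqrt(2) * np.diag([1.0, 1.0], k=1)      # raising operator
    return 0.5 * (sp + sp.T), -0.5j * (sp - sp.T), sz

lx, ly, lz = spin1()
Sx, Sy, Sz = spin1()
I3, kron = np.eye(3), np.kron

alpha, lam, Gamma, h_mf = -1.32, 13.0, -10.0, 0.0   # meV; placeholder values
H = alpha * lam * (kron(lx, Sx) + kron(ly, Sy) + kron(lz, Sz)) \
    + Gamma * (kron(lz @ lz, I3) - (2.0 / 3.0) * np.eye(9)) \
    + h_mf * kron(I3, Sz)

E = np.linalg.eigvalsh(H)
print(np.round(E - E[0], 2))
# Gamma = h_MF = 0 recovers the j_eff = 2, 1, 0 ladder with gap |2*alpha*lambda|;
# Gamma < 0 splits j_eff = 2 into doublet + doublet + singlet, cf. Fig. 5(d).
```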
### Inter-ion coupling
We now discuss the inter-ion interactions present in MgV\({}_{2}\)O\({}_{4}\). In the high temperature cubic phase, the V\({}^{3+}\) ions lie on an ideal pyrochlore lattice, with each nearest neighbor V-V bond equivalent by symmetry. In the
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline OO & Dom. & \(\alpha\) & \(\beta\) & \(\gamma\) & \(\theta\) & \(\phi\) \\ \hline COO & \(\lambda\) & 1,0 & \(0,\frac{1}{\sqrt{2}}\) & \(0,\frac{1}{\sqrt{2}}\) & 0,0 & \(0,\frac{\pi}{2}\) \\ ROO & \(\Gamma_{[001]}>0\) & 1,0 & 0,1 & 0,0 & 0,0 & 0,0 \\ Trig. COO & \(\Gamma_{[111]}\) & \(\frac{1}{\sqrt{3}},\frac{1}{\sqrt{3}}\) & \(\frac{1}{\sqrt{3}},\frac{1}{\sqrt{3}}\) & \(\frac{1}{\sqrt{3}},\frac{1}{\sqrt{3}}\) & \(0,\frac{2\pi}{3}\) & \(0,-\frac{2\pi}{3}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Orbital wavefunction coefficients in common orbital ordering regimes for electron I, electron II. Coefficients correspond to Eq. 4. Column two states the dominant energy scale in each case.
case of antiferromagnetic nearest-neighbor interactions, the spin arrangement on a pyrochlore lattice is geometrically frustrated [45], opening the possibility of spin liquid states [37; 46] resulting from the large ground state degeneracy. Pyrochlore systems with \(S=1\) have been studied both theoretically [47; 48] and experimentally [49; 50; 51], particularly in search of an understanding of the crossover from quantum to classical spin liquid behavior [48; 51]. The vanadium spinels have also attracted great interest [17; 18; 52; 53; 54] since the orbital degree of freedom allows for Jahn-Teller distortions which break the ground state degeneracy and permit magnetic order. This behavior is observed in MgV\({}_{2}\)O\({}_{4}\) at T\({}_{S}\simeq\) 60 K, where the system becomes tetragonal before ordering antiferromagnetically at T\({}_{N}\simeq\) 40 K [15]. The cubic to tetragonal structural transition renders the nearest neighbor bonds along \([HH0]\) and \([H0H]\) inequivalent.
The orbital order plays an important role in the magnetic inter-ion exchange since both the \(d\)-\(d\) and \(d\)-\(p\) orbital overlap are dependent on the V\({}^{3+}\) orbital ground state, and ultimately where the electron hole lies. This provides a mechanism for breaking the equivalence of magnetic bonds, in terms of the amplitude of both the direct and super-exchange interactions [43]. The question of the magnetic exchange in the vanadium spinels has been discussed at length by Di Matteo _et al_[52] and Perkins _et al_[18]. We will now briefly summarize the findings of the aforementioned references [18; 52] insofar as they are relevant to the magnetic exchange in MgV\({}_{2}\)O\({}_{4}\).
Di Matteo _et al._ considered the limit where the tetragonal distortion is dominant (ROO - the real electronic orbitals discussed above) and the limit where the spin-orbit coupling is dominant (COO - the complex basis described above, applicable when the crystalline electric field \(Dq\) is weak) [52]. Both limits give rise to multiple magnetic and orbital configurations dependent on the underlying physical parameters [52] and described by the general Heisenberg spin Hamiltonian
\[\mathcal{H}_{exch}=\frac{1}{2}\sum_{ij}\mathcal{J}_{ij}\hat{\mathbf{S}}_{i} \cdot\hat{\mathbf{S}}_{j},\]
where the factor of \(\frac{1}{2}\) accounts for double counting. In the case of ROO, only one possible orbital ground state is consistent with the experimentally observed magnetic structure [15]. This ROO ground state comprises strong antiferromagnetic (AFM) bonds in the \(xy\) plane and a weaker ferromagnetic coupling along \([H0H]\) and \([0HH]\). The ROO model thus suggests that MgV\({}_{2}\)O\({}_{4}\) is comprised of strongly-coupled AFM chains in the \(xy\) plane with weak FM coupling between chains. In each tetrahedron, two FM bonds are satisfied and two FM bonds are frustrated.
In contrast, when the spin-orbit coupling is dominant (COO), there are two ground states consistent with the observed magnetic structure. Both possible orbital ground states have strongly coupled AFM chains in the \(xy\) plane, however one of these states has two weak AFM and two weak FM bonds per tetrahedron - thus no bond is frustrated. In the other possible COO ground state, the inter-chain bonds are weak AFM bonds, two of which are frustrated. Based on the parameters obtained from spectroscopy data for vanadium perovskites [55; 56] and typical strengths of the spin-orbit coupling for V\({}^{3+}\) reported in the literature [17; 32], Di Matteo _et al._ concluded that the latter of the two COO states is likely the ground state [52].
Finally we note that the predicted magnetic ground states in the ROO and COO limits differ only in the sign of the inter-chain coupling, with both inter-chain couplings of a similar strength [52]. It is interesting to speculate as to whether these two limits are continuously connected via a phase with zero inter-chain coupling, and hence whether one might be tuned between them through the application of strain, using the tetragonal distortion, \(\Gamma\), as a tuning parameter. It is also interesting to note that a metal-insulator transition has been predicted in MgV\({}_{2}\)O\({}_{4}\) based on ab initio calculations [57].
### Neutron scattering intensity calculation
\begin{table}
\begin{tabular}{c c} Index & Description \\ \hline \(\gamma\), \(\gamma^{\prime}\) & V\({}^{3+}\) sites \\ \(p\), \(q\) & spin-orbital crystal field states \\ \(\alpha\), \(\beta\), \(\mu\), \(\nu\) & Cartesian coordinates \\ \end{tabular}
\end{table}
Table 2: Summary of labeling convention for indices.
Figure 6: Magnetic structure of MgV\({}_{2}\)O\({}_{4}\) as viewed from an isometric viewpoint. The structure comprises chains in the \(a\)-\(b\) plane with the nearest-neighbor bond, \(J_{2}\). The inter-chain bond \(J_{1}\) is frustrated and couples chains along \([H0H]\) and \([0HH]\). A longer range bond, \(J_{3}\), couples perpendicular chains.
In order to calculate the neutron scattering response, we employ a spin-orbital exciton model in terms of Green's functions, as applied previously to describe multi-level systems [19; 20; 21; 58; 59; 41]. A full derivation for a general single-\(\mathbf{Q}\) multi-level magnetic system is presented in Ref. [59]; here we quote only the key results. The neutron scattering intensity is proportional to the structure factor
\[S(\mathbf{q},\omega)=g_{L}^{2}f^{2}(\mathbf{q})\sum_{\alpha\beta}(\delta_{ \alpha\beta}-\hat{q}_{\alpha}\hat{q}_{\beta})S^{\alpha\beta}(\mathbf{q}, \omega),\]
where \(g_{L}\) is the Landé g-factor, \(f(\mathbf{q})\) is the V\({}^{3+}\) form factor and \(S^{\alpha\beta}(\mathbf{q},\omega)\) is the partial dynamical structure factor (with Cartesian indices \(\alpha,\beta\), Table 2), defined as
\[S^{\alpha\beta}(\mathbf{q},\omega)=\frac{1}{2\pi}\int dte^{i\omega t}\langle \hat{S}^{\alpha}(\mathbf{q},t)\hat{S}^{\beta}(-\mathbf{q},0)\rangle.\]
The factor preceding the partial dynamical structure factor is the polarization factor, which picks out the component of the structure factor perpendicular to the scattering wavevector, to which the scattered neutrons are sensitive. We note that neutron scattering is sensitive to magnetic correlations through interaction with the spin; it is through the presence of spin-orbit coupling in the magnetic Hamiltonian described above that neutrons become sensitive to orbital effects. The structure factor can be related to the Green's function by way of the fluctuation-dissipation theorem
\[S^{\alpha\beta}(\mathbf{q},\omega)=-\frac{1}{\pi}\frac{1}{1-\exp(-\omega/k_{ \mathrm{B}}T)}\Im G^{\alpha\beta}(\mathbf{q},\omega).\]
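As a concrete illustration of the two relations above, the following minimal Python sketch contracts a \(3\times 3\) partial structure factor with the polarization factor and applies the fluctuation-dissipation theorem. The function names, the input conventions and the use of a precomputed \(\Im G\) are our own assumptions, not part of the formalism of the references.

```python
import numpy as np

kB = 0.08617  # Boltzmann constant in meV/K

def structure_factor(omega_meV, imG, T_K):
    """Fluctuation-dissipation relation:
    S(q, w) = -(1/pi) * [1 - exp(-w/kT)]^{-1} * Im G(q, w)."""
    bose = 1.0 / (1.0 - np.exp(-omega_meV / (kB * T_K)))
    return -bose * imG / np.pi

def neutron_intensity(qhat, S_ab, gL, f_q):
    """Contract the 3x3 Cartesian partial structure factor S_ab with the
    polarization factor (delta_ab - qhat_a qhat_b) and the prefactors
    gL^2 f(q)^2 of the total structure factor S(q, w)."""
    pol = np.eye(3) - np.outer(qhat, qhat)
    return gL**2 * f_q**2 * np.sum(pol * S_ab)
```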
The Green's function for multi-level spin systems can be written as a Dyson equation in which the propagator describes the dynamics of the single ion at mean-field level, with the inter-ion interaction treated at one-loop level through the self energy \(\underline{\mathcal{J}}(\mathbf{q})\). The single ion Green's function can be written as
\[g^{\alpha\beta}_{\tilde{\gamma}\tilde{\gamma}^{\prime}}(\omega)=\sum_{qp}\frac {S^{\tilde{\gamma}}_{\alpha qp}S^{\tilde{\gamma}^{\prime}}_{\beta pq}\phi_{qp} }{\omega-(\omega_{p}-\omega_{q})}, \tag{5}\]
where \(\omega_{p}\) is the single-ion eigenvalue of state \(\ket{p}\) and \(S^{\tilde{\gamma}}_{\alpha qp}=\bra{p}\hat{S}^{\tilde{\gamma}}_{\alpha}\ket{q}\). \(\phi_{qp}=(f_{q}-f_{p})\), where \(f_{q}\) is the Bose occupation factor of level \(q\). The site indices are denoted as \(\tilde{\gamma},\tilde{\gamma}^{\prime}\) (Table 2). We perform a coordinate transformation onto the rotating frame so that the molecular mean field is non-oscillatory. In the rotating frame, the single-ion Green's function of each magnetic ion is independent of the moment direction and varies only between crystallographically inequivalent sites. In the rotating frame the Green's function is defined as
\[\begin{split}\tilde{G}^{\alpha\beta}_{\tilde{\gamma}\tilde{ \gamma}^{\prime}}(\mathbf{q},\omega)=g^{\alpha\beta}_{\tilde{\gamma}\tilde{ \gamma}^{\prime}}(\omega)\delta_{\tilde{\gamma}\tilde{\gamma}^{\prime}}\\ +\sum_{\gamma^{\prime}}^{\mu\nu}\tilde{\mathcal{J}}^{\mu\nu}_{ \tilde{\gamma}\gamma^{\prime}}(\mathbf{q})g^{\alpha\mu}_{\tilde{\gamma}\tilde {\gamma}}(\omega)\tilde{G}^{\nu\beta}_{\gamma^{\prime}\tilde{\gamma}^{\prime}} (\mathbf{q},\omega)\end{split} \tag{6}\]
which can be solved as a matrix equation. Eq. 6 contains the Fourier transformed exchange coupling in the rotating frame, \(\tilde{\mathcal{J}}^{\mu\nu}_{\tilde{\gamma}\gamma^{\prime}}(\mathbf{q})\), which takes the form of a matrix of dimension \(3N\times 3N\), where \(N\) is the number of sites in the unit cell. At this point it is useful to examine the structure of MgV\({}_{2}\)O\({}_{4}\). Below T\({}_{N}\simeq 40\) K, long-range antiferromagnetic order is established, with \(\mathbf{Q}=(0,0,1)\)[15]. The full crystallographic and magnetic structure can be described by an eight site unit cell, as considered in Ref. [18].
In terms of this unit cell (Table 3), no rotation between neighboring unit cells is required and, assuming Heisenberg coupling, in the rotating frame one has
\[\underline{\tilde{\mathcal{J}}}^{\mu\nu}_{\gamma\gamma^{\prime}}(\mathbf{q}) =X^{\prime}\left(\underline{\tilde{\mathcal{J}}}_{\gamma\gamma^{\prime}}( \mathbf{q})\otimes\mathbb{I}_{3}\right)X \tag{7}\]
where \(X=\mathrm{diag}(R_{1},...,R_{n})\) rotates the spins within the unit cell. The matrix \(R_{n}\) is the \(3\times 3\) rotation matrix which rotates the spin on site \(n\) onto the \(\hat{z}\)-axis. Similarly, \(X^{\prime}=\mathrm{diag}(R_{1}^{T},...,R_{n}^{T})\). Finally, the Green's function in the laboratory frame is found by rotating the Green's function back using the matrices \(X,X^{\prime}\)
\[G^{\alpha\beta}_{\tilde{\gamma}\tilde{\gamma}^{\prime}}(\mathbf{q},\omega)=X \tilde{G}^{\alpha\beta}_{\tilde{\gamma}\tilde{\gamma}^{\prime}}(\mathbf{q}, \omega)X^{\prime}.\]
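To make the preceding chain of equations concrete, the sketch below evaluates the single-ion Green's function of Eq. 5, assembles the block-diagonal propagator, applies the rotating-frame exchange of Eq. 7 and solves the Dyson equation 6 as a linear system before rotating back to the laboratory frame. The small imaginary broadening \(\eta\) and all helper names are our own assumptions, introduced for numerical convenience.

```python
import numpy as np

def single_ion_g(omega, S_ops, omega_p, f_p, eta=1e-3):
    """3x3 single-ion Green's function of Eq. 5 for one site.
    S_ops[a][p, q] = <p| S_a |q>, omega_p are the single-ion eigenvalues
    and f_p the Bose occupation factors; eta is a small broadening."""
    n = len(omega_p)
    g = np.zeros((3, 3), dtype=complex)
    for a in range(3):
        for b in range(3):
            for q in range(n):
                for p in range(n):
                    g[a, b] += (S_ops[a][p, q] * S_ops[b][q, p]
                                * (f_p[q] - f_p[p])
                                / (omega - (omega_p[p] - omega_p[q]) + 1j * eta))
    return g

def rpa_green(g_blocks, Jq, R_list):
    """Solve Eq. 6 as a 3N x 3N matrix equation and rotate to the lab frame.
    g_blocks: N single-ion 3x3 blocks; Jq: J(q) (x) I_3 in the lab frame;
    R_list: rotations taking each ordered moment onto the z-axis."""
    N = len(g_blocks)
    g = np.zeros((3 * N, 3 * N), dtype=complex)
    X = np.zeros((3 * N, 3 * N))
    for n in range(N):
        g[3*n:3*n+3, 3*n:3*n+3] = g_blocks[n]
        X[3*n:3*n+3, 3*n:3*n+3] = R_list[n]
    J_rot = X.T @ Jq @ X                                   # Eq. 7
    G_rot = np.linalg.solve(np.eye(3 * N) - g @ J_rot, g)  # Eq. 6
    return X @ G_rot @ X.T                                 # lab frame
```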
We now apply this theory to the neutron scattering data presented in Sect. III.
### Application to MgV\({}_{2}\)O\({}_{4}\)
The excitonic dispersion relation was found, according to the model presented in the previous section, by locating the poles of the Green's function, \(G^{\alpha\beta}_{\tilde{\gamma}\tilde{\gamma}^{\prime}}(\mathbf{q},\omega)\). In Fig. 5 (\(d\)), the red lines illustrate the dipole allowed transitions expected from \(\mathcal{H}_{MF}\). Based on this, we would expect two branches: a lower, strongly dispersive branch corresponding to transitions within the ground state \(j_{eff}\)=2 manifold, and another transition from the ground state to a \(j_{eff}\)=1 level. Given that this is a "crystal-field-like" transition, we might expect this second, higher energy, dipole allowed transition to be weakly dispersive in momentum, akin to the crystal field transitions commonly observed for rare earth ions.
The experimentally observed dispersion was found both along the V-V chain direction and perpendicular to the chain by fitting Gaussian peaks to one-dimensional constant energy cuts through the neutron scattering data. These data points were then fitted using the dispersion derived according to the excitonic model in order to extract physical parameters. The data, in comparison to theory, are summarized through constant momentum and energy slices in Figs. 7-9. The excitonic theory qualitatively shows two sets of modes: low-energy fluctuations corresponding to transitions within the ground state \(j_{eff}=2\) manifold, and another very weak, and comparatively dispersionless, high energy mode corresponding to transitions to the excited \(j_{eff}=1\) spin-orbit manifold. We have plotted the weak mode on the same linear intensity scale in Fig. 7 to illustrate the strong difference in intensity between these two branches of excitations. We note that the excitonic theory predicts a very weak and comparatively flat mode at \(\sim 10\) meV. Due to the presence of overlapping domains and a comparable magnetic zone center excitation gap, we were not able to reliably resolve this mode in our data.
The inter-chain bond, \(J_{1}\), was determined to be negligible since the inclusion of this bond (in the presence of finite \(J_{3}\)) gives rise to a splitting of the lower dispersive mode at approximately \(Q\geq\pm 0.1\) (r.l.u.) which is not seen in the data. Given the absence of any such signatures, we take the magnitude of \(J_{1}\) to be negligible, but note that the frustrated nature of this bond makes its determination difficult experimentally. The neglect of \(J_{1}\) is further justified given that previous calculations have suggested that this bond is weakly ferromagnetic or weakly antiferromagnetic in the limits of ROO and COO
Figure 8: \(a)\) Constant energy slice through the inter-multiplet mode at \(E\sim 50\) meV. \(b)\) One-dimensional cut through \(Q=(2,0)\), with the multi-level spin wave calculation plotted in red. The Lorentzian half-width has been chosen to approximately match the energy resolution on MAPS, \(\epsilon=1.5\). The calculation has been integrated along the \(c^{*}\) direction across the full width of the Brillouin zone. The presence of domain coexistence and the coupling of energy transfer and the wavevector along \(c^{*}\) have been neglected. The differences between calculations and data at low energies originate from an over subtraction of the data with the chosen background.
Figure 7: Calculation of the component of the structure factor \(a)\) parallel and \(b)\) perpendicular to the chain direction. Overplotted is the extracted dispersion from neutron scattering. The green data points are from the EIGER data set and the red and blue points are from the MERLIN \(E_{i}=24\) meV and \(49\) meV data sets respectively. The white solid line indicates the approximate peak position of the weak-intensity high energy mode and the dashed lines indicate the approximate width.
respectively [52]. In fact, the splitting caused by \(J_{1}\) is a hybridization between the lifted zero energy mode (which acquires some dispersion due to finite \(J_{3}\)) and the low energy mode due to the intra-chain coupling. As pointed out in Ref. [18], the presence of a zero energy mode corresponds to the degeneracy associated with the rotation of the chains in the \(xy\)-plane. This degeneracy is lifted by the anisotropy provided by spin-orbit coupling and crystallographic distortions. A small, but finite, \(J_{3}\) is expected based on the dispersive excitation seen in Fig. 3 (\(b\)), which is consistent with coupling of parallel chains. We note that there exists a further neighbor bond \(J_{4}\) with a bond length marginally longer than that of \(J_{3}\). Inclusion of this bond again splits the lower dispersive mode but does not appreciably change the bandwidth or gap, provided that it is small. In the absence of any observed splitting, and in the spirit of writing down a minimal model, we neglect this term.
For the lower energy dispersive mode, the data were fitted by taking one-dimensional constant energy cuts through the neutron data to extract a dispersion curve, to which the poles of the Green's function were fit as the parameters were varied (Fig. 7). To fit the higher energy mode at \(\sim 50\) meV (Fig. 8), a constant \(\mathbf{q}\) cut was made at (2,0,0) and subtracted as a background. This method is justified at high energy transfers near the \(\sim 50\) meV mode; however, it cuts through magnetic intensity at lower energies and therefore overestimates the background there. This is reflected in Fig. 8, which shows an overestimate of the intensity by the exciton model at low energies. The fitted parameters are listed in Table 4. Stated uncertainties represent a nominal \(15\%\) error to reflect the inherent uncertainties associated with the experimental measurements (such as spectrometer resolution) rather than the error bars associated with the numerical fitting procedure, which are unphysically small. The approximate magnitude of the intra-chain coupling \(J_{2}\) is consistent with that theoretically predicted in Ref. [18] using a Kugel-Khomskii model [60]. A reduction of the molecular mean field strength from that expected for an \(S=1\) antiferromagnet,
\[\mathcal{H}_{MF}=h_{MF}\hat{S}_{z},\qquad h_{MF}=\Theta\left(-2J_{2}+4J_{3}\right), \tag{8}\]
where \(\Theta<1\), is present due to quantum fluctuations. A reduction factor of \(\Theta\approx 0.9\) is expected from the observed magnetic moment [15] and the orbital projection factor for the free ion values [32]. This parameter is fixed through a comparison between the calculated magnetic moment (discussed above) and the measured magnetic moment. The calculated magnetic moment is sensitive to the projection factor \(\alpha\), which is tied to the single-ion parameters \(Dq\) and the Racah parameter \(B\). There is, however, a degree of uncertainty about the single-ion parameters, \(B\), \(C\) and \(Dq\), in the crystal environment, hence \(\Theta\) was refined in our analysis of the neutron spectroscopy results. The refined value of \(\Theta\) implies that the true value of the orbital projection factor is reduced from that expected in a free ion. One possible origin of this is the presence of a nephelauxetic effect, which reduces the electron Coulomb repulsion and hence the value of the Racah \(B\) parameter. We note that reductions of \(\sim 10\)-\(20\)\(\%\) have been reported [61], consistent with the reduction that would be required to explain the value of \(\Theta\) fitted here.
The calculated structure factor is plotted in Fig. 7 (\(a-c\)). Overplotted are the extracted peak positions from the neutron scattering data taken on EIGER and MERLIN. The calculation describes well both the dispersion and the intensity variation throughout the Brillouin zone.
In the experimental section above it was noted that single crystal samples should be expected to exhibit domain coexistence as was observed in Ref. [16]. This occurs below the structural transition and results from the overall crystal structure retaining the average high temperature cubic symmetry in the absence of strain. We now briefly note the consequences of domain coexistence on the neutron scattering spectrum. A rotation matrix can be defined for each domain which transforms between the crystallographic basis vectors of each domain, \(d\),
\begin{table}
\begin{tabular}{c c c} Parameter & Bond Distance (Å) & Value (meV) \\ \hline \(J_{1}\) & 2.885 & \(\approx 0\) \\ \(J_{2}\) & 2.885 & 19.04 (\(\pm 2.86\)) \\ \(J_{3}\) & 4.997 & -0.173 (\(\pm 0.026\)) \\ \hline \(\alpha\lambda\) & - & -8.8 (\(\pm 1.3\)) \\ \(\Gamma\) & - & -42.2 (\(\pm 6.33\)) \\ \(\Theta\) & - & 0.8334 (\(\pm 0.1250\)) \\ \end{tabular}
\end{table}
Table 4: Spin-orbit exciton parameters.
Figure 9: Constant energy slice at \(10\pm 1\) meV, showing the comparison between the measured data from MERLIN and the excitonic model with all three domains summed as described in the text. The intensities have been integrated along the \(c^{*}\) direction (\(L=-1\pm 0.2\) r.l.u).
\[\underline{\underline{R}}_{0}\mathbf{\hat{Q}}_{0}=\mathbf{\hat{Q}}_{d} \tag{9}\]
with the \(\mathbf{\hat{Q}}_{0}\) defining the nominal reference basis plotted in Fig. 5. The crystallographic structure of MgV\({}_{2}\)O\({}_{4}\) comprises chains along [110] and so one can define the rotation matrices
\[\underline{\underline{R}}_{0} =\begin{pmatrix}1&0&0\\ 0&1&0\\ 0&0&1\end{pmatrix} \tag{10a}\] \[\underline{\underline{R}}_{1} =\begin{pmatrix}1&0&0\\ 0&0&1\\ 0&1&0\end{pmatrix}\] (10b) \[\underline{\underline{R}}_{2} =\begin{pmatrix}0&0&1\\ 0&1&0\\ 1&0&0\end{pmatrix} \tag{10c}\]
corresponding to three inequivalent domains. Fig. 9 shows a constant energy slice with the scattering for all three domains superposed. The diagonal modes originate from the domains with basis vectors \(\mathbf{\hat{Q}}_{1}\) and \(\mathbf{\hat{Q}}_{2}\), with the horizontal and vertical lines originating from the \(\mathbf{\hat{Q}}_{0}\) domain. A small difference in the position of the diagonal modes likely originates from the fact that the unit cell is tetragonal with \(c/a=0.994\)[15]. In the calculation, the domain population has been assumed to be equal; however, the relative intensities of the streaks in the data indicate that this is not the case. This is in agreement with previous diffraction data [16].
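A minimal sketch of this domain superposition, assuming a user-supplied single-domain structure factor and (by default) the equal populations used in the calculation:

```python
import numpy as np

# Domain rotation matrices of Eqs. (10a)-(10c)
R_domains = [np.eye(3),
             np.array([[1., 0., 0.], [0., 0., 1.], [0., 1., 0.]]),
             np.array([[0., 0., 1.], [0., 1., 0.], [1., 0., 0.]])]

def s_all_domains(q, omega, s_single, weights=(1/3, 1/3, 1/3)):
    """Superpose the scattering of the three inequivalent domains by
    rotating the wavevector as in Eq. 9; s_single(q, w) is user supplied."""
    q = np.asarray(q, dtype=float)
    return sum(w * s_single(R @ q, omega) for w, R in zip(weights, R_domains))
```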
### Orbital order
There has been a great deal of interest in the nature of the orbital order in MgV\({}_{2}\)O\({}_{4}\). Much of this has centered around whether the system exhibits real or complex orbital order. We will briefly summarize both orbital ordering schemes, remarking on how our results influence this discussion.
We begin by considering the isolated single ion in an octahedral environment, neglecting the inter-ion coupling. An orbitally degenerate crystal field ground state can lower its energy via an octahedral distortion [62] (the Jahn-Teller effect). An elongation of the octahedron lowers the \(d_{xz}\) and \(d_{yz}\) orbitals by \(\frac{1}{2}E_{JT}\) and raises \(d_{xy}\) by \(E_{JT}\)[43]. The lower energy doublet is then split by spin-orbit coupling into \(\left|l_{z}=\pm 1\right\rangle=\frac{1}{\sqrt{2}}(d_{xz}\pm id_{yz})\) by \(\pm\frac{1}{2}\alpha\lambda\). For a \(d^{2}\) ion, assuming the spin-orbit coupling does not lift \(\left|l_{z}=-1\right\rangle\) above the \(d_{xy}\) orbital, both \(\left|l_{z}=-1\right\rangle\) and \(\left|l_{z}=1\right\rangle\) are occupied and the resultant ground state has quenched orbital angular momentum.
Conversely, if the octahedron is compressed, the Jahn-Teller spectrum is inverted and the \(d_{xy}\) level is lowered, with the \(d_{xz}\) and \(d_{yz}\) levels lying at \(+\frac{1}{2}E_{JT}\). The ground state thus has one electron in the \(d_{xy}\) orbital and one in the \(\left|l_{z}=+1\right\rangle\) level. In this case the orbital angular momentum is unquenched owing to the orbital degeneracy which is broken by spin-orbit coupling, lowering the ground state energy. Based on single-ion physics alone, an elongation is favored if \(\left|\alpha\lambda\right|>E_{JT}=\frac{2}{3}\Gamma\). From the fitted values in Table 4 it is clear that the compressed structure is energetically favored by the comparatively large value of \(\Gamma\), hence the parameters extracted from the neutron scattering measurements are consistent with the refined crystallographic structure.
We now turn to the effect of orbital order on the inter-ion exchange in MgV\({}_{2}\)O\({}_{4}\). Di Matteo _et al._ and Perkins _et al._ discussed this in Refs. [18; 52] by way of a Kugel-Khomskii model [60] for the superexchange. Here we only quote the key results. Two states were found to be consistent with the observed magnetic and crystallographic structure. The energy of the COO state is \(E_{COO}=-\tilde{J}_{1}-\frac{1}{2}\tilde{J}_{2}(3+2S^{2})-\lambda-\frac{1}{2}E_{JT}\), where \(\tilde{J}_{1}=J(1-\eta)/(1-3\eta)\), \(\tilde{J}_{2}=J(1+\eta)(1+2\eta)\), \(J=t^{2}/U_{1}\) and \(\eta=J_{H}/U_{1}\). In terms of these variables, \(J_{2}\)=\(\tilde{J}_{2}\) and, assuming we have COO, \(J_{1}=\tilde{J}_{2}-2\tilde{J}_{0}\), with \(\tilde{J}_{0}=\eta J/(1-3\eta)\). We thus find that our extracted exchange parameters suggest \(\eta\approx 0.19\) and \(J\approx 22.1\) meV, which are close to the values suggested by photoemission spectroscopy (\(\eta_{exp}=0.11\), \(J_{exp}=20.4\) meV) [63; 18]. In contrast, for ROO one has \(J_{1}=-\tilde{J}_{0}\), such that for \(J_{1}\approx 0\) one expects \(\eta\approx 0\). Such a situation is clearly unphysical, as it suggests either a vanishing Hund's coupling or an extremely large Coulomb term. For reasonable values of \(\eta\approx 0.1\) and with \(J\approx 20\) meV, the ROO model predicts \(\left|J_{1}\right|\approx 3\) meV, which would give rise to a splitting of the low energy dispersive mode that would likely be resolvable in neutron scattering measurements.
As shown by Perkins _et al._ in Ref. [18], qualitative differences exist between the spectra expected for ROO and COO: namely, an optical mode is present in the case of COO, whereas ROO gives rise only to an acoustic mode. This optical mode finds its origin in the unquenched orbital angular momentum, which permits inter-multiplet transitions between the \(j_{eff}=2\) and \(j_{eff}=1\) levels (Fig. 5 \(d\)). Our neutron scattering experiments confirm the existence of this optical mode at \(E\sim 50\) meV, which we were able to model within a multi-level excitonic formalism. We find that the resultant single ion ground state, consistent with the fitted parameters (Table 4), is the \(\left|l_{z}=1,S_{z}=1\right\rangle\) state presented in Ref. [52]. In this state, the effective orbital angular momentum aligns with the spin moment on all V\({}^{3+}\) sites. It should be noted that, due to the negative projection factor, this corresponds to antialignment of the spin and _true_ orbital moments, which gives rise to the reduction in the magnetic moment measured in diffraction [15]. By treating the spin-orbit coupling, tetragonal distortion and molecular field on the same level using an excitonic model, we have demonstrated that MgV\({}_{2}\)O\({}_{4}\) can be understood to behave according to the COO picture presented in Refs. [18; 52], even though the distortion, spin-orbit coupling and molecular mean field strengths are comparable. This analysis helps to explain the apparent discrepancy between the previous neutron scattering results [15] and the predicted spectrum for COO [18], which considered \(|\Gamma|\ll|\lambda|\).
We note that Di Matteo _et al._ [52] propose several types of COO, ranging from all V\({}^{3+}\) ions being in a complex orbital state to a mixture of real and complex orbital orders on different sites. In our theoretical analysis discussed above, we have considered the case in which all V\({}^{3+}\) ions are in a complex orbital state. Qualitatively, a mixture of real and complex orbital states would decrease the intensity of, and likely damp, the high energy spin-orbit exciton corresponding to excitations from the ground state \(j_{eff}=2\) manifold to the higher energy \(j_{eff}=1\) spin-orbit manifold. Given how weak this mode is, such a change would likely not be observable. Another point against a mixture of complex and real orbitals is the crystal lattice symmetry, which is inconsistent with such a ground state. We therefore consider here the case of full complex orbital order (COO) in MgV\({}_{2}\)O\({}_{4}\).
## V Hysteretic magnetic correlations
There are three different spin-orbital phases in MgV\({}_{2}\)O\({}_{4}\) on cooling from high temperature. (1) At high temperatures exceeding T\({}_{S}\), the nuclear unit cell is cubic and the magnetism is paramagnetic. (2) In the temperature regime T\({}_{N}\)\(<\) T \(<\) T\({}_{S}\), the structural unit cell distorts to tetragonal. (3) At low temperature, T \(<\) T\({}_{N}\), antiferromagnetic order sets in. As displayed in Fig. 5 (\(d\)), the single-ion spin-orbital ground state is different in each of these phases, transitioning from a degenerate \(j_{eff}=2\) state in the high temperature cubic phase to an orbital doublet below T\({}_{S}\), with this degeneracy finally split by a molecular field in the low temperature antiferromagnetic phase. This temperature dependence opens up the possibility of hysteretic effects on cooling and warming through the intermediate phase, given the orbital degeneracy present, which is broken on cooling through the antiferromagnetic transition by a Zeeman-like molecular field. We now investigate these spin-orbital ground states using energy-integrated magnetic diffuse scattering.
To investigate the low-energy critical magnetic fluctuations as a function of temperature in MgV\({}_{2}\)O\({}_{4}\), we studied the magnetic cross section using the DNS polarized diffractometer.
Figure 10: The energy integrated magnetic cross section extracted from the scattering of polarized neutrons using the DNS diffractometer (FRM2). The arrows at the top and bottom of the figure indicate the cooling and heating history. Upon cooling to 3 K and reheating, the diffuse scattering evolves hysteretically, with no return to the short range ordered phase at T = 75 K. No such hysteresis is found if the sample is only cooled to 50 K and reheated (black arrow).
The combined magnetic intensities from the diffuse scattering measurements are displayed in Fig. 10. We note that these measurements display a hysteresis depending on whether the sample is cooled below the magnetic ordering temperature, T\({}_{N}\simeq\) 40 K. On cooling to T \(=\) 75 K from 120 K (Fig. 10 \(a\to b\)), one sees the development of magnetic diffuse scattering, consistent with a spin-frustrated pyrochlore system above the structural transition. On further cooling to T \(=\) 50 K (Fig. 10 \(b\to c\)), the magnetic scattering changes into chain-like rods below the structural transition. Upon heating from T \(=\) 50 K back to 75 K (Fig. 10 \(c\to b\)), one recovers these short-ranged magnetic correlations. However, if the sample is cooled below the long-range magnetic transition, T\({}_{N}\), and then warmed up to T \(=\) 75 K (Fig. 10 \(a\to d\)), a different behavior is observed. Whilst at T \(=\) 50 K (Fig. 10 \(d\to e\)) one still observes chain-like scattering, above the structural transition, T\({}_{S}\) (Fig. 10 \(e\to f\)), the short-ranged correlations are absent and no magnetic scattering is observable. This indicates that either the magnetic fluctuations have moved outside the energy window determined by kinematics on DNS with fixed E\({}_{i}\)=4.64 meV, or the critical fluctuations are highly extended in momentum and energy.
We suggest that the explanation for this hysteresis, tied to cooling through the magnetic ordering transition at T\({}_{N}\), lies in the orbital order present in MgV\({}_{2}\)O\({}_{4}\). At T \(=\) 75 K, above the structural transition, in terms of a real orbital basis the ground state has equal occupation of the \(|xy\rangle\), \(|xz\rangle\) and \(|yz\rangle\) orbitals (Fig. 5). In this state the strength of the superexchange is the same for both inter- and intra-chain bonds, \(J_{1}=J_{2}\), and MgV\({}_{2}\)O\({}_{4}\) is a canonical frustrated pyrochlore. As one cools below the structural transition, the local octahedra tetragonally compress, leading to a doubly degenerate state with \(l_{z}=\pm 1\). The average occupancy of both the \(|xz\rangle\) and \(|yz\rangle\) orbitals is \(\frac{1}{2}\), and so the strength of the inter-chain coupling \(J_{1}\) is reduced, rendering the system quasi-one-dimensional, which gives rise to rods in the magnetic diffuse scattering. At this point, long-range magnetic order has yet to be established and so the orbital ground state is doubly degenerate. Below the antiferromagnetic ordering temperature, three-dimensional order is established thanks to the cooperative effect of the third-nearest-neighbor coupling \(J_{3}\) and the anisotropy provided by the distortion and spin-orbit coupling.
One key effect of longer-range coupling (beyond \(J_{1}\) and \(J_{2}\)) is to reduce the degree to which the spin moment is suppressed by fluctuations, both thermal and those due to the logarithmic divergence of spin fluctuations in the one-dimensional quantum antiferromagnet (as per the Mermin-Wagner theorem [64]). Upon establishment of long-range order, the orbital doublet ground state is split by the molecular mean field, establishing the COO ground state of Refs. [52] and [18]. As the sample is then reheated, at T \(>\) T\({}_{N}\) long-range magnetic order is lost as fluctuations build, but we speculate that the orbital moments remain frozen in their COO configuration, as the tetragonal symmetry prevents them from reorienting. Finally, as one reaches the structural transition from below, it should be expected that the orbital moments reorient. However, given the frozen nature of the orbital moments in the ground state established by magnetic order, this is prevented by a potential barrier established by the low temperature molecular fields. Overcoming this barrier requires a thermal energy equivalent to the molecular field, of order \(\sim\) 10 meV, i.e. above \(\sim\) 100 K. We speculate that, through the Zeeman energy applied by the molecular field induced by magnetic ordering, MgV\({}_{2}\)O\({}_{4}\) displays an orbital memory of the low temperature spin order.
## VI Conclusions
In conclusion, we have mapped out the spin-orbital fluctuations in MgV\({}_{2}\)O\({}_{4}\) using neutron spectroscopy, observing two distinct branches. We have parameterized these in terms of a spin-orbital excitonic theory in which the single-ion Hamiltonian is used to determine the quantized ground state, which is then coupled on a lattice using the RPA. The results are strongly supportive of a COO ground state. We then used this model to understand the hysteretic magnetic fluctuations through the three different spin-orbital phases.
###### Acknowledgements.
The authors thank E. Chan and M. Mourigal for useful discussions. H. L. was co-funded by the ISIS facility development studentship programme and the EPSRC. C. S. acknowledges support from the Carnegie Trust for the Universities of Scotland, EPSRC, and the STFC. P.M.S. acknowledges support from the California NanoSystems Institute through the Elings Fellowship program. S.D.W. acknowledges financial support from the US Department of Energy (DOE), Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Grant No. DE-SC0017752. The authors gratefully acknowledge the financial support provided by the Jülich Centre for Neutron Science (JCNS) to perform the neutron scattering measurements at the Heinz Maier-Leibnitz Zentrum (MLZ), Garching, Germany.
|
2306.17665 | Fuzzy Dark Matter in Relativistic Stars | Fuzzy dark matter (FDM), a practical alternative to cold dark matter, can
exist in compact stars. Here, applying the FDM equation of state (EoS)
constrained by CMB and large-scale structure data, we calculate the structure
of relativistic stars in the presence of FDM. For this aim, the EoS for the
visible matter in neutron stars, quark stars, and hybrid stars from the
observational data are employed. A piecewise polytropic EoS constrained by the
observational data of GW170817 and the data of six low-mass X-ray binaries with
thermonuclear burst or the symmetry energy of the nuclear interaction describes
the neutron star matter. For quark star matter, we apply the EoSs within the
Bayesian statistical approach using the mass and radius measurements of PSR
J0030+0451 from NICER. Employing the two-fluid formalism, we study the
structure of FDM admixed relativistic stars. | Zeinab Rezaei | 2023-06-30T13:52:19Z | http://arxiv.org/abs/2306.17665v1 | # Fuzzy Dark Matter in Relativistic Stars
###### Abstract
Fuzzy dark matter (FDM), a practical alternative to cold dark matter, can exist in compact stars. Here, applying the FDM equation of state (EoS) constrained by CMB and large-scale structure data, we calculate the structure of relativistic stars in the presence of FDM. For this aim, the EoS for the visible matter in neutron stars, quark stars, and hybrid stars from the observational data are employed. A piecewise polytropic EoS constrained by the observational data of GW170817 and the data of six low-mass X-ray binaries with thermonuclear burst or the symmetry energy of the nuclear interaction describes the neutron star matter. For quark star matter, we apply the EoSs within the Bayesian statistical approach using the mass and radius measurements of PSR J0030+0451 from NICER. Employing the two-fluid formalism, we study the structure of FDM admixed relativistic stars.
keywords: (cosmology:) dark matter, stars: interiors, cosmology: observations.
## 1 Introduction
Fuzzy dark matter (FDM), composed of ultralight bosonic particles with \(m\sim 10^{-22}eV\), has been proposed to solve different problems, such as the disagreement between cold dark matter (DM) predictions and small-scale observations, the missing satellite problem, and the core-cusp problem in dwarf galaxies (Khlopov et al., 1985; Hu et al., 2000; Hui et al., 2017; Burkert, 2020; Niemeyer, 2020). FDM, as a Bose-Einstein condensate with quantum effects on scales of the order of kpc (the de Broglie wavelength of the particles), experiences quantum pressure as well as gravitational attraction. Due to the balance between the quantum pressure and gravity, a soliton core forms near the center of the FDM halo, and the core structure can reveal the properties of the FDM particles (Widrow and Kaiser, 1993). The behavior of FDM at large scales is not different from that of cold DM, while the quantum nature of FDM influences structure formation at small scales (Hu et al., 2000; Guth et al., 2015) and delays galaxy formation via macroscopic quantum pressure (Church et al., 2019). The wavelike nature of FDM results in the formation of granular structures in the FDM halo (Kawai et al., 2022).
Several observational probes have been employed to constrain the mass of FDM particles. The galaxy luminosity function at high redshifts (Schive et al., 2016; Menci et al., 2017), Lyman alpha forests (Armengaud et al., 2017; Irsic et al., 2017; Nori et al., 2019; Rogers and Peiris, 2021), the CMB power spectrum (Hlozek et al., 2018), radius-dependent velocity dispersion (Church et al., 2019), the abundance of Milky Way subhalos (Nadler et al., 2019, 2021), tidal streams from globular clusters (Dalal et al., 2021), galactic ultra-faint dwarf galaxies (Hayashi et al., 2021), observed displacements of star clusters and active galactic nuclei from the centers of their host galaxies (Chowdhury et al., 2021), and the observations of high-redshift lensed galaxies from the CLASH survey (Kulkarni and Ostriker, 2022) are some examples. Ultralight axion DM is one of the candidates for FDM (Svrcek and Witten, 2006; Dave and Digal, 2022). The forms of these axions have been predicted in string theory (Clooll et al., 2022). In some investigations, the detection of axion DM has also been considered (Abel et al., 2017).
FDM can influence astrophysical objects on different scales. The assembly of the first galaxies in an FDM cosmology has been simulated, with primordial stars forming along dense DM filaments (Mocz et al., 2019). The structure of self-gravitating systems containing axions (axion stars) has been investigated, and the collision of axion stars with neutron stars (NSs) can release the energy of the axions (Barranco et al., 2013). There may be a large number of axion stars in galaxies, and their collisions with each other and with other astrophysical objects, such as ordinary stars and NSs, are possible (Eby et al., 2017). The attractive self-interactions of DM axions result in nongravitational growth of density fluctuations, and the formation of bound objects can influence the axion density perturbations on various length scales (Arvanitaki et al., 2020). Cold DM axions may be converted into photons in the NS magnetosphere (Huang et al., 2018; Foster et al., 2020; Battye et al., 2021). Axion DM can be detected via the narrow radio lines radiated by NSs (Huang et al., 2018; Hook et al., 2018;
Safdi et al., 2019; Foster et al., 2020). Pulsar timing array experiments have been suggested to detect FDM signals (Khmelnitsky & Rubakov, 2014; Porayko & Postnov, 2014; Martino et al., 2017; Porayko et al., 2018; Kato & Soda, 2020; Nomura et al., 2020). FDM affects the dynamics of binary systems (Nacir & Urban, 2018; Armaleo et al., 2020). Variations of the orbital parameters of binary systems induced by FDM perturbations have also been studied (Blas et al., 2020).
Recently, DM in different compact objects, such as NSs and quark stars (QSs), has become one of the most interesting subjects in astrophysics. NSs can constrain asymmetric DM (Garani et al., 2019; Ivanytskyi et al., 2020). Low-mass NSs can be formed from the accretion-induced collapse of DM admixed white dwarfs (Leung et al., 2019). Spectroscopy measurements of NSs have been employed to detect DM (Camargo et al., 2019; Maity & Queiroz, 2021). NSs admixed with DM, and the constraints on DM properties from the observation of GW170817, have been explored (Quddus et al., 2020). DM particles can be captured by NSs, and this leads to the thermalization of NSs (Bell et al., 2019; Acevedo et al., 2020; Keung et al., 2020; Bell et al., 2020; Garani et al., 2021; Bell et al., 2021a, 2021; ApJ, 2021; Kumar et al., 2022). DM interactions with muons (Garani & Heeck, 2019), DM admixed NSs with DM-nucleon interactions via the Higgs portal (Bhat & Paul, 2020), and self-interacting bosonic DM (Rafiei Karkevandi et al., 2022) have been considered. DM affects the nuclear matter parameters and the equations of state (EoSs) of nucleonic matter (Das et al., 2020), as well as the curvatures of the NS (Das et al., 2021). By modeling a massive NS with DM particles, the secondary component of GW190814 has been constrained (Das et al., 2021). The possibility that GW190814 is a bosonic DM admixed compact star has been studied (Lee et al., 2021). The mass radius relation and second Love number of stars containing ordinary matter and non-self-annihilating fermionic DM have been calculated (Dengler et al., 2022). The transmutation of NSs admixed with DM, followed by gravitational collapse in the star centers, results in the formation of black holes with masses \(M\approx 1M_{\odot}\)(Garani et al., 2022). The dynamical evolution of DM admixed NSs with fermionic DM has been investigated (Gleason et al., 2022).
Self-annihilating neutralino WIMP DM may accrete onto NSs, forming compact objects with long-lived lumps of strange quark matter (Perez-Garcia et al., 2010). The regions of stability for compact stars containing massless quark matter and fermionic DM have been calculated (Mukhopadhyay & Schaffner-Bielich, 2016). The observation of strange QSs could set constraints on the scattering cross sections of light quarks and non-interacting scalar DM (Zheng & Chen, 2016). The structure of strange stars admixed with self-interacting bosonic DM has been considered (Panotopoulos & Lopes, 2017a, b; Lopes & Panotopoulos, 2018). It has been suggested that the strange stars compatible with GW170817 have a mirror DM core (Yang et al., 2021). According to the above discussion, one can conclude that FDM can have important effects on relativistic stars. In this paper, we study the structure of NSs, QSs, and hybrid stars in the presence of FDM.
## 2 Fuzzy Dark Matter Constrained by the Observational Data
In this study, we employ a constrained FDM model with a quartic self-interaction (Cembranos et al., 2018). For this aim, a scalar field \(\phi\), with Lagrangian
\[\mathcal{L}=\frac{1}{2}g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi-V(\phi), \tag{1}\]
is considered. The potential has the form
\[V(\phi)=\frac{1}{2}m^{2}\phi^{2}+\frac{1}{4}\lambda\phi^{4}, \tag{2}\]
in which \(m\) denotes the mass and \(\lambda\) the strength of the quartic self-interaction. Assuming a homogeneous and isotropic universe with a flat Robertson-Walker metric, anharmonic corrections to the mass term lead to the EoS for the scalar field with pressure \(P\) and density \(\rho\),
\[w=\frac{P}{\rho}, \tag{3}\]
with
\[w=\frac{\frac{3\lambda}{8m^{4}}\rho}{1+\frac{9\lambda}{8m^{4}}\rho}. \tag{4}\]
Applying CMB (Ade et al., 2016) and large-scale structure (LSS) (Parkinson et al., 2012) data, the parameters of this model have been constrained (Cembranos et al., 2018). The constraint for the mass is \(m\geqslant 10^{-24}eV\) and for allowed masses, the constraint on \(\lambda\) is as follows,
\[log_{10}\lambda<-91.86+4log_{10}(\frac{m}{10^{-22}eV}). \tag{5}\]
Here, to describe FDM, we apply the values \(m=10^{-24}eV\) for the mass and \(\lambda=10^{-100}\) for the self-interaction strength of FDM. In Figure 1, we have presented the EoS of FDM constrained by the observational data.
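For reference, a minimal Python sketch of this constrained EoS, working in natural units (\(\hbar=c=1\), so that \(\rho\) carries units of \(eV^{4}\)); the function names are our own:

```python
import numpy as np

m_eV = 1.0e-24    # FDM particle mass (eV), as adopted here
lam = 1.0e-100    # dimensionless quartic self-coupling, satisfying Eq. (5)

def w_fdm(rho):
    """EoS parameter w = P/rho of Eq. (4); rho in eV^4 (natural units)."""
    x = 3.0 * lam * rho / (8.0 * m_eV**4)
    return x / (1.0 + 3.0 * x)   # note 9*lam*rho/(8 m^4) = 3x

def p_fdm(rho):
    """FDM pressure from Eq. (3)."""
    return w_fdm(rho) * rho
```

Eq. (4) implies \(w\to 0\) in the dilute limit, while at high density \(w\to 1/3\), i.e. the self-interacting field behaves like radiation; the sketch makes this limiting behavior easy to verify numerically.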
## 3 Two-Fluid Formalism for Fuzzy Dark Matter Admixed Stars
Starting with the two-fluid formalism (Sandin & Ciarcelluti, 2009; Ciarcelluti & Sandin, 2011), we apply a static and spherically symmetric spacetime described by the line element,
\[d\tau^{2}=e^{2\nu(r)}dt^{2}-e^{2\lambda(r)}dr^{2}-r^{2}(d\theta^{2}+sin^{2} \theta d\phi^{2}), \tag{6}\]
and the energy momentum tensor of a perfect fluid,
\[T^{\mu\nu}=-pg^{\mu\nu}+(p+\varepsilon)u^{\mu}u^{\nu}. \tag{7}\]
In the expression for \(T^{\mu\nu}\), \(p\) and \(\varepsilon\) are the total pressure and total energy density, respectively, which receive contributions from both the visible (\(V\)) and dark (\(D\)) sectors,
\[p(r)=p_{V}(r)+p_{D}(r), \tag{8}\]
\[\varepsilon(r)=\varepsilon_{V}(r)+\varepsilon_{D}(r). \tag{9}\]
In Eq. (8), \(p_{V}\) stands for the EoS of the visible matter in compact stars, while \(p_{D}\) denotes the FDM EoS given by Eq. (3). Considering the above profiles, the Einstein field equations result in (Sandin & Ciarcelluti, 2009; Ciarcelluti & Sandin, 2011)
\[e^{-2\lambda(r)}=1-\frac{2M(r)}{r}, \tag{10}\]
\[\frac{d\nu}{dr}=\frac{M(r)+4\pi r^{3}p(r)}{r[r-2M(r)]}, \tag{11}\]
\[\frac{dp_{V}}{dr}=-[p_{V}(r)+\varepsilon_{V}(r)]\frac{d\nu}{dr}, \tag{12}\]
\[\frac{dp_{D}}{dr}=-[p_{D}(r)+\varepsilon_{D}(r)]\frac{d\nu}{dr}. \tag{13}\]
Here, \(M(r)=\int_{0}^{r}dr4\pi r^{2}\varepsilon(r)\) denotes the total mass inside a sphere with radius \(r\) and we specify the visible matter sphere and DM sphere with the conditions \(p_{V}(R_{V})=0\) and \(p_{D}(R_{D})=0\), respectively. In this work, we assume that the densities of visible matter and dark matter are the same in the center of the star.
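A minimal numerical sketch of the two-fluid equations (10)-(13), in geometric units (\(G=c=1\)) and with user-supplied inverse EoSs \(\varepsilon_{V}(p)\) and \(\varepsilon_{D}(p)\); the integration settings and names are our own assumptions:

```python
import numpy as np
from scipy.integrate import solve_ivp

def two_fluid_tov(pc_V, pc_D, eps_V, eps_D, r_max=50.0):
    """Integrate Eqs. (10)-(13) outwards from the center.
    pc_V, pc_D: central pressures of the visible and dark fluids."""
    def rhs(r, y):
        pV, pD, M = y
        eV = eps_V(pV) if pV > 0.0 else 0.0
        eD = eps_D(pD) if pD > 0.0 else 0.0
        dnu = (M + 4.0 * np.pi * r**3 * (max(pV, 0.0) + max(pD, 0.0))) \
              / (r * (r - 2.0 * M))                       # Eq. (11)
        return [-(pV + eV) * dnu if pV > 0.0 else 0.0,    # Eq. (12)
                -(pD + eD) * dnu if pD > 0.0 else 0.0,    # Eq. (13)
                4.0 * np.pi * r**2 * (eV + eD)]           # M(r) integrand

    def visible_surface(r, y):     # p_V(R_V) = 0 defines the visible radius
        return y[0] - 1e-15
    visible_surface.terminal = True

    sol = solve_ivp(rhs, [1e-8, r_max], [pc_V, pc_D, 0.0],
                    events=visible_surface, rtol=1e-8, atol=1e-14)
    return sol.t[-1], sol.y[2, -1]   # R_V and the enclosed mass M(R_V)
```

The central pressures are chosen so that the central energy densities of the two fluids coincide, matching the assumption stated above; the dark radius \(R_{D}\) follows analogously from the condition \(p_{D}(R_{D})=0\).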
For stars in binaries, the tidal forces induce tidal deformabilities in the stars (Hinderer et al., 2010). The traceless quadrupole moment tensor of the star, \(Q_{ij}\), is related to the tidal field tensor \(E_{ij}\) by
\[Q_{ij}=-\frac{2}{3}k_{2}R_{V}^{5}E_{ij}=-\lambda E_{ij}, \tag{14}\]
in which \(\lambda=\frac{2}{3}k_{2}R_{V}^{5}\) denotes the tidal deformability. Besides, the tidal Love number \(k_{2}\) is as follows (Hinderer, 2008),
\[k_{2} = \frac{8\beta^{5}}{5}(1-2\beta)^{2}[2-y_{R}+(y_{R}-1)2\beta] \tag{15}\] \[\times [2\beta(6-3y_{R}+3\beta(5y_{R}-8))\] \[+ 4\beta^{3}(13-11y_{R}+\beta(3y_{R}-2)+2\beta^{2}(1+y_{R}))\] \[+ 3(1-2\beta)^{2}[2-y_{R}+2\beta(y_{R}-1)]ln(1-2\beta)]^{-1}.\]
and \(\beta=M/R\) presents the compactness of the star. Furthermore, solving the following differential equation leads to the value of \(y_{R}=y(r=R_{V})\),
\[r\frac{dy(r)}{dr}+y^{2}(r)+y(r)F(r)+r^{2}Q(r)=0. \tag{16}\]
The functions \(F(r)\) and \(Q(r)\) are given by,
\[F(r)=[1-4\pi r^{2}(\varepsilon(r)-p(r))](1-\frac{2M(r)}{r})^{-1}, \tag{17}\]
and
\[r^{2}Q(r)=4\pi r^{2}\left[5\varepsilon(r)+9p(r)+\frac{\varepsilon(r)+p(r)}{\partial p(r)/\partial\varepsilon(r)}\right]\left(1-\frac{2M(r)}{r}\right)^{-1}-6\left(1-\frac{2M(r)}{r}\right)^{-1}-\frac{4M^{2}(r)}{r^{2}}\left(1+\frac{4\pi r^{3}p(r)}{M(r)}\right)^{2}\left(1-\frac{2M(r)}{r}\right)^{-2}. \tag{18}\]
We solve Eq. (16) along with Eqs. (10)-(13), with the initial condition \(y(0)=2\). In addition, the dimensionless tidal deformability is defined by
\[\Lambda=\frac{2}{3}k_{2}\frac{R_{V}^{5}}{M^{5}}. \tag{19}\]
In the case of a quark star, which is self-bound, the discontinuity of the energy density at the surface of the star should be considered. In the present study, we apply the boundary treatment on the stellar surface to join the interior solution with the exterior one, as in Refs. (Damour & Nagar, 2009; Postnikov et al., 2010; Zhou et al., 2018),
\[y_{R}^{ext}=y_{R}^{int}-\frac{\varepsilon_{s}}{M/4\pi R_{V}^{3}}, \tag{20}\]
in which \(\varepsilon_{s}\) is the energy density at the surface of the star.
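The closed-form pieces of this tidal calculation translate directly into code; below is a minimal sketch under the same definitions (Eqs. (15), (19) and (20)), with function names of our own choosing:

```python
import numpy as np

def love_k2(beta, yR):
    """Tidal Love number k2 of Eq. (15); beta = M/R_V is the compactness."""
    num = (8.0 / 5.0) * beta**5 * (1 - 2*beta)**2 * (2 - yR + 2*beta*(yR - 1))
    den = (2*beta*(6 - 3*yR + 3*beta*(5*yR - 8))
           + 4*beta**3*(13 - 11*yR + beta*(3*yR - 2) + 2*beta**2*(1 + yR))
           + 3*(1 - 2*beta)**2*(2 - yR + 2*beta*(yR - 1))*np.log(1 - 2*beta))
    return num / den

def yR_surface_correction(yR_int, M, R_V, eps_s):
    """Matching condition of Eq. (20) for self-bound stars (G = c = 1)."""
    return yR_int - eps_s / (M / (4.0 * np.pi * R_V**3))

def dimensionless_Lambda(beta, yR):
    """Eq. (19): Lambda = (2/3) k2 (R_V/M)^5 = (2/3) k2 / beta^5."""
    return (2.0 / 3.0) * love_k2(beta, yR) / beta**5
```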
## 4 Fuzzy Dark Matter Admixed Neutron Star
In order to quantify the visible matter in NSs, we utilize the EoS of dense NS matter in the form of a piecewise polytropic expansion constrained by the observational data of GW170817 and the data of six low-mass X-ray binaries (LMXB) with thermonuclear bursts, or the symmetry energy of the nuclear interaction (Jiang et al., 2019). The EoS, of the form \(P=K\rho^{\Gamma}\) in each segment, is parameterized by four pressure parameters \(\{\dot{p_{1}},\dot{p_{2}},\dot{p_{3}},\dot{p_{4}}\}\) at the corresponding densities of \(\{1,1.85,3.7,7.4\}\rho_{sat}\), where the saturation density has the value \(\rho_{sat}=2.7\times 10^{14}gcm^{-3}\)(Ozel & Psaltis, 2009). The joint analysis confirms that the constraint on \(\dot{p_{1}}\) mainly results from the nuclear constraints, the constraint on \(\dot{p_{2}}\) is predominantly determined by the gravitational wave data and the LMXB sources with thermonuclear bursts, the constraint on \(\dot{p_{3}}\) comes largely from the LMXB source data and the current bounds on \(M_{TOV}\), and the range of \(\dot{p_{4}}\) is narrowed down by the LMXB sources with thermonuclear bursts.
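A sketch of such a piecewise polytrope is given below; the anchor pressures are placeholders chosen only to illustrate the construction, not the posterior values of Jiang et al. (2019), and each segment's exponent follows from continuity between adjacent anchor points.

```python
import numpy as np

rho_sat = 2.7e14                                     # g cm^-3
rho_div = np.array([1.0, 1.85, 3.7, 7.4]) * rho_sat  # dividing densities
p_div = np.array([3.0e33, 6.0e34, 6.0e35, 3.0e36])   # dyn cm^-2, illustrative

def pressure(rho):
    """Piecewise polytrope P = K_i rho^Gamma_i through the anchor points."""
    i = int(np.clip(np.searchsorted(rho_div, rho) - 1, 0, 2))
    Gamma = np.log(p_div[i+1] / p_div[i]) / np.log(rho_div[i+1] / rho_div[i])
    K = p_div[i] / rho_div[i]**Gamma
    return K * rho**Gamma
```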
The piecewise polytropic EoS of NS matter and the mass radius relation for both NS and NS admixed with FDM are given in Figure 2.
Figure 1: Fuzzy dark matter EoS with the parameters \(m=10^{-24}eV\) and \(\lambda=10^{-100}\) from the observational data.
For the FDM admixed neutron star (FDMANS), we have considered the total mass versus the visible radius, i.e. the radius of the sphere containing the NS matter. FDM leads to stars with lower masses. The radius of FDMANSs is smaller than the radius of NSs with the same mass. Therefore, FDM results in more compact stars. For most FDMANSs, the larger stars are more massive, in contrast to NSs. The interplay between FDM and NS matter leads to self-bound FDMANSs, a behavior different from normal NSs, which are gravitationally bound. We have also shown the constraints on the mass radius relation obtained from the pulsars and the gravitational wave data with different colour bars. NICER observations for PSR J0952-0607 (Romani et al., 2022), PSR J2215+5135 (Linares et al., 2018), PSR J0740+6620 (Cromartie et al., 2020; Fonseca et al., 2021; Miller et al., 2021; Riley et al., 2021), PSR J0030+0451 (Miller et al., 2019), and the merger events GW170817 (Abbott et al., 2017, 2018) and GW190814 (Abbott et al., 2020) give these constraints. Both NSs and FDMANSs satisfy the constraints from the recent observational data. The presented results show that the maximum mass of FDMANSs is lower than \(\sim 2.0M_{\odot}\). FDM leads to stars with a lower maximum mass than all the observational data shown in this figure.
Figure 3 shows the behavior of the visible and dark sectors in FDMANSs. In very low mass stars, the mass of the two sectors is not sensitive to the size of the spheres. For other FDMANSs, however, the mass of the visible and dark spheres grows with increasing radius. The results confirm that for the dark sphere this behavior is not valid for all stars: in large dark spheres, the mass decreases as the radius grows. Figure 3 verifies that in smaller FDMANSs, the mass of the dark sphere is higher than that of the NS matter sphere, while in larger FDMANSs, the mass of the visible sector is dominant. This opposite behavior of the visible and dark sectors in FDMANSs is due to the different EoSs of the two sectors.
In Figure 4, we have presented the tidal Love number \(k_{2}\), the value \(y_{R}\), and the dimensionless tidal deformability \(\Lambda\) in the cases of NSs and FDMANSs. Except for the low mass stars, the tidal Love number decreases due to the presence of FDM in the stars. Besides, the star mass corresponding to the maximum value of the tidal Love number is lower when FDM is considered in the stars. However, the value \(y_{R}\) is higher in FDMANSs compared to NSs in most cases. Our calculations confirm that the dimensionless tidal deformability decreases with the star mass for both NSs and FDMANSs. FDM leads to a considerable reduction of the dimensionless tidal deformability, and this decrease is more significant in low mass stars. Moreover, we have shown the upper limits on the dimensionless tidal deformability, \(\Lambda_{1.4}=190^{+390}_{-120}\) for GW170817 (Abbott et al., 2018) and \(\Lambda_{1.4}=616^{+273}_{-158}\) for GW190814 (Abbott et al., 2020), obtained by the LIGO and Virgo Collaborations. In NSs, the dimensionless tidal deformability is in the range \(70\leq\Lambda_{1.4}\leq 580\) related to GW170817, while the parameter \(\Lambda\) for NSs is lower than \(\Lambda_{1.4}=616^{+273}_{-158}\) related to GW190814. Considering the FDMANSs, both upper limits from GW170817 and GW190814 are larger than the dimensionless tidal deformability.
## 5 Fuzzy dark matter admixed quark star
In this work, we apply three EoSs of QSs within the Bayesian statistical approach using the mass and radius measurements of PSR J0030+0451 from NICER (Li et al., 2021). These self-bound strange quark matter EoSs are based on the bag models in which the finite quark mass and superfluidity are also considered. Our system describing the strange quark matter is a mixture of the massless u, d quarks and electrons, as well as s quarks of finite mass \(m_{s}\)(Haensel et al., 1986).
In the first model, i.e. normal quark matter, the grand canonical potential per unit volume in the bag model is expressed by,
Figure 2: Left: EoS of dense neutron star matter constrained by the observational data and Right: Mass radius relation in the cases of neutron star (NS) and FDM admixed neutron star (FDMANS). Observational constraints on the NS radii and masses from the pulsars and the gravitational wave data are also presented. These constraints are related to NICER observations for PSR J0952-0607 (Romani et al., 2022), PSR J2215+5135 (Linares et al., 2018), PSR J0740+6620 (Cromartie et al., 2020; Fonseca et al., 2021; Miller et al., 2021; Riley et al., 2021), PSR J0030+0451 (Miller et al., 2019), and the merger events GW170817 (Abbott et al., 2017, 2018) and GW190814 (Abbott et al., 2020).
\[\Omega_{Normal}=\sum_{i=u,d,s,e}\Omega_{i}^{0}+\frac{3(1-a_{4})}{4\pi^{2}}\mu^{4} +B_{eff}. \tag{21}\]
Here, \(\Omega_{i}^{0}\) denotes the grand canonical potential of particle type \(i\) as an ideal Fermi gas (Farhi & Jaffe, 1984), and \(\mu=(\mu_{u}+\mu_{d}+\mu_{s})/3\) denotes the average quark chemical potential. In addition, \(B_{eff}\) determines the contributions from the quantum chromodynamics (QCD) vacuum, and \(a_{4}\) quantifies the perturbative QCD contribution from one-gluon exchange. Besides, the number density of each component of strange quark matter is related to the chemical potential \(\mu_{i}\) (\(i=u,d,s,e\)) by,
\[n_{i}=-\frac{\partial\Omega}{\partial\mu_{i}}. \tag{22}\]
The conditions for the quark matter at the equilibrium state are given by the weak interactions,
\[\mu_{d}=\mu_{u}+\mu_{e}, \tag{23}\]
\[\mu_{d}=\mu_{s}. \tag{24}\]
The condition of charge neutrality is also considered,
\[\frac{2}{3}n_{u}=\frac{1}{3}[n_{d}+n_{s}]+n_{e}. \tag{25}\]
For normal quark matter, the pressure of quark matter at each value of \(\mu\) is calculated by,
\[P_{Normal}=-\Omega_{Normal}, \tag{26}\]
and the energy density of quark matter is as follows,
\[\varepsilon_{Normal}=\Omega_{Normal}+\sum_{i=u,d,s,e}\mu_{i}n_{i}. \tag{27}\]
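In the simplifying limit of massless \(u,d,s\) quarks with a common chemical potential and no electrons, Eqs. (21), (26) and (27) reduce to closed bag-model expressions, \(P=3a_{4}\mu^{4}/4\pi^{2}-B_{eff}\) and \(\varepsilon=3P+4B_{eff}\). The sketch below implements this limit; the numerical values of \(B_{eff}\) and \(a_{4}\) are illustrative assumptions, not the Bayesian posteriors of Li et al. (2021).

```python
import numpy as np

hbarc3 = 197.327**3   # (MeV fm)^3, converts MeV^4 to MeV/fm^3
B_eff = 60.0          # effective bag constant (MeV/fm^3), assumed
a4 = 0.6              # perturbative QCD parameter, assumed

def eos_normal(mu):
    """Massless-quark limit of the Normal model.
    mu: average quark chemical potential (MeV); returns (P, eps) in MeV/fm^3."""
    P = 3.0 * a4 * mu**4 / (4.0 * np.pi**2) / hbarc3 - B_eff    # Eq. (26)
    eps = 9.0 * a4 * mu**4 / (4.0 * np.pi**2) / hbarc3 + B_eff  # Eq. (27)
    return P, eps   # note eps = 3 P + 4 B_eff in this limit
```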
In the two-parameter model Normal(\(B_{eff};a_{4}\)), the strange quark mass is fixed as \(m_{s}=100\ MeV\) and the two parameters (\(B_{eff};a_{4}\)) are determined from the joint MSP J0740+6620 and PSR J0030+0451 analysis (Li et al., 2021).
Figure 4: Tidal Love number \(k_{2}\), the value \(y_{R}\), and the dimensionless tidal deformability \(\Lambda\) versus the mass for NS and FDMANS. The constraints from GW170817 and GW190814 data (LIGO and Virgo Collaborations) for neutron star of mass \(M=1.4M_{\odot}\) are also given. These upper limits on dimensionless tidal deformability are \(\Lambda_{1.4}=190^{+390}_{-120}\) for GW170817 (Abbott et al., 2018) and \(\Lambda_{1.4}=616^{+273}_{-158}\) for GW190814 (Abbott et al., 2020).
Figure 3: Mass radius relation for two sectors of neutron star matter (\(M_{N}-R_{N}\)) and dark matter (\(M_{D}-R_{D}\)) in FDMANS.
The second model, describing superfluid quark matter, is the Color-Flavor Locked (CFL) model, in which an additional term related to the pairing energy is added to the grand canonical potential (Li et al., 2021),
\[\Omega_{CFL}=\Omega_{Normal}+\frac{3m_{s}^{4}-48\Delta^{2}\mu^{2}}{16\pi^{2}}. \tag{28}\]
In the three-parameter model \(\mathrm{CFL}(B_{eff};a_{4};\Delta)\), as in the Normal model, the strange quark mass is \(m_{s}=100~{}MeV\), and the three parameters (\(B_{eff};a_{4};\Delta\)) are constrained by the observational data (Li et al., 2021). The third model is the four-parameter model \(\mathrm{CFLm}(B_{eff};a_{4};\Delta;m_{s})\), in which the strange quark mass \(m_{s}\) of the CFL superfluid quark matter is also constrained by the observational data (Li et al., 2021).
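Continuing the massless-quark sketch above, the CFL pairing term of Eq. (28) shifts the pressure and energy density as follows; the gap \(\Delta\) below is an illustrative assumption, and the energy-density shift follows from \(\varepsilon=\Omega+\sum_{i}\mu_{i}n_{i}\).

```python
import numpy as np

Delta = 80.0   # pairing gap (MeV), illustrative
m_s = 100.0    # strange quark mass (MeV), as fixed in the CFL model

def eos_cfl(mu):
    """Add the pairing contribution of Eq. (28) to the Normal-model sketch."""
    P_n, eps_n = eos_normal(mu)   # from the Normal-model sketch above
    P = P_n - (3.0*m_s**4 - 48.0*Delta**2*mu**2) / (16.0*np.pi**2) / hbarc3
    eps = eps_n + (3.0*m_s**4 + 48.0*Delta**2*mu**2) / (16.0*np.pi**2) / hbarc3
    return P, eps
```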
Figure 5 presents the three models for the EoS of strange star matter considered in this work. In the CFL and CFLm models, the EoS is stiffer than in the Normal model, and the CFLm model leads to an EoS that is stiffer than that of the CFL model. The mass radius relation for QSs and FDM admixed QSs (FDMAQSs) is given in Figure 6. In each model for quark matter, the maximum mass of FDMAQSs reaches a value lower than that of QSs. FDM affects the star such that the FDMAQSs are smaller than QSs with the same mass. Therefore, FDM leads to more compact stars, as in the case of NSs. This result is in agreement with the one obtained in (Yang et al., 2021). QSs fulfill both the maximum mass and the mass radius constraints from the presented observational data. In addition, the maximum mass of FDMAQSs is lower than the value related to the maximum mass constraints.
Figure 7 shows the mass radius relation for the visible and dark sectors in FDMAQSs. In all three models, both the visible and dark sectors exhibit a self-bound behavior, like the QS and FDMAQS themselves. For smaller spheres, the mass of each sphere is not sensitive to its size. For most FDMAQSs, the contributions of the two sectors to the mass of the stars are similar, while in massive stars, the mass of the visible sphere is higher than that of the dark one.
The tidal Love number \(k_{2}\), the value \(y_{R}\), and the dimensionless tidal deformability \(\Lambda\) for QSs and FDMAQSs are given in Figure 8. In most FDMAQSs, the tidal Love number takes higher values compared to QSs with the same mass, while in massive FDMAQSs, FDM leads to a reduction of the tidal Love number. Generally, FDMAQSs can experience larger values of the tidal Love number. Figure 8 also indicates that for both QSs and FDMAQSs, \(y_{R}\) increases as the mass grows, and \(y_{R}\) for FDMAQSs is larger than for QSs. Our calculations confirm that FDM in QSs results in a considerable decrease of the dimensionless tidal deformability, similar to the behavior in NSs. Besides, for both QSs and FDMAQSs, the dimensionless tidal deformability is lower than the upper limits from GW170817 and GW190814.
## 6 Fuzzy dark matter admixed hybrid star
For this study, we suppose that the hybrid star is composed of a quark phase and a hadronic phase within a model like the one considered in (Pereira et al., 2021). In our model, these two parts are split by a sharp phase-transition surface without a mixed phase and the density at the phase-splitting surface can be discontinuous (Pereira et al., 2020).
For the quark phase, we apply three EoSs, i.e. the Normal, CFL, and CFLm models. Furthermore, to describe the hadronic phase, the EoS of dense NS matter based on the observational data considered in Section 4 is applied. The density jump at the surface of the quark-hadronic phase transition is taken as a free parameter. By defining the parameter,
\[\eta\equiv\frac{\epsilon_{q}}{\epsilon_{h}}-1, \tag{29}\]
in which \(\epsilon_{q}\) denotes the density at the top of the quark phase and \(\epsilon_{h}\) denotes the density at the bottom of the hadronic phase, we quantify the density jump. Given \(p_{q}=p_{h}\) at the quark-hadronic phase transition interface, \(\epsilon_{q}\) or \(\epsilon_{h}\) and the phase transition pressure are determined.
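As a minimal illustration of this matching condition, the sketch below locates the transition point for a prescribed jump \(\eta\) by solving \(p_{q}\big((1+\eta)\epsilon_{h}\big)=p_{h}(\epsilon_{h})\) numerically; the EoS callables `p_hadronic` and `p_quark` are hypothetical placeholders for the tabulated EoSs used in this work, and a sign change on the bracketing interval is assumed.

```python
from scipy.optimize import brentq

def transition_point(p_hadronic, p_quark, eta, eps_lo, eps_hi):
    """Find the hadronic-side density eps_h at which the pressures of the
    two phases match, given a density jump eta = eps_q / eps_h - 1.

    p_hadronic, p_quark: pressure as a function of energy density
    eps_lo, eps_hi: bracketing interval for the root search
    """
    # Gibbs condition p_q = p_h at the interface, with eps_q = (1 + eta) * eps_h
    f = lambda eps_h: p_quark((1.0 + eta) * eps_h) - p_hadronic(eps_h)
    eps_h = brentq(f, eps_lo, eps_hi)
    eps_q = (1.0 + eta) * eps_h
    p_t = p_hadronic(eps_h)  # phase-transition pressure
    return eps_h, eps_q, p_t
```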
In Figure 9, we present the mass radius relation for the hybrid star and the FDM admixed hybrid star (FDMAHS) in the two cases \(\eta=0\) and \(\eta=0.8\). FDM lowers the maximum mass of hybrid stars. Similar to other compact objects, FDMAHSs are smaller in size than hybrid stars; the FDM results in more compact stars. Similar to QSs, HSs also satisfy both the maximum mass and the mass radius constraints. Moreover, our results verify that FDMAHSs fulfill the maximum mass constraint.
Figure 10 gives the mass radius relations of the visible and dark sectors in FDMAHSs. The mass of the sphere containing visible matter increases as the size grows. In all models, the spheres are self-bound, with different contributions of visible and dark matter in low-mass and massive stars. In the discontinuous model, the range of radii is larger than in the continuous model. The low masses of the visible and dark sectors indicate the contributions of these parts to the total mass of FDMAHSs. Besides, Figure 10 verifies that FDM contracts the FDMAHSs, yielding smaller stellar radii.
For HSs and FDMAHSs, we show the tidal Love number \(k_{2}\), the value \(y_{R}\), and the dimensionless tidal deformability \(\Lambda\) in Figure 11. Except in low-mass FDMAHSs, the tidal Love number of FDMAHSs is smaller than that of HSs. Besides, for HSs, the tidal Love number is larger in the discontinuous model, whereas for FDMAHSs it is almost the same in the continuous and discontinuous models. In addition, the value \(y_{R}\) for FDMAHSs is smaller than for HSs, and the discontinuous model gives lower values of \(y_{R}\). Our calculations confirm that FDM considerably reduces the dimensionless tidal deformability of FDMAHSs, as in FDMANSs and FDMAQSs. The dimensionless tidal deformability is higher in the discontinuous model than in the continuous one, and this enhancement is more significant in low-mass stars. Our calculations verify that for both HSs and FDMAHSs, the dimensionless tidal deformability is lower than the upper limits from GW170817 and GW190814.
Figure 5: Three EoSs of quark star matter based on the bag models constraint with the NICER data.
Figure 6: Mass radius relation for quark star (QS) and FDM admixed quark star (FDMAQS) in three models for the EoS of quark star matter. Observational constraints are the same as in Figure 2.
Figure 7: Mass radius relation for two sectors of quark star matter (\(M_{Q}-R_{Q}\)) and dark matter (\(M_{D}-R_{D}\)) in FDMAQS.
## 7 Summary and Conclusions
In the relativistic two-fluid formalism, we have explored the effects of fuzzy dark matter (FDM) on compact stars. The equations of state used for FDM and for the visible matter in stars are based on observational data. Our results verify that in FDM admixed neutron stars, FDM leads to neutron stars with lower masses; moreover, FDM makes neutron stars more compact. In FDM admixed neutron stars, the masses of the visible and dark spheres grow as the radius increases, and thus depend on the size of the stars. FDM admixed quark stars are smaller than quark stars of the same mass without FDM and are therefore more compact, as in neutron stars. FDM admixed hybrid stars are also more compact than hybrid stars with no FDM. Furthermore, FDM in compact stars leads to a significant change in the dimensionless tidal deformability of the stars.
## Acknowledgements
The author wishes to thank the Shiraz University Research Council.
Figure 8: Tidal Love number \(k_{2}\), the value \(y_{R}\), and the dimensionless tidal deformability \(\Lambda\) versus the mass in the cases of QS and FDMAQS in three models for the EoS of quark star matter. Observational constraints are the same as in Figure 4.
## Data Availability
All data are given either in this paper or in the references.
|
2302.14843 | High Probability Convergence of Stochastic Gradient Methods | In this work, we describe a generic approach to show convergence with high
probability for both stochastic convex and non-convex optimization with
sub-Gaussian noise. In previous works for convex optimization, either the
convergence is only in expectation or the bound depends on the diameter of the
domain. Instead, we show high probability convergence with bounds depending on
the initial distance to the optimal solution. The algorithms use step sizes
analogous to the standard settings and are universal to Lipschitz functions,
smooth functions, and their linear combinations. This method can be applied to
the non-convex case. We demonstrate an
$O((1+\sigma^{2}\log(1/\delta))/T+\sigma/\sqrt{T})$ convergence rate when the
number of iterations $T$ is known and an
$O((1+\sigma^{2}\log(T/\delta))/\sqrt{T})$ convergence rate when $T$ is unknown
for SGD, where $1-\delta$ is the desired success probability. These bounds
improve over existing bounds in the literature. Additionally, we demonstrate
that our techniques can be used to obtain high probability bound for
AdaGrad-Norm (Ward et al., 2019) that removes the bounded gradients assumption
from previous works. Furthermore, our technique for AdaGrad-Norm extends to the
standard per-coordinate AdaGrad algorithm (Duchi et al., 2011), providing the
first noise-adapted high probability convergence for AdaGrad. | Zijian Liu, Ta Duy Nguyen, Thien Hang Nguyen, Alina Ene, Huy Lê Nguyen | 2023-02-28T18:42:11Z | http://arxiv.org/abs/2302.14843v1 | # High Probability Convergence of Stochastic Gradient Methods
###### Abstract
In this work, we describe a generic approach to show convergence with high probability for both stochastic convex and non-convex optimization with sub-Gaussian noise. In previous works for convex optimization, either the convergence is only in expectation or the bound depends on the diameter of the domain. Instead, we show high probability convergence with bounds depending on the initial distance to the optimal solution. The algorithms use step sizes analogous to the standard settings and are universal to Lipschitz functions, smooth functions, and their linear combinations. This method can be applied to the non-convex case. We demonstrate an \(O((1+\sigma^{2}\log(1/\delta))/T+\sigma/\sqrt{T})\) convergence rate when the number of iterations \(T\) is known and an \(O((1+\sigma^{2}\log(T/\delta))/\sqrt{T})\) convergence rate when \(T\) is unknown for SGD, where \(1-\delta\) is the desired success probability. These bounds improve over existing bounds in the literature. Additionally, we demonstrate that our techniques can be used to obtain high probability bound for AdaGrad-Norm (Ward et al., 2019) that removes the bounded gradients assumption from previous works. Furthermore, our technique for AdaGrad-Norm extends to the standard per-coordinate AdaGrad algorithm (Duchi et al., 2011), providing the first noise-adapted high probability convergence for AdaGrad.
## 1 Introduction
Stochastic optimization is a fundamental area with extensive applications in many domains, ranging from machine learning to algorithm design and beyond. The design and analysis of iterative methods for stochastic optimization has been the focus of a long line of work, leading to a rich understanding of the convergence of paradigmatic iterative methods such as stochastic gradient descent, mirror descent, and accelerated methods for both convex and non-convex optimization. However, most of these works establish convergence guarantees that hold only in expectation. Although very meaningful, these results do not fully capture the convergence behaviors of the algorithms when we perform only a small number of runs of the algorithm, as it is typical in modern machine learning applications where there are significant computational and statistical costs associated with performing multiple runs of the algorithm (Harvey et al., 2019; Madden et al., 2020; Davis et al., 2021). Thus, an important direction is to establish convergence guarantees for a single run of the algorithm that hold not only in expectation but also with high probability.
Compared to the guarantees that hold in expectation, high probability guarantees are significantly harder to obtain and they hold in more limited settings with stronger assumptions on the problem settings and the stochastic noise distribution. Most existing works that establish high probability guarantees focus on the setting where the length of the stochastic noise follows a light-tail (sub-Gaussian) distribution (Juditsky et al., 2011; Lan, 2012, 2020; Li and Orabona, 2020; Madden et al., 2020; Kavis et al., 2021). Recent works also study the more challenging heavy-tail setting, notably under a bounded variance (Nazin et al., 2019; Gorbunov et al., 2020; Cutkosky and Mehta, 2021) or bounded \(p\)-moment assumption (Cutkosky and Mehta, 2021) on the length of the stochastic noise. Both settings are highly relevant in practice: Zhang et al. (2020) empirically studied the noise distribution for two common tasks, training a
ResNet model for computer vision and a BERT transformer model for natural language processing, and they observed that the noise distribution in the former task is well-approximated by a sub-Gaussian distribution, and it appears to be heavy-tailed in the latter task.
Despite this important progress, the convergence of cornerstone methods is not fully understood even in the more structured light-tailed noise setting. Specifically, the existing works for both convex and non-convex optimization rely on strong assumptions on the optimization domain and the gradients that significantly limit their applicability:
_The problem domain is restricted to either the unconstrained domain or a constrained domain with bounded Bregman diameter._ The convergence guarantees established depend on the Bregman diameter of the domain instead of the initial distance to the optimum. Even for compact domains, since the diameter can be much larger than the initial distance, these guarantees are pessimistic and diminish the benefits of good initializations. Thus an important direction remains to establish high probability guarantees for general optimization that scale only with the initial Bregman distance.
_The gradients or stochastic gradients are assumed to be bounded even in the smooth setting_. These additional assumptions are very restrictive and they significantly limit the applicability of the algorithm, e.g., they do not apply to important settings such as quadratic optimization. Moreover, the stochastic gradient assumption is more restrictive than other commonly studied assumptions, such as the gradients and the stochastic noise being bounded almost surely.
The above assumptions are not merely an artifact of the analysis, and they stem from important considerations and technical challenges. The high probability convergence guarantees are established via martingale concentration inequalities that impose necessary conditions on how much the martingale sequence can change in each step. However, the natural martingale sequences that arise in optimization depend on quantities such as the distance between the iterates and the optimum and the stochastic gradients, which are not a priori bounded. The aforementioned assumptions ensure that the concentration inequalities can be readily applied due to the relevant stochastic terms being all bounded almost surely. These difficulties are even more pronounced for adaptive algorithms in the AdaGrad family that set the step sizes based on the stochastic gradients. The adaptive step sizes introduce correlations between the step sizes and the update directions, and a crucial component is the analysis of the evolution of the adaptive step sizes and the cumulative stochastic noise. If the gradients are bounded, both of these challenges can be overcome by paying error terms proportional to the lengths of the gradients and stochastic gradients. Removing the bounded gradient assumptions requires new technical insights and tools.
In addition to requiring stronger assumptions, due to the technical challenges involved, several of the prior works are only able to establish convergence guarantees that are slower than the ideal sub-Gaussian rates. For example, a common approach is to control the relevant stochastic quantities across all \(T\) iterations of the algorithm via repeated applications of the concentration inequalities, leading to convergence rates that have additional factors that are poly-logarithmic in \(T\). Additionally, achieving noise-adaptive rates that improve towards the deterministic rate as the amount of noise decreases is very challenging with existing techniques.
**Our contributions:** This work aims to contribute to this line of work and overcome the aforementioned challenges. To this end, we introduce a novel generic approach to show convergence with high probability under sub-Gaussian gradient noise. Our approach is very general and flexible, and it can be used both in the convex and non-convex setting. Using our approach, we establish high-probability convergence guarantees for several fundamental settings:
In the _convex setting_, we analyze stochastic mirror descent and stochastic accelerated mirror descent for general optimization domains and Bregman distances, and we analyze the classical algorithms without any changes. These well studied algorithms encompass the main algorithmic frameworks for convex optimization with non-adaptive step sizes (Lan, 2020). Our convergence guarantees scale with only the Bregman distance between the initial point and the optimum, and thus they leverage good initializations. Our high-probability convergence rates are analogous to known results for convergence in expectation (Juditsky et al., 2011; Lan, 2012). The algorithms are universal for both Lipschitz functions and smooth functions.
In the _non-convex setting_, we analyze the SGD as well as the AdaGrad-Norm algorithm (Ward et al., 2019). Compared to existing works for SGD (Madden et al., 2020; Li and Orabona, 2020), our rates have better dependency on the time horizon and the success probability. For AdaGrad-Norm, our approach allows us to remove the restrictive assumption on the gradients as made in previous work (Kavis et al., 2021). More
importantly, the technique employed to show high probability convergence of AdaGrad-Norm readily extends to the standard coordinate version of AdaGrad; we obtain the first results for the high probability convergence guarantee for AdaGrad (Duchi et al., 2011).
Although we only focus on sub-Gaussian gradient noise - a more structured setting where there still remain significant gaps in our understanding - we believe our approach could potentially be applied in more general settings such as heavy tails noise.
### Our techniques
Compared to prior works that rely on black-box applications of martingale concentration inequalities such as Freedman's inequality and its extensions (Freedman, 1975; Harvey et al., 2019; Madden et al., 2020), we introduce here a "white-box" concentration argument that leverages existing convergence analyses for first-order methods. The high-level approach is to define a novel martingale sequence derived from the standard convergence analyses and analyze its moment generating function from first principles. By leveraging the structure of the optimization problem, we are able to overcome a key difficulty associated with black-box applications of martingale concentration results: these results pose necessary conditions on how much the martingale sequence can change, which do not a priori hold for the natural martingales that arise in optimization. By seamlessly combining the optimization and probability tool-kits, we obtain a flexible analysis template that allows us to handle general optimization domains with very large or even unbounded diameter, general objectives that are not globally Lipschitz, and adaptive step sizes.
Our technique is inspired by classical works in concentration inequalities, specifically a type of martingale inequalities where the variance of the martingale difference is bounded by a linear function of the previous value. This technique was first applied by Harvey et al. (2019) to show high probability convergence for SGD in the strongly convex setting. Our proof is inspired by the proof of Theorem 7.3 by Chung and Lu (2006). In each time step with iterate \(x_{t}\), let \(\xi_{t}:=\widehat{\nabla}f\left(x_{t}\right)-\nabla f\left(x_{t}\right)\) be the stochastic error in our gradient estimate. Classical proofs of convergence revolve around analyzing the sum of \(\left\langle\xi_{t},x^{*}-x_{t}\right\rangle\), which can be viewed as a martingale sequence. Assuming a bounded domain, the concentration of the sum can be shown via classical martingale inequalities. The key new insight is that instead of analyzing this sum, we analyze a related sum where the coefficients decrease over time to account for the fact that we have a looser grip on the distance to the optimal solution as time increases. Nonetheless, the coefficients are kept within a constant factor of each other, and the same asymptotic convergence is attained with high probability.
### Related work
Convex optimization:Nemirovski et al. (2009); Lan (2012) establish high probability bounds for stochastic mirror descent and accelerated stochastic mirror descent with sub-Gaussian noise. These rates match the best rates known in expectation, but they depend on the Bregman diameter \(\max_{x,y\in\mathcal{X}}\mathbf{D}_{\psi}\left(x,y\right)\) of the domain, which can be very large or even unbounded. Similarly, Kakade and Tewari (2008); Rakhlin et al. (2011); Hazan and Kale (2014); Harvey et al. (2019); Dvurechensky and Gasnikov (2016) study the high-probability convergence of SGD that also assume that the domain has bounded diameter or the function is strongly convex. In contrast, our work complements its predecessors with a novel concentration argument that establishes convergence for the general setting of convex functions under sub-Gaussian gradient noise (as considered in Lan (2020)) that depends only on the distance \(\mathbf{D}_{\psi}\left(x^{*},x_{1}\right)\) from the initial point to the optimum instead of the diameter of the problem or having to assume that the objective is strongly convex.
On a different note, Nazin et al. (2019); Gorbunov et al. (2020) consider the more general setting of bounded variance noise. However, their problem settings are more restricted than ours. Specifically, Nazin et al. (2019) analyze stochastic mirror descent only in the setting where the optimization domain has bounded Bregman diameter. Gorbunov et al. (2020) analyze modified versions of stochastic gradient descent and accelerated stochastic gradient descent (such as with clipping), but only for unconstrained optimization with the \(\ell_{2}\) setup. In contrast, our work applies to a more general optimization setup: we analyze the classical stochastic mirror descent and accelerated mirror descent without any modifications under general Bregman distances and arbitrary optimization domains that are possibly unbounded. Finally, Davis et al. (2021) provides an algorithm to achieve high probability convergence by solving an auxiliary optimization
problem in each iteration. However, their analysis is restricted to well-conditioned objectives that are both smooth and strongly convex and the expensive optimization subroutine can be impractical.
Non-convex optimization:Li and Orabona (2020) demonstrate a high probability bound for an SGD algorithm with momentum, while Madden et al. (2020) and Li and Liu (2022) show high probability bounds for vanilla SGD that generalize to the family of sub-Weibull noise. However, these existing bounds are not optimal due to the multiplicative dependency \(O\left(\log T\log\frac{1}{\delta}\right)\). In our work, we improve the high-probability convergence for SGD in the non-convex setting via our novel approach.
For algorithms with adaptive step size like AdaGrad, Li and Orabona (2020); Kavis et al. (2021) provide some of the first high probability guarantees in the non-convex setting. However, there still remain significant gaps in our understanding: Li and Orabona (2020) is not fully adaptive due to the dependence of the initial step size on the problem parameters, whereas Kavis et al. (2021) requires that the gradients and/or stochastic gradients be uniformly bounded almost surely, a strong assumption that excludes even the quadratic function. In contrast, we establish convergence in high probability of AdaGrad-Norm (Ward et al., 2019; Faw et al., 2022) without further restrictive assumptions. Notably, a key distinction from prior work is that our analysis does not involve division by the step size: this allows a direct extension of our analysis for AdaGrad-Norm (in which the step size is a scalar) to the general AdaGrad (Duchi et al., 2011) algorithm (where the step size varies for each coordinate). For the latter, to the best of our knowledge, Defossez et al. (2022) is the only work to provide an _in expectation_ guarantee for vanilla AdaGrad, albeit under strong assumptions. We provide a more detailed comparison with prior work in the subsequent sections.
Convergence guarantees for the heavy tail noise regime have also been studied for non-convex objectives. However, some form of gradient clipping is required in most works to deal with the large variance. Zhang et al. (2020) propose a gradient clipping algorithm that converges _in expectation_ for noise distributions with heavier tails, that is, with bounded \(p\)-moment for \(1<p\leq 2\). Cutkosky and Mehta (2021) propose a more complex clipped SGD algorithm with momentum under the same noise assumption, for which they show high probability convergence. However, Cutkosky and Mehta (2021) rely on the bounded moments of the stochastic gradients for the non-convex setting, a restrictive assumption that excludes quadratic objectives. In contrast, we focus on standard algorithms (albeit under sub-Gaussian noise) that have been more widely used: stochastic mirror descent, stochastic gradient descent, and AdaGrad-Norm. Our technique is general and we believe that it is possible to extend it to the heavy-tail noise setting.
## 2 Preliminaries
We consider the problem \(\min_{x\in\mathcal{X}}f(x)\) where \(f:\mathbb{R}^{d}\to\mathbb{R}\) is the objective function and \(\mathcal{X}\) is the domain of the problem. In the convex case, we consider the general setting where \(f\) is potentially not strongly convex and the domain \(\mathcal{X}\) is convex but not necessarily compact. The distance between solutions in \(\mathcal{X}\) is measured by a general norm \(\left\lVert\cdot\right\rVert\). Let \(\left\lVert\cdot\right\rVert_{*}\) denote the dual norm of \(\left\lVert\cdot\right\rVert\). In the non-convex case, we consider the setting where \(\mathcal{X}\) is \(\mathbb{R}^{d}\) and \(\left\lVert\cdot\right\rVert\) is the \(\ell_{2}\) norm.
In this paper, we use the following assumptions:
**(1) Existence of a minimizer**: In the convex setting, we assume that there exists \(x^{*}=\arg\min_{x\in\mathcal{X}}f(x)\).
**(1') Existence of a minimizer**: In the nonconvex setting, we assume that \(f\) admits a finite lower bound \(\inf_{x\in\mathcal{X}}f(x)\coloneqq f_{*}>-\infty\).
**(2) Unbiased estimator**: We assume access to a history-independent, unbiased gradient estimator \(\widehat{\nabla}f(x)\) for any \(x\in\mathcal{X}\), that is, \(\mathbb{E}\left[\widehat{\nabla}f(x)\mid x\right]=\nabla f(x)\).
**(3) Sub-Gaussian noise:**\(\left\lVert\widehat{\nabla}f(x)-\nabla f(x)\right\rVert_{*}\) is a \(\sigma\)-sub-Gaussian random variable (Definition 2.1).
There are several equivalent definitions of sub-Gaussian random variables up to an absolute constant scaling (see, e.g., Proposition 2.5.2 in Vershynin (2018)). For convenience, we use the following property as the definition.
**Definition 2.1**.: A random variable \(X\) is \(\sigma\)-sub-Gaussian if
\[\mathbb{E}\left[\exp\left(\lambda^{2}X^{2}\right)\right]\leq\exp\left( \lambda^{2}\sigma^{2}\right)\text{ for all }\lambda\text{ such that }\left|\lambda\right|\leq\frac{1}{\sigma}.\]
We will also use the following helper lemma whose proof we defer to the Appendix.
**Lemma 2.2**.: _Suppose \(X\in\mathbb{R}^{d}\) such that \(\mathbb{E}\left[X\right]=0\) and \(\left\|X\right\|\) is a \(\sigma\)-sub-Gaussian random variable, then for any \(a\in\mathbb{R}^{d}\), \(0\leq b\leq\frac{1}{2\sigma}\),_
\[\mathbb{E}\left[\exp\left(\left\langle a,X\right\rangle+b^{2}\left\|X\right\|^ {2}\right)\right]\leq\exp\left(3\left(\left\|a\right\|_{*}^{2}+b^{2}\right) \sigma^{2}\right).\]
_Especially, when \(b=0\), we have_
\[\mathbb{E}\left[\exp\left(\left\langle a,X\right\rangle\right)\right]\leq\exp \left(2\left\|a\right\|_{*}^{2}\sigma^{2}\right).\]
## 3 Convex case: Stochastic Mirror Descent and Accelerated Stochastic Mirror Descent
In this section, we analyze the Stochastic Mirror Descent algorithm (Algorithm 1) and Accelerated Stochastic Mirror Descent algorithm (Algorithm 2) for convex optimization. We define the Bregman divergence \(\mathbf{D}_{\psi}\left(x,y\right)=\psi\left(x\right)-\psi\left(y\right)-\left\langle\nabla\psi\left(y\right),x-y\right\rangle\) where \(\psi:\mathbb{R}^{d}\rightarrow\mathbb{R}\) is an \(1\)-strongly convex mirror map with respect to \(\left\|\cdot\right\|\) on \(\mathcal{X}\). We remark that the domain of \(\psi\) is defined as \(\mathbb{R}^{d}\) for simplicity, though this is not necessary.
### Analysis of Stochastic Mirror Descent
The end result of this section is the convergence guarantee of Algorithm 1 for constant step sizes (when the time horizon \(T\) is known) and time-varying step sizes (when \(T\) is unknown) presented in Theorem 3.1. However, we will focus on presenting the core idea of our approach, which will serve as the basis for the analysis in subsequent sections. For simplicity, here we consider the non-smooth setting and assume that \(f\) is \(G\)-Lipschitz continuous, i.e., \(\left\|\nabla f(x)\right\|_{*}\leq G\) for all \(x\in\mathcal{X}\); however, this is not necessary. The analysis for the smooth setting follows via a simple modification to the analysis presented here as well as the analysis for the accelerated setting given in the next section.
**Theorem 3.1**.: _Assume \(f\) is \(G\)-Lipschitz continuous and satisfies Assumptions (1), (2), (3), with probability at least \(1-\delta\), the iterate sequence \((x_{t})_{t\geq 1}\) output by Algorithm 1 satisfies_
_(1) Setting \(\eta_{t}=\sqrt{\frac{\mathbf{D}_{\psi}\left(x^{*},x_{1}\right)}{6\left(G^{2}+\sigma^{2}\left(1+\log\left(\frac{1}{\delta}\right)\right)\right)T}}\), then \(\mathbf{D}_{\psi}\left(x^{*},x_{T+1}\right)\leq 4\mathbf{D}_{\psi}\left(x^{*},x_{1}\right)\), and_
\[\frac{1}{T}\sum_{t=1}^{T}\left(f\left(x_{t}\right)-f\left(x^{*}\right)\right) \leq\frac{4\sqrt{6}}{\sqrt{T}}\sqrt{\mathbf{D}_{\psi}\left(x^{*},x_{1}\right) \left(G^{2}+\sigma^{2}\left(1+\log\left(\frac{1}{\delta}\right)\right)\right)}.\]
_(2) Setting \(\eta_{t}=\sqrt{\frac{\mathbf{D}_{\psi}\left(x^{*},x_{1}\right)}{6\left(G^{2}+\sigma^{2}\left(1+\log\left(\frac{1}{\delta}\right)\right)\right)t}}\), then \(\mathbf{D}_{\psi}\left(x^{*},x_{T+1}\right)\leq 2(2+\log T)\mathbf{D}_{\psi}\left(x^{*},x_{1}\right)\), and_
\[\frac{1}{T}\sum_{t=1}^{T}\left(f\left(x_{t}\right)-f\left(x^{*}\right)\right) \leq\frac{2\sqrt{6}}{\sqrt{T}}(2+\log T)\sqrt{\mathbf{D}_{\psi}\left(x^{*},x _{1}\right)\left(G^{2}+\sigma^{2}\left(1+\log\left(\frac{1}{\delta}\right) \right)\right)}.\]
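Algorithm 1 (not reproduced here) is the classical stochastic mirror descent update \(x_{t+1}=\arg\min_{x\in\mathcal{X}}\{\eta_{t}\langle\widehat{\nabla}f(x_{t}),x\rangle+\mathbf{D}_{\psi}(x,x_{t})\}\). As a minimal illustration, the sketch below instantiates it in the Euclidean setup (\(\psi=\frac{1}{2}\left\|\cdot\right\|_{2}^{2}\), so the mirror step reduces to a projected gradient step) with the constant step size from part (1) of Theorem 3.1; the gradient oracle, the projection, and upper bounds on \(\mathbf{D}_{\psi}(x^{*},x_{1})\), \(G\), and \(\sigma\) are assumed inputs.

```python
import numpy as np

def smd_constant_step(grad_oracle, project, x1, D1, G, sigma, delta, T):
    """Euclidean stochastic mirror descent with the constant step size of
    Theorem 3.1(1). grad_oracle(x) returns an unbiased stochastic gradient
    with sigma-sub-Gaussian noise; project(x) is the Euclidean projection
    onto the domain X; D1 upper bounds D_psi(x*, x1) = 0.5 ||x* - x1||^2."""
    eta = np.sqrt(D1 / (6.0 * (G**2 + sigma**2 * (1.0 + np.log(1.0 / delta))) * T))
    x = np.asarray(x1, dtype=float)
    avg = np.zeros_like(x)
    for _ in range(T):
        avg += x / T                       # running average of x_1, ..., x_T
        x = project(x - eta * grad_oracle(x))
    # By convexity (Jensen), f(avg) - f(x*) is at most the average gap
    # bounded in Theorem 3.1(1).
    return avg
```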
We define \(\xi_{t}:=\widehat{\nabla}f\left(x_{t}\right)-\nabla f\left(x_{t}\right)\) and let \(\mathcal{F}_{t}=\sigma\left(\xi_{1},\ldots,\xi_{t-1}\right)\) denote the natural filtration. Note that \(x_{t}\) is \(\mathcal{F}_{t}\)-measurable. The starting point of our analysis is the following inequality that follows from the standard stochastic mirror descent analysis (see, e.g., Lan (2020)). We include the proof in the Appendix for completeness.
**Lemma 3.2**.: _Lan (2020) For every iteration \(t\), we have_
\[A_{t} \coloneqq\eta_{t}\left(f\left(x_{t}\right)-f\left(x^{*}\right) \right)-\eta_{t}^{2}G^{2}+\mathbf{D}_{\psi}\left(x^{*},x_{t+1}\right)-\mathbf{D }_{\psi}\left(x^{*},x_{t}\right)\] \[\leq\eta_{t}\left\langle\xi_{t},x^{*}-x_{t}\right\rangle+\eta_{t }^{2}\left\|\xi_{t}\right\|_{*}^{2}.\]
We now turn our attention to our main concentration argument. Towards our goal of obtaining a high-probability convergence rate, we analyze the moment generating function for a random variable that is closely related to the left-hand side of the inequality above. We let \(\{w_{t}\}\) be a sequence where \(w_{t}\geq 0\) for all \(t\). We define
\[Z_{t} =w_{t}A_{t}-v_{t}\mathbf{D}_{\psi}\left(x^{*},x_{t}\right), \forall\,1\leq t\leq T\] \[\text{where }v_{t} =6\sigma^{2}\eta_{t}^{2}w_{t}^{2}\] \[\text{and }S_{t} =\sum_{i=t}^{T}Z_{i}, \forall\,1\leq t\leq T+1\]
Before proceeding with the analysis, we provide intuition for our approach. If we consider \(S_{1}\), we see that it combines the gains in function value gaps with weights given by the sequence \(\{w_{t}\}\) and the losses given by the Bregman divergence terms \(\mathbf{D}_{\psi}\left(x^{*},x_{t}\right)\) with coefficients \(v_{t}\) chosen based on the step size \(\eta_{t}\) and \(w_{t}\). The intuition here is that we want to transfer the error from the stochastic error terms on the RHS of Lemma 3.2 into the loss term \(v_{t}\mathbf{D}_{\psi}\left(x^{*},x_{t}\right)\), and then leverage the progression of the Bregman divergence \(\mathbf{D}_{\psi}\left(x^{*},x_{t+1}\right)-\mathbf{D}_{\psi}\left(x^{*},x_{t}\right)\) to absorb this loss. For the first step, we can do this by setting the coefficient \(v_{t}\) to match the coefficient of the divergence term that arises from the RHS of Lemma 3.2. For the second step, we aim at making all the divergence terms telescope by selecting \(v_{t}\) and \(w_{t}\) such that \(w_{t}+v_{t}\leq w_{t-1}\), which yields a telescoping sum of the terms \(w_{t}\mathbf{D}_{\psi}\left(x^{*},x_{t+1}\right)-w_{t-1}\mathbf{D}_{\psi}\left(x^{*},x_{t}\right)\). In the end, we obtain a bound for the function value gaps in terms of only deterministic quantities, namely \(\eta_{t}\), \(w_{t}\), \(G\), and the initial distance. In Theorem 3.3, we upper bound the moment generating function of \(S_{1}\) and derive a set of conditions for the weights \(\{w_{t}\}\) that allow us to absorb the stochastic errors. In Corollary 3.4, we show how to choose the weights \(\{w_{t}\}\) and obtain a convergence rate that matches the standard rates that hold in expectation.
We now give our main concentration argument that bounds the moment generating function of \(S_{t}\) inspired by the proof of Theorem 7.3 in Chung and Lu (2006).
**Theorem 3.3**.: _Suppose that \(w_{t}\eta_{t}^{2}\leq\frac{1}{4\sigma^{2}}\) for every \(1\leq t\leq T\). For every \(1\leq t\leq T+1\), we have_
\[\mathbb{E}\left[\exp\left(S_{t}\right)\mid\mathcal{F}_{t}\right]\leq \exp\left(3\sigma^{2}\sum_{i=t}^{T}w_{i}\eta_{i}^{2}\right).\]
Proof.: We proceed by induction on \(t\). Consider the base case \(t=T+1\). We have the inequality holds true trivially. Next, we consider \(1\leq t\leq T\). We have
\[\mathbb{E}\left[\exp\left(S_{t}\right)\mid\mathcal{F}_{t}\right] =\mathbb{E}\left[\exp\left(Z_{t}+S_{t+1}\right)\mid\mathcal{F}_{t }\right]\] \[=\mathbb{E}\left[\mathbb{E}\left[\exp\left(Z_{t}+S_{t+1}\right) \mid\mathcal{F}_{t+1}\right]\mid\mathcal{F}_{t}\right]. \tag{1}\]
We now analyze the inner expectation. Conditioned on \(\mathcal{F}_{t+1}\), \(Z_{t}\) is fixed. Using the inductive hypothesis, we obtain
\[\mathbb{E}\left[\exp\left(Z_{t}+S_{t+1}\right)\mid\mathcal{F}_{t+1}\right] \leq\exp\left(Z_{t}\right)\exp\left(3\sigma^{2}\sum_{i=t+1}^{T}w_{i}\eta_{i}^{ 2}\right). \tag{2}\]
Plugging into (1), we obtain
\[\mathbb{E}\left[\exp\left(S_{t}\right)\mid\mathcal{F}_{t}\right] \leq\mathbb{E}\left[\exp\left(Z_{t}\right)\mid\mathcal{F}_{t}\right]\exp\left( 3\sigma^{2}\sum_{i=t+1}^{T}w_{i}\eta_{i}^{2}\right). \tag{3}\]
By Lemma 3.2
\[\exp\left(Z_{t}\right) =\exp\left(w_{t}\big{(}\eta_{t}\left(f\left(x_{t}\right)-f\left(x^{ *}\right)\right)-\eta_{t}^{2}G^{2}+\mathbf{D}_{\psi}\left(x^{*},x_{t+1}\right)- \mathbf{D}_{\psi}\left(x^{*},x_{t}\right)\big{)}-v_{t}\mathbf{D}_{\psi}\left(x ^{*},x_{t}\right)\bigg{)}\] \[\leq\exp\left(w_{t}\eta_{t}\left\langle\xi_{t},x^{*}-x_{t}\right \rangle+w_{t}\eta_{t}^{2}\left\|\xi_{t}\right\|_{*}^{2}\right)\exp\left(-v_{t} \mathbf{D}_{\psi}\left(x^{*},x_{t}\right)\right).\]
Next, we analyze the first term in the last line of the above inequality in expectation. Since \(\mathbb{E}\left[\left\langle\xi_{t},x^{*}-x_{t}\right\rangle\mid\mathcal{F}_{t}\right]=0\), we can use Lemma 2.2 to obtain

\[\mathbb{E}\left[\exp\left(w_{t}\eta_{t}\left\langle\xi_{t},x^{*}-x_{t}\right\rangle+w_{t}\eta_{t}^{2}\left\|\xi_{t}\right\|_{*}^{2}\right)\mid\mathcal{F}_{t}\right]\leq\exp\left(3\sigma^{2}\left(w_{t}^{2}\eta_{t}^{2}\left\|x^{*}-x_{t}\right\|^{2}+w_{t}\eta_{t}^{2}\right)\right)\leq\exp\left(6\sigma^{2}w_{t}^{2}\eta_{t}^{2}\mathbf{D}_{\psi}\left(x^{*},x_{t}\right)+3\sigma^{2}w_{t}\eta_{t}^{2}\right) \tag{4}\]

where in the last inequality we used that \(\mathbf{D}_{\psi}\left(x^{*},x_{t}\right)\geq\frac{1}{2}\left\|x^{*}-x_{t}\right\|^{2}\) from the strong convexity of \(\psi\).
Plugging back into (3) and using that \(v_{t}=6\sigma^{2}\eta_{t}^{2}w_{t}^{2}\), we obtain the desired inequality
\[\mathbb{E}\left[\exp\left(S_{t}\right)\mid\mathcal{F}_{t}\right]\leq \exp\left(\left(6\sigma^{2}\eta_{t}^{2}w_{t}^{2}-v_{t}\right) \mathbf{D}_{\psi}\left(x^{*},x_{t}\right)+3\sigma^{2}\sum_{i=t}^{T}w_{i}\eta_ {i}^{2}\right)\] \[= \exp\left(3\sigma^{2}\sum_{i=t}^{T}w_{i}\eta_{i}^{2}\right).\]
Using Theorem 3.3 and Markov's inequality, we obtain the following convergence guarantee.
**Corollary 3.4**.: _Suppose the sequence \(\{w_{t}\}\) satisfies the conditions of Theorem 3.3 and that \(w_{t}+6\sigma^{2}\eta_{t}^{2}w_{t}^{2}\leq w_{t-1}.\) For any \(\delta>0\), with probability at least \(1-\delta\):_
\[\sum_{t=1}^{T}w_{t}\eta_{t}\left(f\left(x_{t}\right)-f\left(x^{*}\right) \right)+w_{T}\mathbf{D}_{\psi}\left(x^{*},x_{T+1}\right)\leq w_{0}\mathbf{D}_ {\psi}\left(x^{*},x_{1}\right)+\left(G^{2}+3\sigma^{2}\right)\sum_{t=1}^{T}w _{t}\eta_{t}^{2}+\log\left(\frac{1}{\delta}\right).\]
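For completeness, the Markov step is the following elementary calculation: applying Theorem 3.3 with \(t=1\), for any \(\lambda>0\),

\[\Pr\left[S_{1}\geq\lambda+3\sigma^{2}\sum_{t=1}^{T}w_{t}\eta_{t}^{2}\right]\leq e^{-\lambda-3\sigma^{2}\sum_{t=1}^{T}w_{t}\eta_{t}^{2}}\,\mathbb{E}\left[\exp\left(S_{1}\right)\right]\leq e^{-\lambda},\]

so taking \(\lambda=\log\left(\frac{1}{\delta}\right)\) and, on the complementary event, expanding \(S_{1}\) via Lemma 3.2 and telescoping the Bregman terms using \(w_{t}+v_{t}\leq w_{t-1}\) yields the stated bound.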
With the above result in hand, we complete the convergence analysis by showing how to define the sequence \(\{w_{t}\}\) with the desired properties. For the Stochastic Mirror Descent algorithm with fixed step sizes \(\eta_{t}=\frac{\eta}{\sqrt{T}}\), we set \(w_{T}=\frac{1}{12\sigma^{2}\eta^{2}}\) and \(w_{t-1}=w_{t}+\frac{6}{T}\sigma^{2}\eta^{2}w_{t}^{2}\) for all \(1\leq t\leq T\). For the Stochastic Mirror Descent algorithm with time-varying step sizes \(\eta_{t}=\frac{\eta}{\sqrt{t}}\), we set \(w_{T}=\frac{1}{12\sigma^{2}\eta^{2}\left(\sum_{t=1}^{T}\frac{1}{t}\right)}\) and \(w_{t-1}=w_{t}+6\sigma^{2}\eta_{t}^{2}w_{t}^{2}\) for all \(1\leq t\leq T\). In the appendix, we show that these choices give us the results in Theorem 3.1.
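The backward recursion for the weights is straightforward to materialize; a minimal sketch for the constant-step case follows (with \(\eta_{t}=\eta/\sqrt{T}\), the increment \(6\sigma^{2}\eta_{t}^{2}w_{t}^{2}\) equals \(\frac{6}{T}\sigma^{2}\eta^{2}w_{t}^{2}\)).

```python
def weights_constant_step(sigma, eta, T):
    """Build w_T, ..., w_0 backwards: w_T = 1 / (12 sigma^2 eta^2) and
    w_{t-1} = w_t + 6 sigma^2 eta_t^2 w_t^2 with eta_t = eta / sqrt(T)."""
    w = [0.0] * (T + 1)                  # w[t] stores w_t for t = 0, ..., T
    w[T] = 1.0 / (12.0 * sigma**2 * eta**2)
    for t in range(T, 0, -1):
        w[t - 1] = w[t] + (6.0 / T) * sigma**2 * eta**2 * w[t] ** 2
    return w
```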
### Analysis of Accelerated Stochastic Mirror Descent
In this section, we extend the analysis detailed in the previous section to analyze the Accelerated Stochastic Mirror Descent Algorithm (Algorithm (2)). We assume that \(f\) satisfies the following condition: for all \(x,y\in\mathcal{X}\)
\[f(y)\leq f(x)+\left\langle\nabla f\left(x\right),y-x\right\rangle+G\left\|y-x \right\|+\frac{L}{2}\left\|y-x\right\|^{2}. \tag{5}\]
Note that \(L\)-smooth functions, \(G\)-Lipschitz functions, and their sums all satisfy the above condition. The full convergence guarantees are given in Theorem B.3. We will only highlight the application of the previous analysis in this case. As before, we define \(\xi_{t}:=\widehat{\nabla}f\left(x_{t}\right)-\nabla f\left(x_{t}\right)\).
We also start with the inequalities shown in the standard analysis, e.g, from Lan (2020) (proof in the Appendix).
**Lemma 3.5**.: _Lan (2020) For every iteration \(t\), we have_
\[B_{t} :=\frac{\eta_{t}}{\alpha_{t}}\left(f\left(y_{t}\right)-f\left(x^{ *}\right)\right)-\frac{\eta_{t}\left(1-\alpha_{t}\right)}{\alpha_{t}}\left(f \left(y_{t-1}\right)-f\left(x^{*}\right)\right)\] \[\quad-\frac{\eta_{t}^{2}}{1-L\alpha_{t}\eta_{t}}G^{2}+\mathbf{D }_{\psi}\left(x^{*},z_{t}\right)-\mathbf{D}_{\psi}\left(x^{*},z_{t-1}\right)\] \[\leq\eta_{t}\left\langle\xi_{t},x^{*}-z_{t-1}\right\rangle+\frac {\eta_{t}^{2}}{1-L\alpha_{t}\eta_{t}}\left\|\xi_{t}\right\|_{*}^{2}.\]
We now turn our attention to our main concentration argument. Similar to the previous section, we define
\[Z_{t} =w_{t}B_{t}-v_{t}\mathbf{D}_{\psi}\left(x^{*},z_{t-1}\right), \forall\,1\leq t\leq T\] \[\text{where }v_{t} =6\sigma^{2}w_{t}^{2}\eta_{t}^{2}\] \[\text{and }S_{t} =\sum_{i=t}^{T}Z_{i}, \forall\,1\leq t\leq T+1\]
Notice that here we are following the exact same step as before. By transferring the error terms in the RHS of Lemma 3.5 into the Bregman divergence terms \(\mathbf{D}_{\psi}\left(x^{*},z_{t-1}\right)\), we can absorb them by setting the coefficients appropriately. In the same manner, we can show the following theorem.
**Theorem 3.6**.: _Suppose that \(\frac{w_{t}\eta_{t}^{2}}{1-L\alpha_{t}\eta_{t}}\leq\frac{1}{4\sigma^{2}}\) for every \(0\leq t\leq T\). For every \(1\leq t\leq T+1\), we have_
\[\mathbb{E}\left[\exp\left(S_{t}\right)\mid\mathcal{F}_{t}\right] \leq\exp\Bigg{(}3\sigma^{2}\sum_{i=t}^{T}w_{i}\frac{\eta_{i}^{2}}{1-L\alpha_ {i}\eta_{i}}\Bigg{)}.\]
**Corollary 3.7**.: _Suppose the sequence \(\left\{w_{t}\right\}\) satisfies the conditions of Theorem 3.6. For any \(\delta>0\), the following event holds with probability at least \(1-\delta\):_
\[\sum_{t=1}^{T}w_{t}\left(\frac{\eta_{t}}{\alpha_{t}}\left(f\left( y_{t}\right)-f\left(x^{*}\right)\right)-\frac{\eta_{t}\left(1-\alpha_{t} \right)}{\alpha_{t}}\left(f\left(y_{t-1}\right)-f\left(x^{*}\right)\right) \right)+w_{T}\mathbf{D}_{\psi}\left(x^{*},z_{T}\right)\] \[\leq w_{0}\mathbf{D}_{\psi}\left(x^{*},z_{0}\right)+\left(G^{2}+3 \sigma^{2}\right)\sum_{t=1}^{T}w_{t}\frac{\eta_{t}^{2}}{1-L\alpha_{t}\eta_{t }}+\log\left(\frac{1}{\delta}\right).\]
With the above result in hand, we can complete the convergence analysis by showing how to define the sequence \(\left\{w_{t}\right\}\) with the desired properties. Theorem B.3 can be obtained from corollaries B.4 and B.5 provided in the appendix, for constant and time-varying step sizes.
## 4 Non-convex case: Stochastic Gradient Descent and AdaGrad
In this section, we consider non-convex objectives and analyze the Stochastic Gradient Descent algorithm (Algorithm 3) along with two versions of AdaGrad: (1) AdaGrad-Norm Ward et al. (2019) (Algorithm 4), where the step size is a scalar, and (2) the original AdaGrad algorithm Duchi et al. (2011) (Algorithm 5), where the step size varies per coordinate. Since AdaGrad-Norm is simpler to analyze, most results for AdaGrad have been for this scalar version, either in expectation Ward et al. (2019); Faw et al. (2022); Li and Orabona (2020, 2019); Liu et al. (2022); Ene et al. (2021) or in high probability Kavis et al. (2021). For
the standard AdaGrad algorithm, to the best of our knowledge, Defossez et al. (2022) is the only work that has analyzed the standard version of AdaGrad in expectation, but their result does not adapt to noise and requires a strong assumption: the stochastic gradients are uniformly bounded. On the other hand, our high probability result for vanilla AdaGrad adapts to noise and holds under relatively mild assumptions.
Recall that we assume that the optimization problem has domain \(\mathcal{X}=\mathbb{R}^{d}\). As usual in non-convex analysis, we assume that \(f\) is an \(L\)-smooth function: \(\left\|\nabla f(x)-\nabla f(y)\right\|\leq L\left\|x-y\right\|\) for all \(x,y\in\mathbb{R}^{d}\). Smoothness implies the following quadratic upper bound that we will utilize: for all \(x,y\in\mathbb{R}^{d}\)
\[f(y)-f(x)\leq\left\langle\nabla f(x),y-x\right\rangle+\frac{L}{2}\left\|y-x \right\|^{2}. \tag{6}\]
### Analysis of Stochastic Gradient Descent
In this section, we will prove the following convergence guarantee of Algorithm 3.
**Theorem 4.1**.: _Assume \(f\) is \(L\)-smooth and satisfies Assumptions (1'), (2), (3). Let \(\Delta_{1}\coloneqq f(x_{1})-f_{*}\). With probability at least \(1-\delta\), the iterate sequence \((x_{t})_{t\geq 1}\) output by Algorithm 3 satisfies_
_(1) Setting \(\eta_{t}=\min\left\{\frac{1}{L};\sqrt{\frac{\Delta_{1}}{\sigma^{2}LT}}\right\}\),_
\[\frac{1}{T}\sum_{t=1}^{T}\left\|\nabla f(x_{t})\right\|^{2}\leq\frac{2\Delta_ {1}L}{T}+5\sigma\sqrt{\frac{\Delta_{1}L}{T}}+\frac{12\sigma^{2}\log\frac{1}{ \delta}}{T};\]
_(2) Setting \(\eta_{t}=\frac{1}{L\sqrt{t}}\),_
\[\frac{1}{T}\sum_{t=1}^{T}\left\|\nabla f(x_{t})\right\|^{2}\leq\frac{2\Delta_ {1}L+3\sigma^{2}\left(1+\log T\right)+12\sigma^{2}\log\frac{1}{\delta}}{\sqrt {T}}.\]
**Comparison with prior works:** When the time horizon \(T\) is known to the algorithm, by choosing the step size \(\eta\) as in part (1) of Theorem 4.1, the bound is adaptive to noise, i.e., when \(\sigma=0\) we recover the \(O(\frac{1}{T})\) convergence rate of the (deterministic) gradient descent algorithm. Notice that the bound in this case does not incur a \(\log T\) term. When \(T\) is unknown, an extra \(\log T\) appears as a result of setting the time-varying step size \(\eta_{t}=\frac{1}{L\sqrt{t}}\). This \(\log T\) appears as an additive term to the \(\log\frac{1}{\delta}\) term, as opposed to being multiplicative, i.e., \(\log T\log\frac{1}{\delta}\), as in previous works Li and Orabona (2020); Madden et al. (2020); Li and Liu (2022).
**Analysis:** To proceed, we define for \(t\geq 1\)
\[\Delta_{t}:=f(x_{t})-f_{*};\quad\xi_{t}:=\widehat{\nabla}f(x_{t})-\nabla f(x_ {t}).\]
We let \(\mathcal{F}_{t}:=\sigma\left(\xi_{1},\ldots,\xi_{t-1}\right)\) denote the natural filtration. Note that \(x_{t}\) is \(\mathcal{F}_{t}\)-measurable. The following lemma serves as a fundamental step of our analysis; its proof can be found in the appendix.
**Lemma 4.2**.: _For \(t\geq 1\), we have_
\[C_{t} \coloneqq\eta_{t}\left(1-\frac{L\eta_{t}}{2}\right)\left\|\nabla f (x_{t})\right\|^{2}+\Delta_{t+1}-\Delta_{t}\] \[\leq\left(L\eta_{t}^{2}-\eta_{t}\right)\left\langle\nabla f(x_{t} ),\xi_{t}\right\rangle+\frac{L\eta_{t}^{2}}{2}\left\|\xi_{t}\right\|^{2}. \tag{7}\]
```
Parameters: \(x_{1}\), \(b_{0}\), \(\eta>0\).
for \(t=1\) to \(T\)
  \(b_{t}=\sqrt{b_{0}^{2}+\sum_{i=1}^{t}\left\|\widehat{\nabla}f(x_{i})\right\|^{2}}\)
  \(x_{t+1}=x_{t}-\frac{\eta}{b_{t}}\widehat{\nabla}f(x_{t})\)
```
**Algorithm 4** AdaGrad-Norm
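For concreteness, Algorithm 4 transcribes directly into code; a minimal sketch with a placeholder gradient oracle follows, where the running sum \(b_{t}^{2}=b_{t-1}^{2}+\|\widehat{\nabla}f(x_{t})\|^{2}\) is accumulated incrementally rather than recomputed.

```python
import numpy as np

def adagrad_norm(grad_oracle, x1, eta, b0, T):
    """AdaGrad-Norm (Algorithm 4): a single scalar step size eta / b_t,
    where b_t^2 accumulates the squared norms of all stochastic gradients."""
    x = np.asarray(x1, dtype=float)
    b_sq = b0**2
    for _ in range(T):
        g = grad_oracle(x)            # unbiased stochastic gradient
        b_sq += np.dot(g, g)          # b_t^2 = b_{t-1}^2 + ||g_t||^2
        x = x - (eta / np.sqrt(b_sq)) * g
    return x
```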
Now we can follow a concentration argument similar to the one from the convex setting. The difference is that the error term on the RHS of (7) can now be transferred into the gradient term \(\left\|\nabla f(x_{t})\right\|^{2}\) instead of a function value gap term. This actually makes things easier, since this term can be readily absorbed by the gradient term in \(C_{t}\), and we do not have to carefully impose an additional condition on \(w_{t}\) to obtain a telescoping sum. For \(w_{t}\geq 0\), we define
\[Z_{t} =w_{t}C_{t}-v_{t}\left\|\nabla f(x_{t})\right\|^{2}, \forall\,1\leq t\leq T\] \[\text{where }v_{t} =3\sigma^{2}w_{t}^{2}\eta_{t}^{2}(\eta_{t}L-1)^{2}\] \[\text{and }S_{t} =\sum_{i=t}^{T}Z_{i}. \forall\,1\leq t\leq T+1\]
Using the same technique as in the previous Section, we can prove the following key inequality.
**Theorem 4.3**.: _Suppose that for all \(1\leq t\leq T\), \(\eta_{t}\) and \(w_{t}\) satisfy \(0\leq w_{t}\eta_{t}^{2}L\leq\frac{1}{2\sigma^{2}}\). Then_

\[\mathbb{E}\left[\exp\left(S_{t}\right)\mid\mathcal{F}_{t}\right]\leq\exp\left(3\sigma^{2}\sum_{i=t}^{T}\frac{w_{i}\eta_{i}^{2}L}{2}\right). \tag{8}\]
Markov's inequality gives us the following guarantee.
**Corollary 4.4**.: For all \(1\leq t\leq T\), if \(\eta_{t}L\leq 1\) and \(0\leq w_{t}\eta_{t}^{2}L\leq\frac{1}{2\sigma^{2}}\), then with probability at least \(1-\delta\),
\[\sum_{t=1}^{T}\left[w_{t}\eta_{t}\left(1-\frac{\eta_{t}L}{2} \right)-v_{t}\right]\left\|\nabla f(x_{t})\right\|^{2}+w_{T}\Delta_{T+1}\] \[\leq w_{1}\Delta_{1}+\left(\sum_{t=2}^{T}(w_{t}-w_{t-1})\Delta_{t}+3 \sigma^{2}\sum_{t=1}^{T}\frac{w_{t}\eta_{t}^{2}L}{2}\right)+\log\frac{1}{ \delta}. \tag{9}\]
Equipped with Lemma 4.2 and Corollary 4.4, we are ready to prove Theorem 4.1 by specifying choices of \(w_{t}\) that satisfy the conditions of Theorem 4.3. In the first case, we choose \(\eta_{t}=\eta\) and \(w_{t}=w=\frac{1}{6\sigma^{2}\eta}\), where \(\eta=\min\{\frac{1}{L};\sqrt{\frac{\Delta_{1}}{\sigma^{2}LT}}\}\). In the second case, we set \(\eta_{t}=\frac{\eta}{\sqrt{t}}\) and \(w_{t}=w=\frac{1}{6\sigma^{2}\eta}\), where \(\eta=\frac{1}{L}\). We show the full proof in the appendix.
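Concretely, the two step-size regimes of Theorem 4.1 can be packaged as a small helper; a sketch, assuming the problem constants \(L\), \(\sigma\), and (in case (1)) an upper bound on \(\Delta_{1}\) are available:

```python
import numpy as np

def sgd_step_size(t, L, sigma, Delta1, T=None):
    """Step sizes from Theorem 4.1. With a known horizon T, use the
    constant, noise-adaptive choice (1); otherwise use the time-varying
    choice (2), eta_t = 1 / (L sqrt(t))."""
    if T is not None:
        if sigma == 0.0:                 # deterministic case: eta = 1 / L
            return 1.0 / L
        return min(1.0 / L, np.sqrt(Delta1 / (sigma**2 * L * T)))
    return 1.0 / (L * np.sqrt(t))
```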
### High probability convergence of AdaGrad-Norm and AdaGrad
In this section, we present our main results for the high probability convergence for non-convex objectives of AdaGrad-Norm Ward et al. (2019) (Algorithm 4) as well as the standard AdaGrad Duchi et al. (2011) algorithm (Algorithm 5) that updates each coordinate separately. Here, \(d\in\mathbb{N}\) denotes the dimension of the problem, \(v_{i}\) denotes the \(i\)-th coordinate of a vector \(v\), and \(\widehat{\nabla}_{i}f(x_{t})\) denotes the \(i\)-th coordinate of the stochastic gradient at time \(t\).
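For contrast with the scalar variant above, a minimal sketch of the per-coordinate update of Algorithm 5, in which each coordinate \(i\) maintains its own accumulator \(b_{t,i}\), might look as follows (again with a placeholder gradient oracle):

```python
import numpy as np

def adagrad(grad_oracle, x1, eta, b0, T):
    """Per-coordinate AdaGrad (Algorithm 5): coordinate i is scaled by
    eta / b_{t,i}, where b_{t,i}^2 accumulates the squared i-th coordinates
    of the stochastic gradients."""
    x = np.asarray(x1, dtype=float)
    b_sq = np.full_like(x, b0**2)     # one accumulator per coordinate
    for _ in range(T):
        g = grad_oracle(x)
        b_sq += g * g                 # elementwise accumulation
        x = x - eta * g / np.sqrt(b_sq)
    return x
```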
Comparison with prior works.Ward et al. (2019); Faw et al. (2022) show the convergence of AdaGrad-Norm with a \(\text{poly}\left(\frac{1}{\delta}\right)\) dependency, where \(1-\delta\) is the success probability. The latter relaxes several assumptions made in the former, including the boundedness of the gradients and of the noise variance.
When assuming sub-Gaussian noise, Kavis et al. (2021) show convergence in high probability, but still assume that the gradients are bounded, which circumvents many of the difficulties due to the error term. We remove this assumption and establish the convergence of AdaGrad-Norm in Theorem 4.5. Unlike existing work, the technique employed to prove this theorem readily extends to the standard version of AdaGrad (Algorithm 5) with per-coordinate updates.
For simplicity, we let \(\Delta_{t}:=f(x_{t})-f_{*}\), where \(f_{*}\) is any valid lower bound for \(f\).
**Theorem 4.5**.: _Assume \(f\) is \(L\)-smooth and satisfies Assumptions (1'), (2), and (3). With probability at least \(1-\delta\), the iterate sequence \((x_{t})_{t\geq 1}\) output by AdaGrad-Norm (Algorithm 4) satisfies_
\[\frac{1}{T}\sum_{t=1}^{T}\left\|\nabla f(x_{t})\right\|^{2}\leq g(\delta)\cdot O \left(\frac{\sigma}{\sqrt{T}}+\frac{r(\delta)}{T}\right).\]
_where_
\[g(\delta) :=O\left(\Delta_{1}+c(\delta)\sqrt{\log\frac{T}{\delta}}+L\log \left(\sigma\sqrt{T}+r(\delta)\right)\right)\] \[c(\delta) :=O\left(\sigma^{3}\log\left(\frac{1}{\delta}\right)+\sigma\log \left(1+\sigma^{2}T+\sigma^{2}\log\frac{1}{\delta}\right)+\sigma\log\left( \sigma\sqrt{T}+r(\delta)\right)\right),\text{ and }\] \[r(\delta) :=O(\Delta_{1}+\sigma^{2}\log\frac{1}{\delta}+L\log L)\]
_are polylog terms._
The next theorem gives the first high-probability convergence result for vanilla AdaGrad in the non-convex regime.
**Theorem 4.6**.: _Assume \(f\) is \(L\)-smooth and satisfies Assumptions (1'), (2), and (3). With probability at least \(1-\delta\), the iterate sequence \((x_{t})_{t\geq 1}\) output by AdaGrad (Algorithm 5) satisfies_
\[\frac{1}{T}\sum_{t=1}^{T}\left\|\nabla f(x_{t})\right\|_{1}^{2}\leq g(\delta) \cdot O\left(\frac{\left\|\sigma\right\|_{1}}{\sqrt{T}}+\frac{r(\delta)}{T} \right),\]
_where_
\[g(\delta) :=O\left(\Delta_{1}+\left(d\sigma_{\max}+\sum_{i=1}^{d}c_{i}( \delta)\right)\sqrt{\log\frac{dT}{\delta}}+dL\log\left(\left\|\sigma\right\|_ {1}\sqrt{T}+r(\delta)\right)\right),\] \[c_{i}(\delta) :=O\left(\sigma_{i}^{3}\log\left(\frac{d}{\delta}\right)+\sigma_ {i}\log\left(1+\sigma_{i}^{2}T+\sigma_{i}^{2}\log\frac{d}{\delta}\right)+ \left\|\sigma\right\|_{1}\log\left(\left\|\sigma\right\|_{1}\sqrt{T}+r( \delta)\right)\right),\text{ and }\] \[r(\delta) :=O\left(\Delta_{1}+\left\|\sigma^{2}\right\|_{1}\log\left( \frac{d}{\delta}\right)+\left\|\sigma\right\|_{1}\sqrt{\log\frac{d}{\delta}}+ Ld\log L\right),\]
_are polylog terms._
Both of these results are adaptive to noise: the rate \(\tilde{O}\left(\frac{1}{\sqrt{T}}\right)\) improves to \(\tilde{O}\left(\frac{1}{T}\right)\) as the noise \(\sigma\) approaches \(0\). Furthermore, these results hold regardless of how \(\eta\) and \(b_{0}\) are set.
Analysis overview.The first key new technique is that, unlike prior works, we do not divide by the step size, which makes the analyses of AdaGrad-Norm and AdaGrad virtually the same. We can thus focus on AdaGrad-Norm. To obtain a high probability bound, our analysis of AdaGrad-Norm utilizes the same martingale concentration technique as presented throughout this paper to bound the error terms \(\eta_{t}\left\langle\nabla f(x_{t}),\xi_{t}\right\rangle\). However, the step size \(\eta_{t}=\frac{\eta}{b_{t}}\) now depends on the randomness at time \(t\) through \(b_{t}\), preventing us from applying Lemma 2.2. To circumvent this, inspired by Ward et al. (2019), we introduce a proxy step size based on \(a_{t}^{2}:=b_{t-1}^{2}+\left\|\nabla f(x_{t})\right\|^{2}\), which replaces the stochastic gradient with the true gradient at time \(t\) for analysis purposes. Using that along with standard smoothness analysis, we obtain:
**Lemma 4.7**.: _For \(t\geq 1\), let \(\xi_{t}=\widehat{\nabla}f(x_{t})-\nabla f(x_{t})\), \(a_{t}^{2}:=b_{t-1}^{2}+\left\|\nabla f(x_{t})\right\|^{2}\), and \(M_{t}=\max_{i\leq t}\left\|\xi_{i}\right\|\), then we have_
\[\sum_{t=1}^{T}\frac{\left\|\nabla f(x_{t})\right\|^{2}}{b_{t}}\leq\frac{\Delta _{1}}{\eta}+\frac{M_{T}}{2}\left[\sum_{t=1}^{T}\frac{\left\|\nabla f(x_{t}) \right\|^{2}}{a_{t}^{2}}+\sum_{t=1}^{T}\frac{\left\|\xi_{t}\right\|^{2}}{b_{t }^{2}}\right]-\sum_{t=1}^{T}\frac{1}{a_{t}}\left\langle\nabla f(x_{t}),\xi_{t }\right\rangle+\sum_{t=1}^{T}\frac{L\eta}{2b_{t}^{2}}\left\|\widehat{\nabla}f( x_{t})\right\|^{2}.\]
Now, the randomness at time \(t\) of the error term \(\frac{1}{a_{t}}\left\langle\nabla f(x_{t}),\xi_{t}\right\rangle\) only depends on \(\xi_{t}\), which follows a sub-Gaussian distribution with mean \(0\). Hence, we can utilize our previous techniques to bound \(-\sum_{t=1}^{T}\frac{1}{a_{t}}\left\langle\nabla f(x_{t}),\xi_{t}\right\rangle\) with high probability. Comparing to the analysis in expectation from Ward et al. (2019), terms like \(\sum_{t=1}^{T}\frac{\left\|\nabla f(x_{t})\right\|^{2}}{a_{t}^{2}}\) must be handled more carefully to obtain a high probability bound. A bound for \(M_{T}\) has also been derived in previous works by Li and Orabona (2020); Liu et al. (2022). Combining with Lemma 4.7, we obtain the following lemma.
**Lemma 4.8**.: _With probability at least \(1-2\delta\), we have_
\[\sum_{t=1}^{T}\frac{\left\|\nabla f(x_{t})\right\|^{2}}{b_{t}}\leq\frac{ \Delta_{1}}{\eta}+\sigma\sqrt{\log\frac{T}{\delta}}\left[8\log\left(\frac{b_{ T}}{b_{0}}\right)+5\sum_{t=1}^{T}\frac{\left\|\xi_{t}\right\|^{2}}{b_{t}^{2}} \right]+\sigma\sqrt{\log\frac{1}{\delta}}+L\eta\log\frac{b_{T}}{b_{0}}.\]
Since \(\sum_{t=1}^{T}\frac{\left\|\nabla f(x_{t})\right\|^{2}}{b_{t}}\geq\frac{1}{b_{T}}\sum_{t=1}^{T}\left\|\nabla f(x_{t})\right\|^{2}\), it suffices to bound \(b_{T}\) and \(\sum_{t=1}^{T}\frac{\left\|\xi_{t}\right\|^{2}}{b_{t}^{2}}\) from this point on (see Lemma D.1 and Lemma D.6). The analysis of these terms utilizes martingale techniques similar to those used throughout this paper; the details are deferred to Section D of the Appendix. For the coordinate version of AdaGrad, since our techniques only rely on addition and scalar multiplication, we can (with some effort) generalize them to the standard per-coordinate AdaGrad algorithm. The full proofs for vanilla AdaGrad are presented in Section E of the Appendix.
## 5 Conclusion
In this work, we present a generic approach to prove high probability convergence of stochastic gradient methods under sub-Gaussian noise. In the convex case, we show high probability bounds for stochastic and accelerated stochastic mirror descent that depend on the distance from the initial solution to the optimal solution and do not require the bounded domain or bounded Bregman divergence assumptions. In the non-convex case, we apply the same approach and obtain a high probability bound for SGD that improves over existing works. We also show that the boundedness of the gradients can be removed when showing high probability convergence of AdaGrad-Norm. Finally, we show that our analysis for AdaGrad-Norm can be extended to the standard per-coordinate AdaGrad algorithm to obtain one of the first high probability convergence results for standard AdaGrad.
For future work, it would be interesting to see whether our method can be applied to analyze AdaGrad-Norm and/or AdaGrad in the convex setting without restrictive assumptions. Extending this approach to the heavy tail setting and finding its applications in other problems are some of the potential future directions.
|
2307.16652 | Sequential and Shared-Memory Parallel Algorithms for Partitioned Local
Depths | In this work, we design, analyze, and optimize sequential and shared-memory
parallel algorithms for partitioned local depths (PaLD). Given a set of data
points and pairwise distances, PaLD is a method for identifying strength of
pairwise relationships based on relative distances, enabling the identification
of strong ties within dense and sparse communities even if their sizes and
within-community absolute distances vary greatly. We design two algorithmic
variants that perform community structure analysis through triplet comparisons
of pairwise distances. We present theoretical analyses of computation and
communication costs and prove that the sequential algorithms are communication
optimal, up to constant factors. We introduce performance optimization
strategies that yield sequential speedups of up to $29\times$ over a baseline
sequential implementation and parallel speedups of up to $19.4\times$ over
optimized sequential implementations using up to $32$ threads on an Intel
multicore CPU. | Aditya Devarakonda, Grey Ballard | 2023-07-31T13:32:39Z | http://arxiv.org/abs/2307.16652v1 | # Sequential and Shared-Memory Parallel Algorithms for Partitioned Local Depths
###### Abstract
In this work, we design, analyze, and optimize sequential and shared-memory parallel algorithms for partitioned local depths (PaLD). Given a set of data points and pairwise distances, PaLD is a method for identifying strength of pairwise relationships based on relative distances, enabling the identification of strong ties within dense and sparse communities even if their sizes and within-community absolute distances vary greatly. We design two algorithmic variants that perform community structure analysis through triplet comparisons of pairwise distances. We present theoretical analyses of computation and communication costs and prove that the sequential algorithms are communication optimal, up to constant factors. We introduce performance optimization strategies that yield sequential speedups of up to \(29\times\) over a baseline sequential implementation and parallel speedups of up to \(19.4\times\) over optimized sequential implementations using up to \(32\) threads on an Intel multicore CPU.
## 1 Introduction.
Partitioned local depths (PaLD) is a method for revealing community structure in distance-based data [2]. Given pairwise distances (or dissimilarities) of a set of points, PaLD computes another pairwise measure called cohesion that measures closeness based on relative distances. By relying on relative distance, PaLD is able to use a universal threshold to distinguish between strong and weak ties without defining neighborhoods by a fixed number of neighbors, a neighborhood size, or an absolute distance threshold. In this way, PaLD can identify neighborhoods of varying size and density, making it useful for data where the relationships among points behave differently across the space.
The input to PaLD is a distance matrix, and the output is a cohesion matrix. As detailed in Section 2, computing cohesion requires determining the size of the local neighborhood of each pair of points and then computing contributions to cohesion values based on neighborhood sizes. In each case, the fundamental operation is a comparison of the pairwise distances among triplets of points. Given \(n\) points, this yields an arithmetic complexity of \(O(n^{3})\). The goal of this paper is to develop efficient sequential and shared-memory parallel algorithms for scaling PaLD to datasets of size up to \(O(10^{5})\), making it computationally feasible to analyze datasets that fit in memory on a single server. Section 3 presents the structure of the PaLD computation and our two main algorithmic approaches, which we call pairwise and triplet, respectively. As an \(O(n^{3})\) computation, PaLD shares many similarities with dense matrix multiplication (GEMM), and our algorithmic design borrows from ideas of cache-efficient algorithms for GEMM [3, 9, 18]. For example, the basic computation is a comparison between distances of points \(x,y,z\), which involves distance matrix entries \(d_{xy}\), \(d_{yz}\), and \(d_{xz}\) and has an access pattern similar to the fused multiply-adds (FMAs) within GEMM. There are a few key differences between PaLD and GEMM. First, because of symmetric distances, the order of the points is irrelevant, so rather than requiring consideration of all \(n^{3}\) possible values of \(x,y,z\), we need consider only \(\binom{n}{3}\approx n^{3}/6\) unique triplets. Second, while the memory access of distances is regular, the updates of the cohesion matrix require branching based on distance comparisons. Finally, the computation requires two passes because cohesion updates depend on the sizes of local neighborhoods. Each pass requires a varying mix of integer and floating point operations in addition to the branching. The pairwise and triplet approaches navigate a tradeoff between exploiting symmetry and achieving regular data access and parallelization.
In Section 4 we prove a lower bound on the cache efficiency of any PaLD algorithm, and we show that both of our algorithms achieve optimal cache performance, up to constant factors. By exploiting symmetry and applying cache blocking, we obtain data locality in cache and minimize the number of reads and writes of matrix values. Section 5 details our low-level optimizations of the two PaLD algorithms. We show that branch avoidance has the highest impact on sequential performance given the high cost of branch misprediction [10, 13, 14]. Along with other optimizations including cache blocking and vectorization, we show performance improvements over naive sequential code of up to \(29\times\). In Section 6 we design, optimize, and evaluate OpenMP parallel versions
of the two PaLD algorithms. We show that the pairwise algorithm enables regular data access patterns and loop-based parallelism that can largely avoid write conflicts. The triplet algorithm exploits more symmetry to reduce arithmetic operations but requires task-based parallelism due to more complicated data access patterns and write conflicts. We also apply Non-Uniform Memory Access (NUMA) optimizations when scaling across sockets. We achieve strong scaling speedups up to \(19.4\times\) for pairwise and \(13.2\times\) for triplet over their optimized sequential versions on 32 threads. Finally, we describe a text analysis application in Section 7, demonstrating the utility of PaLD on larger datasets than previously considered, and we show a parallel speedup of \(16.7\times\) on a task with \(n=2712\) using 32 threads.
## 2 Background.
Given a set of points and a pairwise distance metric, partitioned local depth (PaLD) algorithms determine the pairwise cohesion between all pairs of points in a dataset [2]. Assuming that the dataset comprises sufficiently separated subsets, cohesion values are invariant to contraction and dilation of within-subset distances. The community structure revealed by cohesion values captures the concept of near neighbors based on relative positioning, adapting to varying density. This approach is more flexible than standard cluster labeling or nearest neighbor approaches. Density-based approaches (e.g., DBSCAN) [5, 6, 7] that attempt to combine points into high- and low-density groups based on pairwise distances include thresholding (tuning) parameters to reflect locality and cluster size. Likewise, \(k\)-nearest neighbor (KNN) approaches [8] attempt to group points via comparisons against their \(k\) nearest neighbors (using absolute distances). The tuning parameter, \(k\), controls the neighborhood size for a given point and is often fixed for all points. Cohesion values depend on triplet distance comparisons (as opposed to absolute distances), which require only measures of relative similarity and can be more reliable than exact numerical distances for analyzing high-dimensional, non-Euclidean data. PaLD requires \(O(n^{3})\) operations to compute cohesion values without assumptions on the underlying probability distribution and without tuning parameters.
Given a set of points \(\mathcal{S}\), the _local focus_ of a pair of points \(x,y\in\mathcal{S}\) is the set of all points within distance \(d_{xy}\) of either \(x\) or \(y\), where \(d_{xy}\) is the distance between \(x\) and \(y\): \(\mathcal{U}_{xy}=\{z\in\mathcal{S}\mid d_{xz}\leq d_{xy}\textbf{ or }\ d_{yz}\leq d_{xy}\}.\) We let \(u_{xy}=|\mathcal{U}_{xy}|\) denote the size of the local focus.
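As a concrete reference, the definition translates directly into code. The following C sketch (illustrative, not the authors' implementation) counts the focus size for one pair, assuming a dense row-major distance matrix with zero diagonal:

```
#include <stddef.h>

/* Size of the local focus U_{xy}: the points z with d_xz <= d_xy or
 * d_yz <= d_xy. D is a dense, row-major n-by-n distance matrix with
 * D[i*n+i] = 0, so x and y are always counted and u_xy >= 2. */
static int local_focus_size(const double *D, size_t n, size_t x, size_t y) {
    const double dxy = D[x * n + y];
    int u = 0;
    for (size_t z = 0; z < n; z++)
        if (D[x * n + z] <= dxy || D[y * n + z] <= dxy)
            u++;
    return u;
}
```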
The _local depth_ of a point \(x\in\mathcal{S}\) is the probability that, given a uniformly chosen random second point \(Y\in\mathcal{S}\) and a random third point \(Z\) chosen uniformly from the local focus \(\mathcal{U}_{xY}\), \(Z\) is closer to \(x\) than \(Y\):
\[\ell_{x}=\Pr\left[d_{Zx}<d_{ZY}\mid Y\sim\mathbb{U}(\mathcal{S}\backslash\{x\}),Z\sim\mathbb{U}(\mathcal{U}_{xY})\right]. \tag{1}\]
The cohesion of a point \(z\) to another point \(x\) is a part of the local depth \(\ell_{x}\) and is defined as
\[c_{xz}=\Pr\left[Z=z\textbf{ and }\ d_{Zx}<d_{ZY}\right]. \tag{2}\]
The random variables \(Y\) and \(Z\) in Eq. (2) are chosen from the same distributions as in Eq. (1); we drop the explicit distribution notation here and in what follows. This implies that \(\ell_{x}=\sum_{z\in\mathcal{S}}c_{xz}\), i.e., cohesion is partitioned local depth. The cohesion matrix, \(C\), can be used to analyze community structure. For example, two points share a particularly strong tie if the cohesion of one point to the other is greater than that expected from a random focus point of another random point.
## 3 PaLD Algorithms Design.
In order to compute the cohesion of all pairs of points, we can again use the law of total probability to partition \(c_{xz}\) across all points \(y\in\mathcal{S}\):
\[c_{xz}=\sum_{y\in\mathcal{S}}\Pr\left[Y=y\textbf{ and }\ Z=z\textbf{ and }\ d_{Zx}<d_{ZY}\right].\]
Using the law of conditional probability, this becomes
\[c_{xz}=\sum_{y\in\mathcal{S}}\Pr\left[d_{zx}<d_{zy}\mid Y=y,Z=z\right]\\ \cdot\Pr\left[Z=z\mid Y=y\right]\cdot\Pr\left[Y=y\right]\]
which implies
\[c_{xz}=\sum_{y\in\mathcal{S}}\mathbb{I}_{d_{xz}\leq d_{yz}}\cdot\frac{\mathbb{I}_{d_{xz}\leq d_{xy}}}{u_{xy}}\cdot\frac{1}{n-1}=\frac{1}{n-1}\sum_{y\in\mathcal{S}}g_{xyz}, \tag{3}\]
where \(\mathbb{I}\) is the indicator function and we have defined
\[g_{xyz}=\mathbb{I}_{d_{xz}\leq d_{yz}}\cdot\mathbb{I}_{d_{xz}\leq d_{xy}}\,/ \,u_{xy}. \tag{4}\]
The task is then to compute \(g_{xyz}\) for all \(x,y,z\in\mathcal{S}\), a total of \(n^{3}\) values. However, only about one third of the \(g_{xyz}\) values are nonzero because, given three points with unique pairwise distance values, only one pair has the minimum distance. For example, given points \(x,y,z\in\mathcal{S}\), if \(x\) and \(y\) are the closest pair, then \(g_{xzy}\) and \(g_{yzx}\) are nonzero, but \(g_{xyz}=g_{yxz}=g_{zxy}=g_{zyx}=0\). To compute the nonzero values \(g_{xzy}\) and \(g_{yzx}\), we need the values \(u_{xz}\) and \(u_{yz}\). The size of any given local focus can be computed as \(u_{xy}=\sum_{z\in\mathcal{S}}\mathbb{I}_{d_{xz}\leq d_{xy}\textbf{ or }d_{yz}\leq d_{xy}}\).
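Putting the pieces together, a direct \(O(n^{3})\) evaluation of Eq. (3) looks as follows (a minimal sketch reusing local_focus_size from above; the blocked algorithms developed below compute the same quantities with far better locality):

```
#include <stddef.h>

/* Naive cohesion: c_xz = (1/(n-1)) * sum_y g_xyz, per Eqs. (3) and (4).
 * C must be a zero-initialized, row-major n-by-n array. */
static void cohesion_naive(const double *D, double *C, size_t n) {
    for (size_t x = 0; x < n; x++)
        for (size_t y = 0; y < n; y++) {
            if (y == x) continue;              /* Y is drawn from S \ {x} */
            const double dxy = D[x * n + y];
            const double w = 1.0 / local_focus_size(D, n, x, y);
            for (size_t z = 0; z < n; z++) {
                /* g_xyz is nonzero iff d_xz <= d_yz and d_xz <= d_xy */
                if (D[x * n + z] <= D[y * n + z] && D[x * n + z] <= dxy)
                    C[x * n + z] += w;
            }
        }
    for (size_t i = 0; i < n * n; i++)
        C[i] /= (double)(n - 1);
}
```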
We consider two algorithmic approaches to computing the local focus sizes and the final cohesion matrix
that take advantage of the symmetry. The first approach, which we call the _pairwise algorithm_, considers all \(\binom{n}{2}\) pairs of points, and for each pair, first determines the size of its local focus and then computes contributions to the cohesion matrix from all points within the local focus. The second approach, which we call the _triplet algorithm_, considers all \(\binom{n}{3}\) triplets of points, and for each triplet, determines which two of the three local foci the triplet contributes to and then (in a second pass) determines which two of the six cohesion matrix entries the triplet contributes to. We analyze and compare the two algorithms in Section 4.
### Pairwise Algorithm.
The entry-wise pairwise algorithm is given as Algorithm 1. The idea is to perform the computations for each pair of points \(x\) and \(y\). To compute \(g_{xyz}\) for each third point \(z\), we first must compute the size of the local focus, \(u_{xy}\). This requires a pass over all \(n\) points with two comparisons and a possible integer increment. A second pass over all \(n\) points determines, for points in the local focus, which of the points \(x\) or \(y\) the third point supports, and the cohesion matrix is updated accordingly. Note that only one local focus size need be stored at any one time, requiring minimal temporary memory.
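Since Algorithm 1 is referenced but not reproduced here, the following C sketch captures its two passes for a single pair \((x,y)\) (illustrative names; distance ties are skipped, as in the optimized implementations discussed in Section 5):

```
#include <stddef.h>

/* One iteration of the entry-wise pairwise algorithm for the pair (x, y):
 * pass 1 computes the focus size u_xy, pass 2 credits each focus member z
 * to c_xz or c_yz. Scaling by 1/(n-1) is deferred to the caller. */
static void pairwise_update(const double *D, double *C, size_t n,
                            size_t x, size_t y) {
    const double dxy = D[x * n + y];
    int uxy = 0;
    for (size_t z = 0; z < n; z++)                /* pass 1: focus size */
        if (D[x * n + z] <= dxy || D[y * n + z] <= dxy)
            uxy++;
    const double w = 1.0 / uxy;
    for (size_t z = 0; z < n; z++) {              /* pass 2: cohesion */
        const double dxz = D[x * n + z], dyz = D[y * n + z];
        if (dxz <= dxy || dyz <= dxy) {           /* z in the local focus */
            if (dxz < dyz)      C[x * n + z] += w;   /* z supports x */
            else if (dyz < dxz) C[y * n + z] += w;   /* z supports y */
        }
    }
}
```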
To improve the cache locality, we block the algorithm as follows: instead of considering only one pair of points, we consider two sets of points \(\mathcal{X}\) and \(\mathcal{Y}\) and consider all the pairs \((x,y)\in\mathcal{X}\times\mathcal{Y}\). In this way, we obtain locality on the distance matrix block \(D_{\mathcal{X},\mathcal{Y}}\) and a temporary block of local focus sizes \(U_{\mathcal{X},\mathcal{Y}}\).
As in the entry-wise algorithm, the blocked algorithm makes two passes over all \(n\) third points. The first pass computes \(U_{\mathcal{X},\mathcal{Y}}\), a local focus size block, and the second pass makes updates to the cohesion matrix.
Figure 1 shows the dependencies among the distance, local focus, and cohesion matrices for the blocked (\(b=4\)) pairwise algorithm. The red blocks correspond to the entries of the distance matrix that are read and re-used while processing the pair of blocks \(\mathcal{X}\) and \(\mathcal{Y}\) (the pattern is the same in both passes, and \(D_{\mathcal{X},\mathcal{Y}}\) remains in fast memory through both passes). The orange blocks represent entries of the local focus matrix which are computed in fast memory during the first pass and used during the second pass. The green blocks of the cohesion matrix are re-used during the second pass before being written back to slow memory. The blue blocks represent dependencies between entries of \(D\), \(U\), and \(C\) for one entry-wise iteration.
### Triplet Algorithm.
The entry-wise triplet algorithm is given as Algorithm 2. In Algorithm 1, if a third point \(z\) is in the local focus of \(x\) and \(y\) and is closer to \(x\), then only the support of \(z\) for \(x\) is recorded in \(C\) (\(c_{xz}\) is updated). If \(z\) is closer to \(x\) in a focus with \(y\), then \(x\) is closer to \(z\) in its focus with \(y\), so \(c_{zx}\) is updated only when the pair \((z,y)\) is later processed. The idea of the triplet algorithm is to minimize the number of distance comparisons: by performing all the updates for each triplet of points at once, we avoid redundant comparisons. However, this method requires that the local focus sizes are pre-computed for all pairs of points within the triplet, so it requires more temporary memory.
We can also block the triplet algorithm to obtain better cache locality. Instead of a single triplet of points, we consider three blocks \(\mathcal{X},\mathcal{Y},\mathcal{Z}\) and all triplets \((x,y,z)\in\mathcal{X}\times\mathcal{Y}\times\mathcal{Z}\). We obtain locality on cache blocks of all three matrices: distance, local focus, and cohesion. Note that a first pass is required to compute the local focus matrix in its entirety, and then blocks of the local focus matrix are read from slow memory during the second pass as needed.
Figure 2 illustrates the dependencies among the distance, local focus, and cohesion matrices for the (blocked) triplet algorithm. In the first pass, the blocked triplet algorithm reads 3 blocks from the distance matrix, corresponding to the triplet pairs:
Figure 1: Dependency structure of the blocked pairwise algorithm. The highlighted regions represent quantities with temporal locality. Quantities in red correspond to reads and ones in green correspond to writes. Orange entries are computed and used in fast memory and then discarded. Blue represents entry-wise dependencies within each matrix/vector.
\((x,y),(x,z),(y,z)\), and writes to the corresponding 3 blocks of the local focus matrix. Note that the distance and local focus matrices are symmetric so only the upper triangular parts are required. The cohesion matrix is not symmetric, thus in the second pass 6 blocks must be updated by performing distance comparisons (by reading \(D_{\mathcal{X},\mathcal{Y}},D_{\mathcal{X},\mathcal{Z}},D_{\mathcal{Y},\mathcal{ Z}}\)) and utilizing entries of the local focus matrix (by reading \(U_{\mathcal{X},\mathcal{Y}},U_{\mathcal{X},\mathcal{Z}},U_{\mathcal{Y},\mathcal{ Z}}\)).
```
Input:  D, an n x n distance matrix
Output: C, an n x n cohesion matrix
Initialize U = triu(2*ones(n), 1)
// Pass 1: local focus sizes
for x = 1 to n-1 do
  for y = x+1 to n do
    for z = y+1 to n do
      if d_xy < d_xz and d_xy < d_yz then    // x,y is closest pair
        u_xz = u_xz + 1;  u_yz = u_yz + 1
      else if d_xz < d_yz then               // x,z is closest pair
        u_xy = u_xy + 1;  u_yz = u_yz + 1
      else                                   // y,z is closest pair
        u_xy = u_xy + 1;  u_xz = u_xz + 1
// Pass 2: cohesion updates
for x = 1 to n-1 do
  for y = x+1 to n do
    for z = y+1 to n do
      if d_xy < d_xz and d_xy < d_yz then
        c_xy = c_xy + 1/u_xz;  c_yx = c_yx + 1/u_yz
      else if d_xz < d_yz then
        c_xz = c_xz + 1/u_xy;  c_zx = c_zx + 1/u_yz
      else
        c_yz = c_yz + 1/u_xy;  c_zy = c_zy + 1/u_xz
```
**Algorithm 2** Triplet Sequential Algorithm.
## 4 Sequential Algorithm Analysis.
We model performance using the model, \(\gamma F+\beta W\), where \(F\) and \(W\) represent an algorithm's computation and bandwidth costs, respectively, and \(\gamma\) (time per operation) and \(\beta\) (time per word moved) represent hardware parameters. We analyze communication cost assuming a two-level memory hierarchy, which contains fast memory (cache) of size \(M\) words and slow memory (DRAM) with unbounded size. We assume that computation can only be performed on operands residing in fast memory. If operands are in slow memory, then they must first be read into fast memory. We limit analysis in this section to a two-level memory hierarchy, but this memory model can be used to analyze communication for each adjacent pair of levels in a multi-level memory hierarchy.
### Communication Lower Bounds.
We use the framework in [1] to derive communication lower bounds. The lower bound of [1, Theorem 2.6] applies to all three-nested-loops (3NL) computations as defined in that paper. We reproduce the 3NL definition here using the same notation, with sets \(S_{a},S_{b},S_{c}\subseteq[n]\times[n]\) where \([n]=\{1,2,\ldots,n\}\) and mappings \(\mathbf{a}:S_{a}\rightarrow\mathcal{M}\), \(\mathbf{b}:S_{b}\rightarrow\mathcal{M}\), \(\mathbf{c}:S_{c}\rightarrow\mathcal{M}\), where \(\mathcal{M}\) is slow memory. For each \((i,j)\in S_{c}\), we also have a set \(S_{ij}\subseteq[n]\).
**Definition 1**: ([1, Definition 2.4]) _A computation is considered to be three-nested-loops (3NL) if it includes computing, for all \((i,j)\in S_{c}\) with \(S_{ij}\),_
\[\text{Mem}(\mathbf{c}(i,j))=f_{ij}\left(\{g_{ijk}(\text{Mem}(\mathbf{a}(i,k)),\text{Mem}(\mathbf{b}(k,j)))\}_{k\in S_{ij}}\right),\]
_where (a) mappings \(\mathbf{a}\), \(\mathbf{b}\), \(\mathbf{c}\) are all one-to-one into slow memory, and (b) functions \(f_{ij}\) and \(g_{ijk}\) depend nontrivially on their arguments._
We first verify that the cohesion matrix computation defined by Eqs. (3) and (4) is 3NL when the distance matrix is stored explicitly in memory. To satisfy the first constraint, we define the mappings \(\mathbf{a}\), \(\mathbf{b}\), and \(\mathbf{c}\) as all mapping onto the distance matrix (that is, each mapping is one-to-one but the three mappings are not disjoint). Here \(\mathbf{a}(x,y)\) maps to the distance matrix entry \(d_{xy}\). To satisfy the second constraint, we see that computing \(g_{xyz}\) depends nontrivially on \(\mathbf{a}(x,y)\) and \(\mathbf{b}(y,z)\), as both values must be compared with \(d_{xz}\) to evaluate the indicator functions, and computing \(c_{xz}\) depends nontrivially on its arguments, as it computes the sum over all values. As argued in Section 3, the number of 3NL operations is \(\sum_{i,j}|S_{ij}|=O(n^{3})\). Then, by [1, Theorem 2.6], the bandwidth cost lower bound for PaLD is \(W=\Omega(n^{3}/\sqrt{M})\).
### Cost Analysis.
The blocked algorithms are described in Section 3 with memory reference patterns
Figure 2: Dependency structure of the blocked triplet algorithm. The highlighted regions represent entries with temporal locality. Matrices in red correspond to reads and ones in green correspond to writes. Matrices in orange correspond to writes during the first pass and reads during the second pass. Blue represents the entry-wise dependencies within each matrix.
depicted in Figs. 1 and 2. The loop structures of the blocked algorithms are shown (with OpenMP parallelization) in Figs. 5 and 7. We focus on the sequential costs in this section and discuss parallelization in Section 6. Since the algorithms require mixed comparison and arithmetic instructions, we explicitly define the hardware parameters \(\gamma_{cmp}\) and \(\gamma_{fma}\) to represent the time per instruction for floating-point comparisons and FMAs, respectively. We ignore the cost of integer arithmetic. Figure 5 shows the loop structure of the blocked pairwise algorithm where inner loop computations match Algorithm 1. We use \(b\) to represent the block size for the pairwise algorithm.
**Theorem 4.1**: _The blocked pairwise algorithm has the leading order computation and communication costs:_
\[F =(5\gamma_{cmp}+1\gamma_{fma})\cdot n{n\choose 2}\approx 3n^{3}\mbox{ flops.}\] \[W =4\sqrt{2}\ \frac{n^{3}}{\sqrt{M}}\approx 5.7\ \frac{n^{3}}{ \sqrt{M}}\mbox{ words moved.}\]
The blocked pairwise algorithm selects \({n/b+1\choose 2}\) unique sets of points \(\mathcal{X},\mathcal{Y}\) with \(|\mathcal{X}|=|\mathcal{Y}|=b\). A total of \(nb^{2}\) iterations are required to determine if a third point, \(z\), is in the local focus for each \((x,y)\in\mathcal{X}\times\mathcal{Y}\). The local focus update requires 2 floating-point comparisons followed by 1 integer accumulate into \(u_{xy}\). The cohesion update requires 3 floating-point comparisons and 1 FMA, as the reciprocals of elements of \(U_{\mathcal{X},\mathcal{Y}}\) can be precomputed once. When \(\mathcal{X}=\mathcal{Y}\), only \(n{b\choose 2}\) iterations are required to perform local focus and cohesion updates. There are \(n/b\) such overlapping sets. Multiplying over the iterations, summing the work over the local focus and cohesion update loops, and multiplying by \(\gamma_{cmp}\) and \(\gamma_{fma}\) yields the computation cost.
Each of the \({n/b+1\choose 2}\) possible combinations of \(\mathcal{X}\times\mathcal{Y}\) points requires reading the \(b\times b\) block \(D_{\mathcal{X},\mathcal{Y}}\) from slow memory. In the first pass to compute the local focus sizes, for each third point, \(z\), we read the two \(b\times 1\) vectors \(D_{\mathcal{X},z}\) and \(D_{\mathcal{Y},z}\) from slow memory. The local focus block \(U_{\mathcal{X},\mathcal{Y}}\) is computed and remains resident in fast memory. Similarly, each iteration of the second pass cohesion update requires reading the \(b\times 1\) vectors \(D_{\mathcal{X},z},D_{\mathcal{Y},z},C_{\mathcal{X},z}\) and \(C_{\mathcal{Y},z}\) from slow memory. After each iteration within the second pass, \(C_{\mathcal{X},z}\) and \(C_{\mathcal{Y},z}\) must be written to slow memory. We must maintain \(2b^{2}\) words of data in fast memory for \(D_{\mathcal{X},\mathcal{Y}}\) and \(U_{\mathcal{X},\mathcal{Y}}\), along with a constant number of length-\(b\) vectors, so \(b\leq\sqrt{M/2}\) to leading order. Multiplying and summing these reads and writes over all iterations yields the leading order communication cost \(4n^{3}/b\), and choosing \(b\approx\sqrt{M/2}\) yields the result.
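For concreteness, substituting the largest admissible block size into the leading-order I/O count recovers the stated constant:

\[W\approx\frac{4n^{3}}{b}\quad\text{with}\quad b\approx\sqrt{M/2}\quad\Longrightarrow\quad W\approx 4\sqrt{2}\,\frac{n^{3}}{\sqrt{M}}\approx 5.7\,\frac{n^{3}}{\sqrt{M}}.\]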
Figure 7 shows the loop structure of the blocked triplet algorithm, and the inner loop computations match Algorithm 2. The local focus sizes and cohesion matrix updates are computed in two separate passes, and two block sizes \(\hat{b}\) and \(\tilde{b}\) can be tuned independently.
**Theorem 4.2**: _The blocked triplet algorithm has the leading order computation and communication costs:_
\[F =(6\gamma_{cmp}+2\gamma_{fma})\cdot{n\choose 3}\approx 1.33n^{3} \mbox{ flops.}\] \[W =\left(\sqrt{6}+4\sqrt{3}\right)\frac{n^{3}}{\sqrt{M}}\approx 9.4 \frac{n^{3}}{\sqrt{M}}\mbox{ words moved.}\]
The blocked local focus and cohesion matrix passes have the same loop structure, each selecting \({n/b+2\choose 3}\) triplets of sets \(\mathcal{X},\mathcal{Y}\), and \(\mathcal{Z}\) each of size \(b\) points, though the value of \(b\) differs in the two passes. The triplet algorithm contains 3 types of symmetry: \(\mathcal{X}=\mathcal{Y}=\mathcal{Z}\), \(\mathcal{X}\neq\mathcal{Y}=\mathcal{Z}\), and \(\mathcal{X}=\mathcal{Y}\neq\mathcal{Z}\). While our implementation accounts for each type of symmetry, we ignore it in our leading order cost analysis. The local focus and cohesion update inner iterations each require 3 distance comparisons to determine the pair of points with minimum distance. The cohesion update iteration additionally requires 2 FMAs to update entries of the cohesion matrix. Multiplying operations by their respective \(\gamma\) terms and summing work over the two passes proves the computation cost.
There are \({n/\hat{b}+2\choose 3}\) possible combinations of triplet blocks in the local focus pass. The local focus update must read 2 \(\hat{b}\times\hat{b}\) blocks of \(D\), read 2 \(\hat{b}\times\hat{b}\) blocks of \(U\), and write 2 \(\hat{b}\times\hat{b}\) blocks of \(U\) from/to slow memory. Note that the block \(D_{\mathcal{X},\mathcal{Y}}\) can be read and the block \(U_{\mathcal{X},\mathcal{Y}}\) read and written only \({n/\hat{b}+1\choose 2}\) times since they remain fixed while blocks \(\mathcal{Z}\) vary in the innermost loop. The cohesion update requires reading 2 \(\tilde{b}\times\tilde{b}\) blocks each of \(D\) and \(U\), followed by reading and writing 4 \(\tilde{b}\times\tilde{b}\) blocks of \(C\). The blocks \(D_{\mathcal{X},\mathcal{Y}}\) and \(U_{\mathcal{X},\mathcal{Y}}\) are read from slow memory and the blocks \(C_{\mathcal{X},\mathcal{Y}}\) and \(C_{\mathcal{Y},\mathcal{X}}\) can be read and written only \({n/\tilde{b}+1\choose 2}\) times. The total I/O cost is then \(n^{3}/\hat{b}+2n^{3}/\tilde{b}\), assuming that all blocks can be stored in fast memory. This requires that \(\hat{b}\leq\sqrt{M/6}\) and \(\tilde{b}\leq\sqrt{M/12}\) to leading order. Choosing block sizes at their approximate maximum value yields the communication cost.
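As with the pairwise algorithm, the constant follows directly from the block size choices:

\[W\approx\frac{n^{3}}{\hat{b}}+\frac{2n^{3}}{\tilde{b}},\quad\hat{b}\approx\sqrt{M/6},\ \tilde{b}\approx\sqrt{M/12}\quad\Longrightarrow\quad W\approx\left(\sqrt{6}+4\sqrt{3}\right)\frac{n^{3}}{\sqrt{M}}\approx 9.4\,\frac{n^{3}}{\sqrt{M}}.\]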
The constants for the communication cost in Theorem 4.2 can be improved by unblocking the innermost loop over \(\mathcal{Z}\) for the local focus and cohesion update passes, which allows for a slightly larger block size. We use this technique for the pairwise algorithm, and it is useful in practice for matrix multiplication as well [18].
However, incorporating this optimization did not allow for auto-vectorization during cohesion updates where some updates require a stride of \(n\). Blocking all three loops allowed for unit-stride for all cohesion updates. We provide more details in the following section.
We can conclude from Theorems 4.1 and 4.2 that the pairwise variant requires more computation than the triplet variant, but it moves less data. Both sequential variants attain the 3NL lower bound of \(\Omega(n^{3}/\sqrt{M})\) and are communication-optimal within a constant factor. We will show in the next section how additional performance optimizations can yield large speedups. The optimized sequential algorithms serve as the baselines from which we derive efficient shared-memory parallel algorithms.
## 5 Sequential Performance Optimization.
We study the performance improvements achieved by each optimization, the tuning parameters introduced, and performance tradeoffs between the pairwise and triplet variants. All algorithms were written in C and compiled with the Intel C compiler (icc) release 2021.06 with the flags -Ofast -mavx512 -opt-mmm-usage=high. Experiments are performed on a single-node, dual-socket platform with two Intel Xeon Gold 6226R CPUs (16 cores per socket). We run 5 trials for each experiment and use the mean to compute speedups. We observe low runtime variance across trials, so we omit error bars for simplicity. We perform experiments on randomly generated distance matrices for powers of two \(n\in\{128,\ldots,4096\}\). Our code can handle arbitrary square matrix sizes, but we limit performance evaluation to powers of two.

We begin performance tuning by applying one level of blocking to Algorithm 1 (naive pairwise) and Algorithm 2 (naive triplet). We show speedups relative to the previous optimization tried in Fig. 3 with a fixed \(n=2048\) matrix. The overall speedup over naive pairwise (resp. naive triplet) may be obtained by multiplying speedups across all optimizations. Naive triplet resulted in a speedup of \(1.11\times\) over naive pairwise due to less computation. Introducing one level of blocking to naive pairwise led to a speedup of \(1.07\times\). Applying blocking to the triplet variant led to speedups of \(1.20\times\) over naive triplet (\(1.33\times\) over naive pairwise).

Algorithms 1 and 2 require branches to correctly update \(U\) and \(C\) based on distance comparisons. Distance comparisons can be vectorized, but updates to \(U\) and \(C\) cannot due to branching. We avoid branches in both algorithms by computing auxiliary mask variables and performing FMAs with these explicit masks. For Algorithm 1, we compute the masks \(r=d_{xz}<d_{xy}\textbf{ or }d_{yz}<d_{xy}\) and \(s=d_{xz}<d_{yz}\). The variable \(r\) indicates that \(z\) is in the \((x,y)\) local focus, and \(s\) determines the entry of \(C\) to update. \(C\) can then be updated via two FMAs: \(c_{xz}=c_{xz}+r\cdot s\cdot(1/u_{xy})\) and \(c_{yz}=c_{yz}+r\cdot(1-s)\cdot(1/u_{xy})\). Branch avoidance introduces a performance tradeoff: it increases computation (e.g., performing FMAs with explicit zeros) but eliminates branch misprediction overhead. For Algorithm 1, branch avoidance enables a fixed stride length for updates of \(C\) and facilitates other compiler optimizations (e.g., auto-vectorization and loop unrolling). Branch avoidance alone yielded a speedup of \(1.7\times\) over naive pairwise. While branch avoidance allows for vectorization, updates to \(c_{xz}\) and \(c_{yz}\) require a stride length of \(n\). After blocking, we reduce the stride length to 1 by updating columns of \(C\) instead (see Fig. 1). The combination achieved speedups of \(20.2\times\) over naive pairwise.
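In code, the branch-free cohesion pass for one pair might look like the following sketch (illustrative; \(r\) and \(s\) are the masks defined above, and rinv \(=1/u_{xy}\) is precomputed):

```
#include <stddef.h>

/* Branch-free pairwise cohesion pass for one pair (x, y). dx and dy hold
 * the n distances from x and y; cx and cy are the corresponding contiguous
 * cohesion entries. The 0/1 masks turn the branches into FMAs that the
 * compiler can auto-vectorize. */
static void cohesion_pass_branchless(const double *dx, const double *dy,
                                     double *cx, double *cy,
                                     double dxy, double rinv, size_t n) {
    for (size_t z = 0; z < n; z++) {
        const double r = (double)(dx[z] < dxy || dy[z] < dxy); /* in focus   */
        const double s = (double)(dx[z] < dy[z]);              /* supports x */
        cx[z] += r * s * rinv;
        cy[z] += r * (1.0 - s) * rinv;
    }
}
```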
Algorithm 2 must determine the closest pair of points from a triplet \((x,y,z)\). We avoid branches in Algorithm 2 by computing three masks from three floating point comparisons: \(r=d_{xy}<d_{xz}\textbf{ and }d_{xy}<d_{yz}\), \(s=(1-r)\cdot(d_{xz}<d_{yz})\), and \(t=(1-r)(1-s)\). \(C\) can then be updated using six FMAs:
\[c_{xy}=c_{xy}+r\left(1/u_{xz}\right), \quad c_{yx}=c_{yx}+r\left(1/u_{yz}\right),\] \[c_{xz}=c_{xz}+s\left(1/u_{xy}\right), \quad c_{zx}=c_{zx}+s\left(1/u_{yz}\right),\] \[c_{yz}=c_{yz}+t\left(1/u_{xy}\right), \quad c_{zy}=c_{zy}+t\left(1/u_{xz}\right).\]
Applying branch avoidance to the triplet algorithm yields a speedup of \(0.98\times\) due to the stride-\(n\) updates to \(C\). When combined with blocking, however, we attain speedups of \(20\times\) over naive triplet. Triplet with branch avoidance and blocking yields a speedup of \(1.1\times\) over pairwise with the same optimizations. We were able to extract additional speedup by replacing floating point operations with integer operations during local focus updates, and ignoring equality in pairwise/triplet
Figure 3: Speedup achieved from various performance optimizations applied to the Pairwise and Triplet algorithms. Speedups are arranged by algorithm and relative to the previous performance optimization attempted. The naive implementations of pairwise and triplet have a speedup of 1.
distance comparisons. Each entry of \(U\) counts the number of points in the local focus based on distance comparisons, with results stored in a mask register. If \(U\) is stored as a floating point array, then each increment to update \(U\) requires an expensive integer mask to 32-bit floating point cast operation. We avoid this by storing \(U\) as an integer array during the local focus computation. This allowed us to combine casting with computing reciprocals prior to cohesion updates.
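A minimal sketch of this conversion step (assumed layout; not the authors' exact code):

```
#include <stddef.h>

/* U is accumulated as integers during the local focus pass (no per-update
 * int->float casts); one sweep then produces the reciprocals consumed by
 * the cohesion pass. */
static void focus_counts_to_reciprocals(const int *U, float *Uinv, size_t len) {
    for (size_t i = 0; i < len; i++)
        Uinv[i] = 1.0f / (float)U[i];   /* one cast and divide per entry */
}
```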
The theoretical formulation of PaLD [2] allows for ties in pairwise distances (e.g., \(d_{xz}==d_{yz}\)). When ties occur, support is split between cohesion entries \(c_{xz}\) and \(c_{yz}\) (i.e. \(c_{xz}=c_{xz}+r\cdot s\cdot(0.5/u_{xy})\)). In finite arithmetic, floating point equality is unlikely due to round-off and truncation. Avoiding ties is critical for Algorithm 2 which contains more distance tie permutations than pairwise. Introducing these additional optimizations yields self-relative speedups (over naive) of \(25.5\times\) and \(26.2\times\) for pairwise and triplet, respectively. Overall, optimized triplet achieves a speedup of \(1.14\times\) over optimized pairwise for \(n=2048\). We also perform block size tuning for each algorithm. We experiment with (powers of two) block sizes in the range \([2^{5},2^{10}]\). Optimized pairwise attains a maximum speedup of \(25.5\times\) for \(n=2048\) after tuning.
For optimized triplet, updates to \(U\) require storing 3 distinct blocks of \(D\) and 3 distinct blocks of \(U\) in cache. Updates to \(C\) require 3 distinct blocks of \(D\), 3 distinct blocks of \(U\), and 6 distinct blocks of \(C\) in cache. This suggests that different block sizes for the two passes may be better than a single fixed block size. Figure 4 (bottom) illustrates the speedups observed (over Algorithm 2) for various block size combinations for the optimized triplet algorithm. We observe a maximum speedup of \(26.2\times\) over naive triplet with \(\hat{b}=256\) and \(\tilde{b}=128\). In Table 1 we compare running times (and speedups) of optimized pairwise and optimized triplet over a range of input matrix sizes. For small matrix sizes, where \(D\), \(U\), and \(C\) all fit in cache, optimized pairwise is fastest (e.g., a speedup of \(1.58\times\) over triplet at \(n=128\)). This is because \(n/b\) is a small integer where lower order terms dominate (see Theorem 4.2). For larger matrices, optimized triplet performs better (a speedup of \(1.26\times\) over pairwise at \(n=4096\)) due to its lower computation cost. In practice, we expect triplet to be the better sequential variant for most applications of PaLD. If distance ties must be handled correctly, then pairwise is the better variant due to fewer branches.
Finally, we note that optimized pairwise attains \(27.7\%\) of hardware peak at \(n=2048\) and optimized triplet attains \(28\%\) at \(n=8192\). Our Intel CPU has a single-core, single precision peak of \(249.6\) Gflops/sec. Single precision comparisons on our CPU have a cycle-per-instruction (CPI) of 1 while all other single precision ops have a CPI of 0.5. Thus, floating point comparisons are twice as expensive. See Appendix A for details on percentage of peak calculations for each algorithm.
The combination of all optimizations achieves speedups of \(25.5\times\) and \(29\times\) for pairwise and triplet, respectively, over naive pairwise (for \(n=2048\)). We observe speedups of \(23\times\) and \(26.2\times\) over naive triplet.
## 6 Shared-Memory Parallel Algorithms
This section presents the OpenMP parallelization of the optimized sequential pairwise and triplet algorithms. Figure 5 shows the OpenMP version of the blocked pairwise algorithm. The blocked pairwise algorithm first computes \(U_{\mathcal{X},\mathcal{Y}}\) with a pass over all \(n\) points \(z\). The local focus \(z\)-loop can be parallelized across \(p\) threads using the OpenMP parallel for construct. All threads must write to \(U_{\mathcal{X},\mathcal{Y}}\) so a sum-reduction is required to resolve write conflicts. The cohesion update pass requires the quantities \(1/u_{xy}\ \forall\ (x,y)\ \in\ \mathcal{X}\times\mathcal{Y}\), which can be parallelized without write conflicts. Cohesion updates
\begin{table}
\begin{tabular}{r|c|c} \(n\) & Pairwise Optimized & Triplet Optimized \\ \hline \hline
128 & **0.00117 (1.58\(\times\))** & 0.00185 \\ \hline
256 & **0.00497 (1.34\(\times\))** & 0.00665 \\ \hline
512 & **0.0188 (1.18\(\times\))** & 0.0221 \\ \hline
1024 & 0.1274 & **0.1208 (1.05\(\times\))** \\ \hline
2048 & 0.9942 & **0.8734 (1.14\(\times\))** \\ \hline
4096 & 8.3623 & **6.6111 (1.26\(\times\))** \\ \end{tabular}
\end{table}
Table 1: Running time in seconds (and speedup) comparison of pairwise and triplet algorithms.
Figure 4: Speedup achieved from block size tuning for pairwise (top) and triplet (bottom) for \(n=2048\).
are within each column of \(C\), to entries of \(C_{\mathcal{X},z}\) and \(C_{\mathcal{Y},z}\). The cohesion pass can be parallelized without write conflicts by splitting the \(z\)-loop across \(p\) threads. Figure 6 illustrates the write patterns for optimized OpenMP pairwise for \(n=16\), \(b=4\), and \(p=8\). Updates to entries of \(C\) require corresponding entries from \(D\), so \(D\) can also be partitioned column-wise. The pairwise algorithm is amenable to NUMA optimizations due to its regular data dependencies.
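A compact sketch of this parallelization for one block pair (illustrative names and layout; the array-section reduction requires OpenMP 4.5, which is the version used here):

```
#include <omp.h>

/* OpenMP sketch of both passes for one block pair (X, Y) of size b.
 * Pass 1 parallelizes the z-loop with a sum-reduction over the shared
 * focus block; pass 2 gives each thread its own range of z, so the
 * cohesion updates need no synchronization. */
void pairwise_block_omp(const double *D, double *C, int *Ublk,
                        long n, long x0, long y0, long b) {
    #pragma omp parallel for reduction(+ : Ublk[0 : b * b])
    for (long z = 0; z < n; z++)
        for (long i = 0; i < b; i++)
            for (long j = 0; j < b; j++) {
                const double dxy = D[(x0 + i) * n + (y0 + j)];
                Ublk[i * b + j] += (D[(x0 + i) * n + z] <= dxy ||
                                    D[(y0 + j) * n + z] <= dxy);
            }

    #pragma omp parallel for
    for (long z = 0; z < n; z++)
        for (long i = 0; i < b; i++)
            for (long j = 0; j < b; j++) {
                const double dxy = D[(x0 + i) * n + (y0 + j)];
                const double w   = 1.0 / Ublk[i * b + j];
                const double r   = (double)(D[(x0 + i) * n + z] < dxy ||
                                            D[(y0 + j) * n + z] < dxy);
                const double s   = (double)(D[(x0 + i) * n + z] <
                                            D[(y0 + j) * n + z]);
                C[(x0 + i) * n + z] += r * s * w;
                C[(y0 + j) * n + z] += r * (1.0 - s) * w;
            }
}
```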
Figure 7 shows the OpenMP version of the blocked triplet algorithm. The triplet approach requires reading all of \(D\) for local focus and cohesion update passes. Blocking is performed over triplets of points, \(\mathcal{X},\mathcal{Y},\mathcal{Z}\), and updates to \(U\) and \(C\) become irregular. We use the OpenMP tasking model [17] for parallelism. Each triplet block, \(\mathcal{X}\times\mathcal{Y}\times\mathcal{Z}\), is a new task that can be executed by any available thread. Tasks in the local focus pass write to 3 blocks of \(U\). \(C\) is not symmetric, so the cohesion update pass writes to 6 blocks. Write conflicts arise when multiple tasks need to update the same blocks of \(U\) or \(C\). We resolve conflicts by annotating dependencies using the depend clause with the inout modifier. Figure 8 shows the write conflicts for the local focus pass. Each vertex represents one of the \(\binom{n/b+2}{3}\) tasks and is labeled by \(\mathcal{X},\mathcal{Y},\mathcal{Z}\) block values, and edges represent conflicts. The degree for each vertex varies based on the symmetry in the block. This leads to irregular dependencies which we will show in Section 6.1 are not as amenable to NUMA optimizations.
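A sketch of the task creation with dependence annotations (update_focus_blocks is an assumed helper; the depend items use one representative element per \(U\) block, so the runtime serializes conflicting tasks, cf. Fig. 8):

```
/* Assumed helper: performs the local focus updates for one block triplet. */
void update_focus_blocks(const double *D, int *U, long n, long b,
                         long bx, long by, long bz);

/* Task-parallel sketch of the triplet local focus pass: one task per
 * triplet of blocks, with inout dependences on the three U blocks that
 * the task writes. */
void triplet_focus_omp(const double *D, int *U, long n, long b) {
    const long nb = n / b;
    #pragma omp parallel
    #pragma omp single
    for (long bx = 0; bx < nb; bx++)
        for (long by = bx; by < nb; by++)
            for (long bz = by; bz < nb; bz++) {
                int *uxy = &U[(bx * b) * n + by * b];
                int *uxz = &U[(bx * b) * n + bz * b];
                int *uyz = &U[(by * b) * n + bz * b];
                #pragma omp task untied depend(inout : uxy[0], uxz[0], uyz[0])
                update_focus_blocks(D, U, n, b, bx, by, bz);
            }
}
```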
### OpenMP Performance.
We use OpenMP version 4.5 and test the OpenMP algorithms on randomly generated dense distance matrices with \(n\in\{2048,4096,8192\}\). We incorporate NUMA optimizations into the pairwise algorithm by controlling thread affinity via the OMP_PROC_BIND and OMP_PLACES environment variables. We map OpenMP threads to physical cores by assigning OpenMP thread ids 0 to 15 to CPU 0 and threads 16 to 31 to CPU 1. A static loop schedule yields the best performance due to the pairwise algorithm's regular dependencies. Each thread reads columns of \(D\) and \(C\) from thread-local fast memory, so updates to \(C\) are spatially local. Thread binding ensures that accesses are temporally local by assigning fixed column blocks of \(D/C\) to threads. OpenMP allocates memory pages using a first-touch policy by default. If a single thread allocates \(D\), then \(D\) resides in the memory hierarchy of that thread's CPU. \(D\) is typically computed outside the scope of the OpenMP algorithms, so we also study the effects of partitioning \(D\) across sockets (i.e., memory binding).
Figure 9 shows the speedup achieved by introducing thread binding only and thread + memory binding into the OpenMP pairwise algorithm across three matrix sizes, \(n\in\{2048,4096,8192\}\). We use the OpenMP pairwise algorithm without NUMA-aware optimizations as our baseline and report speedups for 32 OpenMP threads. When we use thread binding only, we observe average speedups of \(1.4\times\), \(1.5\times\), and \(1.13\times\) for \(n=2048\), \(4096\), and \(8192\), respectively. Thread binding with memory binding yields average speedups of \(1.7\times\), \(1.69\times\), and \(1.2\times\) over the baseline. We did not perform TLB optimizations; therefore, we observe decreasing speedups for larger matrix sizes. We also found that NUMA optimizations are useful at smaller thread counts, \(2\leq p\leq 16\), by mapping half the threads to CPU 0 and the other half to CPU 1. This mapping provides access to the fast memory hierarchies on both CPUs. We observe speedups ranging from \(1.05\times\)
Figure 5: Blocked OpenMP pairwise Algorithm.
Figure 6: Distance matrix reads and Local Focus/Cohesion writes for parallel pairwise code with \(n=16\), \(b=4\), and \(p=8\). All threads have write conflicts to the \(U\) block for each pair \(\mathcal{X},\mathcal{Y}\) (in red), so synchronization is required via reductions. Only one \(U\) block is needed in fast memory at any given point in time. Writes to \(C\) are within one column, so column blocks can be partitioned across threads without write conflicts.
(\(n=4096,p=2\)) to \(1.33\times\) (\(n=2048,p=16\)) when splitting threads (where \(p\leq 16\)) across sockets. We experimented with thread binding for the OpenMP triplet algorithm but not memory binding due to the irregular data dependencies. However, we did not observe significant performance improvements over the baseline, so we omit these results from Fig. 9. We obtain the best OpenMP scaling when using the untied clause, which allows suspended tasks to be resumed on any available thread. Suspended tasks may cause additional reads from slow memory after restart; hence, we do not expect NUMA optimizations to be helpful for triplet.

We perform strong scaling experiments of the OpenMP variants in Fig. 10 under the same settings as for Fig. 9 and report the self-relative efficiency achieved, with and without NUMA optimizations. The pairwise algorithm without NUMA optimizations achieves efficiencies of \(24.2\%\), \(33.5\%\), and \(50.6\%\) at \(p=32\) for \(n=2048\), \(4096\), and \(8192\), respectively. Including NUMA optimizations yields efficiencies of \(42.9\%\), \(56.6\%\), and \(60.5\%\) for \(p=32\). The triplet algorithm achieves efficiencies of \(28.0\%\), \(29.2\%\), and \(40.9\%\) without NUMA optimizations and \(36.9\%\), \(34.9\%\), and \(41.2\%\) with NUMA optimizations for \(p=32\). The triplet algorithm is the faster sequential baseline; hence the OpenMP triplet efficiencies are lower than those reported for OpenMP pairwise.

We also study weak scaling of the two algorithms with and without NUMA optimizations, fixing \(n^{3}/p\) over the range of \(p\) tested. We use the matrix sizes \(n_{1}\in\{2048,4096,8192\}\), where \(n_{1}\) is the matrix size at \(p=1\). Figure 11 shows the results of the weak scaling experiments. The pairwise algorithm without NUMA optimizations attains weak scaling efficiencies of \(30.6\%\), \(48.2\%\), and \(61.4\%\) for \(n_{1}=2048\), \(4096\), and \(8192\), respectively, at \(32\) threads. With NUMA optimizations, the efficiencies increase to \(59.1\%\), \(63.6\%\), and \(65.6\%\) for each of the matrix size settings at \(p=32\). Triplet attains weak scaling efficiencies of \(44.2\%\), \(49.1\%\), and \(50.1\%\) without NUMA optimizations, and \(47.6\%\), \(49.1\%\), and \(50.1\%\) with NUMA optimizations, at \(p=32\).
## 7 Text Analysis Application
We demonstrate the utility of PaLD on larger datasets than previously considered [2] for semantic analysis of words extracted from Shakespeare sonnets [11]. Words are converted to vectors using the pre-trained fastText word embedding [4, 12], yielding a dataset of 2712 words. We compute Euclidean distance between embedding vectors and generate the cohesion matrix \(C\) using
Figure 8: Task diagram for parallel triplet with \(n/b=4\), where nodes are labeled by their \(\mathcal{X},\mathcal{Y},\mathcal{Z}\) block values. Edges represent write conflicts for \(U\) between tasks.
Figure 7: Blocked OpenMP triplet Algorithm.
Figure 9: OpenMP pairwise speedup from NUMA optimizations with \(n\in\{2048,4096,8192\}\) and \(p=32\).
the OpenMP pairwise algorithm. Figure 12 shows words associated with _guilt_ and _halt_ obtained from PaLD and from analyzing only the distance matrix \(D\). PaLD is parameter-free, with strong ties determined by a universal threshold (see [2]), whereas analysis using \(D\) requires a user-tuned distance or neighbor-count cutoff. Note the differing sizes of the strong-tie neighborhoods of the two words: PaLD finds 20 words with strong ties to _guilt_ but only 5 for _halt_. The 20 closest words to _guilt_ based on distance correspond to a cutoff of 2.26. We observe significant overlap between the two sets, though PaLD reports stronger ties to _expiate_ and _conscience_. To illustrate the pitfalls of tuning an absolute distance threshold, we apply the same distance cutoff of 2.26 to _halt_, which yields 23 words, including several unrelated ones (e.g., _just_ and _say_). This suggests that absolute distance thresholds are not robust to varying density and distance scales within word neighborhoods. A distance cutoff of 2.14 is required for _halt_ to match the results obtained from PaLD; applying that cutoff to _guilt_ identifies only 8 related words, missing several words like _expiate_. We attain a speedup of \(16.7\times\) using the NUMA-optimized OpenMP pairwise algorithm at \(p=32\) and an overall run time of 0.178 seconds.
## 8 Conclusion
This paper presents several sequential and shared-memory parallel algorithms for PaLD [2]. We prove that the sequential variants are communication-optimal, up to constant factors. We illustrate that branch avoidance is critical to attaining high performance, achieving a speedup of up to \(29\times\) over naive sequential variants. Based on our theoretical and empirical studies, we conclude that the triplet variant is the faster sequential algorithm for large matrices due to its lower computation cost. However, we show that the pairwise algorithm is more amenable to parallelization due to regular data dependencies and load balance. We observe strong scaling speedups of up to \(19.4\times\) (\(60.5\%\) efficiency) and weak scaling efficiencies of up to \(65.6\%\) at \(p=32\) after incorporating NUMA-aware optimizations. With the performance achieved on the text analysis application, we show that PaLD can be scaled to nearly any dataset with a distance matrix that fits in the memory of a single server.
Figure 11: Self-relative weak scaling efficiency of OpenMP Pairwise (top) and Triplet (bottom).
Figure 12: Word clouds from PaLD analysis (left column) and distance analysis (right column) of the words _guilt_ and _halt_. Font size is proportional to cohesion values and inverse distances.
Figure 10: Self-relative strong scaling efficiency of OpenMP Pairwise (top) and Triplet (bottom).
## Acknowledgements
We would like to thank Kenneth S. Berenhaut for helpful feedback on the presentation of PaLD and discussions on applying PaLD to semantic analysis of word embedding in Section 7. We would also like to thank Yixin Zhang for code contributions to preliminary versions of the pairwise algorithms. This work is supported by the National Science Foundation under Grant No. OAC-2106920 and the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research program under Award Number DE-SC-0023296.
|
2309.10957 | Approximation Algorithms for Quantum Max-$d$-Cut | We initiate the algorithmic study of the Quantum Max-$d$-Cut problem, a
quantum generalization of the well-known Max-$d$-Cut problem. The Quantum
Max-$d$-Cut problem involves finding a quantum state that maximizes the
expected energy associated with the projector onto the antisymmetric subspace
of two, $d$-dimensional qudits over all local interactions. Equivalently, this
problem is physically motivated by the $SU(d)$-Heisenberg model, a spin glass
model that generalized the well-known Heisenberg model over qudits. We develop
a polynomial-time randomized approximation algorithm that finds product-state
solutions of mixed states with bounded purity that achieve non-trivial
performance guarantees. Moreover, we prove the tightness of our analysis by
presenting an algorithmic gap instance for Quantum Max-d-Cut problem with $d
\geq 3$. | Charlie Carlson, Zackary Jorquera, Alexandra Kolla, Steven Kordonowy, Stuart Wayland | 2023-09-19T22:53:17Z | http://arxiv.org/abs/2309.10957v2 | # Approximation Algorithms for Quantum Max-\(d\)-Cut
###### Abstract
We initiate the algorithmic study of the Quantum Max-\(d\)-Cut problem, a quantum generalization of the well-known Max-\(d\)-Cut problem. The Quantum Max-\(d\)-Cut problem involves finding a quantum state that maximizes the expected energy associated with the projector onto the antisymmetric subspace of two \(d\)-dimensional qudits over all local interactions. Equivalently, this problem is physically motivated by the \(SU(d)\)-Heisenberg model, a spin glass model that generalizes the well-known Heisenberg model to qudits. We develop a polynomial-time randomized approximation algorithm that finds product-state solutions of mixed states with bounded purity that achieve non-trivial performance guarantees. Moreover, we prove the tightness of our analysis by presenting an algorithmic gap instance for Quantum Max-\(d\)-Cut with \(d\geq 3\).
## 1 Introduction
The quantum Heisenberg model is a family of spin glass Hamiltonians defined by nearest-neighbor interactions [14]. This model, especially the antiferromagnetic variant, is well-studied in condensed matter physics [14, 15, 16, 17, 18, 19] and has recently gained attention in computer science since it can be seen as a quantum generalization of the Max-Cut problem. The quantum Heisenberg model is also commonly referred to as Quantum Max-Cut in the literature. Classically, a generalized constraint satisfaction problem (GCSP) is specified by a set of variables together with local payoff functions, and one commonly restricts the locality, i.e., the number of variables appearing in each payoff
function, to some value \(k\) (often denoted with \(k\)-GCSP). Finding the optimal assignment to a GCSP is usually computationally intractable, so one settles for approximation: find an assignment that achieves as large an expected payoff as possible. An algorithm is an \(\alpha\)-approximation for the GCSP if it outputs a solution with value \(\geq\alpha\,\)OPT, where OPT is the maximum expected payoff. Every GCSP admits the canonical \(\alpha\)-approximation algorithm using a relaxation to a semidefinite program (SDP) followed by a "rounding" to a solution in the original space [14, 15]. The goal is to use the SDP solution's value as a way to bound the performance of the rounded solution. Assuming the unique games conjecture (UGC), it is NP-hard to improve upon this algorithm in general for GCSPs [14].
A popular class of GCSPs can be phrased as coloring problems on graphs. Given \(G=(V,E)\), the goal is to find an assignment \(\tau:V\rightarrow[d]\) that maximizes the size of the set \(\{(u,v)\in E\mid\tau(u)\neq\tau(v)\}\). The simplest non-trivial case of \(d=2\) is called Max-Cut, a standard NP-complete problem [13]. The gold standard algorithm for Max-Cut was provided by Goemans and Williamson and is a 0.878-approximation algorithm that relies on SDPs and random hyperplane rounding [12]. This algorithm is the basis for the aforementioned canonical GCSP algorithm, so improving on this bound is NP-hard (assuming UGC) [12, 1]. Even without UGC, it is known that it is NP-hard to do better than a 16/17-approximation [15, 16]. An extension to Max-Cut is Max-\(d\)-Cut, in which the domain is increased to some \(d\geq 2\). A simple randomized algorithm for Max-\(d\)-Cut results in a \((1-1/d)\)-approximation. Frieze and Jerrum extend [12] to an algorithm that additively improves on the randomized bound by \(\Theta\left(\frac{\ln d}{d^{2}}\right)\)[10]. Two other attempts at generalizing Goemans and Williamson to \(d>2\) also produce similar approximation guarantees [14, 15].
The quantum analog of a \(k\)-GCSP is the \(k\)-local Hamiltonian problem (\(k\)-LHP): the variables are qudits, and payoffs are represented by local observables, \(\{h_{1},\ldots,h_{m}\}\), each acting non-trivially on at most \(k\) qudits. The \(\{h_{\alpha}\}\) terms are known as the local Hamiltonians, and it is common to assume they are all PSD [1]. The problem Hamiltonian is given by \(H=\mathbf{E}_{\alpha}\,h_{\alpha}\), and the task is finding the maximum eigenvalue \(\lambda_{\max}(H)\)[1].1 This framework of \(k\)-LHPs encapsulates many different physically motivated models, such as estimating the energy of quantum many-body systems, and thus has become an important problem to research. The \(k\)-LHP is QMA-hard to solve in general [13], even in the case in which the local Hamiltonians interact on pairs of qudits (referred to as 2-local) [11]. We focus our attention on solving these problems classically, which raises the question of how to efficiently represent a generic quantum solution to a \(k\)-LHP, since quantum states can require \(O(d^{n})\) bits of information to write down in general. This problem is typically circumvented by only considering an efficiently representable subset of quantum states called an _ansatz_. A very common ansatz is that of _product states_, in which no entanglement is present and which can be written using \(O(\mathrm{poly}(n,d))\) bits. Even with this vast simplification, algorithms outputting non-trivial solutions are still possible [1, 2, 10]. Unsurprisingly, allowing some entanglement allows one to improve on these bounds [1, 10, 11, 12]. Nonetheless, it has been shown that product-state solutions are sufficiently good approximations of the optimal solution for dense graphs [1].
Footnote 1: Often, \(k\)-LHPs concern themselves with finding the ground state or \(\lambda_{\min}(H)\). However, as was done in [1], we take the \(k\)-LHP to be a maximization problem.
The natural "quantum" generalization of Max-Cut, known as Quantum Max-Cut, can be seen as replacing assignments of colors to assignments of possibly entangled qubits. Then the notion of two assignments being different, the payoff function, is replaced by the energy of 2-local subsystems of qubits on the projector onto the antisymmetric subspace, which for two qubits is given by the projector onto the singlet state, \(|\psi^{-}\rangle=\frac{1}{\sqrt{2}}|01\rangle-\frac{1}{\sqrt{2}}|10\rangle\). Moreover, the natural "quantum" generalization to Max-\(d\)-Cut, known as Quantum Max-\(d\)-Cut, can be seen by replacing the \(d\) colors, \([d]\), with \(d\)-dimensional qudits and replacing the payoff with a projector onto the antisymmetric subspace of 2, \(d\)-dimensional qudits, which is the rank \(\binom{d}{2}\) projector onto the states \(\{\frac{1}{\sqrt{2}}|ab\rangle-\frac{1}{\sqrt{2}}|ba\rangle\mid a<b\in[d]\}\). We hope that our work on Quantum Max-\(d\)-Cut can help in the research of LHPs over qudits as Max-Cut and Max-\(d\)-Cut did in the research of GCSPs. Moreover, with the increasing interest in high-dimensional quantum computing, i.e., quantum computers that use qudits [13, 14], optimization over qudits has become more of interest.
#### Previous work
There is a rich body of research on approximating ground states of \(k\)-LHPs over qubits with classical algorithms, especially for the Quantum Max-Cut problem; the corresponding literature for qudits is less developed. Gharibian and Kempe gave a polynomial-time approximation scheme (PTAS) for computing product-state solutions to dense \(k\)-LHPs [11]; their algorithm admits a \(d^{1-k}\)-approximation to the energy of the optimal product-state solution. After that, Brandao and Harrow gave multiple PTASes for \(k\)-LHPs over qudits for planar graphs, dense graphs, and low threshold rank graphs [1].
Of particular interest to this paper is the work on finding product-state solutions to the Quantum Max-Cut problem. In particular, the SDP-based algorithm of Briët, de Oliveira Filho, and Vallentin [1] gives a PTAS with an approximation ratio of \(0.956\) to the optimal product-state solution [14]. Subsequent work by Gharibian and Parekh [15] gives a similar PTAS based on an SDP derived from the noncommutative sum-of-squares (ncSoS) hierarchy, with an approximation ratio of \(0.498\) to the best (possibly entangled) state. Their rounding algorithm used the theory of the Bloch sphere, a unit sphere, \(S^{2}\), that corresponds bijectively to valid pure-state density matrices. In this paper, we extend these results to the Quantum Max-\(d\)-Cut problem.
#### Our Results
We present a noncommutative sum-of-squares SDP-based randomized approximation algorithm for the Quantum Max-\(d\)-Cut problem that finds a product-state solution of mixed states with bounded purity. We give the following theorem to describe its approximation ratio.
**Theorem 1** (Approximation Ratios For Approximating Quantum Max-\(d\)-Cut).: _There exists an efficient approximation algorithm for Quantum Max-\(d\)-Cut that admits an \(\alpha_{d}\)-approximation, where the constants \(\alpha_{d}\) (for \(d\geq 2\)) satisfy,_
1. \(\alpha_{d}>\frac{1}{2}\left(1-1/d\right)\)__
2. \(\alpha_{d}-\frac{1}{2}\left(1-1/d\right)\sim\frac{1}{2d^{3}}\)__
3. \(\alpha_{2}\geq 0.498767,\alpha_{3}\geq 0.372995,\alpha_{4}\geq 0.388478, \alpha_{5}\geq 0.406128,\alpha_{10}\geq 0.450614,\alpha_{100}\geq 0.4950005\)__
**Remark 1**.: As a heuristic to judge our algorithm, we can observe that the energy achieved by the maximally mixed state over all vertices is no worse than \(\frac{1}{2}(1-1/d)\) times the optimal. Theorem 1, Items (i) and (ii) show that our algorithm performs better than this trivial assignment for all \(d\geq 2\). Similar metrics are used classically, where a random assignment gives a \(1-1/d\) approximation in expectation for Max-\(d\)-Cut.
We note that the approximation ratio for the \(d=2\) case (i.e., Quantum Max-Cut) is the same as for the Gharibian-Parekh algorithm; in that case, the algorithm and the analysis are unchanged. For all other cases, \(d\geq 3\), we show that our analysis is tight by providing an algorithmic gap that matches these ratios. While not stated here, we note that the exact values of \(\alpha_{d}\) for \(d\geq 3\) are given by an analytical formula.
**Theorem 2** (Algorithmic gap of Quantum Max-\(d\)-Cut).: _The approximation algorithm for Quantum Max-\(d\)-Cut that rounds to mixed product-states using the basic SDP has algorithmic gap \(\alpha_{d}\) for \(d\geq 3\)._
This is rather interesting as for Quantum Max-Cut, there is no known hard instance that gives an algorithmic gap of \(\alpha_{2}\)[14].
Additionally, we present an SDP-based algorithm in approximating the optimal product-state solution of Quantum Max-\(d\)-Cut. We give the following theorem to describe its approximation ratio.
**Theorem 3** (Approximation Ratios For Approximating The Optimal Product-State Solution of Quantum Max-\(d\)-Cut).: Quantum Max-\(d\)-Cut _admits a \(\beta_{d}\)-approximation to the optimal product-state solution with respect to the basic SDP, where the constants \(\beta_{d}\) (for \(d\geq 2\)) satisfy,_
1. \(\beta_{d}=2\alpha_{d}\) _for_ \(d\geq 3\)__

2. \(\beta_{2}\geq 0.956337\) [BdOFV10; HNPTW22]
Furthermore, because we can relate the ratio of approximating the optimal product-state solution with that of approximating the maximal energy of Quantum Max-\(d\)-Cut by a factor of two, it follows that we also beat the metric of a random product-state solution for approximating the optimal product-state solution.
**Corollary 1** (Beating Random Assignment for Approximating The Optimal Product-State Solution of Quantum Max-\(d\)-Cut).: Quantum Max-\(d\)-Cut _admits a \(\beta_{d}\)-approximation to the optimal product-state solution with respect to the basic SDP, where the constants \(\beta_{d}\) (for \(d\geq 2\)) satisfy,_
1. \(\beta_{d}>1-1/d\)__
2. \(\beta_{d}-(1-1/d)\sim\frac{1}{d^{3}}\)__
In achieving these results, we extend the methods of [1, 1], which only work for Quantum Max-Cut, to Quantum Max-\(d\)-Cut for arbitrary \(d\geq 2\) in two major ways. We first present an SDP based on the noncommutative sum-of-squares (ncSoS) hierarchy for many-body systems over qudits using a generalization of the Pauli matrices called the generalized Gell-Mann matrices. While useful for defining observables, they lose many other useful properties of the Pauli matrices. Namely, they are not unitary (for \(d\geq 3\)), which makes defining the SDP difficult. In particular, arguing that the SDP vectors are unit vectors is no longer as trivial as observing that \(P^{2}=I\) for a Pauli matrix \(P\), a fact that relies on \(P\) being both Hermitian and unitary. Our SDP shares many similarities with SDPs that have been used before [1, 1, 2, 1]; however, to work with our rounding algorithm we must make additional guarantees about our SDP, which allow us to simplify the SDP further. Second, we observe that the notion of a Bloch sphere does not exist for qudits of dimension \(d\geq 3\)[1], and so one cannot round to the unit sphere on which the Bloch vectors for pure states lie. To get around this, we round to mixed states with bounded purity.
#### Paper Organization
To show these results, we use a matrix basis that can be seen as a generalization of the Pauli matrices called the generalized Gell-Mann matrices, which we introduce in Section 2.1 along with the Preliminaries in Section 2. This matrix basis will allow us to write the Hamiltonian in a convenient way, allowing for both an SDP relaxation and a Bloch vector representation of quantum states. With this, in Section 3, we outline the main focus of our work, namely the Quantum Max-\(d\)-Cut problem and many special cases that aid in the analysis of our algorithm. Then, in Section 4, we discuss the notion of Bloch vectors for qudits and discuss the critical geometric challenges of rounding to product-states that did not exist for qubits. In understanding these challenges, we discuss a resolution that will become the primary inspiration for our rounding algorithm. Following that, in Section 5, we derive an SDP relaxation through the second level of the ncSoS hierarchy that also enforces all two body moments to be valid density matrices. Additionally, we make several new observations crucial for our rounding algorithm about the ncSoS hierarchy for higher dimensional qudits. Then, the main technical contribution of our work is in proving Theorems 1 and 3, which we do in Section 6 by presenting our full algorithm and analyzing its approximation ratio for all \(d\geq 2\) using the Gaussian hypergeometric function. Then, we show our analysis is tight by providing an algorithmic gap instance in Section 7 and proving Theorem 2. Lastly, we give open problems and future directions in Section 8.
## 2 Preliminaries
We use the notation \([n]:=\{1,\ldots,n\}\) and \([a,b]:=\{a,a+1,\ldots,b\}\). Let \(S_{n}\) denote the _symmetric group_ over the set \([n]\), which has order \(n!\), and \(A_{n}:=\{\sigma\in S_{n}\mid\operatorname{sgn}(\sigma)=1\}\) denote the _alternating group_, which has order \(n!/2\). Let \(\mathcal{H}_{d}:=\mathbb{C}^{d}\) be a \(d\)-dimensional Hilbert space, where a \(d\)_-dimensional qudit_ or a _state_ refers to a unit vector in \(\mathcal{H}_{d}\). When \(d=2\), we call these states _qubits_. In the case of qubits, it is common to represent the _standard basis_ as \(\{|0\rangle,|1\rangle\}\), which is intended to represent the quantum analogs of classical bits. However, for qudits we find it more convenient to denote the _standard basis_ as \(\{|i\rangle\ |\ i\in[d]\}\), and so, unless otherwise stated, this is the convention we will use for qudits of all dimensions. Let \(\mathcal{H}_{d}^{\otimes n}\) be the space of \(n\), \(d\)-dimensional qudits. We use \(\langle\psi|\) to denote the _conjugate transpose_ of \(|\psi\rangle\in\mathcal{H}_{d}^{\otimes n}\) and \(\langle\psi,\phi\rangle\) to denote the inner product of two states. The _standard basis_ for \(\mathcal{H}_{d}^{\otimes n}\) is denoted by \(\{|i_{1}i_{2}\dots i_{n}\rangle:=|i_{1}\rangle\otimes|i_{2}\rangle\otimes\dots\otimes|i_{n}\rangle\ |\ i_{1},i_{2},\dots,i_{n}\in[d]\}\).
We use the notation \(\mathcal{L}(\mathcal{H}_{d}^{\otimes n})\equiv\mathbb{C}^{d^{n}\times d^{n}}\) to denote the Hilbert-Schmidt space of linear operators from \(\mathcal{H}_{d}^{\otimes n}\) to itself, \(\mathcal{H}_{d}^{\otimes n}\rightarrow\mathcal{H}_{d}^{\otimes n}\). An operator/matrix \(A\in\mathcal{L}(\mathcal{H}_{d}^{\otimes n})\) is _positive semidefinite_ or _PSD_ (denoted \(A\succcurlyeq 0\)) if \(\langle\psi|A|\psi\rangle\geq 0\) for all \(|\psi\rangle\in\mathcal{H}_{d}^{\otimes n}\). Equivalently, all the eigenvalues of \(A\) are non-negative. We use \(A^{*}:=\overline{A^{\top}}\) to denote the adjoint/conjugate transpose. A matrix, \(A\), is _Hermitian_ if \(A^{*}=A\). We use the notation \(\mathcal{D}(\mathcal{H}_{d}^{\otimes n}):=\{\rho\in\mathcal{L}(\mathcal{H}_{d}^{\otimes n})\ |\ \rho^{*}=\rho,\ \rho\succcurlyeq 0,\ \mathrm{tr}(\rho)=1\}\) to denote the subset of _density matrices_ on \(n\) qudits. A density matrix \(\rho\) is _pure_ if it is a projector onto some state \(|\psi\rangle\in\mathcal{H}_{d}^{\otimes n}\), namely, \(\rho=|\psi\rangle\!\langle\psi|\); equivalently, if \(\mathrm{tr}(\rho^{2})=1\). More generally, for an arbitrary density matrix \(\rho\in\mathcal{D}(\mathcal{H}_{d}^{\otimes n})\), we refer to the quantity \(\mathrm{tr}(\rho^{2})\) as its _purity_. We denote by \(\rho^{*}:=\frac{1}{d^{n}}I\) a special density matrix called the _maximally mixed state_. Special classes of operators include the _unitary group_, denoted \(U(d):=\{U\in\mathcal{L}(\mathcal{H}_{d})\ |\ U^{*}U=I\}\), and its subgroup the _special unitary group_, denoted \(SU(d):=\{U\in U(d)\ |\ \det(U)=1\}\).
The _symmetric subspace_ of \(\mathcal{H}_{d}^{\otimes n}\), the space of \(n\), \(d\)-dimensional qudits, is given by

\[\vee^{n}\mathcal{H}_{d}:=\{|\psi\rangle\in \mathcal{H}_{d}^{\otimes n}\ |\ |\psi\rangle=P(\sigma)|\psi\rangle\ \forall\sigma\in S_{n}\}=\mathrm{span}\{|\psi\rangle^{\otimes n}\ |\ |\psi\rangle\in \mathcal{H}_{d}\}\]

where \(P:S_{n}\to GL(\mathcal{H}_{d}^{\otimes n})\) is a representation of \(S_{n}\) given by \(P(\sigma):=\sum_{i_{1},\dots,i_{n}}|i_{\sigma^{-1}(1)}\dots i_{\sigma^{-1}(n)}\rangle\!\langle i_{1}\dots i_{n}|\)[14]. Similarly, the _antisymmetric subspace_ of \(\mathcal{H}_{d}^{\otimes n}\) is given by

\[\wedge^{n}\mathcal{H}_{d}:=\{|\psi\rangle\in \mathcal{H}_{d}^{\otimes n}\ |\ |\psi\rangle=\mathrm{sgn}(\sigma)P(\sigma)|\psi\rangle\ \forall\sigma\in S_{n}\}\]
For \(n=2\), we note that these subspaces are orthogonal complements of each other. Moreover, the significance of these subspaces comes from rich use of representation theory in quantum information [14], which we will not delve into further in this paper.
We generally use subscripts to indicate quantum subsystems. For instance, for a single-qudit operator \(M\in\mathcal{L}(\mathcal{H}_{d})\), we use \(M_{i}\in\mathcal{L}(\mathcal{H}_{d}^{\otimes n})\) to denote the \(n\)-qudit operator that acts as \(M\) on the \(i\)th qudit and as the identity on all other qudits. Moreover, for a \(2\)-qudit operator, \(H\), we use \(H_{ij}\) to denote that it acts on qudits \(i\) and \(j\). Note that order matters.
Let \(G=(V,E)\) be an unweighted, undirected graph and \(G=(V,E,w)\) be a weighted, undirected graph. Without loss of generality, we let \(V=[n]\). When viewing the vertices of a graph as a system of qudits, we use the notation \(\mathcal{H}_{d}^{\otimes V}\). We denote by \(K_{n}\) a special graph called the _complete graph on \(n\) vertices_ (or the _\(n\)-clique_), the graph with an edge between every pair of vertices.
Often, to show that the analysis of an algorithm is tight, we use the notion of an _algorithmic_ gap.
**Definition 1** (Algorithmic Gap).: Let \(\mathcal{P}\) be a maximization problem and \(A\) an approximation algorithm. For an instance \(\mathcal{I}\) of \(\mathcal{P}\), let \(A(\mathcal{I})\) be the expected value of the solution outputted by the approximation algorithm and \(OPT(\mathcal{I})\) be the true optimal value. The algorithmic gap of the instance \(\mathcal{I}\) is the quantity
\[\mathrm{Gap}_{A}(\mathcal{I})=\frac{A(\mathcal{I})}{\mathrm{OPT}(\mathcal{I})} \tag{1}\]
The algorithmic gap of \(A\) for the problem \(\mathcal{P}\) is the quantity
\[\inf_{\mathcal{I}}\left\{\mathrm{Gap}_{A}(\mathcal{I})\right\} \tag{2}\]
The _approximation ratio_ is then a lower bound on this quantity. Namely, it is a constant, \(\alpha\), such that for all instances \(\mathcal{I}\), \(A(\mathcal{I})\geq\alpha\,\mathrm{OPT}(\mathcal{I})\).
Lastly, we use \(\sim\) to denote asymptotic equivalence. That is, two functions, \(f,g:\mathbb{R}\rightarrow\mathbb{R}\), are asymptotically equivalent, denoted \(f\sim g\), if \(\lim_{x\rightarrow\infty}\frac{f(x)}{g(x)}=1\), or equivalently if \(f(x)=g(x)(1+o_{x}(1))\).
### Matrix Basis
When working in higher level qudit systems, it becomes less obvious how to describe quantum states and Hamiltonians. In the qubit case, we have the Pauli matrices, which have become immensely useful to the study of Hamiltonian optimization. For qudits, there are multiple options for matrix bases that generalize the Pauli matrices, maintaining different useful properties [1]. In this section, we look at one such basis called the generalized Gell-Mann matrix basis. We also discuss another commonly used basis of unitary matrices based on the clock and shift matrices in Appendix D, but we note that they ultimately fall short when using them as a basis for Bloch vectors.
**Definition 2** (Generalized Gell-Mann Matrices [1, 16, 17]).: The _generalized Gell-Mann matrices_ are a higher dimensional extension of the Pauli Matrices (Example 1) and the Gell-Mann matrices (Example 2) as generators of the \(\mathfrak{su}(d)\) Lie algebra.2 They are given by the following \(d^{2}-1\) matrices, which can be divided into three categories and defined as the following matrices:
Footnote 2: We note that the generalized Gell-Mann matrices are hermitian by definition. However, in mathematics, \(\mathfrak{su}(d)\) is the algebra of \(d\times d\), traceless, skew-Hermitian matrices. Nonetheless, it is common practice in physics to consider generators such that \(\{i\Lambda^{a}\}_{a}\) gives a real basis for \(\mathfrak{su}(d)\). We will follow the physics convention in this paper.
* Symmetric \[\Lambda^{+}_{ab}:=|a\rangle\!\langle b|+|b\rangle\!\langle a|,\ \ \ \ 1\leq a<b\leq d\]
* Antisymmetric \[\Lambda^{-}_{ab}:=-i|a\rangle\!\langle b|+i|b\rangle\!\langle a|,\ \ \ \ 1 \leq a<b\leq d\]
* Diagonal \[\Lambda^{d}_{a}:=\sqrt{\frac{2}{a(a+1)}}\left(\sum_{b=1}^{a}|b\rangle\! \langle b|-a|a+1\rangle\!\langle a+1|\right),\ \ \ \ 1\leq a\leq d-1\]
Often, when taking summations over these matrices or when writing their structure constants, it is more useful to use a single index, which we will denote by \(\Lambda^{a}\). This definition comes more naturally from a recursive definition of the generators from \(\mathfrak{su}(d-1)\) to \(\mathfrak{su}(d)\)[17]. The relationship between these two definitions is given by the following indices, with \(1\leq a<b\leq d\).
\[S_{ab} =b^{2}+2(a-b)-1 \tag{3}\] \[A_{ab} =b^{2}+2(a-b)\] (4) \[D_{a} =a^{2}+2a \tag{5}\]
Then
\[\Lambda^{+}_{ab}=\Lambda^{S_{ab}},\ \ \ \ \Lambda^{-}_{ab}=\Lambda^{A_{ab}},\ \ \ \ \Lambda^{d}_{a}=\Lambda^{D_{a}} \tag{6}\]
Note that we will often switch between these two definitions depending on which model suits the situation best.
The generalized Gell-Mann matrices also form a real basis for the vector space of \(d\times d\), hermitian, traceless matrices that are orthogonal under the Hilbert-Schmidt inner product: \(\operatorname{tr}(\Lambda^{a}\Lambda^{b})=2\delta_{ab}\) (where \(\delta_{ab}\) is the Kronecker delta). Additionally, combined with the identity, or rather \(\Lambda^{0}:=\sqrt{\frac{2}{d}}I\), these \(d^{2}\) matrices give a real basis for all hermitian matrices, i.e., the space of single qudit observables, \(\mathcal{HM}_{d}\). It is for this reason they are well suited to be used to express Hamiltonian optimization problems, and in particular the Quantum Max-\(d\)-Cut Hamiltonian. Lastly, as a complex vector space, they are a basis for \(\mathcal{L}(\mathbb{C}^{d})\).
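To make these properties concrete, the following minimal numerical sketch (an illustration of ours, not code from the paper; the ordering of the basis is arbitrary and need not match the single-index convention above) constructs the generalized Gell-Mann matrices and verifies that they are traceless, Hermitian, and orthogonal under the Hilbert-Schmidt inner product.

```python
import numpy as np

def gell_mann(d):
    """The d^2 - 1 generalized Gell-Mann matrices of su(d) (Definition 2)."""
    I = np.eye(d, dtype=complex)
    E = lambda a, b: np.outer(I[:, a], I[:, b])  # |a><b|
    mats = []
    for b in range(d):
        for a in range(b):
            mats.append(E(a, b) + E(b, a))                  # symmetric
            mats.append(-1j * E(a, b) + 1j * E(b, a))       # antisymmetric
    for a in range(1, d):
        diag = np.zeros(d, dtype=complex)
        diag[:a], diag[a] = 1.0, -a
        mats.append(np.sqrt(2.0 / (a * (a + 1))) * np.diag(diag))  # diagonal
    return mats

d = 4
L = gell_mann(d)
assert len(L) == d**2 - 1
for a, La in enumerate(L):
    assert abs(np.trace(La)) < 1e-12                        # traceless
    assert np.allclose(La, La.conj().T)                     # Hermitian
    for b, Lb in enumerate(L):
        # Hilbert-Schmidt orthogonality: tr(L_a L_b) = 2 delta_ab
        assert abs(np.trace(La @ Lb) - 2 * (a == b)) < 1e-12
```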
Of particular interest is the algebraic structure of these matrices and thus of \(\mathfrak{su}(d)\) through their commutator/anti-commutator relations. Namely, by choosing the generalized Gell-Mann matrices as a basis, we can use the
fact that the commutator (and anti-commutator) can be written in this basis in the following way.
\[[\Lambda^{a},\Lambda^{b}] =\Lambda^{a}\Lambda^{b}-\Lambda^{b}\Lambda^{a}=2i\sum_{c=1}^{d^{2}-1}f_{abc}\Lambda^{c}\qquad a,b\in[d^{2}-1] \tag{7}\] \[\{\Lambda^{a},\Lambda^{b}\} =\Lambda^{a}\Lambda^{b}+\Lambda^{b}\Lambda^{a}=\frac{4}{d}\delta_{ab}I+2\sum_{c=1}^{d^{2}-1}d_{abc}\Lambda^{c}\qquad a,b\in[d^{2}-1] \tag{8}\]

where the totally antisymmetric structure constants of \(\mathfrak{su}(d)\) are given by \(f_{abc}\) and the totally symmetric constants are given by \(d_{abc}\). We note that these constants have known closed formulas [1] that can be taken advantage of to define SDP constraints. Additionally, putting these together, we get the following product property.

\[\Lambda^{a}\Lambda^{b}=\frac{2}{d}\delta_{ab}I+\sum_{c=1}^{d^{2}-1}d_{abc}\Lambda^{c}+i\sum_{c=1}^{d^{2}-1}f_{abc}\Lambda^{c} \tag{9}\]

We note that these can be extended to include \(\Lambda^{0}\) by setting \(f_{0ab}=0\) and \(d_{0ab}=\sqrt{\frac{2}{d}}\,\delta_{ab}\) for all \(a,b\in[0,d^{2}-1]\) (extended totally symmetrically), in which case \(\{\Lambda^{a},\Lambda^{b}\}=2\sum_{c=0}^{d^{2}-1}d_{abc}\Lambda^{c}\). We, however, will not do this, as it is more convenient to separate the identity terms from the non-identity terms.
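For concreteness, the structure constants can be recovered numerically from the trace identities above; the sketch below (our illustration, reusing the `gell_mann` helper from the previous snippet) computes them and spot-checks their symmetries.

```python
import numpy as np
# assumes gell_mann(d) from the previous sketch is in scope

def structure_constants(d):
    """f_abc and d_abc of su(d) via traces:
    [L_a, L_b] = 2i sum_c f_abc L_c      =>  f_abc = tr([L_a, L_b] L_c) / (4i)
    {L_a, L_b} = (4/d) delta_ab I + 2 sum_c d_abc L_c
                                         =>  d_abc = tr({L_a, L_b} L_c) / 4
    """
    L = gell_mann(d)
    m = len(L)
    f = np.zeros((m, m, m))
    g = np.zeros((m, m, m))  # the symmetric constants d_abc
    for a in range(m):
        for b in range(m):
            comm = L[a] @ L[b] - L[b] @ L[a]
            anti = L[a] @ L[b] + L[b] @ L[a]
            for c in range(m):
                f[a, b, c] = np.real(np.trace(comm @ L[c]) / 4j)
                g[a, b, c] = np.real(np.trace(anti @ L[c]) / 4)
    return f, g

f, g = structure_constants(3)
assert np.allclose(f, -np.transpose(f, (1, 0, 2)))  # f is antisymmetric in a, b
assert np.allclose(g, np.transpose(g, (1, 0, 2)))   # d is symmetric in a, b
```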
**Remark 2**.: For the purposes of this paper, we do not care about the exact definitions of the Gell-Mann matrices and instead only about their algebraic structure. That is, we require a set \(\{\Gamma^{a}\}_{a\in[d^{2}-1]}\) of \(d\times d\), traceless, Hermitian matrices that satisfy \(\operatorname{tr}(\Gamma^{a}\Gamma^{b})=2\delta_{ab}\). However, we use the specific set \(\{\Lambda^{a}\}_{a\in[d^{2}-1]}\) of Gell-Mann matrices to make our calculations easier.
## 3 The Quantum Max-\(d\)-Cut Hamiltonian and the \(SU(d)\) Heisenberg model
In this paper, we focus on the \(d\)-dimensional qudit generalization of the well-studied Heisenberg model, called the \(SU(d)\) Heisenberg model with no local terms (often known as the \(SU(N)\) Heisenberg model in physics literature [10, 1, 2]). This model is defined as having the edge interaction
\[h^{\operatorname{Heis}_{d}}=\frac{1}{4}\sum_{a=1}^{d^{2}-1}\Gamma^{a}\otimes \Gamma^{a} \tag{10}\]
where \(\{\Gamma^{a}\}_{a\in[d^{2}-1]}\) is some set of traceless hermitian matrices satisfying \(\operatorname{tr}(\Gamma_{a}\Gamma_{b})=2\delta_{ab}\) and are generators of \(\mathfrak{su}(d)\). Of particular interest is when \(\{\Gamma^{a}\}_{a\in[d^{2}-1]}=\{\Lambda^{a}\}_{a\in[d^{2}-1]}\) are the generalized Gell-Mann matrices, defined in Definition 2.
Then, for a graph \(G=(V,E)\), we can define the full problem Hamiltonian to be \(H_{G}:=\sum_{e\in E}w_{e}h_{e}\). Following the convention for Hamiltonian optimization problems, we define the ground state energy to be the minimal eigenvalue (rather than the maximal energy, as is the case for Quantum Max-Cut).
We note that up to adding an identity term, (10) is nothing but the projector onto the symmetric subspace of \(\mathcal{H}_{d}^{\otimes 2}:=(\mathbb{C}^{d})^{\otimes 2}\). We can formalize this with the following proposition.
**Proposition 1**.: _For \(P_{sym}\) being the orthogonal projector onto the symmetric subspace of \(\mathcal{H}_{d}^{\otimes 2}\), we have that \(\frac{1}{2}\left(\frac{d+1}{d}\right)I+h^{\operatorname{Heis}_{d}}=P_{sym}\)._
We note that the \(SU(d)\) Heisenberg model is known to be universal in the sense that any other Hamiltonian can be simulated by a Hamiltonian made from these \(SU(d)\) Heisenberg edge interactions [14].
With this, we can now define the Quantum Max-\(d\)-Cut edge interaction and Hamiltonian, which we will show is closely related to the \(SU(d)\) Heisenberg model.
**Definition 3** (The Quantum Max-\(d\)-Cut edge interaction).: For \(2\)-qudits, we define the Quantum Max-\(d\)-Cut _edge interaction_ to be the projector onto the antisymmetric subspace of two, \(d\)-dimensional qudits. Using an orthonormal basis for the antisymmetric subspace, we get that
\[h:=\sum_{1\leq a<b\leq d}\left(\frac{1}{\sqrt{2}}|ab\rangle-\frac{1}{\sqrt{2}}| ba\rangle\right)\left(\frac{1}{\sqrt{2}}\langle ab|-\frac{1}{\sqrt{2}}\langle ba|\right) \tag{11}\]
We note that, because this is a \(2\)-qudit orthogonal projector onto the antisymmetric subspace, it is exactly the projector onto the complement of the symmetric subspace, i.e., \(h=I-P_{\text{sym}}\). Using Proposition 1, we get that the Quantum Max-\(d\)-Cut edge interaction can be decomposed in the following way.
**Proposition 2** (Alternate definition for the Quantum Max-\(d\)-Cut edge interaction).: _The Quantum Max-\(d\)-Cut edge interaction can be written in terms of the generalized Gell-Mann matrices in the following way._
\[h=\frac{1}{2}\left(\frac{d-1}{d}\right)I-\frac{1}{4}\sum_{a=1}^{d^{2}-1} \Lambda^{a}\otimes\Lambda^{a} \tag{12}\]
We prove Propositions 1 and 2 in Appendix B.
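Both propositions are easy to confirm numerically; the following sketch (our illustration, reusing the `gell_mann` helper from Section 2.1's snippet) checks the two identities and the rank of the edge interaction.

```python
import numpy as np
# assumes gell_mann(d) from the earlier sketch is in scope

def swap_op(d):
    """SWAP on two d-dimensional qudits: SWAP |ab> = |ba>."""
    S = np.zeros((d * d, d * d))
    for a in range(d):
        for b in range(d):
            S[b * d + a, a * d + b] = 1.0
    return S

d = 3
I2 = np.eye(d * d)
heis = 0.25 * sum(np.kron(La, La) for La in gell_mann(d))      # eq. (10)

# Proposition 1: (1/2)((d+1)/d) I + h^Heis is the symmetric projector
P_sym = (I2 + swap_op(d)) / 2
assert np.allclose(0.5 * ((d + 1) / d) * I2 + heis, P_sym)

# Proposition 2 / Definition 3: h = I - P_sym is the antisymmetric projector
h = ((d - 1) / (2 * d)) * I2 - heis                            # eq. (12)
assert np.allclose(h, (I2 - swap_op(d)) / 2)
assert abs(np.trace(h).real - d * (d - 1) / 2) < 1e-9          # rank C(d, 2)
```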
We note that from the definition, the Quantum Max-\(d\)-Cut edge interaction is naturally invariant under conjugation by local unitaries. Moreover, because Quantum Max-\(d\)-Cut is nothing but the \(SU(d)\) Heisenberg model plus an identity term, finding its ground state, or in our case the maximal eigenvalue, is QMA-complete, as was proven by Piddock and Montanaro [14].
**Definition 4** (Quantum Max-\(d\)-Cut).: Let \(G=(V,E,w)\) be a graph with edge weights, which we refer to as the _interaction graph_. The Quantum Max-\(d\)-Cut _problem Hamiltonian_, \(H_{G}\in\mathcal{L}\left(\mathcal{H}_{d}^{\otimes V}\right)\), is given by
\[H_{G}=\frac{1}{W}\sum_{(u,v)\in E}w_{uv}h_{uv}=\underset{(u,v)\sim E}{\mathbf{E }}h_{uv} \tag{13}\]
where \(W=\sum_{(u,v)\in E}w_{uv}\) denotes the sum of the weights and \(h_{uv}\in\mathcal{L}\left(\mathcal{H}_{d}^{\otimes V}\right)\) denotes the Quantum Max-\(d\)-Cut edge interaction applied to the qudits \(u\) and \(v\), namely, \(h_{uv}\otimes I_{V\setminus\{u,v\}}\).
Equivalently, we can think of the interaction graph as giving a distribution over the local Hamiltonians. Then we ask to optimize the expected energy over the local Hamiltonians. With that, we now give definitions to express the main focus of this work.
**Definition 5** (Energy of Quantum Max-\(d\)-Cut).: Let \(H_{G}\) be an instance of Quantum Max-\(d\)-Cut. The _energy_ of a state \(|\psi\rangle\in\mathcal{H}_{d}^{\otimes V}\), is the quantity \(\langle\psi|H_{G}|\psi\rangle\). The maximum energy, also referred to as the _value_, of \(H_{G}\) is
\[\text{QMax-$d$-Cut}(G)=\lambda_{\text{max}}(H_{G})=\max_{\begin{subarray}{c}| \psi\rangle\in\mathcal{H}_{d}^{\otimes V}\\ \langle\psi|\psi\rangle=1\end{subarray}}\langle\psi|H_{G}|\psi\rangle\]
Equivalently, we can define this using density matrices.
\[\text{QMax-$d$-Cut}(G)=\max_{\begin{subarray}{c}\rho\in\mathcal{D}\left( \mathcal{H}_{d}^{\otimes V}\right)\\ \text{s.t. }\rho^{2}=\rho\end{subarray}}\text{tr}(\rho H_{G})\]
We note that the constraint \(\rho^{2}=\rho\) requires that the density matrices represent pure states (note, we could equivalently have said that \(\text{tr}(\rho^{2})=1\)). We make this constraint explicit because we will also look at a relaxation of Quantum Max-\(d\)-Cut by allowing for ancilla qudits, which will, in turn, allow us to drop this constraint.
**Definition 6** (Energy of Quantum Max-\(d\)-Cut with ancillas).: Let \(H_{G}\) be an instance of Quantum Max
\(d\)-Cut. Given a state with \(r\geq 0\) ancilla qudits, \(|\psi\rangle\in\mathcal{H}_{d}^{\otimes V}\otimes\mathcal{H}_{d}^{\otimes r}\), its energy is the quantity \(\langle\psi|H_{G}\otimes I_{A}|\psi\rangle\), where we use the subscript \(A\) to denote the space of the \(r\) additional ancilla qudits. The maximum energy of \(H_{G}\) is also given by
\[\text{QMax-$d$-Cut}_{\text{ancilla}}(G)=\max_{\begin{subarray}{c}|\psi\rangle\in\mathcal{H}_{d}^{\otimes V}\otimes\mathcal{H}_{d}^{\otimes r}\text{ for }r\geq 0\\ \langle\psi|\psi\rangle=1\end{subarray}}\langle\psi|H_{G}\otimes I_{A}|\psi\rangle\]
We can then make the following observations about this new problem.
**Proposition 3**.: _The problem of optimizing over states with ancilla qudits is equivalent to optimizing over mixed density matrices._
\[\text{QMax-$d$-Cut}_{\text{ancilla}}(G)=\max_{\rho\in\mathcal{D}\left( \mathcal{H}_{d}^{\otimes V}\right)}\operatorname{tr}(\rho H_{G})\]
_By equivalent, we mean that for every state with ancilla qudits, there is a density matrix that achieves the same energy, and vice versa._
This follows as a result of the Schmidt decomposition theorem and the concepts of purification and reduction [1]. Namely, for a state \(|\psi\rangle\in\mathcal{H}_{d}^{\otimes V}\otimes\mathcal{H}_{d}^{\otimes r}\), its energy can be expressed as follows.
\[\langle\psi|H_{G}\otimes I_{A}|\psi\rangle=\operatorname{tr}(|\psi\rangle \!\langle\psi|H_{G}\otimes I_{A})=\operatorname{tr}(\operatorname{tr}_{A}(| \psi\rangle\!\langle\psi|)H_{G})\]
where \(\operatorname{tr}_{A}(|\psi\rangle\!\langle\psi|)\) is the partial trace of \(|\psi\rangle\!\langle\psi|\) over the ancilla qudits in \(A\), which can be expressed as the mixed-state density matrix \(\rho:=\operatorname{tr}_{A}(|\psi\rangle\!\langle\psi|)\in\mathcal{D}\left(\mathcal{H}_{d}^{\otimes V}\right)\). In other words, any state with ancilla qudits can be represented as a mixed state with no ancilla qudits that achieves the same energy. Going the other direction, using the notion of purification, we know that any density matrix can be expressed as a partial trace of a pure state on a larger space.
**Remark 3**.: While allowing for ancillas can be seen as a relaxation of Quantum Max-\(d\)-Cut, the maximal energy is the same in both problems. So, in a sense, it is only a relaxation of the search space. Therefore, we will often refer to both definitions interchangeably with \(\text{QMax-$d$-Cut}(G)\).
For the purposes of rounding, we will consider two special cases of Quantum Max-\(d\)-Cut that restrict the search space to be over product states. This is a common first step in studying algorithms for Local Hamiltonian problems as it is a well-understood ansatz. The first of which is when the solution space is over product states of pure states. This is the ansatz originally used to study the Quantum Max-Cut problem [1, 2, 3] as well as general LHPs [11, 12] and has been used as a starting point for many subsequent algorithms [13, 14].
**Definition 7** (Pure product state value).: The product state value of pure states of \(H_{G}\) is
\[\text{PureProd}_{\text{QMC}_{d}}(G)=\max_{\begin{subarray}{c}\forall v\in V, |\psi_{v}\rangle\in\mathbb{C}^{d}\\ \langle\psi_{v}|\psi_{v}\rangle=1\end{subarray}}\langle\psi_{G}|H_{G}|\psi_{G}\rangle\]
where \(|\psi_{G}\rangle=\bigotimes_{v\in V}|\psi_{v}\rangle\). Equivalently, we can write this using density matrices.
\[\text{PureProd}_{\text{QMC}_{d}}(G)=\max_{\begin{subarray}{c}\forall v\in V,\rho_{v}\in\mathcal{D}(\mathbb{C}^{d})\\ \text{s.t. }\rho_{v}^{2}=\rho_{v}\end{subarray}}\operatorname{tr}(\rho_{G}H_{G})\]
where \(\rho_{G}=\bigotimes_{v\in V}\rho_{v}\).
Next, we will consider a product state solution of mixed states. Unlike before, in Definition 6, we won't look at a mere relaxation of the pure state variant, and instead, we will restrict to a certain level of mixed states that would no longer include pure states.
**Definition 8** (Mixed product state value).: The product state value of mixed states of \(H_{G}\) is
\[\textsc{MixedProd}_{\textsc{QMC}_{d}}(G)=\max_{\begin{subarray}{c}\forall v\in V,\rho_{v}\in\mathcal{D}(\mathbb{C}^{d})\\ \text{s.t. }\operatorname{tr}(\rho_{v}^{2})=\frac{1}{d-1}\end{subarray}} \operatorname{tr}(\rho_{G}H_{G})\]
where \(\rho_{G}=\bigotimes_{v\in V}\rho_{v}\).
We note that the constraint \(\operatorname{tr}(\rho_{v}^{2})=\frac{1}{d-1}\) is used to specify the "amount" of mixture that we want our states to have. This quantity is often referred to as the purity of a density matrix \(\rho\). The significance of this will be made clear in Section 4 and Proposition 6. For now, we note that in the \(d=2\) case, this is nothing but \(\textsc{PureProd}_{\textsc{QMC}_{2}}(G)\). Secondly, we note that the maximally mixed state has a purity of \(\operatorname{tr}\left(\frac{1}{d^{2}}I\right)=\frac{1}{d}\), and so for large \(d\), we are optimizing over states that are close to maximally mixed, especially considering that pure states give \(\operatorname{tr}(|\psi\rangle\!\langle\psi|^{2})=1\). Nonetheless, as stated in Theorem 1, Item 1, we show that our algorithm achieves an improvement over a random assignment for all \(d\geq 2\), which in expectation is the same as the energy of the maximally mixed state. However, this improvement rapidly diminishes, as stated in Theorem 1, Item 2.
## 4 Bloch Vectors and Product States
The use of Bloch vectors is very important in the rounding algorithm by Gharibian and Parekh [1]. Their algorithm can be seen as an application of the Briet, de Olivera Filho, and Vallentin rounding algorithm [1], commonly referred to as projection rounding. The Bloch vectors for single qubits give a unit sphere, called the Bloch sphere, to round to. The Bloch vectors on this sphere correspond bijectively to valid pure-state density matrices for single qubits. More generally, these Bloch vectors give a unit ball, called the Bloch ball, whose vectors correspond bijectively to valid density matrices, with the pure states being on the surface and mixed states being inside the ball. The Bloch vectors on the Bloch sphere have certain properties that make them favorable to round to; namely, if two Bloch vectors, \(\vec{b}\) and \(\vec{b}^{\prime}\), have inner product \(\langle\vec{b},\vec{b}^{\prime}\rangle=-1\), their corresponding states are orthogonal under the Hilbert-Schmidt/Frobenius inner product, i.e., \(\operatorname{tr}(\rho\rho^{\prime})=0\) for \(\rho\) and \(\rho^{\prime}\) being their respective density matrices. In this section, we look at the Bloch vectors more closely and their extension to qudits for use in a generalization of the Gharibian-Parekh rounding algorithm.
Bloch vectors for qudits have been extensively studied [11, 12, 13, 14]; however, not much is known about the geometry for \(d\geq 3\) other than that they don't occupy a ball. We denote the convex set of Bloch vectors that correspond to valid density matrices by \(\Omega_{d}\). We present an overview of the geometry of \(\Omega_{d}\) and discuss its implications for rounding. Then, we will look at product state solutions for Quantum Max-\(d\)-Cut. For a more complete discussion, see [1]. Additionally, in an effort to have a self-contained paper, we provide proofs for all the propositions given in this section in Appendix C.
In general, we can always decompose a matrix into a unique linear combination over some matrix basis. The generalized Gell-Mann (GGM) matrix basis gives a very convenient basis because the matrices are pairwise orthogonal, traceless, and hermitian. Combined with the identity matrix, we can easily enforce the unit trace constraint for density matrices. That is, given some density matrix, \(\rho\in\mathcal{D}(\mathbb{C}^{d})\), we can decompose it in the following way.
\[\rho=\frac{1}{d}I+\vec{b}\cdot\vec{\Lambda} \tag{14}\]
where \(\vec{\Lambda}\) is used to represent the \((d^{2}-1)\)-dimensional operator vector of all GGM matrices and \(\vec{b}\cdot\vec{\Lambda}\) is the real linear combination of the GGM matrices with coefficient vector \(\vec{b}\). We call \(\vec{b}\in\Omega_{d}\subseteq\mathbb{R}^{d^{2}-1}\) the Bloch vector. The matrix \(\rho^{*}=\frac{1}{d}I\) is a special density matrix called _the maximally mixed state_. The identity coefficient is fixed because \(\operatorname{tr}(\rho)=1\) for all density matrices and the GGM matrices are all traceless. We note the following.
**Proposition 4** (Outsphere/Circumsphere).: _Given a density matrix \(\rho\) and its corresponding Bloch vector \(\vec{b}\), we always have that \(\|\vec{b}\|\leq\sqrt{\frac{d-1}{2d}}\). Moreover, if \(\rho\) is a pure state, i.e., \(\rho^{2}=\rho\), then \(\|\vec{b}\|=\sqrt{\frac{d-1}{2d}}\)._
We refer to the sphere of radius \(\sqrt{\frac{d-1}{2d}}\) as the outsphere or circumsphere, which defines the minimal ball that contains \(\Omega_{d}\). All pure states lie on the surface of this ball, while all mixed states are inside the ball. While a useful property, we will not be able to turn this observation into a Gharibian-Parekh style rounding algorithm.
**Remark 4**.: While every valid density matrix has a unique Bloch vector, it is not the case that every vector \(\vec{b}\) such that \(\|\vec{b}\|\leq\sqrt{\frac{d-1}{2d}}\) gives a valid density matrix, i.e., \(\Omega_{d}\) is not a ball for \(d\geq 3\). This is because the resulting matrix might not be PSD. This is easy to see in the \(d=3\) case with \(\frac{1}{3}I+\frac{1}{\sqrt{3}}\Lambda_{1}^{d}\), which has an eigenvalue of \(\frac{1}{3}-\frac{1}{\sqrt{3}}<0\). However, we note that this correspondence does hold when \(d=2\), as we could equivalently define a density matrix with \(\operatorname{tr}(\rho^{2})\leq 1\) instead of \(\rho\succcurlyeq 0\).
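Remark 4's counterexample is quick to check numerically (a standalone sketch of ours for \(d=3\); here \(\Lambda_{1}^{d}=\operatorname{diag}(1,-1,0)\) and \(1/\sqrt{3}\) is exactly the outsphere radius \(\sqrt{(d-1)/(2d)}\)):

```python
import numpy as np

# A Bloch vector of outsphere length along the diagonal generator Lambda^d_1
# that does NOT yield a PSD matrix (Remark 4, d = 3).
Lam_d1 = np.diag([1.0, -1.0, 0.0])                # Lambda^d_1 for d = 3
rho = np.eye(3) / 3 + (1 / np.sqrt(3)) * Lam_d1
print(np.linalg.eigvalsh(rho))                    # contains 1/3 - 1/sqrt(3) < 0
```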
This presents the first problem with adapting the Gharibian-Parekh rounding algorithm to higher-level qudit systems. We can't round to the circumsphere, which contains all pure states, and expect to get a valid density matrix, as is the case for \(d=2\). In fact, the region of valid pure states is quite small. This is characterized by the fact that the set of pure states is nothing but \(\mathbb{C}P^{d-1}\), the complex projective space or the projective Hilbert space in \(d\) complex dimensions. Then, when considering density matrices, we note that there is an isometric embedding of \(\mathbb{C}P^{d-1}\) into \(\mathcal{HM}_{d}\), the space of \(d\times d\) hermitian matrices with the Hilbert-Schmidt metric, under which they occupy a \(2(d-1)\)-dimensional surface on the circumsphere [1]. So, rounding to pure states requires a more complete understanding of the geometry of pure states.
If, however, we allow for rounding to mixed-state solutions, then we still have hope. Within the convex region of valid Bloch vectors, \(\Omega_{d}\), there is a well-known maximal ball, which gives a sphere that we can still round to.
**Proposition 5**.: _Within \(\Omega_{d}\), there is a maximal ball of radius \(\frac{1}{\sqrt{2d(d-1)}}\) that consists entirely of Bloch vectors that correspond to valid density matrices._
We call the surface of this ball the insphere. To relate this back to Definition 8, we give the following relation.
**Proposition 6**.: _For a density matrix, \(\rho\), with corresponding Bloch vector, \(\vec{b}\), we have that \(\operatorname{tr}(\rho^{2})=\frac{1}{d-1}\) if and only if \(\|\vec{b}\|=\frac{1}{\sqrt{2d(d-1)}}\)._
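Propositions 5 and 6 together say that every unit direction, scaled to the insphere radius, yields a valid density matrix of purity \(\frac{1}{d-1}\); this is exactly what makes the insphere safe to round to. A quick numerical check (our sketch, reusing the `gell_mann` helper from Section 2.1's snippet):

```python
import numpy as np
# assumes gell_mann(d) from the earlier sketch is in scope

rng = np.random.default_rng(0)
d = 5
L = gell_mann(d)
r_in = 1 / np.sqrt(2 * d * (d - 1))       # insphere radius (Proposition 5)
for _ in range(100):
    b = rng.normal(size=d**2 - 1)
    b /= np.linalg.norm(b)                # a uniformly random direction on S^{d^2-2}
    rho = np.eye(d) / d + r_in * sum(bi * Li for bi, Li in zip(b, L))
    assert np.linalg.eigvalsh(rho).min() > -1e-12           # valid density matrix
    assert abs(np.trace(rho @ rho) - 1 / (d - 1)) < 1e-12   # purity (Proposition 6)
```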
We can observe that for \(d=2\), the outsphere is equal to the insphere, and so the Gharibian-Parekh rounding algorithm can be seen as rounding to Bloch vectors on the insphere. However, in all other cases, the insphere has a radius strictly smaller than that of the outsphere. Nonetheless, we argue that our rounding algorithm is a natural generalization of the Gharibian-Parekh rounding algorithm.
### Analysis of Product state solutions
In this section, we look at the energy of Quantum Max-\(d\)-Cut for pure states and mixed product state solutions. We start with an analysis of product state solutions made up of pure states.
As described above, all pure states have Bloch vectors with \(\|\vec{b}\|=\sqrt{\frac{d-1}{2d}}\). Because it is useful to work with unit vectors, we rescale \(\Omega_{d}\) by a factor of \(\sqrt{\frac{2d}{d-1}}\), which gives the new equation for the Bloch vector decomposition as
\[\rho=\frac{1}{d}I+\sqrt{\frac{d-1}{2d}}\vec{b}\cdot\vec{\Lambda} \tag{15}\]
Now, pure density matrices have unit Bloch vectors. We next give the following important observation.
**Proposition 7** ([13, 14]).: _For two pure states, \(\rho\) and \(\rho^{\prime}\), with Bloch vectors \(\vec{b}\) and \(\vec{b}^{\prime}\) as defined in (15), we have that \(\langle\vec{b},\vec{b}^{\prime}\rangle\geq-\frac{1}{d-1}\). Furthermore, these pure states are orthogonal, i.e., \(\operatorname{tr}(\rho\rho^{\prime})=0\), if and only if \(\langle\vec{b},\vec{b}^{\prime}\rangle=-\frac{1}{d-1}\)._
As a special case, for \(d=2\), this gives us the property that antipodal points correspond to orthogonal states. And in general, for \(d\geq 2\), this lines up with the classical Max-\(d\)-Cut problem [13], where the colors are represented by vertices of a simplex, which also have this same value for the inner product between different vertices. In the quantum setting, this notion is referred to as the eigenvalue simplex.
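The extremal inner product of \(-\frac{1}{d-1}\) is easy to witness with orthogonal standard-basis states, and the lower bound can be sanity-checked on random pure states (our sketch, reusing the `gell_mann` helper; Bloch vectors are scaled as in (15)):

```python
import numpy as np
# assumes gell_mann(d) from the earlier sketch is in scope

def bloch(psi, L, d):
    """Unit Bloch vector of a pure state |psi>, scaled as in eq. (15)."""
    rho = np.outer(psi, psi.conj())
    scale = 2 * np.sqrt((d - 1) / (2 * d))
    return np.array([np.real(np.trace(rho @ La)) for La in L]) / scale

d = 4
L = gell_mann(d)
# orthogonal standard-basis states attain the minimum inner product -1/(d-1)
b1, b2 = bloch(np.eye(d)[0], L, d), bloch(np.eye(d)[1], L, d)
assert abs(np.dot(b1, b2) + 1 / (d - 1)) < 1e-12

# random pure states never dip below -1/(d-1)
rng = np.random.default_rng(1)
for _ in range(1000):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    w = rng.normal(size=d) + 1j * rng.normal(size=d)
    bv = bloch(v / np.linalg.norm(v), L, d)
    bw = bloch(w / np.linalg.norm(w), L, d)
    assert np.dot(bv, bw) >= -1 / (d - 1) - 1e-9
```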
Next, we look at the energy achieved by a product state solution of pure states. We consider solutions of the form \(|\psi_{G}\rangle=\bigotimes_{v\in V}|\psi_{v}\rangle\), i.e., pure product states.
**Proposition 8**.: _For any graph \(G\), we always have that \(\textsc{PureProd}_{\textsc{QMC}_{d}}(G)\leq 1/2\)._
We note that in contrast, QMax-\(d\)-Cut\((G)\) can be as large as \(1\).
Proof.: For each \(v\in V\) let \(\vec{b}_{v}\in S^{d^{2}-2}\) be the Bloch vector for \(\rho_{v}:=|\psi_{v}\rangle\!\langle\psi_{v}|\) as given by the decomposition in (15). Then the energy of \(|\psi_{G}\rangle\) is given by
\[\langle\psi_{G}|H_{G}|\psi_{G}\rangle=\operatorname{tr}(\rho_{G}H_{G})= \operatorname*{\mathbf{E}}_{(u,v)\sim E}\operatorname{tr}\left(h_{uv}\rho_{u} \otimes\rho_{v}\right) \tag{16}\]
We then consider only a single one of these edge interaction terms to get that
\[\operatorname{tr}\left(h_{uv}\rho_{u}\otimes\rho_{v}\right) =\operatorname{tr}\left(\left(\frac{d-1}{2d}I-\frac{1}{4}\sum_{a= 1}^{d^{2}-1}\Lambda^{a}\otimes\Lambda^{a}\right)\left(\frac{1}{d}I+\sqrt{\frac {d-1}{2d}}\;\vec{b}_{u}\cdot\vec{\Lambda}\right)\otimes\left(\frac{1}{d}I+ \sqrt{\frac{d-1}{2d}}\;\vec{b}_{v}\cdot\vec{\Lambda}\right)\right)\] \[=\frac{d-1}{2d}\operatorname{tr}\left(\frac{1}{d^{2}}I\right)- \frac{d-1}{8d}\sum_{a=1}^{d^{2}-1}\vec{b}_{u}(a)\vec{b}_{v}(a)\operatorname{tr }(\Lambda_{a}^{2}\otimes\Lambda_{a}^{2})\] \[=\frac{d-1}{2d}-\frac{d-1}{2d}\left\langle\vec{b}_{u},\vec{b}_{v}\right\rangle\] \[=\frac{1}{2}\left(\frac{d-1}{d}\right)\left(1-\langle\vec{b}_{u},\vec{b}_{v}\rangle\right)\]
We can then upper bound this using Proposition 7 which gives us that \(\operatorname{tr}\left(h_{uv}\rho_{u}\otimes\rho_{v}\right)\leq\frac{1}{2}\). And putting that all together, we get that the energy of \(|\psi_{G}\rangle\) is equivalently given by
\[\operatorname{tr}(\rho_{G}H_{G})=\frac{1}{2}\left(\frac{d-1}{d}\right) \operatorname*{\mathbf{E}}_{(u,v)\sim E}\left(1-\langle\vec{b}_{u},\vec{b}_{ v}\rangle\right)\leq\frac{1}{2}\]
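The per-edge identity \(\operatorname{tr}(h_{uv}\,\rho_{u}\otimes\rho_{v})=\frac{1}{2}\left(\frac{d-1}{d}\right)(1-\langle\vec{b}_{u},\vec{b}_{v}\rangle)\) used in this proof can be verified directly (our sketch, reusing `gell_mann` and the edge interaction from the earlier snippets):

```python
import numpy as np
# assumes gell_mann(d) from the earlier sketch is in scope

d = 3
L = gell_mann(d)
h = ((d - 1) / (2 * d)) * np.eye(d * d) - 0.25 * sum(np.kron(La, La) for La in L)

rng = np.random.default_rng(2)
v = rng.normal(size=d) + 1j * rng.normal(size=d); v /= np.linalg.norm(v)
w = rng.normal(size=d) + 1j * rng.normal(size=d); w /= np.linalg.norm(w)
rho_u, rho_v = np.outer(v, v.conj()), np.outer(w, w.conj())

scale = 2 * np.sqrt((d - 1) / (2 * d))        # eq. (15) normalization
bu = np.array([np.real(np.trace(rho_u @ La)) for La in L]) / scale
bv = np.array([np.real(np.trace(rho_v @ La)) for La in L]) / scale

lhs = np.real(np.trace(h @ np.kron(rho_u, rho_v)))
rhs = 0.5 * ((d - 1) / d) * (1 - np.dot(bu, bv))
assert abs(lhs - rhs) < 1e-12
```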
Note, unlike in the qubit case, this does not easily allow us to re-express the optimization over Bloch vectors, because the set of pure states, while well defined, makes up a small portion of the surface of the circumsphere. We can, of course, restrict to the subset of \(S^{d^{2}-2}\) consisting of Bloch vectors that correspond to valid density matrices, as we do in the following proposition, but that does not lend itself well to a rounding algorithm.
**Proposition 9**.: _We can rewrite the pure product state value as follows where_
\[\Omega_{d}^{\textsc{ext}}=\{\vec{b}\in S^{d^{2}-2}\ |\ \vec{b}\star\vec{b}= \vec{b}\}\subset\Omega_{d}\]
_is the subset of valid Bloch vectors that are extremal points, i.e., that correspond to pure states. We use \((\vec{b}\star\vec{b})_{c}:=\sqrt{\frac{2d}{d-1}}\sum_{a,b=1}^{d^{2}-1}d_{abc}b _{a}b_{b}\) to denote the quantity referred to as the star product (scaled according to (15))._
\[\textsc{PureProd}_{\textsc{QMC}_{d}}(G)=\max_{f:V\rightarrow\Omega_{d}^{ \textsc{ext}}}\frac{1}{2}\left(\frac{d-1}{d}\right)\operatorname*{\mathbf{E} }_{(u,v)\sim E}\left(1-\langle f(u),f(v)\rangle\right) \tag{17}\]
Proof (sketch).: This follows from the steps used in the proof of Proposition 8 combined with the fact that Bloch vectors for pure states are exactly the set \(\Omega_{d}^{\textsc{ext}}=\{\vec{b}\in S^{d^{2}-2}\ |\ \vec{b}\star\vec{b}=\vec{b}\}\)[14, 15, 16, 17].
Regardless of whether we could round to a pure product state solution, we can't hope to get a rounding algorithm with an approximation ratio of more than \(1/2\). The situation is worse when we consider the product state solution of mixed states. Specifically, we consider states whose Bloch vectors lie on the maximal sphere in \(\Omega_{d}\), which has radius \(\sqrt{\frac{1}{2d(d-1)}}\). Again, it is useful to work with unit vectors, so we rescale \(\Omega_{d}\) by a factor of \(\sqrt{2d(d-1)}\), which gives the new equation for the Bloch vector decomposition as
\[\rho=\frac{1}{d}I+\frac{1}{\sqrt{2d(d-1)}}\vec{b}\cdot\vec{\Lambda} \tag{18}\]
Now, any density matrix with a purity of \(\operatorname{tr}(\rho^{2})=\frac{1}{d-1}\) will have a unit Bloch vector. We note that pure states in this model have Bloch vectors with \(\|\vec{b}\|=d-1\); we, however, will not use this fact. Additionally, we note that we no longer have the property that \(\langle\vec{b}_{u},\vec{b}_{v}\rangle\geq-\frac{1}{d-1}\) when \(\vec{b}_{u},\vec{b}_{v}\in S^{d^{2}-2}\) and instead we have that \(1\geq\langle\vec{b}_{u},\vec{b}_{v}\rangle\geq-1\), which is true for all inner products between unit vectors. Importantly, this means we lose the notion of orthogonality among these mixed states.
Now, we can look at the energy achieved by a product state solution of mixed states with bounded purity. Namely, \(\rho_{G}=\bigotimes_{v\in V}\rho_{v}\), where \(\rho_{v}\) is a mixed state which lies on the maximal sphere of density matrices, i.e., \(\operatorname{tr}(\rho_{v}^{2})=\frac{1}{d-1}\) for all \(v\in V\).
**Proposition 10**.: _We can rewrite the mixed product state value as follows._
\[\textsc{MixedProd}_{\textsc{QMC}_{d}}(G)=\max_{f:V\to S^{d^{2}-2}}\frac{1}{2} \left(\frac{d-1}{d}\right)\underset{(u,v)\sim E}{\mathbf{E}}\left(1-\frac{ \langle f(u),f(v)\rangle}{(d-1)^{2}}\right) \tag{19}\]
_Furthermore, we have that \(\textsc{MixedProd}_{\textsc{QMC}_{d}}(G)\leq\frac{1}{2}-o_{d}(1)\)._
Proof.: For each \(v\in V\) let \(\vec{b}_{v}\in S^{d^{2}-2}\) be its Bloch vector as given by the decomposition in (18). Then, as we did in the proof of Proposition 8, we consider the energy of only a single edge interaction to get that
\[\operatorname{tr}\left(h_{uv}\rho_{u}\otimes\rho_{v}\right) =\operatorname{tr}\left(\left(\frac{d-1}{2d}I-\frac{1}{4}\sum_{a =1}^{d^{2}-1}\Lambda^{a}\otimes\Lambda^{a}\right)\left(\frac{1}{d}I+\frac{1}{ \sqrt{2d(d-1)}}\,\vec{b}_{u}\cdot\vec{\Lambda}\right)\otimes\left(\frac{1}{d}I +\frac{1}{\sqrt{2d(d-1)}}\,\vec{b}_{v}\cdot\vec{\Lambda}\right)\right)\] \[=\frac{d-1}{2d}\operatorname{tr}\left(\frac{1}{d^{2}}I\right)- \frac{1}{8d(d-1)}\sum_{a=1}^{d^{2}-1}\vec{b}_{u}(a)\vec{b}_{v}(a)\operatorname {tr}((\Lambda^{a})^{2}\otimes(\Lambda^{a})^{2})\] \[=\frac{d-1}{2d}-\frac{1}{2d(d-1)}\,\langle\vec{b}_{u},\vec{b}_{v}\rangle\] \[=\frac{1}{2}\left(\frac{d-1}{d}\right)\left(1-\frac{\langle\vec{b }_{u},\vec{b}_{v}\rangle}{(d-1)^{2}}\right)\]
This expression is maximized when \(\langle\vec{b}_{u},\vec{b}_{v}\rangle=-1\), which gives us that
\[\operatorname{tr}\left(h_{uv}\rho_{u}\otimes\rho_{v}\right)\leq\frac{(d-1)^{2 }+1}{2d(d-1)}=\frac{1}{2}-\frac{d-2}{2d(d-1)}=\frac{1}{2}-o_{d}(1) \tag{20}\]
Then the energy of the mixed product state solution \(\rho_{G}=\bigotimes_{v\in V}\rho_{v}\) is given by
\[\operatorname{tr}(H_{G}\rho_{G})=\frac{1}{2}\left(\frac{d-1}{d}\right) \underset{(u,v)\sim E}{\mathbf{E}}\left(1-\frac{\langle\vec{b}_{u},\vec{b}_{v }\rangle}{(d-1)^{2}}\right)\leq\frac{1}{2}-o_{d}(1)\]
Finally, we can optimize over the Bloch vectors \(\{\vec{b}_{v}\}_{v\in V}\), because they are in bijection with density matrices such that \(\operatorname{tr}(\rho^{2})=\frac{1}{d-1}\), as shown by Propositions 5 and 6, and the fact that the GGM matrices and the identity form a real basis for Hermitian matrices.
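As with the pure case, the per-edge formula in this proof is easy to verify numerically for insphere-scaled Bloch vectors (our sketch, reusing the `gell_mann` helper):

```python
import numpy as np
# assumes gell_mann(d) from the earlier sketch is in scope

d = 4
L = gell_mann(d)
h = ((d - 1) / (2 * d)) * np.eye(d * d) - 0.25 * sum(np.kron(La, La) for La in L)

rng = np.random.default_rng(3)
r_in = 1 / np.sqrt(2 * d * (d - 1))           # eq. (18) normalization
bu = rng.normal(size=d**2 - 1); bu /= np.linalg.norm(bu)
bv = rng.normal(size=d**2 - 1); bv /= np.linalg.norm(bv)
rho_u = np.eye(d) / d + r_in * sum(b * La for b, La in zip(bu, L))
rho_v = np.eye(d) / d + r_in * sum(b * La for b, La in zip(bv, L))

lhs = np.real(np.trace(h @ np.kron(rho_u, rho_v)))
rhs = 0.5 * ((d - 1) / d) * (1 - np.dot(bu, bv) / (d - 1) ** 2)
assert abs(lhs - rhs) < 1e-12
```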
**Remark 5**.: While the approximation ratio for any algorithm that produces these mixed states is bounded by \(\frac{1}{2}-o_{d}(1)\), we note that this is not too far from the \(\frac{1}{2}\) bound for pure states. At its worst, when \(d=3\) or \(4\), the energy is at most \(\frac{5}{12}\approx 0.41667\).
Lastly, we look at the energy of a random assignment of pure states, or rather of the maximally mixed state \(\rho_{G}^{*}=\frac{1}{d^{n}}I=\bigotimes_{u\in V}\rho^{*}\), where we use \(\rho^{*}=\frac{1}{d}I\) to denote the maximally mixed state of a single qudit. This can be seen as analogous to the classical random assignment. Furthermore, it is also a product state solution.
**Proposition 11**.: _The energy of the maximally mixed state is \(\frac{1}{2}\left(1-\frac{1}{d}\right)\)._
Proof.: We consider the maximally mixed state over all vertices, which is given by \(\rho_{G}^{*}=\bigotimes_{v\in V}\rho_{v}^{*}\). We first consider the energy of a single-edge interaction.
\[\operatorname{tr}\left(h_{uv}\rho_{*}\otimes\rho_{*}\right) =\operatorname{tr}\left(\left(\frac{d-1}{2d}I-\frac{1}{4}\sum_{a =1}^{d^{2}-1}\Lambda^{a}\otimes\Lambda^{a}\right)\left(\frac{1}{d}I\right) \otimes\left(\frac{1}{d}I\right)\right)\] \[=\frac{d-1}{2d}\operatorname{tr}\left(\frac{1}{d^{2}}I\right)- \frac{1}{4d^{2}}\sum_{a=1}^{d^{2}-1}\operatorname{tr}(\Lambda^{a}\otimes \Lambda^{a})\] \[=\frac{d-1}{2d}=\frac{1}{2}\left(1-\frac{1}{d}\right)\]
which can then be extended to the full graph to get
\[\operatorname{tr}\left(H_{G}\rho_{*}^{\otimes V}\right)=\frac{1}{2}\left(1- \frac{1}{d}\right)\]
Combined with the trivial upper bound of QMax-\(d\)-Cut\((G)\leq 1\), we note that this gives an approximation no worse than \(\frac{1}{2}\left(1-\frac{1}{d}\right)\).
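This baseline is also quick to confirm numerically (our sketch, reusing the `gell_mann` helper):

```python
import numpy as np
# assumes gell_mann(d) from the earlier sketch is in scope

d = 3
L = gell_mann(d)
h = ((d - 1) / (2 * d)) * np.eye(d * d) - 0.25 * sum(np.kron(La, La) for La in L)
rho_star = np.eye(d) / d                       # maximally mixed single qudit

# every edge contributes the same amount under the maximally mixed state,
# so the average over edges equals the single-edge energy (Proposition 11)
e = np.real(np.trace(h @ np.kron(rho_star, rho_star)))
assert abs(e - 0.5 * (1 - 1 / d)) < 1e-12
```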
## 5 Sum-of-Squares Hierarchies and SDP relaxations
In this section, we derive the Semidefinite Program (SDP) that we use to relax the Quantum Max-\(d\)-Cut problem. We do so in a general way that provides a framework for deriving SDPs for other local Hamiltonian problems that optimize over higher-level qudits. We note that much of this framework generalizes existing derivations of SDPs that optimize Hamiltonians over qubits.
As we did for single qudits, it is useful to decompose this into a basis of orthogonal hermitian operators for the complex vector space \(\mathcal{L}\left(\mathcal{H}_{d}^{\otimes n}\right)\) (where, again, \(\mathcal{H}_{d}^{\otimes n}:=(\mathbb{C}^{d})^{\otimes n}\)). For this, we extend the generalized Gell-Mann matrices to get the following basis. We note that, as a real vector space, the following basis also gives us the space of hermitian operators.
\[\mathcal{P}_{d}^{n}:=\{\Lambda^{a}\ |a\in[0,d^{2}-1]\}^{\otimes n} \tag{22}\]
We again note that \(\Lambda^{0}:=\sqrt{\frac{2}{d}}I\) is used to denote the normalized identity.
**Remark 6**.: Due to how we defined the subscript notation, we note that \(\Lambda_{u}^{a}:=\Lambda^{a}\otimes I_{[n]\setminus\{u\}}\notin\mathcal{P}_{d}^{n}\) is not a basis element. It is instead a scalar multiple of the basis element \(\Lambda_{u}^{a}\otimes\Lambda_{[n]\setminus\{u\}}^{0}\). Nonetheless, for the sake of readability, we will often conflate the two, using the fact that \(I=\sqrt{\frac{d}{2}}\Lambda^{0}\), to simplify many of our expressions.
Then our local Hamiltonian optimization problem can be expressed as an optimization over real vectors \(\vec{y}\in\mathbb{R}^{d^{2n}}\) such that \(\rho_{\vec{y}}:=\frac{1}{2^{n}}\sum_{A\in\mathcal{P}_{d}^{n}}y(A)A\) is a valid density matrix (in particular, that \(\rho_{\vec{y}}\succcurlyeq 0\) and that \(\operatorname{tr}(\rho_{\vec{y}})=1\)). Conversely, given a density matrix \(\rho\in\mathcal{D}\left(\mathcal{H}_{d}^{\otimes n}\right)\), we can define the vector such that \(y_{\rho}(A):=\operatorname{tr}(\rho A)\) for \(A\in\mathcal{P}_{d}^{n}\). To motivate how we define the SDP, we consider the bilinear form, \(M:\mathcal{L}(\mathcal{H}_{d}^{\otimes n})\times\mathcal{L}(\mathcal{H}_{d}^{\otimes n})\to\mathbb{C}\), defined below, that will become our moment matrix.
\[M(A,B):=\operatorname{tr}(\rho A^{*}B) \tag{23}\]
As a matrix, this is nothing but \(M=\left(\operatorname{tr}(\rho A^{*}B)\right)_{A,B\in\mathcal{P}_{d}^{n}}\).3 We note that this gives an equivalent definition of a density matrix (namely, \(M\) such that \(M\succcurlyeq 0\), \(M\) respects matrix multiplication, i.e., \(M(A,B)=M(A^{\prime},B^{\prime})\) whenever \(A^{*}B=A^{\prime*}B^{\prime}\), and \(M(I,I)=1\)). Then, this gives us an equivalent way to optimize; namely, we optimize \(M(I,H)\) over bilinear forms, \(M\), that satisfy these properties.
Footnote 3: Note that while we define the matrix with the basis, \(\mathcal{P}_{d}^{n}\), we will often give constraint/specific matrix entries using scalar multiples of the basis elements that align with the index notation, \(\Lambda_{u}^{a}\). See Remark 6.
We note that because our basis is of exponential size, we are optimizing over an exponential number of variables. To counteract this, we consider the notion of a pseudo-density matrix. First, we define a new basis that bounds the Pauli weight of the basis elements.
**Definition 10**.: For a basis element \(A\in\mathcal{P}_{d}^{n}\), we define its _Pauli weight_, denoted \(\omega(A)\), to be the number of non-identity terms (up to scalar multiples), i.e., the number of qudits on which \(A\) does not act as the identity (up to scalar multiples). For example, \(\Lambda_{u}^{a}\otimes\Lambda_{v}^{b}\otimes\Lambda_{[n]\setminus\{u,v\}}^{0}\) has Pauli weight \(2\), assuming \(a,b\neq 0\). We extend this notion to an arbitrary operator to be the largest Pauli weight among all of its non-zero components.
Equivalently, we can think of the basis elements as monomials and an arbitrary operator \(A\in\mathcal{L}(\mathcal{H}_{d}^{\otimes n})\) as a polynomial. The degree of each monomial is its Pauli weight, and the maximum degree among all terms in \(A\) is the degree of \(A\). We then define the following basis for the subspace of operators with Pauli weight/degree at most \(t\), denoted \(\mathcal{L}^{(t)}(\mathcal{H}_{d}^{\otimes n})\).
\[\mathcal{P}_{d}^{n}(t):=\{A\in\mathcal{P}_{d}^{n}\ |\ \omega(A)\leq t\} \tag{24}\]
**Definition 11** (degree-\(2t\) pseudo-density Matrix).: A matrix \(\tilde{\rho}\in\mathcal{L}(\mathcal{H}_{d}^{\otimes n})\) is called a _degree-\(2t\) pseudo-density matrix_ over \(n\), \(d\)-dimensional qudits if \(\operatorname{tr}(\tilde{\rho})=1\), \(\tilde{\rho}^{*}=\tilde{\rho}\), and \(\operatorname{tr}(\tilde{\rho}A^{*}A)\geq 0\) for all \(A\in\mathcal{L}^{(t)}(\mathcal{H}_{d}^{\otimes n})\). Additionally, we denote the space of degree-\(2t\) pseudo-density matrices as
\[\tilde{\mathcal{D}}^{(2t)}(\mathcal{H}_{d}^{\otimes n}):=\{\tilde{\rho}\in \mathcal{L}(\mathcal{H}_{d}^{\otimes n})\ |\ \operatorname{tr}(\tilde{\rho})=1,\ \tilde{\rho}^{*}=\tilde{\rho},\ \operatorname{tr}( \tilde{\rho}A^{*}A)\geq 0\text{ for all }A\in\mathcal{L}^{(t)}(\mathcal{H}_{d}^{ \otimes n})\}\]
This can be most easily seen as a relaxation of the PSD constraint on the density matrices.
**Proposition 12** (degree-\(2n\) pseudo-density Matrices are Valid Density Matrices).: \(\tilde{\mathcal{D}}^{(2n)}(\mathcal{H}_{d}^{\otimes n})=\mathcal{D}(\mathcal{H }_{d}^{\otimes n})\)
Proof.: By definition, for \(\tilde{\rho}\in\tilde{\mathcal{D}}^{(2n)}(\mathcal{H}_{d}^{\otimes n})\), we have that \(\operatorname{tr}(\tilde{\rho}A^{*}A)\geq 0\) for all \(A\in\mathcal{L}^{(n)}(\mathcal{H}_{d}^{\otimes n})=\mathcal{L}(\mathcal{H}_{d}^ {\otimes n})\). This is nothing but the PSD definition, so \(\tilde{\rho}\succcurlyeq 0\) and thus is a valid density matrix.
And so we have the following hierarchy that indicates how ncSoS relaxes the problem of optimizing over valid density matrices.
\[\tilde{\mathcal{D}}^{(2)}(\mathcal{H}_{d}^{\otimes n})\supset\tilde{\mathcal{ D}}^{(4)}(\mathcal{H}_{d}^{\otimes n})\supset\cdots\supset\tilde{\mathcal{D}}^{(2(n-1 ))}(\mathcal{H}_{d}^{\otimes n})\supset\tilde{\mathcal{D}}^{(2n)}(\mathcal{H }_{d}^{\otimes n})=\mathcal{D}(\mathcal{H}_{d}^{\otimes n}) \tag{25}\]
**Remark 7**.: While a pseudo-density matrix is defined to be Hermitian, and thus it can be written as a real combination of \(\mathcal{P}_{d}^{n}\), the definition says nothing about the components of terms in \(\mathcal{P}_{d}^{n}\setminus\mathcal{P}_{d}^{n}(2t)\). Furthermore, because we are working with 2-local Hamiltonians, the objective does not depend on them either (even for \(t=1\)), so the values of these components can be ignored or, without loss of generality, set to 0. In other words, we can assume that the pseudo-density matrix \(\tilde{\rho}\in\mathcal{L}^{(2t)}(\mathcal{H}_{d}^{\otimes n})\) has bounded Pauli weight.
All in all, this allows us to rephrase our problem as an optimization over vectors \(\tilde{y}\in\mathbb{R}^{O(n^{t}d^{2t})}\) such that \(\tilde{\rho}_{\tilde{y}}:=\frac{1}{2^{n}}\sum_{A\in\mathcal{P}_{d}^{n}(2t)}\tilde{y}(A)A\) is a valid degree-\(2t\) pseudo-density matrix. We note that \(|\tilde{y}|\) is now polynomial in \(n\) and \(d\), but exponential in the SoS degree, \(t\). Equivalently, we can look at the bilinear form \(\tilde{M}:\mathcal{L}^{(t)}(\mathcal{H}_{d}^{\otimes n})\times\mathcal{L}^{(t)}(\mathcal{H}_{d}^{\otimes n})\to\mathbb{C}\), defined below, that will become our pseudo moment matrix.
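To make the size claim concrete, the truncated basis can be counted directly: choose which \(k\leq t\) qudits carry a non-identity factor and one of the \(d^{2}-1\) non-identity matrices for each. A tiny sketch (`dim_P` is an illustrative helper of ours, not notation from the paper):

```python
from math import comb

def dim_P(n, d, t):
    """|P_d^n(t)|: pick which k <= t qudits are non-identity, then one of the
    d^2 - 1 non-identity Gell-Mann matrices for each of them."""
    return sum(comb(n, k) * (d**2 - 1) ** k for k in range(t + 1))

print(dim_P(100, 3, 2))   # 317601 = 1 + 100*8 + C(100,2)*64: polynomial in n, d
```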
\[\tilde{M}(A,B):=\operatorname{tr}(\tilde{\rho}A^{*}B) \tag{26}\]
What makes this useful is that \(\tilde{M}=(\operatorname{tr}(\tilde{\rho}A^{*}B))_{A,B\in\mathcal{P}_{d}^{n}(2t)}\) still has all the same properties as before (namely, \(\tilde{M}\), viewed on the subspace \(\mathcal{L}^{(t)}(\mathcal{H}_{d}^{\otimes n})\), is still PSD). And because the basis is polynomial in \(n\) and \(d\), so is the size of the matrix \(\tilde{M}\) in the basis \(\mathcal{P}_{d}^{n}(2t)\).
Before we give the SDP that we use in this paper, we make some additional observations about the ncSoS hierarchy. Namely, we look at degree one terms.
**Lemma 1** (degree-one terms).: _Let \(\{h_{\alpha}\}_{\alpha}\) be a set of local Hamiltonians invariant under conjugation by local unitaries. For any (pseudo)-density matrix \(\rho=\frac{1}{2^{n}}\sum_{A\in\mathcal{P}_{d}^{n}}y(A)A\), there is another density matrix \(\rho^{\prime}=\frac{1}{2^{n}}\sum_{A\in\mathcal{P}_{d}^{n}}y^{\prime}(A)A\) such that \(y^{\prime}(A)=0\) for all \(A\in\mathcal{P}_{d}^{n}(1)\setminus\{I\}\) being a degree-one basis vector, which achieves the same expected energy, \(\mathbf{E}_{\alpha}\left[\operatorname{tr}(\rho h_{\alpha})\right]=\mathbf{E}_{\alpha}\left[\operatorname{tr}(\rho^{\prime}h_{\alpha})\right]\)._
We prove this lemma in Appendix D. This allows us to assume without loss of generality that there are no degree-one components in \(\rho\) and, in turn, will allow us to simplify the SDP and upper bound its value. Namely, as a result, we have the following property of the bilinear form, \(M\), using (9).
\[M(\Lambda_{u}^{a},\Lambda_{u}^{b})=\operatorname{tr}(\rho\Lambda_{u}^{a} \Lambda_{u}^{b})=\frac{2}{d}\delta_{ab}+\operatorname{tr}(\rho\mathcal{O})= \frac{2}{d}\delta_{ab} \tag{27}\]
Where \(\mathcal{O}\) is some complex combination of degree-one terms, and thus, by Lemma 1, their components on \(\rho\) are zero. As a direct result, this allows us to view the degree-two ncSoS SDP as a real SDP (combined with the fact that \(M(\Lambda_{u}^{a},\Lambda_{v}^{b})=M(\Lambda_{v}^{b},\Lambda_{u}^{a})\) for \(u\neq v\)). In previous work, this was done by considering the new SDP matrix \(M^{\prime}=\frac{1}{2}(M^{\top}+M)\), which we observe is not needed for degree-two ncSoS. However, this is still needed for SDPs built off of higher degree ncSoS, as Lemma 1 says nothing about degree-three terms or odd degree terms in general. In fact, as we discuss in Appendix D, odd degree terms cannot, in general, be ignored or without loss of generality be set to zero.
We finish our a priori analysis of the SDP by writing it as a vector program similar to that used in [15]. To do this we use the fact that \(M\succcurlyeq 0\) and thus can be represented as a Gram matrix of the set of vectors denoted by \(\{|I\rangle\}\cup\{|\Lambda_{u}^{a}\rangle\mid\forall a\in[d^{2}-1],\ u\in V\} \subset\mathbb{R}^{n(d^{2}-1)+1}\). Alternatively, since the identity entries in \(M\) are constant, we consider the sub-matrix with Gram matrix of the vectors denoted by \(\{|\Lambda_{u}^{a}\rangle\mid\forall a\in[d^{2}-1],\ u\in V\}\). Here, we use the bra-ket notation to denote the SDP vectors, \(|\Lambda_{u}^{a}\rangle\in\mathbb{R}^{n(d^{2}-1)}\). The SDP vectors are such that \(\langle\Lambda_{u}^{a}|\Lambda_{v}^{b}\rangle=M(\Lambda_{u}^{a},\Lambda_{v}^{b})\). It is clearly a relaxation of Definition 9 as was observed in (25).
With that, we can start defining our SDP. Before we look at the SDP that we use in this paper, we consider the SDP that is equivalent to the level-two ncSoS. This is an interesting SDP because it can be seen as the natural qudit generalization of the SDP used in previous work [16, 15].
**Definition 12** (The level-two ncSoS SDP).: Given a graph \(G=(V,E,w)\), and 2-local Hamiltonians, invariant under conjugation of local unitaries, \(h_{uv}=\sum_{a,b=0}^{d^{2}-1}c^{ab}_{uv}\Lambda^{a}_{u}\Lambda^{b}_{v}\), for all \((u,v)\in E\), the value of the level-two ncSoS SDP is given by the following (for which we denote its value by \(\text{Lv2SoSSDP}(G)\)).
maximize \[\operatorname*{\mathbf{E}}_{(u,v)\sim E}\left[\sum_{a,b=0}^{d^{2}- 1}c^{ab}_{uv}\left\langle\Lambda^{a}_{u}|\Lambda^{b}_{v}\right\rangle\right]\] (28a) subject to \[\left\langle\Lambda^{a}_{u}|\Lambda^{b}_{u}\right\rangle=\frac{2 }{d}\delta_{ab}\qquad\forall u\in V,\;a,b\in[d^{2}-1],\] (28b) \[\left|A\right\rangle\in\mathbb{R}^{(d^{2}-1)n}\quad\forall A\in \mathcal{P}^{n}_{d}(1)\setminus\{I\}\] (28c)
Again, note that for constraints (28b) we don't index by basis elements of \(\mathcal{P}^{n}_{d}(1)\) explicitly and instead implicitly give the constraint that \(\left\langle\Lambda^{a}_{u}\otimes\Lambda^{0}_{V\setminus\{u\}}|\Lambda^{b}_{u}\otimes\Lambda^{0}_{V\setminus\{u\}}\right\rangle=\left(\frac{2}{d}\right)^{n}\delta_{ab}\).
While this was not the case in previous work on Quantum Max-Cut, we note that this SDP (for \(d\geq 3\)) overshoots even the trivial upper bound of QMax-\(d\text{-Cut}(G)\leq 1\) for some simple graphs. As an example, this is the case for complete graphs of size \(1<n<d\) (when \(d\geq 3\)).
**Proposition 13**.: _For Quantum Max-\(d\)-Cut with interaction graph, \(K_{n}\), being the unweighted complete graph on \(n\) vertices, the level-two ncSoS SDP gets a value of \(\text{Lv2SoSSDP}_{QMC_{d}}(K_{n})=\frac{(d-1)(d+n)}{2d(n-1)}\)._
Proof.: Let \(\{\left|A\right\rangle\mid A\in\mathcal{P}^{n}_{d}(1)\setminus\{I\}\}\) be a solution for Definition 12, and let \(\tilde{\rho}\) be the corresponding degree-two pseudo-density matrix as given by the correspondence \(\left\langle A|B\right\rangle=\operatorname{tr}(\tilde{\rho}AB)\). Also, let the labels of the vertices be given by \(V=[n]\).
We first give an upper bound using the sum of squares proof technique. Namely, for \(r\geq 2\) non-identity basis matrices, \(\left(\Lambda^{a_{i}}_{u_{i}}\right)_{i\in[r]}\in\left(\mathcal{P}^{n}_{d}(1)\setminus\{I\}\right)^{\times r}\), that all pair-wise commute, we consider the degree-one matrix \(A=\sum_{i=1}^{r}\Lambda^{a_{i}}_{u_{i}}\), and its square.
\[A^{2}=\left(\sum_{i=1}^{r}\Lambda^{a_{i}}_{u_{i}}\right)^{2}=\sum_{i=1}^{r} \left(\Lambda^{a_{i}}_{u_{i}}\right)^{2}+2\sum_{i<j\in[r]}\Lambda^{a_{i}}_{u_ {i}}\Lambda^{a_{j}}_{u_{j}}=\frac{2r}{d}I+\mathcal{O}+2\sum_{i<j\in[r]}\Lambda ^{a_{i}}_{u_{i}}\Lambda^{a_{j}}_{u_{j}}\]
Where \(\mathcal{O}\) is some linear combination of degree-one terms, which has \(\operatorname{tr}(\tilde{\rho}\mathcal{O})=0\) by Lemma 1. Then we consider the following
\[0 \leq\operatorname{tr}(\tilde{\rho}A^{2})=\frac{2r}{d}+2 \operatorname{tr}\left(\tilde{\rho}\sum_{i<j\in[r]}\Lambda^{a_{i}}_{u_{i}} \Lambda^{a_{j}}_{u_{j}}\right)\] \[\Longrightarrow-\frac{r}{d} \leq\operatorname{tr}\left(\tilde{\rho}\sum_{i<j\in[r]}\Lambda^{ a_{i}}_{u_{i}}\Lambda^{a_{j}}_{u_{j}}\right)=\sum_{i<j\in[r]}\left\langle \Lambda^{a_{i}}_{u_{i}}|\Lambda^{a_{j}}_{u_{j}}\right\rangle\]
Putting this into the expression for the value of the SDP, (28a), with \(r=n\), gives us that
\[\operatorname*{\mathbf{E}}_{(u,v)\sim E}\left[\frac{1}{2}\left( \frac{d-1}{d}\right)-\frac{1}{4}\sum_{a=1}^{d^{2}-1}\left\langle\Lambda^{a}_{u} |\Lambda^{a}_{v}\right\rangle\right] =\frac{1}{2}\left(\frac{d-1}{d}\right)-\frac{1}{4}\sum_{a=1}^{d^{2 }-1}\operatorname*{\mathbf{E}}_{(u,v)\sim E}\left[\left\langle\Lambda^{a}_{u} |\Lambda^{a}_{v}\right\rangle\right]\] \[\leq\frac{1}{2}\left(\frac{d-1}{d}\right)-\frac{1}{4}\sum_{a=1}^{ d^{2}-1}\left(-\frac{2}{d(n-1)}\right)\] \[=\frac{(d-1)(d+n)}{2d(n-1)}\]
We can then show that this is an equality by considering the following feasible SDP matrix, \(M\), whose Cholesky decomposition gives SDP vectors achieving the same value as above.
\[M(\Lambda_{u}^{a},\Lambda_{v}^{b})=\begin{cases}\frac{2}{d}&\text{if $a=b$ and $u=v$}\\ -\frac{2}{d(n-1)}&\text{if $a=b$ and $u\neq v$}\\ 0&\text{otherwise}\end{cases}\]
Finally, we can show that this matrix is PSD by observing that, up to some permutation of the rows and columns, it can be written as a block diagonal matrix, consisting of \(d^{2}-1\) blocks of size \(n\times n\), given by
\[\frac{2}{d}\begin{bmatrix}1&-\frac{1}{n-1}&-\frac{1}{n-1}&\cdots&-\frac{1}{n- 1}\\ -\frac{1}{n-1}&1&-\frac{1}{n-1}&\cdots&-\frac{1}{n-1}\\ -\frac{1}{n-1}&-\frac{1}{n-1}&1&\cdots&-\frac{1}{n-1}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ -\frac{1}{n-1}&-\frac{1}{n-1}&-\frac{1}{n-1}&\cdots&1\end{bmatrix} \tag{29}\]
which is nothing but the Gram matrix of the vertices of an \((n-1)\)-simplex, scaled by \(\frac{2}{d}\). Hence each block, and therefore the matrix \(M\), is PSD and thus has a Cholesky decomposition.
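For concreteness, the PSD claim for (29) is also easy to verify numerically. The following is a minimal numpy sketch (the helper name `simplex_block` is ours, not from the text); each block has eigenvalue \(0\) on the all-ones vector and \(\frac{2}{d}\cdot\frac{n}{n-1}\) on its orthogonal complement.

```python
import numpy as np

def simplex_block(n, d):
    """The n x n block from (29): 2/d on the diagonal, -2/(d(n-1)) off it."""
    J, I = np.ones((n, n)), np.eye(n)
    return (2.0 / d) * (I - (J - I) / (n - 1))

# Each block is PSD, hence so is the block-diagonal matrix M built from
# d^2 - 1 copies of it.
for n, d in [(2, 3), (4, 3), (5, 4)]:
    assert np.linalg.eigvalsh(simplex_block(n, d)).min() > -1e-12
```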
Because this SDP gives a value greater than the trivial bound, we add additional constraints to correct for this. That is, we define our SDP to be the 2nd level of the ncSoS hierarchy in addition to enforcing that all two body moments are true density matrices. This mirrors the classical notion of the basic/simple SDP [11, 12], which optimizes over valid probability distributions over the local assignments to the payoff functions and uses inner products to enforce consistency between them. For general \(k\)-local Hamiltonian problems, we would extend this idea to enforce all \(k\)-body moments to be true density matrices.
We note that enforcing two body moments to be true density matrices doesn't change the number of variables in \(\vec{y}\), but rather just adds the additional constraints from the level-4 ncSoS that act on the existing variables. That is to say, taking the pseudo-density framework, for every distinct pair \(u\neq v\in[n]\), we enforce that \(\rho_{uv}:=\operatorname{tr}_{[n]\setminus\{u,v\}}(\tilde{\rho})\succcurlyeq 0\). We note that this idea has been used previously [1, 1, 2, 13, 14].
**Definition 13** (The SDP).: Given a graph \(G=(V,E,w)\), and 2-local Hamiltonians, invariant under conjugation of local unitaries, \(h_{uv}=\sum_{a,b=0}^{d^{2}-1}c^{ab}_{uv}\Lambda_{u}^{a}\Lambda_{v}^{b}\), for all \((u,v)\in E\), the value of the SDP is given by the following (for which we denote its value by SDP(\(G\)))
\[\underset{\begin{subarray}{c}\rho_{uv}\in\mathcal{D}(\mathcal{H }_{d}^{\otimes 2})\\ \forall u<v\in V\end{subarray}}{\text{maximize}} \operatorname{\mathbf{E}}_{(u,v)\sim E}\left[\operatorname{tr} (\rho_{uv}h_{uv})\right]\] (30a) subject to \[\langle\Lambda_{u}^{a}|\Lambda_{u}^{b}\rangle =\frac{2}{d}\delta_{ab} \forall u\in V,\ a,b\in[d^{2}-1], \tag{30b}\] \[\langle\Lambda_{u}^{a}|\Lambda_{v}^{b}\rangle =\operatorname{tr}(\rho_{uv}\Lambda^{a}\otimes\Lambda^{b}) \forall u<v\in V,\ \forall a,b\in[d^{2}-1],\] (30c) \[\operatorname{tr}(\rho_{uv}\Lambda^{a}\otimes I) =0 \forall u<v\in V,\ \forall a\in[d^{2}-1],\] (30d) \[\operatorname{tr}(\rho_{uv}I\otimes\Lambda^{a}) =0 \forall u<v\in V,\ \forall a\in[d^{2}-1],\] (30e) \[|A\rangle \in\mathbb{R}^{(d^{2}-1)n} \forall A\in\mathcal{P}_{d}^{n}(1)\setminus\{I\} \tag{30f}\]
Here, the only difference is the addition of new decision variables, \(\rho_{uv}\in\mathcal{D}(\mathcal{H}_{d}^{\otimes 2})\), and the constraints (30c) to (30e). This is clearly still a relaxation of Definition 9, as a true density matrix has all of its two body moments being true density matrices. These new constraints allow the SDP to achieve the aforementioned trivial upper bound.
**Proposition 14**.: _For Quantum Max-\(d\)-Cut with some interaction graph \(G=(V,E)\), our SDP gets a value of \(\text{SDP}_{\text{QMC}_{d}}(G)\leq 1\)._
Proof.: Let \(\{\rho_{uv}\in\mathcal{D}(\mathcal{H}_{d}^{\otimes 2})\}\) be the local density matrices and \(\{|A\rangle\mid A\in\mathcal{P}_{d}^{n}(1)\setminus\{I\}\}\) be the SDP vectors for a solution to Definition 13. We can use the fact that \(h_{uv}=I-(P_{\mathrm{sym}})_{uv}\) is the Quantum Max-\(d\)-Cut edge interaction defined in Definition 3, and that \(P_{\mathrm{sym}}^{2}=P_{\mathrm{sym}}\), to get that \(\mathrm{tr}(\rho_{uv}P_{\mathrm{sym}})=\mathrm{tr}(\rho_{uv}P_{\mathrm{sym}}^{2})\geq 0\). Then
\[\mathop{\mathbf{E}}_{(u,v)\sim E}\left[\mathrm{tr}(\rho_{uv}h_{uv })\right] =\mathop{\mathbf{E}}_{(u,v)\sim E}\left[\mathrm{tr}(\rho_{uv}(I-P_ {\mathrm{sym}}))\right]\] \[=1-\mathop{\mathbf{E}}_{(u,v)\sim E}[\mathrm{tr}(\rho_{uv}P_{ \mathrm{sym}})]\] \[\leq 1\]
Additionally, we have the following.
**Lemma 2**.: _The vector program of Definition 13 is an efficiently computable semidefinite program that provides an upper bound on Definition 9._
Proof.: Let \(\rho_{\mathrm{OPT}}\) be the density matrix that achieves the maximal energy in Definition 9. We can then prove this in three parts. First, we show that a solution of two body local density matrices \(\{\rho_{uv}\in\mathcal{D}(\mathcal{H}_{d}^{\otimes 2})\}\) that satisfy the consistency constraints put forward by the SDP vectors \(\{|A\rangle\mid A\in\mathcal{P}_{d}^{n}(1)\setminus\{I\}\}\) gives a valid degree-two pseudo-density matrix, \(\tilde{\rho}\), with all two body moments corresponding to valid density matrices. Second, that \(\rho_{\mathrm{OPT}}\) is also such a solution. Lastly, we show that the vector program of Definition 13 is efficiently computable.
Given the local density matrices \(\{\rho_{uv}\in\mathcal{D}(\mathcal{H}_{d}^{\otimes 2})\}\) and vectors \(\{|A\rangle\mid A\in\mathcal{P}_{d}^{n}(1)\setminus\{I\}\}\) that satisfy the constraints (30b) to (30e), we consider the Gram matrix, \(M\), of the SDP vectors, which is PSD. Adding back the identity (i.e., we let \(M(I,I)=1\) and \(M(I,\Lambda_{u}^{a})=0=M(\Lambda_{u}^{a},I)\) for \(a\in[d^{2}-1],u\in V\)), we get a bilinear form that corresponds to a valid degree-two pseudo-density matrix, \(\tilde{\rho}\). This follows from the fact that (30b), combined with Lemma 1, encodes the product property of (9). Moreover, we have that \(\mathrm{tr}_{[n]\setminus\{u,v\}}(\tilde{\rho})=\rho_{uv}\) for all \(u\neq v\in V\) by constraint (30c). In particular, \(\mathrm{tr}(\mathrm{tr}_{[n]\setminus\{u,v\}}(\tilde{\rho})\Lambda^{a}\otimes \Lambda^{b})=\mathrm{tr}(\tilde{\rho}\Lambda_{u}^{a}\Lambda_{v}^{b})=\langle \Lambda_{u}^{a}|\Lambda_{v}^{b}\rangle=\mathrm{tr}(\rho_{uv}\Lambda^{a} \otimes\Lambda^{b})\), meaning their linear decompositions on \(\mathcal{P}_{d}^{n}(2)\) are the same. Therefore, all of \(\tilde{\rho}\)'s two body moments are valid density matrices.
Next, consider the solution to (30) given by \(\rho_{uv}=\mathrm{tr}_{[n]\setminus\{u,v\}}(\rho_{\mathrm{OPT}})\) and \(\langle\Lambda_{u}^{a}|\Lambda_{v}^{b}\rangle=\mathrm{tr}(\rho_{\mathrm{OPT}}\Lambda_{u}^{a}\Lambda_{v}^{b})\). As observed in (25), \(\rho_{\mathrm{OPT}}\in\tilde{\mathcal{D}}^{(2)}(\mathcal{H}_{d}^{\otimes n})\) is also a valid degree-two pseudo-density matrix, and so there exist vectors such that \(\langle\Lambda_{u}^{a}|\Lambda_{v}^{b}\rangle=\mathrm{tr}(\rho_{\mathrm{OPT}}\Lambda_{u}^{a}\Lambda_{v}^{b})\). Moreover, the energy is the same, as \(\mathrm{tr}(\rho_{\mathrm{OPT}}h_{uv})=\mathrm{tr}(\rho_{uv}h_{uv})\). Therefore,
\[\mathrm{SDP}(G)\geq\mathop{\mathbf{E}}_{(u,v)\sim E}\left[\mathrm{tr}\left( \rho_{\mathrm{OPT}}h_{uv}\right)\right]\]
We now argue that this vector program can be solved efficiently by showing that it is a real SDP with polynomially many variables and constraints. To do this, we rephrase this program as optimizing over a matrix, \(M\in\mathbb{R}^{(\mathcal{P}_{d}^{n}(1)\setminus\{I\})\times(\mathcal{P}_{d}^{n}(1)\setminus\{I\})}\), and replace \(\langle A|B\rangle\) with \(M(A,B)\) in the constraints. We also add the constraints \(M\succcurlyeq 0\) and \(M(A,B)=M(B,A)\), which before were implied by the fact that we are optimizing over real vectors. Additionally, even though the local density matrices, \(\rho_{uv}\in\mathcal{D}(\mathcal{H}_{d}^{\otimes 2})\), may be complex, we can use the standard approach of separating the real and complex components and observing that \(\rho_{uv}\succcurlyeq 0\) if and only if
\[\begin{bmatrix}\mathrm{Re}(\rho_{uv})&-\mathrm{Im}(\rho_{uv})\\ \mathrm{Im}(\rho_{uv})&\mathrm{Re}(\rho_{uv})\end{bmatrix}\succcurlyeq 0\]
This is then a real SDP with linear equalities on the entries of \(M\) and \(\rho_{uv}\) for the constraints and a linear objective on the entries of \(\rho_{uv}\). Therefore, this can be solved efficiently in \(\mathrm{poly}(n,d)\) time. Finally, we consider the Cholesky decomposition of \(M\) to get the vectors \(\{|A\rangle\mid A\in\mathcal{P}_{d}^{n}(1)\setminus\{I\}\}\).
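The complex-to-real reduction used at the end of the proof is standard; here is a small numpy sanity check (our own code, not from the text) that a random Hermitian PSD matrix and its real embedding are PSD together.

```python
import numpy as np

def real_embedding(rho):
    """The real 2m x 2m embedding [[Re, -Im], [Im, Re]] used in the proof."""
    Re, Im = rho.real, rho.imag
    return np.block([[Re, -Im], [Im, Re]])

rng = np.random.default_rng(0)
B = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = B @ B.conj().T                      # Hermitian and PSD by construction
assert np.linalg.eigvalsh(rho).min() > -1e-10
assert np.linalg.eigvalsh(real_embedding(rho)).min() > -1e-10
```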
We note that enforcing all two-body moments to be valid density matrices implies useful properties. Namely, that \(\operatorname{tr}(\rho_{uv}P_{\text{sym}})\geq 0\), which implies the following, by Proposition 1 and the fact that \(P_{\text{sym}}\) is a projector.
\[0 \leq\operatorname{tr}(\rho_{uv}P_{\text{sym}})\] \[0 \leq\frac{1}{2}\left(\frac{d+1}{d}\right)+\frac{1}{4}\sum_{a=1}^{ d^{2}-1}\operatorname{tr}(\rho_{uv}\Lambda^{a}\otimes\Lambda^{a})\] \[\sum_{a=1}^{d^{2}-1}\left\langle\Lambda^{a}_{u}|\Lambda^{a}_{v}\right\rangle \geq-2\left(\frac{d+1}{d}\right) \tag{31}\]
This can be seen to mirror the Frieze-Jerrum SDP constraint that \(\langle u|v\rangle\geq-\frac{1}{d-1}\), for SDP vectors \(|u\rangle\) and \(|v\rangle\)[13]. Going one step further, we define the following vectors for each vertex, \(u\in V\), which are the concatenation of all SDP vectors associated with the vertex, \(u\), then normalized. We refer to these vectors as the _stacked SDP vectors_.
\[|u\rangle:=\frac{1}{\sqrt{\frac{2}{d}(d^{2}-1)}}\bigoplus_{a=1}^{d^{2}-1}| \Lambda^{a}_{u}\rangle\in\mathbb{R}^{n(d^{2}-1)^{2}} \tag{32}\]
The normalization is used to ensure that the stacked vectors are unit vectors, and it comes from the fact that
\[\sum_{a=1}^{d^{2}-1}\left\langle\Lambda^{a}_{u}|\Lambda^{a}_{u}\right\rangle= \frac{2}{d}(d^{2}-1) \tag{33}\]
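A minimal sketch of the stacking construction (32), with our own helper name `stack`: if the inputs satisfy the norm constraints (30b), then by (33) the output is a unit vector.

```python
import numpy as np

def stack(per_gellmann_vecs, d):
    """(32): concatenate the d^2 - 1 SDP vectors of one vertex, then
    normalize by sqrt((2/d)(d^2 - 1))."""
    u = np.concatenate(per_gellmann_vecs)
    return u / np.sqrt((2.0 / d) * (d ** 2 - 1))

# Inputs with squared norm 2/d each (forced by (30b)) stack to a unit vector.
d, ell = 3, 10
rng = np.random.default_rng(0)
vecs = [np.sqrt(2 / d) * v / np.linalg.norm(v)
        for v in rng.normal(size=(d ** 2 - 1, ell))]
assert abs(np.linalg.norm(stack(vecs, d)) - 1.0) < 1e-12
```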
Using these stacked SDP vectors, we can reexamine (31), which now becomes exactly the Frieze-Jerrum SDP constraint, \(\langle u|v\rangle\geq-\frac{1}{d-1}\)[13]. We note that even if we didn't assume odd degree terms were \(0\), this fact would still be true, as (9) gives only diagonal generalized Gell-Mann matrices, which cancel out as per the anticommutation constants [1].
Lastly, we note that for the ncSoS SDP hierarchy with \(t\geq 2\), it would make more sense to use an operator program as was done in Definition 3.3 of [13], as it would allow one to encode the complicated commutation and anticommutation relations more easily. Additionally, one cannot ignore the identity terms as we did in this paper, as entries indexed by at least one identity term are not in general constant.
### Product State SDP
As was observed in [13] for Quantum Max-Cut, we can also define an SDP relaxation for the problem of optimizing Quantum Max-\(d\)-Cut over pure product-state solutions (Definition 7 or equivalently Proposition 9) by relaxing the size of the vectors we maximize over. We also enforce the constraint \(\langle u|v\rangle\geq-\frac{1}{d-1}\) on the SDP vectors, as it is true of the vectors in \(\Omega^{\text{ext}}_{d}\) by Proposition 7. All in all, we get the following SDP.
**Definition 14** (The Product State SDP).: For a Quantum Max-\(d\)-Cut instance with interaction graph \(G=(V,E,w)\), the value of the product state SDP is given by (for which we denote its value by \(\operatorname{ProdSDP}_{\text{QMC}_{d}}(G)\))
maximize \[\quad\frac{1}{2}\left(\frac{d-1}{d}\right)\underset{(u,v)\sim E }{\mathbf{E}}[1-\langle u|v\rangle]\] (34a) subject to \[\quad\langle u|v\rangle\geq-\frac{1}{d-1} \quad\forall u,v\in V,\] (34b) \[\quad|u\rangle\in S^{n-1} \quad\forall u\in V\] (34c)
**Remark 8**.: This is exactly the Frieze-Jerrum SDP [13] with an additional factor of \(\frac{1}{2}\) in the objective.
## 6 Rounding
In this section, we prove Theorems 1 and 3. To do this, we give a brief overview of the projection rounding technique [1], used to round to mixed product states of qudits in much the same way that Gharibian and Parekh rounded to pure product states of single qubits [1]. For this paper, the end goal is to get a mixed product state solution. As was discussed in Section 4, this can be represented efficiently by Bloch vectors for each vertex \(u\in V\) in the graph. So using the SDP defined in Definition 13, for each vertex, \(u\in V\), we will round the stacked SDP vector \(\ket{u}\), given in (32), to a Bloch vector \(\vec{b}_{u}\) (or in the case of using Definition 14, we round using the SDP vectors directly).
Projection rounding, first introduced by Briet, de Olivera Filho, and Vallentin [1], can be seen as a generalization of the halfspace rounding introduced by Goemans and Williamson [1]. The algorithm goes as follows.
**Algorithm 1** (Quantum Max-\(d\)-Cut Rounding Algorithm).: _Input: \(\ket{u}\in S^{\ell-1}\) for each \(u\in V\) (where \(\ell=(d^{2}-1)^{2}n\) or \(\ell=n\))_
1. Pick a random matrix with i.i.d. standard Gaussian entries, \(\mathbf{Z}\sim\mathcal{N}(0,1)^{(d^{2}-1)\times\ell}\).
2. _Output:_\(\vec{b}_{u}:=\mathbf{Z}\ket{u}/\|\mathbf{Z}\ket{u}\|\) for each \(u\in V\).
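A minimal numpy sketch of Algorithm 1 (our own code; the function name `projection_round` is not from the text): each vertex's unit vector is hit by a common Gaussian matrix and renormalized into a Bloch vector of dimension \(d^{2}-1\).

```python
import numpy as np

def projection_round(sdp_vectors, d, rng):
    """Algorithm 1: round unit vectors |u> in R^ell to unit vectors b_u in
    R^(d^2 - 1). sdp_vectors has shape (num_vertices, ell)."""
    ell = sdp_vectors.shape[1]
    Z = rng.normal(size=(d ** 2 - 1, ell))   # i.i.d. standard Gaussian entries
    projected = sdp_vectors @ Z.T            # Z|u> for every vertex at once
    return projected / np.linalg.norm(projected, axis=1, keepdims=True)

# Example: round 5 random unit vectors for d = 3.
rng = np.random.default_rng(1)
V = rng.normal(size=(5, 8))
V /= np.linalg.norm(V, axis=1, keepdims=True)
print(projection_round(V, d=3, rng=rng))
```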
Briet, de Olivera Filho, and Vallentin gave the following analytical tool to analyze the expectation of the inner product between two rounded vectors.
**Lemma 3** (Lemma2.1 from [1]).: _Let \(-1\leq\gamma\leq 1\), and let \(\vec{u}\) and \(\vec{v}\) be two \(n\)-dimensional unit vectors such that \(\langle\vec{u},\vec{v}\rangle=\gamma\). Let \(\mathbf{Z}\in\mathbb{R}^{k\times n}\) be a random matrix with entries chosen from \(kn\) i.i.d. standard Gaussians, \(\mathcal{N}(0,1)\). Then_
\[F^{*}(k,\gamma):=\mathbf{E}\left\langle\frac{\mathbf{Z}\vec{u}}{\|\mathbf{Z}\vec{u}\|},\frac{\mathbf{Z}\vec{v}}{\|\mathbf{Z}\vec{v}\|}\right\rangle=\frac{2\gamma}{k}\left(\frac{\Gamma((k+1)/2)}{\Gamma(k/2)}\right)^{2}\,{}_{2}F_{1}\left(\frac{1}{2},\frac{1}{2};\frac{k}{2}+1;\gamma^{2}\right)\]
_Where \(\,{}_{2}F_{1}(\cdot,\cdot;\cdot;\cdot)\) is the Gaussian hypergeometric function._
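The function \(F^{*}\) is directly computable; below is a small scipy sketch (our own naming), using `gammaln` to avoid overflow of the Gamma ratio at large \(k\). As a sanity check, \(F^{*}(3,-1)=-1\): antipodal unit vectors remain antipodal in expectation after rounding.

```python
import numpy as np
from scipy.special import gammaln, hyp2f1

def F_star(k, g):
    """F*(k, gamma) from Lemma 3."""
    log_c = 2.0 * (gammaln((k + 1) / 2) - gammaln(k / 2))
    return (2.0 * g / k) * np.exp(log_c) * hyp2f1(0.5, 0.5, k / 2 + 1, g ** 2)

assert abs(F_star(3, -1.0) - (-1.0)) < 1e-9
```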
Our full algorithm is to solve the SDP given in Definition 13 and then use the stacked vectors, given by (32), as input to Algorithm 1. For finding the approximation ratio, \(\alpha_{d}\), it is enough to consider the worst-case edge. That is, we consider an \(\alpha_{d}\) such that for all \((u,v)\in E\) we have that
\[\mathbf{E}\left[\frac{1}{2}\left(\frac{d-1}{d}\right)\left(1-\frac{\langle \vec{b}_{u},\vec{b}_{v}\rangle}{(d-1)^{2}}\right)\right]\geq\alpha_{d}\frac{1 }{2}\left(\frac{d-1}{d}\right)\left(1-\left(d+1\right)\langle u|v\rangle\right) \tag{35}\]
Here, the left hand side comes from Proposition 10 and the right hand side comes from applying (32) to the SDP objective, (30a), of Definition 13; namely,
\[\operatorname{tr}(\rho_{uv}h_{uv}) =\frac{1}{2}\left(\frac{d-1}{d}\right)-\frac{1}{4}\sum_{a=1}^{d^{ 2}-1}\left\langle\Lambda_{u}^{a}|\Lambda_{v}^{a}\right\rangle\] \[=\frac{1}{2}\left(\frac{d-1}{d}\right)-\frac{1}{2d}(d^{2}-1) \left\langle u|v\right\rangle\] \[=\frac{1}{2}\left(\frac{d-1}{d}\right)\left(1-\left(d+1\right) \left\langle u|v\right\rangle\right)\]
We calculate this worst case value by minimizing over \(\gamma=\langle u|v\rangle\geq-\frac{1}{d-1}\), as per the observation given in (31). We also have that \(\gamma\leq\frac{1}{d+1}\), as otherwise the bound on the optimal energy would be negative, which
is not possible as \(h_{uv}\) is PSD. This bound can be derived from \(\operatorname{tr}(\rho_{uv}h_{uv})\geq 0\) in much the same way we derived (31). This then gives us the following expression for \(\alpha_{d}\).
\[\alpha_{d}=\min_{-\frac{1}{d-1}\leq\gamma<\frac{1}{d+1}}\frac{1-\frac{F^{*}(d^{ 2}-1,\gamma)}{(d-1)^{2}}}{1-(d+1)\gamma} \tag{36}\]
We note that for \(d=2\), this is exactly the qubit case and thus gives the same expression and approximation ratio. Moreover, as was done for the analysis of Quantum Max-Cut, we can then numerically analyze (36), which gives the following (shown in Table 1). For completeness, we also give the optimal approximation ratios for mixed states with \(\operatorname{tr}(\rho^{2})=\frac{1}{d-1}\) (which is given by (20) in the proof of Proposition 10). We show that the values of \(\gamma_{d}\) are exact (for \(d\geq 3\)) in Lemma 4.
Similarly, for the approximation to the optimal product-state solution (Definition 7), our full algorithm is to solve the SDP given in Definition 14 and then use the SDP vectors as input to Algorithm 1. The approximation ratios, \(\beta_{d}\), are then such that.
\[\operatorname*{\mathbf{E}}_{\mathbf{Z}}\left[\frac{1}{2}\left(\frac{d-1}{d} \right)\left(1-\frac{\langle\vec{b}_{u},\vec{b}_{v}\rangle}{(d-1)^{2}}\right) \right]\geq\beta_{d}\frac{1}{2}\left(\frac{d-1}{d}\right)(1-\langle u|v\rangle) \tag{37}\]
Here, the left hand side again comes from Proposition 10 and the right hand side comes directly from the SDP objective, (34a), of Definition 14. Finally,
\[\beta_{d}=\min_{-\frac{1}{d-1}\leq\gamma<1}\frac{1-\frac{F^{*}(d^{2}-1,\gamma )}{(d-1)^{2}}}{1-\gamma} \tag{38}\]
where the constraint \(\gamma=\langle u|v\rangle\geq-\frac{1}{d-1}\) comes directly from the product-state SDP, (34b). We hold off on solving for these values numerically as they will be determined exactly by Theorem 3.
**Lemma 4** (The Bad Angle For \(\alpha_{d}\)).: _For \(d\geq 3\), the value of \(-\frac{1}{d-1}\leq\gamma<\frac{1}{d+1}\) that achieves the approximation ratio in (36) is \(\gamma=-\frac{1}{d-1}\)._
Proof.: In order to prove this, we first prove that for integers \(d\geq 3\) and for \(\gamma\) in the range \(-\frac{1}{d-1}\leq\gamma<\frac{1}{d+1}\), we have that the partial derivative of (36) with respect to \(\gamma\) is non-negative:
\[\frac{\partial}{\partial\gamma}\left\{\frac{1-\frac{F^{*}(d^{2}-1,\gamma)}{( d-1)^{2}}}{1-(d+1)\gamma}\right\}\geq 0\]
It then follows that (36) is minimized at the smallest value of \(\gamma\) in this range, \(\gamma=-\frac{1}{d-1}\).
\begin{table}
\begin{tabular}{c|c|c|c} \(d\) & \(\alpha_{d}\) & \(\gamma_{d}\) & Opt Mixed Prod State Ratio \\ \hline
2 & 0.498767 & \(-0.9659\) & \(\frac{1}{2}=0.5\) \\
3 & 0.372996 & \(-\frac{1}{2}\) & \(\frac{5}{12}\approx 0.416667\) \\
4 & 0.388478 & \(-\frac{1}{3}\) & \(\frac{5}{12}\approx 0.416667\) \\
5 & 0.406129 & \(-\frac{1}{4}\) & \(\frac{17}{40}=0.425\) \\
10 & 0.450614 & \(-\frac{1}{9}\) & \(\frac{41}{90}\approx 0.455556\) \\
100 & 0.495001 & \(-\frac{1}{99}\) & \(\frac{4901}{9900}\approx 0.495051\) \\ \end{tabular}
\end{table}
Table 1: The numerically calculated approximation ratios, the values of \(\gamma_{d}\) that minimize the expression (known as the "bad angle"), and the optimal mixed product state approximation ratios with \(\operatorname{tr}(\rho^{2})=\frac{1}{d-1}\) for different values of \(d\) (from (20)). For \(d\geq 3\), these values of \(\gamma_{d}\) are exact, as stated in Lemma 4.
First, recall the series expansion definition of the Gaussian hypergeometric function:
\[{}_{2}F_{1}\left(a,b;c;z\right):=\sum_{n=0}^{\infty}\frac{(a)_{n}(b)_{n}}{(c)_{n}}\frac{z^{n}}{n!}=1+\frac{ab}{c}\frac{z}{1!}+\frac{a(a+1)b(b+1)}{c(c+1)}\frac{z^{2}}{2!}+\cdots \tag{39}\]
where \((x)_{n}=\prod_{k=0}^{n-1}(x+k)\) denotes the rising factorial. Next, we note that the derivative of the Gaussian hypergeometric function with respect to \(z\) is the following.
\[\frac{d}{dz}\,{}_{2}F_{1}\left(a,b;c;z\right)=\frac{ab}{c}\,{}_{2}F_{1}\left(a+1,b+1;c+1;z\right) \tag{40}\]
As an immediate result of this, we know that for \(a,b,c>0\), the derivative of the Gaussian hypergeometric function is positive for \(z>0\). Moreover, all of its derivatives are positive. In other words, the Gaussian hypergeometric function for \(a,b,c\geq 0\) is concave up when \(z>0\).
Using (40), we have that
\[\frac{\partial}{\partial\gamma}\left\{F^{*}\left(d^{2}-1,\gamma \right)\right\}=\frac{2}{d^{2}-1}\left(\frac{\Gamma\left(\frac{d^{2}}{2} \right)}{\Gamma\left(\frac{1}{2}\left(d^{2}-1\right)\right)}\right)^{2}\, \frac{\gamma^{2}\,{}_{2}F_{1}\left(\frac{3}{2},\frac{3}{2};\frac{1}{2}\left( d^{2}+3\right);\gamma^{2}\right)+\left(d^{2}+1\right)\,{}_{2}F_{1}\left( \frac{1}{2},\frac{1}{2};\frac{1}{2}\left(d^{2}+1\right);\gamma^{2}\right)}{ \left(d^{2}+1\right)}\]
Then,
\[\begin{split}\frac{\partial}{\partial\gamma}\left\{\frac{1-\frac{F^{*}(d^{2}-1,\gamma)}{(d-1)^{2}}}{1-(d+1)\gamma}\right\}&=\frac{(d+1)(d-1)^{2}-(d+1)F^{*}\left(d^{2}-1,\gamma\right)-(1-(d+1)\gamma)\frac{\partial}{\partial\gamma}\left\{F^{*}\left(d^{2}-1,\gamma\right)\right\}}{(d-1)^{2}(1-(d+1)\gamma)^{2}}\\ &=\frac{d+1}{(1-(d+1)\gamma)^{2}}-\frac{2}{d^{2}-1}\left(\frac{\Gamma\left(\frac{d^{2}}{2}\right)}{\Gamma\left(\frac{1}{2}\left(d^{2}-1\right)\right)}\right)^{2}\left(\frac{1}{(d-1)^{2}\left(d^{2}+1\right)(1-(d+1)\gamma)^{2}}\right)\\ &\qquad\cdot\left(\left(d^{2}+1\right)\,{}_{2}F_{1}\left(\frac{1}{2},\frac{1}{2};\frac{1}{2}\left(d^{2}+1\right);\gamma^{2}\right)+\gamma^{2}(1-(d+1)\gamma)\,{}_{2}F_{1}\left(\frac{3}{2},\frac{3}{2};\frac{1}{2}\left(d^{2}+3\right);\gamma^{2}\right)\right)\end{split}\tag{41}\]
We now direct our attention to the following quantity, which we denote by \(g\).
\[g(d,\gamma):=\frac{1}{d^{2}+1}\left(\left(d^{2}+1\right)\,{}_{2}F_{1}\left( \frac{1}{2},\frac{1}{2};\frac{1}{2}\left(d^{2}+1\right);\gamma^{2}\right)+ \gamma^{2}(1-(d+1)\gamma)\,{}_{2}F_{1}\left(\frac{3}{2},\frac{3}{2};\frac{1}{2} \left(d^{2}+3\right);\gamma^{2}\right)\right)\]
We seek to upper bound this in the range \(-1\leq\gamma\leq\frac{1}{d+1}\). We can observe that the partial derivative on \(\gamma\) has two roots at \(\gamma=0,\frac{1}{d+1}\) for all \(d\), as
\[\frac{\partial}{\partial\gamma}\left\{g(d,\gamma)\right\}=\frac{3\gamma(1-(d+ 1)\gamma)\left(3\gamma^{2}\,{}_{2}F_{1}\left(\frac{5}{2},\frac{5}{2};\frac{1}{ 2}\left(d^{2}+5\right);\gamma^{2}\right)+\left(d^{2}+3\right)\,{}_{2}F_{1} \left(\frac{3}{2},\frac{3}{2};\frac{1}{2}\left(d^{2}+3\right);\gamma^{2}\right) \right)}{\left(d^{2}+1\right)\left(d^{2}+3\right)}\]
So to bound the maximum value in this range, we consider the maximum value at these points combined with the point at \(\gamma=-1\). Clearly, \(g(d,0)=1\) for all \(d\) as is evident by the series expansion, (39). Additionally, \(g(d,-1)>g(d,-\frac{1}{d+1})>g(d,\frac{1}{d+1})\) as the Gaussian hypergeometric function is concave up for \(\gamma^{2}>0\) and the term \(\gamma^{2}(1-(d+1)\gamma)\) is maximized when \(\gamma=-1\). Therefore, we are left to maximize \(g(d,-1)\) over \(d\geq 3\).
\[g(d,-1)=\,{}_{2}F_{1}\left(\frac{1}{2},\frac{1}{2};\frac{1}{2}\left(d^{2}+1 \right);1\right)+\frac{(d+2)}{d^{2}+1}\,{}_{2}F_{1}\left(\frac{3}{2},\frac{3}{2 };\frac{1}{2}\left(d^{2}+3\right);1\right)\]
By the series expansion, we have that \(\,{}_{2}F_{1}\left(\frac{a}{2},\frac{b}{2};\frac{1}{2}\left(d^{2}+c\right);1 \right)=1+\Theta(\frac{1}{d^{2}})\) and is decreasing and thus \(g(d,-1)=1+\Theta(\frac{1}{d})\). Thus \(g(d,\gamma)\leq g(d,-1)\leq g(3,-1)<2\) and is decreasing (for \(d\geq 3\)).
Then, we can rearrange (41), noting that the final factor there equals \((d^{2}+1)\,g(d,\gamma)\), and use the upper bound \(g(d,\gamma)<2\) to get
\[\begin{split}\frac{\partial}{\partial\gamma}\left\{\frac{1-\frac{F^{*}(d^{2}-1,\gamma)}{(d-1)^{2}}}{1-(d+1)\gamma}\right\}&>\frac{d+1}{(1-(d+1)\gamma)^{2}}-\frac{4}{d^{2}-1}\left(\frac{\Gamma\left(\frac{d^{2}}{2}\right)}{\Gamma\left(\frac{1}{2}\left(d^{2}-1\right)\right)}\right)^{2}\left(\frac{1}{(d-1)^{2}(1-(d+1)\gamma)^{2}}\right)\\ &=\frac{d+1}{(1-(d+1)\gamma)^{2}}\left(1-\frac{4}{d^{2}-1}\left(\frac{\Gamma\left(\frac{d^{2}}{2}\right)}{\Gamma\left(\frac{1}{2}\left(d^{2}-1\right)\right)}\right)^{2}\left(\frac{1}{(d-1)^{2}(d+1)}\right)\right)\end{split}\]
Which we want to show is positive. To do this, we show that both terms in the product are positive. First, we can observe that \(\frac{d+1}{(1-(d+1)\gamma)^{2}}>0\) for all \(d\geq 3\) and \(\gamma<\frac{1}{d+1}\). Next, we can use Gautschi's inequality, which states that for \(x>1\), \(\frac{\Gamma(x)}{\Gamma(x-\frac{1}{2})}<\sqrt{x}\), to get the following for \(d\geq 3\).
\[\frac{4}{d^{2}-1}\left(\frac{\Gamma\left(\frac{d^{2}}{2}\right)}{\Gamma\left( \frac{1}{2}\left(d^{2}-1\right)\right)}\right)^{2}<\frac{4}{d^{2}-1}\frac{d^{2 }}{2}\leq\left(\frac{3}{2}\right)^{2}\]
Thus, since \((d-1)^{2}(d+1)\geq 16>\left(\frac{3}{2}\right)^{2}\) for \(d\geq 3\), the second factor is also positive, and so the product is positive for all \(d\geq 3\).
**Lemma 5** (The Bad Angle for \(\beta_{d}\)).: _For \(d\geq 3\), the value of \(-\frac{1}{d-1}\leq\gamma<1\) that achieves the approximation ratio in (38) is \(\gamma=-\frac{1}{d-1}\)._
Proof (sketch).: This follows by the same logic as the proof for Lemma 4. We note that
\[\frac{\partial}{\partial\gamma}\left\{\frac{1-\frac{F^{*}(d^{2}-1,\gamma)}{(d-1)^{2}}}{1-\gamma}\right\} =\frac{(d-1)^{2}-F^{*}\left(d^{2}-1,\gamma\right)-(1-\gamma)\frac{ \partial}{\partial\gamma}\left\{F^{*}\left(d^{2}-1,\gamma\right)\right\}}{(d- 1)^{2}(1-\gamma)^{2}}\] \[=\frac{1}{(1-\gamma)^{2}}\Bigg{(}1-\frac{2}{d^{2}-1}\left(\frac{ \Gamma\left(\frac{d^{2}}{2}\right)}{\Gamma\left(\frac{1}{2}\left(d^{2}-1 \right)\right)}\right)^{2}\left(\frac{1}{(d-1)^{2}(d^{2}+1)}\right)\] \[\qquad\qquad\qquad\qquad\cdot\left(\left(d^{2}+1\right)\,_{2}F_{ 1}\left(\frac{1}{2},\frac{1}{2};\frac{1}{2}\left(d^{2}+1\right);\gamma^{2} \right)\right.\] \[\qquad\qquad\qquad\qquad\qquad\left.\left.+\,\gamma^{2}(1-\gamma )\,_{2}F_{1}\left(\frac{3}{2},\frac{3}{2};\frac{1}{2}\left(d^{2}+3\right); \gamma^{2}\right)\right)\right)\]
For which, the term involving the two Gaussian hypergeometric functions can be shown to be less than \(\frac{3}{2}\) for \(d\geq 3\).
As a result of Lemmas 4 and 5, we can plug the minimizers into (36) and (38) to get the following.
\[\alpha_{d}=\frac{1}{2}\left(\frac{d-1}{d}\right)\left(1-\frac{F^{*}(d^{2}-1,- \frac{1}{d-1})}{(d-1)^{2}}\right),\quad\beta_{d}=\left(\frac{d-1}{d}\right) \left(1-\frac{F^{*}(d^{2}-1,-\frac{1}{d-1})}{(d-1)^{2}}\right)\quad\text{ for }d\geq 3 \tag{42}\]
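To make (42) concrete, the following sketch (our own code) evaluates \(\alpha_{d}\) and \(\beta_{d}=2\alpha_{d}\) for \(d\geq 3\); the printed values match Table 1. `gammaln` avoids overflow of \(\Gamma(d^{2}/2)\) at larger \(d\).

```python
import numpy as np
from scipy.special import gammaln, hyp2f1

def F_star(k, g):
    log_c = 2.0 * (gammaln((k + 1) / 2) - gammaln(k / 2))
    return (2.0 * g / k) * np.exp(log_c) * hyp2f1(0.5, 0.5, k / 2 + 1, g ** 2)

def alpha(d):
    """alpha_d from (42), valid for d >= 3 by Lemma 4."""
    ratio = 1.0 - F_star(d ** 2 - 1, -1.0 / (d - 1)) / (d - 1) ** 2
    return 0.5 * (1.0 - 1.0 / d) * ratio

for d in [3, 4, 5, 10, 100]:
    print(d, alpha(d), 2 * alpha(d))   # e.g. alpha(3) ~ 0.372996
```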
### Proof of Theorem 1 and Theorem 3
Finally, we can prove Theorem 1.
**Theorem 4** (Restatement of Theorem 1).: _There exists an efficient approximation algorithm for Quantum Max-\(d\)-Cut that admits an \(\alpha_{d}\)-approximation, where the constants \(\alpha_{d}\) (for \(d\geq 2\)) satisfy,_
1. \(\alpha_{d}\geq\frac{1}{2}\left(1-\frac{1}{d}\right)\)__
2. \(\alpha_{d}-\frac{1}{2}\left(1-\frac{1}{d}\right)\sim\frac{1}{2d^{3}}\)__
3. \(\alpha_{2}\geq 0.498767,\alpha_{3}\geq 0.372995,\alpha_{4}\geq 0.388478,\alpha_{5} \geq 0.406128,\alpha_{10}\geq 0.450614,\alpha_{100}\geq 0.4950005\)__
Proof.: We prove Theorem 1, Items 1 and 2 for \(d\geq 3\). The \(d=2\) case follows from Item 3. First, we use (42), which is a result of Lemma 4, and simplify:
\[\alpha_{d} =\frac{1}{2}\left(1-\frac{1}{d}\right)-\frac{F^{*}\left(d^{2}-1,- \frac{1}{d-1}\right)}{2d(d-1)}\] \[=\frac{1}{2}\left(1-\frac{1}{d}\right)+\frac{1}{(d-1)^{3}d(d+1)} \left(\frac{\Gamma\left(\frac{d^{2}}{2}\right)}{\Gamma\left(\frac{d^{2}-1}{2} \right)}\right)^{2}\,{}_{2}F_{1}\left(\frac{1}{2},\frac{1}{2};\frac{1}{2}(d^ {2}+1);\frac{1}{(d-1)^{2}}\right) \tag{43}\]
Using the series expansion, (39), for the values \(a,b=1/2\), \(c=\frac{1}{2}(d^{2}+1)\), \(z=\frac{1}{(d-1)^{2}}\) as in (43), we always have that (for any \(n\geq 0\))
\[\frac{\left(\frac{1}{2}\right)_{n}\left(\frac{1}{2}\right)_{n}}{\left(\frac{ 1}{2}(d^{2}+1)\right)_{n}}\frac{1}{(d-1)^{2n}n!}>0\]
Then plugging in our values gives the series approximation of \(1+O(d^{-4})\). Furthermore, because each term in the series is positive, the \(O(d^{-4})\) term is positive. This proves Theorem 1, Item 1.
Next, we look at the expression in Theorem 1, Item 2, which using (43) gives the following equality. Moreover, we use the series approximation of the Gaussian hypergeometric function with the above inputs to get the asymptotic equivalence below; the series approximation, \(1+o_{d}(1)\), tells us that the hypergeometric factor is asymptotically equivalent to \(1\).
Furthermore, by using Stirling's approximation/the Lanczos approximation, which says that \(\Gamma(z)\sim\sqrt{2\pi}z^{z-1/2}e^{-z}\) (as \(z\to\infty\)), we get the following.
\[\left(\frac{\Gamma\left(\frac{d^{2}}{2}\right)}{\Gamma\left(\frac{1}{2}\left( d^{2}-1\right)\right)}\right)^{2}\sim\frac{1}{2e}\frac{(d^{2})^{d^{2}-1}}{(d^{2}-1 )^{d^{2}-2}}=\frac{1}{2e}\frac{d^{2}}{\left(1-d^{-2}\right)^{-2}\left(1-d^{-2 }\right)^{d^{2}}}\sim\frac{d^{2}}{2}\]
Finally, we can put it all together to get that
\[\alpha_{d}-\frac{1}{2}\left(1-\frac{1}{d}\right)\sim\frac{1}{d^{5}}\left( \frac{\Gamma\left(\frac{d^{2}}{2}\right)}{\Gamma\left(\frac{1}{2}\left(d^{2} -1\right)\right)}\right)^{2}\sim\frac{1}{2d^{3}}\]
This proves Theorem 1, Item 2.
Theorem 1, Item 3 is proven numerically and can be seen in Table 1.
Next, we can prove Theorem 3.
**Theorem 5** (Restatement of Theorem 3).: Quantum Max-\(d\)-Cut _admits a \(\beta_{d}\)-approximation to the optimal product-state solution with respect to the basic SDP, where the constants \(\beta_{d}\) satisfy,_
1. \(\beta_{d}=2\alpha_{d}\) _for_ \(d\geq 3\)__
2. \(\beta_{2}\geq 0.956337\)__[_BdOFV10; HNPTW22_]__
Proof.: First, Theorem 3, Item 1 follows from (42), which is a result of Lemmas 4 and 5. Then, Theorem 3, Item 2 is proven numerically.
## 7 The Complete Graph As The Algorithmic Gap
In this section, we show that our analysis of our rounding algorithm is tight. This is often done through the notion of the algorithmic gap, defined in Definition 1. In particular, we show that the algorithmic gap of our algorithm matches the approximation ratio.
**Theorem 6** (Restatement of Theorem2).: _The approximation algorithm for Quantum Max-\(d\)-Cut that rounds to mixed product states using the basic SDP has algorithmic gap \(\alpha_{d}\) for \(d\geq 3\)._
To show that this is the algorithmic gap, it suffices to give an instance that achieves the ratio \(\alpha_{d}\) in expectation. This is because the analysis determining the approximation ratio can be seen as a lower bound on the algorithmic gap. The problem instance we use is the complete graph on \(d\) vertices, denoted \(K_{d}\).
**Lemma 6** (Energy of \(K_{d}\)).: _The energy of Quantum Max-\(d\)-Cut with the complete graph on \(d\) vertices, \(K_{d}\), as its interaction graph is 1._
Proof.: For this, we consider the completely antisymmetric state on \(d\), \(d\)-dimensional qudits, defined as follows.
\[|\Psi\rangle=\frac{1}{\sqrt{d!}}\sum_{\sigma\in S_{d}}\operatorname{sgn}( \sigma)|\sigma(1)\rangle\otimes|\sigma(2)\rangle\otimes\cdots\otimes|\sigma(d)\rangle \tag{44}\]
We note that this is the unique antisymmetric state (up to a phase) in \(\mathcal{H}_{d}^{\otimes d}\). We can then write this state using the \(\binom{d}{2}\)-dimensional antisymmetric subspace of \(\mathcal{H}_{d}^{\otimes 2}\) (the projector onto which, we note, is exactly the Quantum Max-\(d\)-Cut edge interaction from Definition 3). That is, for \(\{i,j\}\subseteq[d]\), we define \(|\psi_{ij}\rangle=\frac{1}{\sqrt{2}}\left(|i\rangle\otimes|j\rangle-|j\rangle\otimes|i\rangle\right)\) to be the unique antisymmetric state (up to a phase) in \(\operatorname{span}\{|i\rangle,|j\rangle\}^{\otimes 2}\). Then, the completely antisymmetric state can be written as follows.
\[|\Psi\rangle=\frac{\sqrt{2}}{\sqrt{d!}}\sum_{\sigma\in A_{d}}|\psi_{\sigma(1), \sigma(2)}\rangle\otimes|\sigma(3)\rangle\otimes\cdots\otimes|\sigma(d)\rangle \tag{45}\]
We prove this by considering the bijective function \(f:S_{d}\to S_{d},\sigma\mapsto(\sigma(1)\;\sigma(2))\circ\sigma\). We can prove that \(f\) is bijective using the fact that it is its own inverse: \(f(f(\sigma))=(\sigma(2)\;\sigma(1))\circ(\sigma(1)\;\sigma(2))\circ\sigma=\sigma\) for all \(\sigma\in S_{d}\). Next, we observe that when restricted to \(A_{d}\), the alternating group, we get \(\operatorname{im}(f|_{A_{d}})=(1\;2)A_{d}=\{\sigma\in S_{d}\;|\;\operatorname {sgn}(\sigma)=-1\}\), the co-set of odd permutations. Finally, we can do the following.
\[|\Psi\rangle =\frac{1}{\sqrt{d!}}\sum_{\sigma\in S_{d}}\operatorname{sgn}( \sigma)|\sigma(1)\rangle\otimes|\sigma(2)\rangle\otimes\cdots\otimes|\sigma(d)\rangle\] \[=\frac{1}{\sqrt{d!}}\left(\sum_{\sigma\in A_{d}}|\sigma(1) \rangle\otimes|\sigma(2)\rangle\otimes\cdots\otimes|\sigma(d)\rangle-\sum_{ \sigma\in(1\;2)A_{d}}|\sigma(1)\rangle\otimes|\sigma(2)\rangle\otimes\cdots \otimes|\sigma(d)\rangle\right)\] \[=\frac{1}{\sqrt{d!}}\sum_{\sigma\in A_{d}}\left(|\sigma(1) \rangle\otimes|\sigma(2)\rangle\otimes\cdots\otimes|\sigma(d)\rangle-|f( \sigma)(1)\rangle\otimes|f(\sigma)(2)\rangle\otimes\cdots\otimes|f(\sigma)(d) \rangle\right)\] \[=\frac{1}{\sqrt{d!}}\sum_{\sigma\in A_{d}}\left(|\sigma(1) \rangle\otimes|\sigma(2)\rangle\otimes\cdots\otimes|\sigma(d)\rangle-|\sigma( 2)\rangle\otimes|\sigma(1)\rangle\otimes\cdots\otimes|\sigma(d)\rangle\right)\] \[=\frac{1}{\sqrt{d!}}\sum_{\sigma\in A_{d}}\left(|\sigma(1) \rangle\otimes|\sigma(2)\rangle-|\sigma(2)\rangle\otimes|\sigma(1)\rangle \right)\otimes|\sigma(3)\rangle\otimes\cdots\otimes|\sigma(d)\rangle\] \[=\frac{\sqrt{2}}{\sqrt{d!}}\sum_{\sigma\in A_{d}}|\psi_{\sigma(1), \sigma(2)}\rangle\otimes|\sigma(3)\rangle\otimes\cdots\otimes|\sigma(d)\rangle\]
If we fix a pair \(\{u,v\}\subseteq[d]\), then we can follow the same process to get the following, using the function \(f:S_{d}\to S_{d},\sigma\mapsto(\sigma(u)\ \sigma(v))\circ\sigma\).
\[\ket{\Psi}=\frac{\sqrt{2}}{\sqrt{d!}}\sum_{\sigma\in A_{d}}\ket{\psi_{\sigma(u),\sigma(v)}}_{u,v}\otimes\bigotimes_{k\in[d]\setminus\{u,v\}}\ket{\sigma(k)}_{k} \tag{46}\]
Finally, we can consider the energy of \(\ket{\Psi}\) on the Quantum Max-\(d\)-Cut Hamiltonian \(H_{K_{d}}=\frac{2}{d(d-1)}\sum_{u\neq v\in V}h_{uv}\). We first consider the energy of only a single-edge interaction.
\[\bra{\Psi}h_{uv}\Psi\rangle =\frac{2}{d!}\left(\sum_{\sigma\in A_{d}}\bra{\psi_{\sigma(u), \sigma(v)}}_{u,v}\otimes\bigotimes_{k\in[d]\setminus\{u,v\}}\bra{\sigma(k)}_{ k}\right)h_{uv}\left(\sum_{\sigma\in A_{d}}\ket{\psi_{\sigma(u),\sigma(v)}}_{u,v} \otimes\bigotimes_{k\in[d]\setminus\{u,v\}}\ket{\sigma(k)}_{k}\right)\] \[=\frac{2}{d!}\sum_{\sigma\in A_{d}}\bra{\psi_{\sigma(u),\sigma(v )}}_{u,v}h_{uv}\ket{\psi_{\sigma(u),\sigma(v)}}_{u,v}\] \[=\frac{2}{d!}\sum_{\sigma\in A_{d}}1=1\]
And thus, we have that
\[\bra{\Psi}H_{K_{d}}\Psi\rangle=\mathop{\mathbf{E}}_{(u,v)\sim E}\bra{\Psi}h_{ uv}\Psi\rangle=1\]
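Lemma 6 is also easy to confirm numerically for small \(d\). The sketch below (our own code) builds the completely antisymmetric state (44) as a tensor and checks that every edge of \(K_{d}\) has energy \(1\), using \(h_{uv}=I-P_{\mathrm{sym}}\) with \(P_{\mathrm{sym}}=\frac{1}{2}(I+\mathrm{SWAP})\) on two qudits.

```python
import numpy as np
from math import factorial
from itertools import permutations

def antisym_state(d):
    """The completely antisymmetric state (44), as a tensor of shape (d,)*d."""
    psi = np.zeros((d,) * d)
    for sigma in permutations(range(d)):
        inv = sum(sigma[i] > sigma[j] for i in range(d) for j in range(i + 1, d))
        psi[sigma] = (-1) ** inv              # sgn(sigma) via inversion count
    return psi / np.sqrt(factorial(d))

def edge_energy(psi, u, v):
    """<Psi| (I - P_sym)_{uv} |Psi> with P_sym = (I + SWAP)/2 on qudits u, v."""
    overlap = np.tensordot(psi, np.swapaxes(psi, u, v), axes=psi.ndim)
    return 1.0 - (1.0 + overlap) / 2.0

d = 4
psi = antisym_state(d)
assert all(abs(edge_energy(psi, u, v) - 1.0) < 1e-12
           for u in range(d) for v in range(u + 1, d))
```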
Next, we can give the proof for Theorem 2 by looking at the SDP vectors. We note that in Theorem 1, we already showed that the algorithmic gap is at least \(\alpha_{d}\). It then suffices to give an instance, \(G\), where our rounding algorithm outputs a solution with value \(\alpha_{d}\text{OPT}(G)\) in expectation. We show that the complete graph on \(d\) vertices, \(K_{d}\), is such an instance.
Proof.: Consider the graph \(K_{d}\) and its corresponding Hamiltonian \(H_{K_{d}}\). Then let \(M\) be an optimal SDP solution with corresponding vector program solution \(\{\ket{A}\ |\ A\in\mathcal{P}_{d}^{d}(1)\}\) to Definition 13.
We can observe that the SDP, being a relaxation, must get a value at least \(\textsc{SDP}_{\textsc{QMC}_{d}}(K_{d})\geq 1\) (here, we use Lemma 6). And, by Proposition 14, we know that \(\textsc{SDP}_{\textsc{QMC}_{d}}(K_{d})\leq 1\). Moreover, this is true for each edge interaction, as observed in the proof of Proposition 14. All in all, this means that our SDP solution for \(H_{K_{d}}\) has \(\bra{u}v\rangle=-\frac{1}{d-1}\) for each \(u<v\in V\). This is exactly the bad angle, and thus, plugging it into (1), we get the following, where the denominator is \(1\) by Lemma 6.
\[\textsc{Gap}_{A}(K_{d}) =\mathop{\mathbf{E}}_{Z}\left[\frac{1}{2}\left(\frac{d-1}{d} \right)\mathop{\mathbf{E}}_{uv\sim E}\left(1-\frac{\langle\vec{b}_{u},\vec{b} _{v}\rangle}{(d-1)^{2}}\right)\right]\] \[=\frac{1}{2}\left(\frac{d-1}{d}\right)\left(1-\frac{F^{*}(d^{2}-1,-\frac{1}{d-1})}{(d-1)^{2}}\right)\] \[=\alpha_{d}\]
The last equality follows from Lemma 4 for \(d\geq 3\).
## 8 Conclusion and Future Directions
In this paper, we look at an approximation algorithm for Quantum Max-\(d\)-Cut and prove it beats random assignment. In particular, our algorithm finds a mixed product state solution. Moreover, we show that our
analysis is tight by finding an algorithmic gap instance for our algorithm. We, however, believe that our algorithm is not optimal and that it is possible to round directly to pure product state solutions using a clever choice for a frame. Nonetheless, our paper makes progress on approximating LHPs over qudits. We give the following open problems.
Does there exist a pure product state rounding algorithm that achieves a better approximation ratio than ours? Gharibian and Kempe [10] as well as Brandao and Harrow [1], for example, looked at rounding algorithms to pure product state solutions for general LHPs over qudits, but required assumptions on their graphs. Do there exist algorithms that do not require these assumptions?
Additionally, after the Gharibian-Parekh algorithm, many subsequent papers considered rounding to entangled states [1, 11, 12, 13, 14] and achieved approximation ratios better than the optimal product state ratio of \(1/2\). Can similar algorithms using higher levels of the ncSoS hierarchy be used for Quantum Max-\(d\)-Cut? The major roadblock for this is in establishing something analogous to the "star bound" [14] for qudits. This may prove to be challenging, as odd degree terms cannot in general be ignored. However, with recent work looking at the algebra of swap operators to solve Quantum Max-Cut and its relaxation [13, 15], the question arises whether similar results can be achieved for Quantum Max-\(d\)-Cut by looking at polynomial optimization in the qudit swap operators.
In the analysis of classical approximation algorithms for CSPs/GCSPs, one metric that is used to show the optimality of an SDP-based rounding algorithm is the integrality gap. If the integrality gap matches the approximation ratio, we say that the rounding algorithm is optimal for the SDP. Work done by Hwang, Neeman, Parekh, Thompson, and Wright [12] showed that the integrality gap of the 2nd level ncSoS SDP for Quantum Max-Cut, assuming a plausible conjecture in Gaussian geometry, matches the approximation ratio of the Gharibian-Parekh algorithm. It is natural to ask if we can show an integrality gap for the 2nd level ncSoS SDP that also enforces two-body moments to be valid density matrices, for the problem of Quantum Max-\(d\)-Cut or even Quantum Max-Cut.
Finally, we observed that the state that achieves the maximal energy on the complete graph of \(d\) vertices is the unique element (up to a phase) in the antisymmetric subspace of \(d\), \(d\)-dimensional qudits, denoted \(|\Psi\rangle\). A natural problem to look at could be the \(d\)-LHP where the local Hamiltonians are projectors onto this state, \(|\Psi\rangle\!\langle\Psi|\), and interactions are determined by a \(d\)-uniform hypergraph. Another, potentially more interesting, family of problems is the "quantum" generalization of the hypergraph coloring problem. That is, a hypergraph coloring is a coloring of the vertices in \([d]\) that maximizes the number of non-monochromatic hyperedges. The "quantum" generalization of this problem could be to find a state that, for each hyperedge of order \(k\), minimizes its component in the subspace \(\mathrm{span}\{|\psi\rangle^{\otimes k}\mid|\psi\rangle\in\mathcal{H}_{d}\}\), which is nothing but the symmetric subspace of \(\mathcal{H}_{d}^{\otimes k}\). In other words, for each hyperedge of order \(k\), the local Hamiltonian would be \(I-P_{\mathrm{sym}}^{d,k}\), where \(P_{\mathrm{sym}}^{d,k}\) is the projector onto the symmetric subspace of \(k\), \(d\)-dimensional qudits.
## Acknowledgements
This work was done in part while some authors were visiting the Simons Institute for the Theory of Computing. Additionally, the authors thank John Wright and Ian Jorquera for the helpful discussions.
|
2301.06917 | Deformations and abelian extensions on anti-pre-Lie algebras | In this paper, we introduce the representation of anti-pre-Lie algebras and
give the second cohomology group of anti-pre-Lie algebras. As applications,
first, we study linear deformations of anti-pre-Lie algebras. The notion of a
Nijenhuis operator on an anti-pre-Lie algebra is introduced which can generate
a trivial linear deformation of an anti-pre-Lie algebra. Then, we study formal
deformations of anti-pre-Lie algebras. We show that the infinitesimal of a
formal deformation is a 2-cocycle with the coefficients in the regular
representation and depends only on its cohomology class. Moreover, if the
second cohomology group $H^2(A;A)$ is trivial, then the anti-pre-Lie algebra is
rigid. Finally, we introduce the notion of abelian extensions. We show that
abelian extensions are classified by the second cohomology group $H^2(A;V)$. | Shanshan Liu, Zhao Chen, Liangyun Chen | 2022-11-07T03:03:06Z | http://arxiv.org/abs/2301.06917v2 | # Anti-L-dendriform algebras, formal deformations and abelian extensions on anti-pre-Lie algebras
###### Abstract.
In this paper, we introduce the representations of anti-pre-Lie algebras and give the second cohomology group of anti-pre-Lie algebras. We introduce the notion of anti-L-dendriform algebras and give the relation between anti-L-dendriform algebras and anti-pre-Lie algebras. We introduce the notion of \(\mathcal{O}\)-operators on anti-pre-Lie algebras, by which we construct anti-L-dendriform algebras. Using the second cohomology group, we study the formal deformations and abelian extensions of anti-pre-Lie algebras. We show that the infinitesimal of a formal deformation is a 2-cocycle and depends only on its cohomology class. Moreover, if the second cohomology group \(H^{2}(A;A)\) is trivial, then the anti-pre-Lie algebra is rigid. Finally, we introduce the notion of abelian extensions. We show that abelian extensions are classified by the second cohomology group \(H^{2}(A;V)\).
Key words and phrases:anti-pre-Lie algebra, cohomology, anti-L-dendriform algebra, formal deformation, abelian extension
###### Contents
* 1 Introduction
* 2 Representations and second cohomology groups of anti-pre-Lie algebras
* 3 Anti-L-dendriform algebras
* 4 Formal deformations of anti-pre-Lie algebras
* 5 Abelian extensions of anti-pre-Lie algebras
## 1. Introduction
The notion of a pre-Lie algebra (also called a left-symmetric algebra, quasi-associative algebra, Vinberg algebra and so on) was introduced by M. Gerstenhaber in the deformation theory of rings and algebras [9]. Pre-Lie algebras arose from the study of affine manifolds and affine structures on Lie groups [15] and homogeneous convex cones [19]. Their defining identity is weaker than associativity. This algebraic structure describes some properties of the cochain space in the Hochschild cohomology of an associative algebra, of rooted trees, and of vector fields on affine spaces. Moreover, it plays an increasing role in algebra, geometry and physics due to its applications in nonassociative algebras, combinatorics, numerical analysis and quantum field theory; see also [1, 2, 4, 7]. There is a close relationship between pre-Lie algebras and Lie algebras: a pre-Lie algebra \((A,\cdot)\) gives rise to a Lie algebra \((A,[\cdot,\cdot]_{C})\) via the commutator bracket, which is called the subadjacent Lie algebra and denoted by \(A^{C}\). Furthermore, the map \(L:A\longrightarrow\operatorname{gl}(A)\), defined by \(L_{x}y=x\cdot y\) for all \(x,y\in A\), gives rise to a representation of the subadjacent Lie algebra \(A^{C}\) on \(A\). There is also a close relationship between pre-Lie algebras and L-dendriform algebras.
## 2. Representations and second cohomology groups of anti-pre-Lie algebras

In this section, we introduce the representations and the second cohomology group of anti-pre-Lie algebras. We first recall the notion of an anti-pre-Lie algebra.

**Definition 2.1**.: _An_ **anti-pre-Lie algebra** _is a vector space \(A\) together with a bilinear operation \(\cdot:A\otimes A\longrightarrow A\) such that for all \(x,y,z\in A\),_

\[x\cdot(y\cdot z)-y\cdot(x\cdot z)=[y,x]\cdot z, \tag{1}\]
\[[x,y]\cdot z+[y,z]\cdot x+[z,x]\cdot y=0, \tag{2}\]
_where_
\[[x,y]=x\cdot y-y\cdot x. \tag{3}\]
Let \((A,\cdot)\) be an anti-pre-Lie algebra. The commutator \([x,y]=x\cdot y-y\cdot x\) gives a Lie algebra \((A,[\cdot,\cdot])\), which is denoted by \(A^{C}\) and called the **sub-adjacent Lie algebra** of \((A,\cdot)\). \((A,\cdot)\) is called the **compatible anti-pre-Lie algebra** of \(A^{C}\). Moreover, \((A,-L)\) is a representation of the sub-adjacent Lie algebra \(A^{C}\), where \(L:A\longrightarrow\operatorname{gl}(A)\) is a linear map defined by \(L(x)(y)=x\cdot y\) for all \(x,y\in A\).
**Definition 2.2**.: A **morphism** _from an anti-pre-Lie algebra \((A,\cdot)\) to an anti-pre-Lie algebra \((A^{\prime},\cdot^{\prime})\) is a linear map \(f:A\longrightarrow A^{\prime}\) such that the following equation is satisfied:_
\[f(x\cdot y)=f(x)\cdot^{\prime}f(y),\ \ \forall x,y\in A. \tag{4}\]
**Definition 2.3**.: A **representation** _of an anti-pre-Lie algebra \((A,\cdot)\) on a vector space \(V\) consists of a pair \((\rho,\mu)\) of linear maps \(\rho,\mu:A\longrightarrow\operatorname{gl}(V)\) such that for all \(x,y\in A\), the following equalities are satisfied:_
\[\rho(x)\circ\rho(y)-\rho(y)\circ\rho(x) = \rho[y,x], \tag{5}\] \[\mu(x\cdot y)-\rho(x)\circ\mu(y) = \mu(y)\circ\rho(x)-\mu(y)\circ\mu(x),\] (6) \[\mu(y)\circ\mu(x)-\mu(x)\circ\mu(y)+\rho[x,y] = \mu(y)\circ\rho(x)-\mu(x)\circ\rho(y). \tag{7}\]
We denote a representation of an anti-pre-Lie algebra \((A,\cdot)\) by a triple \((V,\rho,\mu)\). Furthermore, let \(L,R:A\longrightarrow\operatorname{gl}(A)\) be linear maps, where \(L_{x}y=x\cdot y,R_{x}y=y\cdot x\). Then \((A,L,R)\) is also a representation, which is called the **regular representation**.
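For explicit computations it is convenient to encode the multiplication by structure constants \(m_{ijk}\) with \(e_{i}\cdot e_{j}=\sum_{k}m_{ijk}e_{k}\). Below is a minimal numpy checker (our own helper names, not from the text) for the defining identities (1) and (2); the zero product and the one-dimensional product \(x\cdot y=xy\) pass as simple sanity cases.

```python
import numpy as np

def is_anti_pre_lie(m, tol=1e-10):
    """Check (1) and (2) on basis vectors, for e_i . e_j = sum_k m[i,j,k] e_k."""
    left  = np.einsum('jkl,ilp->ijkp', m, m)                 # x.(y.z)
    right = np.einsum('ikl,jlp->ijkp', m, m)                 # y.(x.z)
    br = m - np.swapaxes(m, 0, 1)                            # [x,y] = x.y - y.x
    eq1 = left - right - np.einsum('jil,lkp->ijkp', br, m)   # ... - [y,x].z
    cyc = (np.einsum('ijl,lkp->ijkp', br, m)                 # [x,y].z
         + np.einsum('jkl,lip->ijkp', br, m)                 # [y,z].x
         + np.einsum('kil,ljp->ijkp', br, m))                # [z,x].y
    return np.abs(eq1).max() < tol and np.abs(cyc).max() < tol

assert is_anti_pre_lie(np.zeros((2, 2, 2)))   # the trivial product
assert is_anti_pre_lie(np.ones((1, 1, 1)))    # 1-dim product x.y = xy
```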
We define a bilinear operation \(\cdot_{A\oplus V}:\otimes^{2}(A\oplus V)\longrightarrow(A\oplus V)\) by
\[(x+u)\cdot_{A\oplus V}(y+v):=x\cdot y+\rho(x)(v)+\mu(y)(u),\ \ \ \ \forall x,y\in A,u,v\in V. \tag{8}\]
It is straightforward to obtain the following result.
**Proposition 2.4**.: _With the above notations, \((A\oplus V,\cdot_{A\oplus V})\) is an anti-pre-Lie algebra, which is denoted by \(A\ltimes_{(\rho,\mu)}V\) and called the_ **semi-direct product** _of the anti-pre-Lie algebra \((A,\cdot)\) and the representation \((V,\rho,\mu)\)._
**Proposition 2.5**.: _Let \((V,\rho,\mu)\) be a representation of an anti-pre-Lie algebra \((A,\cdot)\). Then \((V,\rho-\mu)\) is a representation of the sub-adjacent Lie algebra \(A^{C}\)._
Proof.: For all \(x,y\in A\), by (5), (6) and (7), we have
\[[(\rho-\mu)(x),(\rho-\mu)(y)]-(\rho-\mu)[x,y]\] \[= [\rho(x),\rho(y)]-[\rho(x),\mu(y)]-[\mu(x),\rho(y)]+[\mu(x),\mu(y) ]-\rho[x,y]+\mu[x,y]\] \[= \rho(x)\circ\rho(y)-\rho(y)\circ\rho(x)-\rho(x)\circ\mu(y)+\rho( y)\circ\mu(x)+\mu(x\cdot y)-\mu(y\cdot x)\] \[= \rho[y,x]+\mu(y)\circ\rho(x)-\mu(y)\circ\mu(x)-\mu(x)\circ\rho(y)+ \mu(x)\circ\mu(y)\] \[= 0,\]
which implies that
\[[(\rho-\mu)(x),(\rho-\mu)(y)]=(\rho-\mu)[x,y].\]
This finishes the proof.
Let \((V,\rho,\mu)\) be a representation of an anti-pre-Lie algebra \((A,\cdot)\). For all \(x\in A,u\in V,\xi\in V^{*}\), define \(\rho^{*}:A\longrightarrow\operatorname{gl}(V^{*})\) and \(\mu^{*}:A\longrightarrow\operatorname{gl}(V^{*})\) as usual by
\[\langle\rho^{*}(x)(\xi),u\rangle=-\langle\xi,\rho(x)(u)\rangle,\quad\langle \mu^{*}(x)(\xi),u\rangle=-\langle\xi,\mu(x)(u)\rangle.\]
**Theorem 2.6**.: _Let \((A,\cdot)\) be an anti-pre-Lie algebra and \((V,\rho,\mu)\) a representation. Then \((V^{*},\mu^{*}-\rho^{*},\mu^{*})\) is a representation of \((A,\cdot)\), which is called the_ **dual representation** _of \((V,\rho,\mu)\)._
Proof.: For all \(x,y\in A,\xi\in V^{*}\) and \(u\in V\), by (5), (6) and (7), we have
\[\langle((\mu^{*}-\rho^{*})(x)(\mu^{*}-\rho^{*})(y)-(\mu^{*}-\rho^ {*})(y)(\mu^{*}-\rho^{*})(x)-(\mu^{*}-\rho^{*})[y,x])(\xi),u\rangle\] \[= \langle(\mu^{*}(x)\mu^{*}(y)-\mu^{*}(x)\rho^{*}(y)-\rho^{*}(x) \mu^{*}(y)+\rho^{*}(x)\rho^{*}(y)-\mu^{*}(y)\mu^{*}(x)+\mu^{*}(y)\rho^{*}(x)\] \[+\rho^{*}(y)\mu^{*}(x)-\rho^{*}(y)\rho^{*}(x)-\mu^{*}[y,x]+\rho^{ *}[y,x])(\xi),u\rangle\] \[= \langle\xi,(\mu(y)\mu(x)-\rho(y)\mu(x)-\mu(y)\rho(x)+\rho(y)\rho( x)-\mu(x)\mu(y)+\rho(x)\mu(y)+\mu(x)\rho(y)\] \[-\rho(x)\rho(y)+\mu[y,x]-\rho[y,x])(u)\rangle\] \[= \langle\xi,(-\rho(y)\mu(x)+\rho(y)\rho(x)+\rho(x)\mu(y)-\rho(x) \rho(y)+\mu(y\cdot x)-\mu(x\cdot y))(u)\rangle\] \[= \langle\xi,(\mu(x)\rho(y)-\mu(x)\mu(y)-\mu(y)\rho(x)+\mu(y)\mu(x) +\rho[x,y])(u)\rangle\] \[= 0,\]
which implies that
\[(\mu^{*}-\rho^{*})(x)\circ(\mu^{*}-\rho^{*})(y)-(\mu^{*}-\rho^{*})(y)\circ( \mu^{*}-\rho^{*})(x)=(\mu^{*}-\rho^{*})[y,x]. \tag{9}\]
By (6), we have
\[\langle(\mu^{*}(x\cdot y)-(\mu^{*}-\rho^{*})(x)\mu^{*}(y)-\mu^{*}(y)(\mu^{*}- \rho^{*})(x)+\mu^{*}(y)\mu^{*}(x))(\xi),u\rangle\] \[= \langle(\mu^{*}(x\cdot y)-\mu^{*}(x)\mu^{*}(y)+\rho^{*}(x)\mu^{*} (y)+\mu^{*}(y)\rho^{*}(x))(\xi),u\rangle\] \[= \langle\xi,(-\mu(x\cdot y)-\mu(y)\mu(x)+\mu(y)\rho(x)+\rho(x)\mu( y))(u)\rangle\] \[= 0,\]
which implies that
\[\mu^{*}(x\cdot y)-(\mu^{*}-\rho^{*})(x)\circ\mu^{*}(y)=\mu^{*}(y)\circ(\mu^{*}-\rho^{*})(x)-\mu^{*}(y)\circ\mu^{*}(x). \tag{10}\]
By (6) and (7), we have
\[\langle(\mu^{*}(y)\mu^{*}(x)-\mu^{*}(x)\mu^{*}(y)+(\mu^{*}-\rho^ {*})[x,y]-\mu^{*}(y)(\mu^{*}-\rho^{*})(x)+\mu^{*}(x)(\mu^{*}-\rho^{*})(y))( \xi),u\rangle\] \[= \langle(\mu^{*}[x,y]-\rho^{*}[x,y]+\mu^{*}(y)\rho^{*}(x)-\mu^{*}( x)\rho^{*}(y))(\xi),u\rangle\] \[= \langle\xi,(-\mu[x,y]+\rho[x,y]+\rho(x)\mu(y)-\rho(y)\mu(x))(u)\rangle\] \[= \langle\xi,(-\mu(y)\rho(x)+\mu(y)\rho(x)+\mu(x)\rho(y)-\mu(x)\mu( y)+\rho[x,y])(u)\rangle\] \[= 0,\]
which implies that
\[\mu^{*}(y)\circ\mu^{*}(x)-\mu^{*}(x)\circ\mu^{*}(y)+(\mu^{*}-\rho^{*})[x,y]=\mu ^{*}(y)\circ(\mu^{*}-\rho^{*})(x)-\mu^{*}(x)\circ(\mu^{*}-\rho^{*})(y). \tag{11}\]
By (9), (10) and (11), we deduce that \((V^{*},\mu^{*}-\rho^{*},\mu^{*})\) is a representation of \((A,\cdot)\).
**Corollary 2.7**.: _Let \((V,\rho,\mu)\) be a representation of an anti-pre-Lie algebra \((A,\cdot)\). Then the dual representation of \((V^{*},\mu^{*}-\rho^{*},\mu^{*})\) is \((V,\rho,\mu)\)._
Proof.: It is straightforward.
Considering the dual representation of the regular representation, we obtain the following.
**Corollary 2.8**.: _Let \((A,\cdot)\) be an anti-pre-Lie algebra. Then \((A^{*},R^{*}-L^{*},R^{*})\) is a representation of \((A,\cdot)\)._
**Proposition 2.9**.: _Let \((V,\rho,\mu)\) be a representation of an anti-pre-Lie algebra \((A,\cdot)\). Then the following conditions are equivalent:_
* (i) \((V,\mu-\rho,\mu)\) is a representation of the anti-pre-Lie algebra \((A,\cdot)\);
* (ii) \((V^{*},\rho^{*},\mu^{*})\) is a representation of the anti-pre-Lie algebra \((A,\cdot)\);
* (iii) \(\mu(x\cdot y)+\mu(y\cdot x)=0\) for all \(x,y\in A\).
Proof.: By Theorem 2.6 and Corollary 2.7, we obtain that condition (i) is equivalent to condition (ii). If \((V,\mu-\rho,\mu)\) is a representation of \((A,\cdot)\), by (6), for all \(x,y\in A\), we have
\[0 = \mu(x\cdot y)-(\mu-\rho)(x)\circ\mu(y)-\mu(y)\circ(\mu-\rho)(x)+ \mu(y)\circ\mu(x)\] \[= \mu(x\cdot y)-\mu(x)\circ\mu(y)+\rho(x)\circ\mu(y)+\mu(y)\circ \rho(x)\] \[= 2\mu(x\cdot y)-\mu(x)\circ\mu(y)+\mu(y)\circ\mu(x),\]
which implies that
\[2\mu(x\cdot y)-\mu(x)\circ\mu(y)+\mu(y)\circ\mu(x)=0. \tag{12}\]
By (7), for all \(x,y\in A\), we have
\[0 = \mu(y)\circ\mu(x)-\mu(x)\circ\mu(y)+(\mu-\rho)[x,y]-\mu(y)\circ( \mu-\rho)(x)+\mu(x)\circ(\mu-\rho)(y)\] \[= \mu[x,y]-\rho[x,y]+\mu(y)\circ\rho(x)-\mu(x)\circ\rho(y)\] \[= \mu[x,y]+\mu(y)\circ\mu(x)-\mu(x)\circ\mu(y),\]
which implies that
\[\mu[x,y]+\mu(y)\circ\mu(x)-\mu(x)\circ\mu(y)=0. \tag{13}\]
By (12) and (13), we have \(\mu(x\cdot y)+\mu(y\cdot x)=0\). The converse part can be proved similarly. We omit details. Thus, we deduce that condition (i) is equivalent to condition (iii).
Let \((V,\rho,\mu)\) be a representation of an anti-pre-Lie algebra \((A,\cdot)\). The set of \(n\)-cochains is given by
\[C^{n}(A;V)=\operatorname{Hom}(\otimes^{n}A,V),\quad\forall n\geq 0.\]
Now, we define the \(1\)-coboundary operator and the \(2\)-coboundary operator of \((A,\cdot)\) with respect to the representation \((V,\rho,\mu)\). For all \(f\in C^{1}(A;V)\) and \(x,y\in A\), define \(\mathrm{d}^{1}:C^{1}(A;V)\longrightarrow C^{2}(A;V)\) by
\[\mathrm{d}^{1}(f)(x,y)=\rho(x)f(y)+\mu(y)f(x)-f(x\cdot y).\]
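For instance, with coefficients in the regular representation \((A,L,R)\), this reads
\[\mathrm{d}^{1}(f)(x,y)=x\cdot f(y)+f(x)\cdot y-f(x\cdot y),\]
so \(\mathrm{d}^{1}(f)=0\) exactly when \(f\) is a derivation of \((A,\cdot)\).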
The \(2\)-coboundary operator of \((A,\cdot)\) on \(V\) consists of a pair of maps \((\mathrm{d}^{2}_{1},\mathrm{d}^{2}_{2})\): for all \(f\in C^{2}(A;V)\) and \(x,y,z\in A\), define \(\mathrm{d}^{2}_{i}:C^{2}(A;V)\longrightarrow C^{3}(A;V)\) by
\[\mathrm{d}^{2}_{1}(f)(x,y,z) = \rho(x)f(y,z)-\rho(y)f(x,z)-\mu(z)f(y,x)+\mu(z)f(x,y)\] \[-f(y,x\cdot z)+f(x,y\cdot z)+f([x,y],z),\]
and
\[\mathrm{d}^{2}_{2}(f)(x,y,z) = \mu(x)(f(y,z)-f(z,y))+\mu(y)(f(z,x)-f(x,z))+\mu(z)(f(x,y)-f(y,x))\] \[+f([x,y],z)+f([y,z],x)+f([z,x],y).\]
A \(2\)-cochain \(f\in C^{2}(A;V)\) is called closed if \(\mathrm{d}^{2}_{1}(f)=\mathrm{d}^{2}_{2}(f)=0\), and exact if \(f=\mathrm{d}^{1}(g)\) for some \(g\in C^{1}(A;V)\). We denote the set of closed \(2\)-cochains by \(Z^{2}(A;V)\) and the set of exact \(2\)-cochains by \(B^{2}(A;V)\).
**Proposition 2.10**.: _With the above notations, we have \(B^{2}(A;V)\subset Z^{2}(A;V)\)._
Proof.: For all \(f\in C^{1}(A;V)\), we show that \(\mathrm{d}^{1}f\in Z^{2}(A;V)\). By (1), (5) and (6), for all \(x,y,z\in A\), we have
\[\mathrm{d}^{2}_{1}(\mathrm{d}^{1}f)(x,y,z)\] \[= \rho(x)(\mathrm{d}^{1}f)(y,z)-\rho(y)(\mathrm{d}^{1}f)(x,z)-\mu(z) (\mathrm{d}^{1}f)(y,x)+\mu(z)(\mathrm{d}^{1}f)(x,y)\] \[-(\mathrm{d}^{1}f)(y,x\cdot z)+(\mathrm{d}^{1}f)(x,y\cdot z)+( \mathrm{d}^{1}f)([x,y],z)\] \[= \rho(x)\rho(y)f(z)+\rho(x)\mu(z)f(y)-\rho(x)f(y\cdot z)-\rho(y) \rho(x)f(z)-\rho(y)\mu(z)f(x)+\rho(y)f(x\cdot z)\] \[-\mu(z)\rho(y)f(x)-\mu(z)\mu(x)f(y)+\mu(z)f(y\cdot x)+\mu(z)\rho( x)f(y)+\mu(z)\mu(y)f(x)-\mu(z)f(x\cdot y)\] \[-\rho(y)f(x\cdot z)-\mu(x\cdot z)f(y)+f(y\cdot(x\cdot z))+\rho(x )f(y\cdot z)+\mu(y\cdot z)f(x)-f(x\cdot(y\cdot z))\] \[+\rho[x,y]f(z)+\mu(z)f([x,y])-f([x,y]\cdot z)\] \[= 0.\]
By (2) and (7), we have
\[\mathrm{d}^{2}_{2}(\mathrm{d}^{1}f)(x,y,z)\] \[= \mu(x)((\mathrm{d}^{1}f)(y,z)-(\mathrm{d}^{1}f)(z,y))+\mu(y)(( \mathrm{d}^{1}f)(z,x)-(\mathrm{d}^{1}f)(x,z))+\mu(z)((\mathrm{d}^{1}f)(x,y)\] \[-(\mathrm{d}^{1}f)(y,x))+(\mathrm{d}^{1}f)([x,y],z)+(\mathrm{d}^ {1}f)([y,z],x)+(\mathrm{d}^{1}f)([z,x],y)\] \[= \mu(x)\rho(y)f(z)+\mu(x)\mu(z)f(y)-\mu(x)f(y\cdot z)-\mu(x)\rho(z )f(y)-\mu(x)\mu(y)f(z)+\mu(x)f(z\cdot y)\] \[+\mu(y)\rho(z)f(x)+\mu(y)\mu(x)f(z)-\mu(y)f(z\cdot x)-\mu(y)\rho( x)f(z)-\mu(y)\mu(z)f(x)+\mu(y)f(x\cdot z)\] \[+\mu(z)\rho(x)f(y)+\mu(z)\mu(y)f(x)-\mu(z)f(x\cdot y)-\mu(z)\rho( y)f(x)-\mu(z)\mu(x)f(y)+\mu(z)f(y\cdot x)\] \[+\rho[x,y]f(z)+\mu(z)f([x,y])-f([x,y]\cdot z)+\rho[y,z]f(x)+\mu(x) f([y,z])-f([y,z]\cdot x)\] \[+\rho[z,x]f(y)+\mu(y)f([z,x])-f([z,x]\cdot y)\] \[= 0.\]
Thus, we obtain that \(B^{2}(A;V)\subset Z^{2}(A;V)\).
We denote by \(H^{2}(A;V)=Z^{2}(A;V)/B^{2}(A;V)\) the corresponding second cohomology group of the anti-pre-Lie algebra \((A,\cdot)\) with coefficients in the representation \((V,\rho,\mu)\).
## 3. Anti-L-dendriform algebras
In this section, first we introduce the notion of anti-L-dendriform algebras and give relations between anti-L-dendriform algebras and anti-pre-Lie algebras. Then we introduce the notion of \(\mathcal{O}\)-operators on anti-pre-Lie algebras, by which we construct anti-L-dendriform algebras.
**Definition 3.1**.: _An_ **anti-L-dendriform algebra**_\((A,\triangleright,\triangleleft)\) is a vector space \(A\) equipped with two bilinear products \(\triangleright,\triangleleft:A\otimes A\longrightarrow A\), such that for all \(x,y,z\in A\), the following equalities are satisfied_
\[x\triangleright(y\triangleright z)-y\triangleright(x\triangleright z)-(y\triangleright x)\triangleright z+(x\triangleleft y)\triangleright z+(x\triangleright y)\triangleright z-(y\triangleleft x)\triangleright z=0, \tag{14}\]
\[x\triangleright(y\triangleleft z)-(x\triangleright y)\triangleleft z+(y\triangleleft x)\triangleleft z+y\triangleleft(x\triangleleft z)+y\triangleleft(x\triangleright z)=0, \tag{15}\]
\[y\triangleleft(x\triangleleft z)+y\triangleleft(x\triangleright z)+(x\triangleright y)\triangleright z-(y\triangleleft x)\triangleright z-(y\triangleright x)\triangleright z+(x\triangleleft y)\triangleright z-x\triangleleft(y\triangleright z)-x\triangleleft(y\triangleleft z)=0. \tag{16}\]
**Proposition 3.2**.: _Let \((A,\triangleright,\triangleleft)\) be an anti-L-dendriform algebra. The bilinear product \(\cdot:A\otimes A\longrightarrow A\) given by_
\[x\cdot y=x\triangleright y-y\triangleleft x,\quad\forall x,y\in A,\]
_defines an anti-pre-Lie algebra. \((A,\cdot)\) is called the associated anti-pre-Lie algebra of \((A,\triangleright,\triangleleft)\) and \((A,\triangleright,\triangleleft)\) is called the compatible anti-\(L\)-dendriform algebra structure on the anti-pre-Lie algebra \((A,\cdot)\)._
Proof.: By (14), (15) and (16), it is straightforward to obtain that \((A,\cdot)\) is an anti-pre-Lie algebra.
**Proposition 3.3**.: _Let \(A\) be a vector space with two bilinear products \(\triangleright,\triangleleft:A\otimes A\longrightarrow A\), and let \(L_{\triangleright},L_{\triangleleft}\) denote the left multiplication operators \(L_{\triangleright}(x)(y)=x\triangleright y\), \(L_{\triangleleft}(x)(y)=x\triangleleft y\). Then \((A,\triangleright,\triangleleft)\) is an anti-\(L\)-dendriform algebra if and only if \((A,\cdot)\) is an anti-pre-Lie algebra and \((A,L_{\triangleright},-L_{\triangleleft})\) is a representation of \((A,\cdot)\)._
Proof.: If \((A,\triangleright,\triangleleft)\) is an anti-\(L\)-dendriform algebra, then for all \(x,y,z\in A\), by (14), we have
\[L_{\triangleright}(x)L_{\triangleright}(y)(z)-L_{\triangleright}(y)L_{\triangleright}(x)(z)-L_{\triangleright}([y,x])(z)\] \[= x\triangleright(y\triangleright z)-y\triangleright(x\triangleright z)-(y\triangleright x)\triangleright z+(x\triangleleft y)\triangleright z+(x\triangleright y)\triangleright z-(y\triangleleft x)\triangleright z\] \[= 0,\]
which implies that
\[L_{\triangleright}(x)\circ L_{\triangleright}(y)-L_{\triangleright}(y)\circ L_{\triangleright}(x)=L_{\triangleright}([y,x]). \tag{17}\]
By (15), we have
\[-L_{\triangleleft}(x\cdot y)(z)+L_{\triangleright}(x)L_{\triangleleft}(y)(z)+L_{\triangleleft}(y)L_{\triangleright}(x)(z)+L_{\triangleleft}(y)L_{\triangleleft}(x)(z)\] \[= -(x\triangleright y)\triangleleft z+(y\triangleleft x)\triangleleft z+x\triangleright(y\triangleleft z)+y\triangleleft(x\triangleright z)+y\triangleleft(x\triangleleft z)\] \[= 0,\]
which implies that
\[-L_{\triangleleft}(x\cdot y)-L_{\triangleright}(x)\circ(-L_{\triangleleft}(y))=(-L_{\triangleleft}(y))\circ L_{\triangleright}(x)-(-L_{\triangleleft}(y))\circ(-L_{\triangleleft}(x)). \tag{18}\]
By (16), we have
\[L_{\triangleleft}(y)L_{\triangleleft}(x)(z)-L_{\triangleleft}(x)L_{\triangleleft}(y)(z)+L_{\triangleright}([x,y])(z)+L_{\triangleleft}(y)L_{\triangleright}(x)(z)-L_{\triangleleft}(x)L_{\triangleright}(y)(z)\] \[= y\triangleleft(x\triangleleft z)-x\triangleleft(y\triangleleft z)+(x\triangleright y)\triangleright z-(y\triangleleft x)\triangleright z\] \[+y\triangleleft(x\triangleright z)-x\triangleleft(y\triangleright z)-(y\triangleright x)\triangleright z+(x\triangleleft y)\triangleright z\] \[= 0,\]
which implies that
\[(-L_{\triangleleft}(y))\circ(-L_{\triangleleft}(x))-(-L_{\triangleleft}(x))\circ(-L_{\triangleleft}(y))+L_{\triangleright}([x,y])=(-L_{\triangleleft}(y))\circ L_{\triangleright}(x)-(-L_{\triangleleft}(x))\circ L_{\triangleright}(y). \tag{19}\]
Thus, by (17), (18) and (19), we obtain that \((A,L_{\triangleright},-L_{\triangleleft})\) is a representation of \((A,\cdot)\). The converse part can be proved similarly. We omit details. The proof is finished.
**Corollary 3.4**.: _Let \((A,\triangleright,\triangleleft)\) be an anti-\(L\)-dendriform algebra with associated anti-pre-Lie algebra \((A,\cdot)\). Then \((A^{*},-L_{\triangleleft}^{*}-L_{\triangleright}^{*},-L_{\triangleleft}^{*})\) is a representation of \((A,\cdot)\)._
Proof.: By Theorem 2.6 and Proposition 3.3, it is straightforward.
**Definition 3.5**.: _Let \((A,\cdot)\) be an anti-pre-Lie algebra and \((V,\rho,\mu)\) a representation of \((A,\cdot)\). A linear map \(T:V\longrightarrow A\) is called an \(\mathcal{O}\)_**-operator** _if for all \(u,v\in V\), the following equation is satisfied_
\[T(u)\cdot T(v)=T(\rho(T(u))(v)+\mu(T(v))(u)). \tag{20}\]
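For example, by Proposition 3.3, if \((A,\triangleright,\triangleleft)\) is an anti-L-dendriform algebra with associated anti-pre-Lie algebra \((A,\cdot)\), then the identity map \(\mathrm{Id}:A\longrightarrow A\) is an \(\mathcal{O}\)-operator associated to the representation \((A,L_{\triangleright},-L_{\triangleleft})\): in this case (20) reduces to
\[x\cdot y=L_{\triangleright}(x)(y)-L_{\triangleleft}(y)(x)=x\triangleright y-y\triangleleft x,\]
which is precisely the associated anti-pre-Lie product. This observation reappears in the proof of Theorem 3.8 below.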
**Theorem 3.6**.: _Let \((A,\cdot)\) be an anti-pre-Lie algebra and \((V,\rho,\mu)\) a representation of \((A,\cdot)\). Suppose that \(T:V\longrightarrow A\) is an \(\mathcal{O}\)-operator. Then there exists an anti-\(L\)-dendriform algebra structure on \(V\) defined by_
\[u\triangleright_{V}v=\rho(T(u))v,\quad u\triangleleft_{V}v=-\mu(T(u))v,\quad\forall u,v\in V.\]
Proof.: For all \(u,v,w\in V\), by (5) and (20), we have
\[u\triangleright_{V}(v\triangleright_{V}w)-v\triangleright_{V}(u\triangleright_{V}w)-(v\triangleright_{V}u)\triangleright_{V}w+(u\triangleleft_{V}v)\triangleright_{V}w+(u\triangleright_{V}v)\triangleright_{V}w-(v\triangleleft_{V}u)\triangleright_{V}w\] \[= \rho(T(u))\rho(T(v))w-\rho(T(v))\rho(T(u))w-\rho(T(\rho(T(v))u))w\] \[-\rho(T(\mu(T(u))v))w+\rho(T(\rho(T(u))v))w+\rho(T(\mu(T(v))u))w\] \[= \rho(T(u))\rho(T(v))w-\rho(T(v))\rho(T(u))w+\rho([T(u),T(v)])w\] \[= 0,\]
which implies that (14) holds. By (6) and (20), we have
\[u\triangleright_{V}(v\triangleleft_{V}w)-(u\triangleright_{V}v)\triangleleft_{V}w+(v\triangleleft_{V}u)\triangleleft_{V}w+v\triangleleft_{V}(u\triangleleft_{V}w)+v\triangleleft_{V}(u\triangleright_{V}w)\] \[= -\rho(T(u))\mu(T(v))w+\mu(T(\rho(T(u))v))w+\mu(T(\mu(T(v))u))w+\mu(T(v))\mu(T(u))w\] \[-\mu(T(v))\rho(T(u))w\] \[= -\mu(T(u)\cdot T(v))w+\mu(T(\rho(T(u))v))w+\mu(T(\mu(T(v))u))w\] \[= 0,\]
which implies that (15) holds. By (7) and (20), we have
\[v\triangleleft_{V}(u\triangleleft_{V}w)+v\triangleleft_{V}(u\triangleright_{V}w)+(u\triangleright_{V}v)\triangleright_{V}w-(v\triangleleft_{V}u)\triangleright_{V}w-(v\triangleright_{V}u)\triangleright_{V}w+(u\triangleleft_{V}v)\triangleright_{V}w\] \[-u\triangleleft_{V}(v\triangleright_{V}w)-u\triangleleft_{V}(v\triangleleft_{V}w)\] \[= \mu(T(v))\mu(T(u))w-\mu(T(v))\rho(T(u))w+\rho(T(\rho(T(u))v))w+\rho(T(\mu(T(v))u))w\] \[-\rho(T(\rho(T(v))u))w-\rho(T(\mu(T(u))v))w+\mu(T(u))\rho(T(v))w-\mu(T(u))\mu(T(v))w\] \[= -\rho([T(u),T(v)])w+\rho(T(\rho(T(u))v))w+\rho(T(\mu(T(v))u))w-\rho(T(\rho(T(v))u))w-\rho(T(\mu(T(u))v))w\] \[= 0,\]
which implies that (16) holds. This finishes the proof.
**Corollary 3.7**.: _With the above conditions, \(T\) is a homomorphism from the associated anti-pre-Lie algebra of \((V,\triangleright_{V},\triangleleft_{V})\) to the anti-pre-Lie algebra \((A,\cdot)\). Moreover, \(T(V)=\{T(u)|u\in V\}\subset A\) is an anti-pre-Lie subalgebra of \((A,\cdot)\) and there is an induced anti-\(L\)-dendriform algebra structure on \(T(V)\) given by_
\[T(u)\triangleright T(v)=T(u\triangleright_{V}v),\quad T(u)\triangleleft T(v)=T(u\triangleleft_{V}v),\quad\forall u,v\in V.\]
**Theorem 3.8**.: _Let \((A,\cdot)\) be an anti-pre-Lie algebra. Then there exists a compatible anti-\(L\)-dendriform algebra structure on \((A,\cdot)\) such that \((A,\cdot)\) is the associated anti-pre-Lie algebra if and only if there exists an invertible \(\mathcal{O}\)-operator \(T\) associated to a representation \((V,\rho,\mu)\)._
Proof.: Let \(T\) be an invertible \(\mathcal{O}\)-operator associated to a representation \((V,\rho,\mu)\). By Theorem 3.6 and Corollary 3.7, there exists an anti-\(L\)-dendriform algebra structure on \(T(V)=A\) given by
\[x\triangleright y=T(\rho(x)T^{-1}(y)),\quad y\triangleleft x=-T(\mu(y)T^{-1}(x )),\quad\forall x,y\in A.\]
Moreover, by (20), we have
\[x\triangleright y-y\triangleleft x=T(\rho(x)T^{-1}(y)+\mu(y)T^{-1}(x))=x \cdot y.\]
Conversely, let \((A,\triangleright,\triangleleft)\) be an anti-L-dendriform algebra and \((A,\cdot)\) be the associated anti-pre-Lie algebra. By Proposition 3.3, \((A,L_{\triangleright},-L_{\triangleleft})\) is a representation of \((A,\cdot)\) and \(\operatorname{Id}:A\longrightarrow A\) is an invertible \(\mathcal{O}\)-operator of \((A,\cdot)\) associated to \((A,L_{\triangleright},-L_{\triangleleft})\).
**Theorem 3.9**.: _Let \((A,\cdot)\) be an anti-pre-Lie algebra and \(\mathcal{B}\in A^{*}\wedge A^{*}\) a nondegenerate bilinear map such that_
\[\mathcal{B}(x,y\cdot z)-\mathcal{B}(y,x\cdot z)=\mathcal{B}([y,x],z),\quad\forall x,y,z\in A.\]
_Then there exists a compatible anti-L-dendriform algebra structure on \((A,\cdot)\) given by_
\[\mathcal{B}(x\triangleright y,z)=-\mathcal{B}(y,[z,x]),\quad\mathcal{B}(x \triangleleft y,z)=\mathcal{B}(y,z\cdot x),\quad\forall x,y,z\in A.\]
Proof.: Define a linear map \(\mathcal{B}^{\sharp}:A\longrightarrow A^{*}\) by
\[\langle\mathcal{B}^{\sharp}(x),y\rangle=\mathcal{B}(x,y),\quad\forall x,y\in A.\]
For all \(x,y,z\in A,\xi,\eta,\gamma\in A^{*}\), setting \(x=(\mathcal{B}^{\sharp})^{-1}(\xi),y=(\mathcal{B}^{\sharp})^{-1}(\eta),z=(\mathcal{B}^{\sharp})^{-1}(\gamma)\), we have
\[\langle(\mathcal{B}^{\sharp})^{-1}(\xi)\cdot(\mathcal{B}^{\sharp})^{-1}(\eta)-(\mathcal{B}^{\sharp})^{-1}\big((R^{*}-L^{*})((\mathcal{B}^{\sharp})^{-1}(\xi))(\eta)+R^{*}((\mathcal{B}^{\sharp})^{-1}(\eta))(\xi)\big),\gamma\rangle\] \[= \langle x\cdot y,\mathcal{B}^{\sharp}(z)\rangle-\langle(\mathcal{B}^{\sharp})^{*}(\mathcal{B}^{\sharp})^{-1}((R^{*}-L^{*})((\mathcal{B}^{\sharp})^{-1}(\xi))(\eta)),z\rangle-\langle(\mathcal{B}^{\sharp})^{*}(\mathcal{B}^{\sharp})^{-1}(R^{*}((\mathcal{B}^{\sharp})^{-1}(\eta))(\xi)),z\rangle\] \[= \langle x\cdot y,\mathcal{B}^{\sharp}(z)\rangle+\langle(R^{*}-L^{*})((\mathcal{B}^{\sharp})^{-1}(\xi))(\eta),z\rangle+\langle R^{*}((\mathcal{B}^{\sharp})^{-1}(\eta))(\xi),z\rangle\] \[= \langle x\cdot y,\mathcal{B}^{\sharp}(z)\rangle-\langle\mathcal{B}^{\sharp}(y),z\cdot x-x\cdot z\rangle-\langle\mathcal{B}^{\sharp}(x),z\cdot y\rangle\] \[= \mathcal{B}(z,x\cdot y)-\mathcal{B}(y,[z,x])-\mathcal{B}(x,z\cdot y)\] \[= 0,\]
which implies that
\[(\mathcal{B}^{\sharp})^{-1}(\xi)\cdot(\mathcal{B}^{\sharp})^{-1}(\eta)=( \mathcal{B}^{\sharp})^{-1}((R^{*}-L^{*})((\mathcal{B}^{\sharp})^{-1}(\xi))( \eta)+R^{*}((\mathcal{B}^{\sharp})^{-1}(\eta))(\xi)).\]
Thus, we deduce that \((\mathcal{B}^{\sharp})^{-1}\) is an \(\mathcal{O}\)-operator associated to the representation \((A^{*},R^{*}-L^{*},R^{*})\). By Theorem 3.8, there is a compatible anti-L-dendriform algebra structure on \(A\) defined by
\[\mathcal{B}(x\triangleright y,z) = \mathcal{B}((\mathcal{B}^{\sharp})^{-1}((R^{*}-L^{*})(x) \mathcal{B}^{\sharp}(y)),z)\] \[= \langle(R^{*}-L^{*})(x)\mathcal{B}^{\sharp}(y),z\rangle\] \[= -\langle\mathcal{B}^{\sharp}(y),z\cdot x-x\cdot z\rangle\] \[= -\mathcal{B}(y,[z,x]).\]
Similarly, we have \(\mathcal{B}(x\triangleleft y,z)=\mathcal{B}(y,z\cdot x)\). The proof is finished.
## 4. Formal deformations of anti-pre-Lie algebras
In this section, we study formal deformations of anti-pre-Lie algebras. We show that the infinitesimal of a formal deformation is a \(2\)-cocycle whose cohomology class depends only on the equivalence class of the deformation. Moreover, if the second cohomology group \(H^{2}(A;A)\) is trivial, then the anti-pre-Lie algebra is rigid.
In the sequel, we will denote the anti-pre-Lie multiplication \(\cdot\) by \(\omega\).
**Definition 4.1**.: _Let \((A,\omega)\) be an anti-pre-Lie algebra and \(\omega_{t}=\omega+\sum_{i=1}^{+\infty}\omega_{i}t^{i}:A[[t]]\otimes A[[t]] \longrightarrow A[[t]]\) a \(\mathbb{K}[[t]]\)-bilinear map, where \(\omega_{i}:A\otimes A\longrightarrow A\) is a linear map. If \((A[[t]],\omega_{t})\) is still an anti-pre-Lie algebra, we say that \(\{\omega_{i}\}_{i\geq 1}\) generates a \(1\)**-parameter formal deformation** of an anti-pre-Lie algebra \((A,\omega)\)._
If \(\{\omega_{i}\}_{i\geq 1}\) generates a \(1\)-parameter formal deformation of an anti-pre-Lie algebra \((A,\omega)\), for all \(x,y,z\in A\) and \(n=1,2,\dots\), we have
\[\sum_{\substack{i+j=n\\ i,j\geq 0}}\omega_{i}(x,\omega_{j}(y,z))-\omega_{i}(y,\omega_{j}(x,z))-\omega_{i}(\omega_{j}(y,x),z)+\omega_{i}(\omega_{j}(x,y),z)=0. \tag{21}\]
Moreover, we have
\[\sum_{\substack{i+j=n\\ i,j\geq 1}}\omega_{i}(x,\omega_{j}(y,z))-\omega_{i}(y,\omega_{j}(x,z))-\omega_{i}(\omega_{j}(y,x),z)+\omega_{i}(\omega_{j}(x,y),z)=-\mathrm{d}_{1}^{2}\omega_{n}(x,y,z). \tag{22}\]
For all \(x,y,z\in A\) and \(n=1,2,\dots\), we have
\[\sum_{\substack{i+j=n\\ i,j\geq 0}}\omega_{i}(\omega_{j}(x,y)-\omega_{j}(y,x),z)+\omega_{i}(\omega_{j}(y,z)-\omega_{j}(z,y),x)+\omega_{i}(\omega_{j}(z,x)-\omega_{j}(x,z),y)=0. \tag{23}\]
Moreover, we have
\[\sum_{\substack{i+j=n\\ i,j\geq 1}}\omega_{i}(\omega_{j}(x,y)-\omega_{j}(y,x),z)+\omega_{i}(\omega_{j}(y,z)-\omega_{j}(z,y),x)+\omega_{i}(\omega_{j}(z,x)-\omega_{j}(x,z),y)=-\mathrm{d}_{2}^{2}\omega_{n}(x,y,z). \tag{24}\]
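For instance, when \(n=2\), the sums in (22) and (24) contain only the term \((i,j)=(1,1)\); e.g., (22) reads
\[\mathrm{d}_{1}^{2}\omega_{2}(x,y,z)=-\big(\omega_{1}(x,\omega_{1}(y,z))-\omega_{1}(y,\omega_{1}(x,z))-\omega_{1}(\omega_{1}(y,x),z)+\omega_{1}(\omega_{1}(x,y),z)\big),\]
so \(\omega_{2}\) measures the failure of \(\omega_{1}\) to satisfy (1) on its own.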
**Proposition 4.2**.: _Let \(\omega_{t}=\omega+\sum_{i=1}^{+\infty}\omega_{i}t^{i}\) be a \(1\)-parameter formal deformation of an anti-pre-Lie algebra \((A,\omega)\). Then \(\omega_{1}\) is a \(2\)-cocycle of the anti-pre-Lie algebra \((A,\omega)\) with coefficients in the regular representation._
Proof.: When \(n=1\), for all \(x,y,z\in A\), by (21), we have
\[0 = x\cdot\omega_{1}(y,z)-y\cdot\omega_{1}(x,z)-\omega_{1}(y,x)\cdot z +\omega_{1}(x,y)\cdot z\] \[+\omega_{1}(x,y\cdot z)-\omega_{1}(y,x\cdot z)-\omega_{1}(y\cdot x,z)+\omega_{1}(x\cdot y,z)\] \[= \mathrm{d}_{1}^{2}\omega_{1}(x,y,z),\]
and by (23), we have
\[0 = (\omega_{1}(x,y)-\omega_{1}(y,x))\cdot z+(\omega_{1}(y,z)-\omega_ {1}(z,y))\cdot x+(\omega_{1}(z,x)-\omega_{1}(x,z))\cdot y\] \[+\omega_{1}([x,y],z)+\omega_{1}([y,z],x)+\omega_{1}([z,x],y)\] \[= \mathrm{d}_{2}^{2}\omega_{1}(x,y,z).\]
Thus, \(\omega_{1}\) is a \(2\)-cocycle of the anti-pre-Lie algebra \((A,\omega)\) with coefficients in the regular representation.
**Definition 4.3**.: _The \(2\)-cocycle \(\omega_{1}\) is called the_ **infinitesimal** _of the \(1\)-parameter formal deformation \((A[[t]],\omega_{t})\) of the anti-pre-Lie algebra \((A,\omega)\)._
**Definition 4.4**.: _Let \(\omega_{t}^{\prime}=\omega+\sum_{i=1}^{+\infty}\omega_{i}^{\prime}t^{i}\) and \(\omega_{t}=\omega+\sum_{i=1}^{+\infty}\omega_{i}t^{i}\) be two \(1\)-parameter formal deformations of an anti-pre-Lie algebra \((A,\omega)\). A_ **formal isomorphism** _from \((A[[t]],\omega_{t}^{\prime})\) to \((A[[t]],\omega_{t})\) is a power series \(\Phi_{t}=\sum_{i=0}^{+\infty}\varphi_{i}t^{i}\), where \(\varphi_{i}:A\longrightarrow A\) are linear maps with \(\varphi_{0}=\mathrm{Id}\), such that_
\[\Phi_{t}\circ\omega_{t}^{\prime}=\omega_{t}\circ(\Phi_{t}\otimes\Phi_{t}).\]
_Two \(1\)-parameter formal deformations \((A[[t]],\omega_{t}^{\prime})\) and \((A[[t]],\omega_{t})\) are said to be_ **equivalent** _if there exists a formal isomorphism \(\Phi_{t}=\sum_{i=0}^{+\infty}\varphi_{i}t^{i}\) from \((A[[t]],\omega_{t}^{\prime})\) to \((A[[t]],\omega_{t})\)._
**Theorem 4.5**.: _Let \((A,\omega)\) be an anti-pre-Lie algebra. If two \(1\)-parameter formal deformations \(\omega^{\prime}_{t}=\omega+\sum_{i=1}^{+\infty}\omega^{\prime}_{i}t^{i}\) and \(\omega_{t}=\omega+\sum_{i=1}^{+\infty}\omega_{i}t^{i}\) are equivalent, then the infinitesimals \(\omega^{\prime}_{1}\) and \(\omega_{1}\) are in the same cohomology class of \(H^{2}(A;A)\)._
Proof.: Let \(\omega^{\prime}_{t}\) and \(\omega_{t}\) be two \(1\)-parameter formal deformations. By Proposition 4.2, we have \(\omega^{\prime}_{1},\omega_{1}\in Z^{2}(A;A)\). Let \(\Phi_{t}=\sum_{i=0}^{+\infty}\varphi_{i}t^{i}\) be the formal isomorphism. Then for all \(x,y\in A\), we have
\[\omega^{\prime}_{t}(x,y) = \Phi_{t}^{-1}\circ\omega_{t}(\Phi_{t}(x),\Phi_{t}(y))\] \[= (\mathrm{Id}-\varphi_{1}t+\dots)\omega_{t}(x+\varphi_{1}(x)t+ \dots,y+\varphi_{1}(y)t+\dots)\] \[= (\mathrm{Id}-\varphi_{1}t+\dots)\Big{(}x\cdot y+\big{(}x\cdot \varphi_{1}(y)+\varphi_{1}(x)\cdot y+\omega_{1}(x,y)\big{)}t+\dots\Big{)}\] \[= x\cdot y+\Big{(}x\cdot\varphi_{1}(y)+\varphi_{1}(x)\cdot y+ \omega_{1}(x,y)-\varphi_{1}(x\cdot y)\Big{)}t+\dots.\]
Thus, we have
\[\omega^{\prime}_{1}(x,y)-\omega_{1}(x,y) = x\cdot\varphi_{1}(y)+\varphi_{1}(x)\cdot y-\varphi_{1}(x\cdot y)\] \[= \mathrm{d}^{1}\varphi_{1}(x,y),\]
which implies that \(\omega^{\prime}_{1}-\omega_{1}=\mathrm{d}^{1}\varphi_{1}\).
Thus, we have \(\omega^{\prime}_{1}-\omega_{1}\in B^{2}(A;A)\). This finishes the proof.
**Definition 4.6**.: _A \(1\)-parameter formal deformation \((A[[t]],\omega_{t})\) of an anti-pre-Lie algebra \((A,\omega)\) is said to be_ **trivial** _if it is equivalent to \((A,\omega)\), i.e. there exists \(\Phi_{t}=\sum_{i=0}^{+\infty}\varphi_{i}t^{i}\), where \(\varphi_{i}:A\longrightarrow A\) are linear maps with \(\varphi_{0}=\mathrm{Id}\), such that_
\[\Phi_{t}\circ\omega_{t}=\omega\circ(\Phi_{t}\otimes\Phi_{t}).\]
**Definition 4.7**.: _Let \((A,\omega)\) be an anti-pre-Lie algebra. If all \(1\)-parameter formal deformations are trivial, then \((A,\omega)\) is called_ **rigid**_._
**Theorem 4.8**.: _Let \((A,\omega)\) be an anti-pre-Lie algebra. If \(H^{2}(A;A)=0\), then \((A,\omega)\) is rigid._
Proof.: Let \(\omega_{t}=\omega+\sum_{i=1}^{+\infty}\omega_{i}t^{i}\) be a \(1\)-parameter formal deformation and assume that \(n\geq 1\) is the minimal number such that \(\omega_{n}\) is not zero. By (22), (24) and the minimality of \(n\), we have \(\mathrm{d}_{1}^{2}\omega_{n}=\mathrm{d}_{2}^{2}\omega_{n}=0\), i.e., \(\omega_{n}\in Z^{2}(A;A)\); since \(H^{2}(A;A)=0\), it follows that \(\omega_{n}\in B^{2}(A;A)\). Thus, there exists \(\varphi_{n}\in C^{1}(A;A)\) such that \(\omega_{n}=\mathrm{d}^{1}(-\varphi_{n})\). Let \(\Phi_{t}=\mathrm{Id}+\varphi_{n}t^{n}\) and define a new formal deformation \(\omega^{\prime}_{t}\) by \(\omega^{\prime}_{t}(x,y)=\Phi^{-1}_{t}\circ\omega_{t}(\Phi_{t}(x),\Phi_{t}(y))\). Then \(\omega^{\prime}_{t}\) and \(\omega_{t}\) are equivalent. By straightforward computation, for all \(x,y\in A\), we have
\[\omega^{\prime}_{t}(x,y) = \Phi^{-1}_{t}\circ\omega_{t}(\Phi_{t}(x),\Phi_{t}(y))\] \[= (\mathrm{Id}-\varphi_{n}t^{n}+\dots)\omega_{t}(x+\varphi_{n}(x)t^ {n},y+\varphi_{n}(y)t^{n})\] \[= (\mathrm{Id}-\varphi_{n}t^{n}+\dots)\Big{(}x\cdot y+\big{(}x\cdot \varphi_{n}(y)+\varphi_{n}(x)\cdot y+\omega_{n}(x,y)\big{)}t^{n}+\dots\Big{)}\] \[= x\cdot y+\Big{(}x\cdot\varphi_{n}(y)+\varphi_{n}(x)\cdot y+ \omega_{n}(x,y)-\varphi_{n}(x\cdot y)\Big{)}t^{n}+\dots.\]
Thus, we have \(\omega^{\prime}_{1}=\omega^{\prime}_{2}=\dots=\omega^{\prime}_{n-1}=0\). Moreover, we have
\[\omega^{\prime}_{n}(x,y) = x\cdot\varphi_{n}(y)+\varphi_{n}(x)\cdot y+\omega_{n}(x,y)-\varphi _{n}(x\cdot y)\] \[= \mathrm{d}^{1}\varphi_{n}(x,y)+\omega_{n}(x,y)\] \[= 0.\]
Repeating this process, we obtain that \((A[[t]],\omega_{t})\) is equivalent to \((A,\omega)\). The proof is finished.
## 5. Abelian extensions of anti-pre-Lie algebras
In this section, we study abelian extensions of anti-pre-Lie algebras using the cohomological approach. We show that abelian extensions are classified by the second cohomology group \(H^{2}(A;V)\).
**Definition 5.1**.: _Let \((A,\cdot)\) and \((V,\cdot_{V})\) be two anti-pre-Lie algebras. An_ **extension** _of \((A,\cdot)\) by \((V,\cdot_{V})\) is a short exact sequence of anti-pre-Lie algebras_
\[0\longrightarrow V\longrightarrow\hat{A}\stackrel{p}{\longrightarrow}A\longrightarrow 0,\]
_where \((\hat{A},\cdot_{\hat{A}})\) is an anti-pre-Lie algebra._
_It is called an_ **abelian extension** _if \((V,\cdot_{V})\) is an abelian anti-pre-Lie algebra, i.e. for all \(u,v\in V,u\cdot_{V}v=0\)._
**Definition 5.2**.: _A_ **section** _of an extension \((\hat{A},\cdot_{\hat{A}})\) of an anti-pre-Lie algebra \((A,\cdot)\) by \((V,\cdot_{V})\) is a linear map \(s:A\longrightarrow\hat{A}\) such that \(p\circ s=\mathrm{Id}_{A}\)._
Let \((\hat{A},\cdot_{\hat{A}})\) be an abelian extension of an anti-pre-Lie algebra \((A,\cdot)\) by \(V\) and \(s:A\longrightarrow\hat{A}\) a section. For all \(x,y\in A\), define linear maps \(\theta:A\otimes A\longrightarrow V\) by
\[\theta(x,y)=s(x)\cdot_{\hat{A}}s(y)-s(x\cdot y).\]
For all \(x\in A,u\in V\), define \(\rho,\mu:A\longrightarrow\mathrm{gl}(V)\) respectively by
\[\rho(x)(u) = s(x)\cdot_{\hat{A}}u,\] \[\mu(x)(u) = u\cdot_{\hat{A}}s(x).\]
Obviously, \(\hat{A}\) is isomorphic to \(A\oplus V\) as a vector space. Transferring the anti-pre-Lie algebra structure on \(\hat{A}\) to \(A\oplus V\), we obtain an anti-pre-Lie algebra \((A\oplus V,\diamond)\), where \(\diamond\) is given by
\[(x+u)\diamond(y+v)=x\cdot y+\theta(x,y)+\rho(x)(v)+\mu(y)(u),\quad\forall\ x,y \in A,u,v\in V. \tag{25}\]
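In particular, (25) specializes to
\[x\diamond y=x\cdot y+\theta(x,y),\quad x\diamond v=\rho(x)(v),\quad u\diamond y=\mu(y)(u),\quad u\diamond v=0,\quad\forall\ x,y\in A,u,v\in V,\]
and these special cases are used repeatedly in the computations below.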
**Lemma 5.3**.: _With the above notations, \((V,\rho,\mu)\) is a representation of the anti-pre-Lie algebra \((A,\cdot)\)._
Proof.: For all \(x,y\in A\), \(u\in V\), by (1), we have
\[0 = x\diamond(y\diamond u)-y\diamond(x\diamond u)-(y\diamond x) \diamond u+(x\diamond y)\diamond u\] \[= x\diamond\rho(y)(u)-y\diamond\rho(x)(u)-(y\cdot x+\theta(y,x)) \diamond u+(x\cdot y+\theta(x,y))\diamond u\] \[= \rho(x)\rho(y)(u)-\rho(y)\rho(x)(u)-\rho([y,x])(u),\]
and
\[0 = u\diamond(x\diamond y)-x\diamond(u\diamond y)-(x\diamond u) \diamond y+(u\diamond x)\diamond y\] \[= u\diamond(x\cdot y+\theta(x,y))-x\diamond\mu(y)(u)-\rho(x)(u) \diamond y+\mu(x)(u)\diamond y\] \[= \mu(x\cdot y)(u)-\rho(x)\mu(y)(u)-\mu(y)\rho(x)(u)+\mu(y)\mu(x)(u),\]
which implies that
\[\rho(x)\circ\rho(y)-\rho(y)\circ\rho(x) = \rho([y,x]),\] \[\mu(x\cdot y)-\rho(x)\circ\mu(y) = \mu(y)\circ\rho(x)-\mu(y)\circ\mu(x).\]
For all \(x,y\in A\), \(u\in V\), by (2), we have
\[0 = (x\diamond y-y\diamond x)\diamond u+(y\diamond u-u\diamond y)\diamond x+(u\diamond x-x\diamond u)\diamond y\] \[= ([x,y]+\theta(x,y)-\theta(y,x))\diamond u+(\rho(y)(u)-\mu(y)(u))\diamond x+(\mu(x)(u)-\rho(x)(u))\diamond y\] \[= \rho([x,y])(u)+\mu(x)\rho(y)(u)-\mu(x)\mu(y)(u)+\mu(y)\mu(x)(u)-\mu(y)\rho(x)(u),\]
which implies that
\[\mu(y)\circ\mu(x)-\mu(x)\circ\mu(y)+\rho([x,y])=\mu(y)\circ\rho(x)-\mu(x)\circ\rho(y).\]
Thus, \((V,\rho,\mu)\) is a representation of the anti-pre-Lie algebra \((A,\cdot)\). The proof is finished.
**Theorem 5.4**.: _With the above notations, \(\theta\) is a \(2\)-cocycle of the anti-pre-Lie algebra \((A,\cdot)\) with coefficients in the representation \((V,\rho,\mu)\)._
Proof.: For all \(x,y,z\in A\), by (1) and (25), we have
\[0 = x\diamond(y\diamond z)-y\diamond(x\diamond z)-(y\diamond x)\diamond z+(x\diamond y)\diamond z\] \[= \rho(x)\theta(y,z)-\rho(y)\theta(x,z)-\mu(z)\theta(y,x)+\mu(z)\theta(x,y)-\theta(y,x\cdot z)+\theta(x,y\cdot z)+\theta(x\cdot y,z)-\theta(y\cdot x,z)\] \[= \mathrm{d}_{1}^{2}\theta(x,y,z),\]
and by (2) and (25), we have
\[0 = (x\diamond y-y\diamond x)\diamond z+(y\diamond z-z\diamond y)\diamond x+(z\diamond x-x\diamond z)\diamond y\]
\[= (x\cdot y+\theta(x,y)-y\cdot x-\theta(y,x))\diamond z+(y\cdot z+ \theta(y,z)-z\cdot y-\theta(z,y))\diamond x\] \[+(z\cdot x+\theta(z,x)-x\cdot z-\theta(x,z))\diamond y\] \[= \theta(x\cdot y,z)+\mu(z)\theta(x,y)-\theta(y\cdot x,z)-\mu(z) \theta(y,x)+\theta(y\cdot z,x)+\mu(x)\theta(y,z)-\theta(z\cdot y,x)\] \[-\mu(x)\theta(z,y)+\theta(z\cdot x,y)+\mu(y)\theta(z,x)-\theta(x \cdot z,y)-\mu(y)\theta(x,z)\] \[= \mathrm{d}_{2}^{2}\theta(x,y,z).\]
Thus, \(\theta\) is a \(2\)-cocycle of the anti-pre-Lie algebra \((A,\cdot)\) with coefficients in the representation \((V,\rho,\mu)\). The proof is finished.
**Proposition 5.5**.: _Let \((\hat{A},\cdot_{\hat{A}})\) be an abelian extension of an anti-pre-Lie algebra \((A,\cdot)\) by \(V\). Then two different sections give rise to the same representation of \((A,\cdot)\)._
Proof.: Choosing two different sections \(s_{1},s_{2}:A\longrightarrow\hat{A}\), by Lemma 5.3, we obtain two representations \((V,\rho_{1},\mu_{1})\) and \((V,\rho_{2},\mu_{2})\). Define \(\varphi:A\longrightarrow V\) by \(\varphi(x)=s_{1}(x)-s_{2}(x)\); note that \(\varphi(x)\in V\) since \(p\circ s_{1}=p\circ s_{2}=\operatorname{Id}_{A}\). Then for all \(x\in A,u\in V\), since the extension is abelian, we have
\[\rho_{1}(x)(u)-\rho_{2}(x)(u) = s_{1}(x)\cdot_{\hat{A}}u-s_{2}(x)\cdot_{\hat{A}}u\] \[= (\varphi(x)+s_{2}(x))\cdot_{\hat{A}}u-s_{2}(x)\cdot_{\hat{A}}u\] \[= \varphi(x)\cdot_{\hat{A}}u\] \[= 0,\]
which implies that \(\rho_{1}=\rho_{2}\). Similarly, we have \(\mu_{1}=\mu_{2}\). This finishes the proof.
**Definition 5.6**.: _Let \((\hat{A}_{1},\cdot_{\hat{A}_{1}})\) and \((\hat{A}_{2},\cdot_{\hat{A}_{2}})\) be two abelian extensions of an anti-pre-Lie algebra \((A,\cdot)\) by \(V\). They are said to be_ **isomorphic** _if there exists an anti-pre-Lie algebra isomorphism \(\zeta:(\hat{A}_{1},\cdot_{\hat{A}_{1}})\longrightarrow(\hat{A}_{2},\cdot_{\hat{A}_{2}})\) such that the following diagram is commutative:_
\[\begin{CD}0@>>>V@>>>\hat{A}_{1}@>{p_{1}}>>A@>>>0\\ @.@|@VV{\zeta}V@|@.\\ 0@>>>V@>>>\hat{A}_{2}@>{p_{2}}>>A@>>>0\end{CD}\]
**Lemma 5.7**.: _Let \((\hat{A}_{1},\cdot_{\hat{A}_{1}})\) and \((\hat{A}_{2},\cdot_{\hat{A}_{2}})\) be two isomorphic abelian extensions of an anti-pre-Lie algebra \((A,\cdot)\) by \(V\). Then they give rise to the same representation of \((A,\cdot)\)._
Proof.: Let \(s_{1}:A\longrightarrow\hat{A}_{1}\) and \(s_{2}:A\longrightarrow\hat{A}_{2}\) be two sections of \((\hat{A}_{1},\cdot_{\hat{A}_{1}})\) and \((\hat{A}_{2},\cdot_{\hat{A}_{2}})\) respectively. By Lemma 5.3, we obtain that \((V,\rho_{1},\mu_{1})\) and \((V,\rho_{2},\mu_{2})\) are their representations respectively. Define \(s_{1}^{\prime}:A\longrightarrow\hat{A}_{1}\) by \(s_{1}^{\prime}=\zeta^{-1}\circ s_{2}\). Since \(\zeta:(\hat{A}_{1},\cdot_{\hat{A}_{1}})\longrightarrow(\hat{A}_{2},\cdot_{\hat{A}_{2}})\) is an anti-pre-Lie algebra isomorphism satisfying the commutative diagram in Definition 5.6, by \(p_{2}\circ\zeta=p_{1}\), we have
\[p_{1}\circ s_{1}^{\prime}=p_{2}\circ\zeta\circ\zeta^{-1}\circ s_{2}=\operatorname {Id}_{\operatorname{A}}.\]
Thus, we obtain that \(s_{1}^{\prime}\) is a section of \((\hat{A}_{1},\cdot_{\hat{A}_{1}})\). For all \(x\in A,u\in V\), we have
\[\rho_{1}(x)(u)=s_{1}^{\prime}(x)\cdot_{\hat{A}_{1}}u=(\zeta^{-1}\circ s_{2})(x )\cdot_{\hat{A}_{1}}u=\zeta^{-1}(s_{2}(x)\cdot_{\hat{A}_{2}}u)=\rho_{2}(x)(u),\]
which implies that \(\rho_{1}=\rho_{2}\). Similarly, we have \(\mu_{1}=\mu_{2}\). This finishes the proof.
In the sequel, we fix a representation \((V,\rho,\mu)\) of an anti-pre-Lie algebra \((A,\cdot)\) and consider abelian extensions that induce the given representation.
**Theorem 5.8**.: _Abelian extensions of an anti-pre-Lie algebra \((A,\cdot)\) by \(V\) are classified by \(H^{2}(A;V)\)._
Proof.: Let \((\hat{A},\cdot_{\hat{A}})\) be an abelian extension of an anti-pre-Lie algebra \((A,\cdot)\) by \(V\). Choosing a section \(s:A\longrightarrow\hat{A}\), by Theorem 5.4, we obtain that \(\theta\in Z^{2}(A;V)\). Now we show that the cohomological class of \(\theta\) does not depend on the choice of sections. In fact, let \(s_{1}\) and \(s_{2}\) be two different sections. Define \(\varphi:A\longrightarrow V\) by \(\varphi(x)=s_{1}(x)-s_{2}(x)\). Then for all \(x,y\in A\), we have
\[\theta_{1}(x,y) = s_{1}(x)\cdot_{\hat{A}}s_{1}(y)-s_{1}(x\cdot y)\] \[= \big{(}s_{2}(x)+\varphi(x)\big{)}\cdot_{\hat{A}}\big{(}s_{2}(y)+ \varphi(y)\big{)}-s_{2}(x\cdot y)-\varphi(x\cdot y)\] \[= s_{2}(x)\cdot_{\hat{A}}s_{2}(y)+\rho(x)\varphi(y)+\mu(y)\varphi( x)-s_{2}(x\cdot y)-\varphi(x\cdot y)\] \[= \theta_{2}(x,y)+\mathrm{d}^{1}\varphi(x,y),\]
which implies that \(\theta_{1}-\theta_{2}=\mathrm{d}^{1}\varphi\). Therefore, we obtain that \(\theta_{1}-\theta_{2}\in B^{2}(A;V)\), so \(\theta_{1}\) and \(\theta_{2}\) are in the same cohomology class.
Now we prove that isomorphic abelian extensions give rise to the same element in \(H^{2}(A;V)\). Assume that \((\hat{A}_{1},\cdot_{\hat{A}_{1}})\) and \((\hat{A}_{2},\cdot_{\hat{A}_{2}})\) are two isomorphic abelian extensions of an anti-pre-Lie algebra \((A,\cdot)\) by \(V\), and \(\zeta:(\hat{A}_{1},\cdot_{\hat{A}_{1}})\longrightarrow(\hat{A}_{2},\cdot_{ \hat{A}_{2}})\) is an anti-pre-Lie algebra isomorphism satisfying the commutative diagram in Definition 5.6. Assume that \(s_{1}:A\longrightarrow\hat{A}_{1}\) is a section of \(\hat{A}_{1}\). By \(p_{2}\circ\zeta=p_{1}\), we have
\[p_{2}\circ(\zeta\circ s_{1})=p_{1}\circ s_{1}=\operatorname{Id}_{\operatorname{A}}.\]
Thus, we obtain that \(\zeta\circ s_{1}\) is a section of \(\hat{A}_{2}\). Define \(s_{2}=\zeta\circ s_{1}\). Since \(\zeta\) is an isomorphism of anti-pre-Lie algebras and \(\zeta\mid_{V}=\operatorname{Id}_{\operatorname{V}}\), for all \(x,y\in A\), we have
\[\theta_{2}(x,y) = s_{2}(x)\cdot_{\hat{A}_{2}}s_{2}(y)-s_{2}(x\cdot y)\] \[= (\zeta\circ s_{1})(x)\cdot_{\hat{A}_{2}}(\zeta\circ s_{1})(y)-( \zeta\circ s_{1})(x\cdot y)\] \[= \zeta(s_{1}(x)\cdot_{\hat{A}_{1}}s_{1}(y)-s_{1}(x\cdot y))\]
\[= \theta_{1}(x,y).\]
Thus, isomorphic abelian extensions give rise to the same element in \(H^{2}(A;V)\).
Conversely, given two 2-cocycles \(\theta_{1}\) and \(\theta_{2}\), by (25), we can construct two abelian extensions \((A\oplus V,\diamond_{1})\) and \((A\oplus V,\diamond_{2})\). If \(\theta_{1}\) and \(\theta_{2}\) represent the same class in \(H^{2}(A;V)\), then there exists \(\varphi:A\longrightarrow V\) such that \(\theta_{1}=\theta_{2}+\mathrm{d}^{1}\varphi\). We define \(\zeta:A\oplus V\longrightarrow A\oplus V\) by
\[\zeta(x+u)=x+u+\varphi(x),\quad\forall\ x\in A,u\in V.\]
For all \(x,y\in A,u,v\in V\), by \(\theta_{1}=\theta_{2}+\mathrm{d}^{1}\varphi\), we have
\[\zeta((x+u)\diamond_{1}(y+v))-\zeta(x+u)\diamond_{2}\zeta(y+v)\] \[= \zeta(x\cdot y+\theta_{1}(x,y)+\rho(x)(v)+\mu(y)(u))-(x+u+ \varphi(x))\diamond_{2}(y+v+\varphi(y))\] \[= \theta_{1}(x,y)+\varphi(x\cdot y)-\theta_{2}(x,y)-\rho(x)\varphi (y)-\mu(y)\varphi(x)\] \[= \theta_{1}(x,y)-\theta_{2}(x,y)-\mathrm{d}^{1}\varphi(x,y)\] \[= 0,\]
which implies that \(\zeta\) is an anti-pre-Lie algebra isomorphism from \((A\oplus V,\diamond_{1})\) to \((A\oplus V,\diamond_{2})\). Moreover, it is obvious that the diagram in Definition 5.6 is commutative. This finishes the proof.
**Acknowledgement:** This work is supported by NSF of Jilin Province (No. YDZJ202201ZYTS589), NNSF of China (Nos. 12271085, 12071405) and the Fundamental Research Funds for the Central Universities.
|
2309.04275 | Symmetries of exotic spheres via complex and quaternionic Mahowald
invariants | We deduce the existence of smooth $U(1)$- and $Sp(1)$-actions on certain
exotic spheres using complex and quaternionic analogues of the Mahowald
(root) invariant. In particular, we prove that the complex (respectively,
quaternionic) Mahowald invariant takes an element of $\pi_k^s$ represented by a
homotopy sphere $\Sigma^k$ to an element of $\pi_{k+\ell}$ represented by
another homotopy sphere $\Sigma^{k+\ell}$ equipped with a smooth $U(1)$-
(respectively, $Sp(1)$-) action with fixed points the original homotopy sphere
$\Sigma^k\subset \Sigma^{k+\ell}$. This work is motivated by results of Stolz
on the classical Mahowald invariant and smooth $C_2$-actions on homotopy
spheres. | Boris Botvinnik, J. D. Quigley | 2023-09-08T11:53:00Z | http://arxiv.org/abs/2309.04275v1 | # Symmetries of exotic spheres via complex and quaternionic Mahowald invariants
###### Abstract.
We deduce the existence of smooth \(U(1)\)- and \(Sp(1)\)-actions on certain exotic spheres using complex and quaternionic analogues of the Mahowald (root) invariant. In particular, we prove that the complex (respectively, quaternionic) Mahowald invariant takes an element of \(\pi_{k}^{\ell}\) represented by a homotopy sphere \(\Sigma^{k}\) to an element of \(\pi_{k+\ell}\) represented by another homotopy sphere \(\Sigma^{k+\ell}\) equipped with a smooth \(U(1)\)- (respectively, \(Sp(1)\)-) action with fixed points the original homotopy sphere \(\Sigma^{k}\subset\Sigma^{k+\ell}\). This work is motivated by results of Stolz on the classical Mahowald invariant and smooth \(C_{2}\)-actions on homotopy spheres.
###### Contents
* 1 Introduction
* 2 Stunted real, complex, and quaternionic projective spectra
* 3 Mahowald invariants and equivariant bordism
* 4 The main theorem
* 5 Applications
* 6 Future Directions
## 1. Introduction
The \(n\)-sphere \(S^{n}\) equipped with its standard smooth structure is distinguished among smooth \(n\)-manifolds. From the perspective of smooth transformation groups, \(S^{n}\subseteq\mathbb{R}^{n+1}\) admits a smooth nontrivial \(SO(n+1)\)-action, making it the most symmetric of all smooth simply-connected \(n\)-manifolds. From the perspective of Riemannian geometry, the \(n\)-sphere of radius \(R\) is particularly nice, with constant sectional curvature \(1/R^{2}\), Ricci curvature \((n-1)/R^{2}\), and scalar curvature \(n(n-1)/R^{2}\).
On the other hand, decades of efforts in geometric topology and stable homotopy theory [14, 15, 16, 17, 18] have shown that exotic spheres exist in most dimensions, but relatively little is known about their smooth transformation groups and curvature.
On the transformation group side, upper bounds on their degrees of symmetry (the maximal dimension of a compact Lie group acting smoothly and effectively on \(M\)) have appeared in [16, 17, 18, 19]. In cases where exotic spheres can be described explicitly, e.g., the Brieskorn representation of the Kervaire spheres, it is possible to define actions of high-dimensional compact Lie groups explicitly. However, for an arbitrary exotic sphere, lower bounds are harder to come by, cf. [11, 12, 13, 14, 15, 16]. Schultz highlighted the following question in 1985:
**Question** (Schultz, [15]).: Let \(\Sigma^{n}\) be an exotic \(n\)-sphere, \(n\geq 5\). Does \(\Sigma^{n}\) support a nontrivial smooth \(U(1)\)-action?
In [14], the second author applied a result of Schultz [15] to provide some positive answers to Schultz's question using stable homotopy theory. In this work, we provide a new homotopy theoretic way of detecting nontrivial smooth \(U(1)\)- and \(Sp(1)\)-actions on exotic spheres.
In Definition 3.1, we define real, complex, and quaternionic Mahowald invariants
\[M_{\mathbb{R}},M_{\mathbb{C}},M_{\mathbb{H}}:\pi_{*}^{s}\hookrightarrow\pi_{*}^ {s}\]
which carry elements in the stable stems to cosets in the stable stems. The construction is based on relevant analogues of Lin's theorem [13] (see Theorem 2.2). If \(\alpha\in\pi_{k}^{s}\) and \(M_{\mathbb{R}}(\alpha)\subseteq\pi_{k+n}^{s}\), we define the real Mahowald filtration of \(\alpha\) to be \(n\); complex and quaternionic Mahowald filtrations are defined similarly.
Recall that the Pontryagin-Thom isomorphism
\[\mathcal{P}:\Omega_{n}^{\mathrm{fr}}\cong\pi_{n}^{s}\]
identifies the \(n\)-th framed bordism group of a point with the \(n\)-th stable homotopy group of the sphere, so it makes sense to talk about the Mahowald invariant of a framed manifold.
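For example, the circle \(S^{1}\) equipped with its Lie group framing represents \(\eta\in\pi_{1}^{s}\) under \(\mathcal{P}\).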
In [11], Stolz proved that, under certain hypotheses on \(k\) and \(n\), if \([\Sigma^{k+n},\bar{b}]\in M_{\mathbb{R}}([\Sigma^{k},\bar{a}])\), then there exists a smooth \(C_{2}\)-action on \(\Sigma^{k+n}\#\Sigma^{\prime}\), for some homotopy sphere \(\Sigma^{\prime}\) which bounds a parallelizable manifold, with fixed points \(\Sigma^{k}\). Our main theorem gives complex and quaternionic analogues of Stolz's result: it relates complex and quaternionic Mahowald invariants to interesting smooth \(U(1)\)- and \(Sp(1)\)-actions on homotopy spheres.
**Theorem A** (Theorem 4.1 and Remark 4.3).: The following statements hold:
1. Let \((\Sigma^{k},\Sigma^{k+2n})\) be a pair of homotopy spheres. Suppose there exist framings \(\bar{a}\) and \(\bar{b}\) for \(\Sigma^{k}\) and \(\Sigma^{k+2n}\), respectively, such that 1. The codimension \(2n\) is bounded above by the complex Mahowald filtration \(m\) of \([\Sigma^{k},\bar{a}]\), and 2. we have \[[\Sigma^{k+2n},\bar{b}]\in\begin{cases}M_{\mathbb{C}}([\Sigma^{k},\bar{a}])& \text{if }2n=m,\\ 0&\text{if }2n<m.\end{cases}\] Then there exists a smooth \(U(1)\)-action on \(\Sigma^{k+2n}\#\Sigma^{\prime}\), for some homotopy \((k+2n)\)-sphere \(\Sigma^{\prime}\) which bounds a parallelizable manifold, with fixed points \(\Sigma^{k}\).
2. Let \((\Sigma^{k},\Sigma^{k+4n})\) be a pair of homotopy spheres. Suppose there exist framings \(\bar{a}\) and \(\bar{b}\) for \(\Sigma^{k}\) and \(\Sigma^{k+4n}\), respectively, such that 1. The codimension \(4n\) is bounded above by the quaternionic Mahowald filtration \(m\) of \([\Sigma^{k},\bar{a}]\), and 2. we have \[[\Sigma^{k+4n},\bar{b}]\in\begin{cases}M_{\mathbb{H}}([\Sigma^{k},\bar{a}])& \text{if }4n=m,\\ 0&\text{if }4n<m.\end{cases}\] Then there exists a smooth \(Sp(1)\)-action on \(\Sigma^{k+4n}\#\Sigma^{\prime}\), for some homotopy \((k+4n)\)-sphere \(\Sigma^{\prime}\) which bounds a parallelizable manifold, with fixed points \(\Sigma^{k}\).
We apply Theorem A to deduce the existence of nontrivial \(U(1)\)- and \(Sp(1)\)-actions on some homotopy spheres in Section 5. By iterating the Mahowald invariant, we are able to produce actions not only of \(C_{2}\), \(U(1)\), and \(Sp(1)\), but also products of these groups. For instance, we prove the following:
**Theorem B** (Corollary 5.3(3)).: The homotopy spheres \(S^{7}\) and \(\Sigma^{21}\) corresponding to \(\sigma\) and \(\sigma^{3}\) admit nontrivial \((U(1)\times Sp(1))\)- and \(U(1)^{\times 2}\)-actions with fixed points the homotopy spheres \(S^{1}\) and \(S^{3}\) corresponding to \(\eta\) and \(\eta^{3}\), respectively.
In Section 5, we deduce Theorem B by explicitly calculating certain complex and quaternionic Mahowald invariants using the Adams spectral sequence. It turns out this is unnecessary: if the complex or quaternionic Mahowald invariant of \(\alpha\) does not contain \(\alpha\), then it agrees with the real Mahowald invariant, up to adding an element in the image of the classical \(J\)-homomorphism:
**Theorem C** (Theorem 3.2).: Let \(\alpha\neq 1\in\pi_{n}^{s}\). Then either \(M_{\mathbb{C}}(\alpha)=M_{\mathbb{R}}(\alpha)\) in \(\operatorname{coker}(J)\) or \(M_{\mathbb{C}}(\alpha)=\alpha\). Similarly, either \(M_{\mathbb{H}}(\alpha)=M_{\mathbb{R}}(\alpha)\) in \(\operatorname{coker}(J)\), or \(M_{\mathbb{H}}(\alpha)=\alpha\).
**Remark 1.1**.: On one hand, Theorem C allows us to easily deduce complex and quaternionic Mahowald invariants from known real Mahowald invariant computations. On the other hand, the computations of Section 5 suggest that the complex and quaternionic Mahowald invariants can be computed more efficiently than their real counterpart. We discuss this further in Section 6.1.
Our results also relate to the long-standing problem of understanding the curvature of exotic spheres. The most well-understood form of curvature in this setting is scalar curvature: Stolz [10] showed that a simply connected, closed, spin manifold of dimension \(n\geq 5\) admits a metric with positive scalar curvature if and only if its \(\alpha\)-invariant is trivial. Since every homotopy sphere of dimension \(n\geq 3\) is spin, Stolz's result completely determines which homotopy spheres admit Riemannian metrics of positive scalar curvature.
It turns out that nontrivial smooth group actions are closely connected to scalar curvature. In [10], Lawson-Yau proved that if a compact manifold admits a smooth, effective action by any compact, connected, nonabelian Lie group (that is, if it admits a nontrivial \(Sp(1)\)-action), then it admits a Riemannian metric of positive scalar curvature. Combined with Theorem A, we obtain:
**Corollary 1.2**.: _A homotopy sphere \(\Sigma^{k+4n}\) representing a nontrivial quaternionic Mahowald invariant admits a Riemannian metric of positive scalar curvature._
### Linear outline
In Section 2, we discuss real, complex, and quaternionic stunted projective spectra. We prove a quaternionic analogue of Lin's Theorem [12] following an argument of Ravenel [14] which allows us to define quaternionic Mahowald invariants in the sequel.
In Section 3, we define real, complex, and quaternionic Mahowald invariants in terms of the stable homotopy groups of spheres and stunted projective spectra. We then adapt the work of Stolz in the real case [10, Sec. 1] to express these Mahowald invariants in terms of equivariant bordism groups.
In Section 4, we prove Theorem A by adapting Stolz's proof from the real case [10, Sec. 3] to the complex and quaternionic cases. As a consequence of the proof, we also quickly obtain a proof of Theorem C.
In Section 5, we compute some low-dimensional complex and quaternionic Mahowald invariants. We feed these into Theorem A to deduce the existence of interesting \(U(1)\)- and \(Sp(1)\)-actions on some homotopy spheres.
In Section 6, we discuss several questions and problems motivated by this work.
### Conventions
1. We work integrally in Section 2 and in the \(p\)-complete setting in every section after that.
2. If \(X\) is a spectrum, we write \(\widehat{X}=\lim_{i}X/(p^{i})\) for the \(p\)-completion of \(X\) and \(\tilde{X}=\lim_{i}X/(p^{i})\) for the \(p\)-adic cocompletion of \(X\).
3. We write \(\mathbb{K}\) for \(\mathbb{R}\), \(\mathbb{C}\), and \(\mathbb{H}\) and define \(d_{\mathbb{K}}:=\dim_{\mathbb{R}}\mathbb{K}\).
4. We write \(G_{\mathbb{K}}\) for the group of units in \(\mathbb{K}\).
5. We write 'ASS' for the Adams spectral sequence and 'MASS' for the modified Adams spectral sequence.
### Acknowledgments
The authors thank William Balderrama, Dan Dugger, John Greenlees, Doug Ravenel, John Rognes, XiaoLin Danny Shi, Stephan Stolz, and Zhouli Xu for helpful discussions. The first author was partially supported by Simons collaboration grant 708183. The second author was partially supported by NSF grants DMS-2039316 and DMS-2314082, as well as an AMS-Simons Travel Grant. The second author also thanks the Max Planck Institute for providing a wonderful working environment and financial support during the beginning of this project.
## 2. Stunted real, complex, and quaternionic projective spectra
In this section, we discuss the stunted projective spectra needed to define the real, complex, and quaternionic Mahowald invariants. Our main theorem is Theorem 2.2(3), the quaternionic analogue of Lin's Theorem [1], which allows us to define quaternionic Mahowald invariants.
**Definition 2.1**.: For each \(\mathbb{K}\in\{\mathbb{R},\mathbb{C},\mathbb{H}\}\), let \(\gamma_{\mathbb{K}}\) denote the tautological \(\mathbb{K}\)-line bundle over \(\mathbb{K}P^{\infty}\).
For each \(i\in\mathbb{Z}\), the _\(i\)-th stunted \(\mathbb{K}\)-projective spectrum \(\mathbb{K}P^{\infty}_{i}\)_ is the Thom spectrum of the \(i\)-fold Whitney sum of \(\gamma_{\mathbb{K}}\) over \(\mathbb{K}P^{\infty}\),
\[\mathbb{K}P^{\infty}_{i}:=\operatorname{Th}(\mathbb{K}P^{\infty},i\gamma_{ \mathbb{K}}).\]
Let
\[\mathbb{K}P^{\infty}_{-\infty}:=\lim_{i}\mathbb{K}P^{\infty}_{-i}\]
be the inverse limit taken along the maps induced by the inclusions \(i\gamma_{\mathbb{K}}\to(i+1)\gamma_{\mathbb{K}}\).
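For instance, \(\mathbb{K}P^{\infty}_{0}\simeq\Sigma^{\infty}\mathbb{K}P^{\infty}_{+}\), while for \(i\geq 1\) the standard identification \(\operatorname{Th}(\mathbb{K}P^{n},i\gamma_{\mathbb{K}})\cong\mathbb{K}P^{n+i}/\mathbb{K}P^{i-1}\) identifies \(\mathbb{K}P^{\infty}_{i}\) with the suspension spectrum of the stunted projective space \(\mathbb{K}P^{\infty}/\mathbb{K}P^{i-1}\).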
The main theorem we need about these stunted projective spectra is the following.
**Theorem 2.2**.: _For each \(\mathbb{K}\in\{\mathbb{R},\mathbb{C},\mathbb{H}\}\), some desuspension of the \(2\)-complete sphere spectrum is a retract of \(\mathbb{K}P^{\infty}_{-\infty}\). More precisely:_
1. _(Lin,_ _[_1_]__) There is an equivalence of spectra_ \(S^{-1}\simeq\mathbb{R}P^{\infty}_{-\infty}\)_._
2. _(Ravenel,_ _[_1_]__) The spectrum_ \(S^{-2}\) _is a retract of_ \(\mathbb{C}P^{\infty}_{-\infty}\)_._
3. _The spectrum_ \(S^{-4}\) _is a retract of_ \(\mathbb{H}P^{\infty}_{-\infty}\)_._
_Here, we emphasize that the statements hold only after \(2\)-completion._
We will prove Part (3) of Theorem 2.2 by modifying Ravenel's proof of Part (2) from [10].
**Remark 2.3**.: We focus on the \(2\)-primary setting in this paper, but point out the following:
1. The odd-primary analogue of Lin's Theorem is due to Gunawardena [1]; one replaces \(\mathbb{R}P^{\infty}_{-\infty}\) by an appropriate inverse limit of stunted lens spectra.
2. Ravenel's result concerning \(\mathbb{C}P^{\infty}_{-\infty}\) and our result concerning \(\mathbb{H}P^{\infty}_{-\infty}\) hold after completion at _any_ prime. We only make \(2\)-primary computations in this paper, but suggest some odd-primary computations in Section 6.1.
### Proof of Theorem 2.2
Our proof of Theorem 2.2(3) is a fairly straightforward modification of Ravenel's proof of Theorem 2.2(2). To avoid rewriting Ravenel's entire paper with '\(\mathbb{H}\)' in place of '\(\mathbb{C}\)', we will state the quaternionic analogue of each key result from [10, Secs. 1 and 2], adding details and proofs only when they differ substantially from the complex case (or when the proof of the complex case is omitted by Ravenel).
**Convention 2.4**.: _In the remainder of this section only, we will write \(\mathbb{H}P_{i}:=\mathbb{H}P^{\infty}_{i}\). We write \(X^{(j)}\) for the \(j\)-skeleton of \(X\)._
**Lemma 2.5** (Analogue of [10, Lem. 1.4]).: _There is an equivalence_
\[\mathbb{H}P_{i}/\mathbb{H}P^{(4i)}_{i}\simeq\mathbb{H}P_{i+1}.\]
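For instance, taking \(i=0\) recovers the equivalence \(\mathbb{H}P_{1}\simeq\mathbb{H}P_{0}/S^{0}\simeq\Sigma^{\infty}\mathbb{H}P^{\infty}\), where \(S^{0}\subset\mathbb{H}P_{0}\simeq\Sigma^{\infty}\mathbb{H}P^{\infty}_{+}\) is the bottom cell.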
The following lemma can be proven using James periodicity for quaternionic stunted projective spectra [1].
**Lemma 2.6** (Analogue of [12, Lem. 1.6]).: _For \(j>i\), there is an equivalence \(D\mathbb{H}P_{i}^{(4j-4)}\simeq\Sigma^{4}\mathbb{H}P_{-j}^{(-4i-4)}\)._
As mentioned above, the bundle map \(-n\gamma_{\mathbb{H}}\to(-n+1)\gamma_{\mathbb{H}}\) gives rise to an inverse system
\[\mathbb{H}P_{0}\leftarrow\mathbb{H}P_{-1}\leftarrow\mathbb{H}P_{-2} \leftarrow\cdots.\]
Note also that since \(\mathbb{H}P_{0}\simeq\Sigma^{\infty}\mathbb{H}P_{+}^{\infty}\simeq\Sigma^{ \infty}BSp(1)_{+}\), we may define a \(p\)-th power map
\[[p]:\mathbb{H}P_{0}\rightarrow\mathbb{H}P_{0}.\]
We defer the proof of the following result to Section 2.2.
Recall the conventions of Section 1.2 and Convention 2.4.
**Theorem 2.7** (Analogue of [12, Thm. 1.7]).: _For \(i>0\), the composite_
\[\widetilde{\mathbb{H}P_{-i}}\rightarrow\widetilde{\mathbb{H}P_{0}}\xrightarrow {[p]}\widetilde{\mathbb{H}P_{0}}\]
_induces an isomorphism in the cohomotopy group \(\pi^{k}\) for \(k>1-2i\)._
For \(i>0\), we have cofiber sequences
\[\mathbb{H}P_{-k-i}\rightarrow\mathbb{H}P_{-k}\rightarrow\Sigma\mathbb{H}P_{- k-i}^{(-4k-4)} \tag{2.1}\]
by Lemma 2.5, and using Lemma 2.6, the right-hand map dualizes to
\[\Sigma^{3}\mathbb{H}P_{k}^{(4k+4i-4)}\to D\mathbb{H}P_{-k}.\]
Letting \(i\) go to \(\infty\), we get a map
\[g:\Sigma^{3}\mathbb{H}P_{k}\to D\mathbb{H}P_{-k} \tag{2.2}\]
for all integers \(k\).
For \(k\geq 0\), we also have the map
\[D[p]:D\mathbb{H}P_{0}\xrightarrow{D[p]}D\mathbb{H}P_{0}\to D\mathbb{H}P_{-k}, \tag{2.3}\]
where the last map is the dual of the map induced by the map \(-k\gamma_{\mathbb{H}}\to 0\).
For each \(k\geq 0\), we obtain1
Footnote 1: There is a typo in [12] here: \(k\leq 0\) should be \(k\geq 0\).
\[e:\Sigma^{3}\mathbb{H}P_{k}\vee\bigvee_{i=1}^{\infty}\Sigma^{3}\mathbb{H}P_{0 }\to D\mathbb{H}P_{-k}. \tag{2.4}\]
The map on the left-hand summand is \(g\) and the map on the \(i\)-th summand is the composite
\[\Sigma^{3}\mathbb{H}P_{0}\xrightarrow{[p]^{i}}\Sigma^{3}\mathbb{H}P_{0} \rightarrow\Sigma^{3}\mathbb{H}P_{k}\xrightarrow{g}D\mathbb{H}P_{-k}.\]
The assumption \(k\geq 0\) is required since the unlabeled map is induced by the inclusion \(0\to k\gamma_{\mathbb{H}}\).
**Theorem 2.8**.: _For \(k\geq 0\), there is a map_
\[e^{\prime}:\widetilde{D\mathbb{H}P_{-k}}\rightarrow\Sigma^{3}\mathbb{H}P_{k} \vee\prod_{i=1}^{\infty}\Sigma\widetilde{\mathbb{H}P}_{0}\]
_such that_
1. \(e^{\prime}e\) _is the composite of_ \(p\)_-adic completion and the inclusion of a sum into a product,_
2. \(e^{\prime}\) _has fiber_ \(\widehat{S}^{0}\)_, and_
3. \(e^{\prime}\) _is a retraction, so_ \(D\mathbb{H}P_{-k}\simeq\widehat{S}^{0}\vee\Sigma^{3}\widetilde{\mathbb{H}P}_{k}\vee\prod_{i=1}^{\infty}\Sigma\widetilde{\mathbb{H}P}_{0}\)_._
Proof.: Using that \(\pi^{k}(X)\cong\pi_{-k}(DX)\) and dualizing Theorem 2.7, we see that the composite
\[D\widetilde{\mathbb{H}P_{0}}\xrightarrow{D[p]}D\widetilde{\mathbb{H}P_{0}} \to D\widetilde{\mathbb{H}P_{-i}}\]
is a \((4i-2)\)-equivalence. Letting \(i\to\infty\) shows that the induced map
\[D\widetilde{\mathbb{H}P_{0}}\longrightarrow\lim_{i\to\infty}D\widetilde{\mathbb{H}P_{-i}}\]
is an equivalence.
Dualizing (2.1) and letting \(i\to\infty\), we obtain a cofiber sequence
\[\Sigma^{3}\widetilde{\mathbb{H}P_{k}}\xrightarrow{g}D\widetilde{\mathbb{H}P_ {-k}}\to\lim_{i\to\infty}D\widetilde{\mathbb{H}P_{-i}}\]
for \(k\geq 0\). The right-hand spectrum is equivalent to \(D\widetilde{\mathbb{H}P_{0}}\), so the right-hand map is a retraction onto \(D\widetilde{\mathbb{H}P_{0}}\) split by \(D[p]\), and the left-hand map is a splitting for \(g\). Thus we have
\[D\widetilde{\mathbb{H}P_{-k}}\simeq\Sigma^{3}\widetilde{\mathbb{H}P}_{k}\lor D \widetilde{\mathbb{H}P_{0}}\]
for \(k\geq 0\).
If \(k=0\), then we have a cofiber sequence
\[D\mathbb{H}P_{0}\xrightarrow{D[p]}D\mathbb{H}P_{0}\to\Sigma\widetilde{ \mathbb{H}P_{0}}.\]
By induction on \(n\), we obtain a diagram
where the rows are cofiber sequences and the right-hand vertical map collapses the \((n+1)\)st wedge summand. Letting \(n\to\infty\), we get a cofiber sequence
\[\lim_{D[p]}D\widetilde{\mathbb{H}P_{0}}\to D\widetilde{\mathbb{H}P_{0}}\xrightarrow{e^{\prime\prime}}\prod_{i=1}^{\infty}\Sigma\widetilde{\mathbb{H}P_{0}}.\]
The map \(e^{\prime}\) from the theorem statement is the composite
\[e^{\prime}:D\widetilde{\mathbb{H}P_{-k}}\xrightarrow{\simeq}\Sigma^{3}\widetilde{\mathbb{H}P_{k}}\lor D\widetilde{\mathbb{H}P_{0}}\xrightarrow{id\lor e^{\prime\prime}}\Sigma^{3}\widetilde{\mathbb{H}P_{k}}\vee\prod_{i=1}^{\infty}\Sigma\widetilde{\mathbb{H}P_{0}}.\]
Now, part (1) is clear by construction. For part (2), we have
\[\lim_{D[p]}D\widetilde{\mathbb{H}P_{0}}\simeq D\lim_{[p]}\widetilde{\mathbb{H }P_{0}}.\]
But
\[\lim_{[p]}\widetilde{\mathbb{H}P_{1}}\simeq\lim_{\cdot p}\Sigma\widetilde{BSp }(1)\simeq*,\]
so
\[\lim_{[p]}\widetilde{\mathbb{H}P_{0}}\simeq\widetilde{S^{0}}\]
and \(e^{\prime}\) has fiber \(\widehat{S^{0}}\). Since this \(\widehat{S^{0}}\) is dual to the bottom cell of \(\mathbb{H}P_{0}\), which is a retract, \(e^{\prime}\) must be a retraction; this proves part (3).
Turning the triangle (2.1), we have a cofiber sequence
\[\mathbb{H}P_{k-i}^{(4k-4)}\to\mathbb{H}P_{k-i}\to\mathbb{H}P_{k}\]
which dualizes to give
\[D\mathbb{H}P_{k}\to D\mathbb{H}P_{k-i}\to\Sigma^{4}\mathbb{H}P_{-k}^{(4i-4k-4)}.\]
Completing, letting \(i\to\infty\), and identifying middle terms as in the previous proof, we obtain a cofiber sequence
\[D\widetilde{\mathbb{H}P_{k}}\to D\widetilde{\mathbb{H}P_{0}}\to\Sigma^{4} \widehat{\mathbb{H}P_{-k}}.\]
Taking the inverse limit as \(k\to\infty\), the fiber becomes contractible, proving:
**Corollary 2.9**.: _There is an equivalence \(\lim_{k}\widehat{\mathbb{H}P_{-k}}\simeq\Sigma^{-4}D\widetilde{\mathbb{H}P_{ 0}}\)._
### Proof of Theorem 2.7
Recall the _modified Adams spectral sequence_ ([10, Sec. 2]). If
\[Y=Y_{0}\gets Y_{1}\gets Y_{2}\leftarrow\cdots\]
is an Adams resolution of \(Y\) and
\[X=X_{0}\to X_{1}\to X_{2}\to\cdots\]
is a diagram with \(H^{*}(\lim_{i}X_{i})=0\), then there is a homotopy commutative diagram
Taking \(W_{s}=\bigcup_{i+j=s}F(X_{i},Y_{j})\), the MASS for \([X,Y]_{*}\) is the spectral sequence associated with the homotopy exact couple given by the diagram
\[F(X,Y)=W_{0}\gets W_{1}\gets W_{2}\leftarrow\cdots.\]
The key properties of the MASS are summarized in the following:
**Lemma 2.10** ([10, Lems. 2.12 and 2.16]).:
1. _With notation as above, suppose that each map_ \(X_{i}\to X_{i+1}\) _is injective in mod_ \(p\) _cohomology. Then the_ \(E_{2}\)_-term for the MASS for_ \([X,Y]_{*}\) _is given by_ \[E_{2}^{s,t}=\bigoplus_{i\geq 0}\operatorname{Ext}_{A}^{s-i,t+1-i}(H^{*}(Y),H^{*} (X_{i+1},X_{i})).\]
2. _Suppose each map_ \(X_{i}\to X_{i+1}\) _is trivial in mod_ \(p\) _cohomology. Then the resulting MASS for_ \([X,Y]_{*}\) _is isomorphic (from_ \(E_{2}\) _onward) to the standard ASS for_ \([X,Y]_{*}\)_._
The \(E_{2}\)-term of the (M)ASS for \(\widehat{\mathbb{C}P_{-i}^{\infty}}\) is computed in [10, Lem. 2.5], while the \(E_{2}\)-term for the MASS for the cohomotopy of \(\widehat{\mathbb{C}P_{0}^{\infty}}\) can be computed using Lemma 2.10. The \(E_{2}\)-terms coincide, and fairly standard arguments (cf. [10, pp. 433-434]) imply that the isomorphism between \(E_{2}\)-terms is induced by the map between diagrams.
To prove Theorem 2.7, we take \(Y=S^{0}\) (so \(Y_{\bullet}\) is an Adams resolution of the sphere) and take \(X\to X_{\bullet}\) to be the diagram
\[\widetilde{\mathbb{H}P_{-i}}=\widetilde{\operatorname{Th}}(\mathbb{H}P^{\infty},-i\gamma_{\mathbb{H}})\to\widetilde{\operatorname{Th}}(\mathbb{H}P^{\infty},(-i-1)\gamma_{\mathbb{H}}+\gamma_{\mathbb{H}}^{p})\to\widetilde{\operatorname{Th}}(\mathbb{H}P^{\infty},(-i-2)\gamma_{\mathbb{H}}+2\gamma_{\mathbb{H}}^{p})\to\cdots.\]
Each map induces zero in mod \(p\) cohomology (cf. [12, Lem. 2.17]), so the associated MASS coincides with the standard ASS for the cohomotopy of \(\mathbb{H}P_{-i}\) from \(E_{2}\) onward by Lemma 2.10. Moreover, this diagram maps to the diagram
\[\widetilde{\mathbb{H}P_{0}}\to\widetilde{\mathbb{H}P_{1}}\to\widetilde{\mathbb{ H}P_{2}}\to\cdots\]
which gives the MASS for the cohomotopy of \(\widetilde{\mathbb{H}P_{0}}\).
This \(E_{2}\)-page can be computed using the short exact sequence of \(A\)-modules
\[0\to H\to C\to\Sigma^{-1}H\to 0,\]
where \(H=\lim_{i}H^{*}(\widetilde{\mathbb{H}P_{-i}})\) and \(C=\lim_{i}H^{*}(\widetilde{\mathbb{C}P_{-i}})\), following the proof of [12, Lem. 2.5]. On the other hand, the \(E_{2}\)-term of the MASS associated to the bottom row (again with \(Y=S^{0}\)) can be computed using Lemma 2.10. The two \(E_{2}\)-terms agree, and Ravenel's argument that the map of diagrams comes from the claimed map follows through with little modification.
## 3. Mahowald invariants and equivariant bordism
In this section, we relate the real, complex, and quaternionic Mahowald invariants to \(C_{2}\)-, \(U(1)\)-, and \(Sp(1)\)-equivariant bordism groups, respectively. Our main result is Proposition 3.17.
### Classical definition
For \(\mathbb{K}\in\{\mathbb{R},\mathbb{C},\mathbb{H}\}\), we define \(d_{\mathbb{K}}:=\dim_{\mathbb{R}}\mathbb{K}\). For all \(n\geq 0\), Theorem 2.2 implies that there are compatible maps
\[\mathrm{pr}_{\mathbb{K}}:S^{-d_{\mathbb{K}}}\to\mathbb{K}P_{-n}^{\infty}. \tag{3.1}\]
We use these to make the following definition:
**Definition 3.1**.: Let \(\alpha\in\pi_{t}S^{0}\). The _\(\mathbb{K}\)-Mahowald invariant of \(\alpha\)_ is the coset of completions of the diagram
where \(N>0\) is minimal such that the left-hand composite is nontrivial; the diagram is rewritten in terms of homotopy groups in Remark 3.3 below.
We compute several examples in Section 5. In those computations, we observe that \(M_{\mathbb{C}}(\alpha)=M_{\mathbb{R}}(\alpha)\) if \(\alpha\neq M_{\mathbb{C}}(\alpha)\), and similarly for \(M_{\mathbb{H}}(\alpha)\). The inclusions \(\mathbb{R}\hookrightarrow\mathbb{C}\hookrightarrow\mathbb{H}\) induce inclusions \(\mathbb{R}P^{\infty}\to\mathbb{C}P^{\infty}\to\mathbb{H}P^{\infty}\) along with bundle maps between their tautological line bundles, which allow us to compare \(M_{\mathbb{R}}\), \(M_{\mathbb{C}}\), and \(M_{\mathbb{H}}\).
In fact, we can prove the following relationship between these different kinds of Mahowald invariants. We defer the proof to Section 4:
**Theorem 3.2**.: _Let \(\alpha\neq 1\in\pi_{n}^{s}\). Then either \(M_{\mathbb{C}}(\alpha)=M_{\mathbb{R}}(\alpha)\) in \(\mathrm{coker}(J)\) or \(M_{\mathbb{C}}(\alpha)=\alpha\). Similarly, either \(M_{\mathbb{H}}(\alpha)=M_{\mathbb{R}}(\alpha)\) in \(\mathrm{coker}(J)\), or \(M_{\mathbb{H}}(\alpha)=\alpha\)._
**Remark 3.3**.: Rewriting Definition 3.1 in terms of homotopy groups, the \(\mathbb{K}\)-Mahowald invariant of \(\alpha\) is the coset of \(\beta\) in \(\pi_{t+d_{\mathbb{K}}N}^{s}\) such that \(j(\beta)=\mathrm{pr}_{\mathbb{K}}(\alpha)\) in the diagram
\[\begin{CD}@.\pi_{t}^{s}\\ @.@VV{\mathrm{pr}_{\mathbb{K}}}V\\ \pi_{t+d_{\mathbb{K}}N}^{s}@>{j}>>\pi_{t-d_{\mathbb{K}}}\mathbb{K}P_{-N}^{\infty}.\end{CD}\]
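For instance, in the classical case \(\mathbb{K}=\mathbb{R}\) (so \(d_{\mathbb{R}}=1\)), this diagram computes the classical Mahowald invariant; e.g., a classical computation of Mahowald, recalled here only for orientation, gives \(\eta\in M_{\mathbb{R}}(2)\), with \(t=0\) and \(N=1\) above.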
Our goal for the rest of the section is to use the Pontryagin-Thom construction to reinterpret the equality \(j(\beta)=\operatorname{pr}_{\mathbb{K}}(\alpha)\in\pi_{*}\mathbb{K}P_{-N}^{\infty}\) in terms of equivariant bordism groups.
### Definition in terms of normal and equivariant bordism
We now rewrite the diagram of Remark 3.3 in terms of (nonequivariant) bordism groups. The Pontryagin-Thom isomorphism \(\pi_{k}^{s}\cong\Omega_{k}^{\operatorname{fr}}\) identifies the \(k\)-th stable stem with the \(k\)-th framed bordism group. More generally, the Pontryagin-Thom construction yields an isomorphism
\[\pi_{i}\operatorname{Th}(X,V)\cong\Omega_{i}(X,V) \tag{3.2}\]
between the \(i\)-th homotopy group of the Thom spectrum \(\operatorname{Th}(X,V)\) and the _normal bordism group_\(\Omega_{i}(X,V)\):
**Definition 3.4**.: Let \(X\) be a topological space and \(V\) a virtual vector bundle over \(X\). The \(i\)_-th normal bordism group_ \(\Omega_{i}(X,V)\) is the group of bordism classes of triples \((M,f,\bar{f})\), where
1. \(M\) is an \(i\)-dimensional closed manifold,
2. \(f:M\to X\) is a map, and
3. \(\bar{f}:f^{*}V^{+}\oplus TM\to f^{*}V^{-}\) is a stable isomorphism of vector bundles,
where \(V^{+}\) and \(V^{-}\) are vector bundles over \(X\) with \(V=V^{+}-V^{-}\in KO^{0}(X)\).
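As a basic check, taking \(X\) to be a point and \(V=0\), condition (3) reduces to a stable framing of \(TM\), so that
\[\Omega_{i}(\mathrm{pt},0)\cong\Omega_{i}^{\operatorname{fr}}\cong\pi_{i}^{s},\]
recovering the Pontryagin-Thom isomorphism above.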
To bring group actions into the picture, we will identify the homotopy groups of stunted projective spectra with certain equivariant bordism groups. We need one more definition:
**Definition 3.5**.: We define the following _equivariant bordism groups_:
1. Let \(\mathbb{R}_{\mathbb{R}}^{n,k}=\mathbb{R}^{n\sigma+k}\) denote \(\mathbb{R}^{n+k}\) where \(C_{2}\) acts by sign on the first \(n\) components and trivially on the last \(k\) components. Let \(\epsilon_{\mathbb{R}}^{n,k}\) denote the trivial bundle with fibers \(\mathbb{R}_{\mathbb{R}}^{n,k}\). If \(M\) is a \(C_{2}\)-manifold, a \(C_{2}\)-equivariant bundle isomorphism \(\bar{g}:TM\oplus\epsilon_{\mathbb{R}}^{s,t}\xrightarrow{\cong}\epsilon_{ \mathbb{R}}^{n+s,k+t}\) is called an \((n,k)\)-\(\mathbb{R}\)_-framing_. We define \(\Omega_{n,k}^{\mathbb{R}}\) to be the bordism group of free \(C_{2}\)-manifolds with an \((n,k)\)-\(\mathbb{R}\)-framing.
2. Let \(\mathbb{R}_{\mathbb{C}}^{2n,k}=\mathbb{C}^{n}\oplus\mathbb{R}^{k}\) be the \(U(1)\)-representation where \(U(1)\) acts by rotation on each copy of \(\mathbb{C}\) and trivially on each copy of \(\mathbb{R}\). Let \(\epsilon_{\mathbb{C}}^{2n,k}\) denote the trivial (real) bundle with fibers \(\mathbb{R}_{\mathbb{C}}^{2n,k}\). If \(M\) is a \(U(1)\)-manifold, a \(U(1)\)-equivariant bundle isomorphism \(\bar{g}:TM\oplus\epsilon_{\mathbb{C}}^{s,t}\xrightarrow{\cong}\epsilon_{\mathbb{C}}^{2n+s,k+t}\) is called a \((2n,k)\)-\(\mathbb{C}\)_-framing_. We define \(\Omega_{2n,k}^{\mathbb{C}}\) to be the bordism group of free \(U(1)\)-manifolds with a \((2n,k)\)-\(\mathbb{C}\)-framing.
3. Let \(\mathbb{R}_{\mathbb{H}}^{4n,k}=\mathbb{H}^{n}\oplus\mathbb{R}^{k}\) be the \(Sp(1)\)-representation where \(Sp(1)\) acts on each copy of \(\mathbb{H}\) in the standard way and trivially on each copy of \(\mathbb{R}\). Let \(\epsilon_{\mathbb{H}}^{4n,k}\) denote the trivial (real) bundle with fibers \(\mathbb{R}_{\mathbb{H}}^{4n,k}\). If \(M\) is an \(Sp(1)\)-manifold, an \(Sp(1)\)-equivariant bundle isomorphism \(\bar{g}:TM\oplus\epsilon_{\mathbb{H}}^{s,t}\xrightarrow{\cong}\epsilon_{ \mathbb{H}}^{4n+s,k+t}\) is called a \((4n,k)\)-\(\mathbb{H}\)_-framing_. We define \(\Omega_{4n,k}^{\mathbb{H}}\) to be the bordism group of free \(Sp(1)\)-manifolds with a \((4n,k)\)-\(\mathbb{H}\)-framing.
More succinctly: let \(G_{\mathbb{R}}=C_{2}\), \(G_{\mathbb{C}}=U(1)\), and \(G_{\mathbb{H}}=Sp(1)\). Then \(\mathbb{R}_{\mathbb{K}}^{d_{\mathbb{K}}n,k}=\mathbb{K}^{n}\oplus\mathbb{R}^{k}\), where \(\mathbb{K}\) is the standard representation of \(G_{\mathbb{K}}\) and \(\mathbb{R}\) is the trivial representation, and we get that \(\Omega_{d_{\mathbb{K}}n,k}^{\mathbb{K}}\) is the bordism group of free \(G_{\mathbb{K}}\)-manifolds with a \((d_{\mathbb{K}}n,k)\)-\(\mathbb{K}\)-framing.
By (3.2), we have an isomorphism
\[\pi_{k}(\mathbb{K}P_{-n}^{\infty})\cong\pi_{k}(\Sigma^{-d_{\mathbb{K}}n} \operatorname{Th}(\mathbb{K}P^{\infty},-n\gamma_{\mathbb{K}}))\cong\Omega_{k+d_ {\mathbb{K}}n}(\mathbb{K}P^{\infty},-n\gamma_{\mathbb{K}}).\]
The normal bordism groups on the right-hand side admit the following description in terms of equivariant bordism groups:
**Proposition 3.6**.: _There is an isomorphism_
\[\Omega_{k+d_{\mathbb{K}}n}(\mathbb{K}P^{\infty},-n\gamma_{\mathbb{K}}) \xrightarrow{\cong}\Omega_{d_{\mathbb{K}}n,k}^{\mathbb{K}}.\]
Proof.: We follow the proof for the case \(\mathbb{K}=\mathbb{R}\) which appears around [10, Eqn. 1.10]. Let \([N,f,\bar{f}]\in\Omega_{k+d_{\mathbb{K}}n}(\mathbb{K}P^{\infty},-n\gamma_{ \mathbb{K}})\), so
1. \(N\) is a closed \((k+d_{\mathbb{K}}n)\)-manifold,
2. \(f:N\to\mathbb{K}P^{\infty}\) is a map, and
3. \(\bar{f}:TN\to f^{*}(n\gamma_{\mathbb{K}})\) is a stable isomorphism.
We define \([\tilde{N},\tilde{\bar{f}}]\in\Omega^{\mathbb{K}}_{d_{\mathbb{K}}n,k}\) as follows. Let
\[\tilde{N}:=S(f^{*}(\gamma_{\mathbb{K}}))\]
denote the unit sphere bundle in the pullback of the tautological \(\mathbb{K}\)-line bundle along \(f\). Inspecting the \(G_{\mathbb{K}}\) action on \(\gamma_{\mathbb{K}}\), we see that \(\tilde{N}\) is a free \(G_{\mathbb{K}}\)-manifold. We define
\[\tilde{\bar{f}}:T\tilde{N}\oplus\epsilon^{0,t}\to\epsilon^{d_{\mathbb{K}}n,k+t}\]
to be the \(G_{\mathbb{K}}\)-equivariant bundle isomorphism induced by the isomorphism
\[\bar{f}:TN\oplus\epsilon^{t}\to f^{*}(n\gamma_{\mathbb{K}})\oplus\epsilon^{k+ t}=(\tilde{N}\times\mathbb{R}^{d_{\mathbb{K}}n,k+t}_{\mathbb{K}})/G_{\mathbb{K}}.\]
The assignment \([N,f,\bar{f}]\mapsto[\tilde{N},\tilde{\bar{f}}]\) defines an isomorphism
\[\Omega_{k+d_{\mathbb{K}}n}(\mathbb{K}P^{\infty},-n\gamma_{\mathbb{K}})\to \Omega^{\mathbb{K}}_{d_{\mathbb{K}}n,k}.\]
Indeed, passage to the quotient manifold gives an inverse.
**Remark 3.7**.: Note that if \([N,f,\bar{f}]\in\Omega_{k+d_{\mathbb{K}}n}(\mathbb{K}P^{\infty},-n\gamma_{\mathbb{K}})\) is an element for which \(N\) is \(d_{\mathbb{K}}\)-connected, then
\[\tilde{N}=S(f^{*}(\gamma_{\mathbb{K}}))\cong G_{\mathbb{K}}\times N.\]
Explicitly:
1. If \(\mathbb{K}=\mathbb{R}\), then \(\tilde{N}=C_{2}\times N\) when \(N\) is simply connected;
2. If \(\mathbb{K}=\mathbb{C}\), then \(\tilde{N}=S^{1}\times N\) when \(N\) is 2-connected;
3. If \(\mathbb{K}=\mathbb{H}\), then \(\tilde{N}=S^{3}\times N\) when \(N\) is 4-connected.
We are now ready to describe the \(\mathbb{K}\)-Mahowald invariant in terms of equivariant bordism. Let
\[\bar{c}_{n}^{\mathbb{K}}:TS(\mathbb{R}^{d_{\mathbb{K}}n,0}_{\mathbb{K}})\oplus \epsilon^{0,1}_{\mathbb{K}}\xrightarrow{\cong}\epsilon^{d_{\mathbb{K}}n,0}_{ \mathbb{K}}\]
denote the \(G_{\mathbb{K}}\)-equivariant vector bundle isomorphism obtained by restricting the isomorphism
\[TD^{d_{\mathbb{K}}n,0}\cong\epsilon^{d_{\mathbb{K}}n,0}_{\mathbb{K}}\]
to \(S(\mathbb{R}^{d_{\mathbb{K}}n,0}_{\mathbb{K}})\subset D^{d_{\mathbb{K}}n,0}_{ \mathbb{K}}\) and composing with
\[TS(\mathbb{R}^{d_{\mathbb{K}}n,0}_{\mathbb{K}})\oplus\epsilon^{0,1}_{\mathbb{ K}}\cong TD^{d_{\mathbb{K}}n,0}_{\mathbb{K}}\mid_{S(\mathbb{R}^{d_{\mathbb{K}}n,0}_{ \mathbb{K}})}.\]
For all \(m\in\mathbb{Z}\), let
\[\cdots\to\pi_{*}(S^{d_{\mathbb{K}}m})\xrightarrow{j_{m}}\pi_{*}(\mathbb{K}P^{ \infty}_{m})\xrightarrow{p_{m}}\pi_{*}(\mathbb{K}P^{\infty}_{m+1}) \xrightarrow{t_{m+1}}\pi_{*-1}(S^{d_{\mathbb{K}}m})\to\cdots\]
be the long exact sequence in homotopy obtained from the cofiber sequence
\[S^{d_{\mathbb{K}}m}\to\mathbb{K}P^{\infty}_{m}\to\mathbb{K}P^{\infty}_{m+1}.\]
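For instance, with \(\mathbb{K}=\mathbb{C}\) and \(m=-1\), this specializes to
\[S^{-2}\to\mathbb{C}P^{\infty}_{-1}\to\mathbb{C}P^{\infty}_{0},\]
the inclusion of the bottom cell of \(\mathbb{C}P^{\infty}_{-1}\) followed by the map collapsing it.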
**Proposition 3.8**.: _Let \((M^{k},\bar{a})\) and \((N^{k+d_{\mathbb{K}}n},\bar{b})\) be framed manifolds with \(k\geq d_{\mathbb{K}}\). The following statements are equivalent:_
1. \(M_{\mathbb{K}}([M,\bar{a}])=[N,\bar{b}]\in\pi_{k+d_{\mathbb{K}}n}^{s}/\mathrm{ im}(t_{-n})\)_;_
2. \(0\neq[S(\mathbb{R}^{d_{\mathbb{K}}(n+1),0}_{\mathbb{K}})\times M,\bar{c}^{\mathbb{K}}_{n+1}\times\bar{a}]=[G_{\mathbb{K}}\times N,G_{\mathbb{K}}\times\bar{b}]\in\Omega^{\mathbb{K}}_{d_{\mathbb{K}}(n+1),k-d_{\mathbb{K}}}\)_._
Proof.: By definition,
\[M_{\mathbb{K}}([M,\bar{a}])=[N,\bar{b}]\in\pi^{s}_{k+d_{\mathbb{K}}n}/\operatorname {im}(t_{-n})\]
if and only if
\[\operatorname{pr}_{m}([M,\bar{a}])=0,\quad\operatorname{pr}_{m-1}([M,\bar{a}]) \neq 0,\quad\text{ and }j_{m-1}([N,\bar{b}])=\operatorname{pr}_{m-1}([M,\bar{a}]).\]
We must express each of these conditions in terms of equivariant bordism.
We begin with the map \(j\). Using the Pontryagin-Thom construction, we may identify this with
\[j:\Omega_{k}(S(\gamma_{\mathbb{K}}),p^{*}(n\gamma_{\mathbb{K}}))\to\Omega_{k} (\mathbb{K}P^{\infty},n\gamma_{\mathbb{K}}),\]
\[[M,f,\bar{f}]\mapsto[M,p\circ f,\bar{f}],\]
and composing with the isomorphism from Proposition 3.6, we obtain
\[j_{-n}:\Omega^{\operatorname{fr}}_{k+d_{\mathbb{K}}n}\to\Omega^{\mathbb{K}}_ {d_{\mathbb{K}}n+1,k-1},\]
\[[M,\bar{a}]\mapsto[G_{\mathbb{K}}\times M,G_{\mathbb{K}}\times\bar{a}].\]
Here, we use the assumption that \(k\geq d_{\mathbb{K}}\) (cf. Remark 3.7).
To identify \(\operatorname{pr}_{m}\) in terms of equivariant bordism, we first interpret \(p\) geometrically. Using the Pontryagin-Thom construction, we see that
\[p:\Omega_{k}(\mathbb{K}P^{\infty},n\gamma_{\mathbb{K}})\to\Omega_{k-d_{ \mathbb{K}}n}(\mathbb{K}P^{\infty},(n+1)\gamma_{\mathbb{K}}), \tag{3.3}\]
\[[M,f,\bar{f}]\mapsto[N=s^{-1}(0),f|_{N},\bar{f}|_{N}],\]
where \(s\) is a section of \(f^{*}\gamma_{\mathbb{K}}\) which is transverse to the zero section. Identifying the source and target of \(p\) with equivariant bordism groups via Proposition 3.6, we see that
\[p:[S(\mathbb{R}^{d_{\mathbb{K}}(n+1),0}_{\mathbb{K}}),\bar{c}^{\mathbb{K}}_{n+ 1}]\mapsto[S(\mathbb{R}^{d_{\mathbb{K}}n,0}_{\mathbb{K}}),\bar{c}^{\mathbb{K}}_ {n}]\]
as on [14, Pg. 114], and thus those classes assemble into a generator
\[g\in\lim_{m}\Omega^{\mathbb{K}}_{md_{\mathbb{K}},-1}\cong\lim_{m}\pi_{-d_{ \mathbb{K}}}(\mathbb{K}P^{\infty}_{-m}).\]
Thus
\[\operatorname{pr}_{-n}:\pi^{s}_{k}\xrightarrow{g_{*}}\lim_{m}\pi_{k-1}( \mathbb{K}P^{\infty}_{-m})\to\pi_{-d_{\mathbb{K}}}(\mathbb{K}P^{\infty}_{-n})\]
may be identified with
\[\operatorname{pr}_{-n}:\Omega^{\operatorname{fr}}_{k}\to\Omega^{\mathbb{K}}_ {d_{\mathbb{K}}n,k-1},\]
\[[M,\bar{a}]\mapsto[S(\mathbb{R}^{d_{\mathbb{K}}n,0}_{\mathbb{K}})\times M, \bar{c}^{\mathbb{K}}_{n}\times\bar{a}].\]
Putting together these identifications completes the proof.
In order to prove our main theorem, we need the following auxiliary construction.
**Definition 3.9** (Twisting framings).: Let \(M\) be a free \(G_{\mathbb{K}}\)-manifold with a \((d_{\mathbb{K}}n,k)\)-\(\mathbb{K}\)-framing \(\bar{f}:TM\oplus\epsilon^{d_{\mathbb{K}}s,t}_{\mathbb{K}}\to\epsilon^{d_{ \mathbb{K}}n+d_{\mathbb{K}}s,k+t}_{\mathbb{K}}\) and let \(g\in KO^{-1}_{G_{\mathbb{K}}}(M)\). Regarding \(g\) as a \(G_{\mathbb{K}}\)-equivariant bundle automorphism of \(\epsilon^{d_{\mathbb{K}}n+d_{\mathbb{K}}s,k+t}_{\mathbb{K}}\), we may compose to obtain a new \((d_{\mathbb{K}}n,k)\)-\(\mathbb{K}\)-framing of \(M\),
\[g\bar{f}:TM\oplus\epsilon^{d_{\mathbb{K}}s,t}_{\mathbb{K}}\xrightarrow{\bar{f} }\epsilon^{d_{\mathbb{K}}n+d_{\mathbb{K}}s,k+t}_{\mathbb{K}}\xrightarrow{g} \epsilon^{d_{\mathbb{K}}n+d_{\mathbb{K}}s,t+k}_{\mathbb{K}}.\]
**Construction 3.10** (The framing \(\bar{s}(\bar{t},\bar{a})\)).: Let \(M^{k}\) be a framed \(k\)-manifold and let \(\zeta^{d_{\mathbb{K}}n}\to M^{k}\) be a stably trivial vector bundle of rank \(d_{\mathbb{K}}n\). Let \(\bar{t}:\zeta^{d_{\mathbb{K}}n}\oplus\epsilon^{s}\xrightarrow{\simeq}\epsilon^{ d_{\mathbb{K}}n+s}\) be a stable trivialization of \(\zeta^{d_{\mathbb{K}}n}\) and let \(\bar{a}:TM^{k}\oplus\epsilon^{t}\xrightarrow{\cong}\epsilon^{t+k}\) be a stable framing of \(M^{k}\). Note that \(G_{\mathbb{K}}\) acts on \(S(\zeta^{d_{\mathbb{K}}n})\cong S(\mathbb{K}^{n})\) by restriction of the diagonal action of \(G_{\mathbb{K}}\) on \(\mathbb{K}^{n}\).
We obtain a \((d_{\mathbb{K}}n,k-1)\)-\(\mathbb{K}\)-framing on \(S(\zeta^{d_{\mathbb{K}}n})\) as follows. If \(p:D(\zeta^{d_{\mathbb{K}}n})\to M^{k}\) denotes the disk bundle projection, then there is a \(G_{\mathbb{K}}\)-equivariant bundle isomorphism
\[TD(\zeta^{d_{\mathbb{K}}n})\cong p^{*}(\zeta^{d_{\mathbb{K}}n})\oplus p^{*}TM^{ k},\]
where \(G_{\mathbb{K}}\) acts on \(TM^{k}\) trivially. The composite
\[TD(\zeta^{d_{\mathbb{K}}n})\oplus\epsilon_{\mathbb{K}}^{d_{\mathbb{K}}s,t}\cong p^{*}\zeta^{d_{\mathbb{K}}n}\oplus\epsilon_{\mathbb{K}}^{d_{\mathbb{K}}s,0}\oplus p^{*}TM\oplus\epsilon_{\mathbb{K}}^{0,t}\xrightarrow{\ p^{*}\bar{t}\oplus p^{*}\bar{a}}\epsilon_{\mathbb{K}}^{d_{\mathbb{K}}n+d_{\mathbb{K}}s,k+t}\]
gives a \((d_{\mathbb{K}}n,k)\)-\(\mathbb{K}\)-framing of \(D(\zeta^{d_{\mathbb{K}}n})\), and this restricts to a framing
\[\bar{s}(\bar{t},\bar{a}):TS(\zeta^{d_{\mathbb{K}}n})\oplus\epsilon_{\mathbb{K}}^{d_{\mathbb{K}}s,t}\xrightarrow{\simeq}\epsilon_{\mathbb{K}}^{d_{\mathbb{K}}n+d_{\mathbb{K}}s,k+t-1}\]
generalizing the framing \(\bar{c}_{n}^{\mathbb{K}}\times\bar{a}\).
**Construction 3.11** (Twisting by elements in \(KO^{-1}(\mathbb{K}P^{\infty}\wedge S^{k})\)).: Let
\[h\in KO^{-1}(\mathbb{K}P^{\infty}\wedge S^{k})\]
and let \(c:M^{k}\to S^{k}\) be a degree one map. Using the composite
\[KO^{-1}(\mathbb{K}P^{\infty}\wedge S^{k})\xrightarrow{\text{pr}^{*}}KO^{-1}(\mathbb{K}P^{\infty}\times S^{k})\cong KO_{G_{\mathbb{K}}}^{-1}(S^{k})\xrightarrow{c^{*}}KO_{G_{\mathbb{K}}}^{-1}(M^{k})\xrightarrow{p^{*}}KO_{G_{\mathbb{K}}}^{-1}(S(\zeta^{d_{\mathbb{K}}n})),\]
we may twist by \(h\) to obtain a new framing \(h\bar{s}(\bar{t},\bar{a})\).
We have the following analogues of [13, Lem. 1.14, Cor. 1.15]:
**Lemma 3.12**.: _We have_
\[p_{-(n+1)}([S(\zeta^{d_{\mathbb{K}}n}\oplus\epsilon_{\mathbb{K}}^{d_{\mathbb{ K}}}),h\bar{s}(\bar{t},\bar{a})])=[S(\zeta^{d_{\mathbb{K}}n}),h\bar{s}(\bar{t}, \bar{a})]\in\Omega_{d_{\mathbb{K}}n,k-1}^{\mathbb{K}}.\]
Proof.: As in the proof of [13, Lem. 1.14], the bundle \(S(\zeta)\subset S(\zeta\oplus\epsilon_{\mathbb{K}}^{d_{\mathbb{K}}})\) is the zero set of a \(G_{\mathbb{K}}\)-equivariant map \(S(\zeta\oplus\epsilon_{\mathbb{K}})\to\mathbb{R}_{\mathbb{K}}^{d_{\mathbb{K}},0}\). Since the framings \(h\bar{s}(\bar{t},\bar{a})\) on \(S(\zeta\oplus\epsilon)\) and \(S(\zeta)\) are compatible, the result follows from the geometric identification of \(p\) in (3.3).
**Corollary 3.13**.: _We have_
\[[S(\zeta^{d_{\mathbb{K}}n}),h\bar{s}(\bar{t},\bar{a})]=[S(\mathbb{R}_{\mathbb{ K}}^{d_{\mathbb{K}}n,0})\times M^{k},h(\bar{c}_{n}^{\mathbb{K}}\times\bar{a})] \in\Omega_{d_{\mathbb{K}}n,k-1}^{\mathbb{K}}.\]
Proof.: As in the discussion preceding [13, Cor. 1.15], this follows from iterated applications of the previous lemma, along with the stable triviality of \(\zeta\).
**Construction 3.14** (The map \(\tilde{J}_{\mathbb{K}}\)).: Let \(\bar{d}\) be the trivial framing of \(S^{k}\). The assignment
\[h\mapsto[S(\mathbb{R}_{\mathbb{K}}^{d_{\mathbb{K}}n,0})\times S(\mathbb{R}_{ \mathbb{K}}^{0,k+1}),h(\bar{c}_{n}^{\mathbb{K}}\times\bar{d})]\in\Omega_{d_{ \mathbb{K}}n,k-1}^{\mathbb{K}}\]
induces a well-defined map
\[\tilde{J}_{\mathbb{K}}:KO^{-1}(\mathbb{K}P^{\infty}\wedge S^{k})\to\lim_{n} \Omega_{d_{\mathbb{K}}n,k-1}^{\mathbb{K}}\cong\lim_{n}\pi_{k-1}(\mathbb{K}P_{- n}^{\infty})\to\pi_{k+d_{\mathbb{K}}-1}^{s}.\]
**Remark 3.15**.: The map
\[\tilde{J}:KO^{-1}(S^{k})\to\pi_{k}^{s}\]
defined by Stolz in [13, Eqn. 1.16] is obtained from the map
\[\tilde{J}_{\mathbb{R}}:KO^{-1}(\mathbb{R}P^{\infty}\wedge S^{k})\to\pi_{k}^{s}\]
defined above by precomposition with the \(C_{2}\)-transfer map
\[\text{tr}_{C_{2}}:\mathbb{R}P^{\infty}\to S^{0}.\]
We use \(\tilde{J}_{\mathbb{K}}\) instead of its composite with the map induced by the \(G_{\mathbb{K}}\)-transfer
\[\text{tr}_{G_{\mathbb{K}}}:\Sigma^{d_{\mathbb{K}}-1}\mathbb{K}P^{\infty}\to S^{0}\]
for the following reason. We recall that Stolz proves [13, Lem. 1.19] that the transfer map \(\text{tr}_{C_{2}}^{*}:KO^{-i}(S^{0})\to KO^{-i}(\mathbb{R}P^{\infty})\) is an isomorphism for all \(i\in\mathbb{Z}\), which allows him to freely identify elements in the source and target. The analogous isomorphism does not hold for \(\text{tr}_{U(1)}^{*}\) and \(\text{tr}_{Sp(1)}^{*}\); indeed, an elementary Atiyah-Hirzebruch spectral sequence calculation shows that the ranks of \(KO^{-4i}(\mathbb{C}P^{\infty})\) and \(KO^{-4i}(\mathbb{H}P^{\infty})\) are greater than the rank of \(KO^{-4i}(S^{0})\) for all
\(i\in\mathbb{Z}\). In any case, this means we cannot freely identify elements in the source and the target when \(\mathbb{K}=\mathbb{C}\) and \(\mathbb{K}=\mathbb{H}\).
Modifying the proof of [12, Lem. 1.17], we obtain the following, which implies that \(\tilde{J}_{\mathbb{K}}\) is a homomorphism.
**Lemma 3.16**.: _Let \(\zeta^{d_{\mathbb{K}}n}\) be a stably trivial bundle over a framed manifold \(M^{k}\), let \(\bar{s}\) be a \((d_{\mathbb{K}}n,k-1)\)-\(\mathbb{K}\)-framing of \(S(\zeta^{d_{\mathbb{K}}n})\), and let \(h\in KO^{-1}(\mathbb{K}P^{\infty}\wedge S^{k})\). Then_
\[[S(\zeta^{d_{\mathbb{K}}n}),h\bar{s}]=[S(\zeta^{d_{\mathbb{K}}n}),\bar{s}]+[S (\mathbb{R}^{d_{\mathbb{K}}n,0}_{\mathbb{K}})\times S(\mathbb{R}^{0,k+1}_{ \mathbb{K}}),h(\bar{c}^{\mathbb{K}}_{n}\times\bar{d})]\in\Omega^{\mathbb{K}}_ {d_{\mathbb{K}}n,k-1}.\]
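For instance — and this is presumably how the lemma yields the homomorphism property — taking \(M^{k}=S(\mathbb{R}^{0,k+1}_{\mathbb{K}})=S^{k}\) with its trivial framing \(\bar{d}\), \(\zeta^{d_{\mathbb{K}}n}\) the trivial bundle, and \(\bar{s}=h^{\prime}(\bar{c}^{\mathbb{K}}_{n}\times\bar{d})\) for a second class \(h^{\prime}\), the lemma gives
\[[S(\mathbb{R}^{d_{\mathbb{K}}n,0}_{\mathbb{K}})\times S^{k},(h+h^{\prime})(\bar{c}^{\mathbb{K}}_{n}\times\bar{d})]=[S(\mathbb{R}^{d_{\mathbb{K}}n,0}_{\mathbb{K}})\times S^{k},h^{\prime}(\bar{c}^{\mathbb{K}}_{n}\times\bar{d})]+[S(\mathbb{R}^{d_{\mathbb{K}}n,0}_{\mathbb{K}})\times S^{k},h(\bar{c}^{\mathbb{K}}_{n}\times\bar{d})]\in\Omega^{\mathbb{K}}_{d_{\mathbb{K}}n,k-1},\]
so that \(\tilde{J}_{\mathbb{K}}(h+h^{\prime})=\tilde{J}_{\mathbb{K}}(h)+\tilde{J}_{\mathbb{K}}(h^{\prime})\) in the limit.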
Putting everything together, we obtain the following analogue of [12, Prop. 1.18]:
**Proposition 3.17**.: _Let \(\Sigma^{k}\) and \(\Sigma^{k+d_{\mathbb{K}}n}\) be homotopy spheres and let \(\xi^{d_{\mathbb{K}}n+d_{\mathbb{K}}}\) be a vector bundle over \(\Sigma^{k}\) with stable trivialization \(\bar{t}\). The following are equivalent:_
1. _There exist framings_ \(\bar{a}\) _and_ \(\bar{b}\) _of_ \(\Sigma^{k}\) _and_ \(\Sigma^{k+d_{\mathbb{K}}n}\)_, respectively, and an element_ \(h\in KO^{-1}(\mathbb{K}P^{\infty}\wedge S^{k})\) _such that_
    1. \(n\leq M\)_, where_ \(M\) _is the_ \(\mathbb{K}\)_-Mahowald filtration of_ \([\Sigma^{k},\bar{a}]+\tilde{J}_{\mathbb{K}}(h)\)_, and_
    2. \([\Sigma^{k+d_{\mathbb{K}}n},\bar{b}]\in M([\Sigma^{k},\bar{a}]+\tilde{J}_{\mathbb{K}}(h))\) _if_ \(n=M\) _and is trivial if_ \(n<M\)_._
2. _There exist framings_ \(\bar{a}\) _and_ \(\bar{b}\) _of_ \(\Sigma^{k}\) _and_ \(\Sigma^{k+d_{\mathbb{K}}n}\)_, respectively, and an element_ \(h\in KO^{-1}(\mathbb{K}P^{\infty}\wedge S^{k})\) _such that_ \[[S(\xi^{d_{\mathbb{K}}n+d_{\mathbb{K}}}),h\bar{s}(\bar{t},\bar{a})]=[G_{ \mathbb{K}}\times\Sigma^{k+d_{\mathbb{K}}n},G_{\mathbb{K}}\times\bar{b}]\in \Omega^{\mathbb{K}}_{d_{\mathbb{K}}(n+1),k-1}.\]
Finally, we state the following relation between the homomorphisms \(\tilde{J}_{\mathbb{K}}\) and the classical \(J\)-homomorphism. We defer the proof until the end of Section 4.
**Proposition 3.18**.: _The image of \(\tilde{J}_{\mathbb{K}}\) is contained in the image of the classical \(J\)-homomorphism._
## 4. The main theorem
We are now prepared to prove Theorem A. The proof is essentially the same in the real, complex, and quaternionic cases, so to begin, we restate the theorem in a more concise form:
**Theorem 4.1**.: _Let \((\Sigma^{k},\Sigma^{k+d_{\mathbb{K}}n})\) be a pair of homotopy spheres. Suppose there exist framings \(\bar{a}\) and \(\bar{b}\) for \(\Sigma^{k}\) and \(\Sigma^{k+d_{\mathbb{K}}n}\), respectively, and an element \(h\in KO^{-1}(\mathbb{K}P^{\infty}\wedge S^{k})\) such that_
1. _The codimension_ \(d_{\mathbb{K}}n\) _is bounded above by the_ \(\mathbb{K}\)_-Mahowald filtration_ \(M\) _of_ \([\Sigma^{k},\bar{a}]+\tilde{J}_{\mathbb{K}}(h)\)_, and_
2. _We have_ \[[\Sigma^{k+d_{\mathbb{K}}n},\bar{b}]\in\begin{cases}M([\Sigma^{k},\bar{a}]+\tilde{J}_{\mathbb{K}}(h))&\text{if }d_{\mathbb{K}}n=M,\\ 0&\text{if }d_{\mathbb{K}}n<M.\end{cases}\]
_Then if \(\mathbb{K}=\mathbb{C}\) or \(\mathbb{K}=\mathbb{H}\), there exists a smooth \(G_{\mathbb{K}}\)-action on \(\Sigma^{k+d_{\mathbb{K}}n}\#\Sigma^{\prime}\), for some homotopy \((k+d_{\mathbb{K}}n)\)-sphere \(\Sigma^{\prime}\) which bounds a parallelizable manifold, with fixed points \(\Sigma^{k}\). If \(\mathbb{K}=\mathbb{R}\), the same conclusion holds provided that \(n>k+1\) and either \(n+k\) and \(n\) are both odd or \(n\) is even and \(n+k\equiv 1\mod 4\)._
**Remark 4.2**.: The case \(\mathbb{K}=\mathbb{R}\) is the main result of [12]. Our proof of the more general theorem is a direct adaptation of Stolz's proof from [12, Sec. 3].
**Remark 4.3**.: Theorem A follows immediately from Theorem 4.1 and Proposition 3.18; compare with [12, Rmk. (iii)].
Proof of Theorem 4.1.: Choose an embedding \(\Sigma^{k}\hookrightarrow\Sigma^{k+d_{\mathbb{K}}n}\) and let \(\zeta=\zeta^{d_{\mathbb{K}}n}\) denote its normal bundle. By Proposition 3.17, we have
\[[S(\zeta\oplus\epsilon^{d_{\mathbb{K}}}),h\bar{s}(\bar{t},\bar{a})]=[G_{ \mathbb{K}}\times\Sigma^{k+d_{\mathbb{K}}n},G_{\mathbb{K}}\times\bar{b}]\in \Omega^{\mathbb{K}}_{d_{\mathbb{K}}(n+1),k-1},\]
where the action of \(G_{\mathbb{K}}\) on \(S(\zeta\oplus\epsilon^{d_{\mathbb{K}}})\) comes from the standard action of \(G_{\mathbb{K}}\) on \(\epsilon^{d_{\mathbb{K}}}=\mathbb{K}\), and its action on the product \(G_{\mathbb{K}}\times\Sigma^{k+d_{\mathbb{K}}n}\) comes from the action of \(G_{\mathbb{K}}\) on itself. Let \((W,\bar{w})\) be a \(G_{\mathbb{K}}\)-equivariant framed bordism between these manifolds.
Choose an equivariant map \(s:W\to\mathbb{R}^{1,0}_{\mathbb{K}}\) transverse to \(0\in\mathbb{R}^{1,0}_{\mathbb{K}}\) such that \(s|_{S(\zeta\oplus\epsilon^{d_{\mathbb{K}}})}\) is the map projecting to the last coordinate and \(s(G_{\mathbb{K}}\times\Sigma^{k+d_{\mathbb{K}}n})\) is contained in the unit sphere \(G_{\mathbb{K}}\subset\mathbb{R}^{1,0}_{\mathbb{K}}\). Let \(V=s^{-1}(0)\subset W\).
Then the manifold \(W^{\prime}:=W/G_{\mathbb{K}}\) is a framed bordism between the manifolds
\[S(\zeta\oplus\epsilon^{d_{\mathbb{K}}})/G_{\mathbb{K}}\cup V\quad\text{and} \ \ \Sigma^{k+d_{\mathbb{K}}n}.\]
We notice that
\[\begin{split}S(\zeta\oplus\epsilon^{d_{\mathbb{K}}})/G_{\mathbb{K}}&\cong\left((S(\zeta)\times D(\epsilon^{d_{\mathbb{K}}}))\cup_{(S(\zeta)\times S(\epsilon^{d_{\mathbb{K}}}))}(D(\zeta)\times S(\epsilon^{d_{\mathbb{K}}}))\right)/G_{\mathbb{K}}\\ &\cong(S(\zeta)\times I)\cup_{S(\zeta)\times\{1\}}(D(\zeta)\times\{1\})\\ &\cong D(\zeta),\end{split}\]
where we contract \(S(\zeta)\times I\) to the boundary of \(D(\zeta)\). Here, we emphasize that the group \(G_{\mathbb{K}}\) acts on \(D(\zeta)\cup_{S(\zeta)}V\) by rotating the fibers of \(D(\zeta)\) (with fixed points \(\Sigma^{k}\subset D(\zeta)\)) and \(G_{\mathbb{K}}\) acts freely on \(V\) by restricting the action on \(W\) which extends the free action (by rotating the fibers) on the sphere bundle \(S(\zeta)\). In particular, we have that \((D(\zeta)\cup_{S(\zeta)}V)^{G_{\mathbb{K}}}=\Sigma^{k}\).
Our next goal is to make the bordism \(W^{\prime}\) into an \(h\)-cobordism through surgery, so that \(D(\zeta)\cup V\) then belongs to the same diffeomorphism class as \(\Sigma^{k+d_{\mathbb{K}}n}\), and we obtain the desired \(G_{\mathbb{K}}\)-action with fixed points \(\Sigma^{k}\) from the obvious action on \(D(\zeta)\cup V\). Making \(W^{\prime}\) into an \(h\)-cobordism proceeds in two steps.
First, there is no way for \(W^{\prime}\) to be an \(h\)-cobordism unless the manifolds \(D(\zeta)\cup V\) and \(\Sigma^{k+d_{\mathbb{K}}n}\) are homotopy equivalent, i.e., unless \(D(\zeta)\cup V\) is a homotopy sphere. Therefore our first step is to modify \(V\) by equivariant surgeries relative to the boundary to make the inclusion \(S^{n-1}\hookrightarrow S(\zeta)=\partial V\) into a homotopy equivalence. This makes \(D(\zeta)\cup V\) into a homotopy sphere, which can be verified using the exact homology sequence associated to the cofibration \(V\to D(\zeta)\cup V\to\operatorname{Th}(\zeta)\). The traces of the surgeries yield a new framed bordism between the new \(D(\zeta)\cup V\) and \(\Sigma^{k+d_{\mathbb{K}}n}\), which we will still denote by \(W^{\prime}\).
Second, now that \(D(\zeta)\cup V\) and \(\Sigma^{k+d_{\mathbb{K}}n}\) are both homotopy spheres, we can appeal to classical results of Kervaire and Milnor [13] to make \(W^{\prime}\) into an \(h\)-cobordism, provided we sum with an appropriate homotopy \((k+d_{\mathbb{K}}n)\)-sphere \(\Sigma^{\prime}\) which bounds a parallelizable manifold.
Thus, our task is to carry out the first step of modifying the manifold \(V\) so that \(S^{n-1}\hookrightarrow S(\zeta)=\partial V\) is a homotopy equivalence.
Since \(V=s^{-1}(0)\), since \(W\) is \((d_{\mathbb{K}}(n+1),k)\)-\(\mathbb{K}\)-framed, and since \(s\) is transverse to \(0\in\mathbb{R}^{1,0}_{\mathbb{K}}\), the manifold \(V\) is \((d_{\mathbb{K}}n,k)\)-\(\mathbb{K}\)-framed. Under the correspondence between \((d_{\mathbb{K}}n,k)\)-\(\mathbb{K}\)-framed manifolds and normal bordism groups from Proposition 3.6, the \((d_{\mathbb{K}}n,k)\)-\(\mathbb{K}\)-framing on \(V\) corresponds to a \((-n\gamma_{\mathbb{K}})\)-structure \((v,\bar{v})\) on \(\bar{V}=V/G_{\mathbb{K}}\).
Decomposing the base space \(\Sigma^{k}\) into \(\Sigma^{k}=D_{+}\cup_{f}D_{-}\), we obtain a decomposition
\[\partial\bar{V}=S(\zeta)/G_{\mathbb{K}}=\mathbb{K}P(\zeta|_{D_{+}})\cup_{g} \mathbb{K}P(\zeta|_{D_{-}})\]
with \(g:\mathbb{K}P(\zeta|_{S^{k-1}})\xrightarrow{\cong}\mathbb{K}P(\zeta|_{S^{k-1}})\) a diffeomorphism. Thus \(\bar{V}\) is a relative bordism between \(\mathbb{K}P(\zeta|_{D_{+}})\) and \(\mathbb{K}P(\zeta|_{D_{-}})\), and it is a relative \(h\)-cobordism if and only if \(S^{n-1}\hookrightarrow S(\zeta)=\partial V\) is a homotopy equivalence.
Since \(\zeta|_{D_{\pm}}\) is a trivial bundle, the restriction \(v|:\mathbb{K}P(\zeta|_{D_{\pm}})\to\mathbb{K}P^{\infty}\) is a \(d_{\mathbb{K}}(n-1)\)-equivalence. Therefore the triple \((\mathbb{K}P(\zeta|_{D_{\pm}}),v|,\bar{v}|)\) is a normal \((d_{\mathbb{K}}n-d_{\mathbb{K}}-1)\)-smoothing in the sense of Kreck [11, Pg. 711], and we can appeal to [11, Thms. 3 and 4] to determine when \(\bar{V}\) can be made into a relative \(h\)-cobordism.
The obstruction to obtaining a relative \(h\)-cobordism is an element \(\theta(\bar{V},\bar{\nu})\) in a certain quotient of an \(L\)-group. When \(\mathbb{K}=\mathbb{R}\), the vanishing of this obstruction can be guaranteed by imposing the conditions on \(k\) and \(n\) in the statement of the theorem. When \(\mathbb{K}=\mathbb{C}\) and \(\mathbb{K}=\mathbb{H}\), these obstructions automatically vanish: we have \(B=\mathbb{K}P^{\infty}\), and thus \(\pi_{1}(B)=0=w_{1}(B)\).
Proof of Theorem 3.2.: We denote by \(\Gamma\) the group \(G_{\mathbb{C}}\) or \(G_{\mathbb{H}}\). Suppose that \([\Sigma^{k+d_{\mathbb{K}}n},\bar{b}]=M([\Sigma^{k},\bar{a}])\), and let \(D(\zeta)\cup V\cong\Sigma^{k+d_{\mathbb{K}}n}\) be as in the proof above equipped with the action of \(\Gamma\). Then, by construction, \((D(\zeta)\cup V)^{\Gamma}=\Sigma^{k}\). Since \(C_{2}\subset\Gamma\), the construction above implies that \((D(\zeta)\cup V)^{C_{2}}=\Sigma^{k}\). Indeed, it is obvious that \(D(\zeta)^{\Gamma}=D(\zeta)^{C_{2}}\) and the action of \(C_{2}\subset\Gamma\) on \(V\) is free. According to [12, Theorem D], there exist corresponding framings \(\bar{a}^{\prime}\) and \(\bar{b}^{\prime}\) on \(\Sigma^{k}\) and \(\Sigma^{k+d_{\mathbb{K}}n}\) such that \(M_{\mathbb{R}}[(\Sigma^{k},\bar{a}^{\prime})]=[(\Sigma^{k+d_{\mathbb{K}}n},\bar{b}^{\prime})]\); however, the framings \(\bar{a}^{\prime}\) and \(\bar{b}^{\prime}\) may differ from the initial framings \(\bar{a}\) and \(\bar{b}\). Thus we have either \(M_{\mathbb{K}}(\alpha)=M_{\mathbb{R}}(\alpha)\) or \(M_{\mathbb{K}}(\alpha)=\alpha\), where the first equality holds only in \(\operatorname{coker}(J)\).
Proof of Proposition 3.18.: Let \(j\in KO^{-1}(\mathbb{K}P^{\infty}\wedge S^{k})\) and let \([\Sigma^{k},\bar{a}]=\tilde{J}_{\mathbb{K}}(j)\in\pi_{k}^{s}\). Let \([\Sigma^{k+d_{\mathbb{K}}n},\bar{b}]\in M_{\mathbb{K}}([\Sigma^{k},\bar{a}])\). By Theorem A, there is a \(G_{\mathbb{K}}\)-action on \(\Sigma^{k+d_{\mathbb{K}}n}\) such that \((\Sigma^{k+d_{\mathbb{K}}n})^{G_{\mathbb{K}}}=\Sigma^{k}\), and by the discussion from the previous proof, restricting to \(C_{2}\subset G_{\mathbb{K}}\) yields \((\Sigma^{k+d_{\mathbb{K}}n})^{C_{2}}=\Sigma^{k}\). Iterating the \(\mathbb{K}\)-Mahowald invariant, we may realize \(\Sigma^{k}\) as the fixed points of a smooth \(C_{2}\)-action on a homology sphere of arbitrarily large dimension. [13, Thm. C] then implies that \([\Sigma^{k},\bar{a}]\) lies in the image of the ordinary \(J\)-homomorphism.
## 5. Applications
In this section, we compute some complex and quaternionic Mahowald invariants (Section 5.1 and Section 5.2) using the Adams spectral sequence. We record the applications to transformation groups of homotopy spheres in Section 5.3.
### Complex Mahowald invariants
We record the following facts about the homotopy groups of stunted complex projective spectra, each of which can be verified using the Adams spectral sequence:
1. In \(\mathbb{C}P^{\infty}_{-2}\), we see that \(\nu[-4]=\eta[-2]\), so \(\nu\in M_{\mathbb{C}}(\eta)\).
2. In \(\mathbb{C}P^{\infty}_{-3}\), we see that \(\nu^{2}[-6]=\eta^{2}[-2]\), so \(\nu^{2}\in M_{\mathbb{C}}(\eta^{2})\).
3. In \(\mathbb{C}P^{\infty}_{-3}\), we see that \(\sigma[-6]=\nu[-2]\), so \(\sigma\in M_{\mathbb{C}}(\nu)\).
4. In \(\mathbb{C}P^{\infty}_{-4}\), we see that \(\nu^{3}[-8]=\eta^{3}[-2]\), so \(\nu^{3}\in M_{\mathbb{C}}(\eta^{3})\).
5. In \(\mathbb{C}P^{\infty}_{-5}\), we see that \(\sigma^{2}[-10]=\nu^{2}[-2]\), so \(\sigma^{2}\in M_{\mathbb{C}}(\nu^{2})\).
6. In \(\mathbb{C}P^{\infty}_{-7}\), we see that \(\sigma^{3}[-14]=\nu^{3}[-2]\), so \(\sigma^{3}\in M_{\mathbb{C}}(\nu^{3})\).
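In the items above and in Section 5.2 below, \(x[j]\) denotes the class \(x\in\pi_{*}^{s}\) carried on the \(j\)-dimensional cell of the stunted projective spectrum; thus, for instance, both \(\nu[-4]\) and \(\eta[-2]\) determine elements of \(\pi_{-1}(\mathbb{C}P_{-2}^{\infty})\), and the equality \(\nu[-4]=\eta[-2]\) exhibits \(\nu\in M_{\mathbb{C}}(\eta)\) via the diagram of Remark 3.3.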
**Proposition 5.1**.: _We have_
\[\nu\in M(\eta),\quad\nu^{2}\in M(\eta^{2}),\quad\nu^{3}\in M(\eta^{3}),\]
\[\sigma\in M(\nu),\quad\sigma^{2}\in M(\nu^{2}),\quad\sigma^{3}\in M(\nu^{3}).\]
### Quaternionic Mahowald invariants
We record the following facts about the homotopy groups of stunted quaternionic projective spectra, each of which can be verified using the Adams spectral sequence:
1. In \(\mathbb{H}P_{-2}^{\infty}\), we see that for \(i=0,1,2\), \(2^{i}\sigma[-8]=2^{i}\nu[-4]\), so \(2^{i}\sigma\in M_{\mathbb{H}}(2^{i}\nu)\).
2. In \(\mathbb{H}P_{-3}^{\infty}\), we see that \(\sigma^{2}[-12]=\nu^{2}[-4]\), so \(\sigma^{2}\in M_{\mathbb{H}}(\nu^{2})\).
3. In \(\mathbb{H}P_{-4}^{\infty}\), we see that \(\sigma^{3}[-16]=\nu^{3}[-4]\), so \(\sigma^{3}\in M_{\mathbb{H}}(\nu^{3})\).
**Proposition 5.2**.: _We have_
\[2^{i}\sigma\in M(2^{i}\nu),\quad\sigma^{2}\in M(\nu^{2}),\quad\sigma^{3}\in M (\nu^{3}),\]
_where \(i=0,1,2\)._
### Consequences for transformation groups
Feeding the complex and quaternionic Mahowald invariant computations above into Theorem A, we learn the following about smooth \(U(1)\)- and \(Sp(1)\)-actions on homotopy spheres:
**Corollary 5.3** (Compare with Proposition 5.1 and Proposition 5.2).:
1. _The homotopy spheres_ \(S^{3}\)_,_ \(S^{6}\)_, and_ \(S^{9}\) _corresponding to_ \(\nu\)_,_ \(\nu^{2}\)_, and_ \(\nu^{3}\) _admit nontrivial_ \(U(1)\)_-actions with fixed points the homotopy spheres_ \(S^{1}\)_,_ \(S^{2}\)_, and_ \(S^{3}\) _corresponding to_ \(\eta\)_,_ \(\eta^{2}\)_, and_ \(\eta^{3}\)_, respectively._
2. _The homotopy spheres_ \(S^{7}\) _and_ \(\Sigma^{21}\) _corresponding to_ \(\sigma\) _and_ \(\sigma^{3}\) _admit nontrivial_ \(U(1)\)_- and_ \(Sp(1)\)_-actions with fixed points the homotopy spheres_ \(S^{3}\) _and_ \(\Sigma^{9}\) _corresponding to_ \(\nu\) _and_ \(\nu^{3}\)_, respectively._
3. _The homotopy spheres_ \(S^{7}\) _and_ \(\Sigma^{21}\) _corresponding to_ \(\sigma\) _and_ \(\sigma^{3}\) _admit nontrivial_ \((U(1)\times Sp(1))\)_- and_ \(U(1)^{\times 2}\)_-actions with fixed points the homotopy spheres_ \(S^{1}\) _and_ \(S^{3}\) _corresponding to_ \(\eta\) _and_ \(\eta^{3}\)_, respectively._
## 6. Future Directions
We conclude by discussing some future research directions motivated by this work.
### Further computations
The classical Mahowald invariant has been studied extensively (cf. [23, Pg. 2]). However, beyond the low-dimensional complex Mahowald invariant computations of Ravenel [10] and our low-dimensional computations, we are unaware of any computations of complex and quaternionic Mahowald invariants which cannot be deduced from real Mahowald invariant computations and Theorem C. We propose two projects in this direction:
#### 6.1.1. Nontriviality of complex and quaternionic Mahowald invariants
The real Mahowald invariant of any \(x\neq 1\in\pi_{*}^{s}\) will never contain \(x\), but this is not the case for the complex and quaternionic Mahowald invariants. Ravenel notes [10, Pg. 424] that \(\alpha\in M_{\mathbb{C}}(\alpha)\) if and only if \(\alpha\) is not in the image of the map induced by the \(U(1)\)-transfer \(\Sigma\mathbb{C}P_{0}^{\infty}\to S^{0}\). An analogous result holds with \(M_{\mathbb{H}}(\alpha)\) and the \(Sp(1)\)-transfer. This motivates the following:
**Problem 6.1**.: _Determine the images of the maps_
\[\pi_{*-1}(\mathbb{C}P^{\infty})\to\pi_{*}^{s}\quad\text{ and }\quad\pi_{*-3}( \mathbb{H}P^{\infty})\to\pi_{*}^{s}\]
_induced by the \(U(1)\)- and \(Sp(1)\)-transfers._
#### 6.1.2. Efficient computation of certain real Mahowald invariants
Consider the fact that \(\sigma^{3}\in M_{\mathbb{R}}(\nu^{3})=M_{\mathbb{H}}(\nu^{3})\). In the real case, this follows from studying the homotopy groups of \(\mathbb{R}P_{-13}^{\infty}\), but in the quaternionic case, it follows from studying the homotopy groups of the much sparser \(\mathbb{H}P_{-4}^{\infty}\). This suggests that, in cases where the quaternionic Mahowald invariant is nontrivial, it might be more efficient to compute the real Mahowald invariant using the quaternionic stunted projective spectra instead of real stunted projective spectra.
**Problem 6.2**.: _Compute previously unknown real Mahowald invariants using the quaternionic Mahowald invariant and Theorem C._
### Higher symmetries and the chromatic filtration
In [10, Conj. 12], Mahowald and Ravenel conjecture that (roughly speaking) the real Mahowald invariant increases chromatic height. This conjecture is supported by explicit computations at low heights (see [11, Pg. 2] for a summary). We suggest the following generalization of their redshift conjecture:
**Conjecture 6.3**.: _If \(Y\) is a type \(n\) finite complex with \(v_{n}\)-periodic self-map \(v\) and \(\alpha\in\pi_{*}(\Sigma^{-d_{\mathbb{K}}}Y)\) is \(v\)-periodic, then the coset \(M_{\mathbb{K}}(\alpha)\) consists of entirely \(v_{n}\)-torsion elements provided \(\alpha\not\in M_{\mathbb{K}}(\alpha)\)._
_Let \(w:\Sigma^{d}Y\to Y\) be a power of \(v\) which annihilates every element in \(M_{\mathbb{K}}(\alpha)\). Let \(Z=\operatorname{cofib}(w)\), so \(Z\) has type \(n+1\) and every element in \(M_{\mathbb{K}}(\alpha)\) extends to a map from \(Z\) to a suitable sphere. At least one of these maps is \(v_{n+1}\)-periodic._
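To illustrate the shape of the conjecture in the lowest case — invoking only Adams' classical periodicity theorem — one may take \(Y=S/2\), the mod \(2\) Moore spectrum, a type \(1\) complex admitting a \(v_{1}\)-periodic self-map
\[v=v_{1}^{4}:\Sigma^{8}S/2\to S/2.\]
The conjecture then predicts that for any \(v\)-periodic \(\alpha\in\pi_{*}(\Sigma^{-d_{\mathbb{K}}}S/2)\) with \(\alpha\not\in M_{\mathbb{K}}(\alpha)\), every element of \(M_{\mathbb{K}}(\alpha)\) is annihilated by a power of \(v_{1}\), and that the resulting extensions to \(Z=\operatorname{cofib}(w)\) include a \(v_{2}\)-periodic map.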
On the other hand, Stolz's theorem implies that a framed homotopy sphere which is a \(k\)-fold real Mahowald invariant admits a nontrivial smooth \((C_{2})^{\times k}\)-action, and Theorem A implies analogous results for \(U(1)^{\times k}\) and \(Sp(1)^{\times k}\)-actions. This motivates the following question about the relationship between the chromatic filtration of the stable homotopy groups of spheres and the smooth transformation groups of exotic spheres:
**Question 6.4**.: Let \(n\geq 1\). Suppose \(\alpha\in\pi_{k}^{s}\) is \(v_{n}\)-periodic and let \(\Sigma^{k}\) be the framed sphere corresponding to \(\alpha\) under the Pontryagin-Thom isomorphism. Is the degree of symmetry of \(\Sigma^{k}\) at least \(n\)?
### Extension to other compact Lie groups
In their application of Mahowald invariants to the Geography Problem for \(4\)-manifolds, Hopkins-Lin-Shi-Xu defined [10, Def. 1.25] the \(G\)-equivariant Mahowald invariant of \(\alpha\in\pi_{*}^{G}(S^{0})\) with respect to a non-nilpotent element \(\beta\in\pi_{*}^{G}(S^{0})\) for any compact Lie group \(G\). Here, \(\pi_{*}^{G}(S^{0})\) denotes the \(RO(G)\)-graded \(G\)-equivariant stable stems.
If \(G=C_{2}\), a result of Bruner and Greenlees [1] implies that one can recover the real Mahowald invariant from the \(C_{2}\)-equivariant Mahowald invariant with respect to the Euler class
\[[a_{\sigma}:S^{0}\hookrightarrow S^{\sigma}]\in\pi_{-\sigma}^{C_{2}}(S^{0}).\]
Similar arguments imply that the complex and quaternionic Mahowald invariants can be recovered from the \(U(1)\)- and \(Sp(1)\)-equivariant Mahowald invariants with respect to the Euler classes
\[[a_{\lambda}:S^{0}\hookrightarrow\mathbb{C}^{+}]\in\pi_{-\lambda}^{\mathbb{T} }(S^{0}),\]
\[[a_{\lambda^{\prime}}:S^{0}\hookrightarrow\mathbb{H}^{+}]\in\pi_{-\lambda^{ \prime}}^{Sp(1)}(S^{0}),\]
respectively.
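Recall — a standard fact recorded here only for orientation — that each of these Euler classes sits in a cofiber sequence
\[S(V)_{+}\to S^{0}\xrightarrow{a_{V}}S^{V},\]
where \(S(\sigma)=C_{2}\), \(S(\lambda)=U(1)\), and \(S(\lambda^{\prime})=Sp(1)\), each acting freely on itself by translation; inverting the Euler class thus corresponds to passing to geometric fixed points.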
If \(G=SO(n)\), then the Euler class
\[[a_{\lambda_{n}}:S^{0}\hookrightarrow(\mathbb{R}^{n})^{+}]\in\pi_{-\lambda_{n }}^{SO(n)}(S^{0})\]
is non-nilpotent, and we can define similar classes for \(SU(n)\) and \(Sp(n)\) using \(\mathbb{C}^{n}\) and \(\mathbb{H}^{n}\). Following the procedure from [10, Rmk. 1.26(2)], we may define a \(G\)-equivariant Mahowald invariant
\[M^{G}:\pi_{*}^{s}\rightsquigarrow\pi_{*}^{s}\]
carrying elements from the stable stems to cosets in the stable stems.
**Conjecture 6.5**.: _Let \(G\in\{SO(n),SU(n),Sp(n)\}\). Suppose that \(\beta\in M^{G}(\alpha)\) with \(\beta\neq\alpha\). Then the homotopy sphere corresponding to \(\beta\) admits a smooth \(G\)-action with fixed points the homotopy sphere corresponding to \(\alpha\)._
### Curvature bounds
While the scalar curvature of exotic spheres is completely understood by the work of Gromov-Lawson [1] and Stolz [14], metrics of positive Ricci and nonnegative sectional curvature are still mysterious. Wraith [20] proved that every exotic sphere which bounds a stably parallelizable manifold admits a Riemannian metric of positive Ricci curvature, as does a certain exotic \(8\)-sphere which does not bound a stably parallelizable manifold. The celebrated work of Gromoll-Meyer [13], Grove-Ziller [1], and Goette-Kerin-Shankar [11] shows that every exotic \(7\)-sphere admits a metric of nonnegative sectional curvature. As far as we are aware, these are the only known examples of exotic spheres admitting metrics of positive Ricci and nonnegative sectional curvature.
There are many results relating group actions on manifolds to the existence of Riemannian metrics with prescribed curvature bounds, e.g., the work of Lawson-Yau [10] for positive scalar curvature, Searle-Wilhelm [17] for positive Ricci curvature, and Grove-Ziller [1] for nonnegative sectional curvature. We would like to address the following (rather naive) problem:
**Problem 6.6**.: _Suppose that the homotopy spheres \(\Sigma^{k}\) and \(\Sigma^{k+d_{\mathbb{K}}n}\) are related as in Theorem A, and suppose that \(\Sigma^{k}\) admits a Riemannian metric with a lower bound on its scalar, Ricci, or sectional curvature. Determine whether the homotopy sphere \(\Sigma^{k+d_{\mathbb{K}}n}\) admits a \(G_{\mathbb{K}}\)-invariant Riemannian metric with the same curvature bound._
|
2309.13709 | Universal Spin Teichmueller Theory, I. The action of P(SL(2,Z)) on
Tess^+ | Earlier work took as universal mapping class group the collection PPSL(2,Z)
of all piecewise PSL(2,Z) homeomorphisms of the unit circle S^1 with finitely
many breakpoints among the rational points. The spin mapping class group
P(SL(2,Z)) introduced here consists of all piecewise-constant maps from S^1 to
SL(2,Z) which projectivize to an element of PPSL(2,Z). We also introduce a spin
universal Teichmueller space Tess^+ covering the earlier universal Teichmueller
space Tess of tesselations of the Poincare disk D with fiber the space of Z/2
connections on the graphs dual to the tesselations in D. There is a natural
action of P(SL(2,Z)) on Tess^+ which is universal for finite-type hyperbolic
surfaces with spin structure in the same sense that the action of PPSL(2,Z) on
Tess is universal for finite-type hyperbolic surfaces. Three explicit elements
of P(SL(2,Z)) are defined combinatorially via their actions on Tess^+, and the
main new result here is that they generate P(SL(2,Z)). Background, including
material on hyperbolic and spin structures on finite-type surfaces, is sketched
down to first principles in order to motivate the new constructions and to
provide an overall survey. A companion paper to this one gives a finite
presentation of the universal spin mapping class group P(SL(2,Z)) introduced
here. | Robert Penner | 2023-09-24T17:45:05Z | http://arxiv.org/abs/2309.13709v3 | # Universal spin Teichmuller theory, I.
###### Abstract.
Earlier work took as universal mapping class group the collection \(\mathrm{PPSL}(2,\mathbb{Z})\) of all piecewise \(\mathrm{PSL}(2,\mathbb{Z})\) homeomorphisms of the unit circle \(S^{1}=\partial\mathbb{D}\) with finitely many breakpoints among the rational points in \(S^{1}\). The spin mapping class group \(\mathrm{P}(\mathrm{SL}(2,\mathbb{Z}))\) introduced here consists of all piecewise-constant maps \(S^{1}\to\mathrm{SL}(2,\mathbb{Z})\) which projectivize to an element of \(\mathrm{PPSL}(2,\mathbb{Z})\). We also introduce a spin universal Teichmuller space \(\mathcal{T}ess^{+}\) covering the earlier universal Teichmuller space \(\mathcal{T}ess\) of tesselations of \(\mathbb{D}\) with fiber the space of \(\mathbb{Z}/2\) connections on the graph dual to the tesselation in \(\mathbb{D}\). There is a natural action \(\mathrm{P}(\mathrm{SL}(2,\mathbb{Z}))\subset\mathcal{T}ess^{+}\) which is universal for finite-type hyperbolic surfaces with spin structure in the same sense that \(\mathrm{PPSL}(2,\mathbb{Z})\subset\mathcal{T}ess\) is universal for finite-type hyperbolic surfaces. Three explicit elements of \(\mathrm{P}(\mathrm{SL}(2,\mathbb{Z}))\) are defined combinatorially via their actions on \(\mathcal{T}ess^{+}\), and the main new result here is that they generate \(\mathrm{P}(\mathrm{SL}(2,\mathbb{Z}))\). Background, including material on hyperbolic and spin structures on finite-type surfaces, is sketched down to first principles in order to motivate the new constructions and to provide an overall survey. A companion paper to this one gives a finite presentation of the universal spin mapping class group \(\mathrm{P}(\mathrm{SL}(2,\mathbb{Z}))\) introduced here.
It is a pleasure to thank Igor Frenkel, Athanase Papadopoulos and Anton Zeitlin for discussions and Barry Mazur and Dennis Sullivan for remarks on exposition.
Keywords: Classical and universal Teichmuller space, Riemann moduli space, mapping class group, spin structure, Thompson group T.
Milen's graphic art, even as this confluence has been passed on to the next generation of their children.
Po is one of the main founders of the Orsay topology group, and thanks to his network of friends and relations, he is the one in that group who was the most open to the world mathematical community, both East and West. He has always worked on the most difficult problems [31], including the Poincare Conjecture in dimension three and the smooth Schoenflies Problem in dimension four, on which his studies continue today, including the related massive project [28, 29, 30] in geometric group theory.
Along with his lifelong best friend Barry Mazur, independently Po [32] and Barry [19] gave the headline first examples of pseudo 4-cells which are not topological 4-cells, but whose products with the unit interval are topological 5-cells. (A pseudo cell is a contractible compact combinatorial manifold with boundary.) Among other fruitful collaborations including his celebrated Immersion Theorem [11] with Andre Haefliger, Po's work with Boone and Haken [3] provides a fundamental tool for decision problems in topology.
Meanwhile, Po the man is the gentlest, kindest and warmest person I know, a beacon of calm and clear reason to whom I often turn for guidance both mathematical and otherwise. Here to Po in gratitude, admiration and friendship in this celebratory volume, I offer the universal action \(\mathrm{P}(\mathrm{SL}(2,\mathbb{Z}))\subset\mathcal{T}ess^{+}\) of the spin mapping class group on its spin Teichmuller space as a heartfelt 90th birthday present. In order to formulate this, I first take two paragraphs to introduce the main new characters and recall their antecedents:
**Universal Spaces.** First define a _tesselation_\(\tau\) of the Poincare disk \(\mathbb{D}\) to be a locally finite collection of geodesics decomposing \(\mathbb{D}\) into ideal triangles. We shall typically choose a distinguished oriented edge or simply _doe_ in the tesselation \(\tau\), and define the auxiliary space
\[\mathcal{T}ess^{\prime}=\{\text{tesselations of $\mathbb{D}$ with $\text{doe}$}\},\]
which we shall prove later is homeomorphic to the space of orientation-preserving homeomorphisms of the circle \(S^{1}=\partial\mathbb{D}\). The Mobius group \(\mathrm{PSL}(2,\mathbb{R})\) acts on \(\mathcal{T}ess^{\prime}\) with Frechet-manifold quotient
\[\mathcal{T}ess=\mathcal{T}ess^{\prime}/\mathrm{PSL}(2,\mathbb{R})\]
the _universal Teichmuller space_ of [24]. The spin version is introduced here as the total space of the bundle
\[\mathcal{T}ess^{+}\rightarrow\mathcal{T}ess\]
of all equivalence classes of finitely supported functions \(\tau\to\{0,1\}\), where the equivalence relation is generated by changing all three values on the boundary of a fixed triangle complementary to \(\tau\) in \(\mathbb{D}\).
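For instance, the class of the identically zero function over \(\tau\) corresponds, once a doe is chosen, to the standard spin structure that the doe determines, as explained in the discussion of markings below.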
**Universal Groups.** Recall that \(\mathrm{PSL}(2,\mathbb{R})\) denotes the group of fractional linear transformations of the upper half plane, and its double cover \(\mathrm{SL}(2,\mathbb{R})\) the area-preserving linear mappings of the plane \(\mathbb{R}^{2}\). \(\mathrm{SL}(2,\mathbb{Z})<\mathrm{SL}(2,\mathbb{R})\) is the subgroup that preserves the integral lattice in \(\mathbb{R}^{2}\), and \(\mathrm{PSL}(2,\mathbb{Z})\) is the projection of \(\mathrm{SL}(2,\mathbb{Z})\) to \(\mathrm{PSL}(2,\mathbb{R})\). Riemann surfaces (with known elementary exceptions) are quotients of the upper half plane by torsion-free discrete subgroups of \(\mathrm{PSL}(2,\mathbb{R})\), and lifts of these subgroups to \(\mathrm{SL}(2,\mathbb{R})\) correspond precisely to spin structures on these Riemann surfaces. In the universal setting, the mapping class group is the collection \(\mathrm{PPSL}(2,\mathbb{Z})\) of all piecewise \(\mathrm{PSL}(2,\mathbb{Z})\) homeomorphisms of the unit circle \(S^{1}\) with finitely many breakpoints among the rational points in \(S^{1}\). Elements of \(\mathrm{PPSL}(2,\mathbb{Z})\) are automatically \(\mathcal{C}^{1}\)- but never \(\mathcal{C}^{2}\)-smooth at minimal breakpoints. Moreover, there is a triple incarnation of this group also as the Richard Thompson group \(T\) [5] and the combinatorial Ptolemy group acting on tesselations with doe by flips from [24]. The new spin mapping class group \(\mathrm{P}(\mathrm{SL}(2,\mathbb{Z}))\) consists of all piecewise-constant maps \(S^{1}\to\mathrm{SL}(2,\mathbb{Z})\) which projectivize to an element of \(\mathrm{PPSL}(2,\mathbb{Z})\). This group is defined combinatorially by its action on \(\mathcal{T}ess^{+}\) and then identified with the spin mapping class group in the proof of the next result.
Here then is the birthday gift for Po:
**Main Theorem**. _The group \(\mathrm{P}(\mathrm{SL}(2,\mathbb{Z}))\) generated by the three transformations \(\alpha,\beta,t\) illustrated in Figure 1 acting on \(\mathcal{T}ess^{+}\) is precisely the group of piecewise maps \(S^{1}\to\mathrm{SL}(2,\mathbb{Z})\) which projectivize to homeomorphisms in \(\mathrm{PPSL}(2,\mathbb{Z})\)._
My goal here is to explain these new constructions with motivations going back to first principles in some detail, which I hope might be useful whether as survey or invitation. The entire story comes together diagrammatically in Figure 1 illustrating the action of the generators of the universal spin mapping class group on its Teichmuller space, a kind of emblem that I hope Milen might even appreciate as design.
A companion paper [26] to this one derives a finite presentation of \(\mathrm{P}(\mathrm{SL}(2,\mathbb{Z}))\) from that of \(\mathrm{PPSL}(2,\mathbb{Z})\approx T\). In fact, [26] contains the first complete derivation in the literature of the latter finite presentation as well, though an equivalent presentation was given in [17] seemingly based on [24] and on unpublished notes of Thompson. By construction,
the diagram
\[\begin{array}{ccc}\mathrm{P}(\mathrm{SL}(2,\mathbb{Z}))\times\mathcal{T}ess^{+}& \rightarrow&\mathcal{T}ess^{+}\\ \downarrow&&\downarrow\\ \mathrm{PPSL}(2,\mathbb{Z})\times\mathcal{T}ess&\rightarrow&\mathcal{T}ess \end{array}\]
commutes, where the vertical maps are the natural forgetful maps.
As I first learned as a graduate student from the Orsay _Travaux de Thurston sur les Surfaces_ seminars [8] co-organized by Po, the classical Teichmuller theory originated as a topic in complex analysis and evolved under William Thurston's masterful infusion of hyperbolic geometry. This has led to a combinatorial approach useful more recently for studying the Riemann moduli space and Teichmuller space \(T(F)\) of an orientable surface \(F=F_{g}^{s}\) of genus \(g\geq 0\) with \(s\geq 1\) punctures, where \(2-2g-s<0\).
The key construction, as recalled later, canonically associates a decomposition \(\Delta(\Gamma)\) of \(F\) into ideal polygons to a hyperbolic structure \(\Gamma\) on \(F\) suitably decorated with one horocycle about each of its punctures. Furthermore, solving the equation \(\Delta(\Gamma)=\Delta\) in \(\Gamma\) for fixed \(\Delta\) yields an ideal cell decomposition of the decorated Teichmuller space itself, which is a difficult theorem in [23]. Generically, an ideal triangulation of \(F\) is the label for a top-dimensional cell in the decorated Teichmuller space of \(F\). In effect in the universal setting, an ideal triangulation of the surface is replaced by a tesselation of its universal cover.
Figure 1. The three generators \(\alpha,\beta,t\) of \(\mathrm{P}(\mathrm{SL}(2,\mathbb{Z}))\) acting on a marked tesselation with distinguished oriented edge in the two cases that the distinguished edge may or may not have a non-zero label in \(\mathbb{Z}/2\) as indicated with a box icon. The distinguished oriented edge is indicated with an arrow.
For a universal Teichmuller space, we demand [24] a Frechet manifold \(\mathcal{T}\) supporting the action \(\mathcal{P}\subset\mathcal{T}\) of a universal mapping class group \(\mathcal{P}\) so that:
\(\bullet\) there are injective homomorphisms of the classical mapping class groups \(MC(F)\) into explicit completions \(\widehat{\mathcal{P}}_{F}\subset\mathcal{T}\) of \(\mathcal{P}\);
\(\bullet\) there are embeddings \(T(F)\subset\mathcal{T}\) of the classical Teichmuller spaces which are equivariant for the \(MC(F)\)-action;
\(\bullet\) the Weil-Petersson Kahler geometry on each \(T(F)\) is induced by pulling back a universal geometry on \(\mathcal{T}\).
Our universal Teichmuller space satisfying these conditions and more is given by the quotient
\[\mathcal{T}ess=\mathcal{T}ess^{\prime}/\mathrm{PSL}(2,\mathbb{R}).\]
Suitably decorating a tesselation with horocycles at its ideal points, a construction analogous to the classical case for finite-type surfaces (namely, a convex hull construction in Minkowski space, cf. the appendix) provides an ideal polygonal decomposition of \(\mathbb{D}\), and the space of decorated tesselations itself again admits a corresponding decomposition by solving the analogous equation \(\Delta(\Gamma)=\Delta\).
Also from [24], there are global so-called lambda length coordinates on \(\mathcal{T}ess\) providing the Frechet structure. In fact, the log lambda length deformations also from [24] lead to an infinite-dimensional Lie algebra of piecewise \(\mathfrak{sl}_{2}\) vector fields on \(S^{1}\) with breakpoints among the rationals, a kind of completion of the loop algebra of \(\mathfrak{sl}_{2}\), introduced in [18] and more recently studied in [10] as part of a larger program with implications to and from the current paper. This Lie algebra denoted \(\mathfrak{psl}_{2}\) in [18] (for piecewise \(\mathfrak{sl}_{2}\)) was renamed \(\mathfrak{ppsl}_{2}\) in [10] (to emphasize its relation with \(\mathrm{PPSL}(2,\mathbb{Z})\)) and is now more precisely seen to be exactly the Lie algebra of \(\mathrm{P}(\mathrm{SL}(2,\mathbb{Z}))\). Lambda lengths and the associated Lie algebra are not further discussed in the current paper.
The novel contributions of this paper involve including spin structure in the universal setting. But it is not so much novel as it is the natural universal [24] extension from [27] in the classical case. The latter is reformulated in [13], as we shall explain, with related aspects of \(\mathrm{GL}(1|1)\)-graph connections also studied in [4]. An overview of the several equivalent formulations of spin structures on finite-type surfaces is given in the appendix. Considerations of universal spin structure are prefatory to any discussion of super universal Teichmuller theory.
The most elegant and immediately useful characterization of spin structures in our setting is due to Natanzon [22] for a hyperbolic structure on \(F\) specified by a (conjugacy class of projective) uniformizing
representation \(\pi_{1}(F)\to\mathrm{PSL}(2,\mathbb{R})\) of the fundamental group: a _spin structure on \(F\)_ is a lift \(\pi_{1}(F)\to\mathrm{SL}(2,\mathbb{R})\) of the uniformizing representation from \(\mathrm{PSL}(2,\mathbb{R})\) to \(\mathrm{SL}(2,\mathbb{R})\).
The natural spin generalization of \(\mathrm{PPSL}(2,\mathbb{Z})\) based on Natanzon's description is our universal spin mapping class group \(\mathrm{P}(\mathrm{SL}(2,\mathbb{Z}))\) of all piecewise maps \(S^{1}\to\mathrm{SL}(2,\mathbb{Z})\) with no conditions at the breakpoints except we demand that the underlying quotient map \(S^{1}\to\mathrm{PSL}(2,\mathbb{Z})\) lies in \(\mathrm{PPSL}(2,\mathbb{Z})\). Composition in the group \(\mathrm{P}(\mathrm{SL}(2,\mathbb{Z}))\) is given by taking common refinements of the piecewise structures and ordinary composition in \(\mathrm{SL}(2,\mathbb{Z})\) on the resulting pieces, just as in \(\mathrm{PPSL}(2,\mathbb{Z})\).
To define the spin universal space, we extend a different characterization of spin structure from [27] given by equivalence classes of orientations \(\omega:\tau\to\tilde{\tau}\) on the edges in \(\tau\in\mathcal{T}ess\), where \(\tilde{\tau}\) denotes the set of oriented edges of \(\tau\). The equivalence relation is generated by a move called _(Kastelyn) reflection_ on an ideal triangle complementary to \(\tau\), which reverses the orientations of all three frontier edges of a fixed complementary triangle.
In fact, the doe \(e\) itself also extends to another orientation \(\omega_{e}:\tau\to\tilde{\tau}\) induced by the flow in \(S^{1}\) from initial to terminal point on its right and the reverse orientation on its left. (Any specific orientation will suffice to specify the "standard" spin structure determined by the doe.) It follows that on a tesselation \(\tau\) with doe \(e\in\tilde{\tau}\), we might specify the orientation \(\omega\) determining a universal spin structure by instead marking with a box icon each edge \(f\in\tau\) where \(\omega(f)\neq\omega_{e}(f)\), called a _marking_. In this notation taken modulo two on each edge, a reflection on a complementary ideal triangle simply adds a mark to each frontier edge of the triangle on which the reflection is performed. Equivalently as in [13], [4] and the Abstract, a spin structure is a \(\mathbb{Z}/2\)-graph connection on the graph dual to an ideal triangulation of a punctured surface in the classical case, and also now dual to a tesselation with doe in the universal case. In this paper, we shall primarily stick to the formalism of equivalence classes of \(\mathbb{Z}/2\)-markings on edges of a tesselation.
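In symbols, if \(m:\tau\to\{0,1\}\) is such a marking and \(a,b,c\in\tau\) are the frontier edges of a complementary triangle \(T\), then the reflection on \(T\) replaces \(m\) by the marking
\[m^{\prime}(f)=\begin{cases}m(f)+1\ (\mathrm{mod}\ 2),&\text{if }f\in\{a,b,c\};\\ m(f),&\text{otherwise},\end{cases}\]
and two markings determine the same universal spin structure if and only if they differ by a finite composition of such reflections.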
§1 gives an overview of Teichmuller theory, both classical and universal, necessary for a further discussion of our results, with more complete details but few proofs in an appendix. (Let us apologize for pedagogically inevitable minor expositional repetition in slightly different contexts in these two surveys.) §2 discusses universal spin structure and the proof of the Main Theorem. The reference [25] subsumes the classical [23] and universal [24] decorated Teichmuller theories but does not treat spin structures from [27].
## 1. Overview of \(\mathrm{Pt}\approx\mathrm{PPSL}(2,\mathbb{Z})\curvearrowright\mathcal{T}ess\approx\mathrm{Homeo}_{+}(S^{1})/\mathrm{PSL}(2,\mathbb{R})\)
Fix an oriented surface \(F=F^{s}_{g}\) as in the Introduction and choose a point in its Teichmuller space \(T(F)\), i.e., specify some complete finite-area metric on \(F\) of constant Gauss curvature -1 modulo push-forward by diffeomorphisms of \(F\) isotopic to the identity. The universal setting is intended to provide an infinite-dimensional space \(\mathcal{T}\), together with embeddings of each \(T(F)\subset\mathcal{T}\) which may depend upon choices, so that both the geometry and mapping class quotient topology of the classical spaces \(T(F)\) pull-back corresponding universal structures on \(\mathcal{T}\).
It turns out that the further assignment of one real parameter to each puncture of \(F\), which may be interpreted as the hyperbolic length of a specified horocycle about the puncture, is enough to determine a collection in \(F\) of pairwise disjointly embedded arcs with endpoints among the punctures and decomposing \(F\) into ideal polygons, as illustrated on the top in Figure 2. (This extra decoration is also sufficient to define the global affine coordinates on the decorated bundles given by lambda lengths.) The existence of such a canonical decomposition in Theorem A.2 from [23] is the starting point for all the combinatorics.
The Poincare dual of this cell decomposition in \(F\) is a graph \(G\) embedded as its deformation retract, as illustrated on the bottom of Figure 2, and it has the extra structure of a cyclic ordering on the half-edges about each vertex coming from an orientation on \(F\), a so-called _fatgraph_\(G\subset F\). Generically in Teichmuller space and as in the figure,
Figure 2. Examples of ideal triangulations in \(F^{1}_{1}\) and \(F^{4}_{0}\) above and their dual fatgraphs below.
the decomposition \(\Delta\) is an _ideal triangulation_ with only complementary triangles, and in this case \(G\) is a trivalent fatgraph.
In effect, the passage to the universal case entails lifting such an ideal triangulation of \(F\) to its universal cover \(\mathbb{D}\), namely the open unit disk in the complex plane \(\mathbb{C}\) with the Poincare metric. In the generic case, the ideal triangulation \(\Delta\) of \(F\) lifts to a tesselation \(\tau\) of \(\mathbb{D}\). Of course, such a lift to \(\mathbb{D}\) is invariant under the action of the cofinite Fuchsian group uniformizing \(F\). The universal case amounts to keeping the tesselation but dropping the Fuchsian group.
In order to have one natural example in mind and as discussed in detail in the appendix, the _Farey tesselation_\(\tau_{*}\) is illustrated in Figure 3 with its _distinguished oriented edge_ or _doe_\(e_{*}\in\tilde{\tau}_{*}\), where \(\tilde{\tau}\) denotes the set of orientations on edges in \(\tau\). Define
\[\mathcal{T}ess^{\prime}=\{\text{tesselations with doe of }\mathbb{D}\}.\]
The doe is not intrinsic and its specification as extra data is critical to the universal setting since tesselations with doe are rigid in the following sense: Let \(\tau^{0}\subset S^{1}=\partial\mathbb{D}\) denote the collection of ideal points of edges in \(\tau\). Given two tesselations \(\tau_{0}\) and \(\tau\) with doe, there is a unique \(f:\tau_{0}^{0}\rightarrow\tau^{0}\) so that \(x,y\in S^{1}\) are ideal vertices of an edge in \(\tau_{0}\) if and only if \(f(x),f(y)\) are likewise for \(\tau\). This is easy to see inductively, one triangle at a time starting with the triangle to the right of the
Figure 3. The Farey tesselation \(\tau_{*}\) of the Poincaré disk \(\mathbb{D}\) with its triangle \(t\) to the right of its distinguished oriented edge. The extended rationals \(\mathbb{Q}\cup\{\infty\}\) are naturally identified with the ideal vertices of \(\tau_{*}\), as illustrated in low generation.
doe. As explained in the appendix, this map \(f\) moreover interpolates an orientation-preserving homeomorphism also denoted \(f:S^{1}\to S^{1}\).
Taking the Farey tesselation \(\tau_{0}=\tau_{*}\in\mathcal{T}ess^{\prime}\) with doe as a chosen base point in this construction gives the _characteristic map_
\(f_{e\in\tilde{\tau}}\in\mathrm{Homeo}_{+}(S^{1})=\{\text{orientation-preserving homeomorphisms of }S^{1}\}\) canonically associated to \(e\in\tilde{\tau}\), and in fact, this association of characteristic map \(f_{e\in\tilde{\tau}}\) to a tesselation \(\tau\) with doe \(e\) induces an isomorphism
\[\mathcal{T}ess^{\prime}\approx\mathrm{Homeo}_{+}(S^{1}).\]
Our model of _universal Teichmuller space_ is the quotient
\[\mathcal{T}ess=\mathcal{T}ess^{\prime}/\mathrm{PSL}(2,\mathbb{R})\approx \mathrm{Homeo}_{+}(S^{1})/\mathrm{PSL}(2,\mathbb{R}),\]
and its _universal mapping class group_ is generated by a combinatorial move, called a _flip_ defined as follows for any edge \(e\in\tau\in\mathcal{T}ess\): remove \(e\) from \(\tau\) so as to produce a complementary quadrilateral and then replace the one diagonal \(e\) with the other diagonal \(f\) of this quadrilateral, as indicated in Figure 4, along with the dual move on fatgraphs. Using the characteristic map, edges and hence flips can be indexed by elements of \(\tau_{*}\), so the collection of all finite sequences1 of such elements forms a group, the _Ptolemy group_\(\mathrm{Pt}\).
Footnote 1: The completion \(\widehat{\mathcal{P}}_{F}\) of \(\mathrm{PPSL}(2,\mathbb{Z})\) in the wish-list for a universal theory in the Introduction arises from finite sequences of simultaneous flips along all the edges in an orbit of the uniformizing Fuchsian group, i.e., the lift to the universal cover of a flip on one edge in the underlying surface.
In fact via the characteristic map, the Ptolemy group \(\mathrm{Pt}\) can be identified with the subgroup \(\mathrm{PPSL}(2,\mathbb{Z})\) of \(\mathrm{Homeo}_{+}(S^{1})\) consisting of all piecewise-\(\mathrm{PSL}(2,\mathbb{Z})\) homeomorphisms of \(S^{1}\) with only finitely many
Figure 4. The flip on an edge \(e\) in ideal triangulation \(\Delta\) produces another ideal triangulation \(\Delta_{e}\), and equivalently the flip in its dual trivalent fatgraph \(G(\Delta)\) produces \(G(\Delta_{e})\).
pieces whose endpoints lie in \(\tau_{*}^{0}\), and this group turns out to furthermore be isomorphic to Richard Thompson's group \(T\), all of which is detailed in the appendix. Thus, our universal mapping class group
\[\text{Thompson group }T\approx\text{PPSL}(2,\mathbb{Z})\approx\text{Ptolemy group Pt}\]
acts on the universal Teichmuller space \(\mathcal{T}ess\) by flips.
## 2. Proof of Main Theorem
One key explanation for the gift/theorem/emblem in the Introduction is that the enhanced flip on an oriented edge in a tesselation engendered by the move \(\alpha\) in Figure 1 is precisely the dual of the flip transformation on fatgraphs with spin structure discovered and illustrated in Theorem A.6 from [27, 13].
We have already defined the universal spin mapping class group
\[\mathrm{P}(\mathrm{SL}(2,\mathbb{Z}))=\left\{\begin{array}{l}\text{piecewise }\phi:S^{1}\to\mathrm{SL}(2,\mathbb{Z})\text{ with rational breakpoints}\\ \text{such that the projectivization of }\phi\text{ lies in }\mathrm{PPSL}(2,\mathbb{Z})\end{array}\right\}\]
and universal spin Teichmuller space \(\mathcal{T}ess^{+}=\mathcal{T}ess^{\prime}{}^{+}/\text{PSL}(2,\mathbb{R})\), where
\[\mathcal{T}ess^{\prime}{}^{+}=\{(e\in\tilde{\tau},\mu)\in\mathcal{T}ess^{ \prime}\times\{0,1\}^{\tau}:\mu\text{ has finite support}\}/\sim\]
with the equivalence relation \(\sim\) on markings \(\mu\) given by finite compositions of _(Kasteleyn) reflections_ which change the marking on all three frontier edges of any fixed complementary triangle. Just as in [24] for flips, infinite sequences of reflections can also be allowed provided the marking of each edge eventually stabilizes.
For \(A=\left(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right)\in\text{SL}(2,\mathbb{R})\), define the sign
\[\text{sgn}(A)=\begin{cases}\text{sgn}(a+d);&a+d\neq 0,\\ \text{sgn}(c);&\text{else},\end{cases}\]
where \(\text{sgn}(x)\) denotes the usual sign of \(0\neq x\in\mathbb{R}\). In particular, \(\text{trace}(A)=a+d=0=c\) is impossible since \(ad-bc=1\). It follows that a spin hyperbolic structure on a finite-type surface may be thought of as a pair comprised of a projective uniformizing representation
\[\pi_{1}(F)\to\text{PSL}(2,\mathbb{R})\]
as usual together with an induced function
\[\text{sgn}:\pi_{1}(F)\to\{\pm 1\}\]
determining which sign to assign to the representing projective matrix, the one of positive trace (for sgn=1) or of negative trace (for sgn=-1) generically, with a similar definition using the sign of the (2,1)-entry of the representing matrix in the traceless case.
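As a concrete illustration (not from the original text), here is a minimal Python check of this sign convention; the matrices tested are arbitrary examples:

```python
import numpy as np

def sgn(A):
    """Sign of A in SL(2, R): the sign of the trace, falling back to the
    sign of the (2,1)-entry c in the traceless case (c != 0 since det A = 1)."""
    (a, b), (c, d) = A
    assert a * d - b * c == 1
    return int(np.sign(a + d)) if a + d != 0 else int(np.sign(c))

# sgn distinguishes the two lifts {A, -A} of a projective matrix to SL(2, Z).
A = np.array([[1, 1], [0, 1]])
assert sgn(A) == 1 and sgn(-A) == -1
S = np.array([[0, -1], [1, 0]])  # traceless: the (2,1)-entry decides
assert sgn(S) == 1 and sgn(-S) == -1
```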
Analogously, given \(\phi:S^{1}\rightarrow\mathrm{SL}(2,\mathbb{Z})\) in P(SL(2,\(\mathbb{Z}\))), there is by definition the induced projectivization \(\bar{\phi}:S^{1}\rightarrow\mathrm{PSL}(2,\mathbb{Z})\) in \(\mathrm{PPSL}(2,\mathbb{Z})\). Using the sign function in the previous paragraph, \(\phi\) itself is uniquely determined by \(\bar{\phi}\) together with a function \(\sigma:\Pi(\bar{\phi})\rightarrow\{\pm 1\}\) determining the signs, where \(\Pi(\bar{\phi})=\Pi(\phi)\) is the set of pieces of \(\bar{\phi}\) or \(\phi\).
To define \(\sigma\), we first define an auxiliary function \(\hat{\sigma}:\tau-\{\mathrm{doe}\}\rightarrow\{\pm 1\}\) as follows. As depicted in Figure 5 in the presence of a doe, another edge \(e\in\tau\) determines a point \(p_{e}\in U_{e}\subset S^{1}\), where \(U_{e}\) is the component of \(S^{1}-\partial e\) on the other side from the doe. A horocycle \(h_{e}\) centered at \(p_{e}\) meets a countable collection of edges of \(\tau\), some finite number of which have non-zero marking, and we define \(\hat{\sigma}(e)=+1\) if this number is even, and otherwise \(\hat{\sigma}(e)=-1\), completing the definition of \(\hat{\sigma}\). As illustrated in Figure 6, this function \(\hat{\sigma}:\tau-\{\mathrm{doe}\}\rightarrow\{\pm 1\}\) is well-defined on reflection equivalence classes. Let us emphasize that this determination according to parity of the number of marked edges meeting a horocycle derives from the classical case of finite-type surfaces. It is not simply invented here for our separate purposes.
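The invariance illustrated in Figure 6 ultimately rests on the fact that a triangle has either zero or two edges incident on any given ideal point, so a reflection cannot change the parity of marked edges met by a horocycle about that point. Here is a minimal Python check of this incidence fact on a fan-triangulated \(n\)-gon, a finite toy model chosen for illustration only:

```python
n = 8  # ideal n-gon, fan-triangulated from vertex 0 (toy model)
triangles = [[frozenset({0, i}), frozenset({i, i + 1}), frozenset({0, i + 1})]
             for i in range(1, n - 1)]

# A reflection flips the three edges of one triangle; since every triangle
# has 0 or 2 edges incident on any vertex p, the parity of marked edges
# meeting a small horocycle about p is preserved.
for tri in triangles:
    for p in range(n):
        assert sum(1 for edge in tri if p in edge) % 2 == 0
```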
Now suppose that \(e\in\tilde{\tau}\), where \(\tau\) agrees with the Farey tesselation \(\tau_{*}\) outside of a finite ideal polygon \(P\) which moreover contains all edges with non-zero markings as well as the doe. An induction over innermost disks proves that there is a marking equivalent to the given one with all non-zero marked edges lying in the frontier of \(P\), so we may henceforth assume that the marking is of this type.
The ideal vertices of \(P\) contain the breakpoints of \(\Pi(\phi)\), and it therefore suffices to define \(\sigma\) on the pieces so determined, and we set
Figure 5. In the presence of the doe, any other edge \(e\in\tau\) independently of orientations determines a complementary triangle \(T_{e}\) with its ideal vertex \(p_{e}=\partial T_{e}-\partial e\in S^{1}\) in a component \(U_{e}\) of \(S^{1}-\partial e\) in boldface. Also illustrated is a horocycle \(h_{e}\) centered at \(p_{e}\). A finite number of nearby edges may have non-zero markings, as indicated with the box icon.
\(\sigma(I)=\hat{\sigma}(e)\) if the circular interval \(I\) comprising a piece in the piecewise structure of \(\phi\) agrees with some \(U_{e}\). This defines the mapping
\[\langle\alpha,\beta,t\rangle\rightarrow\mathrm{P}(\mathrm{SL}(2,\mathbb{Z}))\]
from the group generated by \(\alpha,\beta,t\) to the group of piecewise SL(2,\(\mathbb{Z}\))-valued maps which descend to PPSL(2,\(\mathbb{Z}\)).
It remains to prove that \(\langle\alpha,\beta,t\rangle\to\mathrm{P}(\mathrm{SL}(2,\mathbb{Z}))\) is bijective. We begin with the proof of surjectivity, and to this end claim that there is a finite composition of reflections on triangles in \(P\) achieving any specified marking on its frontier, which we take to be an ideal \(n\)-gon. To see this, consider the fatgraph \(G\) dual to \(P\), which has \(V_{e}=n\) univalent external vertices and \(V_{i}=n-2\) trivalent internal ones, with \(n\) edges incident on the former and \(n-3\) edges with endpoints among the latter, for a total of \(E=2n-3\), so of course \(V_{e}+V_{i}-E=1\) according to Euler characteristic, whence \(V_{e}=1+E-V_{i}\).
Rather than Kasteleyn reflect on a complementary ideal triangle in \(P\), let us dually imagine reflecting on an interior vertex of \(G\) by changing orientations on each incident edge, and consider the collection of all functions from the set of all edges of \(G\) to \(\mathbb{Z}/2\), a set with cardinality \(2^{E}\). Interior vertex reflections act on this set of functions in the natural way, and there are evidently \(2^{V_{i}}\) possible such compositions. The simultaneous reflection at all interior vertices of \(G\) acts trivially on this set of functions, so only \(2^{V_{i}-1}\) such compositions may act non-trivially. Insofar as \(2^{E}/2^{V_{i}-1}=2^{1+E-V_{i}}\) and there are \(1+E-V_{i}\) external edges, the claim follows, proving surjectivity.
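The counts entering this argument are easy to tabulate; the following Python sketch (a toy bookkeeping check, not from the original) confirms \(E=2n-3\), the Euler characteristic count \(V_{e}+V_{i}-E=1\), and the resulting identity \(2^{E}/2^{V_{i}-1}=2^{1+E-V_{i}}=2^{V_{e}}\):

```python
for n in range(3, 30):
    V_e = n           # univalent external vertices, one per frontier edge of P
    V_i = n - 2       # trivalent internal vertices, one per triangle
    E = n + (n - 3)   # n external edges plus n - 3 internal ones
    assert E == 2 * n - 3
    assert V_e + V_i - E == 1                      # Euler characteristic of a tree
    assert 2 ** E // 2 ** (V_i - 1) == 2 ** (1 + E - V_i) == 2 ** V_e
```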
As for injectivity of \(\langle\alpha,\beta,t\rangle\to\mathrm{P}(\mathrm{SL}(2,\mathbb{Z}))\), consider an element of the kernel, namely a word \(w\) in \(\alpha,\beta,t\) which in particular descends to the identity in \(\mathrm{PPSL}(2,\mathbb{Z})\) upon setting \(t=1\in\mathrm{P}(\mathrm{SL}(2,\mathbb{Z}))\). In other words, in the language of the companion paper [26], \(w\) is a \(t\)-insertion in a relation in \(\mathrm{PPSL}(2,\mathbb{Z})\). Since it is in the kernel, \(w\) moreover preserves some marking, and hence any marking since markings, just as spin
Figure 6. Reflection preserves parity modulo two of the number of marked edges on each nearby horocycle as depicted on the level of dual fatgraphs.
structures, provide a torsor. This is precisely the characterization of a relation in \(\mathrm{P}(\mathrm{SL}(2,\mathbb{Z}))\) based on orbifold covers of quotients that is exploited in [26] to derive the finite presentation of \(\mathrm{P}(\mathrm{SL}(2,\mathbb{Z}))\). An element in the kernel of \(\langle\alpha,\beta,t\rangle\to\mathrm{P}(\mathrm{SL}(2,\mathbb{Z}))\) is thus a relation in the domain, proving injectivity.
## Appendix A. Background material
### 1. Farey tesselation and modular group
Let \(\mathbb{Z}\subseteq\mathbb{Q}\subseteq\mathbb{R}\subseteq\mathbb{C}\) denote the integers, rational, real and complex numbers, respectively, and let \(i=\sqrt{-1}\). Let \(\infty\) indicate a _point at infinity_ and the superscript plus sign denote the one-point completion by \(\infty\), e.g., \(\mathbb{Q}^{+}=\mathbb{Q}\cup\{\infty\}\). The upper half-plane \(\mathcal{U}=\{x+iy\in\mathbb{C}:y>0\}\) endowed with the metric \(ds^{2}=\frac{dx^{2}+dy^{2}}{y^{2}}\) provides a standard model for hyperbolic geometry. The _Cayley transform_\(C:s\mapsto\frac{s-i}{s+i}\) induces an isomorphism of pairs \((\mathcal{U},\mathbb{R}^{+})\to(\mathbb{D},S^{1})\), where \(\mathbb{D}\approx\mathcal{U}\) is the open unit disk in the complex plane with the induced metric, or _Poincare disk_ model of the hyperbolic plane, with its frontier \(S^{1}\) the _circle at infinity_.
Let \(t\) denote the ideal triangle with vertices \(\pm 1,-i\in S^{1}\) as in Figure 1, and consider the group \(\Gamma\) generated by hyperbolic reflections in its frontier edges. Define the _Farey tesselation_\(\tau_{*}\) to be the \(\Gamma\)-orbit of the frontier edges of \(t\), regarded as a set of geodesic _edges_, together with its _distinguished oriented edge_, or _doe_, given by the edge from \(-1\) to \(+1\). A rational point \(\frac{p}{q}\in\mathbb{Q}^{+}=\mathbb{Q}\cup\{\infty\}\), as illustrated in Figure 1, is said to be of _generation g_ if the radial arc in \(\mathbb{D}\) from the origin to \(\frac{p}{q}\) meets the interior of \(g\geq 0\) distinct complementary triangles.
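As a quick numerical aside (illustrative only), the Cayley transform carries the extended reals onto \(S^{1}\) and sends the generation-zero points \(0,1,\infty\) to the vertices \(-1,-i,+1\) of the triangle \(t\):

```python
def cayley(s):
    """Cayley transform C: s -> (s - i)/(s + i), with C(inf) = 1 by convention."""
    return 1 + 0j if s == float("inf") else (s - 1j) / (s + 1j)

# Real points land on the circle at infinity S^1 ...
for s in (0.0, 1.0, -1.0, 2.0 / 3.0, float("inf")):
    assert abs(abs(cayley(s)) - 1) < 1e-12

# ... and 0, 1, inf go to the vertices -1, -i, +1 of the ideal triangle t.
assert abs(cayley(0.0) + 1) < 1e-12
assert abs(cayley(1.0) + 1j) < 1e-12
assert abs(cayley(float("inf")) - 1) < 1e-12
```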
The _modular group_\(\mathrm{PSL}(2,\mathbb{Z})\) is the subgroup of \(\Gamma\) consisting of compositions of an even number of reflections, or in other words the group of two-by-two integral matrices \(A\) of unit determinant modulo the equivalence relation generated by identifying \(A\) with \(-A\). This group acts on the left by orientation-preserving hyperbolic isometry on \(z\in\mathcal{U}\) by fractional linear transformation \(A=\left(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right):z\mapsto\frac{az+b}{cz+d}\). There is also the action of \(A\in\mathrm{PSL}(2,\mathbb{Z})\) on the right (following Gauss) on the rational points, for which we introduce the notation \(A:\mathrm{doe}\mapsto e_{A}\in\tilde{\tau}_{*}\), where \(e_{A}\) has respective initial and terminal points \(\frac{b+ia}{b-ia}\) and \(\frac{d+ic}{d-ic}\) in \(\mathbb{C}\), that is, respective labels \(-\frac{b}{a}\) and \(-\frac{d}{c}\) in Figure 3.
The main point for us is that the modular group leaves set-wise invariant the Farey tesselation \(\tau_{*}\), mapping \(\cup\tau_{*}\) onto \(\cup\tau_{*}\), and any orientation-preserving homeomorphism of the circle leaving \(\tau_{*}\) invariant in this manner lies in \(\mathrm{PSL}(2,\mathbb{Z})\). In fact, the modular group acts simply transitively on the set \(\tilde{\tau}_{*}\) of orientations on edges of \(\tau_{*}\), and a generating
set for \(\mathrm{PSL}(2,\mathbb{Z})\) is given by any two of
\[R=\begin{pmatrix}0&-1\\ 1&1\end{pmatrix},\ S=\begin{pmatrix}0&-1\\ 1&0\end{pmatrix},\ T=\begin{pmatrix}1&-1\\ 0&1\end{pmatrix},\ U=\begin{pmatrix}1&0\\ 1&1\end{pmatrix}.\]
Moreover, \(S^{2}=1=R^{3}\) is a complete set of relations in the generators \(R=T^{-1}U\) and \(S=TU^{-1}T\), so \(\mathrm{PSL}(2,\mathbb{Z})\approx\mathbb{Z}/2\star\mathbb{Z}/3\).
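These facts are easy to verify numerically. The following Python check confirms \(S^{2}=R^{3}=-I\), so that \(S\) and \(R\) have orders two and three in \(\mathrm{PSL}(2,\mathbb{Z})\); note that with the matrices multiplied left to right exactly as printed above one finds \(R=TU\) and \(S=TUT\), which matches the expressions in the text, up to the ambient sign, after replacing \(T\) by \(T^{-1}\), so the spelling of these words depends on the composition convention in force:

```python
import numpy as np

I = np.eye(2, dtype=int)
R = np.array([[0, -1], [1, 1]])
S = np.array([[0, -1], [1, 0]])
T = np.array([[1, -1], [0, 1]])
U = np.array([[1, 0], [1, 1]])

# S has order two and R order three in PSL(2, Z): their powers reach -I.
assert (S @ S == -I).all() and (R @ R @ R == -I).all()

# With plain left-to-right multiplication of the matrices as printed,
# R and S are the following words in T and U.
assert (T @ U == R).all() and (T @ U @ T == S).all()
```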
Geometrically in Figure 1: the elliptic element \(S\) fixes the distinguished edge in \(\tau_{*}\) reversing the orientation of the doe; \(R\) is the elliptic transformation which counter-clockwise cyclically permutes the vertices of the triangle \(t\); and \(U\) (respectively \(T\)) is the parabolic transformation with the fixed point \(\frac{0}{1}\) (respectively \(\frac{1}{0}\)) which cyclically permutes the incident edges of \(\tau_{*}\) in the counter-clockwise sense about \(\frac{0}{1}\) (respectively the clockwise sense about \(\frac{1}{0}\)). Typical aspects of this enumeration of oriented edges by elements of \(\mathrm{PSL}(2,\mathbb{Z})\) are illustrated in Figure 7.
### 2. Tesselations of \(\mathbb{D}\)

An arbitrary tesselation of \(\mathbb{D}\) is a collection \(\tau\) of bi-infinite geodesics locally finite in \(\mathbb{D}\) and decomposing it into complementary ideal triangles. Geodesics in \(\tau\) are called its _edges_, and \(\tau\) itself is regarded as a set of edges. Let \(\tau^{0}\subset S^{1}\) denote the collection of endpoints of all the edges in \(\tau\), which is automatically dense in \(S^{1}\) by local finiteness using that complementary regions are ideal triangles. Let \(\tilde{\tau}\) denote the set of oriented edges of \(\tau\). A distinguished oriented edge, or simply _doe_, on \(\tau\) is the specification of an element of \(\tilde{\tau}\).
Another canonical tesselation of \(\mathbb{D}\) with doe is the dyadic tesselation \(\tau_{d}\), which has the same doe as \(\tau_{*}\) as well as the same generation zero vertices, and which is recursively characterized by the property that one vertex of each triangle complementary to \(\tau_{d}\) bisects the angle between its other two vertices. Thus, \(\tau_{d}^{0}\) consists of the points in \(S^{1}\) with (dyadic) rational arguments, as opposed to \(\tau_{*}^{0}\subset S^{1}\) which is comprised of points with rational rectilinear coordinates.
We again exploit the fact that tesselations with doe are rigid, namely, suppose that \(\tau\) is any tesselation of \(\mathbb{D}\) with doe \(e\in\tilde{\tau}\). There is a
unique \(A\in\mathrm{PSL}(2,\mathbb{R})\) mapping the doe of \(\tau_{*}\) to \(e\) and the triangle \(t\) to its right to the triangle complementary to \(\tau\) on the right of \(e\). One continues in this manner mapping complementary triangles to induce a map \(\phi=\phi_{e\in\tilde{\tau}}:\tau_{*}^{0}\to\tau^{0}\) called the _characteristic map_ of \(e\in\tilde{\tau}\).
It is not hard to see that \(\phi\) is a surjection using local finiteness in \(\mathbb{D}\), and it is an order-preserving injection by construction. As an order-preserving bijection between dense sets in \(S^{1}\), the characteristic map interpolates an orientation-preserving homeomorphism \(\phi:S^{1}\to S^{1}\) of the same name. Letting \(\mathrm{Homeo}_{+}(S^{1})\) denote the topological group of orientation-preserving homeomorphisms of \(S^{1}\) with the compact-open topology and \(\mathcal{T}ess^{\prime}\) the space of tesselations of \(\mathbb{D}\) with doe with the Hausdorff topology on closed subsets of \(\mathbb{D}\), we have
**Theorem A.1** ([24]).: _The assignment \(\phi_{\tau,e}\) of characteristic map to \(e\in\tilde{\tau}\) induces a homeomorphism \(\mathcal{T}ess^{\prime}\to\mathrm{Homeo}_{+}(S^{1})\)._
Our model of _universal Teichmuller space_ is the quotient
\[\mathcal{T}ess=\mathcal{T}ess^{\prime}/\mathrm{PSL}(2,\mathbb{R})\approx \mathrm{Homeo}_{+}(S^{1})/\mathrm{PSL}(2,\mathbb{R}),\]
which can be identified with the collection of all ideal triangulations of \(\mathbb{D}\) with the same doe as \(\tau_{*}\) and the same triangle \(t\) to its right; in effect in \(\mathcal{T}ess^{\prime}\), the doe determines a triangle, which then is normalized in \(\mathcal{T}ess\) to \(t\) by \(\mathrm{PSL}(2,\mathbb{R})\). Bers' universal Teichmuller space [2] is the collection of all quasi-symmetric homeomorphisms of \(S^{1}\) modulo \(\mathrm{PSL}(2,\mathbb{R})\), so our construction naturally generalizes Bers' version by extending to all orientation-preserving homeomorphisms of \(S^{1}\).
### 3. Teichmuller theory for finite-type surfaces
Consider a connected closed orientable surface with genus \(g\geqslant 0\), and let \(F=F_{g}^{s}\) be the complement of a finite set of cardinality \(s\geqslant 1\), whose points are taken to be the punctures of \(F\). Let us tacitly fix an orientation on \(F\) and choose a basepoint for its fundamental group.
Provided the Euler characteristic \(2-2g-s<0\) is negative, \(F=\mathcal{U}/\Gamma\) is _uniformized by a Fuchsian group_, by which we mean: let \(\mathcal{U}\) denote the upper half plane with its Poincare metric \(ds^{2}=\frac{dx^{2}+dy^{2}}{y^{2}}\) and its projective matrix group \(\mathrm{PSL}(2,\mathbb{R})=\mathrm{SL}(2,\mathbb{R})/\pm I\) of oriented isometries, where \(I\) denotes the identity matrix; upon choosing a basepoint in \(F\), there is a (conjugacy class of) injective homomorphism called the _uniformizing representation_\(\pi_{1}\to\mathrm{PSL}(2,\mathbb{R})\) of the fundamental group \(\pi_{1}=\pi_{1}(F)\) of \(F\) onto a discrete _Fuchsian subgroup_\(\Gamma<\mathrm{PSL}(2,\mathbb{R})\) so that non-trivial loops about punctures in \(F\) are represented by parabolic transformations, namely, those with absolute trace equal to two. See [1, 9, 25] for example.
The _Teichmuller space_ of \(F\) is
\[T(F)\ =\ \mathrm{Hom}^{\prime}(\pi_{1},\mathrm{PSL}(2,\mathbb{R}))/\mathrm{PSL}(2, \mathbb{R}),\]
where the prime indicates Fuchsian representations as just defined and the action of \(\mathrm{PSL}(2,\mathbb{R})\) on \(\mathrm{Hom}^{\prime}\) is by conjugation. The _decorated Teichmuller space_ is simply \(\tilde{T}(F)=T(F^{s}_{g})\times\mathbb{R}^{s}_{+}\), where the decoration is conveniently regarded as an \(s\)-tuple of positive real weights on the punctures. In particular, the _mapping class group_\(MC(F)\) of homotopy classes of orientation-preserving homeomorphisms of \(F\) acts on \(T(F)\) and on \(\tilde{T}(F)\) in the natural way by push-forward. \(T(F)\) is homeomorphic to an open ball of real dimension \(6g-6+2s\).
The homotopy class of a collection \(\Delta\) of pairwise disjointly embedded essential arcs in \(F\) connecting punctures is called an _arc family_ in \(F\) provided no two distinct arcs in \(\Delta\) are isotopic. \(\Delta\) is said to _fill \(F\)_ provided each component of \(F-\Delta\) is simply connected. An _ideal cell decomposition_ is a decomposition of a topological space into simplices minus certain of their faces of codimension at least two, e.g., an ideal triangulation \(\Delta\) filling \(F\) decomposes it into triangles with their ideal vertices at infinity at the punctures of \(F\).
There is the following foundational result based on a convex hull construction2 in Minkowski space:
Footnote 2: Uniformize the punctured surface \(F\) in Minkowski space \(\mathbb{R}^{2,1}\) by \(\Gamma\), where \(\Gamma\) acts by Lorentz isometry on \(\mathbb{R}^{2,1}\) and lies in the component of the identity. A choice of horocycles about the punctures in \(F\) provides a finite collection of \(\Gamma\)-orbits of points in the positive light-cone in \(\mathbb{R}^{2,1}\), the closed convex hull of which provides a \(\Gamma\)-invariant convex body in \(\mathbb{R}^{3}\approx\mathbb{R}^{2,1}\). The extreme edges of this body project to an ideal cell decomposition of \(F\). This basic convex hull construction is from [23]. The same applies, to the analogous end, for discrete radially dense subsets of the light-cone in the universal context of [24].
**Theorem A.2** ([23, 25]).: _There is an \(MC(F)\)-invariant smooth ideal cell decomposition \(\mathcal{C}(F)\) of \(\tilde{T}(F)\) whose simplices are indexed by arc families filling \(F\) with faces corresponding to inclusion of arc families._
A maximal arc family is called an _ideal triangulation_ of \(F=F^{s}_{g}\) and contains \(6g-6+3s\) edges. In fact, the lambda lengths discussed in the Introduction on the edges of any ideal triangulation give global affine coordinates on \(\tilde{T}(F)\). By construction, crossing a codimension-one face of \(\mathcal{C}(F)\) corresponds to a _flip_ on an edge \(e\) in an ideal triangulation \(\Delta\) as illustrated in Figure 4, replacing \(e\) by \(f\) to produce another ideal triangulation \(\Delta_{e}\) of \(F\).
As also depicted, dual to an arc family \(\Delta\) filling \(F\), there is a graph \(G=G(\Delta)\subset F\) embedded in \(F\) as a deformation retract, called a _spine_
of \(F\). An orientation on \(F\) induces the counter-clockwise ordering on the half-edges of \(G\) incident on each fixed vertex, thus giving the abstract graph \(G\) the structure of a _fatgraph_ (also called a _ribbon graph_). Dual to flipping diagonals of a quadrilateral in an ideal triangulation of \(F\), there is the combinatorial move depicted in Figure 4 also called a _flip_ on the dual trivalent fatgraph spine \(G\subset F\): contract an edge of \(G\) with distinct endpoints and then expand the resulting 4-valent vertex in the unique distinct manner in order to produce another trivalent fatgraph.
This leads to the _Ptolemy groupoid_\(\operatorname{Pt}(F)\) of \(F\) whose objects are homotopy classes of ideal triangulations of \(F\), or dually trivalent fatgraph spines, and whose morphisms are compositions of flips. \(\operatorname{Pt}(F)\) is the fundamental path groupoid of \(\tilde{T}(F)\) according to Theorem A.2. In fact, finite compositions of flips act transitively on homotopy classes of trivalent fatgraph spines. To see this, use the evident path connectivity of \(\tilde{T}(F)\) to join by a smooth path the cells corresponding to distinct ideal triangulations. Put this path in general position with respect to the ideal cell decomposition \(\mathcal{C}(F)\) so that it crosses only codimension-one faces, as required. See [23, 25] for the details, namely, \(\mathcal{C}(F)\) is algebraic hence smooth so that general position is viable, and every ideal triangulation of \(F\) actually occurs in \(\mathcal{C}(F)\).
It follows that flips generate \(MC(F)\) in the sense that if \(G\subset F\) is a trivalent fatgraph spine and \(\varphi\in MC(F)\), then there is a sequence
\[\varphi(G)=G_{1}-G_{2}-\cdots-G_{n}=G\]
of trivalent fatgraph spines of \(F\) where any consecutive pair differ by a flip, with the similar statement for ideal triangulations. Moreover, the general position argument just given, but now for two-dimensional homotopies of paths, implies
**Corollary A.3**.: _A complete set of relations in the fundamental path groupoid of \(\tilde{T}(F)\) is given by declaring flips involutive together with the links of codimension-two faces as illustrated in Figure 8, namely: commutativity of flips on disjoint quadrilaterals; and the pentagon relation of five consecutive flips alternating between two distinct edges of a triangle._
### 4. Ptolemy group
The _Ptolemy group(oid)_\(\operatorname{Pt}\) has objects given by tesselations with doe of \(\mathbb{D}\) which coincide with \(\tau_{*}\) outside of a finite ideal polygon and morphisms given by finite compositions of flips. A typical flip clearly has order two; however, the flip on the doe is defined to produce another doe so that these oriented edges in this order respect the orientation of \(F\), and hence the flip on the doe has order four, cf. the top-left for \(\alpha\) in Figure 1 ignoring boxes.
Using combinatorial rigidity of tesselations with doe, flips can be labeled by edges of a fixed tesselation, so words in these labels render Pt in fact a group, not just a groupoid as it is for finite-type surfaces \(F\) without doe. Furthermore, \(\mathrm{PSL}(2,\mathbb{Z})\) sits inside Pt as those tesselations which are identical to \(\tau_{*}\) except perhaps for the location of the doe.
Let \(\mathrm{PPSL}(2,\mathbb{Z})<\mathrm{Homeo}_{+}(S^{1})\) denote the collection of all piecewise-\(\mathrm{PSL}(2,\mathbb{Z})\) homeomorphisms of \(S^{1}\) with only finitely many pieces whose endpoints lie in \(\tau_{*}^{0}\). Since any orientation-preserving homeomorphism of \(\mathbb{D}\) preserving \(\cup\tau_{*}\) must lie in the modular group, it follows that \(\mathrm{PPSL}(2,\mathbb{Z})\) can also be described as the collection of characteristic maps of objects in Pt. As pointed out to me by Maxim Kontsevich [16] decades ago now and as exposed by Imbert in [12], by the same logic these two discrete subgroups of \(\mathrm{Homeo}_{+}(S^{1})\) coincide under the isomorphism in Theorem A.1, so we have \(\mathrm{PPSL}(2,\mathbb{Z})\approx\mathrm{Pt}\).
Moreover, it is not difficult to see that the characteristic map \(\tau_{*}\to\tau_{d}\) of the dyadic tesselation \(\tau_{d}\) is precisely the Minkowski \(?\)-function, and it conjugates \(\mathrm{PPSL}(2,\mathbb{Z})\) to the celebrated Thompson group \(T\), cf. [5]. We have arrived at the remarkable fact that the three avatars
\[\mathrm{Thompson\ group}\ T\approx\mathrm{PPSL}(2,\mathbb{Z})\approx\mathrm{Ptolemy \ group\ Pt}\]
of our _universal mapping class group_ Pt act on the universal Teichmuller space \(\mathcal{T}ess\approx\mathrm{Homeo}_{+}(S^{1})/\mathrm{PSL}(2,\mathbb{R})\) by flips.
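Parenthetically, the Minkowski \(?\)-function just mentioned admits a short closed form on rationals via continued fractions, namely \(?([a_{0};a_{1},a_{2},\ldots])=a_{0}+\sum_{k\geq 1}(-1)^{k+1}2^{1-(a_{1}+\cdots+a_{k})}\); here is a minimal Python implementation (illustrative only, not from the original):

```python
from fractions import Fraction

def question_mark(x: Fraction) -> Fraction:
    """Minkowski ?-function on rationals, computed from the continued fraction of x."""
    terms, (num, den) = [], (x.numerator, x.denominator)
    while den:
        q, r = divmod(num, den)
        terms.append(q)
        num, den = den, r
    value, exponent = Fraction(terms[0]), 0
    for k, a_k in enumerate(terms[1:], start=1):
        exponent += a_k
        value += Fraction((-1) ** (k + 1), 2 ** (exponent - 1))
    return value

# ? carries the Farey vertices (rationals) to the dyadic rationals, the
# ideal vertices of the dyadic tesselation tau_d.
assert question_mark(Fraction(1, 2)) == Fraction(1, 2)
assert question_mark(Fraction(1, 3)) == Fraction(1, 4)
assert question_mark(Fraction(2, 5)) == Fraction(3, 8)
```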
Let \(\alpha\) denote the flip on the doe and \(\beta\) denote the move that fixes the tesselation and moves the doe around the triangle to its left, cf. \(\beta\) in Figure 1 again ignoring boxes. Clearly \(\alpha^{2}\sim S\) and \(\beta\sim R\) generate \(\mathrm{PSL}(2,\mathbb{Z})\), which acts simply transitively on \(\tilde{\tau}_{*}\).
Figure 8. Links of codimension-two cells in the ideal cell decomposition of decorated Teichmüller spaces.
As explained in [26], there is the following direct consequence of the ideal analogue of Theorem A.2 from [24]:
**Theorem A.4**.: \(\mathrm{PPSL}(2,\mathbb{Z})\) _is generated by the flip \(\alpha\) on the doe and the transformation \(\beta\) which moves the doe one edge counter-clockwise in the triangle to its left. A presentation in these generators is given by the following relations: \(\alpha^{4}\), \(\beta^{3}\), \((\alpha\beta)^{5}\) and the two commutators \([\beta\alpha\beta,\ \alpha^{2}\beta\alpha\beta\alpha^{2}]\) and \([\beta\alpha\beta,\ \alpha^{2}\beta^{2}\alpha^{2}\beta\alpha\beta\alpha^{2}]\)._
See [17] for the discussion of an algebraic proof of an equivalent presentation based on unpublished computations of Thompson, and [26] for a complete and self-contained proof.
### 5. Spin structures on finite-type surfaces
Milnor's elegant definition [20] of a _spin structure_ on a smooth surface \(F\) is a class in the modulo two first cohomology group of the unit tangent bundle of \(F\) which is non-zero on the fiber class. An equally elegant definition of immediate utility in our situation due to Natanzon [22] on a uniformized surface \(F\) is given by a lift of the uniformizing representation \(\pi_{1}\to\mathrm{PSL}(2,\mathbb{R})\) to \(\pi_{1}\to\mathrm{SL}(2,\mathbb{R})\); this immediately leads to \(\mathrm{P}(\mathrm{SL}(2,\mathbb{Z}))\) as the universal spin mapping class group.
Our starting point for the universal spin Teichmuller space is Johnson's general formulation [15] as an element of the affine \(H^{1}(F;\mathbb{Z}/2)\)-space \(\mathcal{Q}(F)\) of quadratic forms, i.e., functions
\[q:H_{1}=H_{1}(F;\mathbb{Z}/2)\to\mathbb{Z}_{2}\]
which are quadratic with respect to the homology intersection pairing \(\cdot:H_{1}\otimes H_{1}\to\mathbb{Z}_{2}\) on \(H_{1}\) in the sense that \(q(a+b)=q(a)+q(b)+a\cdot b\), for \(a,b\in H_{1}\). Our combinatorial starting point towards Johnson's definition as an element of \(\mathcal{Q}(F)\) is the special case of a general technique from statistical physics by Cimasoni-Reshetikhin in [6, 7]. This is expressed in terms of Kasteleyn orientations (which disagree with the orientation as a boundary an odd number of times, cf. the top of Figure 9) and dimer configurations (disjoint unions of closed edges covering the vertices, again cf. Figure 9) on the one-skeleton of a suitable CW decomposition of \(F\):
**Theorem A.5** (Theorem 2.2 of [7]).: _Fix a dimer configuration \(D\) on a surface graph with boundary \(\mathcal{G}\) for the surface \(\Sigma\) and let \(\alpha\in H_{1}(\Sigma;\mathbb{Z}_{2})\) be represented by oriented closed curves \(C_{1},\ldots,C_{m}\in\bar{\mathcal{G}}\). If \(K\) is a Kasteleyn orientation on \(\mathcal{G}\), then the function \(q_{D}^{K}:H_{1}(\Sigma;\mathbb{Z}_{2})\to\mathbb{Z}_{2}\) given by_
\[q_{D}^{K}(\alpha)=\sum_{i<j}C_{i}\cdot C_{j}+\sum_{i=1}^{m}\bigl(1+n_{C_{i}}^{K}+\ell_{C_{i}}^{D}\bigr)\pmod{2}\]
_is a well-defined quadratic form, where \(\ell_{C}^{D}\) is the number of edges of \(D\) sticking out to the left of \(C\), and \(n_{C}^{K}\) is the number of edges counted with multiplicity where the orientation of \(C\) disagrees with that of \(K\). Moreover, for each fixed dimer \(D\), this establishes an isomorphism as affine \(H^{1}(\Sigma;\mathbb{Z}_{2})\)-space between \(\mathcal{Q}(\Sigma)\) and the collection of equivalence classes of Kasteleyn orientations on \(\mathcal{G}\), with equivalence generated by reversing orientations of all edges incident on some fixed vertex of \(\mathcal{G}\)._
We actually rely here on another formulation of spin structure from [27] which amounts to choosing a CW decomposition in Theorem A.5 suited to the tesselation with a special dimer on it, cf. Figure 9. Fix a trivalent fatgraph spine \(G\subset F\) and consider the set \(\mathcal{O}(G)\) of equivalence classes of all orientations on its edges, where the equivalence relation is generated by _reflections_ at vertices \(v\) of \(G\), by reversing the orientation of each edge incident on \(v\). (These reflections on \(G\) are compositions of six Kasteleyn reflections on \(\overline{\mathcal{G}}\) as in Theorem A.5.)
Here is the combinatorial result from [27] which explains the genesis of the generator \(\alpha\) of P(SL(2,\(\mathbb{Z}\))):
**Theorem A.6** ([27, 13]).: _Suppose \(G\subset F=F_{g}^{s}\) is a trivalent fatgraph spine. Then \(\mathcal{O}(G)\) and \(\mathcal{Q}(F)\) are isomorphic as affine \(H^{1}(F;\mathbb{Z}_{2})\)-spaces. Moreover, the action of the mapping class group \(MC(F)\) on \(\mathcal{Q}(F)\) lifts to the action of the Ptolemy groupoid on \(\mathcal{O}(G)\) described by the following figure_
Figure 9. On the top, CW decomposition of a neighborhood of a trivalent fatgraph \(G\) with its special dimer indicated in boldface. Orientations on hexagons extend to Kasteleyn orientations on the top, and on the bottom is the corresponding orientation on the edge of \(G\) in the two cases.
_where \(\epsilon_{i}\) indicates orientations on edges, and the extra minus sign denotes orientation reversal; in the special case \(F=F_{1}^{1}\), \(\epsilon_{3}\) replaces \(-\epsilon_{3}\)._
Embarrassingly, there were inadvertently two cases after a lengthy reduction from 16 cases in [27], but as noted in [13], the two cases are each reflection-equivalent to the single diagram in the theorem.
Following a notion from physics, a spin structure \(q\in\mathcal{Q}(F)\) on \(F\) distinguishes two types of punctures: if \(\gamma_{p}\) is a simple loop about the puncture \(p\) of \(F\), then \(p\) is a _Neveu-Schwarz_ puncture if \(q([\gamma_{p}])=0\) and otherwise \(p\) is a _Ramond_ puncture, where brackets denote the homology class.
**Theorem A.7** ([14]).: _Consider a fatgraph spine \(G\) of \(F\) and a simple oriented edge-path \(\gamma\) in \(G\) surrounding a fixed puncture \(p\) of \(F\). A spin structure as in the previous theorem determined by the equivalence class of an orientation \(\omega\) on \(G\) has \(p\) as a Ramond puncture if and only if the length of \(\gamma\) has the same parity modulo two as the number of edges of \(G\) where \(\omega\) agrees with \(\gamma\)._ |
2303.00048 | Uncloneable Cryptographic Primitives with Interaction | Much of the strength of quantum cryptography may be attributed to the
no-cloning property of quantum information. We construct three new
cryptographic primitives whose security is based on uncloneability, and that
have in common that their security can be established via a novel
monogamy-of-entanglement (MoE) property:
- We define interactive uncloneable encryption, a version of the uncloneable
encryption defined by Broadbent and Lord [TQC 2020] where the receiver must
partake in an interaction with the sender in order to decrypt the ciphertext.
We provide a one-round construction that is secure in the information-theoretic
setting, in the sense that no other receiver may learn the message even if she
eavesdrops on all the interactions.
- We provide a way to make a bit string commitment scheme uncloneable. The
scheme is augmented with a check step chronologically in between the commit and
open steps, where an honest sender verifies that the commitment may not be
opened by an eavesdropper, even if the receiver is malicious.
- We construct a receiver-independent quantum key distribution (QKD) scheme,
which strengthens the notion of one-sided device independent QKD of Tomamichel,
Fehr, Kaniewski, and Wehner (TFKW) [NJP 2013] by also permitting the receiver's
classical device to be untrusted. Explicitly, the sender remains fully trusted
while only the receiver's communication is trusted.
To show security, we prove an extension of the MoE property of coset states
introduced by Coladangelo, Liu, Liu, and Zhandry [Crypto 2021]. In our stronger
version, the player Charlie also receives Bob's answer prior to making his
guess, simulating a party who eavesdrops on an interaction. To use this
property, we express it as a new type of entropic uncertainty relation which
arises naturally from the structure of the underlying MoE game. | Anne Broadbent, Eric Culf | 2023-02-28T19:46:15Z | http://arxiv.org/abs/2303.00048v1 | # Uncloneable Cryptographic Primitives
###### Abstract
Much of the strength of quantum cryptography may be attributed to the no-cloning property of quantum information. We construct three new cryptographic primitives whose security is based on uncloneability, and that have in common that their security can be established via a novel monogamy-of-entanglement (MoE) property:
* We define _interactive uncloneable encryption_, a version of the uncloneable encryption defined by Broadbent and Lord [14] where the receiver must partake in an interaction with the sender in order to decrypt the ciphertext. We provide a one-round construction that is secure in the information-theoretic setting, in the sense that no other receiver may learn the message even if she eavesdrops on all the interactions.
* We provide a way to make a bit string commitment scheme uncloneable. The scheme is augmented with a check step chronologically in between the commit and open steps, where an honest sender verifies that the commitment may not be opened by an eavesdropper, even if the receiver is malicious. Our construction preserves the assumptions of the original commitment while requiring only a polynomial decrease in the length of the committed string.
* We construct a _receiver-independent quantum key distribution_ (QKD) scheme, which strengthens the notion of one-sided device independent QKD of Tomamichel, Fehr, Kaniewski, and Wehner (TFKW) [13] by also permitting the receiver's classical device to be untrusted. Explicitly, the sender remains fully trusted while only the receiver's communication is trusted. We provide a construction that achieves the same asymptotic error tolerance as the scheme of TFKW.
To show security, we prove an extension of the MoE property of coset states introduced by Coladangelo, Liu, Liu, and Zhandry [12]. In our stronger version, the player Charlie also receives Bob's answer prior to making his guess, thus simulating a party who eavesdrops on an interaction. To make use of this property, we express it as a new type of entropic uncertainty relation which arises naturally from the structure of the underlying MoE game.
###### Contents
* 1 Introduction
* 1.1 Uncloneable encryption with interactive decryption
* 1.2 Uncloneable bit commitment
* 1.3 Receiver-independent QKD
* 1.4 Main technique: MoE entropic uncertainty relations
* 1.5 Further related work
* 1.6 Acknowledgements
* 1.7 Outline
* 2 Preliminaries
* 2.1 Registers and states
* 2.2 Finite vector spaces and subspace coset states
* 2.3 Entropy and extractors
* 3 Novel Coset State Monogamy-of-Entanglement Property
* 3.1 Weak and strong MoE properties
* 3.2 The leaky MoE property
* 3.3 A new type of entropic uncertainty relation
* 4 Interactive Uncloneable Encryption
* 4.1 QECMs with interactive decryption and their security
* 4.2 General properties
* 4.3 Instantiation and security proofs
* 5 Uncloneable Bit Commitment
* 5.1 Motivation and definitions
* 5.2 Security proofs
* 6 Receiver-Independent Quantum Key Distribution
* 6.1 Robust leaky MoE property
* 6.2 Motivation and construction
* 6.3 QKD security
## 1 Introduction
An important feature of quantum information is the no-cloning principle -- the property that an arbitrary quantum state cannot be perfectly copied, unlike a classical string [10, 24, 15]. This idea underpins many of the unique constructions in quantum cryptography [14], beginning with quantum money [25] and quantum key distribution (QKD) [1]. In this work, we give three new constructions of cryptographic primitives that, at the intuitive level, make use of uncloneability: _uncloneable encryption with interactive decryption_, _uncloneable bit commitment_, and _receiver-independent QKD_. An important consequence of the uncloneability is that none of these primitives can be secure classically -- in fact, as classical information can always be copied, the security is clearly unachievable.
In order to prove security of these primitives and formally reason about their "uncloneability," we show a strengthened form of the subspace coset state monogamy-of-entanglement (MoE) property [13, 12], which is a bound on the winning probability of an MoE game built using subspace coset states. MoE games are used to quantify the strength of quantum tripartite correlations. They belong to the family of _extended nonlocal games_[11], which generalise nonlocal games, but are highly distinct from them. The MoE game paradigm, introduced in [15], has recently been used in various uncloneability-related cryptographic constructions [1, 2, 13, 14]. An MoE game is played between two cooperating players, Bob and Charlie, and an honest referee, Alice, all of whom may hold a quantum system. The subspace coset MoE game (called the strong monogamy game in [13]), proceeds as follows. First, Alice samples a subspace \(a\) of dimension \(n/2\) of the space of \(n\)-bit strings \(\mathbb{Z}_{2}^{n}\), and strings \(t,t^{\prime}\) uniformly at random, and prepares the coset state1
Footnote 1: We use lowercase rather than uppercase letters for subspaces as we aim to reserve the uppercase letters for registers and random variables.
\[|a_{t,t^{\prime}}\rangle=\frac{1}{\sqrt{|a|}}\sum_{u\in a}(-1)^{u \cdot t^{\prime}}|t+u\rangle. \tag{1}\]
She sends this state to Bob and Charlie, who may do arbitrary actions to split2 the state between their two systems, after which they are isolated. Next, Alice provides them with a description of \(a\). In order to win, Bob must provide a vector from the coset \(t+a\) and Charlie must provide one from \(t^{\prime}+a^{\perp}\), where \(a^{\perp}\) is the orthogonal complement of \(a\). This game was shown in [12] to have an exponentially small winning probability in \(n\). We strengthen the relation by showing that the same bound holds on a version of the game that is easier to win -- Bob's answer, whether or not it is correct, leaks to Charlie before he makes his guess. In this way, we are able to see the information that Charlie gets as messages sent during an interaction between Alice and Bob, on which he eavesdrops. We refer to this bound on the winning probability as the _leaky_ monogamy-of-entanglement property.
Footnote 2: Note that the splitting operation is represented by an arbitrary quantum channel, chosen by Bob and Charlie. It is not necessarily something simple like a bipartition of the qubits.
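For concreteness, the following numpy sketch builds \(|a_{t,t^{\prime}}\rangle\) for an arbitrary small choice of \(a\), \(t\), \(t^{\prime}\) with \(n=4\) (the particular subspace and strings are invented for illustration) and checks the duality underlying the game: the computational-basis support is the coset \(t+a\), while the Hadamard-basis support is \(t^{\prime}+a^{\perp}\):

```python
import itertools
from functools import reduce
import numpy as np

n = 4
rows = np.array([[1, 0, 1, 1], [0, 1, 1, 0]])  # basis of a, dimension n/2 = 2
a = {tuple(c @ rows % 2) for c in map(np.array, itertools.product((0, 1), repeat=2))}
t, tp = np.array([1, 0, 0, 0]), np.array([0, 0, 1, 0])

def idx(v):  # bit string -> computational-basis index
    return int("".join(str(int(b)) for b in v), 2)

psi = np.zeros(2 ** n)
for u in map(np.array, a):
    psi[idx((u + t) % 2)] = (-1) ** int(u @ tp % 2)
psi /= np.sqrt(len(a))

# Computational-basis support is exactly the coset t + a ...
assert {i for i in range(2 ** n) if abs(psi[i]) > 1e-9} == {idx((np.array(u) + t) % 2) for u in a}

# ... and Hadamard-basis support is exactly t' + a_perp.
H = reduce(np.kron, [np.array([[1, 1], [1, -1]]) / np.sqrt(2)] * n)
a_perp = [w for w in itertools.product((0, 1), repeat=n)
          if all(np.dot(w, u) % 2 == 0 for u in a)]
assert {i for i in range(2 ** n) if abs((H @ psi)[i]) > 1e-9} == {idx((np.array(w) + tp) % 2) for w in a_perp}
```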
### 1.1 Uncloneable encryption with interactive decryption
We introduce, study, and construct a variant of uncloneable encryption that allows for an interaction during the decryption process. Uncloneable encryption as is currently understood was introduced
in [10], building on earlier concepts such as the tamper-evident encryption of [14] and the MoE games of [15]. In its most general form, an uncloneable encryption scheme provides a way to encrypt messages in such a way that they cannot be simultaneously read by two malicious parties, Bob and Charlie, under the assumption that they are isolated once the encryption key is released. To the best of our knowledge, it is unknown whether this is achievable in the plain model, even if we allow computational assumptions. Uncloneable encryption schemes in the quantum random oracle model (QROM) have been studied [10] and provide nearly optimal security. Other computational assumptions have been considered: under the assumption of post-quantum one-way functions, [1] show that it is possible to turn an uncloneable encryption scheme into one with semantic security; and under the assumption of a post-quantum public key encryption scheme, they show how to turn the scheme into a public-key uncloneable encryption scheme. Since all these rely on the existence of uncloneable encryption, a key open question remains concerning the existence of an "uncloneable bit" -- an optimal uncloneable encryption scheme in the plain model that encrypts one-bit message. This is a fundamental object as any uncloneable encryption scheme implies an uncloneable bit [10, Theorem 9]. We work with a simple communication assumption rather than a computational assumption in order to instantiate a new form of uncloneable encryption.
Originally, the encryption was represented by a quantum encryption of classical messages (QECM), a protocol that encrypts classical messages as quantum ciphertexts, which can be decrypted using only the classical encryption key [10]. A QECM scheme is uncloneable if two receivers receive a ciphertext, split it arbitrarily, and only get the key once they are isolated, then they can simultaneously learn the message with at best near-trivial probability. We extend the original non-interactive setting of [10] by allowing interaction in the decryption phase. We call this model _quantum encryption of classical messages with interactive decryption_ (QECM-ID). To adapt uncloneability to a QECM-ID scheme, we again have two receivers, whom we call Bob and Eve, who split a ciphertext. To decrypt, Bob initiates an interaction with Alice. Only after this point does Bob need to be seen as the intended recipient of the message. To avoid the trivial attack where Bob simply gives the decrypted message to Eve, they may not communicate directly during the interaction step -- nevertheless, Eve may eavesdrop on the communication between Alice and Bob. We therefore say that the encryption is uncloneable if, for any actions Bob and Eve take, the probability that Eve guesses the message correctly once the interaction finishes and the decryption protocol does not abort is near-trivial.
We also adapt uncloneable-indistinguishable security, which is meant to represent an uncloneability version of chosen-plaintext attack (CPA) security. For a QECM, this is the property that Bob and Eve cannot simultaneously distinguish the encryption of a chosen message distribution from a fixed message [10]. To adapt this to a QECM-ID, we say that it is uncloneable-indistinguishable secure if, after the decryption interaction, the probability that, simultaneously, Alice accepts the decryption and Eve distinguishes a chosen message distribution from a fixed message is near trivial, _i.e._ half the probability of accepting. Intuitively, the condition that Bob guesses correctly is replaced with the condition that Alice accepts the decryption in order to adapt the definition to a QECM-ID.
Finally, we show that there is an equivalence between uncloneable and uncloneable-indistinguishable security for QECM-IDs. This extends the result, shown in [10], that uncloneable security implies uncloneable-indistinguishable security for QECMs. Further, the equivalence generalises an important property of classical encryption. To the best of our knowledge, it is unknown whether both implications hold for QECMs.
Proof technique. To instantiate an uncloneable QECM-ID, we make use of the leaky MoE property. Alice, to encrypt her message \(m\), uses as a key a subspace \(a\), strings \(h\) and \(t,t^{\prime}\), and a key \(r\) for a quantum-proof strong extractor \(e\). She sends the pair \((m+e(t^{\prime},r)+h,|a_{t,t^{\prime}}\rangle)\) as the ciphertext. The MoE property implies that, if Bob is able to provide \(t\) to Alice, then with high probability Eve is unable to guess \(t^{\prime}\) correctly, even if she learns \(t\). Hence, Alice can use the interaction to check whether Bob knows \(t\). If this succeeds, then \(t^{\prime}\) is secure against Eve with high probability, so Alice sends the remainder of the key \((r,h)\) to Bob. With this, our construction satisfies both forms of uncloneable security, with tighter bounds than those implied by the equivalence between the properties.
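The classical bookkeeping of this one-round interaction can be sketched in Python as follows. This is a toy sketch rather than the construction itself: the quantum side is elided (an honest Bob, once told \(a\), can in principle measure representatives of both cosets since the relevant stabilizer observables commute, so here he is simply handed \(t\) and \(t^{\prime}\)), a two-universal inner-product hash stands in for the quantum-proof strong extractor \(e\), membership of Bob's answer in \(t+a\) is simplified to exact equality, and all names are invented for illustration:

```python
import secrets

n = 8  # length of t, t' in bits; the message here is a single bit

def e(tp, r):
    """Toy stand-in for the quantum-proof strong extractor: inner product mod 2."""
    return sum(x & y for x, y in zip(tp, r)) & 1

def random_bits(k):
    return [secrets.randbits(1) for _ in range(k)]

def encrypt(m):
    t, tp, r = random_bits(n), random_bits(n), random_bits(n)
    h = secrets.randbits(1)
    c = m ^ e(tp, r) ^ h
    # Alice would also prepare and send the coset state |a_{t,t'}> here.
    return c, {"t": t, "tp": tp, "r": r, "h": h}

def interactive_decrypt(c, key, bob_t, bob_tp):
    # Alice's check: Bob proves knowledge of t.  By the leaky MoE property,
    # if he passes, Eve can guess t' (hence the pad) only with small probability.
    if bob_t != key["t"]:
        return None  # abort
    r, h = key["r"], key["h"]  # only now does Alice release the rest of the key
    return c ^ e(bob_tp, r) ^ h

c, key = encrypt(1)
assert interactive_decrypt(c, key, key["t"], key["tp"]) == 1
```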
### 1.2 Uncloneable bit commitment
In bit string commitment, a sender Alice commits to a string that a receiver Bob can only access when she chooses. Ideally, the commitment should be _hiding_, in the sense that Bob cannot learn the string Alice has committed until she chooses to reveal, and _binding_, in the sense that Alice must reveal the same string to which she had committed. Without additional assumptions, bit commitment is impossible [13, 14, 15], but there are a variety of models in which it was shown to exist. For example, under classical computational assumptions [1, 12] (see also [14]) or in the noisy quantum storage model [11]. However, a problem underlying many classically-defined cryptographic primitives is that they are inherently cloneable; if an eavesdropper Eve is able to eavesdrop on the communications between Alice and Bob, she may be able to produce a transcript of their interactions and hence learn the final string whenever it is revealed. This is the case for bit commitment: in fact, the reveal step is usually represented as a public broadcast with no indication of security against an eavesdropper. We remedy this with a method to make a bit string commitment scheme uncloneable.
We define an _uncloneable bit string commitment scheme_ as a commitment scheme with an additional check step in between the commit and reveal steps, where Alice verifies whether an eavesdropper has attempted to clone the commitment. If the commitment passes this check, then an honest Alice can be sure that only Bob will be able to open it during the reveal phase, despite a lack of prior agreement between them. Bob may even be malicious: the only restriction needed on him is that he does not communicate directly to Eve after the check. With this in mind, the point in time when Alice chooses to undertake the check allows it to be run under varying assumptions. In particular, Alice may check immediately after committing, which means that no honest party needs to store any quantum information, but Alice needs to be sure that Bob does not communicate privately with Eve at any point after committing. This is more feasible for near-term quantum devices, but requires that Bob not communicate information to Eve for a period of time between steps. On the other hand, if Alice waits until immediately before revealing to do the check, she may assume that Bob and Eve have arbitrary communication after committing. The drawback is that Bob must store a quantum state even if he is honest.
Proof technique. We use the leaky MoE property to provide a way to turn a commitment scheme into an uncloneable commitment of the above form, which works under the same assumptions as the original commitment. We assume that this is a randomised commitment scheme, where Alice commits to a uniformly random string; this form of commitment is equivalent to the standard one where Alice chooses the string to which she commits [11]. In order to commit to the random
string \(e(t^{\prime},r)+h\), where \(e\) is a quantum-proof strong extractor, Alice commits to \((r,h)\) using the original commitment and sends a coset state \(|a_{t,t^{\prime}}\rangle\) to Bob. Because Bob does not know \(a\), he has no information about \(t^{\prime}\) and \((r,h)\) has not been revealed, so the commitment is hiding. Next, to check for cloning, Alice sends \(a\) to Bob and verifies that he can measure \(t\). Due to the leaky MoE property, this implies that Eve is only able to guess \(t^{\prime}\) with low probability. Finally, to reveal, Alice reveals \((r,h)\) and Bob queries Alice for some information about \(t^{\prime}\) to make sure that their values are consistent, making the scheme binding. With a good choice of strong extractor, this causes only a polynomial decrease in the length of the committed string and an exponentially small change in the binding parameter.
### 1.3 Receiver-independent QKD
Quantum key distribution (QKD), introduced by Bennett and Brassard [1], is a foundationally important quantum cryptographic primitive. In its most basic form, it allows an honest sender, Alice, to share a secret key with an honest receiver, Bob, over a public channel without an eavesdropper Eve learning the key. Many variants of QKD that require only weaker assumptions on the honest parties have been proposed. In particular, device-independent protocols, initiated by Ekert [1], seek to allow QKD with few, if any, assumptions on the behaviour of Alice and Bob's devices. One-sided device-independent QKD, shown to be secure against any eavesdropper in [13], allows Bob's quantum device to be fully untrusted, relying on a monogamy-of-entanglement game winning probability bound for security; and fully device-independent QKD, shown by Vazirani and Vidick [21], allows both Alice and Bob's quantum devices to be untrusted, with security coming from the rigidity of a nonlocal game. These varying assumptions allow implementations of QKD to balance practicality and security, depending on available resources.
We show security of QKD in a model extending the one-sided device-independent model, which we call _receiver-independent QKD_. In this model, Alice's quantum device remains fully trusted, but neither Bob's quantum nor his _classical_ device is trusted. However, we require that Bob's communication be trusted: if Bob's communication were not trusted, any QKD scheme would be susceptible to the trivial attack where Bob sends his final key to Eve. In this way, this model can be seen as the minimal assumption on the receiver, hence warranting the name "receiver-independent".
Receiver-independent QKD schemes are distinct in a number of ways. First, since any computation Bob might want to make is inherently untrusted, he cannot be trusted to check any property of the shared state. As such, only Alice may be given the power to abort the protocol. In this way, the interactions between Alice and Bob take the form of a sequence of challenges and responses. Also, the idea of correctness must be altered to account for the fact that Bob's classical computations are untrusted. This is because it is not possible to be certain that Bob has access to the final key, but it is possible to be sure that his device can compute it.
Proof technique. We construct a receiver-independent QKD scheme using coset states, and show its security using an error-robust generalisation of the leaky MoE property. Alice sends a coset state \(|a_{t,t^{\prime}}\rangle\) to Bob. To verify that Eve does not have \(t^{\prime}\), Alice asks Bob to provide \(t\), acting as the parameter estimation step. If he is able to, with only small error, then Alice issues challenges to Bob that allow her to correct her \(t^{\prime}\) to match the guess \(\hat{t}^{\prime}\) Bob's device claims to have, and then verify
this match, which act as the error correction and information reconciliation steps, respectively. Finally, for privacy amplification, Alice acts on her corrected raw key with a quantum-proof strong extractor and instructs Bob to do the same. It is worth noting that our use of an entropic uncertainty relation, as introduced in Section 1.4 below, brings the security proof intuitively closer to earlier proofs of QKD security, as in [10], than the proof of [11], which works more directly with an MoE game.
### Main technique: MoE entropic uncertainty relations
Entropic uncertainty relations, and earlier uncertainty relations beginning with [14], have played a foundational role in quantum information [15]. Tomamichel, Fehr, Kaniewski, and Wehner show an entropic uncertainty relation in the same scenario as their MoE game [11]. We provide an entropic uncertainty relation that arises naturally from the scenario of the leaky subspace coset MoE game, allowing us to work with the full strength of the MoE property in an entropy setting.
To show our relation, we generalise the min-entropy of guessing \(H_{\min}(X|A)_{\rho}\) to a novel property that we refer to as the _sequential min-entropy_, \(H_{\min}(X|A;Y|B)_{\rho}\), which represents the uncertainty of guessing \(X\) knowing \(A\), followed by guessing \(Y\) knowing \(B\), on the same state. For any measurement \(M\) on \(A\) used to guess \(X\), this decomposes as the entropic uncertainty relation
\[H_{\min}(X|M(A))_{\rho}+H_{\min}(Y|B)_{\rho_{|(M(A)=X)}}\geq H_{\min}(X|A;Y|B) _{\rho}, \tag{2}\]
where \(\rho_{|(M(A)=X)}\) is the state conditioned on the guess of \(X\) being correct. A notable distinction between such an entropic uncertainty and a more standard relation is that the states on the two terms are different, although closely related. The winning probability of the leaky MoE game can directly be expressed using a sequential entropy as \(2^{-H_{\min}(T|AB;T^{\prime}|A^{\prime}TC)_{\rho}}\), where \(\rho_{AA^{\prime}TT^{\prime}BC}\) is the state such that \(A\) and \(A^{\prime}\) hold two copies of the subspace \(a\), \(T\) and \(T^{\prime}\) hold the coset representatives \(t,t^{\prime}\), and \(B\) and \(C\) hold Bob and Charlie's quantum systems once they are isolated. Hence, the leaky MoE property provides the entropic uncertainty relation
\[H_{\min}(T|M(AB))_{\rho}+H_{\min}(T^{\prime}|A^{\prime}TC)_{\rho_{|(M(AB)=T)} }\in\Omega(n). \tag{3}\]
This may be compared to the MoE game-based entropic uncertainty relation that was studied in [11], \(H_{\min}(X|\Theta B)_{\rho}+H_{\min}(X|\Theta C)_{\rho}\geq-2\lg\bigl[(1+2^{-1/2})/2\bigr]\in O(1)\), where \(\rho_{ABC}\) is any quantum state with \(A=\mathbb{Z}_{2}^{n}\), \(X\) is the result of measuring \(A\) in a uniformly random Wiesner basis of states \(|x^{\theta}\rangle=H^{\theta_{1}}|x_{1}\rangle\otimes\cdots\otimes H^{\theta_{n}}|x_{n}\rangle\), and \(\Theta\) is the description of the basis. The relation is found in the same way as their bound on the winning probability of their MoE game, but is strictly weaker than that bound, since it only considers entropies with respect to the same state. This makes it too weak to provide security of cryptographic primitives such as QKD. In fact, even in the case of the subspace coset MoE game, we similarly have
\[H_{\min}(T|M(AB))_{\rho}+H_{\min}(T^{\prime}|A^{\prime}TC)_{\rho}\in O(1), \tag{4}\]
using the same simple attack: half the time, Bob takes the whole state, and the other half of the time, Charlie takes the whole state.
In order to extend the use of the leaky MoE property and associated entropic uncertainty relation to scenarios where errors should be accounted for, such as QKD, we adapt the MoE game to
allow for errors. That is, we show a bound on the winning probability of a robust generalisation of the leaky MoE game where Bob and Charlie's answers are considered to be correct even if some small number of bits are wrong. The important case for QKD is where Bob is allowed to guess \(t\) incorrectly up to relative error \(\gamma\) but Charlie, who represents the eavesdropper, must still answer perfectly. For small enough error, the winning probability remains exponentially small in \(n\). We can also handle this probability of approximate guessing as an entropic uncertainty relation, by representing the "entropy of approximate guessing" as an entropy of exact guessing on a modified state. Explicitly, the relation takes the now-familiar form
\[H_{\min}(T|M(AB))_{\sigma}+H_{\min}(T^{\prime}|A^{\prime}TC)_{\sigma_{|(M(AB)= T)}}\in\Omega(n), \tag{5}\]
where \(\sigma\) is the state modified to account for the error bit flips \(\sigma=\mathbbm{E}_{|u|\leq\gamma n/2}\,X_{T}^{u}\rho X_{T}^{u}\).
### Further related work
The no-cloning property is found in a wide and growing range of cryptographic applications, such as tamper-detection [10], copy-protection [1, 2], certified deletion [1], secure software leasing [1, 2], and uncloneable decryption [1].
The coset states we study act as a generalisation of subspace states -- uniform superpositions of the elements of a subspace -- introduced in the context of quantum money by Aaronson and Christiano [1]. Rather than using the properties of subspaces, it is possible to see the generalisation to coset states as subspace states encrypted with a quantum one time pad \(|a_{t,t^{\prime}}\rangle=X^{t}Z^{t^{\prime}}|a\rangle\). Coset states under this definition have been studied in the context of proofs of knowledge by Vidick and Zhang [21].
Though inspired by uncloneable encryption of [1], uncloneable encryption using a QECM-ID also bears comparison to tamper-evident encryption, introduced by Gottesman [10] (under the name uncloneable encryption). This is a scheme where an honest receiver can verify, during decryption, whether an eavesdropper had attempted to clone an encrypted message. We emphasize that [10] requires both an honest sender and receiver and that our techniques are fundamentally different since they are resilient to a dishonest receiver.
Finally, the recent work of Kundu and Tan [14] provides an alternate extension of the uncloneable encryption paradigm. They consider the case where, for each encryption key, there are multiple decryption keys. They give a construction of an encryption scheme that is uncloneable as long as the attackers receive independently generated keys. Similarly to the interaction in our model, an assumption on the communication during the decryption is used to guarantee uncloneability. Also, their results consider noise on the devices, similarly to what we are concerned with in the robust version of the game used for receiver-independent QKD; arbitrary small leakage of information between Bob and Charlie's devices, contrasting with our fixed but large leakage of Bob's measurement result; and full device-independence, which requires an interactive encryption.
### Acknowledgements
This work was supported by the Air Force Office of Scientific Research under award number FA9550-20-1-0375, Canada's NSERC, and the University of Ottawa's Research Chairs program.
### Outline
In Section 2, we introduce our notation and the relevant basic technical facts. In Section 3, we introduce and analyse the monogamy-of-entanglement game we study, as well as the related entropic uncertainty relation. In Sections 4, 5 and 6 we define and study the primitives of interactive uncloneable encryption, uncloneable bit commitment, and receiver-independent QKD, respectively. In Section 6, we also study the robust version of the MoE game. The MoE properties are given as Theorem 3.2 and Theorem 6.2, and their expressions as entropic uncertainty relations as Corollary 3.7 and Corollary 6.5.
## 2 Preliminaries
In this section, we introduce the notation and recall the technical facts we use in this paper. In Section 2.1, we go over the basics of quantum information and probability that we need; in Section 2.2, we discuss subspaces of vector spaces of bit strings and recall the definition of subspace coset states; and in Section 2.3, we note the definitions of conditional min-entropy and strong extractors.
### Registers and states
A _register_ is a set \(X\) that represents the classical states of a physical system. Note that we may have distinct registers with the same underlying set of states. We represent registers by uppercase Latin letters and classical states from the register by the corresponding lowercase letter. For registers \(X_{1}\) and \(X_{2}\), write the _compound register_\(X_{1}X_{2}=X_{1}\times X_{2}\), representing the states of both systems. A register \(Y\) is a _subregister_ of \(X\) if \(X\) is a compound register with \(Y\) as a factor. For a register \(X\), define the Hilbert space \(\mathcal{H}_{X}\) as the \(|X|\)-dimensional space spanned by the orthonormal basis \(\{|x\rangle\mid x\in X\}\) called the _register basis_. The pure quantum states on \(X\) are given by the unit vectors of \(\mathcal{H}_{X}\), up to phase. We implicitly make use of the isomorphism \(\mathcal{H}_{XY}\cong\mathcal{H}_{X}\otimes\mathcal{H}_{Y}\).
We write the set of _linear operators_\(\mathcal{H}_{X}\to\mathcal{H}_{Y}\) as \(\mathcal{L}(X,Y)\), and if \(X=Y\) as \(\mathcal{L}(X)\); the set of _positive semidefinite operators_ on \(\mathcal{H}_{X}\) as \(\mathcal{P}(X)\), and when \(X\) is evident, write \(P\geq 0\) for \(P\in\mathcal{P}(X)\); and the set of _density operators_\(\mathcal{D}(X)=\ \{\rho\in\mathcal{P}(X)\mid\operatorname{Tr}(\rho)=1\}\), representing the mixed quantum states. An operator \(\rho\in\mathcal{P}(X)\) is a _subnormalised state_ if \(\operatorname{Tr}(\rho)\leq 1\). The definitions below for mixed states extend directly to subnormalised states. Write \(\mathbb{I}_{X}\in\mathcal{L}(X)\) for the _identity operator_, and \(\operatorname{id}_{X}:\mathcal{L}(X)\to\mathcal{L}(X)\) for the _identity channel_. For \(\rho=\rho_{XY}\in\mathcal{D}(XY)\), write \(\rho_{X}=\operatorname{Tr}_{Y}(\rho_{XY})\). A state \(\rho\in\mathcal{D}(X)\) is _classical_ if it is diagonal in the register basis: it corresponds to a probability distribution on \(X\). As a shorthand, write \([x]:=|x\rangle\!\langle x|\in\mathcal{D}(X)\) to represent the density operator of a deterministic classical state. A state \(\rho\in\mathcal{D}(XY)\) is called _classical-quantum_ (cq) or classical on \(X\) if it can be written \(\rho=\sum_{x\in X}p_{x}[x]\otimes\rho_{Y}^{x}\) for some \(\rho_{Y}^{x}\in\mathcal{D}(Y)\) and \(p_{x}\in[0,1]\). By extension, we say a state \(\rho\in\mathcal{D}(X_{1}\cdots X_{m}Y_{1}\cdots Y_{n})\) is \(\operatorname{c}^{m}\!\operatorname{q}^{n}\) if it is classical on each \(X_{i}\). We say a register \(X\) is _classical_ to assume that every state we work with is classical on it. We say that a state \(\rho_{X}\) is _supported_ on \(Y\) if \(Y\) is a subregister of \(X\).
We represent a probability distribution on a register \(X\) by a function \(\pi:X\to[0,1]\) such that \(\sum_{x\in X}\pi(x)=1\). When the probability distribution is implicit, we write the probability of an event \(\Omega\subseteq X\) as \(\operatorname{Pr}[\Omega]=\sum_{x\in\Omega}\pi(x)\). For any \(\mathbb{C}\)-vector space \(V\), we write the _expectation value_ with
respect to the distribution as \(\mathbbm{E}_{x\leftarrow\pi}f(x):=\sum_{x\in X}\pi(x)f(x)\). The classical state corresponding to \(\pi\) is written \(\mu_{\pi}=\mathbbm{E}_{x\leftarrow\pi}[x]\in\mathcal{D}(X)\). For the uniform distribution, we write the expectation simply \(\mathbbm{E}_{x\in X}\) and the state \(\mu_{X}\). Abusing notation a bit, when we consider a random variable with values in a register \(X\), we often refer to the variable as \(X\) as well.
A linear map \(\Phi:\mathcal{L}(X)\to\mathcal{L}(Y)\) is called _completely positive_ if for any register \(Z\) and \(P\in\mathcal{P}(ZX)\), \((\mathrm{id}_{Z}\otimes\Phi)(P)\geq 0\). It is _trace-preserving_ if for any \(P\in\mathcal{P}(X)\), \(\mathrm{Tr}(\Phi(P))=\mathrm{Tr}(P)\); and _trace non-increasing_ if \(\mathrm{Tr}(\Phi(P))\leq\mathrm{Tr}(P)\). The _quantum channels_\(X\to Y\) are the completely positive trace-preserving (CPTP) maps \(\mathcal{L}(X)\to\mathcal{L}(Y)\) -- they represent the most general quantum operations. A _positive operator-valued measurement_ (POVM) is a map \(P:S\to\mathcal{P}(X)\), where \(S\) and \(X\) are registers, such that \(\sum_{s\in S}P(s)=\mathbbm{I}_{X}\); we write \(P_{s}:=P(s)\). A POVM \(P\) is a _projector-valued measurement_ (PVM) if \(P_{s}P_{s^{\prime}}=\delta_{s,s^{\prime}}P_{s}\) for all \(s,s^{\prime}\in S\). We can associate various channels to a measurement. For a POVM \(P:S\to\mathcal{P}(X)\), the _destructive measurement channel_ is \(\Psi_{P}:\mathcal{L}(X)\to\mathcal{L}(S)\) defined as
\[\Psi_{P}(\rho)=\sum_{s\in S}\mathrm{Tr}(P_{s}\rho)[s], \tag{6}\]
representing the classical outcome of a measurement; and the _nondestructive measurement channel_\(\Phi_{P}:\mathcal{L}(X)\to\mathcal{L}(SX)\) defined as
\[\Phi_{P}(\rho)=\sum_{s\in S}[s]\otimes\sqrt{P_{s}}\rho\sqrt{P_{s}}, \tag{7}\]
which represents both the classical outcome and the perturbed quantum state after the measurement. Evidently, \(\mathrm{Tr}_{X}(\Phi_{P}(\rho))=\Psi_{P}(\rho)\). For a state \(\rho_{XY}\in\mathcal{D}(XY)\), write \(\rho_{P(X)XY}=(\Phi_{P}\otimes\mathrm{id}_{Y})(\rho_{XY})\). Similarly, if \(\rho_{XY}\) is classical on \(X\), for any function \(f:X\to S\), we write \(\rho_{f(X)XY}=\sum_{x\in X}p_{x}[f(x)x]\otimes\rho_{Y}^{x}\). For any cq state \(\rho_{XY}\) and any event \(\Omega\subseteq X\) -- which may be phrased as either a subset or as a relation -- write the _partial state_
\[\rho_{\wedge\Omega}=\rho_{XY\wedge\Omega}=\sum_{x\in\Omega}p_{x}[x]\otimes \rho_{Y}^{x}, \tag{8}\]
and the _conditional state_ \(\rho_{|\Omega}=\frac{\rho_{\wedge\Omega}}{\mathrm{Tr}\,\rho_{\wedge\Omega}}\). If the event makes reference to a measurement, _e.g._ \(\Omega=(P(Y)=s)\), or a function evaluation, we assume that the measurement or evaluation is undertaken via the nondestructive channel, which is used to produce the partial or conditional state, after which the result is forgotten by tracing out. This may perturb registers on which the state is non-classical, so we must in particular ensure that any two measurements appearing in the same event are compatible.
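For concreteness, here is a small numpy sketch of the destructive and nondestructive measurement channels of Eqs. (6) and (7), applied to a qubit example; this is an illustration only and assumes nothing beyond the definitions above.

```python
import numpy as np

def destructive(povm, rho):
    # Psi_P(rho): the classical outcome distribution of Eq. (6).
    return np.array([np.trace(P @ rho).real for P in povm])

def nondestructive(povm, rho):
    # Phi_P(rho): for each outcome s, the block sqrt(P_s) rho sqrt(P_s);
    # Eq. (7) stores this block alongside the classical label [s].
    blocks = []
    for P in povm:
        w, V = np.linalg.eigh(P)
        sqrtP = V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.conj().T
        blocks.append(sqrtP @ rho @ sqrtP)
    return blocks

# Example: measure |+> in the computational basis.
plus = np.array([[0.5, 0.5], [0.5, 0.5]])
povm = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
print(destructive(povm, plus))               # [0.5 0.5]
post = nondestructive(povm, plus)
print([np.trace(b).real for b in post])      # traces match Psi_P(plus)
```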
### Finite vector spaces and subspace coset states
Consider the vector space of bit strings \(V=\mathbb{Z}_{2}^{n}\) over the finite field \(\mathbb{Z}_{2}\). The _canonical basis_ of \(V\) is the set \(E=\{e_{1},\ldots,e_{n}\}\), where \(e_{i}\) is the string that is \(1\) at position \(i\) and \(0\) elsewhere. For any \(u\in V\), we expand in the basis as \(u=\sum_{i}u_{i}e_{i}\). The _inner product_ on \(V\times V\to\mathbb{Z}_{2}\) is defined as \(u\cdot v=\sum_{i}u_{i}v_{i}\). For any subspace \(a\subseteq V\), the _orthogonal subspace_
\[a^{\perp}=\,\left\{v\in V\mid u\cdot v=0\ \forall\ u\in a\right\}. \tag{9}\]
This satisfies \((a^{\perp})^{\perp}=a\) and \(\dim a+\dim a^{\perp}=\dim V=n\), but in general \(\operatorname{span}_{\mathbb{Z}_{2}}(a\cup a^{\perp})=a+a^{\perp}\neq V\), for example \(\{00,11\}^{\perp}=\{00,11\}\).
A subspace \(a\subseteq V\) is called a _register subspace_ if it may be expressed as \(a=\operatorname{span}_{\mathbb{Z}_{2}}S\) for some \(S\subseteq E\)[13]. For a register subspace, we have that \(a^{\perp}=\operatorname{span}_{\mathbb{Z}_{2}}S^{c}\), and therefore that \(a+a^{\perp}=V\). In this case, we get the canonical isomorphisms \(V/a\cong a^{\perp}\) and \(V/a^{\perp}\cong a\). We can easily express any register subspace by an indicator vector \(\iota(a)\in V\) defined by \(\iota(a)_{i}=1\) if and only if \(e_{i}\in a\).
The space \(V\) can be seen as a register, giving the Hilbert space \(\mathcal{H}_{V}\cong(\mathbb{C}^{2})^{\otimes n}\).
**Definition 2.1** ([14, 21]).: Let \(a\subseteq V\) be a subspace. Given \(t,t^{\prime}\in V\), the _subspace coset state_
\[|a_{t,t^{\prime}}\rangle=\frac{1}{\sqrt{|a|}}\sum_{u\in a}(-1)^{u \cdot t^{\prime}}|u+t\rangle. \tag{10}\]
If \(u\in t+a\) and \(u^{\prime}\in t^{\prime}+a^{\perp}\), we have that \(|a_{u,u^{\prime}}\rangle\) is equal to \(|a_{t,t^{\prime}}\rangle\) up to global phase. To make use of this, for any subspace \(a\), we fix a linear map \(\mathbb{Z}_{2}^{n-\dim a}\to\mathbb{Z}_{2}^{n}\), \(t\mapsto t_{a}\) such that \(t\mapsto t_{a}+a\) is an isomorphism \(\mathbb{Z}_{2}^{n-\dim a}\cong\mathbb{Z}_{2}^{n}/a\), and then take, for \(t\in\mathbb{Z}_{2}^{n-\dim a}\) and \(t^{\prime}\in\mathbb{Z}_{2}^{\dim a}\), \(|a_{t,t^{\prime}}\rangle:=|a_{t_{a},t^{\prime}_{a^{\perp}}}\rangle\). Then, the coset states \(\left\{|a_{t,t^{\prime}}\rangle\mid t\in\mathbb{Z}_{2}^{n-\dim a},t^{\prime} \in\mathbb{Z}_{2}^{\dim a}\right\}\) are all distinct and form an orthonormal basis of \(\mathcal{H}_{V}\).
If \(a\) is a register subspace, there is a particularly good choice of map. For \(a^{\perp}=\operatorname{span}_{\mathbb{Z}_{2}}\{e_{i_{1}},\ldots,e_{i_{m}}\}\) with \(i_{1}<i_{2}<\ldots<i_{m}\), we take \(t_{a}=\sum_{j=1}^{m}t_{j}e_{i_{j}}\). This allows us to write the subspace coset state in this case as a Wiesner state \(|a_{t,t^{\prime}}\rangle=|x^{\theta}\rangle\), where \(x=t_{a}+t^{\prime}_{a^{\perp}}\) and \(\theta=\iota(a)\).
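For small \(n\), these states are easy to write out explicitly. The following numpy sketch, which is illustrative only, builds \(|a_{t,t^{\prime}}\rangle\) for a register subspace directly from Definition 2.1 and checks the Wiesner-state correspondence just described.

```python
import numpy as np
from itertools import product

def coset_state(n, a_basis, t, t_prime):
    # |a_{t,t'}> for the register subspace a = span{e_i : i in a_basis},
    # using the canonical representative maps described above.
    a_perp = [i for i in range(n) if i not in a_basis]
    t_a = np.zeros(n, dtype=int); t_a[a_perp] = t             # t_a supported on a^perp positions
    tp = np.zeros(n, dtype=int); tp[list(a_basis)] = t_prime  # t'_{a^perp} supported on a positions
    psi = np.zeros(2 ** n)
    for coeffs in product([0, 1], repeat=len(a_basis)):
        u = np.zeros(n, dtype=int); u[list(a_basis)] = coeffs
        idx = int("".join(map(str, (u + t_a) % 2)), 2)
        psi[idx] = (-1) ** int(u @ tp) / np.sqrt(2 ** len(a_basis))
    return psi

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def wiesner(x, theta):
    # |x^theta> = H^theta_1 |x_1> (x) ... (x) H^theta_n |x_n>
    psi = np.array([1.0])
    for xi, ti in zip(x, theta):
        qubit = np.eye(2)[xi]
        psi = np.kron(psi, H @ qubit if ti else qubit)
    return psi

# n = 4, a = span{e_1, e_2}: the coset state equals a Wiesner state.
t, tp = [1, 0], [0, 1]
lhs = coset_state(4, [0, 1], t, tp)
x = [tp[0], tp[1], t[0], t[1]]          # x = t_a + t'_{a^perp}
assert np.allclose(lhs, wiesner(x, [1, 1, 0, 0]))
print("coset state matches the Wiesner state |x^theta>")
```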
### Entropy and extractors
Given a state \(\rho_{XY}\in\mathcal{D}(XY)\), the _conditional min-entropy_ of \(X\) given \(Y\) is defined as
\[H_{\min}(X|Y)_{\rho}=-\lg\inf\;\left\{\operatorname{Tr}(\sigma_{Y})\mid\rho_{ XY}\leq\mathbb{I}_{X}\otimes\sigma_{Y};\sigma_{Y}\in\mathcal{P}(Y)\right\}, \tag{11}\]
where \(\lg\) is the base-two logarithm [15, 16]. Qualitatively, this represents the uncertainty on \(X\), knowing \(Y\). If \(\rho\) is classical on \(X\), this takes on a quantitative meaning: \(2^{-H_{\min}(X|Y)}\) is the maximal probability of guessing \(X\) when given the register \(Y\). In the absence of side information, the conditional min-entropy becomes the min-entropy \(H_{\min}(X)_{\rho}=-\lg\lVert\rho_{X}\rVert\), where the norm here is the operator norm.
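When \(\rho_{XY}\) is classical on both registers, the optimal guessing probability is \(2^{-H_{\min}(X|Y)}=\sum_{y}\max_{x}p(x,y)\), attained by guessing, for each \(y\), the most likely \(x\). A toy computation, with a joint distribution assumed purely for illustration:

```python
import numpy as np

# Joint distribution p(x, y) on X = {0,1,2}, Y = {0,1} (rows x, columns y).
p = np.array([[0.30, 0.05],
              [0.10, 0.20],
              [0.10, 0.25]])

p_guess = sum(p[:, y].max() for y in range(p.shape[1]))  # sum_y max_x p(x, y)
print(f"p_guess = {p_guess:.2f}, H_min(X|Y) = {-np.log2(p_guess):.3f} bits")
# Without side information: H_min(X) = -lg max_x p(x).
print(f"H_min(X) = {-np.log2(p.sum(axis=1).max()):.3f} bits")
```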
We will use strong extractors to go from a condition on the entropy to a near-independence of registers.
**Definition 2.2** ([15]).: Let \(X,Y,Z\) be classical registers. A _quantum-proof \((k,\varepsilon)\)-strong extractor_ is a function \(e:X\times Y\to Z\) that satisfies the following property. Let \(\rho_{XQ}\) be a subnormalised state, where \(Q\) is a quantum register. If \(H_{\min}(X|Q)\geq k\), then
\[\left\lVert\rho_{e(X,Y)YQ}-\mu_{Z}\otimes\mu_{Y}\otimes\rho_{Q} \right\rVert_{\operatorname{Tr}}\leq\varepsilon, \tag{12}\]
where \(\rho_{YXQ}=\mu_{Y}\otimes\rho_{XQ}\).
Here, the norm is the trace norm \(\|A\|_{\mathrm{Tr}}=\frac{1}{2}\operatorname{Tr}\sqrt{A^{\dagger}A}\). Due to [1], many constructions of extractors exist. Though we will tend to stay general, we give an example of their construction that is useful to keep in mind. For any \(m,n\in\mathbb{N}\) and \(\varepsilon>0\), there exists a quantum-proof \((8\lg(3m/2\varepsilon)+m,\varepsilon)\)-strong extractor \(e:\mathbb{Z}_{2}^{n}\times\mathbb{Z}_{2}^{d}\rightarrow\mathbb{Z}_{2}^{m}\), where \(d\in O(\lg(m\sqrt{n}/\varepsilon)^{2}\lg m)\). Of course, for this to be useful, we need that \(k=8\lg(3m/2\varepsilon)+m<n\). Nevertheless, it is possible to achieve an exponentially small error \(\varepsilon=\eta^{m}\) for any output length \(m\) by taking \(n>8\lg(3m/2)+(1+8\lg 1/\eta)m\in O(m)\), though this requires the key length \(d\) to be polynomial in \(m\). This example absolutely defeats the original purpose of strong extractors, which is to extract a large amount of near-uniform randomness using a small seed, but is of great use in our cryptographic applications.
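A standard way to instantiate Definition 2.2 is two-universal hashing, for instance multiplication by a random Toeplitz matrix, which is quantum-proof by the leftover hash lemma. The following sketch is illustrative only, and is not the construction of [1] discussed above.

```python
import numpy as np

def toeplitz_extract(x: np.ndarray, seed: np.ndarray, m: int) -> np.ndarray:
    # e(x, seed): multiply x by the m-by-n Toeplitz matrix whose entries are
    # read off the (n + m - 1)-bit seed, with arithmetic over Z_2.
    n = len(x)
    assert len(seed) == n + m - 1
    T = np.empty((m, n), dtype=int)
    for i in range(m):
        for j in range(n):
            T[i, j] = seed[i - j + n - 1]  # constant along diagonals
    return (T @ x) % 2

rng = np.random.default_rng(0)
n, m = 64, 16
x = rng.integers(0, 2, n)                  # weakly random source
seed = rng.integers(0, 2, n + m - 1)       # uniform public seed
print(toeplitz_extract(x, seed, m))
```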
## 3 Novel Coset State Monogamy-of-Entanglement Property
In this section, we introduce and prove the MoE property that we make use of throughout the paper. In Section 3.1, we recall the MoE properties of coset states that are already known. In Section 3.2, we show our new leaky MoE property: the result is given in Theorem 3.2. Finally, in Section 3.3, we show that this MoE property is equivalent to an entropic uncertainty relation, given as Corollary 3.7.
### Weak and strong MoE properties
Let the register \(V=\mathbb{Z}_{2}^{n}\) and \(A\) be a set of subspaces of \(\mathbb{Z}_{2}^{n}\) of dimension \(n/2\): we take \(A\) to either be the set of all register subspaces of dimension \(n/2\) or all subspaces of dimension \(n/2\). We consider the following monogamy-of-entanglement game, played between a referee Alice, who holds \(V\), and two cooperating players, Bob and Charlie.
1. Alice samples a uniformly random \(a\in A\) and \(t,t^{\prime}\in\mathbb{Z}_{2}^{n/2}\). She prepares the state \(|a_{t,t^{\prime}}\rangle\in\mathcal{H}_{V}\) and sends it to Bob and Charlie.
2. They act by an arbitrary channel \(\Phi:\mathcal{L}(V)\rightarrow\mathcal{L}(BC)\) and then are isolated, so that Bob holds \(B\) and Charlie holds \(C\).
3. Alice shares \(a\) with Bob and Charlie, and they each make guesses of the pair \((t,t^{\prime})\).
4. Bob and Charlie win if their guesses are both correct.
It was shown in [13] that the winning probability of this game is sub-exponentially small in \(n\). This is called the _weak monogamy-of-entanglement property of subspace coset states._
There is also a _strong monogamy-of-entanglement property_, conjectured in the same work, which constrains the winning probability of a related game. The difference here is that the winning condition is slackened: Bob needs only guess \(t\) and Charlie needs only guess \(t^{\prime}\) correctly to win. It was shown in [12] that the winning probability of this game is upper-bounded by \(\sqrt{e}(\cos\frac{\pi}{8})^{n}\).
### The leaky MoE property
We exhibit an even stronger version of the MoE properties by showing that the same bound holds on a family of games that can only be easier to win. In the same setting as above, the game proceeds as follows:
1. Alice samples a uniformly random \(a\in A\) and \(t,t^{\prime}\in\mathbb{Z}_{2}^{n/2}\). She prepares the state \(\left|a_{t,t^{\prime}}\right\rangle\in\mathcal{H}_{V}\) and sends it to Bob and Charlie.
2. They act by an arbitrary channel \(\Phi:\mathcal{L}(V)\rightarrow\mathcal{L}(BC)\) and then are isolated, so that Bob holds \(B\) and Charlie holds \(C\).
3. Alice shares \(a\) with Bob and Charlie.
4. Bob makes a guess \(t_{B}\) of \(t\), which is then given to Charlie; Charlie makes a guess \(t^{\prime}_{C}\) of \(t^{\prime}\).
5. Bob and Charlie win if their guesses are both correct.
We call this the _\((n,A)\)-leaky monogamy-of-entanglement game_. The scenario is illustrated in Fig. 1. An alternate but equivalent way to play the game, in order to bring it closer to the original form of an MoE game, is to have Alice provide Charlie with the correct value of \(t\) rather than Bob's guess. The equivalence can be seen by noting that, in the original interpretation, only the cases when Bob's guess is correct are relevant to the computation of the winning probability. Next, we formalise the strategies and winning probability of this game.
**Definition 3.1**.: A quantum _strategy_ for the \((n,A)\)-leaky MoE game is a tuple of the form \(\mathsf{S}=(B,C,\,\left\{B^{a}\right\}_{a\in A},\,\left\{C^{a,t}\right\}_{a\in A,t\in\mathbb{Z}_{2}^{n/2}},\Phi)\), where
* \(B\) and \(C\) are the registers representing Bob and Charlie's systems, respectively;
Figure 1: The subspace coset MoE games. The additional guesses Bob and Charlie need to make in the weak MoE game are given in light gray, and the additional interaction step in the leaky MoE game is given in dark gray.
* \(B^{a}:\mathbb{Z}_{2}^{n/2}\to\mathcal{P}(B)\) and \(C^{a,t}:\mathbb{Z}_{2}^{n/2}\to\mathcal{P}(C)\) are POVMs, representing Bob and Charlie's measurements;
* \(\Phi:\mathcal{L}(V)\to\mathcal{L}(BC)\) is a quantum channel, representing the splitting operation.
The _winning probability_ of a strategy S is
\[\mathfrak{w}_{n,A}(\mathsf{S})=\mathbbm{E}_{a\in A,\,t,t^{\prime}\in\mathbb{Z}_{2}^{n/2}}\mathrm{Tr}\big{[}(B_{t}^{a}\otimes C_{t^{\prime}}^{a,t})\Phi(\,|a_{t,t^{\prime}}\rangle\!\langle a_{t,t^{\prime}}|)\big{]}. \tag{13}\]
The optimal winning probability of the \((n,A)\)-leaky MoE game is the supremum over all quantum strategies \(\mathfrak{w}^{*}(n,A)=\sup_{\textsf{S}}\mathfrak{w}_{n,A}(\textsf{S})\).
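As a sanity check on these definitions, consider the naive strategy in which \(\Phi\) hands the entire state to Bob: once \(a\) is announced, he measures in the coset-state basis and guesses \(t\) with certainty, while Charlie, holding nothing, guesses \(t^{\prime}\) uniformly, giving \(\mathfrak{w}_{n,A}(\mathsf{S})=2^{-n/2}\). A short sketch comparing this value with the bound of Theorem 3.2 below:

```python
from math import cos, e, pi, sqrt

# Naive strategy: Phi hands the whole coset state to Bob, who measures in the
# coset basis once a is announced and recovers t with certainty; Charlie is
# left with nothing and guesses t' uniformly at random.
for n in [8, 16, 32, 64]:
    w_naive = 2.0 ** (-n / 2)              # Pr[Charlie guesses t' correctly]
    bound = sqrt(e) * cos(pi / 8) ** n     # bound of Theorem 3.2
    print(f"n={n:3d}  naive w = {w_naive:.2e}  bound = {bound:.2e}")
```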
Now, we can formally express the leaky MoE property.
**Theorem 3.2**.: Let \(n\in\mathbb{N}\) and \(A\) be either the collection of register subspaces or the collection of all subspaces of dimension \(n/2\) of \(\mathbb{Z}_{2}^{n}\). Then,
\[\mathfrak{w}^{*}(n,A)\leq\sqrt{e}(\cos\tfrac{\pi}{8})^{n}. \tag{14}\]
First, we note that, as in [14], we need only consider strategies for the \((n,A)\)-leaky MoE game where the measurements \(B^{a}\) and \(C^{a,t}\) are projective, as any measurement may be made projective by dilating using Naimark's theorem. Next, we need an important lemma.
**Lemma 3.3** (Lemma 2 in [14]).: Let \(P^{s}\in\mathcal{P}(H)\) for \(s\in S\) be a collection of positive operators. Then, for any set of mutually orthogonal permutations \(\pi_{s}:S\to S\) (permutations such that \(\pi_{s}\circ\pi_{t}^{-1}\) has a fixed point iff \(s=t\)),
\[\Big{\|}\sum_{s\in S}P^{s}\Big{\|}\leq\sum_{s\in S}\max_{t\in S}\Big{\|}\sqrt{ P^{t}}\sqrt{P^{\pi_{s}(t)}}\Big{\|}.\]
The following technical lemma is the final step of the proof of the theorem.
**Lemma 3.4**.: For any \(a,b\in A\), \(\|P^{a}P^{b}\|\leq\sqrt{\frac{|a\cap b|}{2^{n/2}}}\), where \(P^{a}=\sum_{t,t^{\prime}\in\mathbb{Z}_{2}^{n/2}}\,|a_{t,t^{\prime}}\rangle\! \langle a_{t,t^{\prime}}|\otimes B_{t}^{a}\otimes C_{t^{\prime}}^{a,t}\).
Proof.: First, note that \(P^{a}\leq\sum_{t,t^{\prime}}\,|a_{t,t^{\prime}}\rangle\!\langle a_{t,t^{ \prime}}|\otimes\mathbb{I}_{B}\otimes C_{t^{\prime}}^{a,t}\) and
\[P^{b}\leq\sum_{u,u^{\prime}}\,|b_{u,u^{\prime}}\rangle\!\langle b_{u,u^{ \prime}}|\otimes B_{u}^{b}\otimes\mathbb{I}_{C}=\sum_{u}\Pi_{b+u_{b}}\otimes B _{u}^{b}\otimes\mathbb{I}_{C}, \tag{15}\]
where \(\Pi_{b+u_{b}}=\sum_{v\in b+u_{b}}\,|v\rangle\!\langle v|\) is the projector onto \(b+u_{b}\). Then,
\[\begin{split}\big{\|}P^{a}P^{b}\big{\|}&\leq\Big{\|} \sum_{t,t^{\prime},u}\,|a_{t,t^{\prime}}\rangle\!\langle a_{t,t^{\prime}}|\, \Pi_{b+u_{b}}\otimes B_{u}^{b}\otimes C_{t^{\prime}}^{a,t}\Big{\|}\\ &=\max_{u\in\mathbb{Z}_{2}^{n/2}}\Big{\|}\!\sum_{t,t^{\prime}}\,| a_{t,t^{\prime}}\rangle\!\langle a_{t,t^{\prime}}|\,\Pi_{b+u_{b}}\otimes C_{t^{ \prime}}^{a,t}\Big{\|},\end{split} \tag{16}\]
since the \(B_{u}^{b}\) are orthogonal projectors. Next, by the \(C^{*}\) identity,
\[\Big{\|}\sum_{t,t^{\prime}}\,|a_{t,t^{\prime}}\rangle\!\langle a_{t,t^{\prime}}| \,\Pi_{b+u_{b}}\otimes C_{t^{\prime}}^{a,t}\Big{\|}=\Big{\|}\sum_{t,t^{\prime}} \Pi_{b+u_{b}}\,|a_{t,t^{\prime}}\rangle\!\langle a_{t,t^{\prime}}|\,\Pi_{b+u_{ b}}\otimes C_{t^{\prime}}^{a,t}\Big{\|}^{1/2}. \tag{17}\]
Now, the terms in this sum are Hermitian with orthogonal supports: \(\Pi_{b+u_{b}}|a_{t,t^{\prime}}\rangle\!\langle a_{t,t^{\prime}}|\,\Pi_{b+u_{b}}\) provides the orthogonality for different values of \(t\), and, for equal values of \(t\), \(C_{t^{\prime}}^{a,t}\) provides it for different values of \(t^{\prime}\). Therefore, we can again decompose this norm as the maximum of the norms of each term. Putting this together, we get
\[\big{\|}P^{a}P^{b}\big{\|}\leq\max_{t,t^{\prime},u\in\mathbb{Z}_{2}^{n/2}}\! \|\Pi_{b+u_{b}}\,|a_{t,t^{\prime}}\rangle\!\langle a_{t,t^{\prime}}|\,\Pi_{b+ u_{b}}\|^{1/2}=\max_{t,t^{\prime},u\in\mathbb{Z}_{2}^{n/2}}\sqrt{\,\langle a _{t,t^{\prime}}|\Pi_{b+u_{b}}|a_{t,t^{\prime}}\rangle}, \tag{18}\]
and we complete the proof by noting that
\[\langle a_{t,t^{\prime}}|\Pi_{b+u_{b}}|a_{t,t^{\prime}}\rangle=\frac{1}{2^{n/2}}\sum_{v\in(a+t_{a})\cap(b+u_{b})}\bigl{|}(-1)^{t^{\prime}_{a^{\perp}}\cdot v}\bigr{|}^{2}\leq\frac{|a\cap b|}{2^{n/2}}. \tag{19}\]
Now, we can proceed to the proof of Theorem 3.2, which follows the method of the analogous proof in [20].
Proof of Theorem 3.2.: First, for any strategy, we upper bound the winning probability by the norm of a related operator. Using the Choi-Jamiolkowski representation \(J(\Phi)=\frac{1}{2^{n}}\sum_{u,v\in\mathbb{Z}_{2}^{n}}|u\rangle\!\langle v| \otimes\Phi(\,|u\rangle\!\langle v|)\in\mathcal{D}(VBC)\) of \(\Phi\), we see that
\[\mathfrak{w}_{n,A}(\mathsf{S})=\mathbbm{E}_{a\in A,\,t,t^{\prime}\in\mathbb{Z}_{2}^{n/2}}\mathrm{Tr}\big{[}(B_{t}^{a}\otimes C_{t^{\prime}}^{a,t})\Phi(\,|a_{t,t^{\prime}}\rangle\!\langle a_{t,t^{\prime}}|)\big{]}=\mathrm{Tr}\Big{[}\Big{(}\mathbbm{E}_{a\in A}P^{a}\Big{)}J(\Phi)\Big{]}\leq\Big{\|}\mathbbm{E}_{a\in A}P^{a}\Big{\|}, \tag{20}\]

where \(P^{a}\) is the operator of Lemma 3.4 and the inequality holds because \(J(\Phi)\) is a density operator. It therefore remains to bound this norm; we give the argument in the case where \(A\) is the set of register subspaces. Identify each register subspace of dimension \(n/2\) with the subset \(\gamma\subseteq E\) of size \(n/2\) that spans it, and let \(S\) be the collection of such subsets, so that \(|S|=\binom{n}{n/2}\). We fix a family of mutually orthogonal permutations \(\pi_{s}:S\to S\), indexed by \(s\in S\), such that, for each \(0\leq k\leq n/2\), the
number of permutations such that \(|\gamma\cap\pi_{s}(\gamma)|=\dim(\operatorname{span}\gamma\cap\operatorname{span} \pi_{s}(\gamma))=\frac{n}{2}-k\) for each \(\gamma\) is \(\binom{n/2}{k}^{2}\). Using Lemma 3.3 and then Lemma 3.4, we have, since \(P^{a}\) is a projector,
\[\begin{split}\Big{\|}\mathbbm{E}_{\gamma\in S}P^{\operatorname{span}\gamma}\Big{\|}&\leq\mathbbm{E}_{s\in S}\max_{\gamma\in S}\big{\|}P^{\operatorname{span}\gamma}P^{\operatorname{span}\pi_{s}(\gamma)}\big{\|}\\ &\leq\mathbbm{E}_{s\in S}\max_{\gamma\in S}\sqrt{\frac{\lvert\operatorname{span}\gamma\cap\operatorname{span}\pi_{s}(\gamma)\rvert}{2^{n/2}}}\\ &=\frac{1}{\binom{n}{n/2}}\sum_{k=0}^{n/2}\binom{n/2}{k}^{2}\sqrt{\frac{2^{n/2-k}}{2^{n/2}}}=\frac{1}{\binom{n}{n/2}}\sum_{k=0}^{n/2}\binom{n/2}{k}^{2}2^{-k/2}.\end{split} \tag{22}\]
Using a result of [2], this is upper-bounded by \(\sqrt{e}(\cos\frac{\pi}{8})^{n}\), finishing the proof.
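The final combinatorial sum is easy to evaluate, so the closed-form bound can be checked numerically; a quick sketch:

```python
from math import comb, cos, e, pi, sqrt

# Evaluate the combinatorial sum ending Eq. (22) and compare it with the
# closed form sqrt(e) (cos pi/8)^n cited from [2].
for n in [4, 8, 16, 32, 64]:
    s = sum(comb(n // 2, k) ** 2 * 2 ** (-k / 2) for k in range(n // 2 + 1))
    print(f"n={n:3d}  sum = {s / comb(n, n // 2):.3e}"
          f"  sqrt(e)cos(pi/8)^n = {sqrt(e) * cos(pi / 8) ** n:.3e}")
```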
### A new type of entropic uncertainty relation
We define a generalisation of the min-entropy that can be used to express MoE properties.
**Definition 3.5**.: Let \(\rho\) be a state supported on not necessarily distinct classical registers \(X_{1},\ldots,X_{n}\) and quantum registers \(A_{1},\ldots,A_{n}\). For POVMs \(M^{i}:X_{i}\to\mathcal{P}(A_{i})\), write
\[H_{\min}(X_{1}|M^{1}(A_{1});\ldots;X_{n}|M^{n}(A_{n}))_{\rho}=-\lg\operatorname {Tr}\bigl{[}(\cdots(\rho_{\wedge(M^{1}(A_{1})=X_{1})})\cdots)_{\wedge(M^{n}(A_{ n})=X_{n})}\bigr{]}. \tag{23}\]
Then, we define the _sequential min-entropy_ of \(X_{1},\ldots,X_{n}\) knowing \(A_{1},\ldots,A_{n}\) as
\[H_{\min}(X_{1}|A_{1};\ldots;X_{n}|A_{n})_{\rho}=\inf_{M^{1},\ldots,M^{n}\text { POVMs}}H_{\min}(X_{1}|M^{1}(A_{1});\ldots;X_{n}|M^{n}(A_{n})). \tag{24}\]
Note that the sequential min-entropy is a generalisation of the conditional min-entropy in the sense that they are the same for \(n=1\).
The winning probability of the \((n,A)\)-leaky MoE game may be phrased using this entropy. First, for registers \(T=T^{\prime}=\mathbb{Z}_{2}^{n/2}\) and \(A\) representing either the register subspaces or all subspaces of \(\mathbb{Z}_{2}^{n}\) of dimension \(n/2\), Alice prepares \(\rho_{ATT^{\prime}}=\mu_{A}\otimes\mu_{T}\otimes\mu_{T^{\prime}}\), and then copies \(A\) and prepares coset states on \(V=\mathbb{Z}_{2}^{n}\) accordingly to get
\[\rho_{AA^{\prime}TT^{\prime}V}=\mathbbm{E}_{a,t,t^{\prime}}[aatt^{\prime}]\otimes\,|a_{t,t^{\prime}}\rangle\!\langle a_{t,t^{\prime}}|\,. \tag{25}\]
Bob and Charlie act with a channel \(\Phi\), giving \(\rho_{AA^{\prime}TT^{\prime}BC}=(\operatorname{id}_{AA^{\prime}TT^{\prime}} \otimes\Phi)(\rho_{AA^{\prime}TT^{\prime}V})\). In terms of the sequential min-entropy, the leaky MoE property is the statement that
\[H_{\min}(T|AB;T^{\prime}|A^{\prime}TC)_{\rho}\geq-\lg\mathfrak{w}^{*}(n,A) \geq(-\lg\cos\tfrac{\pi}{8})n-\tfrac{1}{2\ln 2}. \tag{26}\]
This expression follows directly from the definition. The only snarl is that, in general in the definition of the sequential min-entropy, Bob's measurement may not preserve \(A\); and similarly Charlie's measurement may not preserve \(A^{\prime}T\). However, since these classical registers are not reused, only the diagonal blocks have any effect, and therefore, we may assume that the measurements are diagonal on the classical registers. As such, the infimum over the measurements is attained by
those measurements that correspond to strategies. Note that any MoE game admits an entropic expression of this form.
To close off this section, we present a way to expand the sequential min-entropy as an entropic uncertainty relation.
**Proposition 3.6**.: Let \(\rho\) be a state supported on classical registers \(X,Y\) and quantum registers \(A,B\). Then,
\[H_{\min}(X|A;Y|B)_{\rho}=\inf_{M:X\to\mathcal{P}(A)\text{ POVM}}\Bigl{(}H_{\min}(X|M(A))_{\rho}+H_{\min}(Y|B)_{\rho_{|(M(A)=X)}} \Bigr{)}. \tag{27}\]
Note the contrast between this entropic uncertainty relation and that found in [13]. Most importantly, their relation considers the min-entropy of the same state on both terms, whereas ours uses different, albeit closely related, states. This avoids the shortcoming of their entropic uncertainty relation -- that the entropy can remain bounded for any dimension of Alice's space -- and thus allows us to make use of the full power of the MoE property in terms of an entropy.
Proof.: This follows immediately from the definition. We have
\[H_{\min}(X|A;Y|B) =\inf_{M,N}-\lg\operatorname{Tr}\bigl{[}(\rho_{\wedge(M(A)=X)})_{ \wedge(N(B)=Y)}\bigr{]}\] \[=\inf_{M,N}-\lg\operatorname{Tr}\bigl{[}\rho_{\wedge(M(A)=X)} \bigr{]}\operatorname{Tr}\bigl{[}(\rho_{|(M(A)=X)})_{\wedge(N(B)=Y)}\bigr{]}\] \[=\inf_{M}\Bigl{(}-\lg\operatorname{Tr}\bigl{[}\rho_{\wedge(M(A)= X)}\bigr{]}+\inf_{N}-\lg\operatorname{Tr}\bigl{[}(\rho_{|(M(A)=X)})_{\wedge(N(B)=Y)} \bigr{]}\Bigr{)}\] \[=\inf_{M}\Bigl{(}H_{\min}(X|M(A))_{\rho}+H_{\min}(Y|B)_{\rho_{|(M( A)=X)}}\Bigr{)}\qed\]
Using the above proposition, we may express the leaky MoE property as an entropic uncertainty relation.
**Corollary 3.7** (Leaky MoE entropic uncertainty relation).: For any measurement \(M:T\to\mathcal{P}(AB)\) Bob makes in the leaky MoE game, we have
\[H_{\min}(T|M(AB))_{\rho}+H_{\min}(T^{\prime}|A^{\prime}TC)_{\rho_{|(M(AB)=T)}} \geq(-\lg\cos\tfrac{\pi}{8})n-\tfrac{1}{2\ln 2}. \tag{28}\]
This follows immediately by combining Theorem 3.2 with Proposition 3.6 via Eq. (26). This is the form of the bound that we make use of throughout the remainder of the paper.
## 4 Interactive Uncloneable Encryption
In this section, we discuss our first application, introduced in Section 1.1. In Section 4.1, we introduce the formalism used for interactive uncloneable encryption and discuss its security. In Section 4.2, we establish general relations between the security properties. In Section 4.3, we give a construction, given as Protocol 4.5, and prove its security using the leaky MoE property of the previous section.
### QECMs with interactive decryption and their security
We construct an uncloneable encryption scheme which requires only a communication assumption. That is, in order to decrypt a message, the sender Alice is required to have a short interaction with the receiver Bob. Note that, like uncloneable encryption, interactive uncloneable encryption does not assume an intended recipient, but once the interaction is started, only the party that initiated the interaction will be able to decrypt the message with high probability. First, in order to make sense of this interactive decryption, we extend the idea of a quantum encryption of classical messages of [1], by allowing the decryption to contain an interaction between the sender Alice and the receiver Bob. This allows for uncloneability via the leaky MoE property, as it will permit Alice to check whether an eavesdropper has the ciphertext by checking whether Bob holds an uncorrelated piece of information. We present this formally.
**Definition 4.1**.: A _quantum encryption of classical messages with interactive decryption (QECM-ID)_ is a tuple \(\mathtt{Q}=(\mathtt{Key},\mathtt{Enc},\mathtt{Dec})\).
* \(\mathtt{Key}:\mathcal{D}(\{0\})\rightarrow\mathcal{D}(K)\) is the quantum channel representing the key-generation algorithm, where \(K\) is the classical key register.
* \(\mathtt{Enc}:\mathcal{D}(KM)\rightarrow\mathcal{D}(KMC)\) is the quantum channel representing the encryption algorithm, where \(M\) is the classical message register and \(C\) is the quantum ciphertext register. \(\mathtt{Enc}\) preserves \(KM\), _i.e._\(\mathtt{Enc}([km])=[km]\otimes\sigma_{C}^{km}\), where \(\sigma_{C}^{km}\) is the quantum ciphertext.
* The decryption algorithm \(\mathtt{Dec}\) is an interaction between Alice and Bob that takes a state \(\rho_{KMB}\) to \(\rho_{KMF\hat{M}B^{\prime}}\), where Alice holds \(K\), \(M\), and \(F=\mathbb{Z}_{2}\) (a classical register that indicates whether Alice aborts (0) or accepts the decryption (1)); and Bob holds \(\hat{M}\) (a classical register holding Bob's decryption of the message), and \(B\) and \(B^{\prime}\) (additional quantum registers).
The scheme is _\(\varepsilon\)-correct_ if, for any classical state \(\rho_{M}\), when Alice and Bob run \(\mathtt{Dec}\) as intended on \(\rho_{KMC}=\mathtt{Enc}(\mathtt{Key}([0])\otimes\rho_{M})\) for \(B=C\), they get \(\rho_{KMF\hat{M}}\) such that3
Footnote 3: We use this definition as it presents an operational way to simultaneously lower bound the probabilities of aborting and decrypting the correct message.
\[\|\rho_{M\hat{M}\wedge(F=1)}-\rho_{MM}\|_{\mathrm{Tr}}\leq\varepsilon. \tag{29}\]
Note that this reduces to the original definition of a QECM if the decryption is a simple one-round interaction: Alice sends the key \(k\) to Bob, who uses it to decrypt the ciphertext, and Alice always accepts the decryption. We extend the security properties of indistinguishable, uncloneable, and uncloneable-indistinguishable security of a QECM to this setting as well. Intuitively, the definitions are meant to replace the condition of Bob guessing correctly with Alice accepting the decryption.
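For concreteness, the syntax of a QECM-ID can be rendered as a programming interface. The following Python sketch is hypothetical and illustrative only: registers are modelled as opaque values, and none of the names below come from the definition above.

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class DecryptionResult:
    flag: int     # F: 1 if Alice accepts the decryption, 0 if she aborts
    m_hat: bytes  # \hat{M}: Bob's decryption of the message

class QECMID(Protocol):
    def key(self) -> bytes:
        """Key: sample a classical key k."""
        ...

    def enc(self, k: bytes, m: bytes) -> object:
        """Enc: produce the quantum ciphertext sigma_C^{km}, preserving (k, m)."""
        ...

    def dec(self, k: bytes, m: bytes, ciphertext: object) -> DecryptionResult:
        """Dec: run the interaction between Alice (holding K, M) and Bob (holding C)."""
        ...
```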
First, we can describe the security properties by means of security games. The _indistinguishable security game_ is played by an adversary Bob against a challenger Alice.
1. Bob prepares a cq state \(\rho_{MS}\) and sends register \(M\) to Alice, keeping hold of the side-information.
2. Alice samples a bit \(y\) uniformly at random. If \(y=0\) she replaces \(M\) with a fixed message \(m_{0}\); else she preserves \(M\).
3. Alice samples a key using Key and encrypts the message. She then sends the ciphertext to Bob.
4. Bob attempts to guess \(y\). He wins if he guesses correctly.
Indistinguishable security is achieved if the winning probability of this game is only slightly above \(\frac{1}{2}\). This is a standard property of encryption schemes.
Uncloneable security guarantees that, even if a colluding party decrypts, an eavesdropper can only guess the message as well as her side information allows. The _uncloneable security game_ is played by two cooperating adversaries Bob and Eve against a challenger Alice.
1. Alice samples a message uniformly at random. She samples a key and encrypts the message. She sends the ciphertext to the adversaries.
2. The adversaries split the state between them using a quantum channel, and then may no longer communicate.
3. Alice and Bob decrypt with the interaction Dec, and Eve eavesdrops on their interactions.
4. Eve attempts to guess the message. The adversaries win if Alice accepts the decryption (\(f=1\)) and Eve guesses correctly.
Uncloneable security is achieved if the winning probability is only slightly above the probability of Alice accepting and Eve guessing the message given no information \(\frac{\Pr[F=1]}{|M|}\).
Finally, uncloneable-indistinguishable security combines uncloneable and indistinguishable security: it guarantees that, even if a colluding party decrypts, an eavesdropper cannot distinguish between the encryptions of an intended message and a fixed message. The _uncloneable-indistinguishable security game_ is also played by two cooperating adversaries against a challenger.
1. The adversaries prepare a cq state \(\rho_{MS}\) and send register \(M\) to Alice.
2. Alice samples a bit \(y\) uniformly at random. If \(y=0\) she replaces \(M\) with a fixed message \(m_{0}\); else she preserves \(M\).
3. Alice samples a key and encrypts the message. She sends the ciphertext to the adversaries.
4. The adversaries split the state between them using a quantum channel, and then may no longer communicate.
5. Alice and Bob decrypt with the interaction Dec, and Eve eavesdrops on their interactions.
6. Eve tries to guess \(y\). The adversaries win if Alice accepts the decryption and Eve guesses correctly.
Uncloneable-indistinguishable security is achieved if the winning probability is only slightly above \(\frac{1}{2}\Pr[F=1]\), half the probability of accepting.
We now formalise the intuition of these security games in a way that is amenable to security proofs in the information-theoretic setting.
**Definition 4.2**.: Let \(\mathtt{Q}=(\mathtt{Key},\mathtt{Enc},\mathtt{Dec})\) be a QECM-ID. We say the scheme satisfies
\(\varepsilon_{1}\)**-indistinguishable security**: if
\[\|\rho_{CS|(Y=0)}-\rho_{CS|(Y=1)}\|_{\mathrm{Tr}}\leq\varepsilon_{1}, \tag{30}\]
for \(\rho\) prepared as follows. Fix \(m_{0}\in M\), and let \(Y=\mathbb{Z}_{2}\) and \(\rho_{MS}\) be any cq state. Alice prepares the state \(\rho_{MSY}=\frac{1}{2}([m_{0}]\otimes\rho_{S}\otimes[0]+\rho_{MS}\otimes[1])\), then encrypts to get \(\rho_{KMCSY}=(\mathtt{Enc}\otimes\mathrm{id}_{SY})(\mathtt{Key}([0])\otimes \rho_{MSY})\).
\(\varepsilon_{2}\)**-uncloneable security**: if
\[\Pr\bigl{[}M=\check{M}\wedge F=1\bigr{]}_{\rho}\leq\frac{1}{|M|}\Pr[F=1]_{ \rho}+\varepsilon_{2}, \tag{31}\]
for \(\rho\) prepared as follows. Let \(\rho_{M}=\mu_{M}\) be the maximally mixed state. Alice then encrypts \(\rho_{KMC}=\mathtt{Enc}(\mathtt{Key}([0])\otimes\rho_{M})\) and an eavesdropper Eve acts with a quantum channel \(\Phi:\mathcal{L}(C)\rightarrow\mathcal{L}(BE)\) to get \(\rho_{KMBE}=(\mathrm{id}_{KM}\otimes\Phi)(\rho_{KMC})\). Then, after eavesdropping on all the interactions during \(\mathtt{Dec}\), Eve produces a guess \(\check{M}\) of \(M\).
\(\varepsilon_{3}\)**-uncloneable-indistinguishable security**: if
\[\|\rho_{E^{\prime}|(Y=0)\wedge(F=1)}-\rho_{E^{\prime}|(Y=1)\wedge(F=1)}\|_{ \mathrm{Tr}}\leq\varepsilon_{3}, \tag{32}\]
for \(\rho\) prepared as follows. Fix \(m_{0}\in M\), and let \(Y=\mathbb{Z}_{2}\) and \(\rho_{MS}\) be any cq state. Alice prepares the state \(\rho_{MSY}=\frac{1}{2}([m_{0}]\otimes\rho_{S}\otimes[0]+\rho_{MS}\otimes[1])\), then encrypts to get \(\rho_{KMCSY}=(\mathtt{Enc}\otimes\mathrm{id}_{SY})(\mathtt{Key}([0])\otimes \rho_{MSY})\). Next, an eavesdropper Eve acts with a quantum channel \(\Phi:\mathcal{L}(CS)\rightarrow\mathcal{L}(BE)\) to get \(\rho_{KMBEY}=(\mathrm{id}_{KM}\otimes\Phi\otimes\mathrm{id}_{Y})(\rho_{KMCSY})\) and after eavesdropping on all the interactions during \(\mathtt{Dec}\), Eve holds a register \(E^{\prime}\).
The security definitions are illustrated in Fig. 2.
### General properties
In this section, we show some relations on the uncloneable security properties for QECM-IDs, with the idea to generalise properties of classical encryption schemes. These extend and strengthen results known for QECMs.
First, we see that uncloneable security holds for non-uniform message distributions, generalising a property shown in [1].
**Lemma 4.3**.: Let \(\mathtt{Q}\) be an \(\varepsilon\)-uncloneable QECM-ID. Then, if the uncloneable security game is played with a classical state \(\rho_{M}\) not necessarily uniform, the winning probability
\[\Pr\bigl{[}M=\check{M}\wedge F=1\bigr{]}_{\rho}\leq 2^{-H_{\min}(M)_{\rho}}\Pr[F =1]+|M|2^{-H_{\min}(M)_{\rho}}\varepsilon. \tag{33}\]
Proof.: We relate this to the winning probability with \(\rho_{M}=\mu_{M}\). In fact,
\[\Pr[M=\check{M}\wedge F=1]_{\rho} =\sum_{m\in M}\Pr[M=m]\Pr[\check{M}=m\wedge F=1|M=m] \tag{34}\] \[\leq\max_{m}\Pr[M=m]\sum_{m}\Pr[\check{M}=m\wedge F=1|M=m]\] \[=|M|2^{-H_{\min}(M)_{\rho}}\Pr[M=\check{M}\wedge F=1]_{\mu}\] \[\leq 2^{-H_{\min}(M)_{\rho}}\Pr[F=1]+|M|2^{-H_{\min}(M)_{\rho}}\varepsilon\]
Next, we find an equivalence, up to scalar multiple of the parameters, between the uncloneable and uncloneable-indistinguishable security properties. One direction, uncloneable security implying uncloneable-indistinguishable security, generalises a similar property shown for QECMs in [1], while the other direction is new, and remains an open question for QECMs in the information-theoretic setting. The equivalence of these security properties is similar to the equivalence of semantic security and indistinguishability in classical encryption.
**Theorem 4.4**.: Let \(\mathtt{Q}\) be a perfectly indistinguishable QECM-ID.
* If \(\mathtt{Q}\) is \(\varepsilon\)-uncloneable secure then it is \(|M|\varepsilon\)-uncloneable-indistinguishable secure.
* If \(\mathtt{Q}\) is \(\varepsilon\)-uncloneable-indistinguishable secure then it is \(\varepsilon\)-uncloneable secure.
Figure 2: Schematics of the state constructions in the QECM-ID security definitions. Blocks represent operations, with interactions if they are split by a dotted line. Horizontal lines represent registers; they take part in the operations they touch. Vertical arrows represent eavesdropping.
Note that this theorem means that, outside of some pathological cases, it is only necessary to show either uncloneable and uncloneable-indistinguishable security for QECM-IDs, not both. However, we nevertheless show both in the following section, as it allows us to work out better parameters.
Proof.:
* We proceed by contrapositive. Suppose there exists an attack for the uncloneable-indistinguishable security game that wins with advantage greater than \(|M|\varepsilon\). An important observation we make to help simplify the proof is that we may always assume that \(\rho_{MS}=[m_{1}]\) for some message \(m_{1}\in M\)[10]. This is because the trace norm is convex, so \[\big{\|}\rho_{E^{\prime}|(Y=0)\wedge(F=1)}-\rho_{E^{\prime}|(Y=1)\wedge(F=1)}\big{\|}_{\mathrm{Tr}}\leq\sum_{m\in M}p_{m}\big{\|}\rho_{E^{\prime}|(Y=0)\wedge(F=1)}^{m}-\rho_{E^{\prime}|(Y=1)\wedge(F=1)}^{m}\big{\|}_{\mathrm{Tr}},\] (35) and thus we can take \(m_{1}\) to be the value whose term in this convex combination is maximal. Finally, we can remove the side information by redefining the splitting channel \(\Phi^{\prime}(\sigma)=\Phi(\sigma\otimes\rho_{S}^{m_{1}})\). With such an attack, we construct an attack against the uncloneable security game. The splitting operation and Bob act in the same way. To attempt to guess the message, Eve makes the measurement that optimally distinguishes the cases \(y=0\) and \(y=1\), and guesses \(m_{0}\) or \(m_{1}\), respectively. Then, the guessing probability \[\Pr\big{[}M=\check{M}\wedge F=1\big{]} =\Pr\big{[}M=\check{M}\wedge F=1\wedge M\notin\{m_{0},m_{1}\}\big{]} \tag{36}\] \[+\Pr[M\in\{m_{0},m_{1}\}]\Pr\big{[}M=\check{M}\wedge F=1|M\in\{m_{0},m_{1}\}\big{]}\] Since \(\Pr\big{[}M=\check{M}\wedge F=1|M\in\{m_{0},m_{1}\}\big{]}\) is the probability of distinguishing messages \(m_{0}\) and \(m_{1}\), we have by hypothesis that this is greater than \(\frac{\Pr[F=1|M\in\{m_{0},m_{1}\}]+|M|\varepsilon}{2}\). Finally, as \(\mathtt{Q}\) is perfectly indistinguishable, \(\Pr[F=1|M\in\{m_{0},m_{1}\}]=\Pr[F=1]\) -- otherwise Bob could distinguish the messages without access to the key. Putting this together, \[\Pr\big{[}M=\check{M}\wedge F=1\big{]}>\frac{\Pr[F=1]}{|M|}+\varepsilon.\] (37)
* Let \(\rho_{ME^{\prime}\wedge(F=1)}=\mathbbm{E}_{m\in M}[m]\otimes\rho_{E^{\prime}\wedge(F=1)}^{m}\) be the final state in the uncloneable security game. Since we have by hypothesis that \(\mathtt{Q}\) is uncloneable-indistinguishable secure, \(\|\rho_{E^{\prime}\wedge(F=1)}^{m_{0}}-\rho_{E^{\prime}\wedge(F=1)}^{m}\|_{\mathrm{Tr}}\leq\varepsilon\) for all \(m\in M\). Setting the state \(\tau_{ME^{\prime}\wedge(F=1)}=\mu_{M}\otimes\rho_{E^{\prime}\wedge(F=1)}^{m_{0}}\), we have that \[\|\tau_{ME^{\prime}\wedge(F=1)}-\rho_{ME^{\prime}\wedge(F=1)}\|_{\mathrm{Tr}}=\mathbbm{E}_{m\in M}\|\rho_{E^{\prime}\wedge(F=1)}^{m_{0}}-\rho_{E^{\prime}\wedge(F=1)}^{m}\|_{\mathrm{Tr}}\leq\varepsilon.\] (38) Because the registers \(M\) and \(E^{\prime}\) are independent on \(\tau\), the guessing probability \(\Pr[M=\check{M}\wedge F=1]_{\tau}\leq\frac{\Pr[F=1]_{\tau}}{|M|}\). Finally, because \(\tau\) is only \(\varepsilon\) away from \(\rho\) in trace norm and \(\Pr[F=1]_{\tau}=\Pr[F=1|M=m_{0}]_{\rho}=\Pr[F=1]_{\rho}\) by perfect indistinguishability, we get that \(\Pr[M=\check{M}\wedge F=1]_{\rho}\leq\frac{\Pr[F=1]_{\rho}}{|M|}+\varepsilon\).
### Instantiation and security proofs
Now, we give a construction of a QECM-ID. Let \(e:\mathbb{Z}_{2}^{n/2}\times R\rightarrow\mathbb{Z}_{2}^{\ell}\) be a quantum-proof \((\kappa,\varepsilon)\)-strong extractor and let \(A\) be the set of all subspaces of \(V=\mathbb{Z}_{2}^{n}\) of dimension \(n/2\).
**Protocol 4.5** (Coset state QECM-ID).:

**Key generation**: Let \(T=T^{\prime}=\mathbb{Z}_{2}^{n/2}\) and \(H=\mathbb{Z}_{2}^{\ell}\), and take \(K=ATT^{\prime}RH\). The channel is
\[\mathtt{Key}([0])=\mathbbm{E}_{a,t,t^{\prime},r,h}[att^{\prime}rh]. \tag{39}\]
**Encryption**: Let \(M=\bar{M}=\mathbb{Z}_{2}^{\ell}\) and \(C=\bar{M}V\). Take
\[\text{Enc}([att^{\prime}rh]\otimes[m])=[att^{\prime}rh]\otimes[m]\otimes[m+e( t^{\prime},r)+h]\otimes\,|a_{t,t^{\prime}}\rangle\!\langle a_{t,t^{\prime}}|\,. \tag{40}\]
**Decryption**: Dec proceeds as follows. First, Alice sends \(a\) to Bob. Then, Bob measures \(V\) in the coset state basis to get measurements \(\hat{t},\hat{t}^{\prime}\) of \(t,t^{\prime}\). Bob sends \(\hat{t}\) to Alice: if \(\hat{t}=t\), Alice sets \(f=1\), else she sets \(f=0\) and aborts. Alice sends \(r\) and \(h\) to Bob. Bob computes \(\hat{m}=\bar{m}+e(\hat{t}^{\prime},r)+h\).
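The following Python sketch simulates an honest run of Protocol 4.5. It is illustrative only: the quantum step is idealised, since an honest Bob who learns \(a\) recovers \((t,t^{\prime})\) exactly by measuring in the coset-state basis, and a Toeplitz hash is assumed as a stand-in for the strong extractor \(e\).

```python
import numpy as np

rng = np.random.default_rng(1)
n, ell = 32, 8

def e(t_prime, r):
    # Assumed stand-in for the quantum-proof strong extractor: an ell-bit
    # Toeplitz hash of t_prime keyed by r (illustrative choice only).
    T = np.array([[r[i - j + len(t_prime) - 1] for j in range(len(t_prime))]
                  for i in range(ell)])
    return (T @ t_prime) % 2

# Key generation; a itself is left implicit, since an honest Bob who learns a
# recovers (t, t') exactly by measuring in the coset-state basis.
t = rng.integers(0, 2, n // 2)
t_prime = rng.integers(0, 2, n // 2)
r = rng.integers(0, 2, n // 2 + ell - 1)
h = rng.integers(0, 2, ell)

# Encryption of a message m: the ciphertext is (m_bar, |a_{t,t'}>).
m = rng.integers(0, 2, ell)
m_bar = (m + e(t_prime, r) + h) % 2

# Decryption: Bob measures (t_hat, t_hat') = (t, t'); Alice checks t, accepts,
# and reveals (r, h); Bob then unmasks the ciphertext.
t_hat, t_hat_prime = t, t_prime
assert np.array_equal(t_hat, t)                 # Alice sets f = 1
m_hat = (m_bar + e(t_hat_prime, r) + h) % 2
assert np.array_equal(m_hat, m)
print("honest decryption recovers m")
```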
**Proposition 4.6**.: Protocol 4.5 is perfectly correct, _i.e._\(0\)-correct.
Proof.: First, writing \(\rho_{M}=\sum_{m}p_{m}[m]\),
\[\rho_{KMC}=\rho_{ATT^{\prime}RHM\bar{M}V}=\mathbbm{E}_{a,t,t^{\prime},r,h}\sum_{m}p_{m}[att^{\prime}rh]\otimes[m]\otimes[m+e(t^{\prime},r)+h]\otimes\,|a_{t,t^{\prime}}\rangle\!\langle a_{t,t^{\prime}}|\,. \tag{41}\]
To begin the decryption, Bob measures in the coset state basis and gets
\[\rho_{ATT^{\prime}RHM\bar{M}\hat{T}\hat{T}^{\prime}}=\mathbbm{E}_{a,t,t^{\prime},r,h}\sum_{m}p_{m}[att^{\prime}rh]\otimes[m]\otimes[m+e(t^{\prime},r)+h]\otimes[tt^{\prime}]. \tag{42}\]
When Bob sends \(\hat{t}=t\) to Alice, she always sets \(F=1\), and then gives \(r\) and \(h\) to Bob. The state then becomes
\[\rho_{ATT^{\prime}RHMF\bar{M}\hat{T}^{\prime}}=\mathbbm{E}_{a,t,t^{\prime},r,h}\sum_{m}p_{m}[att^{\prime}rh]\otimes[m]\otimes[1]\otimes[m+e(t^{\prime},r)+h]\otimes[t^{\prime}]. \tag{43}\]
Finally, Bob computes \(\hat{m}=\bar{m}+e(\hat{t}^{\prime},r)+h=m\), getting
\[\rho_{KMF\hat{M}}=\mathbbm{E}_{a,t,t^{\prime},r,h}\sum_{m}p_{m}[att^{\prime}rh]\otimes[m]\otimes[1]\otimes[m]. \tag{44}\]
Thus, \(\rho_{M\hat{M}\wedge(F=1)}=\sum_{m}p_{m}[m]\otimes[m]=\rho_{MM}\).
**Proposition 4.7**.: Protocol 4.5 is perfectly indistinguishable.
Proof.: Writing \(\rho_{MS}=\sum_{m}p_{m}[m]\otimes\rho_{S}^{m}\), we see that
\[\rho_{KMCSY} =\frac{1}{2}(\mathtt{Enc}(\mathtt{Key}(0)\otimes[m_{0}])\otimes\rho_{S}\otimes[0]+(\mathtt{Enc}\otimes\mathrm{id}_{S})(\mathtt{Key}(0)\otimes\rho_{MS})\otimes[1])\] \[=\frac{1}{2}\sum_{m}\mathtt{Enc}(\mathtt{Key}(0)\otimes[m])\otimes(\delta_{m,m_{0}}\rho_{S}\otimes[0]+p_{m}\rho_{S}^{m}\otimes[1])\] \[=\frac{1}{2}\sum_{m}\mathbbm{E}_{a,t,t^{\prime},r,h}[att^{\prime}rhm]\otimes[m+e(t^{\prime},r)+h]\otimes\,|a_{t,t^{\prime}}\rangle\!\langle a_{t,t^{\prime}}|\otimes(\delta_{m,m_{0}}\rho_{S}\otimes[0]+p_{m}\rho_{S}^{m}\otimes[1]). \tag{45}\]
Hence,
\[\rho_{CSY} =\frac{1}{2}\sum_{m}\mathbbm{E}_{a,t,t^{\prime},r,h}[m+e(t^{\prime},r)+h]\otimes\,|a_{t,t^{\prime}}\rangle\!\langle a_{t,t^{\prime}}|\otimes(\delta_{m,m_{0}}\rho_{S}\otimes[0]+p_{m}\rho_{S}^{m}\otimes[1])\] \[=\frac{1}{2}\mathbbm{E}_{a,t,t^{\prime}}\mu_{\bar{M}}\otimes\,|a_{t,t^{\prime}}\rangle\!\langle a_{t,t^{\prime}}|\otimes\sum_{m}(\delta_{m,m_{0}}\rho_{S}\otimes[0]+p_{m}\rho_{S}^{m}\otimes[1]) \tag{46}\] \[=\frac{1}{2}\mu_{C}\otimes(\rho_{S}\otimes[0]+\rho_{S}\otimes[1])=\mu_{C}\otimes\rho_{S}\otimes\mu_{Y}.\]
Thus, \(\rho_{CS|(Y=0)}=\rho_{CS|(Y=1)}\).
**Theorem 4.8**.: Suppose \(\kappa\geq\frac{-\lg\cos\frac{\pi}{8}}{2}n-\frac{1}{4\ln 2}\). Then, Protocol 4.5 is \(\max\{\varepsilon,e^{1/4}(\cos\frac{\pi}{8})^{n/2}\}\)-uncloneable.
Proof.: We have the state before decryption
\[\rho_{ATT^{\prime}RHMBE}=\mathbbm{E}_{a,t,t^{\prime},r,h,m}[att^{\prime}rh]\otimes[m]\otimes\Phi([m+e(t^{\prime},r)+h]\otimes\,|a_{t,t^{\prime}}\rangle\!\langle a_{t,t^{\prime}}|). \tag{47}\]
To begin the decryption, Alice shares \(a\), and Bob makes a measurement \(N\) on \(AB\) to determine a guess \(\hat{t}\) of \(t\). Fix \(\bar{m}\in\bar{M}\). Then, taking \(\sigma\mapsto\Phi([\bar{m}]\otimes\sigma)\) to be the cloning channel in the leaky MoE game, we get by the leaky MoE property that \(H_{\min}(T|AB;T^{\prime}|A^{\prime}TE)_{\rho_{|(\bar{M}=\bar{m})}}\geq(-\lg\cos\frac{\pi}{8})n-\frac{1}{2\ln 2}\), where \(A^{\prime}\) is a copy of \(A\). Thus, we must have either \(H_{\min}(T|N(AB))_{\rho_{|(\bar{M}=\bar{m})}}\geq\frac{-\lg\cos\frac{\pi}{8}}{2}n-\frac{1}{4\ln 2}\) or \(H_{\min}(T^{\prime}|A^{\prime}TE)_{\rho_{|(N(AB)=T\wedge\bar{M}=\bar{m})}}\geq\frac{-\lg\cos\frac{\pi}{8}}{2}n-\frac{1}{4\ln 2}\). In the former case, as \(AB\) is the register Bob has access to by that point, we have
\[\Pr[F=1]=\Pr[\hat{T}=T]=\Pr[N(AB)=T]\leq e^{1/4}(\cos\frac{\pi}{8})^{n/2}. \tag{48}\]
In the latter case, we have by hypothesis and the strong extractor property,
\[\begin{split}&\|\rho_{e(T^{\prime},R)RA^{\prime}TE|(F=1\wedge\bar{M}=\bar{m})}-\mu_{\tilde{M}}\otimes\mu_{R}\otimes\rho_{A^{\prime}TE|(F=1\wedge\bar{M}=\bar{m})}\|_{\mathrm{Tr}}\\ &=\|\rho_{e(T^{\prime},R)RA^{\prime}TE|(N(AB)=T\wedge\bar{M}=\bar{m})}-\mu_{\tilde{M}}\otimes\mu_{R}\otimes\rho_{A^{\prime}TE|(N(AB)=T\wedge\bar{M}=\bar{m})}\|_{\mathrm{Tr}}\leq\varepsilon,\end{split} \tag{49}\]
where \(\tilde{M}=\mathbb{Z}_{2}^{\ell}\) is the register containing \(e(T^{\prime},R)\). Combining the two cases,
\[\begin{split}&\|\rho_{e(T^{\prime},R)RA^{\prime}TE\wedge(F=1)|(\bar{M}=\bar{m})}-\mu_{\tilde{M}}\otimes\mu_{R}\otimes\rho_{A^{\prime}TE\wedge(F=1)|(\bar{M}=\bar{m})}\|_{\mathrm{Tr}}\\ &=\Pr[F=1]_{\rho}\|\rho_{e(T^{\prime},R)RA^{\prime}TE|(F=1\wedge\bar{M}=\bar{m})}-\mu_{\tilde{M}}\otimes\mu_{R}\otimes\rho_{A^{\prime}TE|(F=1\wedge\bar{M}=\bar{m})}\|_{\mathrm{Tr}}\\ &\leq\varepsilon^{*},\end{split} \tag{50}\]
where we set \(\varepsilon^{*}=\max\{\varepsilon,e^{1/4}(\cos\frac{\pi}{8})^{n/2}\}\). This implies that, as \(m\) and \(\bar{m}=m+e(t^{\prime},r)+h\) are uniformly distributed and independent,
\[\begin{split}\rho_{M\bar{M}e(T^{\prime},R)RA^{\prime}TE\wedge(F=1)}&=\mathbbm{E}_{m,\bar{m}}[m\bar{m}]\otimes\rho_{e(T^{\prime},R)RA^{\prime}TE\wedge(F=1)|(\bar{M}=\bar{m})}\\ &\approx_{\varepsilon^{*}}\mathbbm{E}_{m,\bar{m}}[m\bar{m}]\otimes\mu_{\tilde{M}}\otimes\mu_{R}\otimes\rho_{A^{\prime}TE\wedge(F=1)|(\bar{M}=\bar{m})},\end{split} \tag{51}\]
hence \(\|\rho_{R\tilde{M}M\bar{M}A^{\prime}TE\wedge(F=1)}-\mu_{R\tilde{M}M}\otimes\rho_{\bar{M}A^{\prime}TE\wedge(F=1)}\|_{\mathrm{Tr}}\leq\varepsilon^{*}\). Supposing \(f=1\), the decryption continues and Eve also gets \(h=m+\bar{m}+\tilde{m}\) and tries to guess \(m\). As classical computations are CPTP maps, we see that
\[\begin{split}&\|\rho_{R\tilde{M}M\bar{M}A^{\prime}TE\wedge(F=1)}-\mu_{R\tilde{M}M}\otimes\rho_{\bar{M}A^{\prime}TE\wedge(F=1)}\|_{\mathrm{Tr}}\\ &\geq\|\rho_{R\tilde{M}M(M+\bar{M}+\tilde{M})\bar{M}A^{\prime}TE\wedge(F=1)}-\mu_{R}\otimes\sigma_{\tilde{M}M(M+\bar{M}+\tilde{M})\bar{M}A^{\prime}TE\wedge(F=1)}\|_{\mathrm{Tr}}\\ &\geq\|\rho_{RM(M+\bar{M}+\tilde{M})A^{\prime}TE\wedge(F=1)}-\mu_{R}\otimes\sigma_{M(M+\bar{M}+\tilde{M})A^{\prime}TE\wedge(F=1)}\|_{\mathrm{Tr}},\end{split} \tag{52}\]
where \(\sigma_{\tilde{M}M\bar{M}A^{\prime}TE\wedge(F=1)}=\mu_{\tilde{M}M}\otimes\rho_{\bar{M}A^{\prime}TE\wedge(F=1)}\), so
\[\begin{split}\sigma_{M(M+\bar{M}+\tilde{M})A^{\prime}TE\wedge(F=1)}&=\mathbb{E}_{m,\bar{m},\tilde{m}}[m]\otimes[m+\bar{m}+\tilde{m}]\otimes\rho_{A^{\prime}TE\wedge(F=1)|(\bar{M}=\bar{m})}\\ &=\mu_{MH}\otimes\rho_{A^{\prime}TE\wedge(F=1)}.\end{split} \tag{53}\]
During the decryption, all the information Eve receives is contained in \(E^{\prime}=RHA^{\prime}TE\). Let the subnormalised state \(\tau_{ME^{\prime}}=\mu_{MRH}\otimes\rho_{A^{\prime}TE\wedge(F=1)}\). By the above, we have that \(\|\rho_{ME^{\prime}\wedge(F=1)}-\tau_{ME^{\prime}}\|_{\mathrm{Tr}}\leq\varepsilon^{*}\). As such, if the shared state were \(\tau\), \(M\) would be independent from \(E^{\prime}\), and therefore \(\Pr\bigl{[}M=\hat{M}\wedge F=1\bigr{]}_{\tau}\leq\frac{\mathrm{Tr}\,\tau}{|M|}=\frac{\Pr[F=1]_{\rho}}{|M|}\) for any guess \(\hat{M}\) computed from \(E^{\prime}\). This implies that the probability of guessing \(M\) given \(E^{\prime}\) in the state \(\rho_{ME^{\prime}\wedge(F=1)}\) is at most
\[\Pr[M=\hat{M}\wedge F=1]_{\rho}\leq\Pr\bigl{[}M=\hat{M}\wedge F=1\bigr{]}_{\tau}+\varepsilon^{*}\leq\frac{\Pr[F=1]_{\rho}}{|M|}+\varepsilon^{*}, \tag{54}\]
as wanted.
**Theorem 4.9**.: Suppose \(\kappa\leq\frac{-\lg\cos\frac{\pi}{8}}{2}n-\frac{1}{4\ln 2}\). Then, Protocol 4.5 is \(\max\{2\varepsilon,2e^{1/4}(\cos\frac{\pi}{8})^{n/2}\}\)-indistinguishable-uncloneable.
Proof.: With \(\rho_{MS}=\sum_{m}p_{m}[m]\otimes\rho_{S}^{m}\), we have again
\[\begin{split}\rho_{KMCSY}&=\rho_{ATT^{\prime}RHM\bar{M}VYS}\\ &=\frac{1}{2}\sum_{m}\mathbb{E}_{a,t,t^{\prime},r,h}[att^{\prime}rhm]\otimes[m+e(t^{\prime},r)+h]\otimes|a_{t,t^{\prime}}\rangle\!\langle a_{t,t^{\prime}}|\otimes(\delta_{m,m_{0}}\rho_{S}\otimes[0]+p_{m}\rho_{S}^{m}\otimes[1]),\end{split} \tag{55}\]
so given the cloning attack \(\Phi:\mathcal{L}(\bar{M}VS)\to\mathcal{L}(BE)\), the state before decryption is
\[\begin{split}\rho_{KMBEY}=\frac{1}{2}\sum_{m}\mathbb{E}_{a,t,t^{\prime},r,h}[att^{\prime}rhm]\otimes\bigl{(}&\delta_{m,m_{0}}\Phi([m+e(t^{\prime},r)+h]\otimes|a_{t,t^{\prime}}\rangle\!\langle a_{t,t^{\prime}}|\otimes\rho_{S})\otimes[0]\\ &+p_{m}\Phi([m+e(t^{\prime},r)+h]\otimes|a_{t,t^{\prime}}\rangle\!\langle a_{t,t^{\prime}}|\otimes\rho_{S}^{m})\otimes[1]\bigr{)}, \end{split} \tag{56}\]
On \(\rho_{|(Y=0\wedge\bar{M}=\bar{m})}\), the cloning attack is \(\sigma\mapsto\Phi([\bar{m}]\otimes\sigma\otimes\rho_{S})\), so we have
\[H_{\min}(T|AB;T^{\prime}|A^{\prime}TE)_{\rho_{|(Y=0\wedge\bar{M}=\bar{m})}}\geq(-\lg\cos\frac{\pi}{8})n-\tfrac{1}{2\ln 2}, \tag{57}\]
where \(A^{\prime}\) is a copy of \(A\), and hence as above
\[\|\rho_{e(T^{\prime},R)RA^{\prime}TE|(Y=0\wedge\bar{M}=\bar{m})\wedge(F=1)}- \mu_{\tilde{M}}\otimes\mu_{R}\otimes\rho_{A^{\prime}TE|(Y=0\wedge\bar{M}=\bar {m})\wedge(F=1)}\|_{\mathrm{Tr}}\leq\varepsilon^{*}, \tag{58}\]
and then, in order to include \(M\) and \(\bar{M}\),
\[\begin{split}\rho_{Me(T^{\prime},R)RA^{\prime}TE\bar{M}|(Y=0)\wedge(F=1)}&=\mathbb{E}_{m,\bar{m}}[m]\otimes\rho_{e(T^{\prime},R)RA^{\prime}TE|(Y=0\wedge\bar{M}=\bar{m})\wedge(F=1)}\otimes[\bar{m}]\\ &\approx_{\varepsilon^{*}}\mu_{M\tilde{M}R}\otimes\rho_{A^{\prime}TE\bar{M}|(Y=0)\wedge(F=1)}.\end{split} \tag{59}\]

Proceeding as above, the same independence holds once Eve additionally receives \(h\) during the decryption, so her advantage in distinguishing the encryption of \(m\) from that of \(m_{0}\), given everything she receives, is at most \(2\varepsilon^{*}\), as wanted.
## 5 Uncloneable Bit Commitment
In this section, we discuss our second application, introduced in Section 1.2. In Section 5.1, we define uncloneable commitments and provide a construction, given as Protocol 5.3. Finally, in Section 5.2, we prove security of our construction.
### Motivation and definitions
We want to extend bit commitment protocols to make them uncloneable: that is, only the intended recipient should be able to successfully reveal a commitment. First, we recall a usual definition of bit commitment, as in [10]. The form of commitment we use allows strings, not just single bits, to be committed. It also supposes that, in the honest case, the committed string is chosen uniformly at random; this, however, is not a restriction on the general case.
**Definition 5.1**.: A _\((\ell,\varepsilon_{1},\varepsilon_{2},\varepsilon_{3})\)-randomised bit string commitment (RBC) scheme_ is a pair of interactive protocols between two parties Alice and Bob: a protocol commit that creates a state \(\rho_{YAB}\), and a protocol reveal that creates a state \(\rho_{YA^{\prime}\hat{Y}FB^{\prime}}\). Here, \(Y=\mathbb{Z}_{2}^{\ell}\) is a classical register holding the committed string; \(\hat{Y}=\mathbb{Z}_{2}^{\ell}\) is a classical register holding the revealed string; \(F=\mathbb{Z}_{2}\) is a classical register that indicates whether Bob accepts (1) or rejects (0) the reveal; and \(A,A^{\prime}\) and \(B,B^{\prime}\) are additional quantum registers that Alice and Bob hold, respectively. The scheme additionally satisfies
* \(\varepsilon_{1}\)**-correctness**: If Alice and Bob are honest, then \(\|\rho_{Y\hat{Y}F}-\sigma_{YYF}\|_{\mathrm{Tr}}\leq\varepsilon_{1}\), where \(\sigma_{YYF}\) denotes the state in which \(\hat{Y}\) is a copy of \(Y\) and \(\sigma_{YF}=\mu_{Y}\otimes[1]\).
* \(\varepsilon_{2}\)**-hiding**: If Alice is honest, then after commit, \(\|\rho_{YB}-\mu_{Y}\otimes\rho_{B}\|_{\mathrm{Tr}}\leq\varepsilon_{2}\).
* \(\varepsilon_{3}\)**-binding**: If Bob is honest, there exists a state \(\sigma_{YAB}\) such that \(\|\rho_{YAB}-\sigma_{YAB}\|_{\mathrm{Tr}}\leq\varepsilon_{3}\), and if reveal is run on \(\sigma\) to get \(\sigma_{YA^{\prime}\hat{Y}FB^{\prime}}\), then \(\Pr[Y\neq\hat{Y}\wedge F=1]_{\sigma}\leq\varepsilon_{3}\).
Bit commitment is not possible without additional assumptions [1], so we need a model with, _e.g._, computational or storage assumptions in order for this definition not to be vacuous. Notwithstanding, we can extend the definition to handle uncloneability as well. We do so by adding an eavesdropper Eve, from whom Alice wishes to hide her commitment. In order to check for cloning, the protocol has an additional check step, which is used to verify whether it is in fact Bob who received the commitment. Separating out the check step also allows us to consider various models: Eve can be allowed to communicate freely with Bob prior to that step, but not afterwards, since Bob could otherwise simply hand her his register that passed the check.
**Definition 5.2**.: A _\((\ell,\varepsilon_{1},\varepsilon_{2},\varepsilon_{3},\delta)\)-uncloneable randomised bit string commitment (URBC) scheme_ is a triple of protocols between two parties Alice and Bob, eavesdropped by an eavesdropper Eve: a protocol commit that creates a state \(\rho_{YABE}\), a protocol check that creates a state \(\rho_{YGA^{\prime}B^{\prime}E^{\prime}}\), and a protocol reveal that creates a state \(\rho_{YGA^{\prime}\hat{Y}FB^{\prime\prime}E^{\prime\prime}}\). Here, \(Y=\mathbb{Z}_{2}^{\ell}\) is a classical register holding the committed string; \(\hat{Y}=\mathbb{Z}_{2}^{\ell}\) is a classical register holding the revealed string; \(G=\mathbb{Z}_{2}\) is a classical register that indicates whether Alice accepts (1) or rejects (0) the check; \(F=\mathbb{Z}_{2}\) is a classical register that indicates whether Bob accepts (1) or rejects (0) the reveal; and \(A,A^{\prime},A^{\prime\prime}\), \(B,B^{\prime},B^{\prime\prime}\), and \(E,E^{\prime},E^{\prime\prime}\) are additional quantum registers that Alice, Bob, and Eve hold, respectively. The scheme additionally satisfies
* \(\varepsilon_{1}\)**-correctness**: If Alice and Bob are honest, and Eve does not act, then \(\left\|\rho_{YG\hat{Y}F}-\sigma_{YGYF}\right\|_{\mathrm{Tr}}\leq\varepsilon_{1}\), where \(\sigma_{YGYF}\) denotes the state in which \(\hat{Y}\) is a copy of \(Y\) and \(\sigma_{YGF}=\mu_{Y}\otimes[1]\otimes[1]\).
* \(\varepsilon_{2}\)**-hiding**: If Alice is honest, then after commit, \(\left\|\rho_{YBE}-\mu_{Y}\otimes\rho_{BE}\right\|_{\mathrm{Tr}}\leq\varepsilon_{2}\), and after check, \(\left\|\rho_{YB^{\prime}E^{\prime}}-\mu_{Y}\otimes\rho_{B^{\prime}E^{\prime}}\right\|_{\mathrm{Tr}}\leq\varepsilon_{2}\).
* \(\varepsilon_{3}\)**-binding**: If Bob is honest, there exists a state \(\sigma_{YABE}\) such that \(\left\|\rho_{YABE}-\sigma_{YABE}\right\|_{\mathrm{Tr}}\leq\varepsilon_{3}\) and \(\Pr[Y\neq\hat{Y}\wedge F=1]_{\sigma}\leq\varepsilon_{3}\).
* \(\delta\)**-uncloneability**: If Alice is honest, \(\left\|\rho_{YE^{\prime\prime}\wedge(G=1)}-\mu_{Y}\otimes\rho_{E^{\prime\prime}\wedge(G=1)}\right\|_{\mathrm{Tr}}\leq\delta\).
From this definition, we see that uncloneability holds for any malicious Bob, even one who colludes with Eve, as long as they do not communicate after the check. Similarly to interactive uncloneable encryption, the commitment can be seen as not having an intended recipient prior to the check step; in particular, Bob and Eve may communicate arbitrarily before then. This illustrates an important aspect of the uncloneability: only Bob is able to open the commitment, despite the lack of any prior agreement between him and Alice, such as a pre-shared secret key.
_Remark_.: Note that the above definitions do not hold as given in the computational setting. However, it is straightforward to adapt them by replacing the supremum in the trace norm \(\left\|A\right\|_{\mathrm{Tr}}=\sup_{0\leq P\leq\mathbb{I}}\mathrm{Tr}(PA)\) with the distinguishing advantage corresponding to a computationally-bounded guessing strategy. This allows adaptation to a wide range of computational settings where different computational assumptions that give rise to commitments can be considered. For simplicity, we use the trace norm definition to prove security of our URBC construction, but the proofs work as well in such computational settings, simply because the trace norm upper bounds any seminorm given as a supremum over fewer operators. Nevertheless, in our instantiation, the information-theoretic nature of the uncloneability property may be preserved, as this does not depend on the choice of commitment assumption.
Now, we can define a candidate URBC scheme. We do so by taking an RBC scheme and turning it into an uncloneable one on polynomially shorter bit strings using the leaky MoE property, implicitly working under the assumptions that are required for the commitment.
Let \(c=(\mathtt{commit}_{0},\mathtt{reveal}_{0})\) be a \((k+\ell,\varepsilon_{1},\varepsilon_{2},\varepsilon_{3})\)-RBC scheme, let \(A\) be the set of all subspaces of \(V=\mathbb{Z}_{2}^{n}\) of dimension \(n/2\), let \(e:\mathbb{Z}_{2}^{n/2}\times\mathbb{Z}_{2}^{k}\to\mathbb{Z}_{2}^{\ell}\) be a quantum-proof \((\kappa,\varepsilon^{\prime})\)-strong extractor, and let \(C\subseteq\mathbb{Z}_{2}^{n/2}\) be an \((n/2,n/2-s,d)\)-linear error-correcting code with syndrome \(\mathrm{syn}:\mathbb{Z}_{2}^{n/2}\to\mathbb{Z}_{2}^{s}\).
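For concreteness, the extractor \(e\) can be instantiated with two-universal hashing, which is a quantum-proof strong extractor by the leftover hash lemma. The sketch below is illustrative only: the function name `toeplitz_extract` and the toy parameters are ours, and the seed length \(n+\ell-1\) is specific to the Toeplitz construction rather than to the abstract seed set \(R\).

```python
import numpy as np

def toeplitz_extract(source_bits, seed_bits, ell):
    """Two-universal Toeplitz hash over Z_2, a quantum-proof strong
    extractor by the leftover hash lemma: maps n source bits to ell
    output bits using a uniform seed of n + ell - 1 bits."""
    n = len(source_bits)
    assert len(seed_bits) == n + ell - 1
    x = np.array(source_bits, dtype=np.uint8)
    # Toeplitz matrix: T[i][j] = seed[i - j + n - 1], constant on diagonals.
    T = np.array([[seed_bits[i - j + n - 1] for j in range(n)]
                  for i in range(ell)], dtype=np.uint8)
    return (T @ x) % 2  # matrix-vector product over Z_2

rng = np.random.default_rng(0)
n, ell = 16, 4
t_prime = rng.integers(0, 2, n)           # raw string t'
r = rng.integers(0, 2, n + ell - 1)       # uniform seed
print(toeplitz_extract(t_prime, r, ell))  # extracted bits e(t', r)
```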
**Protocol 5.3** (Uncloneable bit string commitment).:
* **Commit**: Let \(R=\mathbb{Z}_{2}^{k}\), \(H=\mathbb{Z}_{2}^{\ell}\), and \(T=T^{\prime}=\mathbb{Z}_{2}^{n/2}\). Alice and Bob commit to \((r,h)\in R\times H\) using \(c\). Then, Alice samples \(a\in A\), \(t\in T\), and \(t^{\prime}\in T^{\prime}\) uniformly at random, after which she prepares the state \(\left|a_{t,t^{\prime}}\right\rangle\) and sends it to Bob. Alice stores \(t,t^{\prime},a\) and Bob stores \(\left|a_{t,t^{\prime}}\right\rangle\), and they both store what is needed to reveal the commitment of \((r,h)\).
* **Check**: Alice sends Bob \(a\) and he measures in the coset state basis to get measurements \(\hat{t},\hat{t}^{\prime}\) of \(t,t^{\prime}\), then sends \(\hat{t}\) to Alice. If \(\hat{t}=t\), Alice sets \(g=1\); else she sets \(g=0\). Alice stores \(t^{\prime}\) and Bob stores \(\hat{t}^{\prime}\), and they both store what is needed to reveal the commitment of \((r,h)\).
* **Reveal**: Bob selects a random subset \(j\subseteq\{1,\ldots,n/2\}\) of cardinality \(\eta n/2\) and sends it to Alice. She replies with \(\mathrm{syn}(t^{\prime})\) and \(t^{\prime}_{j}\). Then, they reveal the commitment \(c\) to get \((\hat{r},\hat{h})\). If \(\mathrm{syn}(\hat{t}^{\prime})=\mathrm{syn}(t^{\prime})\), \(t^{\prime}_{j}=\hat{t}^{\prime}_{j}\), and \(\mathtt{reveal}_{0}\) accepts (\(f_{0}=1\)), Bob sets \(f=1\); else he sets \(f=0\). Alice's output is \(e(t^{\prime},r)+h\) and Bob's output is \(e(\hat{t}^{\prime},\hat{r})+\hat{h}\).
This protocol is illustrated in Fig. 3.
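The classical bookkeeping in the reveal phase is straightforward; the following is a minimal sketch of Bob's acceptance test, with our own function names, a toy parity syndrome standing in for \(\mathrm{syn}\), and the commitment \(c\) and coset-state measurement abstracted away.

```python
import random

def bob_reveal_check(t_hat_prime, syn_t_prime, t_prime_j, j, syn, f0):
    """Bob's test in the reveal phase: accept (f = 1) iff the syndromes
    match, Alice's revealed substring t'_j agrees with his measured
    string on the subset j, and reveal_0 accepted (f0 = 1)."""
    syndromes_match = syn(t_hat_prime) == syn_t_prime
    subset_matches = all(t_hat_prime[i] == b for i, b in zip(j, t_prime_j))
    return 1 if (syndromes_match and subset_matches and f0 == 1) else 0

# Toy run on n/2 = 8 bits with a parity "syndrome" and an honest Bob.
syn = lambda x: sum(x) % 2
t_prime = [random.randint(0, 1) for _ in range(8)]
t_hat_prime = list(t_prime)                 # honest case: measurement is exact
j = random.sample(range(8), 4)              # random subset of size eta * n/2
f = bob_reveal_check(t_hat_prime, syn(t_prime),
                     [t_prime[i] for i in j], j, syn, f0=1)
print(f)  # 1 in the honest case
```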
### Security proofs
**Proposition 5.4**.: Protocol 5.3 is \(\varepsilon_{1}\)-correct.
Proof.: We suppose Alice and Bob are honest, and Eve does not act. First, Alice and Bob run \(\mathtt{commit}_{0}\) to get \(\rho_{RHA_{0}B_{0}}\). Then, in the commit and check phases, Alice sends \(|a_{t,t^{\prime}}\rangle\) and \(a\) to Bob, and he is able to measure \(t,t^{\prime}\) exactly, so \(\hat{t}=t\) and \(\hat{t}^{\prime}=t^{\prime}\). Bob sends \(\hat{t}\) to Alice, and she sets \(g=1\). At that point, the shared state has the form \(\rho_{RHA_{0}B_{0}T^{\prime}\hat{T}^{\prime}G}=\rho_{RHA_{0}B_{0}}\otimes\sigma_{T^{\prime}T^{\prime}}\otimes[1]\) for \(\sigma_{T^{\prime}}=\mu_{T^{\prime}}\). Next, in the reveal phase, we have that \(\mathrm{syn}(\hat{t}^{\prime})=\mathrm{syn}(t^{\prime})\) and \(\hat{t}^{\prime}_{j}=t^{\prime}_{j}\), so Bob's flag \(f=f_{0}\). When Alice and Bob run \(\mathtt{reveal}_{0}\), the shared state becomes \(\rho_{RHA^{\prime}_{0}\hat{R}\hat{H}B^{\prime}_{0}F_{0}FT^{\prime}\hat{T}^{\prime}G}=\rho_{RHA^{\prime}_{0}\hat{R}\hat{H}B^{\prime}_{0}F_{0}F_{0}}\otimes\sigma_{T^{\prime}T^{\prime}}\otimes[1]\), where we know by correctness of \(c\) that \(\|\rho_{RH\hat{R}\hat{H}F_{0}}-\sigma_{RHRHF_{0}}\|_{\mathrm{Tr}}\leq\varepsilon_{1}\)
Figure 3: Illustration of the commitment protocol Protocol 5.3. Solid arrows represent transmission of quantum states, double arrows represent transmission of classical information, dashed arrows represent commitment and opening, and dotted lines represent other interactions involved in the commitment without transmission of relevant information.
for \(\sigma_{RHF_{0}}=\mu_{RH}\otimes[1]\). Thus, for \(\sigma_{T^{\prime}RHF_{0}}=\mu_{T^{\prime}RH}\otimes[1]\), we see that
\[\|\rho_{T^{\prime}\hat{T}^{\prime}RH\hat{R}\hat{H}F_{0}F}-\sigma_{T^{\prime}T^{\prime}RHRHF_{0}F_{0}}\|_{\mathrm{Tr}}\leq\|\sigma_{T^{\prime}T^{\prime}}\otimes(\rho_{RH\hat{R}\hat{H}F_{0}}-\sigma_{RHRHF_{0}})\|_{\mathrm{Tr}}\leq\varepsilon_{1}. \tag{65}\]
We see that \(\sigma_{(e(T^{\prime},R)+H)F}=\sigma_{YF}=\mu_{Y}\otimes[1]\), as \(\sigma_{H}=\mu_{H}\) is uniform and acts as a one-time pad on the hash. Then, as classical computations are quantum channels,
\[\|\rho_{(e(T^{\prime},R)+H)G(e(\hat{T}^{\prime},\hat{R})+\hat{H})F}-\sigma_{YGYF}\|_{\mathrm{Tr}}\leq\|\rho_{T^{\prime}\hat{T}^{\prime}RH\hat{R}\hat{H}F_{0}F}-\sigma_{T^{\prime}T^{\prime}RHRHF_{0}F_{0}}\|_{\mathrm{Tr}}\leq\varepsilon_{1}. \tag{66}\]
**Proposition 5.5**.: Protocol 5.3 is \(\varepsilon_{2}\)-hiding.
Proof.: As Alice is honest, the commitment \(c\) is hiding in the sense that \(\|\rho_{RHB_{0}}-\mu_{RH}\otimes\rho_{B_{0}}\|_{\mathrm{Tr}}\leq\varepsilon_{2}\). Consider the state \(\sigma_{RHATT^{\prime}VB_{0}}=\mu_{RH}\otimes\rho_{ATT^{\prime}VB_{0}}\). As \(H\) is uniformly random, for each \(t^{\prime}\in T^{\prime}\) and \(r\in R\), \(e(t^{\prime},r)+H\) is uniformly random. Hence,
\[\sigma_{(e(T^{\prime},R)+H)AVB_{0}}=\mu_{Y}\otimes\sigma_{AVB_{0}}=\mu_{Y}\otimes\rho_{AVB_{0}}. \tag{67}\]

Since \(\|\rho_{RHATT^{\prime}VB_{0}}-\sigma_{RHATT^{\prime}VB_{0}}\|_{\mathrm{Tr}}\leq\varepsilon_{2}\) by the hiding property of \(c\), and Bob and Eve's registers after commit, and again after check, are obtained from \(V\), \(B_{0}\), and the classical message \(a\) by quantum channels, we get \(\|\rho_{YBE}-\mu_{Y}\otimes\rho_{BE}\|_{\mathrm{Tr}}\leq\varepsilon_{2}\) and \(\|\rho_{YB^{\prime}E^{\prime}}-\mu_{Y}\otimes\rho_{B^{\prime}E^{\prime}}\|_{\mathrm{Tr}}\leq\varepsilon_{2}\).

**Proposition 5.6**.: Protocol 5.3 is \(\bigl{(}\varepsilon_{3}+\bigl{(}1-\frac{2d}{n}\bigr{)}^{\eta\frac{n}{2}}\bigr{)}\)-binding.

Proof.: By the binding property of \(c\), it suffices to bound the probability that Bob accepts the reveal even though \(\hat{t}^{\prime}\neq t^{\prime}\). If \(t^{\prime}\neq\hat{t}^{\prime}\) and \(\mathrm{syn}(t^{\prime})=\mathrm{syn}(\hat{t}^{\prime})\), then \(t^{\prime}+\hat{t}^{\prime}\in C\setminus\{0\}\), so \(d(t^{\prime},\hat{t}^{\prime})\geq d\) as the code distance is \(d\). Since \(j\) is a random subset
of \(\eta\frac{n}{2}\) indices chosen uniformly at random, the probability that \(t^{\prime}_{j}=\hat{t}^{\prime}_{j}\) is no more than \(\frac{\binom{n/2-d}{\eta n/2}}{\binom{n/2}{\eta n/2}}\). Simplifying,
\[\begin{split}\Pr\big{[}T^{\prime}\neq\hat{T}^{\prime}\wedge\mathrm{syn}(T^{\prime})=&\mathrm{syn}(\hat{T}^{\prime})\wedge T^{\prime}_{J}=\hat{T}^{\prime}_{J}\big{]}\leq\Pr\Bigl{[}T^{\prime}\neq\hat{T}^{\prime}\wedge d(T^{\prime},\hat{T}^{\prime})\geq d\wedge T^{\prime}_{J}=\hat{T}^{\prime}_{J}\Bigr{]}\\ &\leq\frac{\binom{n/2-d}{\eta n/2}}{\binom{n/2}{\eta n/2}}=\frac{(n/2-d)\cdots(n/2-d-\eta n/2+1)}{(n/2)\cdots(n/2-\eta n/2+1)}\\ &=\biggl{(}1-\frac{d}{n/2}\biggr{)}\biggl{(}1-\frac{d}{n/2-1}\biggr{)}\cdots\biggl{(}1-\frac{d}{n/2-\eta n/2+1}\biggr{)}\\ &\leq\biggl{(}1-\frac{2d}{n}\biggr{)}^{\eta\frac{n}{2}},\end{split} \tag{70}\]
which gives the result.
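As a numerical sanity check of (70), with parameters chosen by us purely for illustration, the exact ratio of binomial coefficients is indeed dominated by \((1-\frac{2d}{n})^{\eta n/2}\):

```python
from math import comb

n, d, eta = 64, 8, 0.25
half = n // 2
eta_half = int(eta * half)  # eta * n/2 = 8 here
# Exact probability that a random eta*n/2-subset avoids all d disagreements.
exact = comb(half - d, eta_half) / comb(half, eta_half)
bound = (1 - 2 * d / n) ** eta_half
print(exact, bound, exact <= bound)  # the exact ratio sits below the bound
```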
**Theorem 5.7**.: Suppose \(\kappa\leq\frac{-\lg\cos\frac{\pi}{8}}{2}n-\frac{1}{4\ln 2}-s-\eta\frac{n}{2}\). Then, Protocol 5.3 is \(\max\{\varepsilon^{\prime},e^{1/4}(\cos\frac{\pi}{8})^{n/2}\}\) -uncloneable.
Proof.: Due to the leaky MoE property, we must have \(H_{\min}(T|AB;T^{\prime}|A^{\prime}TE)\geq(-\lg\cos\frac{\pi}{8})n-\frac{1}{2\ln 2}\) when Bob guesses \(t\) during the check phase. This implies that, for any measurement \(M\) Bob might have made to get \(\hat{t}\), either \(H_{\min}(T|M(AB))_{\rho}\geq\frac{-\lg\cos\frac{\pi}{8}}{2}n-\frac{1}{4\ln 2}\) or \(H_{\min}(T^{\prime}|A^{\prime}TE)_{\rho_{|(M(AB)=T)}}\geq\frac{-\lg\cos\frac{\pi}{8}}{2}n-\frac{1}{4\ln 2}\). In the former case, the probability that \(\hat{t}=t\), and hence that \(g=1\), is at most \(e^{1/4}(\cos\frac{\pi}{8})^{n/2}\). In the latter case, the additional information that Eve gets about \(t^{\prime}\) during the reveal phase is \(\mathrm{syn}(t^{\prime})\) and \(t^{\prime}_{j}\), so, writing her final register as \(E^{\prime\prime}=A^{\prime}TE\,\mathrm{syn}(T^{\prime})T^{\prime}_{J}J\),
\[\begin{split} H_{\min}(T^{\prime}|E^{\prime\prime})_{\rho_{|(M(AB )=T)}}&\geq H_{\min}(T^{\prime}|A^{\prime}TE)_{\rho_{|(M(AB)=T)}} -\lg|\mathrm{syn}(T^{\prime})|-|J|\\ &\geq\frac{-\lg\cos\frac{\pi}{8}}{2}n-\frac{1}{4\ln 2}-s-\eta \frac{n}{2}.\end{split} \tag{71}\]
Then, by hypothesis on the extractor, \(\bigl{\|}\rho_{YE^{\prime\prime}|(M(AB)=T)}-\mu_{Y}\otimes\rho_{E^{\prime \prime}|(M(AB)=T)}\bigr{\|}_{\mathrm{Tr}}\leq\varepsilon^{\prime}\). Thus, combining the two cases and noting that the events \(M(AB)=T\) and \(G=1\) are equivalent,
\[\begin{split}\bigl{\|}\rho_{YE^{\prime\prime}\wedge(G=1)}-\mu_{Y }\otimes\rho_{E^{\prime\prime}\wedge(G=1)}\bigr{\|}_{\mathrm{Tr}}\\ &\qquad=\Pr[M(AB)=T]\bigl{\|}\rho_{YE^{\prime\prime}|(M(AB)=T)}- \mu_{Y}\otimes\rho_{E^{\prime\prime}|(M(AB)=T)}\bigr{\|}_{\mathrm{Tr}}\\ &\qquad\leq\max\{\varepsilon^{\prime},e^{1/4}(\cos\frac{\pi}{8})^ {n/2}\}.\end{split} \tag{72}\]
## 6 Receiver-Independent Quantum Key Distribution
In this section, we discuss our final application, introduced in Section 1.3. In Section 6.1, we prove a version of the leaky MoE property that is robust against errors, given as Theorem 6.2, and discuss its expression as an entropic uncertainty relation, given as Corollary 6.5. In Section 6.2, we present receiver-independent QKD and provide a construction, given as Protocol 6.7. Finally, in Section 6.3, we recall the QKD security definitions and prove security for our construction.
### Robust leaky MoE property
We first need a robust version of the leaky MoE property, analogous to the game with imperfect guessing in [13]. To do so, we fix \(U,U^{\prime}\subseteq\mathbb{Z}_{2}^{n/2}\) to be neighbourhoods of \(0\), and modify the leaky MoE game winning condition by saying that Alice accepts if Bob's answer is in \(t+U\) and Charlie's is in \(t^{\prime}+U^{\prime}\). To warrant the name "leaky", we suppose that Charlie gets Bob's potentially erroneous guess of \(t\) -- but never the actual value of \(t\) chosen by Alice -- before making his guess. In the case of \(U=U^{\prime}=\{0\}\), this reduces to the original leaky MoE game. We formalise this.
**Definition 6.1**.: Let \(A\) be a set of subspaces of \(\mathbb{Z}_{2}^{n}\) of dimension \(n/2\), and \(U,U^{\prime}\subseteq\mathbb{Z}_{2}^{n/2}\) be neighbourhoods of \(0\). A _strategy_ \(\mathsf{S}\) for the \((n,A,U,U^{\prime})\)-robust leaky monogamy-of-entanglement game is simply a strategy for the \((n,A)\)-leaky MoE game. The _winning probability_ of \(\mathsf{S}\) is
\[\mathfrak{w}_{n,A,U,U^{\prime}}(\mathsf{S})=\mathbb{E}_{a\in A}\mathbb{E}_{t,t^{\prime}\in\mathbb{Z}_{2}^{n/2}}\sum_{u\in U,u^{\prime}\in U^{\prime}}\operatorname{Tr}\bigl{[}(B^{a}_{t+u}\otimes C^{a,t+u}_{t^{\prime}+u^{\prime}})\Phi(\,|a_{t,t^{\prime}}\rangle\!\langle a_{t,t^{\prime}}|)\bigr{]}. \tag{73}\]
The _optimal winning probability_ of the game is \(\mathfrak{w}^{*}(n,A,U,U^{\prime})=\sup_{\mathsf{S}}\mathfrak{w}_{n,A,U,U^{\prime}}(\mathsf{S})\).
We show an upper bound in a context relevant to QKD, where the errors correspond to independent bit flip errors. We define some standard objects: the Hamming norm of \(x\in\mathbb{Z}_{2}^{n}\) is the number of non-zero terms, written \(|x|\), and the corresponding metric, the Hamming distance, is written \(d(x,y)=|x+y|\); the ball of radius \(m\) in \(\mathbb{Z}_{2}^{n}\) is \(B(n,m):=\{x\in\mathbb{Z}_{2}^{n}\mid|x|\leq m\}\); and the binary entropy function is \(h:[0,1]\to\mathbb{R}\) defined as \(h(x)=-x\lg x-(1-x)\lg(1-x)\). We have a very useful bound on the volume of this ball: if \(m\leq n/2\), then \(|B(n,m)|\leq 2^{nh(m/n)}\).
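This volume bound is standard; a quick numerical check, with our own snippet and parameters:

```python
from math import comb, log2

def h(x):
    """Binary entropy h(x) = -x lg x - (1-x) lg(1-x), with h(0) = h(1) = 0."""
    return 0.0 if x in (0.0, 1.0) else -x * log2(x) - (1 - x) * log2(1 - x)

def ball_volume(n, m):
    """|B(n, m)|: number of strings in Z_2^n of Hamming weight at most m."""
    return sum(comb(n, k) for k in range(m + 1))

n, m = 40, 10
print(ball_volume(n, m) <= 2 ** (n * h(m / n)))  # True: the bound holds
```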
**Theorem 6.2**.: Let \(A\) be the set of register subspaces of \(\mathbb{Z}_{2}^{n}\) of dimension \(n/2\). Then, for \(m,m^{\prime}\leq n/4\)
\[\mathfrak{w}^{*}(n,A,B(n/2,m),B(n/2,m^{\prime}))\leq\sqrt{e}2^{ \frac{n}{2}h(\frac{2m}{n})+\frac{n}{4}h(\frac{2m^{\prime}}{n})}\bigl{(}\cos \tfrac{\pi}{8}\bigr{)}^{n}. \tag{74}\]
Note that this bound is not particularly tight. We try to stick with the tightest possible expression throughout the proof before passing to this simple closed-form expression at the very end.
The proof proceeds similarly to Theorem 3.2. First, we need a robust generalisation of Lemma 3.4.
**Lemma 6.3**.: Let \(a,b\subseteq\mathbb{Z}_{2}^{n}\) be subspaces of dimension \(n/2\), and \(U,U^{\prime}\subseteq\mathbb{Z}_{2}^{n/2}\) be neighbourhoods of \(0\). Then,
\[\left\|\sqrt{P^{a}}\sqrt{P^{b}}\right\|\leq\max_{t\in\mathbb{Z}_{2 }^{n}}\Bigl{(}|(a+b+t)\cap U_{b}||U||U^{\prime}|\frac{|a\cap b|}{|a|}\Bigr{)}^{ 1/2}, \tag{75}\]
where \(P^{a}=\sum_{t,t^{\prime}\in\mathbb{Z}_{2}^{n/2}}\sum_{u\in U,u^{\prime}\in U^{\prime}}\,|a_{t,t^{\prime}}\rangle\!\langle a_{t,t^{\prime}}|\otimes B^{a}_{t+u}\otimes C^{a,t+u}_{t^{\prime}+u^{\prime}}\) and \(U_{b}=\{x_{b}\mid x\in U\}\subseteq\mathbb{Z}_{2}^{n}\) for \(x_{b}\) as defined in Section 2.2.
Proof.: Since \(\sum_{v^{\prime}\in U^{\prime}}C^{b,s+v}_{s^{\prime}+v^{\prime}}\leq\mathbb{I}\) for any \(s,s^{\prime},v\in\mathbb{Z}_{2}^{n/2}\), we get the bound
\[\begin{split} P^{b}&\leq\sum_{s,s^{\prime}\in \mathbb{Z}_{2}^{n/2};v\in U}|b_{s,s^{\prime}}\rangle\!\langle b_{s,s^{\prime}}| \otimes B^{b}_{s+v}\otimes\mathbb{I}=\sum_{s\in\mathbb{Z}_{2}^{n/2};v\in U}\Pi _{b+s_{b}}\otimes B^{b}_{s+v}\otimes\mathbb{I}\\ &=\sum_{s\in\mathbb{Z}_{2}^{n/2}}\Pi_{\bigcup_{v\in U}(b+(s+v)_{b} )}\otimes B^{b}_{s}\otimes\mathbb{I}.\end{split} \tag{76}\]
Since the right hand side is a projector, we have by monotonicity of the square root that it is also a bound on \(\sqrt{P^{b}}\). We also bound
\[P^{a}\leq\sum_{t,t^{\prime},u,u^{\prime}}\;|a_{t,t^{\prime}}\rangle\!\langle a_{ t,t^{\prime}}|\otimes\mathbb{I}\otimes C^{a,t+u}_{t^{\prime}+u^{\prime}}=\sum_{t,t^{ \prime},u,u^{\prime}}\;|a_{t+u,t^{\prime}+u^{\prime}}\rangle\!\langle a_{t+u,t^ {\prime}+u^{\prime}}|\otimes\mathbb{I}\otimes C^{a,t}_{t^{\prime}}. \tag{77}\]
Using these,
\[\begin{split}&\left\|\sqrt{P^{a}}\sqrt{P^{b}}\right\|=\left\| \sqrt{P^{b}}P^{a}\sqrt{P^{b}}\right\|^{1/2}\\ &\leq\Big{\|}\sum_{\begin{subarray}{c}t,t^{\prime},s\in\mathbb{Z }_{2}^{n/2}\\ u\in U,u^{\prime}\in U^{\prime}\end{subarray}}\Pi_{\bigcup_{v\in U}(b+(s+v) _{b})}\,|a_{t+u,t^{\prime}+u^{\prime}}\rangle\!\langle a_{t+u,t^{\prime}+u^{ \prime}}|\,\Pi_{\bigcup_{v\in U}(b+(s+v)_{b})}\otimes B_{s}^{b}\otimes C^{a, t}_{t^{\prime}}\Big{\|}^{1/2}\\ &\leq\max_{s\in\mathbb{Z}_{2}^{n/2}}\Big{\|}\sum_{t,t^{\prime},u, u^{\prime}}\Pi_{\bigcup_{v\in U}(b+(s+v)_{b})}\,|a_{t+u,t^{\prime}+u^{\prime}} \rangle\!\langle a_{t+u,t^{\prime}+u^{\prime}}|\,\Pi_{\bigcup_{v\in U}(b+(s+v) _{b})}\otimes C^{a,t}_{t^{\prime}}\Big{\|}^{1/2}.\end{split} \tag{78}\]
Next, using the triangle inequality,
\[\begin{split}\left\|\sqrt{P^{a}}\sqrt{P^{b}}\right\|\leq\max_{s} \Bigl{(}\sum_{u}\Bigl{\|}\sum_{t,t^{\prime},u^{\prime}}\Pi_{\bigcup_{v}(b+(s+v )_{b})}\,|a_{t+u,t^{\prime}+u^{\prime}}\rangle\!\langle a_{t+u,t^{\prime}+u^{ \prime}}|\,\Pi_{\bigcup_{v}(b+(s+v)_{b})}\otimes C^{a,t}_{t^{\prime}}\Bigr{\|} \Bigr{)}^{1/2}.\end{split} \tag{79}\]
Now, as the terms \(\sum_{u^{\prime}}\Pi_{\bigcup_{v}(b+(s+v)_{b})}\,|a_{t+u,t^{\prime}+u^{\prime }}\rangle\!\langle a_{t+u,t^{\prime}+u^{\prime}}|\,\Pi_{\bigcup_{v}(b+(s+v)_{b })}\otimes C^{a,t}_{t^{\prime}}\) of the sum are Hermitian operators with orthogonal supports, we can bound
\[\begin{split}\left\|\sqrt{P^{a}}\sqrt{P^{b}}\right\|& \leq\max_{s}\Bigl{(}\sum_{u}\max_{t,t^{\prime}}\Bigl{\|}\sum_{u^{ \prime}}\Pi_{\bigcup_{v}(b+(s+v)_{b})}\,|a_{t+u,t^{\prime}+u^{\prime}} \rangle\!\langle a_{t+u,t^{\prime}+u^{\prime}}|\,\Pi_{\bigcup_{v}(b+(s+v)_{b })}\otimes C^{a,t}_{t^{\prime}}\Bigr{\|}\Bigr{)}^{1/2}\\ &\leq\max_{s}\Bigl{(}|U|\max_{t,t^{\prime}}\Bigl{\|}\sum_{u^{ \prime}}\Pi_{\bigcup_{v}(b+(s+v)_{b})}\,|a_{t,t^{\prime}+u^{\prime}}\rangle \!\langle a_{t,t^{\prime}+u^{\prime}}|\,\Pi_{\bigcup_{v}(b+(s+v)_{b})}\Bigr{\|} \Bigr{)}^{1/2}.\end{split} \tag{80}\]
For each of these terms,
\[\begin{split}&\Bigl{\|}\sum_{u^{\prime}}\Pi_{\bigcup_{v}(b+(s+v)_{b })}\,|a_{t,t^{\prime}+u^{\prime}}\rangle\!\langle a_{t,t^{\prime}+u^{\prime}} |\,\Pi_{\bigcup_{v}(b+(s+v)_{b})}\Bigr{\|}\\ &\leq\sum_{u^{\prime}}\Bigl{\|}\Pi_{\bigcup_{v}(b+(s+v)_{b})}\,|a _{t,t^{\prime}+u^{\prime}}\rangle\!\langle a_{t,t^{\prime}+u^{\prime}}|\,\Pi_ {\bigcup_{v}(b+(s+v)_{b})}\Bigr{\|}\\ &=\sum_{u^{\prime}}\;\langle a_{t,t^{\prime}+u^{\prime}}|\Pi_{ \bigcup_{v}(b+(s+v)_{b})}|a_{t,t^{\prime}+u^{\prime}}\rangle=|U^{\prime}| \frac{|(a+t_{a})\cap\bigcup_{v}(b+(s+v)_{b})|}{|a|}.\end{split} \tag{81}\]
The cardinality of the intersection may be written as
\[\begin{split}\left|(a+t_{a})\cap\bigcup_{v}(b+(s+v)_{b})\right|& =|a\cap b||\,\{v\in U\;|\;(a+t_{a})\cap(b+(s+v)_{b})\neq\varnothing\}|\\ &=|a\cap b||\,\{v\in U\;|\;t_{a}+s_{b}+v_{b}\in a+b\}|\\ &=|a\cap b||(a+b+t_{a}+s_{b})\cap U_{b}|.\end{split} \tag{82}\]
This gives the wanted bound
\[\begin{split}\left\|\sqrt{P^{a}}\sqrt{P^{b}}\right\|&\leq\max_{s}\Bigl{(}|U|\max_{t,t^{\prime}}|U^{\prime}||(a+b+t_{a}+s_{b})\cap U_{b}|\frac{|a\cap b|}{|a|}\Bigr{)}^{1/2}\\ &\leq\max_{t\in\mathbb{Z}_{2}^{n}}\Bigl{(}|U||U^{\prime}||(a+b+t)\cap U_{b}|\frac{|a\cap b|}{|a|}\Bigr{)}^{1/2}.\end{split} \tag{83}\]
Now, we proceed to the proof of the theorem.
Proof of Theorem 6.2.: Write \(U=B(n/2,m)\) and \(U^{\prime}=B(n/2,m^{\prime})\). First, we bound the winning probability by an operator norm
\[\mathfrak{w}_{n,A,U,U^{\prime}}(\mathsf{S})\leq\Bigl{\|}\mathbb{E}_{a\in A}P^{a}\Bigr{\|}, \tag{84}\]
so that we can apply Lemma 3.3 using the same permutations \(\pi_{s}:S\to S\) as in Theorem 3.2, giving
\[\mathfrak{w}_{n,A,U,U^{\prime}}(\mathsf{S})\leq\mathbb{E}_{s\in S}\max_{\gamma\in S}\Bigl{\|}\sqrt{P^{\mathrm{span}\,\gamma}}\sqrt{P^{\mathrm{span}\,\pi_{s}(\gamma)}}\Bigr{\|}. \tag{85}\]
We use Lemma 6.3 to write the overlap \(\left\|\sqrt{P^{a}}\sqrt{P^{b}}\right\|\) in terms of \(\dim(a\cap b)\). Suppose \(a=\mathrm{span}\,\gamma\) and \(b=\mathrm{span}\,\eta\). Then \(U_{b}=\,\{u\in\mathbb{Z}_{2}^{n}\mid u_{\eta}=0,|u|\leq m\}\). Thus, as \(a+b=\mathrm{span}(\eta\cup\gamma)\), for any \(t\in\mathbb{Z}_{2}^{n}\),
\[(a+b+t)\cap U_{b}=\,\{u\in\mathbb{Z}_{2}^{n}\mid u_{\eta^{c}\cap\gamma^{c}}=t_ {\eta^{c}\cap\gamma^{c}},u_{\eta}=0,|u|\leq m\}\,. \tag{86}\]
To maximise the cardinality of this set, we take \(t_{\eta^{c}\cap\gamma^{c}}=0\), so
\[\begin{split}|(a+b+t)\cap U_{b}|&=|\,\{u\in\mathbb{ Z}_{2}^{n}\mid u_{\eta\cup\gamma^{c}}=0,|u|\leq m\}|\\ &=|B(|\eta^{c}\cap\gamma|,m)|=|B(n/2-\dim(a\cap b),m)|.\end{split} \tag{87}\]
This gives \(\left\|\sqrt{P^{a}}\sqrt{P^{b}}\right\|\leq\sqrt{|B(n/2,m)||B(n/2,m^{\prime} )||B(n/2-\dim(a\cap b),m)|}2^{\dim(a\cap b)/2-n/4}\). Putting this into the bound on the winning probability,
\[\mathfrak{w}_{n,A,U,U^{\prime}}(\mathsf{S})\leq\frac{1}{\binom{n}{n/2}}\sum_{ k=0}^{n/2}\binom{n/2}{k}^{2}\sqrt{|B(n/2,m)||B(n/2,m^{\prime})||B(k,m)| 2^{-k}}. \tag{88}\]
We can bound \(|B(k,m)|\leq|B(n/2,m)|\) and therefore
\[\begin{split}\mathfrak{w}_{n,A,U,U^{\prime}}(\mathsf{S})& \leq\frac{|B(n/2,m)|\sqrt{|B(n/2,m^{\prime})|}}{\binom{n}{n/2}} \sum_{k=0}^{n/2}\binom{n/2}{k}^{2}2^{-k/2}\\ &\leq\sqrt{e}|B(n/2,m)|\sqrt{|B(n/2,m^{\prime})|}(\cos\tfrac{\pi }{8})^{n}.\end{split} \tag{89}\]
Using the bound on the volume of a ball, \(|B(n/2,m)|\leq 2^{\frac{n}{2}h(\frac{2m}{n})}\), gives the result.
It will prove useful to express the winning probability of this game as a sequential min-entropy as well.
**Corollary 6.4**.: Fix a strategy for the \((n,A,U,U^{\prime})\)-robust leaky monogamy game with \(U=B(n/2,m)\) and \(U^{\prime}=\{0\}\). Let the state
\[\sigma_{AA^{\prime}TT^{\prime}BC}=\mathbb{E}_{a\in A}\mathbb{E}_{t,t^{\prime}\in\mathbb{Z}_{2}^{n/2}}[aatt^{\prime}]\otimes\Phi(\,|a_{t,t^{\prime}}\rangle\!\langle a_{t,t^{\prime}}|) \tag{90}\]

be the state shared after the cloning attack \(\Phi\), where \(A^{\prime}\) is a copy of \(A\). Then, accepting Bob's guess of \(t\) whenever it lies within Hamming distance \(m\) of \(t\), Theorem 6.2 gives

\[H_{\min}(T|AB;T^{\prime}|A^{\prime}TC)_{\sigma}\geq-\lg\mathfrak{w}^{*}(n,A,U,U^{\prime})\geq(-\lg\cos\tfrac{\pi}{8})n-\tfrac{n}{2}h(\tfrac{2m}{n})-\tfrac{1}{2\ln 2}.\]

### Receiver-independent QKD
**Protocol 6.6** (one-sided device-independent QKD of [13]).:
In our model, the security of this QKD scheme can be broken, because we cannot trust Bob's classical device to honestly do parameter estimation. Bob would simply control the communication to and from the device, and receive the message \(\hat{k}\) or an abort message once the protocol finishes. Consider the following attack involving a malicious device provided by an eavesdropper Eve. When Alice sends the state \(|x^{\theta}\rangle\), Eve intercepts it, holds on to it, and sends Bob's device \(|0^{n}\rangle\) instead. Then, Eve intercepts every message Alice sends and is able to compute Bob's intended output \(\hat{k}\), while Bob's device simply outputs a uniformly random string to him. Neither Alice nor Bob has learned that an attack has happened. In this way, Eve succeeds in completely breaking the security of the one-sided device-independent QKD protocol in the receiver-independent model.
To avoid this sort of attack, we need a QKD protocol where only Bob's communication is trusted but none of his devices are. We present the protocol. Let \(e:\mathbb{Z}_{2}^{n/2}\times R\rightarrow\mathbb{Z}_{2}^{\ell}\) be a quantum-proof \((\kappa,\varepsilon)\)-strong extractor and \(C\subseteq\mathbb{Z}_{2}^{n/2}\) be a \((n/2,n/2-s,d)\)-linear error correcting code with syndrome \(\mathrm{syn}:\mathbb{Z}_{2}^{n/2}\rightarrow\mathbb{Z}_{2}^{s}\).
**Protocol 6.7** (receiver-independent QKD).:
**Privacy amplification**: Alice sends uniformly random \(r\in R\) to Bob. Alice outputs \(k=e(\bar{t}^{\prime},r)\) and Bob outputs \(\hat{k}=e(\hat{t}^{\prime},r)\).
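In code terms, the privacy amplification step is a single extractor call on each side; a sketch reusing the hypothetical `toeplitz_extract` function from the earlier example (names and parameters are ours):

```python
import numpy as np
# assumes toeplitz_extract from the earlier sketch is in scope

rng = np.random.default_rng(7)
n_half, ell = 16, 4
t_bar = rng.integers(0, 2, n_half)        # Alice's corrected string
t_hat = t_bar.copy()                      # device's string after error correction
r = rng.integers(0, 2, n_half + ell - 1)  # seed r, sent in the clear
k_alice = toeplitz_extract(t_bar, r, ell)
k_bob = toeplitz_extract(t_hat, r, ell)
assert (k_alice == k_bob).all()           # keys agree whenever the strings match
```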
We note that, unlike usual QKD, Alice has full control over whether to abort the protocol. This allows us to consider the case where the checks that Bob makes are untrusted.
Since Bob's classical computations are untrusted, the idea of correctness must also be altered. Neither Alice nor Bob can in general check that Bob's final key matches Alice's, since Bob's device can always, once all the checks have been passed, output a uniformly random string to Bob. As such, all Alice can assure herself of is that Bob's device has all the necessary information allowing it to compute the key. So, we only require correctness to hold for the device's computed key, though Bob may not actually receive it.
### QKD security
First, following [14, 15], we give the security definition of QKD.
**Definition 6.8**.: A _receiver-independent QKD protocol_ is an interaction between Alice, who is trusted, and Bob, who has trusted communication but untrusted quantum and classical devices, and which is eavesdropped by an eavesdropper Eve. The interaction produces the state \(\rho_{FK\hat{K}E}\) where \(F=\mathbb{Z}_{2}\) holds a flag set to \(1\) if the protocol accepts and \(0\) otherwise, \(K=\mathbb{Z}_{2}^{\ell}\) holds Alice's outputted key, \(\hat{K}=\mathbb{Z}_{2}^{\ell}\) holds Bob's device's key, and \(E\) is Eve's side information. The protocol is
* \(\varepsilon_{1}\)_-correct_ if \(\Pr\Bigl{[}K\neq\hat{K}\wedge F=1\Bigr{]}\leq\varepsilon_{1}\).
* \(\varepsilon_{2}\)_-secret_ if \(\|\rho_{KE\wedge(F=1)}-\mu_{K}\otimes\rho_{E\wedge(F=1)}\|_{\mathrm{Tr}}\leq \varepsilon_{2}\).
* \((\Phi,\varepsilon_{3})\)_-complete_ if, when Eve acts as the channel \(\Phi\) and Bob's device works as intended, \(\Pr[F=0]\leq\varepsilon_{3}\).
A subtle but important difference between this and the usual QKD definition is in Bob's key \(\hat{k}\). Here, the key is produced by Bob's device, but as the device is untrusted, Alice cannot be sure that the key is actually given to Bob at the end of the protocol.
We now show that Protocol 6.7 satisfies these security properties under some conditions on the parameters.
**Proposition 6.9**.: Protocol 6.7 is \(\left(1-\frac{2d}{n}\right)^{\eta\frac{n}{2}}\)-correct.
Note that, in our protocol, in order for Bob to actually receive the key, Bob's classical device is only required to do one computation honestly: the final privacy amplification step.
Proof.: First, the event that \(F=1\) is equivalent to \(d(T,\hat{T})\leq\gamma\frac{n}{2}\wedge\bar{T}^{\prime}_{J}=\hat{T}^{\prime}_ {J}\). Then,
\[\Pr[K\neq\hat{K}\wedge F=1]\leq\Pr[e(\bar{T}^{\prime},R)\neq e(\hat{T}^{\prime },R)\wedge\bar{T}^{\prime}_{J}=\hat{T}^{\prime}_{J}]\leq\Pr[\bar{T}^{\prime} \neq\hat{T}^{\prime}\wedge\bar{T}^{\prime}_{J}=\hat{T}^{\prime}_{J}] \tag{95}\]
We claim that the event \(\bar{T}^{\prime}\neq\hat{T}^{\prime}\) implies the event \(d(\bar{T}^{\prime},\hat{T}^{\prime})\geq d\). To see this (writing, for \(x\in\mathbb{Z}_{2}^{n/2}\), \(\mathrm{corr}(x)\in C\) for the correction from the error-correcting code, _i.e._ the nearest point in \(C\) to \(x\)),
first note that if \(\bar{t}^{\prime}\neq\hat{t}^{\prime}\), then \(\mathrm{corr}(\bar{t}^{\prime})\neq\mathrm{corr}(\hat{t}^{\prime})\). Then, as the code distance is \(d\), \(d(\mathrm{corr}(\bar{t}^{\prime}),\mathrm{corr}(\hat{t}^{\prime}))\geq d\). Since \(\bar{t}^{\prime}+\mathrm{corr}(\bar{t}^{\prime})=\hat{t}^{\prime}+\mathrm{corr}(\hat{t}^{\prime})\), \(d(\bar{t}^{\prime},\hat{t}^{\prime})=d(\mathrm{corr}(\bar{t}^{\prime}),\mathrm{corr}(\hat{t}^{\prime}))\geq d\).
Thus, as \(j\) is sampled uniformly at random among the substrings of length \(\eta\frac{n}{2}\),
\[\Pr[d(\bar{T}^{\prime},\hat{T}^{\prime})\geq d\wedge\bar{T}^{\prime}_{J}=\hat{T}^{\prime}_{J}]\leq\frac{\binom{n/2-d}{\eta n/2}}{\binom{n/2}{\eta n/2}}\leq\left(1-\frac{2d}{n}\right)^{\eta\frac{n}{2}}. \tag{96}\]
\(\blacksquare\)
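To spell out the final inequality in (96) (a standard bound, included here for completeness): expanding the ratio of binomial coefficients as a product of \(\eta\frac{n}{2}\) factors gives

\[\frac{\binom{n/2-d}{\eta n/2}}{\binom{n/2}{\eta n/2}}=\prod_{i=0}^{\eta n/2-1}\frac{n/2-d-i}{n/2-i}\leq\left(\frac{n/2-d}{n/2}\right)^{\eta n/2}=\left(1-\frac{2d}{n}\right)^{\eta\frac{n}{2}},\]

since each factor \(\frac{n/2-d-i}{n/2-i}=1-\frac{d}{n/2-i}\) decreases as \(i\) grows.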
**Theorem 6.10**.: Suppose \(0\leq\delta\leq\gamma,\frac{2d}{n}\). Then, Protocol 6.7 is \((\Phi^{\otimes n},(e^{-(\gamma-\delta)^{2}})^{n}+(e^{-(2d/n-\delta)^{2}})^{n})\)-complete, where \(\Phi^{\otimes n}:\mathcal{L}(V)\rightarrow\mathcal{L}(V)\) is any iid noise channel such that \(\langle 0|\Phi(\,|1\rangle\!\langle 1|)|0\rangle\leq\delta\), \(\langle 1|\Phi(\,|0\rangle\!\langle 0|)|1\rangle\leq\delta\), \(\langle+|\Phi(\,|-\rangle\!\langle-|)|+\rangle\leq\delta\), and \(\langle-|\Phi(\,|+\rangle\!\langle+|)|-\rangle\leq\delta\).
In particular, note that this gives an exponentially small abort rate if the error \(\delta<\gamma,\frac{2d}{n}\). We make use of Hoeffding's inequality in the proof: for independent random variables \(\Gamma_{1},\ldots,\Gamma_{n}\) with support in \([0,1]\), their sum \(\Gamma=\sum_{i}\Gamma_{i}\) has the property that, for any \(x\geq 0\),
\[\Pr[\Gamma\geq\mathbb{E}\Gamma+x]\leq\exp\!\left(-\frac{2x^{2}}{n}\right) \tag{97}\]
Proof.: First, recall that Alice sends states of the form \(|a_{t,t^{\prime}}\rangle=|x^{\theta}\rangle\), for \(x=t_{a}+t^{\prime}_{a^{\perp}}\) and \(\theta=\iota(a)\), the indicator vector. Thus, the conditions on \(\Phi\) are simply that there is an independent probability at most \(\delta\) of a bit flip on any of the bits of the measured strings. Next, since \(\hat{t}^{\prime}=\bar{t}^{\prime}\) implies that \(\hat{t}^{\prime}_{j}=\bar{t}^{\prime}_{j}\), we have that
\[\begin{split}\Pr[F=0]&=\Pr\!\left[d(\hat{T},T)>\tfrac{n}{2}\gamma\vee\bar{T}^{\prime}_{J}\neq\hat{T}^{\prime}_{J}\right]\\ &\leq\Pr\!\left[d(\hat{T},T)>\tfrac{n}{2}\gamma\right]+\Pr\!\left[\hat{T}^{\prime}\neq\bar{T}^{\prime}\right]\!.\end{split} \tag{98}\]
First, the probability of more than \(\gamma\frac{n}{2}\) bit flips occurring on \(\hat{t}\) is
\[\Pr\!\left[d(\hat{T},T)>\tfrac{n}{2}\gamma\right]\leq\sum_{k=n\gamma/2+1}^{n/ 2}\binom{n/2}{k}\delta^{k}(1-\delta)^{n/2-k}=\Pr\!\left[\Gamma\geq\gamma\frac {n}{2}+1\right]\!, \tag{99}\]
where the binomial random variable \(\Gamma\sim\mathrm{Bin}(n/2,\delta)\). Consider the independent identically distributed Bernoulli random variables \(\Gamma_{1},\ldots,\Gamma_{n/2}\sim\mathrm{B}(\delta)\). Since we know \(\Gamma=\sum_{i}\Gamma_{i}\) and \(\mathbb{E}\Gamma=\delta\frac{n}{2}\), Hoeffding's inequality provides
\[\Pr\!\left[d(\hat{T},T)>\tfrac{n}{2}\gamma\right]\leq\exp\!\left(-\frac{4(( \gamma-\delta)n/2+1)^{2}}{n}\right)\leq\left(\exp(-(\gamma-\delta)^{2}) \right)^{n}\!. \tag{100}\]
To proceed similarly for the second term, first note that, in the same way as in Proposition 6.9, \(\bar{t}^{\prime}\neq\hat{t}^{\prime}\) implies \(d(\hat{t}^{\prime},t^{\prime})\geq d\). Thus, as before \(\Pr\!\left[\hat{T}^{\prime}\neq\bar{T}^{\prime}\right]\leq\Pr\!\left[d(\hat{T }^{\prime},T^{\prime})\geq d\right]\leq\left(\exp(-(\frac{2d}{n}-\delta)^{2} )\right)^{n}\!\). \(\blacksquare\)
**Lemma 6.11**.: Let \(X=\mathbb{Z}_{2}^{n}\) and \(A\) be registers, \(\rho_{XA}\) be a cq state, and \(U=B(n,m)\subseteq\mathbb{Z}_{2}^{n}\) be a ball. For \(\sigma=\mathbbm{E}_{u\in U}\,X_{X}^{u}\rho_{XA}X_{X}^{u}\) where \(X\) is the Pauli operator and any POVM \(M:X\to\mathcal{P}(A)\), we have
\[\rho_{M(A)A\wedge(d(M(A),X)\leq m)}=|U|\sigma_{M(A)A\wedge(M(A)=X)}. \tag{101}\]
Proof.: First, writing \(\rho_{XA}=\sum_{x\in X}[x]\otimes\rho_{A}^{x}\), we see that
\[\rho_{XM(A)A}=\sum_{x,y\in X}[xy]\otimes\sqrt{M_{y}}\rho_{A}^{x}\sqrt{M_{y}}, \tag{102}\]
and so
\[\rho_{M(A)A\wedge(d(M(A),X)\leq m)}=\sum_{\begin{subarray}{c}x,y\in X\\ d(x,y)\leq m\end{subarray}}[y]\otimes\sqrt{M_{y}}\rho_{A}^{x}\sqrt{M_{y}}=\sum _{y\in X}[y]\otimes\sqrt{M_{y}}\sum_{u\in U}\rho_{A}^{y+u}\sqrt{M_{y}}. \tag{103}\]
On the other hand, \(|U|\sigma_{XA}=\sum_{x\in X,u\in U}[x]\otimes\rho_{A}^{x+u}\), so
\[|U|\sigma_{M(A)A\wedge(M(A)=X)}=\sum_{x\in X,u\in U}[x]\otimes\sqrt{M_{x}}\rho _{A}^{x+u}\sqrt{M_{x}}, \tag{104}\]
which completes the proof.
**Lemma 6.12**.: Let \(X,Y,A\) be registers and \(\rho_{XYA}\) be a ccq state. Then, for any \(y_{0}\in Y\),
\[H_{\min}(X|A)_{\rho_{\wedge(Y=y_{0})}}\geq H_{\min}(X|AY)_{\rho}. \tag{105}\]
Proof.: We interpret this in terms of the guessing probability. Writing \(\rho_{XYA}=\sum_{x,y}[xy]\otimes\rho_{A}^{x,y}\), the probability of guessing \(X\) given \(AY\) is
\[\begin{split} 2^{-H_{\min}(X|AY)_{\rho}}&=\sup_{M^{y}:X \to\mathcal{P}(A)\text{ POVMs}}\sum_{x,y}\operatorname{Tr}[M_{x}^{y}\rho_{A}^{x,y}]\\ &\geq\sup_{M:X\to\mathcal{P}(A)\text{ POVM}}\sum_{x}\operatorname{Tr}[M_{x}\rho_{A}^{x,y_{0}}]=2^{-H_{\min}(X|A)_{ \rho_{\wedge(Y=y_{0})}}},\end{split} \tag{106}\]
as \(\rho_{XYA\wedge(Y=y_{0})}=\sum_{x}[xy_{0}]\otimes\rho_{A}^{x,y_{0}}\).
**Theorem 6.13**.: Suppose that \(\kappa\leq\left(-\log\cos\frac{\pi}{8}-\frac{2s}{n}-2\eta-\frac{1}{(2\ln 2)n}\right)\frac{n}{2}\). Then, the QKD protocol Protocol 6.7 is \(\max\Bigl\{2^{\frac{n}{2}h(\gamma)}\varepsilon,\;2^{-\left(-\log\cos\frac{\pi}{8}-h(\gamma)-\frac{1}{(2\ln 2)n}\right)\frac{n}{2}}\Bigr\}\)-secret.
Asymptotically, in order for the QKD protocol to produce a secure key, we require only
\[\left(-\log\cos\frac{\pi}{8}-h(\gamma)-\frac{1}{(2\ln 2)n}\right)\frac{n}{2}>0, \left(-\log\cos\frac{\pi}{8}-\frac{2s}{n}-2\eta-\frac{1}{(2\ln 2)n}\right) \frac{n}{2}>0, \tag{107}\]
as we can make \(\varepsilon\) arbitrarily small by enlarging the key. These provide the asymptotic noise tolerance. First, \(\frac{1}{(2\ln 2)n}\to 0\), and we can choose \(\eta\) small enough that \(\eta\to 0\) while preserving subexponential correctness (for example \(\eta=1/\sqrt{n}\)), so we do not need to worry about those terms. Also, the Shannon limit provides the minimum value \(s=\frac{n}{2}h(\gamma)\). Therefore, the inequalities reduce to \(-\log\cos\frac{\pi}{8}>h(\gamma)\) asymptotically, so approximately \(\gamma<0.0153\); thus the asymptotic noise tolerance is \(\approx 1.5\%\). Note that this is the same tolerance as in [7].
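As a quick numerical check of this threshold (a standalone script, not part of the protocol analysis), one can solve \(h(\gamma)=-\log_{2}\cos\frac{\pi}{8}\) by bisection:

```python
import math

def h(p):
    """Binary entropy in bits."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

target = -math.log2(math.cos(math.pi / 8))  # ~0.1142 bits

# h is increasing on (0, 1/2), so bisect for h(gamma) = target.
lo, hi = 1e-9, 0.5
for _ in range(100):
    mid = (lo + hi) / 2
    if h(mid) < target:
        lo = mid
    else:
        hi = mid

print(f"target entropy = {target:.4f} bits")
print(f"gamma          = {lo:.4f}")  # ~0.0153, i.e. ~1.5% tolerance
```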
Proof.: At the start of the protocol, Alice prepares the state \(\rho_{ATT^{\prime}V}=\mathbbm{E}_{a,t,t^{\prime}}[att^{\prime}]\otimes\,|a_{t,t^{ \prime}}\rangle\!\langle a_{t,t^{\prime}}|\), where she holds onto \(ATT^{\prime}\) and sends \(V\). Eve acts with some channel \(\Phi:\mathcal{L}(V)\rightarrow\mathcal{L}(BE)\) and sends the register \(B\) to Bob. Bob sends \(\hat{T}\) to Alice, which Eve may intercept and copy. We work first with the state \(\sigma_{ATT^{\prime}BE}=\mathbbm{E}_{u\in U}\,X_{T}^{u}\rho X_{T}^{u}\), and then exchange it for \(\rho\) later, using Lemma 6.11. At the parameter estimation step, the robust leaky MoE property implies \(H_{\min}(T|AB;T^{\prime}|A^{\prime}TE)_{\sigma}\geq\Big{(}-\lg\cos\frac{\pi}{ 8}-\frac{1}{(2\ln 2)n}\Big{)}n\), where \(A^{\prime}\) is a copy of \(A\). Let \(M:T\rightarrow\mathcal{P}(AB)\) be the measurement Bob's device uses to get the guess of \(T\). Then, by the entropic uncertainty relation, we must have either
\[\begin{split}& H_{\min}(T|M(AB))_{\sigma}\geq\Big{(}-\lg\cos \frac{\pi}{8}-\frac{1}{(2\ln 2)n}\Big{)}\frac{n}{2}\qquad\text{or}\\ & H_{\min}(T^{\prime}|A^{\prime}TE)_{\sigma_{|(M(AB)=T)}}\geq \Big{(}-\lg\cos\frac{\pi}{8}-\frac{1}{(2\ln 2)n}\Big{)}\frac{n}{2}.\end{split} \tag{108}\]
In the former case, we have
\[\operatorname{Tr}\!\left(\sigma_{\wedge(\hat{T}=T)}\right)=\operatorname{Pr} \!\left[M(AB)=T\right]_{\sigma}\leq 2^{-\big{(}-\lg\cos\frac{\pi}{8}-\frac{1}{(2 \ln 2)n}\big{)}\frac{n}{2}}, \tag{109}\]
as \(\hat{T}=M(AB)\). In the latter case, by the error correction step, Eve holds \(E_{0}=A^{\prime}\hat{T}\operatorname{syn}(\hat{T}^{\prime})J\hat{T}^{\prime}_{J}E\) and thus, making use of Lemma 6.12
\[\begin{split} H_{\min}(T^{\prime}|A^{\prime}\hat{T}\operatorname{syn}(\hat{T}^{\prime})J\bar{T}^{\prime}_{J}E)_{\sigma_{|(\hat{T}=T)\wedge(\hat{T}^{\prime}_{J}=\bar{T}^{\prime}_{J})}}&\geq H_{\min}(T^{\prime}|A^{\prime}\hat{T}\operatorname{syn}(\hat{T}^{\prime})J\hat{T}^{\prime}_{J}\bar{T}^{\prime}_{J}E)_{\sigma_{|(\hat{T}=T)}}\\ &\geq H_{\min}(T^{\prime}|A^{\prime}TE)_{\sigma_{|(M(AB)=T)}}-s-2\eta\frac{n}{2}\\ &\geq\Big{(}-\lg\cos\frac{\pi}{8}-\frac{2s}{n}-2\eta-\frac{1}{(2\ln 2)n}\Big{)}\frac{n}{2}.\end{split} \tag{110}\]
Next, as Eve has access to the syndrome \(\operatorname{syn}(\hat{t}^{\prime})\), her probability of guessing \(t^{\prime}\) is equal to that of guessing \(\bar{t}^{\prime}\), giving \(H_{\min}(\bar{T}^{\prime}|E_{0})_{\sigma_{|(\hat{T}=T)\wedge(\hat{T}^{\prime}_{J}=\bar{T}^{\prime}_{J})}}\,\geq\,\Big{(}-\lg\cos\frac{\pi}{8}-\frac{2s}{n}-2\eta-\frac{1}{(2\ln 2)n}\Big{)}\frac{n}{2}\). By hypothesis on the strong extractor, we have that
\[\|\sigma_{e(\bar{T}^{\prime},R)RE_{0}|(\hat{T}=T)\wedge(\hat{T}^{\prime}_{J} =\bar{T}^{\prime}_{J})}-\mu_{Z}\otimes\mu_{R}\otimes\sigma_{E_{0}|(\hat{T}=T) \wedge(\hat{T}^{\prime}_{J}=\bar{T}^{\prime}_{J})}\|_{\operatorname{Tr}}\leq\varepsilon, \tag{111}\]
where the register \(Z=\mathbb{Z}_{2}^{\ell}\). Before passing to the information reconciliation step, we combine the two cases. Writing \(\varepsilon^{*}=\max\,\,\{\varepsilon,2^{-\big{(}-\lg\cos\frac{\pi}{8}-\frac{ 1}{(2\ln 2)n}\big{)}\frac{n}{2}}\}\), we get
\[\begin{split}&\|\sigma_{e(\bar{T}^{\prime},R)RE_{0}\wedge(\hat{T}=T \wedge\hat{T}^{\prime}_{J}=\bar{T}^{\prime}_{J})}-\mu_{Z}\otimes\mu_{R}\otimes \sigma_{E_{0}\wedge(\hat{T}=T\wedge\hat{T}^{\prime}_{J}=\bar{T}^{\prime}_{J})} \|_{\operatorname{Tr}}\\ &\qquad=\operatorname{Tr}(\sigma_{\wedge(\hat{T}=T)})\|\sigma_{e( \bar{T}^{\prime},R)RE_{0}|(\hat{T}=T)\wedge(\hat{T}^{\prime}_{J}=\bar{T}^{ \prime}_{J})}-\mu_{Z}\otimes\mu_{R}\otimes\sigma_{E_{0}|(\hat{T}=T)\wedge(\hat{ T}^{\prime}_{J}=\bar{T}^{\prime}_{J})}\|_{\operatorname{Tr}}\leq\varepsilon^{*}. \end{split} \tag{112}\]
Now, we can pass to the real state \(\rho\). Using Lemma 6.11 with \(X=T\),
\[\begin{split}&\|\rho_{e(\bar{T}^{\prime},R)RE_{0}\wedge(d(\hat{T},T )\leq\gamma n/2\wedge\hat{T}^{\prime}_{J}=\bar{T}^{\prime}_{J})}-\mu_{Z}\otimes \mu_{R}\otimes\rho_{E_{0}\wedge(d(\hat{T},T)\leq\gamma n/2\wedge\hat{T}^{\prime }_{J}=\bar{T}^{\prime}_{J})}\|_{\operatorname{Tr}}\\ &\qquad=|U|\|\sigma_{e(\bar{T}^{\prime},R)RE_{0}\wedge(\hat{T}=T \wedge\hat{T}^{\prime}_{J}=\bar{T}^{\prime}_{J})}-\mu_{Z}\otimes\mu_{R} \otimes\sigma_{E_{0}\wedge(\hat{T}=T\wedge\hat{T}^{\prime}_{J}=\bar{T}^{\prime}_ {J})}\|_{\operatorname{Tr}}\leq 2^{\frac{n}{2}h(\gamma)}\varepsilon^{*}.\end{split} \tag{113}\]
As the event \(F=1\) is equivalent to \(d(\hat{T},T)\leq\gamma n/2\wedge\hat{T}^{\prime}_{J}=\bar{T}^{\prime}_{J}\), this means
\[\|\rho_{e(\bar{T}^{\prime},R)RE_{0}\wedge(F=1)}-\mu_{K}\otimes\rho_{RE_{0} \wedge(F=1)}\|_{\operatorname{Tr}}\leq 2^{\frac{n}{2}h(\gamma)}\varepsilon^{*}. \tag{114}\]
Finally, as Eve's register at the end of the privacy amplification step is \(E^{\prime}=RE_{0}=RA^{\prime}\hat{T}\operatorname{syn}(\hat{T}^{\prime})J\hat{T}^{\prime}_{J}E\), we get the desired result \(\|\rho_{KE^{\prime}\wedge(F=1)}-\mu_{K}\otimes\rho_{E^{\prime}\wedge(F=1)}\|_{\operatorname{Tr}}\leq 2^{\frac{n}{2}h(\gamma)}\varepsilon^{*}\). |
2309.15378 | Adversarial Object Rearrangement in Constrained Environments with
Heterogeneous Graph Neural Networks | Adversarial object rearrangement in the real world (e.g., previously unseen
or oversized items in kitchens and stores) could benefit from understanding
task scenes, which inherently entail heterogeneous components such as current
objects, goal objects, and environmental constraints. The semantic
relationships among these components are distinct from each other and crucial
for multi-skilled robots to perform efficiently in everyday scenarios. We
propose a hierarchical robotic manipulation system that learns the underlying
relationships and maximizes the collaborative power of its diverse skills
(e.g., pick-place, push) for rearranging adversarial objects in constrained
environments. The high-level coordinator employs a heterogeneous graph neural
network (HetGNN), which reasons about the current objects, goal objects, and
environmental constraints; the low-level 3D Convolutional Neural Network-based
actors execute the action primitives. Our approach is trained entirely in
simulation, and achieved an average success rate of 87.88% and a planning cost
of 12.82 in real-world experiments, surpassing all baseline methods.
Supplementary material is available at
https://sites.google.com/umn.edu/versatile-rearrangement. | Xibai Lou, Houjian Yu, Ross Worobel, Yang Yang, Changhyun Choi | 2023-09-27T03:15:45Z | http://arxiv.org/abs/2309.15378v1 | Adversarial Object Rearrangement in Constrained Environments with Heterogeneous Graph Neural Networks
###### Abstract
Adversarial object rearrangement in the real world (e.g., previously unseen or oversized items in kitchens and stores) could benefit from understanding task scenes, which inherently entail heterogeneous components such as current objects, goal objects, and environmental constraints. The semantic relationships among these components are distinct from each other and crucial for multi-skilled robots to perform efficiently in everyday scenarios. We propose a hierarchical robotic manipulation system that learns the underlying relationships and maximizes the collaborative power of its diverse skills (e.g., pick-place, push) for rearranging adversarial objects in constrained environments. The high-level coordinator employs a heterogeneous graph neural network (HetGNN), which reasons about the current objects, goal objects, and environmental constraints; the low-level 3D Convolutional Neural Network-based actors execute the action primitives. Our approach is trained entirely in simulation, and achieved an average success rate of 87.88% and a planning cost of 12.82 in real-world experiments, surpassing all baseline methods. Supplementary material is available at [https://sites.google.com/umn.edu/versatile-rearrangement](https://sites.google.com/umn.edu/versatile-rearrangement).
Deep Learning in Grasping and Manipulation, Perception for Grasping and Manipulation
## I Introduction
Real-world robots typically operate in highly structured environments rather than everyday scenarios that contain adversarial objects (e.g., previously unseen or oversized items) and complex constraints (e.g., boxes, shelves, etc.). While a factory robot simply transfers identical items on a belt drive, a domestic robot tasked with rearranging a pantry may frequently encounter oversized containers on shelves. As illustrated in Fig. 1, real-world object rearrangement tasks are inherently heterogeneous, consisting of current objects, goal objects, and environmental constraints. The semantic relationships among these components (e.g., the "meat can" on the "ground" has a goal location on the "shelf") contain essential information for efficiently completing the task. Robots that understand and utilize such knowledge are more likely to succeed in the real world, where adversarial objects and various environmental constraints are ubiquitous.
The object rearrangement problem has traditionally been addressed with model-based task and motion planning (TAMP) [1], which often assumes a fully observable environment and is thus difficult to scale to previously unseen scenarios [2, 3, 4]. Recent deep learning-based approaches can generalize to novel objects, owing to the advances in perception and grasping models [5, 6]. However, they typically assume no environmental constraints (e.g., an open tabletop) [7, 8, 4] or rely on iterative collision checking [9], which limits their generalizability in the real world. Additionally, most existing works focus on graspable objects and separately study pick-place or pushing. Although some have investigated both [7, 8], they only push to facilitate grasping [8] or employ specialized tools [7]. Relatively few have explored coordinating low-cost pushing, which may be limited by the environment, with pick-place to improve the robot's capability and efficiency. Therefore, the problem of rearranging adversarial objects with multiple skills in constrained environments remains unsolved.
To address this challenge, we propose to learn from the heterogeneous task components and exploit the distinct
semantic relationships among them. We devise an adversarial object rearrangement system that utilizes both pushing and grasping to maximize the robot's efficiency and generalize to novel constrained environments. Our hierarchical approach represents a task as a heterogeneous graph over a pair of current and goal RGB-D images, which are segmented into objects and environmental constraints. At the high level, a heterogeneous graph neural network [10] (HetGNN)-based coordinator reasons about the graph and its underlying semantic information and predicts the optimal action primitive and next target, such that the goal configuration can be successfully achieved by the low-level actors with minimal planning costs. The system operates in a closed-loop fashion, continually re-observing the scene at each time step to predict more accurate rearrangement plans.

Fig. 1: In this adversarial object rearrangement task, the color-coded heterogeneous task components (e.g., current objects, goal objects, and environmental constraints) are linked by different semantic relationships that are crucial to efficiently guiding a multi-skilled robot. By understanding that the "bowl" and the "shelf" are related by "on", a robot will swiftly push it to the nearby goal and clear space for the "meat can", which requires pick-place to move from "ground" to "shelf".
We experiment in both simulated and real-world environments. Our approach achieves, on average, an 87.88% success rate with 12.82 actions in real-world tests, outperforming several baselines by large margins. To the best of our knowledge, this is the first approach that utilizes a HetGNN to coordinate robot skills for rearranging adversarial objects in constrained environments. The main contributions of this paper are as follows:
* We propose a hierarchical pushing and grasping robotic system that addresses adversarial object rearrangement problems in constrained environments. By leveraging the semantic relationships in the task, the high-level coordinator guides the 3D CNN-based low-level actors to perform more efficiently.
* Our approach represents the rearrangement task as a heterogeneous graph and exploits the power of a HetGNN to reason about the underlying relationships among the task components. It learns from an expert planner in simulation and predicts the next target and action end-to-end.
* While previous approaches often assume an open workspace or use hard-coded solutions, our method learns to adapt to complex environments, where previously unseen constraints could significantly limit existing works.
## II Related Work
Object rearrangement is an essential challenge in robotics and embodied AI [11]. The problem is commonly studied under the broad subject of task and motion planning (TAMP) [1], which is often formulated hierarchically with a high-level task planner (i.e., which action to perform on which item) and a low-level motion planner (i.e., how to move the end-effector) such that the goals can be achieved [2, 11]. Typical TAMP approaches are model-based and often rely on task-specific knowledge and accurate 3D models of the environment [2, 3, 4]. Hence, they often do not generalize well to the real world, where the required information may not be accessible.
Recent works have equipped classical TAMP with deep learning-based perception [12, 13] and grasping models [14, 15, 16] to generalize to novel objects [6, 5, 17, 8]. However, many researchers focus exclusively on pick-place [5, 18, 19, 17], largely limiting the robot's capability in the real world where objects are frequently not graspable (e.g., large items with a parallel-jaw gripper or cloths with a suction gripper). To rearrange more adversarial objects, non-prehensile action primitives such as pushing are needed. Inspired by [20], Tang et al. [8] use pushing to facilitate grasping by breaking the clutter, but not to rearrange adversarial objects. While [7] sorts large-scale basic cuboids with both pushing and grasping, they build a specialized end of arm tooling (EOAT) for pushing. Transporter [21] and TRLB [4] bypass the challenge with suction mechanisms.
Long-horizon planning for object rearrangement has been studied analytically with Rapidly Exploring Random Tree (RRT) [22] or Monte Carlo Tree Search (MCTS) [23], which explores multiple future possibilities but is less robust to noise and occlusions. PlaNet [24] addresses the partial observability issue with a learned forward dynamics model and plans actions in latent space. Similarly, Visual Robot Task Planning [25] learns to encode the scene into a latent representation and then uses tree search for planning in this latent space. Both works are task-specific in simulation and highly likely require large demonstration data to generalize to a real robot. Other researchers have leveraged spatial relations for planning [18, 19]. Liu et al. [19] take language as an input that specifies the goal configuration and then employ Transformers [26] to translate the spatial relations into a sequence of pick-place instructions. Our approach conveniently uses a single imperfect RGB-D image to specify the goal and directly transfer to the real world.
Prior robotics research has investigated Graph Neural Networks [27, 28] in object rearrangement problems [29, 5, 8]. Closely related to our work, NeRP [5] employs a high-level object selection module with k-GNNs that plan for rearranging novel objects with pick-place. However, they are limited to graspable objects on an open tabletop, not considering any environmental constraints. Tang et al. [8] compare the Graph Edit Distance (GED) between the start and goal scene graphs and plan for selective object rearrangement of multiple objects, but also assume a simplified environment. In constrained environments, existing works typically assume a constant structure [30] and rely on iterative collision checking [9], which is computationally expensive and often suffers from noise and occlusion in the real world. These methods are not as generalizable as ours, as we employ a novel HetGNN-based [10] coordinator that exploits the semantic relationships among heterogeneous components in the task and significantly improves the robot's efficiency.
## III Problem Formulation
We aim to design an efficient robotic manipulation system that addresses adversarial object rearrangement problems in unstructured real-world environments, where environmental constraints could heavily influence the robot's behavior. We formulate the problem as follows:
**Definition 1**.: _Given a goal image \(I_{T}\) describing a desired object configuration in a constrained environment, the goal of the rearrangement task is to apply a sequence of manipulation actions on the current objects to achieve the
goal configuration where every object is within \(\tau\) of its corresponding goal location in 3D space._
In our experiments, we use \(\tau=3\,\mathrm{cm}\). The constrained environments considered in this work are defined as
**Definition 2**.: _Constrained environments include geometric constraints such that certain manipulation actions are not always feasible (e.g., pushing an object across height discontinuities)._
We make the following assumptions about robot skills and objects:
**Assumption 1**.: _The robot is capable of pick, place, move, and push. pick-place is a sequence of pick, move, and place, while push requires a single move._
**Assumption 2**.: _The adversarial objects are possibly unknown (i.e., novel objects) to the robot and may not be graspable (e.g., object dimension is larger than the maximum opening of the robot's end effector) for which only push is applicable._
Let \(\mathcal{O}^{t}=\{o_{1}^{t},o_{2}^{t},\cdots,o_{N}^{t}\}\) and \(\mathcal{O}^{T}=\{o_{1}^{T},o_{2}^{T},\cdots,o_{N}^{T}\}\) denote the set of objects in the current scene and the goal scene, respectively. The robot action \(a\in\{\text{pick-place},\text{push}\}\) for a selected object \(o_{i}^{t}\in\mathcal{O}^{t}\) is subject to a binary-valued metric \(\mathcal{S}_{a}(o_{i}^{t},o_{i}^{T},\mathcal{C})\in\{\text{0,1}\}\) where \(\mathcal{C}=\{c_{1},c_{2},\cdots,c_{N}\}\) denotes the set of environmental constraints in the scene. \(\mathcal{S}_{a}=1\) indicates that push is more effective for the selected object at time \(t\), whereas \(\mathcal{S}_{a}=0\) represents that pick-place is more effective. When both actions are applicable, the robot performs push (i.e., \(\mathcal{S}_{a}=1\)) since it costs fewer actions.
To reason about the relationships between the objects \(\mathcal{O}^{t},\mathcal{O}^{T}\), and the constrained environment \(\mathcal{C}\), we employ a heterogeneous graph representation \(\mathcal{G}\). A heterogeneous graph \(\mathcal{G}(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}\) and \(\mathcal{E}\) represent the set of nodes and edges, respectively, is associated with node and edge type mapping functions \(\phi\colon\mathcal{V}\rightarrow\mathcal{F}\) and \(\psi\colon\mathcal{E}\rightarrow\mathcal{R}\), where \(\mathcal{F}\) is the set of node types (e.g., "current", "goal", and "environment") and \(\mathcal{R}\) is the set of edge types describing spatial relations (e.g., a "goal" node is "in" a "box"). The heterogeneous graph \(\mathcal{G}\) is constructed from a pair of RGB-D observations of current and goal configurations \((I_{t},I_{T})\). We would like to learn a high-level coordinator that predicts a selection probability \(p_{o}(\mathcal{C},\mathcal{O}^{t},\mathcal{O}^{T})\) for each object such that the goal configuration can be achieved with the least number of actions by rearranging the most feasible object. The coordinator should simultaneously learn to select the appropriate action for such targets. Specifically, the action probability \(p_{a}(\mathcal{C},\mathcal{O}^{t},\mathcal{O}^{T})=p_{push}(\mathcal{C}, \mathcal{O}^{t},\mathcal{O}^{T})=Pr(\mathcal{S}_{a}=1|\mathcal{G}(\mathcal{V},\mathcal{E}))\); hence the pick-place probability \(p_{pick}(\mathcal{C},\mathcal{O}^{t},\mathcal{O}^{T})=Pr(\mathcal{S}_{a}=0| \mathcal{G}(\mathcal{V},\mathcal{E}))=1-p_{push}\).
## IV Proposed Approach
This section describes the proposed adversarial object rearrangement system that coordinates pick-place and push in constrained environments. To address the exploration challenge in long-horizon problems, our approach takes advantage of the hierarchical structure and uses a high-level coordinator in conjunction with low-level actors to guide the robot at each time step \(t\). The goal configuration at time \(T\) is given as a reference RGB-D image \(I_{T}\). Given the current observation of the scene \(I_{t}\), the HetGNN-based coordinator reasons about the underlying relationships in the heterogeneous graph and simultaneously predicts which object should be prioritized and how to move it, such that the goal can be achieved efficiently. The overview of the approach is described in Fig. 2, and the algorithm is delineated in Algorithm 1.
### _Object Matching_
Given the goal configuration specified by an RGB-D image \(I_{T}\), the object matching module finds each object's correspondence in the current observation \(I_{t}\). We first obtain the instance masks \(\mathcal{M}_{T}\) of \(N\) objects in \(I_{T}\) using the SAG [13], an object instance segmentation method with active robotic manipulation. Next, we encode each object's RGB-D cropping from \(\mathcal{M}_{T}\) into a feature vector \(\mathbf{h}_{i}\in\mathbb{R}^{10}\) using a Siamese network [31]. The network is trained with contrastive loss such that the L2 distance in the latent space is close for the same objects and far for different ones [32],
[33]. The set \(\mathcal{H}_{T}=\{\mathbf{h}_{1}^{T},\mathbf{h}_{2}^{T},\cdots,\mathbf{h}_{N}^{T}\}\) represents the features of objects in the goal configuration. At the current time step \(t\), we follow the same procedure to extract the feature set \(\mathcal{H}_{t}\) of current objects. The L2 distance between each element in \(\mathcal{H}_{t}\) and each element in \(\mathcal{H}_{T}\) is calculated, and the current-goal correspondence \(\mathbf{c}\in\mathbb{R}^{N\times 2}\) is established by associating each goal object to the one with the smallest L2 distance in the current time \(t\).

Fig. 2: The current RGB-D image \(I_{t}\) and goal \(I_{T}\) are fed into the graph constructor, which encodes the heterogeneous task components (color-coded) into node embeddings with pre-trained 3D encoders. Then the HetGNN updates the embeddings based on its learned parameters, and the high-level coordinator predicts the object selection score \(p_{o}\) and the action selection score \(p_{a}\) for each object. We select the object with the highest \(p_{o}\) as the target and decide which action to execute based on \(p_{a}\). Finally, we feed the decision to the low-level actors, which are responsible for performing the robot's actions. The closed-loop system will run until the goal configuration is achieved or the maximum number of steps is reached.
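A minimal sketch of this correspondence step (the array names and shapes are ours; the actual module operates on the Siamese embeddings \(\mathcal{H}_{t}\) and \(\mathcal{H}_{T}\)):

```python
import numpy as np

def match_objects(H_t: np.ndarray, H_T: np.ndarray) -> np.ndarray:
    """Associate each goal object with its nearest current object in
    the Siamese embedding space (rows are 10-d feature vectors)."""
    # Pairwise L2 distances, shape (N_goal, N_current).
    d = np.linalg.norm(H_T[:, None, :] - H_t[None, :, :], axis=-1)
    nearest = d.argmin(axis=1)
    # Correspondence c: one (current index, goal index) pair per object.
    return np.stack([nearest, np.arange(len(H_T))], axis=1)

H_t = np.random.randn(5, 10)                     # current embeddings
H_T = H_t[::-1] + 0.01 * np.random.randn(5, 10)  # goal embeddings
print(match_objects(H_t, H_T))                   # recovers the reversal
```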
### _Constructing Heterogeneous Graph_
The high-level coordinator is based on a HetGNN, which exploits the heterogeneity and the underlying semantic information in the input heterogeneous graph. To construct a graph that can efficiently capture the information, we consider three different node types: current objects \(\mathcal{O}^{t}\), goal objects \(\mathcal{O}^{T}\), and the environmental constraints \(\mathcal{C}\). Unlike the traditional homogeneous graphs, the relationships between these nodes are represented by a set of heterogeneous edge types, which could be semantically interpreted (e.g., the edge between "current objects" nodes and "constraints" nodes representing the "in" relationship, the edge between "current objects" nodes and "goal object" nodes representing the "to" relationship).
The heterogeneous graph is illustrated in Fig. 2. The nodes \(\mathcal{V}\) include current nodes \(\mathbf{v}^{t}\), goal nodes \(\mathbf{v}^{T}\), and constraint nodes \(\mathbf{v}^{c}\), representing the different types of heterogeneous task components. The graph connectivity contains two fully-connected sub-graphs, one for current nodes and one for goal nodes. Each current node is also connected to its corresponding goal node, specified by the current-goal correspondence \(\mathbf{c}\). Each constraint node is individually connected to every object node to propagate the influence of the environmental constraints. Each node embedding is extracted from the geometric shape of the object or environment. Specifically, the point clouds of the current objects \(\mathcal{P}_{t}\), goal objects \(\mathcal{P}_{T}\), and constraints \(\mathcal{P}_{c}\) are obtained through back-projection, and transformed into voxel grids \(V_{t}\), \(V_{T}\), and \(V_{c}\), respectively. We then encode the voxel grids into geometric features \(\mathbf{x}_{t}\), \(\mathbf{x}_{T}\), and \(\mathbf{x}_{c}\) using a 3D encoder \(E_{\phi}\): Conv3D(1, 32, 5) \(\rightarrow\) ELU \(\rightarrow\) Maxpool(2) \(\rightarrow\) Conv3D(32, 32, 3) \(\rightarrow\) ELU \(\rightarrow\) Maxpool(2) \(\rightarrow\) FC(\(32\times 6\times 6\times 6\), 12). The encoder is taken from a pretrained 3D Convolutional Autoencoder, whose latent features effectively represent the shape of the input object. Finally, each node embedding is concatenated with the object's location \(\mathbf{z}\in\mathbb{R}^{3}\).
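A sketch of this encoder in PyTorch; the \(32^{3}\) input voxel resolution is our assumption, inferred from the FC layer's \(32\times 6\times 6\times 6\) input size:

```python
import torch
from torch import nn

class VoxelEncoder(nn.Module):
    """3D encoder E_phi following the paper's layer spec; assumes 32^3
    input voxel grids so that the flattened size is 32 x 6 x 6 x 6."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=5),   # 32^3 -> 28^3
            nn.ELU(),
            nn.MaxPool3d(2),                   # 28^3 -> 14^3
            nn.Conv3d(32, 32, kernel_size=3),  # 14^3 -> 12^3
            nn.ELU(),
            nn.MaxPool3d(2),                   # 12^3 -> 6^3
            nn.Flatten(),
            nn.Linear(32 * 6 * 6 * 6, 12),
        )

    def forward(self, voxels):                 # (B, 1, 32, 32, 32)
        return self.net(voxels)                # (B, 12) geometric feature

enc = VoxelEncoder()
print(enc(torch.randn(2, 1, 32, 32, 32)).shape)  # torch.Size([2, 12])
```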
### _HetGNN-based Coordinator_
GNNs are effective in discovering underlying relationships among nodes by learning a non-linear function \(\mathcal{F}\), which encodes a graph \(\mathcal{G}\) to \(\mathcal{G}^{\prime}\) with updated node and edge features [34]. We start with the base homogeneous Graph Attention Networks (GAT) [35]. The message-passing function, parameterized by a weight matrix \(\mathbf{\Theta}\) and attention coefficients \(\alpha_{i,j}\), for updating latent features \(\mathbf{x}_{i}\) of node \(\mathbf{v}_{i}\) is defined as
\[\mathbf{x}_{i}^{\prime}=\alpha_{i,i}\mathbf{\Theta}\mathbf{x}_{i}+\sum_{j\in \mathcal{N}(i)}\alpha_{i,j}\mathbf{\Theta}\mathbf{x}_{j} \tag{1}\]
where the attention coefficients \(\alpha_{i,j}\) are computed by
\[\alpha_{i,j}=\frac{\exp\left(\sigma\left(\mathbf{a}^{\top}[\mathbf{\Theta} \mathbf{x}_{i}\,\|\,\mathbf{\Theta}\mathbf{x}_{j}]\right)\right)}{\sum_{k\in \mathcal{N}(i)\cup\{i\}}\exp\left(\sigma\left(\mathbf{a}^{\top}[\mathbf{\Theta }\mathbf{x}_{i}\,\|\,\mathbf{\Theta}\mathbf{x}_{k}]\right)\right)} \tag{2}\]
The \(\mathbf{a}\) is the learned weight matrix of the attention mechanism, \(\mathcal{N}(i)\) is the neighbors of \(\mathbf{v}_{i}\), and \(\sigma=LeakyReLU(\cdot)\).
Note that homogeneous graph neural networks cannot differentiate between different types of nodes and edges; they lack a mechanism to effectively harness the heterogeneous information. To exploit the semantic relationships among the heterogeneous task components, we adopt the approach in [10], which introduces heterogeneity to the homogeneous GNN by dedicating an individual message passing function to each edge type, as shown in Fig. 3. Given a heterogeneous graph \(\mathcal{G}(\mathcal{V},\mathcal{E})\), the network aggregates node embeddings by using the message passing functions corresponding to the active edge types, which are determined by the types of the connected nodes. For instance, the edge between current nodes \(\mathbf{v}_{t}\) and goal nodes \(\mathbf{v}_{T}\) belongs to a "current-to-goal" edge type. The HetGNN includes three graph attention convolutional layers to ensure effective learning of the underlying relational information. After the node embeddings are updated by the HetGNN, two Multi-Layer-Perceptron (MLP)-based prediction heads, an object selector \(\psi_{o}:\mathcal{V}\rightarrow\mathcal{O}\) and an action selector \(\psi_{a}:\mathcal{V}\rightarrow\mathcal{A}\), are connected to \(\mathbf{x}_{t}\) to estimate which action should be performed on which object.

Fig. 3: The HetGNN network takes as input graphs of heterogeneous node types (e.g., \(\mathbf{x}_{T},\mathbf{x}_{t},\mathbf{x}_{c}\)). The message passing functions are duplicated for each edge type to update the weights for different relationships. Finally, the scores \(p_{a}\) and \(p_{o}\) are derived from the updated node features \(\mathbf{x}_{T}^{\prime},\mathbf{x}_{t}^{\prime},\mathbf{x}_{c}\).
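The per-edge-type message passing can be sketched with PyTorch Geometric's `HeteroConv`, which wraps one `GATConv` per edge type; the type names, widths, and head structure below are illustrative rather than the authors' exact configuration:

```python
import torch
from torch import nn
from torch_geometric.nn import HeteroConv, GATConv

# Illustrative edge types: intra-set edges, current-to-goal correspondence
# edges, and constraint edges (with reverse edges so every node type
# receives messages).
edge_types = [
    ('current', 'near', 'current'), ('goal', 'near', 'goal'),
    ('current', 'to', 'goal'), ('goal', 'rev_to', 'current'),
    ('constraint', 'in', 'current'), ('current', 'rev_in', 'constraint'),
    ('constraint', 'in', 'goal'), ('goal', 'rev_in', 'constraint'),
]

def hetero_layer(out_dim):
    # One GATConv per edge type; lazy (-1, -1) input sizes are inferred
    # at the first forward pass.
    return HeteroConv({et: GATConv((-1, -1), out_dim, add_self_loops=False)
                       for et in edge_types}, aggr='sum')

class Coordinator(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.convs = nn.ModuleList([hetero_layer(hidden) for _ in range(3)])
        self.object_head = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                         nn.Linear(hidden, 1))
        self.action_head = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                         nn.Linear(hidden, 1))

    def forward(self, x_dict, edge_index_dict):
        for conv in self.convs:
            x_dict = {k: x.relu() for k, x in conv(x_dict, edge_index_dict).items()}
        h = x_dict['current']                      # updated current-node features
        p_o = self.object_head(h).squeeze(-1)      # object selection scores
        p_a = torch.sigmoid(self.action_head(h)).squeeze(-1)  # push probability
        return p_o, p_a
```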
### _Low-level Actors_
The low-level actors are responsible for executing the actions decided by the high-level coordinator. If pick-place is selected, we first generate a batch of grasp candidates following the shape completion-based sampling algorithm in [36]. Next, we use the Grasp Stability Predictor (GSP)1, a 3D CNN-based 6-DoF grasp detection algorithm, to select a feasible pose for grasping. After the target object has been successfully grasped, we place the object to its corresponding goal location by checking the current-goal correspondence \(\mathbf{c}\) calculated in IV-A. If push is the more effective action for the target, we plan for a direct pushing path while checking collisions using the flexible collision library (FCL) [37]. The robot closes its fingers and follows a straight path, which is divided into multiple short segments of fixed length by the intermediate waypoints. Then we use the mean square error (MSE) between the object's voxel grids and goal location to supervise a simplified model predictive control loop.
Footnote 1: Note that the acronym GSP refers to the 3D CNN grasping module in [14], defined here for a concise reference.
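A sketch of the waypoint generation for the straight-line push; the 2 cm segment length is an assumed value, since the paper only states that the segments have a fixed length:

```python
import numpy as np

def push_waypoints(start_xy, goal_xy, seg_len=0.02):
    """Split a straight push path into fixed-length segments (meters)."""
    start = np.asarray(start_xy, dtype=float)
    goal = np.asarray(goal_xy, dtype=float)
    dist = np.linalg.norm(goal - start)
    n_seg = max(1, int(np.ceil(dist / seg_len)))
    ts = np.linspace(0.0, 1.0, n_seg + 1)
    return start + ts[:, None] * (goal - start)

# A 10 cm push -> 5 segments, 6 waypoints including the endpoints.
print(push_waypoints([0.40, 0.00], [0.40, 0.10]))
```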
### _Expert Planner and Training_
To obtain training data, we generated 3,000 RGB-D images of randomly positioned training objects (e.g., toy blocks and cylinders of different sizes) in environments with arbitrary constraints (e.g., bins, shelves). Then, two RGB-D images are randomly sampled as the start and goal configurations for a rearrangement task. For all rearrangement tasks, we define a pick-place cost of **3** and a push cost of **1** following Assumption 1.
The training labels, i.e., the action selection labels and object selection labels, are automatically annotated by an expert planner built in a fully observable simulator. First, the expert planner examines each object's mesh model and reasons about which action primitive to use. It relies on two criteria: 1) whether the object is graspable and 2) whether a direct pushing path exists between the current and goal location. The binary action selection label is 1 for push if the object is not graspable or a direct pushing path exists, and 0 for pick-place if the object is graspable and no direct pushing path exists. We assume there are no invalid tasks such as moving an ungraspable object across a discontinuous path (e.g., moving a large plate from the table onto the shelf). Then, based on the actions assigned to the objects, the expert planner computes the optimal planning solution analytically using the \(A^{*}\) algorithm, which globally minimizes the predefined operating cost function by computing all possible planning sequences. For an ungraspable object without a direct pushing path, we assign an action cost of 3 because it requires multiple pushing actions. An infinite heuristic cost is associated with a planning sequence if the goal location is blocked by other objects, as determined by FCL. The binary object selection label is 1 if the object is the first in a sequence planned by \(A^{*}\) and 0 otherwise. The dataset contains 30,000 pairs of current and goal RGB-D images and labels, which are transformed into heterogeneous graphs using the methods described in Sec. IV-B.
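The action-labeling rule reduces to a small predicate; a sketch, where `graspable` and `direct_push_path` stand in for the simulator's mesh-based tests:

```python
def action_label(graspable: bool, direct_push_path: bool) -> int:
    """1 = push, 0 = pick-place, following the expert planner's rule."""
    return 1 if (not graspable or direct_push_path) else 0

def action_cost(graspable: bool, direct_push_path: bool) -> int:
    """Costs follow Assumption 1, with cost 3 for an ungraspable object
    that has no direct pushing path (multiple pushes are needed)."""
    if direct_push_path:
        return 1            # a single push
    return 3                # pick-place, or repeated pushes

print(action_label(True, False), action_cost(True, False))    # 0 3
print(action_label(False, False), action_cost(False, False))  # 1 3
```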
During training, we use a binary cross-entropy loss \(\mathcal{L}_{action}\) to supervise the action predictions:
\[\mathcal{L}_{action}=-(y\log(p)+(1-y)\log(1-p)). \tag{3}\]
The Huber loss \(\mathcal{L}_{object}\) for the object prediction head output \(\hat{y}\) is defined as
\[\mathcal{L}_{object}=\begin{cases}\frac{1}{2}(y-\hat{y})^{2}&\text{if}\ \ |(y-\hat{y})|<\delta\\ \delta((y-\hat{y})-\frac{1}{2}\delta)&\text{otherwise}\end{cases} \tag{4}\]
where \(\delta=1.15\). The combined loss \(\mathcal{L}\) is defined as
\[\mathcal{L}=\mathcal{L}_{object}+\lambda\mathcal{L}_{action}. \tag{5}\]
We empirically found that \(\lambda=0.65\) yields the best performance for our problem.
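A sketch of the combined objective in PyTorch; the head outputs are assumed to be a scalar score per object and a push probability per object:

```python
import torch
from torch import nn

bce = nn.BCELoss()                  # action head: binary cross-entropy
huber = nn.HuberLoss(delta=1.15)    # object head: Huber with delta = 1.15

def combined_loss(obj_pred, obj_label, act_prob, act_label, lam=0.65):
    return huber(obj_pred, obj_label) + lam * bce(act_prob, act_label)

obj_pred = torch.rand(6)
obj_label = torch.tensor([1., 0., 0., 0., 0., 0.])
act_prob = torch.rand(6).clamp(1e-3, 1 - 1e-3)  # e.g. sigmoid outputs
act_label = torch.tensor([1., 1., 0., 0., 1., 0.])
print(combined_loss(obj_pred, obj_label, act_prob, act_label))
```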
## V Experiments
We experiment in both simulated and real-world settings. These experiments are designed to: 1) demonstrate the effectiveness of our hierarchical system for the adversarial object rearrangement problem; 2) evaluate our HetGNN-based coordinator in various constrained environments and compare it to other baselines; and 3) show the generalizability of our approach to unstructured everyday scenarios.

Fig. 4: The testing objects are drawn from the YCB dataset and differ in size, color, and shape from our basic training objects (e.g., blocks, cylinders). Some objects, such as bowls and cracker boxes, may not be graspable due to their orientation.

**Evaluation metrics**: Following Definition 1, we define the _success rate_ as \(\frac{\#\text{ of successful rearrangements}}{\#\text{ of total rearrangement problems}}\). If a given rearrangement is not achievable (e.g., lifting a non-graspable object from the ground to the upper shelf), the experiment will be re-initialized. Each test is limited to \(2\times N\) planning steps, where \(N\) is the number of objects. A timeout is also considered a failure. We also consider the _planning cost_, which measures the number of actions taken to rearrange the objects from the start to the goal configuration. Each push costs **1** action and each pick-place costs **3** actions, following Assumption 1. Because pick-place-only approaches cannot work in our settings due to the non-graspable objects, we instead compare our method with the following four baselines and an expert upper bound:
* **Model** is a model-based approach that assumes access to ground truth IDs and mesh models. It randomly selects a target and checks if its corresponding goal location is available. If that location is occupied, it will push the occupying object to an arbitrary free space. Otherwise, it moves the object using the expert action selection algorithm described in Sec. IV-E.
* **Plan** is a variant of the expert planner in Sec. IV-E. It combines an optimal planner with a deep learning-based perception module [13]. Instead of using 3D models, this classical approach is based on segmentation masks and plans the entire action sequence with the \(A^{*}\) algorithm, which globally minimizes the cost function.
* **GNN** employs a Homogeneous Graph Neural Network instead of HetGNN for the coordinator. The network is trained with the same dataset as ours, except that the heterogeneous structure is not used. All other components are kept the same as in our approach.
* **NeRP+Push** builds upon a recent state-of-the-art object rearrangement approach [5]. It learns a k-GNN-based planner that selects a near-optimal policy and uses pick-place for unknown object rearrangement. To adapt to our test environments, we allow NeRP to heuristically push ungraspable objects when the object mask is larger than a threshold.
* **Expert** is our expert planner in Sec. IV-E whose performance is regarded as the upper bound of each scenario. It computes the optimal solution but may fail due to unexpected object dynamics and imperfect low-level actors.
### _Simulation Experiments_
The simulated test environment is in CoppeliaSim 4.0 [38] with Bullet physics engine v2.83. The scene includes a Franka Emika Panda robot arm with the original gripper, different numbers of testing objects, and various environmental constraints. A single-view RGB-D observation is taken with a simulated Kinect camera.
**Experiment scenes:** The experiments are depicted in Fig. 5. We first test in the open _Tabletop_ scenario, for which the baseline methods were designed, to verify that our approach is efficient in the simple environment. Each scene contains five to seven objects from Fig. 4 to demonstrate that our approach is generalizable to different numbers of objects (i.e., clutteredness). Each method is tested 51 times, and the success rate and planning cost are compiled in Tables I and II, respectively. Then we experiment with increasing the complexity of the environmental constraints: _Shelf_ demonstrates that our approach is able to efficiently solve multi-planar scenarios; _Bins_ is commonly seen in warehouses and introduces more partial occlusions; the additional shelves in _Pantry_ mimic a more realistic scene and show the generalizability to novel constraints, where analytical approaches often face difficulties. These experiments contain six YCB objects, and the results are compiled in Tables III and IV.
goal locations are occupied. _Plan_'s open-loop planner could not resolve a collision immediately, potentially causing more failures and requiring more actions to complete the task afterward. _GNN_ learns less effectively and makes predictions that are not as accurate as ours (e.g., selecting pick-place while push is feasible). _NeRP+Push_ only pushes when the item is not graspable, since it could not efficiently coordinate different skills. The results suggest that our approach is able to generalize to different numbers of adversarial objects and is the most efficient in cluttered scenes, thanks to the HetGNN coordinator and the closed-loop design.
Constrained experiments increase environmental discontinuities and clutteredness, necessitating more accurate reasoning about the relationships among task components. The privileged information available to _Model_ helps maintain its performance, while _GNN_ becomes worse since it has no knowledge of the relations between each component. This indicates that the information learned by HetGNN is crucial to efficient planning. _Plan_ depends on accurate object masks to calculate the true trajectory cost, and _NeRP+Push_ only considers objects' center locations. Consequently, their performance suffers from the additional challenge of constrained experiments. In contrast, our HetGNN-based coordinator that employs 3D shape features generalizes better with partial observations. Overall, we achieved an average success rate of 96.73%, surpassing the best-performing baseline by 7.2%, with a planning cost of only 14.50, which is the closest to _Expert_'s result.
### _Real-robot Experiments_
Our real-world experiment consists of a Franka Emika Panda robot arm with FESTO DHAS soft fingers and an Intel RealSense D415 camera that overlooks the workspace. We test each method 11 times in three real-world scenarios: _Bins_, _Shelf_, and _Novel_. _Model_ is excluded from the baseline methods because ground truth mesh models are not available in the real world. Five adversarial objects are first randomly placed in the scene as the goal configuration and then re-initialized to the start configuration. To address the sim-to-real gap of the RGB-D sensor, we fine-tuned the object-matching module with a dataset consisting of 500 real-world images.
Fig. 6 depicts a successful rearrangement task in the _Novel_ scenario. Due to the perception challenge and more complex object dynamics in the real world, the performance of all methods declines compared to the simulated results. However, ours drops much less, thanks to the learned model that is robust to noise and occlusion. _Plan_ and _NeRP+Push_ sometimes select the wrong action and falsely store objects when the goal locations are available because of inaccurate masks and noise. Collisions with occluded geometry also become more frequent in the real world, highlighting the importance of our closed-loop system. We summarize the experimental results in Tables V and VI. Our approach achieves an average success rate of 87.88% and completes the task with 12.82 actions, indicating that it is the most efficient and generalizable to novel office objects and constraints. Our method is limited by the analytical pushing algorithm, which may rotate large objects unexpectedly and incur additional costs if they collide with other objects.
TABLE V: Success Rate (%) in Real World

|       | Plan  | GNN   | NeRP+Push | Ours      |
| ----- | ----- | ----- | --------- | --------- |
| Shelf | 81.82 | 72.73 | **90.91** | **90.91** |
| Bins  | 72.73 | 63.64 | 72.73     | **81.82** |
| Novel | 63.64 | 72.73 | 81.82     | **90.91** |

TABLE VI: Planning Cost (# actions) in Real World

|       | Plan  | GNN   | NeRP+Push | Ours      |
| ----- | ----- | ----- | --------- | --------- |
| Shelf | 14.62 | 16.67 | 19.56     | **11.81** |
| Bins  | 19.13 | 17.88 | 23.83     | **14.34** |
| Novel | 20.20 | 17.37 | 21.21     | **12.31** |

Fig. 6: An example of real-world experiments in the _Novel_ scenario (11 actions). The HetGNN-based coordinator predicts the most feasible target and utilizes pick-place and push accordingly while the low-level actors execute the plan in closed-loop.

## VI Conclusion

We presented an object rearrangement system that coordinates pick-place and push in challenging scenarios with adversarial objects and environmental constraints. Our approach hierarchically employs a HetGNN coordinator and low-level 3D CNN-based actors to achieve the goal arrangement in an efficient manner. The proposed simulation-trained rearrangement system achieved an average success rate of 87.88% and a planning cost of 12.82 in real-world experiments with adversarial objects and environmental constraints. One avenue for future extension is to simultaneously learn the orientations of objects during placement, as we are currently focusing on arranging objects in terms of positions.
|
2309.12417 | Advances in developing deep neural networks for finding primary vertices
in proton-proton collisions at the LHC | We are studying the use of deep neural networks (DNNs) to identify and locate
primary vertices (PVs) in proton-proton collisions at the LHC. Earlier work
focused on finding primary vertices in simulated LHCb data using a hybrid
approach that started with kernel density estimators (KDEs) derived
heuristically from the ensemble of charged track parameters and predicted
"target histogram" proxies, from which the actual PV positions are extracted.
We have recently demonstrated that using a UNet architecture performs
indistinguishably from a "flat" convolutional neural network model. We have
developed an "end-to-end" tracks-to-hist DNN that predicts target histograms
directly from track parameters using simulated LHCb data that provides better
performance (a lower false positive rate for the same high efficiency) than the
best KDE-to-hists model studied. This DNN also provides better efficiency than
the default heuristic algorithm for the same low false positive rate.
"Quantization" of this model, using FP16 rather than FP32 arithmetic, degrades
its performance minimally. Reducing the number of UNet channels degrades
performance more substantially. We have demonstrated that the KDE-to-hists
algorithm developed for LHCb data can be adapted to ATLAS and ACTS data using
two variations of the UNet architecture. Within ATLAS/ACTS, these algorithms
have been validated against the standard vertex finder algorithm. Both
variations produce PV-finding efficiencies similar to that of the standard
algorithm and vertex-vertex separation resolutions that are significantly
better. | Simon Akar, Mohamed Elashri, Rocky Bala Garg, Elliott Kauffman, Michael Peters, Henry Schreiner, Michael Sokoloff, William Tepe, Lauren Tompkins | 2023-09-21T18:34:00Z | http://arxiv.org/abs/2309.12417v2 | Advances in developing deep neural networks for finding primary vertices in proton-proton collisions at the LHC
###### Abstract
We are studying the use of deep neural networks (DNNs) to identify and locate primary vertices (PVs) in proton-proton collisions at the LHC. Earlier work focused on finding primary vertices in simulated LHCb data using a hybrid approach that started with kernel density estimators (KDEs) derived heuristically from the ensemble of charged track parameters and predicted "target histogram" proxies, from which the actual PV positions are extracted. We have recently demonstrated that using a UNet architecture performs indistinguishably from a "flat" convolutional neural network model. We have developed an "end-to-end" tracks-to-hist DNN that predicts target histograms directly from track parameters using simulated LHCb data that provides better performance (a lower false positive rate for the same high efficiency) than the best KDE-to-hists model studied. This DNN also provides better efficiency than the default heuristic algorithm for the same low false positive rate. "Quantization" of this model, using FP16 rather than FP32 arithmetic, degrades its performance minimally. Reducing the number of UNet channels degrades performance more substantially. We have demonstrated that the KDE-to-hists algorithm developed for LHCb data can be adapted to ATLAS and ACTS data using two variations of the UNet architecture. Within ATLAS/ACTS, these algorithms have been validated against the standard vertex finder algorithm. Both variations produce PV-finding efficiencies similar to that of the standard algorithm and vertex-vertex separation resolutions that are significantly better.
## 1 Introduction
Reconstruction of proton-proton collision points, referred to as primary vertices (PVs), is critical for physics analyses conducted by all experiments at the Large Hadron Collider (LHC) and for triggering in LHCb. The precise identification of the PV locations, and their other characteristics, enables the complete reconstruction of final states under investigation. Moreover, it provides crucial information about the collision environment, which is essential for obtaining accurate measurements. The task of PV reconstruction poses a significant challenge across the experiments conducted at the LHC.
The LHCb detector has been upgraded for Run 3 of the LHC so that it can process a five-fold increase in its instantaneous luminosity compared to Run 2 and it has removed its
hardware-level trigger in favor of a pure software trigger [1]. The average number of visible PVs detected in the vicinity of the beam crossing area has increased from 1.1 to 5.6. In contrast, the ATLAS experiment has observed an average of 40-60 simultaneous collisions (known as pile-up, \(\mu\)) during Run 3 in 2023 and is expected to see 140-200 simultaneous collisions during the coming high-luminosity phase of the LHC. These demanding conditions invite development of new PV reconstruction algorithms to address these challenges.
This document presents the implementation and performance of a family of machine learning PV reconstruction algorithms known as PV-Finder for both LHCb and ATLAS. Conceptually, these algorithms compute one-dimensional Kernel Density Estimators (KDEs) that describe where charged track trajectories overlap in the vicinity of the beamline and use these as input feature sets for convolutional neural networks (CNNs) that predict target histograms that serve as proxies for the PV positions. LHCb has traditionally used heuristically computed KDEs with its CNNs; in this paper it reports merging a fully connected neural network for KDE computation with a CNN to produce an "end-to-end" tracks-to-hist deep neural network (DNN) model and compares its performance with that of older models. ATLAS currently uses an analytical approach for KDE computation (referred to as a KDE-to-hist model) and compares the performance with the Adaptive Multi-Vertex Finder (AMVF) algorithm [2], the heuristic PV identification algorithm currently used in ATLAS.
## 2 PV-Finder in LHCb
The original LHCb DNN for reconstructing PVs used a single kernel density estimator (KDE) calculated using a heuristic algorithm as the input feature set for each event (each beam crossing) and produced a target histogram from which PV positions were deduced. We refer to this class of algorithms as KDE-to-hist algorithms. The results of the initial proof-of-principle project, and some details of the "toy Monte Carlo" and KDE used for that study, are reported in Ref. [3]. Using track parameters produced by the LHCb Run 3 Vertex Locator (VELO) tracking algorithm [4] leads to significantly better performance [5]. Since then, our research has advanced in several directions. We replaced our original input feature set with four input feature sets: a first KDE based on summed probabilities in voxels projected onto the beam axis, a second KDE based on summed probability-squared values in voxels projected onto the beam axis, plus the \(x-\) and \(y-\) coordinates of the maximum summed probability at each value of \(z\) (along the beam axis). We found that using a modified U-Net architecture [6] in place of our original CNN architecture provided equally good fidelity and trained much more quickly. We also investigated using a fully connected network to calculate a KDE from track parameters (a tracks-to-KDE model) and merging this model with a KDE-to-hist model to produce an "end-to-end" tracks-to-hist neural network. The fidelity of the tracks-to-hist model studied then was inferior to that of the KDE-to-hist models. The results of these studies were presented at CHEP-2021 [7].
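The four feature sets can be sketched as follows, assuming a per-voxel track-overlap probability array has already been computed; the voxelization itself is not shown, the array names are ours, and the interpretation of "maximum summed probability at each \(z\)" as a per-\(z\) argmax over \((x,y)\) voxels is our reading of the text:

```python
import numpy as np

def kde_features(prob, x_centers, y_centers):
    """prob: (Nx, Ny, Nz) per-voxel track-overlap probabilities.
    Returns the two KDEs plus the (x, y) of the maximum-probability
    voxel at each z."""
    kde_sum = prob.sum(axis=(0, 1))        # summed probability vs. z
    kde_sq = (prob ** 2).sum(axis=(0, 1))  # summed probability-squared vs. z
    flat = prob.reshape(-1, prob.shape[2])
    ix, iy = np.unravel_index(flat.argmax(axis=0), prob.shape[:2])
    return kde_sum, kde_sq, x_centers[ix], y_centers[iy]

prob = np.random.rand(16, 16, 4000)
xy = np.linspace(-1.0, 1.0, 16)
a, b, xmax, ymax = kde_features(prob, xy, xy)
print(a.shape, b.shape, xmax.shape, ymax.shape)  # all (4000,)
```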
A major advance reported at this conference (CHEP-2023) is that we have produced a tracks-to-hist model that produces efficiencies very similar to the best produced by our KDE-to-hist models _and_ produces significantly lower false positive (FP) rates. These results were reported previously at ACAT-2022 [8]. Below, we summarize the most salient features. Brand new for this conference are results using FP16 arithmetic rather than FP32 arithmetic for the tracks-to-hist model and results using smaller U-Net components in the FP16 tracks-to-hist models.
The current tracks-to-hist model, whose architecture is shown in Fig. 1, includes a few updates relative to the original version described in Ref. [7]: the tracks-to-KDE part of the model consists of 6 fully connected layers that are initially trained to produce a KDE and the weights of the first 5 layers are temporarily frozen; a variation with 8 latent feature
sets is merged with a KDE-to-hist-like DNN in which the classical CNN layers are replaced by a U-Net model. Critically, we also updated the structure of the input data for training and inference. In the earlier approach [7], the target histograms consisted of 4000 bins along the z-direction (beamline), each \(100\,\mathrm{\mu m}\) wide, spanning the active area of the VELO around the interaction point, such that \(z\in[-100,300]\,\mathrm{mm}\). Parameters describing all tracks served as input features. Instead of describing the true PVs using a single 4000-bin histogram, we now slice each event into 40 intervals of 100 bins each. For each interval, parameters of tracks whose points of closest approach to the beamline lie within 2.5 mm of the interval edges are used as input features. This approach is motivated by the fact that the shapes of the target histogram are expected to be invariant as a function of the true PV position, and it is easier for a DNN to learn to predict target histograms over a smaller range of bins. In particular, the fully connected layers that calculate the KDE-like latent features used as input features by the U-Net layers predict heuristic KDEs as the ground truth much more effectively when training on 100-bin intervals rather than the full 4000-bin range. Additionally, the depth of the U-Net part of the DNN can be lower when processing a 100-bin feature set rather than a 4000-bin feature set. With an average of \(\sim 5\) PVs per event, most of the bins in both the KDE and target histograms have no significant activity. We expect this will allow us to eventually build a more performant inference engine in the LHCb software stack. The 40 intervals of 100 bins are independent and homogeneous between events. Each interval is treated independently, after which the predicted 4000-bin histogram is stitched back together. As in past studies, an asymmetry parameter between the cost of overestimating contributions to the target histograms and underestimating them [3] is used as a hyperparameter to allow higher efficiency by incurring higher false positive rates.
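A sketch of this slicing, with the interval geometry implied by the numbers above (4000 bins of \(100\,\mathrm{\mu m}\) over \(z\in[-100,300]\,\mathrm{mm}\), 40 intervals, 2.5 mm track margin); `z_poca` stands for a track's point of closest approach to the beamline:

```python
import numpy as np

Z_MIN, BIN_W = -100.0, 0.1         # mm; 4000 bins span [-100, 300) mm
N_BINS, N_INT = 4000, 40
INT_W = (N_BINS // N_INT) * BIN_W  # 10 mm per 100-bin interval
MARGIN = 2.5                       # mm margin for track selection

def interval_tracks(z_poca, i):
    """Indices of tracks feeding interval i (z_poca in mm)."""
    lo = Z_MIN + i * INT_W
    hi = lo + INT_W
    return np.nonzero((z_poca >= lo - MARGIN) & (z_poca <= hi + MARGIN))[0]

z_poca = np.random.uniform(-100.0, 300.0, size=200)
for i in range(N_INT):
    tracks = interval_tracks(z_poca, i)
    # ... build the 100-bin input features / target for interval i
print(len(interval_tracks(z_poca, 0)))
```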
Performance is evaluated using a heuristic algorithm based on the PV positions along the beam axis, \(z\). Exactly how efficiencies and FP rates are calculated is described in Ref. [7]. The left-hand plot in Fig. 2 shows how the performance of the DNN algorithms has evolved over time. The efficiency is shown on the horizontal axis and the false positive rate per event is shown on the vertical axis. The solid blue circles show the performance of an early KDE-to-hist model described at ACAT-2019 [3]. The green squares show the performance of a KDE-to-hist model described at Connecting-the-Dots in 2020 [5]. Both of the above models were trained using "toy Monte Carlo" with proto-tracking. All subsequent DNN models were
Figure 1: This diagram illustrates the end-to-end, tracks-to-hist DNN. Each event is now sliced into 40 independent 100-bin intervals. Six fully connected layers populate 8 100-bin channels in the sixth layer, for each track. These contributions are summed and processed by a U-Net model with 5 convolutional layers to construct the final 100-bin histogram.
trained using the full VELO tracking algorithm [4], leading to significantly better performances (red triangles to be compared to green squares). The cyan circles and the yellow squares correspond to the best achieved performances for KDE-to-hist models using either a classical CNN architecture or the U-Net model described at CHEP-2021 [7].
The performances of all above models were obtained using an "older" matching procedure with a fixed search window of 0.5 mm. The magenta diamonds show the performance of the tracks-to-hist model described above using the matching procedure described in Ref. [7]. The new tracks-to-hist model enables the DNN to simultaneously reach high efficiencies (\(>97\%\)) and low false positive rates (0.03 per event or 0.6% per reconstructed PV).
Running an inference engine inside a software stack adds another "knob to turn" - throughput versus fidelity. Computing resources are finite, especially in LHCb's first level software trigger which processes 30 MHz of beam crossing data, about 40 Tbit/s, in a GPU application [1]. Modern GPUs provide FP16 performance that can be about twice as fast as FP32 arithmetic, so it is interesting to investigate whether using FP16 arithmetic degrades performance significantly. It is similarly interesting to investigate how performance degrades as the size of the convolutional network inside our DNN is reduced. The right-hand plot in Fig. 2 shows the efficiency versus FP rate for four DNN configurations. The magenta diamonds correspond to the default tracks-to-hist configuration. These points are exactly the same as those in the left-hand plot; the ranges of the axes have been modified to focus on the region of interest. The purple "\(\times\)" markers correspond to the same logical configuration, but using FP16 arithmetic rather than FP32. Near 96% efficiency, the FP rate has increased marginally. Near 97% efficiency, the FP rate has increased much more substantially. Reducing the number of U-Net channels from 64 to 32 or 16, while using FP16 arithmetic, (the darker and lighter crosses in the plot) additionally increases the FP rate near 96% efficiency by a small amount, but increases the FP rate much more significantly near 96.5%. We have begun to code an inference engine to run in LHCb's first level software trigger. The details of the model to be instantiated will balance fidelity of the model against throughput.
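For readers unfamiliar with the mechanics, the following generic PyTorch sketch (layer sizes are placeholders, not the PV-Finder architecture) shows the kind of FP32-to-FP16 comparison described above:

```python
import torch

# Illustrative only: comparing FP32 and FP16 inference for a small
# U-Net-like 1D convolutional block acting on 100-bin intervals.
model = torch.nn.Sequential(
    torch.nn.Conv1d(8, 64, kernel_size=5, padding=2),
    torch.nn.ReLU(),
    torch.nn.Conv1d(64, 1, kernel_size=5, padding=2),
)
x = torch.randn(256, 8, 100)  # a batch of 100-bin intervals

with torch.no_grad():
    y32 = model(x)  # FP32 baseline
    if torch.cuda.is_available():  # FP16 conv kernels live on the GPU
        y16 = model.half().cuda()(x.half().cuda())
        # maximum deviation introduced by the reduced precision
        print((y32.cuda() - y16.float()).abs().max().item())
```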
## 3 PV-Finder in ATLAS
The ATLAS experiment at the LHC is a versatile particle detector designed with a symmetric cylindrical geometry and near-complete coverage of \(4\pi\) in solid angle [9]. It has a multi-layer
Figure 2: (left) Comparison between the performances of models reported in previous years and the new tracks-to-hist model (magenta diamonds). A cost asymmetry parameter described in Ref. [3] is varied to produce the families of points observed. (right) Comparison between tracks-to-hist models. The magenta diamonds here are the same as in the plot on the left. The other models have U-Net architectures but use FP16 arithmetic rather than FP32. Two of the FP16 models have smaller U-Net components than the FP32 model. NB: the horizontal and vertical scales on the right cover more limited ranges than those on the left.
structure with many sub-detector systems including an inner tracking detector, superconducting magnets, electromagnetic and hadronic calorimeters, and a muon spectrometer. An extensive software suite [10] facilitates its various functions such as data reconstruction and analysis, detector operations, trigger and data acquisition systems etc.
The input dataset used for studying PV-Finder in ATLAS has been generated using POWHEG BOX[v2][11] interfaced with PYTHIA[8.230][12] and processed through the ATLAS detector simulation framework [10], using the GEANT4 toolkit [13]. The hard-scatter (HS) process involves the production of semi-leptonically decaying top quark pairs (\(t\bar{t}\)) from proton-proton collisions at a center-of-mass energy of 13 TeV, overlaid with simulated minimum-bias events with an average pile-up of 60.
### PV-Finder algorithm and model architecture
The flowchart representing the work-flow of the PV-Finder algorithm for ATLAS is shown in Figure 3. More details about the architecture can be found in the ATLAS PubNote [14]. Truth-matched reconstructed tracks passing tight quality selection cuts [15] and \(\mathrm{p_{T}}>500\) MeV are used for the preparation of input features for the neural network. A track's signed radial and longitudinal impact parameters, \(d_{0}\) and \(z_{0}\), measured at the point of closest approach (POCA) to the beamline, and their uncertainties, \(\sigma(d_{0})\) and \(\sigma(z_{0})\), are used as input to generate KDEs. Each KDE feature is a one-dimensional binned histogram with 12,000 bins in \(z\in[-240,240]\) mm, corresponding to a bin-size of 40 \(\mu\)m.
To compute these features, each track is modeled as a correlated radial and longitudinal Gaussian probability distribution \(\mathbb{P}(d,z)\) centred at \((d_{0},z_{0})\) which is defined as follows:
\[\mathbb{P}(r)=\mathbb{P}(d,z)=\frac{1}{2\pi\sqrt{|\Sigma|}}\mathrm{exp}\bigg{(} -\frac{1}{2}\Big{(}(d-d_{0}),(z-z_{0})\Big{)}^{T}\Sigma^{-1}\Big{(}(d-d_{0}), (z-z_{0})\Big{)}\bigg{)} \tag{1}\]
where \(d\) and \(z\) are coordinates in the radial and longitudinal directions and \(\Sigma=\left(\begin{array}{cc}\sigma^{2}(d_{0})&\sigma(d_{0},z_{0})\\ \sigma(d_{0},z_{0})&\sigma^{2}(z_{0})\end{array}\right)\) is the covariance matrix. The sum of probabilities from all the contributing tracks is considered in each \(z\)-bin and four KDE features are constructed: KDE-A (sum of track probability values), KDE-B (sum of the squares of track probability values), and XMax (YMax), the location of the maximum summed track probability in \(x\) (\(y\)), in mm. An example illustrating these four features for a random event is shown in Fig. 4. The vertical grey
Figure 3: Flowchart representing work-flow of the PV-Finder algorithm from left to right.
lines in the upper plot mark the locations of true primary vertices while the horizontal grey line in the lower plot denotes the position of the beam spot in the radial direction. A restricted range of the luminous region is shown so that details can be seen.
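A simplified sketch of how Eq. (1) turns tracks into binned KDE features follows; it evaluates each track's probability along the beamline (\(d=0\)) only, whereas the actual feature preparation also scans \(x\) and \(y\) to extract XMax and YMax. All numerical values are illustrative placeholders:

```python
import numpy as np

edges = np.linspace(-240.0, 240.0, 12001)      # 12,000 bins of 40 um
centers = 0.5 * (edges[:-1] + edges[1:])

def track_probability(z, d0, z0, sig_d0, sig_z0, cov):
    """P(d=0, z) for one track, following Eq. (1)."""
    Sigma = np.array([[sig_d0**2, cov], [cov, sig_z0**2]])
    inv, det = np.linalg.inv(Sigma), np.linalg.det(Sigma)
    r = np.stack([np.full_like(z, -d0), z - z0])  # (d - d0, z - z0) at d = 0
    expo = -0.5 * np.einsum("iz,ij,jz->z", r, inv, r)
    return np.exp(expo) / (2 * np.pi * np.sqrt(det))

# toy tracks: (d0, z0, sigma(d0), sigma(z0), sigma(d0,z0))
tracks = [(-0.01, 12.3, 0.02, 0.05, 0.0), (0.02, 12.4, 0.03, 0.06, 0.0)]
probs = np.array([track_probability(centers, *t) for t in tracks])
kde_a = probs.sum(axis=0)        # sum of track probabilities (KDE-A)
kde_b = (probs**2).sum(axis=0)   # sum of squared probabilities (KDE-B)
print(centers[kde_a.argmax()])   # z of the most PV-like bin
```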
To train the neural network, a one-dimensional target truth histogram, with the same binning as the input features and calculated by considering Gaussian probabilities around truth vertex locations, is also provided as input along with the four KDE features. A CNN is trained on these features which then outputs a distribution with approximately Gaussian peaks centered at the predicted locations of PVs. An algorithm then takes this predicted distribution and identifies the candidate PV locations on the \(z\)-axis by finding the local maxima. Two NN architectures have been considered for these studies: the UNet architecture is inspired by the original architecture developed for biomedical image segmentation [6] while the UNet++ architecture is a variation of UNet with dense skip connections.
### Performance
The PV-Finder algorithm's performance for UNet and UNet++ architectures has been studied and a comparative analysis is conducted with the AMVF algorithm using an independent test data sample. Figure 5 showcases an example of two adjacent vertices accurately located by the PV-Finder algorithm. To quantitatively evaluate the performance of the PV-Finder, vertex classification is performed, and efficiency and false positive rates are calculated. The classification assigns vertices into distinct categories, namely clean, merged, split, and fake, based on the distance between the center of a predicted vertex and the \(z\)-location of truth vertices. The classification is illustrated in Figure 6 and demonstrated in Figure 7 for the three approaches.
The truth and reconstructed primary vertices are associated based on a vertex-vertex resolution, \(\sigma_{\text{vtx-vtx}}\), which is obtained by computing the \(z\)-difference between pairs of nearby reconstructed vertices and fitting the distribution with the fit function: \(y=\frac{a}{1+\exp\left(b(R_{cc}-|x|)\right)}+c\), where \(a,b,c\) are free parameters, and \(R_{cc}\) is the cluster-cluster resolution referred to as \(\sigma_{\text{vtx-vtx}}\). The vertex-vertex resolution for PV-Finder UNet, PV-Finder UNet++ and AMVF is presented in Figure 8 and Table 1.
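A minimal sketch of this resolution fit using SciPy follows; the data points here are synthetic placeholders, not the ATLAS distributions:

```python
import numpy as np
from scipy.optimize import curve_fit

def f(x, a, b, c, R_cc):
    """Fit function y = a / (1 + exp(b*(R_cc - |x|))) + c."""
    return a / (1.0 + np.exp(b * (R_cc - np.abs(x)))) + c

dz = np.linspace(-5, 5, 101)                 # z-difference between pairs (mm)
counts = f(dz, 100.0, 4.0, 1.0, 1.2)         # toy "truth" shape
counts += np.random.default_rng(1).normal(0, 2, dz.size)

popt, _ = curve_fit(f, dz, counts, p0=[90, 3, 0, 1])
print(f"fitted cluster-cluster resolution R_cc = {popt[3]:.2f} mm")
```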
The vertex finding efficiency is defined as the number of truth vertices assigned to reconstructed vertices as "clean" and "merged" divided by the total number of reconstructable truth vertices while the false positive rate is defined as the average number of predicted vertices not matched to any truth vertex. Figure 9 shows the vertex finding efficiency as a function of
the number of reconstructed tracks associated to a truth vertex and Table 1 shows the average efficiency and false positive rates obtained for three cases.
## 4 Conclusion
The PV-Finder family of algorithms has been studied by both the LHCb and ATLAS experiments. LHCb has demonstrated the performances of the end-to-end tracks-to-hist approach for several configurations including those that use FP16 arithmetic rather than FP32. ATLAS has demonstrated that a hybrid KDE-to-hist approach produces efficiencies comparable to the ATLAS AMVF algorithm while also achieving significantly improved resolution. These enhanced efficiency and resolution metrics hold significant importance, especially considering the future High Luminosity LHC program. The results are promising and motivate further studies and refinement of the PV-Finder algorithms across experiments.
[Copyright 2023 CERN for the benefit of the ATLAS and LHCb Collaborations. CC-BY-4.0 license]
|
2309.09751 | On the Adjacency and Seidel Spectra of Hypergraphs | A hypergraph generalizes the concept of an ordinary graph. In an ordinary
graph, edges connect pairs of vertices, whereas in a hypergraph, hyperedges can
connect multiple vertices at a time. In this paper, we obtain a relationship
between the characteristic polynomial of Seidel and adjacency matrices of
hypergraph and also compute all the eigenvalues of some k-uniform hypergraphs.
Moreover, we estimate the adjacency and Seidel spectra of the uniform double
hyperstar and sunflower hypergraph. In addition to that, we determine the
Seidel spectrum and main Seidel eigenvalues of hyperstar. | Liya Jess Kurian, Chithra A V | 2023-09-18T13:25:17Z | http://arxiv.org/abs/2309.09751v1 | # On the Adjacency and Seidel Spectra of Hypergraphs
###### Abstract
A hypergraph generalizes the concept of an ordinary graph. In an ordinary graph, edges connect pairs of vertices, whereas in a hypergraph, hyperedges can connect multiple vertices at a time. In this paper, we obtain a relationship between the characteristic polynomial of Seidel and adjacency matrices of hypergraph and also compute all the eigenvalues of some \(k\)-uniform hypergraphs. Moreover, we estimate the adjacency and Seidel spectra of the uniform double hyperstar and sunflower hypergraph. In addition to that, we determine the Seidel spectrum and main Seidel eigenvalues of hyperstar.
**Keywords:** Seidel matrix, adjacency matrix, hypergraph, \((k,r)\)-regular hypergraph, uniform double hyperstar, sunflower.
## 1 Introduction
Let \(G^{*}=(V,E)\) be a hypergraph of order \(n\) with vertex set \(V=\{v_{1},v_{2},\cdots,v_{n}\}\) and edge set \(E=\{e_{1},e_{2},e_{3},\cdots,e_{m}\}\), where each hyperedge \(e_{i}\in E\) is a subset of \(V\)[4]. The rank of a hypergraph \(G^{*}\) is the maximum cardinality of its hyperedges, and the co-rank is the minimum cardinality of its hyperedges. The order of a hypergraph \(G^{*}=(V,E)\) is the cardinality of \(V\). The degree \(d(v)\) of a vertex \(v\in V\) is the number of hyperedges that contain \(v\). A hypergraph \(G^{*}\) is said to be a \(k\)-uniform hypergraph [7, 15] if the cardinality of each of its hyperedges is \(k\), where \(k\geq 2\). It is evident that an ordinary graph is a \(2\)-uniform hypergraph. A hypergraph with \(d(v_{i})=r\) for all \(v_{i}\in V\) is called an \(r\)-regular hypergraph. A hypergraph is said to be a \((k,r)\)-regular hypergraph if it is both \(k\)-uniform and \(r\)-regular. The properties of \((k,r)\)-regular hypergraphs are studied in [15]. The adjacency matrix \(A=(a_{ij})\) of \(G^{*}\)[17] is an \(n\times n\) matrix whose rows and columns are indexed by the vertices of \(G^{*}\) and for all \(v_{i},v_{j}\in V,\)
\[a_{ij}=\left\{\begin{array}{ll}|\ \{e_{k}\in E:\{v_{i},v_{j}\}\subset e_{k}\} \ |&,\,v_{i}\neq v_{j},k=1,2,3,...,m\\ 0&,\,v_{i}=v_{j}\end{array}\right..\]
The adjacency spectrum of hypergraphs, in particular the generalized spectrum of power hypergraphs, are studied in [6]. Let \(G=(V^{\prime},E^{\prime})\) be an ordinary graph. Then, the power graph is formed by adding \((k-2)\) vertices to each edge of a graph \(G\). Hyperstar can be considered as a power graph of a star graph. In [5], Cardoso investigated hyperstars and their properties. The author also gave the adjacency spectrum of hyperstar.
**Theorem 1.1**.: [5] _The adjacency spectrum of hyperstar \(S_{n}^{k}\) is_
\[\sigma_{A}(S_{n}^{k})=\begin{pmatrix}-1&k-2&r_{1}&r_{2}\\ (n-1)(k-2)&n-2&1&1\end{pmatrix}\]
_where \(r_{1}\) and \(r_{2}\) are the roots of the equation \(\lambda^{2}-(k-2)\lambda-(n-1)(k-1)=0\)._
Let \(J_{k,n}\) denote the all-ones matrix of order \(k\times n\), and let \(J_{n}\) and \(I_{n}\) denote the all-ones matrix and the identity matrix of order \(n\), respectively. Then, the Seidel matrix \(S\) of a hypergraph \(G^{*}\) is defined as \(S=J_{n}-I_{n}-2A\)[22]. The matrices \(A\) and \(S\) of \(G^{*}\) are real and symmetric, so their eigenvalues are real. A scalar \(\lambda\) is an eigenvalue of a square matrix \(M\) if \(M\mathbf{x}=\lambda\mathbf{x}\) for some nonzero eigenvector \(\mathbf{x}\) corresponding to \(\lambda\). Let \(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{n}\) and \(\mu_{1}\geq\mu_{2}\geq\cdots\geq\mu_{n}\) be the eigenvalues of \(A\) and \(S\), respectively. The collection of all eigenvalues together with their multiplicities is known as the spectrum of \(A\,(\text{or}\ S)\) of \(G^{*}\). Let \(\lambda_{1},\lambda_{2},\lambda_{3},\cdots,\lambda_{d}\) be the distinct eigenvalues of the adjacency matrix \(A\) of a hypergraph \(G^{*}\) with multiplicities \(m_{1},m_{2},m_{3},...,m_{d}\). Then the adjacency spectrum of \(G^{*}\) is denoted by,
\[\sigma_{A}(G^{*})=\begin{pmatrix}\lambda_{1}&\lambda_{2}&\lambda_{3}&\cdots& \lambda_{d}\\ m_{1}&m_{2}&m_{3}&\cdots&m_{d}\end{pmatrix}.\]
The Seidel energy \(SE(G^{*})\) of a hypergraph \(G^{*}\) is defined as the sum of the absolute values of the Seidel eigenvalues of \(G^{*}\). In [9], Cvetkovic proposed the idea of the main eigenvalue: an eigenvalue is said to be a main eigenvalue if it has an eigenvector in which the sum of the entries is not equal to zero, that is, an eigenvector which is not orthogonal to \(\boldsymbol{j}\), where \(\boldsymbol{j}\) denotes the column vector whose entries are all equal to \(1\). Note that a Seidel eigenvalue of \(S\) is said to be a main Seidel eigenvalue of \(G^{*}\) if its eigenspace is not orthogonal to \(\boldsymbol{j}\). The tensor product of an \(n\times m\) matrix \(M=(m_{ij})\) and a \(p\times q\) matrix \(N\) is the \(np\times mq\) matrix given by \((M\otimes N)_{(i,j)}=m_{ij}N\). Throughout, \(A\) and \(S\) represent the adjacency and Seidel matrices of the hypergraph \(G^{*}\).
In this paper, we focus on the study of some classes of non-regular hypergraphs. In Section 2, we give basic definitions and results that will be used later. In Section 3, we determine the relationship between the characteristic polynomials of the Seidel and adjacency matrices of a hypergraph. Also, we obtain the Seidel spectrum of \((k,r)\)-regular hypergraphs. In Section 4, the Seidel spectrum and main Seidel eigenvalues of the hyperstar are calculated. Also, we estimate the Seidel energy of the hyperstar. In Section 5, we compute the adjacency spectrum and Seidel spectrum of the uniform double hyperstar. In Section 6, the adjacency and Seidel spectra of the sunflower hypergraph are given.
## 2 Preliminaries
This section gives basic definitions, terminologies, and facts used in the main results.
**Theorem 2.1**.: [18] _Let \(v_{i}\) and \(v_{j}\) be two vertices of a hypergraph \(G^{*}\). Then the number of walks of length \(k\) from \(v_{i}\) to \(v_{j}\) of \(G^{*}\) is the \((i,j)^{\text{th}}\) entry of the matrix \(A^{k}\)._
**Definition 2.2**.: [8] _The walk generating function of the number of walks of hypergraph \(G^{*}\) is given by,_
\[H_{G^{*}}(t)=\sum_{l=0}^{\infty}N_{l}t^{l}\]
_where \(N_{l}\) denote the number of walks of length \(l\) in \(G^{*}\)._
**Theorem 2.3**.: [9] _Let \(G\) be a multigraph of order \(n\) and \(A(G)\) be the adjacency matrix of \(G\). If \(\lambda_{1},\lambda_{2},\lambda_{3},\cdots,\lambda_{n}\) be the eigenvalues of \(A(G)\) corresponding to the mutually orthogonal normalized eigenvectors \(x_{1},x_{2},\)\(x_{3},\cdots,x_{n}\) and \(X=(x_{ij})\) be an orthogonal matrix of the eigenvectors of \(A\). Then the total number of walks of length \(l\) in \(G\) is given by,_
\[N_{l}=\sum_{j=1}^{n}C_{j}\lambda_{j}^{l},\]
_where \(C_{j}=\Bigl{(}\sum_{i=1}^{n}x_{ij}\Bigr{)}^{2}\)._
**Definition 2.4**.: [5] _Let \(S_{n}\) be a star with \(n\) vertices \(\{v_{0,0}\), \(v_{1,1}\), \(v_{2,1}\), \(\cdots\)\(v_{n-1,1}\}\), then hyperstar \(S_{n}^{k}=(V,E)\) is obtained from the star by adding \(k-2\) new vertices to each hyperedge in such a way that \(V=\{v_{0,0},v_{1,1},v_{1,2},\)\(\cdots\)\(,v_{1,k-1},v_{2,1}\)\(,\)\(v_{2,2},\cdots\)\(,\)\(v_{2,k-1},\)\(\cdots\)\(,\)\(v_{n-1,k-1}\}\) and \(n-1\) hyperedges \(E=\{\{v_{0,0},v_{1,1},v_{1,2},\)\(\cdots\)\(,v_{1,k-1}\}\)\(,\)\(\{v_{0,0},v_{2,1}\)\(,\)\(v_{2,2},\cdots\)\(,\)\(v_{2,k-1}\}\)\(,\)\(\cdots\)\(,\)\(\{v_{0,0},v_{n-1,1},v_{n-1,2},\)\(\cdots\)\(,\)\(v_{n-1,k-1}\}\)\(\}\)._
**Definition 2.5**.: [2] _The complete \(r\)-uniform hypergraph \(K_{n}^{r}\) is a hypergraph with \(n\) vertices such that all possible subsets with \(r\) vertices form hyperedges._
**Lemma 2.6**.: [10] _Let \(B,C,D\), and \(X\) be matrices with \(B\) invertible. Let_
\[M=\begin{pmatrix}B&C\\ D&X\end{pmatrix}\]
_Then \(det(M)=det(B)det(X-DB^{-1}C)\) and if \(X\) is invertible, then \(det(M)=det(X)det(B-CX^{-1}D)\)._
**Lemma 2.7**.: [20] _Let \(\mathbf{M}\in\mathbb{R}^{n\times n}\) be an invertible matrix, and \(U\) and \(W\) are \(n\times 1\) matrices. Then_
\[det(\mathbf{M}+UW^{T})=\text{det}(\mathbf{M})+W^{T}\text{adj}(\mathbf{M})U\]
_where \(adj(\mathbf{M})\) denotes the adjoint of \(\mathbf{M}\)._
**Definition 2.8**.: [21] _Let \(S_{n_{1},n_{2}}\) be a double star of order \(n_{1}+n_{2}\), which is obtained by adding an edge connecting the central vertices of the star \(S_{n_{1}}\) and \(S_{n_{2}}\). Then the \(k\)-th power of \(S_{n_{1},n_{2}}\) is called uniform double hyperstar \(S_{n_{1},n_{2}}^{k}\)._
**Theorem 2.9**.: [11] _Let \(N\in\mathbb{R}^{n\times n}\) be a symmetric matrix with eigenvalues \(\lambda_{1}\geq\lambda_{2}\geq\lambda_{3}\geq\cdots\geq\lambda_{n}\). If \(\mu_{1}\geq\mu_{2}\geq\mu_{3}\geq\cdots\geq\mu_{m}\) are the eigenvalues of the principal submatrix \(M\in\mathbb{R}^{m\times m}\), then_
\[\lambda_{i}\geq\mu_{i}\geq\lambda_{n-m+i}\text{ for }i=1,2,\cdots,m.\]
Figure 1: \(S_{4}^{3}\)-Hyperstar
Figure 2: Uniform double hyperstar-\(S_{4,5}^{3}\)
**Lemma 2.10**.: [14] _For any two real numbers \(r\) and \(s\),_
\[(rI_{n}-sJ_{n})^{-1}=\frac{1}{r}I_{n}+\frac{s}{r(r-ns)}J_{n}.\]
**Lemma 2.11**.: [14] _Let \(M\) be an \(n\times n\) matrix. Then_
\[det(\beta I_{n}-M-\gamma J_{n})=(1-\gamma\chi_{M}(\beta))det(\beta I_{n}-M).\]
_For an \(n\times n\) real matrix \(M\) with row sum equal to \(r\), \(\chi_{M}(\beta)=\frac{n}{\beta-r}\)._
**Definition 2.12**.: [13] _Let \(S^{k}=(V,E)\) be a \(k\)-uniform hypergraph of order \(k(k-1)+1\). If we label the vertex set \(V\) as \(V=\{v_{0,0},v_{1,1},v_{1,2},\cdots,v_{1,k},\cdots,v_{k-1,1},v_{k-1,2},v_{k-1, 3},\cdots,v_{k-1,k}\}\) such that the set of hyperedges is \(E=\{\{v_{1,1},v_{1,2},v_{1,3}\cdots,v_{1,k}\},\cdots,\{v_{k-1,1},v_{k-1,2},v_ {k-1,3},\cdots,v_{k-1,k}\},\{v_{0,0},v_{1,1},v_{2,1},\cdots,\)\(v_{k-1,1}\}\}\), then \(S^{k}\) is a sunflower hypergraph._
**Theorem 2.13**.: [12] _Let \(G\) be a graph of order \(n\). Then the rank of the matrix \(\begin{bmatrix}\boldsymbol{j}&A\boldsymbol{j}&\cdots&A^{n-1}\boldsymbol{j} \end{bmatrix}\) is equal to the number of its main eigenvalues._
Let \(M\) be a real matrix of order \(n\) such that rows and columns of \(M\) are indexed by elements of \(X=\{1,2,3,\cdots,n\}\). Consider a partition \(P=\{X_{1},X_{2},\cdots,X_{m}\}\) of \(X\). Then the partition of \(M\) according to \(P\) is \(\begin{bmatrix}M_{11}&M_{12}&\cdots&M_{1m}\\ M_{21}&M_{22}&\cdots&M_{2m}\\ \vdots&\vdots&\ddots&\vdots\\ M_{m1}&M_{m2}&\cdots&M_{mm}\end{bmatrix},\) where each \(M_{ij}\) is a submatrix of \(M\) such that rows and columns of \(M_{ij}\) are indexed by elements of \(X_{i}\) and \(X_{j}\) respectively. If \(q_{ij}\) denotes the average row sum of \(M_{ij}\), then the matrix \(Q=(q_{ij})\) is called a quotient matrix of \(M\). If the row sum of each block \(M_{ij}\) is a constant, then the partition \(P\) is called equitable.
**Theorem 2.14**.: [1] _Let \(Q\) be a quotient matrix of any square matrix \(M\) corresponding to an equitable partition. Then, the spectrum of \(M\) contains the spectrum of \(Q\)._
## 3 Characteristic Polynomial of a Hypergraph
Using the characteristic polynomial of the adjacency matrix of a hypergraph, we can find the spectrum of the Seidel matrix of \(G^{*}\).
The following theorem gives the relation between the characteristic polynomial of the adjacency and the Seidel matrix of the hypergraph \(G^{*}\).
Figure 3: Sunflower hypergraph \(S^{4}\)
**Theorem 3.1**.: _Let \(P_{S}(\lambda)\) be the characteristic polynomial of the Seidel matrix of \(G^{*}\) and \(P_{A}(\lambda)\) be the characteristic polynomial of the adjacency matrix of \(G^{*}\). Then,_
\[P_{S}(\lambda)=(-2)^{n}P_{A}\Big{(}-\frac{\lambda+1}{2}\Big{)}\left(\frac{-1}{ \lambda+1}H_{G^{*}}\Big{(}\frac{-2}{\lambda+1}\Big{)}+1\right),\]
_where \(H_{G^{*}}\) is the walk generating function of number of walks in \(G^{*}\)._
Proof.: For an invertible square matrix \(\mathbf{M}\), let \(Sum(\mathbf{M})\) denote the sum of all the entries of \(\mathbf{M}\). By Lemma 2.7,
\[\det(\mathbf{M}+UW^{T})=\det(\mathbf{M})+W^{T}\mathrm{adj}(\mathbf{M})U.\]
Take \(U_{n\times 1}=[\frac{1}{2}\ \ \frac{1}{2}\ \ \frac{1}{2}\ \cdots\ \frac{1}{2}]^{T}\) and \(W_{n\times 1}=[w\ \ w\ \ w\ \cdots\ w]^{T},\) where \(w\in\mathbb{R}\) then
\[UW^{T}=\frac{w}{2}\ J\quad\text{and}\quad W^{T}\mathrm{adj}(\mathbf{M})U= \frac{w}{2}Sum(adj(\mathbf{M})),\]
So,
\[\det(\mathbf{M}+\frac{w}{2}J)=\det(\mathbf{M})+\frac{w}{2}Sum(adj(\mathbf{M})). \tag{1}\]
From Theorem 2.1, the total number of walks of length \(l,\)\(N_{l}=\sum\limits_{i,j=1}^{n}a_{ij}^{l},\) where \(a_{ij}^{l}\) is the \(ij\)-th entry of \(A^{l}\). Therefore,\(N_{l}=Sum(A^{l}).\) Let \(H_{G^{*}}(t)=\sum\limits_{l=0}^{\infty}N_{l}t^{l}\) be the generating function of the number of walks of length \(l\) of \(G^{*}\). Then,
\[H_{G^{*}}(t)=\sum\limits_{l=0}^{\infty}N_{l}t^{l}=\sum\limits_{l=0}^{\infty} Sum(A^{l})t^{l}.\]
We know that, \(\sum\limits_{l=0}^{\infty}A^{l}t^{l}=(I-tA)^{-1}\) when \(\|tA\|<1.\) Then, \(\sum\limits_{l=0}^{\infty}A^{l}t^{l}=(det(I-tA))^{-1}adj(I-tA).\)
Therefore,
\[H_{G^{*}}(t)=(\det(I-tA))^{-1}\ Sum(adj(I-tA)).\]
From (1) we get, \(Sum(adj(I-tA))=\frac{2}{t}[\det(I-tA+\frac{t}{2}J)-\det(I-tA)].\)
Thus,
\[Sum(adj(I-tA))=\frac{2}{t}[\det((1+\frac{t}{2})I+\frac{t}{2}S)-\det(I-tA)].\]
Hence \(H_{G^{*}}(t)\),
\[H_{G^{*}}(t)=\frac{2}{t}\left(\frac{\det((1+\frac{t}{2})I+\frac{t }{2}S)-\det(I-tA)}{\det(I-tA)}\right) =\frac{2}{t}\left(\frac{\det(\frac{1}{2}(\frac{2+t}{t})I+S)}{\det (\frac{1}{t}I-A)}-1\right)\] \[=\frac{2}{t}\left(\left(\frac{-1}{2}\right)^{n}\frac{\det(-( \frac{2+t}{t})I-S)}{\det(\frac{1}{t}I-A)}-1\right).\]
Therefore,
\[H_{G^{*}}(t)=\frac{2}{t}\left(\left(-\frac{1}{2}\right)^{n}\frac{P_{S}\left(- \frac{t+2}{t}\right)}{P_{A}(\frac{1}{t})}-1\right).\]
Then
\[P_{A}(\lambda)=\frac{\left(-\frac{1}{2}\right)^{n}P_{S}(-1-2\lambda)}{\frac{1 }{2\lambda}H_{G^{*}}\left(\frac{1}{\lambda}\right)+1},\text{ when }t=\frac{1}{\lambda} \tag{2}\]
and replacing \(t\) by \(\frac{-2}{1+\lambda}\)
\[P_{S}(\lambda)=(-2)^{n}P_{A}\Big{(}-\frac{\lambda+1}{2}\Big{)}\left(\frac{-1}{ \lambda+1}H_{G^{*}}\Big{(}\frac{-2}{\lambda+1}\Big{)}+1\right).\]
**Lemma 3.2**.: _Let \(G^{*}\) be a hypergraph of order \(n\) and \(X=(x_{ij})\) be a matrix of mutually orthogonal normalized eigenvectors of \(A\) corresponding to the eigenvalues \(\lambda_{1},\lambda_{2},\lambda_{3},\cdots,\lambda_{n}\). Then the total number of walks of length \(l\) in \(G^{*}\) is given by,_
\[N_{l}=\sum_{j=1}^{n}C_{j}\lambda_{j}^{l},\]
_where \(C_{j}=\Big{(}\sum_{i=1}^{n}x_{ij}\Big{)}^{2}\)._
Proof.: The proof follows from Theorem 2.3.
**Theorem 3.3**.: _If the adjacency spectrum of hypergraph \(G^{*}\) contains an eigenvalue \(\lambda_{0}\) with multiplicity \(m_{p}>1\), then the Seidel spectrum of \(G^{*}\) has an eigenvalue \(-2\lambda_{0}-1\) with multiplicity \(m_{q}\), where \(m_{q}\geq m_{p}-1\)._
Proof.: By Definition 2.2 and Lemma 3.2,
\[H_{G^{*}}(t)=\sum_{j=1}^{n}C_{j}\frac{1}{1-t\lambda_{j}}.\]
Now we define the function \(\varPhi\) as,
\[\varPhi(\lambda)=\frac{(-\frac{1}{2})^{n}P_{S}(-1-2\lambda)}{P_{A}(\lambda)}.\]
From (2) we obtain,
\[\varPhi(\lambda)=\frac{1}{2\lambda}H_{G^{*}}(\frac{1}{\lambda})+1=\frac{1}{2} \sum_{j=1}^{n}C_{j}\frac{\lambda}{\lambda-\lambda_{j}}+1.\]
By expanding the right hand side of the above equation, we get a rational polynomial \(\frac{P_{1}(\lambda)}{P_{2}(\lambda)}\). Since
\[\frac{(-\frac{1}{2})^{n}P_{S}(-1-2\lambda)}{P_{A}(\lambda)}=\frac{1}{2}\sum_{j =1}^{n}C_{j}\frac{\lambda}{\lambda-\lambda_{j}}+1,\]
it is clear that the roots of \(P_{2}(\lambda)\) are all of multiplicity \(1\). So if \(\lambda_{0}\) is an eigenvalue of \(A\) with multiplicity \(m_{p}\,(m_{p}\geq 2)\), \(P_{S}(-1-2\lambda)\) contains a factor \((\lambda-\lambda_{0})^{m_{q}}\) where \(m_{q}\geq m_{p}-1\). Therefore, \(P_{S}(\lambda)\) contains the factor \((\lambda+2\lambda_{0}+1)^{m_{q}}\). Hence, the Seidel spectrum of \(G^{*}\) contains an eigenvalue \(-2\lambda_{0}-1\) with multiplicity \(m_{q}\).
### Characteristic polynomial of \((k,r)\)-regular hypergraph
The \((k,r)\)-regular hypergraph was investigated in [15, 16]. A \((k,r)\)-regular hypergraph is a \(k\)-uniform \(r\)-regular hypergraph. In [16], Li and Sole derive \(r(k-1)\) as an eigenvalue of the adjacency matrix of a \((k,r)\)-regular hypergraph.
**Lemma 3.4**.: _Let \(G^{*}\) be a \((k,r)\)-regular hypergraph with vertices \(v_{i}\)\((1\leq i\leq n)\) and hyperedges \(e_{j}\)\((1\leq j\leq m)\). Then the number of walks of length \(l\) is given by,_
\[N_{l}=nr^{l}(k-1)^{l}.\]
Proof.: For a \((k,r)\)-regular hypergraph \(G^{*}\) on \(n\) vertices, let \(N_{l}\) be the number of walks of length \(l\). The proof follows from induction on length \(l\). When \(l=1\), pick any random vertex \(v_{1}\in G^{*}\), and assume that it is contained in hyperedge \(e_{1}\), which contains \(k-1\) additional vertices. By this argument, there are \(k-1\) walks of length one with origin \(v_{1}\). Since \(G^{*}\) is \(r\)-regular, there exist \(r(k-1)\) walks of length \(1\) starting from \(v_{1}\). Hence,
\[N_{1}=nr(k-1).\]
Assume that the result holds for \(l=p\); then \(N_{p}=nr^{p}(k-1)^{p}.\) Now, we prove it for \(l=p+1\). For that we choose a walk of length \(p\) from the \(nr^{p}(k-1)^{p}\) walks, say \(v_{1}e_{1}v_{2}e_{2}\)\(v_{3}\cdots e_{p}v_{p+1}\). Since \(v_{p+1}\) is adjacent to \(r(k-1)\) vertices, we can extend it to \(r(k-1)\) walks of length \(p+1\). The walk \(v_{1}e_{1}v_{2}e_{2}\)\(v_{3}\cdots e_{p}v_{p+1}\) is arbitrary, so in total we can have \((nr^{p}(k-1)^{p})(r(k-1))=nr^{p+1}(k-1)^{p+1}\) walks of length \(p+1\). Hence the theorem.
**Lemma 3.5**.: _The generating function of the number of walks of a \((k,r)\)-regular hypergraph \(G^{*}\) on \(n\) vertices is given by,_
\[H_{G^{*}}(t)=\frac{n}{1-r(k-1)t}\ \ \text{if}\ |t|\leq\frac{1}{r(k-1)}.\]
Proof.: The proof follows from Lemma 3.4
**Theorem 3.6**.: _Let \(G^{*}\) be a \((k,r)\)-regular hypergraph with \(n\) vertices and it's adjacency spectrum is \(\lambda_{1}=r(k-1),\lambda_{2},\lambda_{3},\cdots,\lambda_{n}\). Then the Seidel spectrum of \(G^{*}\) is \(n-1-2\lambda_{1},-1-2\lambda_{2},-1-2\lambda_{3},\cdots,-1-2\lambda_{n}\)._
Proof.: From Theorem 3.1 and Lemma 3.5 we obtain,
\[\frac{n}{1-r(k-1)t}=\frac{2}{t}\left(\left(-\frac{1}{2}\right)^{n}\frac{P_{S} \left(-\frac{t+2}{t}\right)}{P_{A}(\frac{1}{t})}-1\right).\]
Putting \(-\bigg{(}\frac{t+2}{t}\bigg{)}=\lambda\) we have,
\[-(\lambda+1)\left(\left(-\frac{1}{2}\right)^{n}\frac{P_{S}(\lambda)}{P_{A}(\frac{-\lambda-1}{2})}-1\right)=\frac{n}{1-r(k-1)\frac{-2}{(\lambda+1)}}=\frac{n(\lambda+1)}{(\lambda+1)+2r(k-1)}\]
On simplification we get,
\[P_{S}(\lambda)=(-2)^{n}\left[\frac{-n+\lambda+1+2r(k-1)}{\lambda+1+2r(k-1)} \right]P_{A}\left(\frac{-\lambda-1}{2}\right).\]
Since \(r(k-1)\) is an eigenvalue of a \((k,r)\)-regular hypergraph, \(P_{A}(\lambda)\) contains the factor \((\lambda-r(k-1))\). Thus,
\[P_{A}\Big{(}\frac{-\lambda-1}{2}\Big{)}=(\lambda+1+2r(k-1))Q_{A}\Big{(}\frac{ -\lambda-1}{2}\Big{)},\]
where \(Q_{A}\) is another polynomial of degree one less than \(P_{A}\). Therefore,
\[P_{S}(\lambda)=(-2)^{n}(\lambda-n+1+2r(k-1))Q_{A}\Big{(}\frac{-\lambda-1}{2} \Big{)}.\]
The Seidel eigenvalue corresponding to the eigenvalue \(r(k-1)\) of the adjacency spectrum is \(n-1-2r(k-1)\). Since \(G^{*}\) is a \((k,r)\)-regular hypergraph with \(n\) vertices,
\[P_{S}(\lambda)=(-2)^{n}\frac{\lambda-n+1+2r(k-1)}{\lambda+1+2r(k-1)}P_{A} \Big{(}\frac{-\lambda-1}{2}\Big{)}.\]
An \(r\)-uniform complete hypergraph of order \(n\), as referred to by Berge in [3], is a hypergraph consisting of all the \(r\)-subsets of the vertex set \(V\). Zakiyyah [22] deals with the spectrum of the \(r\)-uniform complete hypergraph. If the adjacency spectrum is known, we use Theorem 3.6 to establish the Seidel spectrum of the hypergraph.
For example, the adjacency spectrum of \(K_{n}^{r}\) is,
\[\sigma_{A}(K_{n}^{r})=\begin{pmatrix}(n-1)\left(\begin{smallmatrix}n-2\\ r-2\end{smallmatrix}\right)&-\left(\begin{smallmatrix}n-2\\ r-2\end{smallmatrix}\right)\\ 1&n-1\end{pmatrix}.\]
Using Theorem 3.6, the Seidel spectrum of \((K_{n}^{r})\) is
\[\sigma_{S}(K_{n}^{r})=\begin{pmatrix}(n-1)(1-2\left(\begin{smallmatrix}n-2\\ r-2\end{smallmatrix}\right))&2\left(\begin{smallmatrix}n-2\\ r-2\end{smallmatrix}\right)-1\\ 1&n-1\end{pmatrix}.\]
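These closed forms are easy to confirm numerically; the following sketch (illustrative only, using NumPy with 0-indexed vertices) builds \(A\) and \(S\) for \(K_{6}^{3}\) and compares their eigenvalues with the expressions above:

```python
import numpy as np
from itertools import combinations
from math import comb

n, r = 6, 3
edges = list(combinations(range(n), r))   # all r-subsets form hyperedges
A = np.zeros((n, n))
for e in edges:
    for i, j in combinations(e, 2):
        A[i, j] += 1
        A[j, i] += 1
S = np.ones((n, n)) - np.eye(n) - 2 * A   # Seidel matrix

c = comb(n - 2, r - 2)
print(np.round(np.linalg.eigvalsh(A), 6))  # (n-1)*c once, -c (n-1 times)
print(np.round(np.linalg.eigvalsh(S), 6))  # (n-1)*(1-2c) once, 2c-1 (n-1 times)
print((n - 1) * c, -c, (n - 1) * (1 - 2 * c), 2 * c - 1)
```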
## 4 Seidel spectrum of hyperstars
This section extends the study to the Seidel spectrum of hyperstars. Hyperstars are \(k\)-uniform hypergraphs with \((n-1)(k-1)+1\) vertices and \(n-1\) hyperedges.
Let \(G^{*}=(V,E)\) be the hypergraph with n vertices and \(\mathbf{x}=(x_{v_{i}})\) be an \(n\)-dimensional vector. Define
\[x(\alpha)=x(v_{1},v_{2},v_{3},\cdots,v_{t})=x_{v_{1}}+x_{v_{2}}+x_{v_{3}}+ \cdots+x_{v_{t}},\]
where \(\alpha\) is a non-empty subset of \(V\). For simplicity, we write \(x_{v_{i}}\) as \(x_{i}\). Let \(E_{[v]}\) be the set of all hyperedges containing vertex \(v\). The entries corresponding to the vertex \(v\) of the adjacency matrix \(A\) of \(G^{*}\) are given by \((A)_{v}\). Then,
\[(A\mathbf{x})_{v}=\sum_{e\in E_{[v]}}x(e-\{v\}),\forall v\in V.\]
Then for a Seidel matrix \(S\) of \(G^{*}\),
\[(S\mathbf{x})_{v}=x(V-\{v\})-2\sum_{e\in E_{[v]}}x(e-\{v\}),\forall v\in V. \tag{3}\]
For example, let \(G^{*}\) be a hypergraph with vertex set \(V=\{v_{1},v_{2},v_{3},v_{4},v_{5}\}\) and hyperedge set \(E=\{\{v_{1},v_{2},v_{3}\}\), \(\{v_{2},v_{3},\)\(v_{4},v_{5}\}\), \(\{v_{1},v_{2},v_{4}\}\}\) and \(\mathbf{x}=\begin{bmatrix}x_{1}&x_{2}&x_{3}&x_{4}&x_{5}\end{bmatrix}^{T}\). Then,
\[S\mathbf{x}=\begin{bmatrix}0&-3&-1&-1&1\\ -3&0&-3&-3&-1\\ -1&-3&0&-1&-1\\ -1&-3&-1&0&-1\\ 1&-1&-1&-1&0\end{bmatrix}\begin{bmatrix}x_{1}\\ x_{2}\\ x_{3}\\ x_{4}\\ x_{5}\end{bmatrix}=\begin{bmatrix}-3x_{2}-x_{3}-x_{4}+x_{5}\\ -3x_{1}-3x_{3}-3x_{4}-x_{5}\\ -x_{1}-3x_{2}-x_{4}-x_{5}\\ -x_{1}-3x_{2}-x_{3}-x_{5}\\ x_{1}-x_{2}-x_{3}-x_{4}\end{bmatrix}.\]
Therefore, \((S\mathbf{x})_{v_{1}}=-3x_{2}-x_{3}-x_{4}+x_{5}\). Since \(E_{[v_{1}]}=\{\{v_{1},v_{2},v_{3}\},\,\{v_{1},v_{2},v_{4}\}\}\), we get
\[\sum_{e\in E_{[v_{1}]}}x(e-\{v_{1}\})=x(v_{2},v_{3})+x(v_{2},v_{4})=2x_{2}+x_{3} +x_{4}.\]
Also,
\[x(V-\{v_{1}\})=x(v_{2},v_{3},v_{4},v_{5})=x_{2}+x_{3}+x_{4}+x_{5}.\]
Therefore from (3) we get,
\[(S\mathbf{x})_{v_{1}}=x_{2}+x_{3}+x_{4}+x_{5}-2(2x_{2}+x_{3}+x_{4})=-3x_{2}-x_{ 3}-x_{4}+x_{5}.\]
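This bookkeeping can also be checked mechanically; a short NumPy sketch for the five-vertex example above (vertices \(v_{1},\ldots,v_{5}\) are indexed \(0,\ldots,4\)):

```python
import numpy as np
from itertools import combinations

edges = [(0, 1, 2), (1, 2, 3, 4), (0, 1, 3)]   # the three hyperedges
A = np.zeros((5, 5))
for e in edges:
    for i, j in combinations(e, 2):
        A[i, j] += 1
        A[j, i] += 1
S = np.ones((5, 5)) - np.eye(5) - 2 * A
print(S.astype(int))   # reproduces the Seidel matrix displayed above
```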
**Lemma 4.1**.: _Let \(G^{*}\) be a \(k\)-uniform hypergraph and let \(u\), \(v\in V(G^{*})\) belong to exactly the same hyperedges. If \((\lambda,\mathbf{x})\) is an eigenpair of \(S\) with \(\lambda+1\neq 2d(u)\), then \(x_{u}=x_{v}\), where \(x_{u}\) and \(x_{v}\) are entries of \(\mathbf{x}\) corresponding to the vertices \(u\) and \(v\) respectively._
Proof.: Let \((\lambda,\mathbf{x})\) be an eigenpair of \(S(G^{*})\) and \(\mathbf{x}=(x_{1},x_{2},x_{3},\cdots,x_{n})\) be the corresponding eigenvector.
By the definition of \(S\),
\[((S+I)\mathbf{x})_{u}=((J-2A)\mathbf{x})_{u}.\]
Now,
\[((S+I)\mathbf{x})_{u}-2d(u)x_{u} =((J-2A)\mathbf{x})_{u}-2d(u)x_{u}\] \[=x_{1}+x_{2}+x_{3}+...+x_{n}-2(\sum_{e\in E_{[u]}}x(e-\{u\})+d(u) x_{u})\] \[=x_{1}+x_{2}+x_{3}+...+x_{n}-2\sum_{e\in E_{[u]}}x(e)\] \[=x_{1}+x_{2}+x_{3}+...+x_{n}-2\sum_{e\in E_{[v]}}x(e)\] \[=((S+I)\mathbf{x})_{v}-2d(v)x_{v}.\]
Since \((\lambda,\mathbf{x})\) is an eigenpair of \(S\), \((\lambda+1,\mathbf{x})\) is an eigenpair of \(S+I\).
Therefore,
\[(\lambda+1)x_{u}-2d(u)x_{u}=(\lambda+1)x_{v}-2d(v)x_{v}.\]
Since \(u\) and \(v\) are contained in the exact same hyperedges, \(d(u)=d(v)\). Hence \(x_{u}=x_{v}\) if \(\lambda+1\neq 2d(u)\).
**Theorem 4.2**.: _Let \(S_{n}\) be a star on \(n\) vertices. Then the Seidel spectrum of \(S_{n}^{k}\)\((k\geq 2)\) is_
\[\sigma_{s}(S_{n}^{k})=\begin{pmatrix}1&3-2k&r_{1}&r_{2}\\ (n-1)(k-2)&n-2&1&1\end{pmatrix},\]
_where \(r_{1}\) and \(r_{2}\) are the roots of the equation,_
\[\lambda^{2}-((k-1)(n-3)+1)\lambda-(n-1)(k-1)=0.\]
Proof.: Let \(e\in E(S_{n})\). Adding \(k-2\) vertices to \(e\) forms a hyperedge of \(S_{n}^{k}\). Therefore, each edge \(e\) forms a hyperedge \(e^{k}\) of \(k\) vertices in \(S_{n}^{k}\). Let \(\{u_{1},u_{2},u_{3},...,u_{k-1}\}\in e^{k}\) be vertices of degree \(1\). For \(2\leq i\leq k-1\) we construct \((k-2)\) linearly independent eigenvectors \(\mathbf{x}^{i}=(x^{i})_{v},v\in V(S_{n}^{k})\) as follows
\[\mathbf{x}^{i}=(x^{i})_{v}=\begin{cases}-1&\text{if }v=u_{1}\\ 1&\text{if }v=u_{i}\\ 0&\text{otherwise}.\end{cases}\]
Repeating the construction for other hyperedges, we get \((n-1)(k-2)\) linearly independent vectors associated with an eigenvalue \(1\).
Let \(\{\ e_{1},e_{2},e_{3},...,e_{n-1}\}\) be the edges of \(S_{n}\) and \(\{\ e_{1}^{k},e_{2}^{k},e_{3}^{k},...,e_{n-1}^{k}\ \}\) be the hyperedges of \(S_{n}^{k}\). For \(2\leq j\leq(n-2)\)
\[\mathbf{z}^{j}=(z^{j})_{v}=\begin{cases}-1&\text{if $v\in e_{1}^{k}$ and $d(v)=1$}\\ 2&\text{if $v\in e_{j}^{k}$ and $d(v)=1$}\\ -1&\text{if $v\in e_{j+1}^{k}$ and $d(v)=1$}\\ 0&\text{otherwise.}\end{cases}\]
and
\[\mathbf{z}^{n-1}=(z^{n-1})_{v}=\begin{cases}-1&\text{if $v\in e_{1}^{k}$ and $d(v)=1$}\\ 2&\text{if $v\in e_{n-1}^{k}$ and $d(v)=1$}\\ -1&\text{if $v\in e_{2}^{k}$ and $d(v)=1$}\\ 0&\text{otherwise.}\end{cases}\]
This construction will give \(n-2\) linearly independent eigenvectors \(\mathbf{z}^{j}\) corresponding to the eigenvalue \(3-2k\).
Let \((\lambda,\mathbf{x})\) be an eigenpair of \(S_{n}^{k}\), we have
\[S\mathbf{x}=\lambda\mathbf{x}.\]
Let \(E(S_{n})=\{e_{1},e_{2},...,e_{n-1}\}\), let \(\{u_{1}^{j},u_{2}^{j},...,u_{k-1}^{j}\}\) be the vertices of degree \(1\) in \(e_{j}^{k}\), and let \(v\) be the vertex of degree \(n-1\). Also, from Lemma 4.1, since \(u_{i}^{j}\) and \(u_{1}^{j}\) are contained in exactly the same hyperedges, \(x_{u_{i}^{j}}=x_{u_{1}^{j}}\) where \(1\leq j\leq n-1\), \(1\leq i\leq k-1\) and \(n\geq 3\).
By expanding \(S\mathbf{x}=\lambda\mathbf{x}\), we obtain the following system of equations
\[\lambda x_{v} =-[(k-1)x_{u_{1}^{1}}+(k-1)x_{u_{1}^{2}}+...+(k-1)x_{u_{1}^{n-1} }], \tag{4}\] \[\lambda x_{u_{1}^{1}} =-x_{v}-(k-2)x_{u_{1}^{1}}+(k-1)x_{u_{1}^{2}}+(k-1)x_{u_{1}^{3}} +...+(k-1)x_{u_{1}^{n-1}},\] (5) \[\lambda x_{u_{1}^{2}} =-x_{v}+(k-1)x_{u_{1}^{1}}-(k-2)x_{u_{1}^{2}}+(k-1)x_{u_{1}^{3}} +...+(k-1)x_{u_{1}^{n-1}},\] (6) \[\lambda x_{u_{1}^{3}} =-x_{v}+(k-1)x_{u_{1}^{1}}+(k-1)x_{u_{1}^{2}}-(k-2)x_{u_{1}^{3}} +...+(k-1)x_{u_{1}^{n-1}},\] (7) \[\vdots\] \[\lambda x_{u_{1}^{n-1}} =-x_{v}+(k-1)x_{u_{1}^{1}}+(k-1)x_{u_{1}^{2}}+(k-1)x_{u_{1}^{3}} +...-(k-2)x_{u_{1}^{n-1}}. \tag{8}\]
From (5) and (6) we have,
\[x_{v} =-\lambda x_{u_{1}^{1}}-(k-2)x_{u_{1}^{1}}+(k-1)x_{u_{1}^{2}}+(k- 1)x_{u_{1}^{3}}+...+(k-1)x_{u_{1}^{n-1}},\] \[x_{v} =(k-1)x_{u_{1}^{1}}-\lambda x_{u_{1}^{2}}-(k-2)x_{u_{1}^{2}}+(k- 1)x_{u_{1}^{3}}+...+(k-1)x_{u_{1}^{n-1}}.\]
Then,
\[-(\lambda+k-2)x_{u_{1}^{1}}+ (k-1)x_{u_{1}^{2}}+(k-1)x_{u_{1}^{3}}+...+(k-1)x_{u_{1}^{n-1}}\] \[=(k-1)x_{u_{1}^{1}}-(\lambda+k-2)x_{u_{1}^{2}}+(k-1)x_{u_{1}^{3}} +...+(k-1)x_{u_{1}^{n-1}}.\]
On simplification, we obtain,
\[(\lambda+2k-3)x_{u_{1}^{1}}=(\lambda+2k-3)x_{u_{1}^{2}}.\]
Similarly,
\[(\lambda+2k-3)x_{u_{1}^{2}}=(\lambda+2k-3)x_{u_{1}^{3}}.\]
In general,
\[(\lambda+2k-3)x_{u_{1}^{i}}=(\lambda+2k-3)x_{u_{1}^{i+1}},\ \ i=1,2,3,\cdots,n-2.\]
Suppose \(\lambda\neq-2k+3\); then,
\[x_{u_{1}^{1}}=x_{u_{1}^{2}}=\cdots=x_{u_{1}^{n-1}}.\]
From (4)-(8) we obtain,
\[\lambda x_{v} =-(k-1)(n-1)x_{u_{1}^{1}},\] \[\lambda x_{u_{1}^{1}} =-x_{v}-(k-2)x_{u_{1}^{1}}+(k-1)(n-2)x_{u_{1}^{1}}.\]
Therefore,
\[x_{v}=(-\lambda-(k-2)+(k-1)(n-2))x_{u_{1}^{1}}.\]
Thus,
\[(\lambda^{2}+((k-2)-(k-1)(n-2))\lambda)x_{u_{1}^{1}}=(k-1)(n-1)x_{u_{1}^{1}}.\]
Since \(x_{u_{1}^{1}}\neq 0\) we get,
\[\lambda^{2}-((k-1)(n-3)+1)\lambda-(k-1)(n-1)=0.\]
Therefore, roots \(r_{1}\), \(r_{2}\) of the above equation are also eigenvalues of the hyperstar. Thus we have all \((n-1)(k-1)+1\) eigenvalues.
Next, we determine the Seidel energy of the hyperstar \(S_{n}^{k}\). Also, we obtain a relation between the Seidel energy of \(G^{*}\) and \(G^{*}-v\) where \(v\in V\).
**Theorem 4.3**.: _The Seidel energy \(SE(S_{n}^{k})\) of \(S_{n}^{k}\) is,_
\[SE(S_{n}^{k})=(n-1)(3k-5)-(2k-3)+\sqrt{(k-1)^{2}(n-3)^{2}+2(k-1)(3n-5)+1}.\]
Proof.: The Seidel energy of \(S_{n}^{k}\) is, \(SE(S_{n}^{k})=\sum_{i=1}^{(n-1)(k-1)+1}|\lambda_{i}|\).
From Theorem 4.2
\[SE(S_{n}^{k})=|1|(n-1)(k-2)+|3-2k|(n-2)+|r_{1}|+|r_{2}|,\]
where \(r_{1}\) and \(r_{2}\) are the roots of the equation \(\lambda^{2}-((k-1)(n-3)+1)\lambda-(k-1)(n-1)=0\).
We can notice that
\[r_{1} =\frac{(k-1)(n-3)+1+\sqrt{((k-1)(n-3)+1)^{2}+4(k-1)(n-1)}}{2}\geq 0,\] \[r_{2} =\frac{(k-1)(n-3)+1-\sqrt{((k-1)(n-3)+1)^{2}+4(k-1)(n-1)}}{2}\leq 0.\]
Therefore,
\[SE(S_{n}^{k})=(n-1)(3k-5)-(2k-3)+\sqrt{(k-1)^{2}(n-3)^{2}+2(k-1)(3n-5)+1}.\]
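Both Theorem 4.2 and the energy formula above can be verified numerically; a sketch (illustrative only, 0-indexed vertices) for \(S_{4}^{3}\), the hyperstar of Figure 1:

```python
import numpy as np
from itertools import combinations

n, k = 4, 3
N = (n - 1) * (k - 1) + 1              # number of vertices of S_n^k
# vertex 0 is the centre; hyperedge j is vertex 0 plus a block of k-1 leaves
edges = [tuple([0] + list(range(1 + j * (k - 1), 1 + (j + 1) * (k - 1))))
         for j in range(n - 1)]
A = np.zeros((N, N))
for e in edges:
    for a, b in combinations(e, 2):
        A[a, b] += 1
        A[b, a] += 1
S = np.ones((N, N)) - np.eye(N) - 2 * A

mu = np.linalg.eigvalsh(S)
print(np.round(mu, 4))                 # 1 (x3), 3-2k = -3 (x2), r1, r2
c = (k - 1) * (n - 3) + 1
print(np.round(np.roots([1, -c, -(n - 1) * (k - 1)]).real, 4))  # r1, r2
se_formula = ((n - 1) * (3 * k - 5) - (2 * k - 3)
              + np.sqrt((k - 1) ** 2 * (n - 3) ** 2
                        + 2 * (k - 1) * (3 * n - 5) + 1))
print(np.isclose(np.abs(mu).sum(), se_formula))                 # True
```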
For convenience, in the next theorem the Seidel matrix of \(G^{*}\) is denoted by \(S(G^{*})\).
**Theorem 4.4**.: _Let \(G^{*}=(V,E)\) be a hypergraph of order \(n\) and \(v\in V(G^{*})\) be any arbitrary vertex. Then_
\[SE(G^{*})\geq SE(G^{*}-v).\]
Proof.: Let \(\mu_{1}^{\prime}\geq\mu_{2}^{\prime}\geq\mu_{3}^{\prime}\geq\cdots\geq\mu_{t}^{\prime}\), where \(t\leq n-1\) be the positive eigenvalues of \(S(G^{*}-v)\).
Then
\[SE(G^{*}-v)=2\sum_{i=1}^{t}\mu_{i}^{\prime}.\]
The Seidel matrix \(S(G^{*}-v)\) is a principal submatrix of \(S(G^{*})\) of order \(n-1\). By Theorem 2.9 we can find eigenvalues of \(S(G^{*})\), \(\mu_{1}\geq\mu_{2}\geq\mu_{3}\geq\cdots\geq\mu_{t}\) such that
\[\mu_{1}\geq\mu_{1}^{\prime}\geq\mu_{2}\geq\mu_{2}^{\prime}\geq\mu_{3}\geq \cdots\geq\mu_{t}\geq\mu_{t}^{\prime}.\]
Therefore,
\[SE(G^{*})\geq 2\sum_{i=1}^{t}\mu_{i}\geq 2\sum_{i=1}^{t}\mu_{i}^{\prime}=SE(G^{ *}-v).\]
Hence the result.
**Corollary 4.5**.: _Let \(S_{n}^{k}\) be a \(k\)-uniform hyperstar, then_
\[SE(S_{n}^{k})\geq SE(S_{n}^{k-1}).\]
### Main Seidel eigenvalues of hyperstar
The main eigenvalues of a graph have been studied in [9]. In this section, we discuss the number of main Seidel eigenvalues and the largest eigenvalue of a hyperstar.
**Lemma 4.6**.: _Let \(S\) be the Seidel matrix of the hypergraph \(G^{*}\) of order \(n\). Then the rank of the matrix \(\begin{bmatrix}\boldsymbol{j}&S\boldsymbol{j}&S^{2}\boldsymbol{j}&\cdots&S^{n- 1}\boldsymbol{j}\end{bmatrix}\) is equal to the number of main Seidel eigenvalues of \(G^{*}\)._
Proof.: By applying similar arguments as in the proof of Theorem 2.13, we get the desired result.
**Theorem 4.7**.: _The main Seidel eigenvalues of a hyperstar \(S_{n}^{k}\) are \(r_{1}\) and \(r_{2}\) which are the roots of the equation \(\lambda^{2}-((k-1)(n-3)+1)\lambda-(n-1)(k-1)=0\)._
Proof.: From the proof of Theorem 4.2 we obtain \(\boldsymbol{j}^{T}\mathbf{x}^{i}=0\) for all \((n-1)(k-2)\) eigenvectors corresponding to the eigenvalue \(1\) and \(\boldsymbol{j}^{T}\mathbf{z}^{j}=0\) for all \(n-2\) eigenvectors corresponding to the eigenvalue \(3-2k\). Thus, the only possible main Seidel eigenvalues are \(r_{1}\) and \(r_{2}\), where
\[r_{1}=\frac{\left(k-1\right)\left(n-3\right)+1+\sqrt{\left(\left(k-1\right) \left(n-3\right)+1\right)^{2}+4\left(k-1\right)\left(n-1\right)}}{2}\]
and
\[r_{2}=\frac{\left(k-1\right)\left(n-3\right)+1-\sqrt{\left(\left(k-1\right) \left(n-3\right)+1\right)^{2}+4\left(k-1\right)\left(n-1\right)}}{2}.\]
Next we find the \(\operatorname{rank}(\begin{bmatrix}\boldsymbol{j}&S\boldsymbol{j}&S^{2} \boldsymbol{j}&\cdots&S^{n-1}\boldsymbol{j}\end{bmatrix})\). Now we prove that \(\boldsymbol{j}\) and \(S\boldsymbol{j}\) are linearly independent. We can represent the Seidel matrix of \(S_{n}^{k}\) as follows,
\[S=\begin{bmatrix}O_{1\times 1}&-J_{1\times(k-1)}\otimes J_{1\times(n-1)}\\ -J_{(k-1)\times 1}\otimes J_{(n-1)\times 1}&I_{n-1}\otimes B_{k-1}+(J_{n-1}-I_{n-1 })\otimes J_{k-1}\end{bmatrix}\]
where \(B_{n}=I_{n}-J_{n}\).
\[S\boldsymbol{j}=\begin{bmatrix}O_{1\times 1}&-J_{1\times(k-1)(n-1)}\\ -J_{(k-1)(n-1)\times 1}&I_{n-1}\otimes B_{k-1}+(J_{n-1}-I_{n-1})\otimes J_{k-1} \end{bmatrix}\begin{bmatrix}I_{1}\\ J_{(n-1)(k-1)\times 1}\end{bmatrix}\]
\[=\begin{bmatrix}-J_{1\times(k-1)(n-1)}J_{(n-1)(k-1)\times 1}\\ -J_{(k-1)(n-1)\times 1}+(I_{n-1}\otimes B_{k-1}+(J_{n-1}-I_{n-1})\otimes J_{k-1}) \left(J_{(n-1)(k-1)\times 1}\right)\end{bmatrix}.\]
Therefore,
\[S\boldsymbol{j}=\begin{bmatrix}-\left(n-1\right)\left(k-1\right)J_{1}\\ \left(n-3\right)\left(k-1\right)J_{(n-1)(k-1)\times 1}\end{bmatrix}.\]
Thus, \(\boldsymbol{j}\) and \(S\boldsymbol{j}\) are linearly independent, so the rank is at least \(2\); since at most the two eigenvalues \(r_{1}\) and \(r_{2}\) can be main, \(\text{rank}(\begin{bmatrix}\boldsymbol{j}&S\boldsymbol{j}&S^{2}\boldsymbol{j}& \cdots&S^{n-1}\boldsymbol{j}\end{bmatrix})=2\). Therefore, \(r_{1}\) and \(r_{2}\) are the main Seidel eigenvalues of a hyperstar.
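The rank computation of Lemma 4.6 is also easy to check directly; a self-contained sketch (illustrative only) rebuilding the same \(S_{4}^{3}\) Seidel matrix as in the earlier snippet:

```python
import numpy as np
from itertools import combinations

n, k = 4, 3
N = (n - 1) * (k - 1) + 1
edges = [tuple([0] + list(range(1 + j * (k - 1), 1 + (j + 1) * (k - 1))))
         for j in range(n - 1)]
A = np.zeros((N, N))
for e in edges:
    for a, b in combinations(e, 2):
        A[a, b] += 1
        A[b, a] += 1
S = np.ones((N, N)) - np.eye(N) - 2 * A

ones = np.ones((N, 1))
W = np.hstack([np.linalg.matrix_power(S, p) @ ones for p in range(N)])
print(np.linalg.matrix_rank(W))   # 2: only r1 and r2 are main
```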
**Remark 4.8**.: _From Theorem 4.7, we can say that the largest Seidel eigenvalue of a hyperstar is a main Seidel eigenvalue. But the converse need not be true._
## 5 Spectrum of uniform double hyperstar
In this section, we estimate the adjacency spectrum and Seidel spectrum of the uniform double hyperstar.
**Theorem 5.1**.: _Let \(S_{n_{1},n_{2}}\) be a double star of order \(n_{1}+n_{2}\), then the spectrum of \(S_{n_{1},n_{2}}^{k},\ (k\geq 3)\) is given by,_
\[\sigma_{A}(S_{n_{1},n_{2}}^{k})=\begin{pmatrix}-1&k-2&r_{1}&r_{2}&r_{3}&r_{4}& r_{5}\\ (k-2)(n_{1}+n_{2}-1)-1&n_{1}+n_{2}-4&1&1&1&1&1\end{pmatrix}\]
_where \(r_{i},i=1,2,3,4,5\) are the roots of the equation_
\[\lambda^{5}-(-7+3k)\lambda^{4}+(17+3k^{2}+n_{2}+n_{1}-k(14+n_{2}+ n_{1}))\lambda^{3}-(-5(3+n_{2}+n_{1})+k(17+7n_{2}+7n_{1})\\ -k^{2}(7+2n_{2}+2n_{1})+k^{3})\lambda^{2}-(-1+(-7+12k-6k^{2}+k^{3} )n_{1}+(7-5k+k^{2}+(1-k)n_{1})(-1+k)n_{2})\lambda\\ -(-1+k)(-1+n_{2}+n_{1}+(3-4k+k^{2})n_{2}n_{1})=0. \tag{9}\]
Proof.: Let \(A(S_{n_{1}}^{k})\) and \(A(S_{n_{2}}^{k})\) be the adjacency matrices corresponding to \(S_{n_{1}}^{k}\) and \(S_{n_{2}}^{k}\) respectively. Let \(D\) be a \(((n_{1}-1)(k-1)+1)\times((n_{2}-1)(k-1)+1)\) matrix with the first entry equal to \(1\) and all other entries being \(0\), and let \(C_{1},C_{2}\) be matrices of order \(((n_{1}-1)(k-1)+1)\times(k-2)\), \(((n_{2}-1)(k-1)+1)\times(k-2)\) respectively, with the first-row entries equal to \(1\) and all other entries equal to \(0\).
\[A(S_{n_{1},n_{2}}^{k})=\begin{bmatrix}A(S_{n_{1}}^{k})&D&C_{1}\\ D^{T}&A(S_{n_{2}}^{k})&C_{2}\\ C_{1}^{T}&C_{2}^{T}&J_{k-2}-I_{k-2}\end{bmatrix}.\]
Then the characteristic polynomial of \(A(S_{n_{1},n_{2}}^{k})\) is given by,
\[det(A(S_{n_{1},n_{2}}^{k})-\lambda I)=det\begin{pmatrix}A(S_{n_{1}}^{k})- \lambda I&D&C_{1}\\ D^{T}&A(S_{n_{2}}^{k})-\lambda I&C_{2}\\ C_{1}^{T}&C_{2}^{T}&J_{k-2}-(1+\lambda)I_{k-2}\end{pmatrix}.\]
By Lemma 2.6
\[det(A(S_{n_{1},n_{2}}^{k})-\lambda I)=det(P)det(J_{k-2}-(1+\lambda)I_{k-2}), \tag{10}\]
where
\[P=\begin{bmatrix}A(S_{n_{1}}^{k})-\lambda I&D\\ D^{T}&A(S_{n_{2}}^{k})-\lambda I\end{bmatrix}-\begin{bmatrix}C_{1}\\ C_{2}\end{bmatrix}\left(J_{k-2}-(1+\lambda)I_{k-2}\right)^{-1}\begin{bmatrix}C_{ 1}^{T}&C_{2}^{T}\end{bmatrix}. \tag{11}\]
By Lemma 2.10 we get,
\[(J_{k-2}-(1+\lambda)I_{k-2})^{-1}=\frac{-I_{k-2}}{1+\lambda}+\frac{J_{k-2}}{(1+ \lambda)(k-3-\lambda)}.\]
On simplification, we obtain
\[\begin{bmatrix}C_{1}\\ C_{2}\end{bmatrix}(J_{k-2}-(1+\lambda)I_{k-2})^{-1}\begin{bmatrix}C_{1}^{T}&C_{2 }^{T}\end{bmatrix}=\begin{bmatrix}P_{1}&P_{2}\\ P_{3}&P_{4}\end{bmatrix}, \tag{12}\]
where \(P_{1},P_{2},P_{3}\) and \(P_{4}\) are matrices of order \(((n_{1}-1)(k-1)+1)\times((n_{1}-1)(k-1)+1)\), \(((n_{1}-1)(k-1)+1)\times((n_{2}-1)(k-1)+1)\), \(((n_{2}-1)(k-1)+1)\times((n_{1}-1)(k-1)+1)\) and \(((n_{2}-1)(k-1)+1)\times((n_{2}-1)(k-1)+1)\) respectively, with the first entry of the matrix equal to \(p\) and all other entries being zero, where \(p=\dfrac{k-2}{k-3-\lambda}\) is the sum of all entries of \((J_{k-2}-(1+\lambda)I_{k-2})^{-1}\).
From (11) and (12), we obtain
\[P=\begin{bmatrix}A(S_{n_{1}}^{k})-\lambda I-P_{1}&D-P_{2}\\ D^{T}-P_{3}&A(S_{n_{2}}^{k})-\lambda I-P_{4}\end{bmatrix}. \tag{13}\]
Let \(\overline{A(S_{n_{1}}^{k})-\lambda I}\) and \(\overline{A(S_{n_{2}}^{k})-\lambda I}\) be the matrices obtained after deleting the first row and first column of \(A(S_{n_{1}}^{k})-\lambda I\) and \(A(S_{n_{2}}^{k})-\lambda I\) respectively. Then,
\[det(P)=det\left(A(S_{n_{1}}^{k})-\lambda I-P_{1}\right)det\left( A(S_{n_{2}}^{k})-\lambda I-P_{4}\right)\\ -(1-p)^{2}det\left(\overline{A(S_{n_{1}}^{k})-\lambda I}\right) det\left(\overline{A(S_{n_{2}}^{k})-\lambda I}\right). \tag{14}\]
Then, we obtain the following
\[det\left(A(S_{n_{1}}^{k})-\lambda I-P_{1}\right)=det\left(A(S_{n_{1}}^{k})- \lambda I\right)-p\;det\left(\overline{A(S_{n_{1}}^{k})-\lambda I}\right) \tag{15}\]
and
\[det\left(A(S_{n_{2}}^{k})-\lambda I-P_{4}\right)=det\left(A(S_{n_{2}}^{k})- \lambda I\right)-p\;det\left(\overline{A(S_{n_{2}}^{k})-\lambda I}\right). \tag{16}\]
Also,
\[det\left(\overline{A(S_{n_{1}}^{k})-\lambda I}\right) =det\left(I_{n_{1}-1}\otimes(J_{k-1}-(1+\lambda)I_{k-1})\right)\] \[=(-\lambda-1)^{(n_{1}-1)(k-2)}(-\lambda+(k-2))^{n_{1}-1}. \tag{17}\]
Similarly, we have
\[det\left(\overline{A(S_{n_{2}}^{k})-\lambda I}\right)=(-\lambda-1)^{(n_{2}-1) (k-2)}(-\lambda+(k-2))^{n_{2}-1}. \tag{18}\]
From Theorem 1.1, we have
\[det\left(A(S_{n_{i}}^{k})-\lambda I\right)=(-\lambda-1)^{(n_{i}-1)(k-2)}(- \lambda+(k-2))^{n_{i}-2}(\lambda^{2}-(k-2)\lambda-(n_{i}-1)(k-1)),\;i=1,2. \tag{19}\]
From (15),(17) and (19), we obtain
\[det\left(A(S_{n_{1}}^{k})-\lambda I-P_{1}\right)=\left((\lambda ^{2}-(k-2)\lambda-(n_{1}-1)(k-1))-\left(\frac{k-2}{k-3-\lambda}\right)(- \lambda+(k-2))\right)\\ (-\lambda-1)^{(n_{1}-1)(k-2)}(-\lambda+(k-2))^{n_{1}-2}. \tag{20}\]
Similarly,
\[det\left(A(S_{n_{2}}^{k})-\lambda I-P_{4}\right)=\left((\lambda^{2} -(k-2)\lambda-(n_{2}-1)(k-1))-\left(\frac{k-2}{k-3-\lambda}\right)(-\lambda+(k-2 ))\right)\\ (-\lambda-1)^{(n_{2}-1)(k-2)}(-\lambda+(k-2))^{n_{2}-2}. \tag{21}\]
Therefore,
\[det(P)=\bigg{(}\Big{(}(\lambda^{2}-(k-2)\lambda-(n_{1}-1)(k-1))- \left(\frac{k-2}{k-3-\lambda}\right)(-\lambda+(k-2))\Big{)}\\ \Big{(}(\lambda^{2}-(k-2)\lambda-(n_{2}-1)(k-1))-\left(\frac{k-2} {k-3-\lambda}\right)(-\lambda+(k-2))\Big{)}\\ -\left(\frac{-\lambda-1}{k-3-\lambda}\right)^{2}(-\lambda+(k-2)) ^{2}\bigg{)}(-\lambda-1)^{(n_{1}+n_{2}-2)(k-2)}(-\lambda+(k-2))^{n_{1}+n_{2}- 4}. \tag{22}\]
Since \(det(J_{k-2}-(1+\lambda)I_{k-2})=(-1-\lambda)^{k-3}(k-3-\lambda)\), we get
\[det(P)=\bigg{(}\Big{(}(\lambda^{2}-(k-2)\lambda-(n_{1}-1)(k-1))- \left(\frac{k-2}{k-3-\lambda}\right)(-\lambda+(k-2))\Big{)}\\ \Big{(}(\lambda^{2}-(k-2)\lambda-(n_{2}-1)(k-1))-\left(\frac{k-2} {k-3-\lambda}\right)(-\lambda+(k-2))\Big{)}\\ -\left(\frac{-\lambda-1}{k-3-\lambda}\right)^{2}(-\lambda+(k-2) )^{2}\bigg{)}(-\lambda-1)^{(k-2)(n_{1}+n_{2}-1)-1}(-\lambda+(k-2))^{n_{1}+n_{2 }-4}(k-3-\lambda).\]
After simplification we get the desired result.
**Theorem 5.2**.: _Let \(S_{n_{1},n_{2}}\) be a double star on \(n_{1}+n_{2}\) vertices. Then Seidel spectrum of \(S_{n_{1},n_{2}}^{k}\)\((k\geq 3)\) is_
\[\sigma_{s}(S_{n_{1},n_{2}}^{k})=\begin{pmatrix}1&-2k+3&r_{1}&r_{2}&r_{3}&r_{4 }&r_{5}\\ (k-2)(n_{1}+n_{2}-1)-1&n_{1}+n_{2}-4&1&1&1&1&1\end{pmatrix},\]
_where \(r_{1},\ r_{2},\ r_{3},\ r_{4}\) and \(r_{5}\) are the eigenvalues of the quotient matrix \(Q\) of \(S(S_{n_{1},n_{2}}^{k})\)._
Proof.: Let \(E(S_{n_{1}})=\{e_{1},e_{2},e_{3},\cdots,e_{n_{1}-1}\}\) and \(E(S_{n_{2}})=\{e^{\prime}_{1},e^{\prime}_{2},e^{\prime}_{3},\cdots,e^{\prime}_ {n_{2}-1}\}\) be the edge sets of the star graphs \(S_{n_{1}}\) and \(S_{n_{2}}\) respectively. Let \(e_{0}\) be the edge connecting the central vertices of \(S_{n_{1}}\) and \(S_{n_{2}}\). Therefore, \(S_{n_{1},n_{2}}^{k}\) is a \(k\)-uniform hypergraph obtained by adding \(k-2\) vertices to every edge of \(S_{n_{1},n_{2}}\). Then \(E(S_{n_{1},n_{2}}^{k})=\{e_{0}^{k},e_{1}^{k},e_{2}^{k},e_{3}^{k},\cdots,e_{n_{1} -1}^{k},e_{1}^{\prime k},e_{2}^{\prime k},e_{3}^{\prime k},\cdots,e_{n_{2}-1}^{\prime k}\}\). Let \(\{v_{1},v_{2},\cdots,v_{k-1}\}\in e_{1}^{k}\) be the vertices of degree \(1\). For \(2\leq i\leq k-1\) we can construct \(k-2\) linearly independent eigenvectors \(\mathbf{x}^{i}\) as follows,
\[\mathbf{x}^{i}=\begin{cases}(x^{i})_{v_{1}}=-1\\ (x^{i})_{v_{i}}=1\\ (x^{i})_{v_{j}}=0\end{cases}\ \ \ \text{for}\ v_{j}\in V(S_{n_{1},n_{2}}^{k}-\{v_{1},v_{ i}\}).\]
Applying this construction on the hyperedges \(e_{1}^{k},e_{2}^{k},e_{3}^{k},\cdots,e_{n_{1}-1}^{k}\) we obtain \((n_{1}-1)(k-2)\) eigenvectors corresponding to the eigenvalue \(1\). By using a similar construction on the hyperedges \(e_{1}^{\prime k},e_{2}^{\prime k},e_{3}^{\prime k},\cdots,\)\(e_{n_{2}-1}^{\prime k}\) and on the hyperedge \(e_{0}^{k}\) we can find another set of \((n_{2}-1)(k-2)+(k-3)\) eigenvectors associated with the eigenvalue \(1\). Thus we obtain in total \((k-2)(n_{1}+n_{2}-1)-1\) eigenvectors associated with the eigenvalue \(1\).
For \(2\leq j\leq(n_{1}-1)\), let \(\mathbf{z}^{j}\) be an eigenvector corresponding to \(-2k+3\) such that,
\[\mathbf{z}^{j}=\begin{cases}(z^{j})_{v_{i}}=1&,\,i=1,2,\cdots,k-1\\ (z^{j})_{v}=-1&,\,\,v\in e_{j}^{k}\text{ and }d(v)=1\\ 0&,\,\,\text{otherwise},\end{cases}\]
and for \(2\leq j\leq(n_{2}-1)\) let \(\mathbf{z}_{*}^{j}\) be an eigenvector corresponding to \(-2k+3\) such that,
\[\mathbf{z}_{*}^{j}=\begin{cases}(z_{*}^{j})_{v_{i}^{\prime}}=1&,\,v_{i}^{ \prime}\in e_{1}^{\prime k}\text{ and }d(v_{i}^{\prime})=1\\ (z_{*}^{j})_{v^{\prime}}=-1&,\,\,v^{\prime}\in e_{j}^{\prime k}\text{ and }d(v^{\prime})=1\\ 0&,\,\,\text{otherwise}.\end{cases}\]
Therefore, we obtain \(n_{1}+n_{2}-4\) eigenvectors \(\mathbf{z}^{j}\) and \(\mathbf{z}_{*}^{j}\) corresponding to the eigenvalue \(-2k+3\).
Next, we find the remaining eigenvalues. For that, the Seidel matrix \(S(S_{n_{1},n_{2}}^{k})\) is partitioned as follows,
\[\begin{bmatrix}0&-J_{1\times(n_{1}-1)(k-1)}&-1&J_{1\times(n_{2}-1)(k-1)}&-J_{ 1\times(k-2)}\\ -J_{(n_{1}-1)(k-1)\times 1}&I_{(n_{1}-1)(k-1)}+J_{(n_{1}-1)(k-1)}&J_{(n_{1}-1)(k-1) \times 1}&J_{(n_{1}-1)(k-1)\times(n_{2}-1)(k-1)}&J_{(n_{1}-1)(k-1)\times(k-2)}\\ &-2(I_{n_{1}-1}\otimes J_{k-1})&&\\ -1&J_{1\times(n_{1}-1)(k-1)}&0&-J_{1\times(n_{2}-1)(k-1)}&-J_{1\times(k-2)}\\ J_{(n_{2}-1)(k-1)\times 1}&J_{(n_{2}-1)(k-1)\times(n_{1}-1)(k-1)}&-J_{(n_{2}-1)(k-1) \times 1}&I_{(n_{2}-1)(k-1)}+J_{(n_{2}-1)(k-1)}&J_{(n_{2}-1)(k-1)\times(k-2)}\\ &-2(I_{n_{2}-1}\otimes J_{k-1})&&\\ -J_{(k-2)\times 1}&J_{(k-2)\times(n_{1}-1)(k-1)}&-J_{(k-2)\times 1}&J_{(k-2) \times(n_{2}-1)(k-1)}&I_{k-2}-J_{k-2}\end{bmatrix}.\]
Then, the quotient matrix of \(S(S_{n_{1},n_{2}}^{k})\) is given by,
\[Q=\begin{bmatrix}0&-(n_{1}-1)(k-1)&-1&(n_{2}-1)(k-1)&2-k\\ -1&(n_{1}-3)(k-1)+1&1&(n_{2}-1)(k-1)&k-2\\ -1&(n_{1}-1)(k-1)&0&-(n_{2}-1)(k-1)&2-k\\ 1&(n_{1}-1)(k-1)&-1&(n_{2}-3)(k-1)+1&k-2\\ -1&(n_{1}-1)(k-1)&-1&(n_{2}-1)(k-1)&3-k\end{bmatrix}.\]
By Theorem 2.14, the spectrum of \(S(S_{n_{1},n_{2}}^{k})\) contains the spectrum of \(Q\). Thus we have all \((n_{1}+n_{2}-1)(k-1)+1\) eigenvalues.
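As a numerical sanity check (illustrative only, 0-indexed vertices), the following sketch builds \(S(S_{4,5}^{3})\), the uniform double hyperstar of Figure 2, and confirms that the spectrum of the quotient matrix \(Q\) is contained in that of \(S\):

```python
import numpy as np
from itertools import combinations

n1, n2, k = 4, 5, 3
c1, c2 = 0, 1                       # the two central vertices
nxt = 2
edges = []
for n_star, centre in ((n1, c1), (n2, c2)):
    for _ in range(n_star - 1):     # pendant hyperedges of each star
        edges.append((centre, *range(nxt, nxt + k - 1)))
        nxt += k - 1
edges.append((c1, c2, *range(nxt, nxt + k - 2)))  # bridge hyperedge e_0
nxt += k - 2

N = nxt                             # equals (n1+n2-1)(k-1)+1 = 17
A = np.zeros((N, N))
for e in edges:
    for a, b in combinations(e, 2):
        A[a, b] += 1
        A[b, a] += 1
S = np.ones((N, N)) - np.eye(N) - 2 * A
print(np.round(np.linalg.eigvalsh(S), 4))   # 1 (x7), -3 (x5), plus Q's five

Q = np.array([[0, -(n1-1)*(k-1), -1, (n2-1)*(k-1), 2-k],
              [-1, (n1-3)*(k-1)+1, 1, (n2-1)*(k-1), k-2],
              [-1, (n1-1)*(k-1), 0, -(n2-1)*(k-1), 2-k],
              [1, (n1-1)*(k-1), -1, (n2-3)*(k-1)+1, k-2],
              [-1, (n1-1)*(k-1), -1, (n2-1)*(k-1), 3-k]])
print(np.round(np.sort(np.linalg.eigvals(Q).real), 4))
```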
## 6 Spectrum of sunflower hypergraph
In this section, we estimate the adjacency and Seidel eigenvalues of the sunflower hypergraph.
**Theorem 6.1**.: _Let \(S^{k}\) be a \(k\)-uniform sunflower hypergraph. If \(k\geq 2\) is an integer, then the characteristic polynomial of \(S^{k}\) is_
\[P_{A}(\lambda)=(1+\lambda)^{(k-1)(k-2)}\left((2-3k+k^{2})+(6-6k+ k^{2})\lambda+(4-2k)\lambda^{2}+\lambda^{3}\right)\\ \left((-3+2k)+(k-3)\lambda-\lambda^{2}\right)^{k-2}. \tag{23}\]
Proof.: Let \(\mathbf{e}_{i}\in\mathbb{R}^{k}\) be the vector with one in the \(i\)-th coordinate and zero elsewhere. Then, the adjacency matrix of \(S^{k}\) can be written as
\[A(S^{k})=\begin{bmatrix}J_{k}-I_{k}&\mathbf{e}_{2}\otimes J_{1,k-1}&\mathbf{e}_{3}\otimes J_{1,k-1}&\cdots&\mathbf{e}_{k}\otimes J_{1,k-1}\\ \mathbf{e}_{2}^{T}\otimes J_{k-1,1}&J_{k-1}-I_{k-1}&\mathbf{0}_{k-1}&\cdots&\mathbf{0}_{k-1}\\ \mathbf{e}_{3}^{T}\otimes J_{k-1,1}&\mathbf{0}_{k-1}&J_{k-1}-I_{k-1}&\cdots&\mathbf{0}_{k-1}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ \mathbf{e}_{k}^{T}\otimes J_{k-1,1}&\mathbf{0}_{k-1}&\mathbf{0}_{k-1}&\cdots&J_{k-1}-I_{k-1}\end{bmatrix}.\]
Therefore the characteristic polynomial of \(S^{k}\) is given by
\[det(A(S^{k})-\lambda I)=det\begin{pmatrix}J_{k}-(1+\lambda)I_{k}&B\\ B^{T}&I_{k-1}\otimes(J_{k-1}-(1+\lambda)I_{k-1})\end{pmatrix}\]
where \(B=\begin{bmatrix}\mathbf{e}_{2}\otimes J_{1,k-1}&\mathbf{e}_{3}\otimes J_{1,k- 1}&\cdots&\mathbf{e}_{k}\otimes J_{1,k-1}\end{bmatrix}.\) From Lemma 2.6, we obtain
\[det(A(S^{k})-\lambda I)=det(J_{k}-(1+\lambda)I_{k})det((I_{k-1}\otimes(J_{k-1}- (1+\lambda)I_{k-1}))-B^{T}(J_{k}-(1+\lambda)I_{k})^{-1}B). \tag{24}\]
Applying Lemma 2.10, we have
\[(J_{k}-(1+\lambda)I_{k})^{-1}=\frac{-I_{k}}{1+\lambda}+\frac{J_{k}}{(1+\lambda )(k-1-\lambda)}. \tag{25}\]
Then,
\[B^{T}(J_{k}-(1+\lambda)I_{k})^{-1}B =\left(\frac{-I_{k-1}}{1+\lambda}+\frac{J_{k-1}}{(1+\lambda)(k-1- \lambda)}\right)\otimes J_{k-1}\] \[=\frac{-I_{k-1}\otimes J_{k-1}}{1+\lambda}+\frac{J_{(k-1)^{2}}}{ (1+\lambda)(k-1-\lambda)}.\]
Therefore,
\[det((I_{k-1}\otimes(J_{k-1}- (1+\lambda)I_{k-1}))-B^{T}(J_{k}-(1+\lambda)I_{k})^{-1}B)\] \[=det\left(I_{k-1}\otimes J_{k-1}-(1+\lambda)I_{(k-1)^{2}}+\frac{ I_{k-1}\otimes J_{k-1}}{1+\lambda}-\frac{J_{(k-1)^{2}}}{(1+\lambda)(k-1- \lambda)}\right)\] \[=det\left(-(1+\lambda)I_{(k-1)^{2}}+\left(\frac{2+\lambda}{1+ \lambda}\right)I_{k-1}\otimes J_{k-1}-\frac{J_{(k-1)^{2}}}{(1+\lambda)(k-1- \lambda)}\right).\]
Using Lemma 2.11, we get
\[det( (I_{k-1}\otimes(J_{k-1}-(1+\lambda)I_{k-1}))-B^{T}(J_{k}-(1+ \lambda)I_{k})^{-1}B) \tag{26}\] \[=\left(1-\frac{1}{(1+\lambda)(k-1-\lambda)}\chi_{M}(-1-\lambda) \right)det\left((-1-\lambda)I_{(k-1)^{2}}+\frac{2+\lambda}{1+\lambda}I_{k-1} \otimes J_{k-1}\right),\]
where \(M=\left(\frac{-2-\lambda}{1+\lambda}\right)I_{k-1}\otimes J_{k-1}\) and \(\chi_{M}(-1-\lambda)=\frac{(k-1)^{2}(1+\lambda)}{-(1+\lambda)^{2}+(k-1)(2+ \lambda)}\). Since \(\frac{(k-1)(2+\lambda)}{1+\lambda}\) and \(0\) are the eigenvalues of \(\frac{2+\lambda}{1+\lambda}I_{k-1}\otimes J_{k-1}\) with multiplicity \(k-1\) and \((k-1)(k-2)\) respectively, we get
\[det\left((-1-\lambda)I_{(k-1)^{2}}+\frac{2+\lambda}{1+\lambda}I_{k-1}\otimes J _{k-1}\right)=\left((k-1)(2+\lambda)-(1+\lambda)^{2}\right)^{(k-1)}(-1-\lambda )^{(k-1)(k-3)}.\]
From (26) we obtain,
\[det((I_{k-1}\otimes(J_{k-1}-(1+\lambda)I_{k-1}))-B^{T}(J_{k}-(1+\lambda)I_{k})^{-1}B)\\ =\frac{\left(\left(-(1+\lambda)^{2}+(k-1)(2+\lambda)\right)(k-1-\lambda)-(k-1)^{2}\right)\left((k-1)(2+\lambda)-(1+\lambda)^{2}\right)^{k-2}(-1-\lambda)^{(k-1)(k-3)}}{k-1-\lambda}. \tag{27}\]
Clearly \(det(J_{k}-(1+\lambda)I_{k})=(k-1-\lambda)(-1-\lambda)^{k-1}\). Therefore from (27) and (24), we get
\[det(A(S^{k})-\lambda I)=(1+\lambda)^{(k-1)(k-2)}\left((k-1-\lambda)\left((k-1)(2+\lambda)-(1+\lambda)^{2}\right)-(k-1)^{2}\right)\\ \left((k-1)(2+\lambda)-(1+\lambda)^{2}\right)^{k-2}.\]
On simplification, the characteristic polynomial of \(S^{k}\) becomes
\[P_{A(S^{k})}(\lambda)=(1+\lambda)^{(k-1)(k-2)}\left((2-3k+k^{2}) +(6-6k+k^{2})\lambda+(4-2k)\lambda^{2}+\lambda^{3}\right)\\ \left((-3+2k)+(k-3)\lambda-\lambda^{2}\right)^{k-2}.\]
**Corollary 6.2**.: _Let \(S^{k}(k\geq 2)\) be a \(k\)-uniform sunflower hypergraph. Then the spectrum \(\sigma_{A}(S^{k})\) is_
\[\sigma_{A}(S^{k})=\begin{pmatrix}-1&\dfrac{(k-3)+\sqrt{(k+3)(k-1)}}{2}&\dfrac{(k-3)-\sqrt{(k+3)(k-1)}}{2}&r_{1}&r_{2}&r_{3}\\ (k-1)(k-2)&k-2&k-2&1&1&1\end{pmatrix},\]
_where \(r_{i}=\dfrac{2}{3}\left(-2+k+\sqrt{-2+2k+k^{2}}\,\cos\left(\dfrac{\theta+2(i-1)\pi}{3}\right)\right)\), \(i=1,2,3\), and_
\(\theta=\cos^{-1}\left(\dfrac{34-51k+21k^{2}-2k^{3}}{2\sqrt{(-2+2k+k^{2})^{3}} }\right).\)__
Proof.: The characteristic polynomial of \(S^{k}\) is given by,
\[P_{A(S^{k})}(\lambda)=(1+\lambda)^{(k-1)(k-2)}\left((2-3k+k^{2}) +(6-6k+k^{2})\lambda+(4-2k)\lambda^{2}+\lambda^{3}\right)\\ \left((-3+2k)+(k-3)\lambda-\lambda^{2}\right)^{k-2}\]
Clearly, \(\dfrac{(k-3)\pm\sqrt{(k+3)(k-1)}}{2}\) are the roots of the equation \((-3+2k)+(k-3)\lambda-\lambda^{2}=0\). Using the method in [19], we find that \(r_{i}=\dfrac{2}{3}\left(-2+k+\sqrt{-2+2k+k^{2}}\,\cos\left(\dfrac{\theta+2(i-1)\pi}{3}\right)\right)\), \(i=1,2,3\), are the solutions of the equation \((2-3k+k^{2})+(6-6k+k^{2})\lambda+(4-2k)\lambda^{2}+\lambda^{3}=0\), where \(\theta=\cos^{-1}\left(\dfrac{34-51k+21k^{2}-2k^{3}}{2\sqrt{(-2+2k+k^{2})^{3}}}\right).\)
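The closed form above is straightforward to validate numerically. The sketch below assembles \(A(S^{k})\) following the block matrix in the proof of Theorem 6.1 (a core clique of \(k\) vertices, with petal \(p\) attached to core vertex \(p+1\)) and compares its eigenvalues with the stated spectrum; the vertex ordering and the test value \(k=5\) are our own bookkeeping choices.

```python
import numpy as np

def sunflower_adjacency(k):
    """A(S^k): a core clique of k vertices (the seed v_{0,0} plus
    v_{1,1},...,v_{k-1,1}) and k-1 petal cliques of k-1 new vertices,
    petal p attached to core vertex p+1."""
    n = k + (k - 1) ** 2
    A = np.zeros((n, n))
    A[:k, :k] = 1 - np.eye(k)                     # J_k - I_k
    for p in range(k - 1):
        blk = slice(k + p * (k - 1), k + (p + 1) * (k - 1))
        A[blk, blk] = 1 - np.eye(k - 1)           # J_{k-1} - I_{k-1}
        A[p + 1, blk] = A[blk, p + 1] = 1         # e_{p+2} (x) J_{1,k-1}
    return A

k = 5
eig = np.sort(np.linalg.eigvalsh(sunflower_adjacency(k)))
quad = [(k - 3 + s * np.sqrt((k + 3) * (k - 1))) / 2 for s in (1, -1)]
cubic = np.roots([1, 4 - 2*k, 6 - 6*k + k**2, 2 - 3*k + k**2]).real
pred = np.sort(np.concatenate([np.full((k - 1) * (k - 2), -1.0),
                               np.repeat(quad, k - 2), cubic]))
print(np.allclose(eig, pred))  # expect True
```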
**Theorem 6.3**.: _Let \(S^{k}\)\((k\geq 2)\) be a \(k\)-uniform sunflower hypergraph. Then the Seidel spectrum \(\sigma_{S}(S^{k})\) of \(S^{k}\) is_
\[\sigma_{S}(S^{k})=\begin{pmatrix}1&2-k+\sqrt{(k+3)(k-1)}&2-k-\sqrt{(k+3)(k-1)} &r_{1}&r_{2}&r_{3}\\ (k-1)(k-2)&k-2&k-2&1&1&1\end{pmatrix},\]
_where \(r_{1},r_{2}\) and \(r_{3}\) are the roots of the equation \(\lambda^{3}-(6-5k+k^{2})\lambda^{2}-(-17+26k-12k^{2}+2k^{3})\lambda-8+17k-11k^{ 2}+2k^{3}=0.\)_
Proof.: Let \(\eta_{i}\in\mathbb{R}^{k}\) be the vector with \(-1\) in the \(i\)-th coordinate and \(1\) elsewhere. Then the Seidel matrix of the sunflower hypergraph can be expressed as follows,
\[S(S^{k})=\begin{bmatrix}I_{k}-J_{k}&\eta_{2}\otimes J_{1,k-1}&\eta_{3}\otimes J_{1,k-1}&\cdots&\eta_{k}\otimes J_{1,k-1}\\ \eta_{2}^{T}\otimes J_{k-1,1}&I_{k-1}-J_{k-1}&J_{k-1}&\cdots&J_{k-1}\\ \eta_{3}^{T}\otimes J_{k-1,1}&J_{k-1}&I_{k-1}-J_{k-1}&\cdots&J_{k-1}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ \eta_{k}^{T}\otimes J_{k-1,1}&J_{k-1}&J_{k-1}&\cdots&I_{k-1}-J_{k-1}\end{bmatrix}.\]
From Theorem 3.3, the three Seidel eigenvalues of \(S^{k}\) are \(1,\ 2-k+\sqrt{(k+3)(k-1)}\) and \(2-k-\sqrt{(k+3)(k-1)}\). Now, to find the multiplicity of these eigenvalues, we construct the corresponding linearly independent eigenvectors.
For eigenvalue \(1\), we determine \((k-1)(k-2)\) linearly independent eigenvectors \(\mathbf{y}_{j}^{i}=\left(y_{j}^{i}\right)_{v},\ v\in V(S^{k})\) as follows
\[\left(y_{j}^{i}\right)_{v}=\begin{cases}1,&\text{if }v=v_{i,2},\\ -1,&\text{if }v=v_{i,j+1},\\ 0,&\text{otherwise},\end{cases}\]
where \(1\leq i\leq k-1,\ 2\leq j\leq k-1\). Thus \(1\) is an eigenvalue of \(S(S^{k})\) with multiplicity \((k-1)(k-2)\).
Let \(\mathbf{x}^{i}=\left[\mathbf{x}_{1}^{i}\quad\mathbf{x}_{2}^{i}\cdots\mathbf{x}_{k-1}^{i}\right]^{T},\ (2\leq i\leq k-1)\) be the eigenvectors corresponding to the eigenvalue \(2-k+\sqrt{(k+3)(k-1)}\) of \(S^{k}\), where \(\mathbf{x}_{1}^{i}=\left[x_{v_{0,0}}^{i}\quad x_{v_{1,1}}^{i}\quad x_{v_{2,1}}^{i}\quad x_{v_{3,1}}^{i}\cdots x_{v_{k-1,1}}^{i}\right]^{T}\) and \(\mathbf{x}_{j+1}^{i}=\left[x_{v_{j,2}}^{i}\quad x_{v_{j,3}}^{i}\cdots x_{v_{j,k}}^{i}\right]^{T}\), \(1\leq j\leq k-2\). From Lemma 4.1, the \(\mathbf{x}_{j+1}^{i}\)'s are of the form \(cJ_{k-1,1}\), where \(c\) is any constant. Then,
\[\mathbf{x}^{i}=\left[\mathbf{x}_{1}^{i}\quad c_{2}J_{k-1,1}\quad c_{3}J_{k-1,1} \quad\cdots\quad c_{k-1}J_{k-1,1}\right]^{T}.\]
For \(v\in\{v_{0,0},\ v_{1,1},\ v_{2,1},\ v_{3,1},\ldots,v_{k-1,1}\}\), \(2\leq r\leq k-1\)
\[\mathbf{x}_{1}^{i}=(x_{1}^{i})_{v}=\begin{cases}1,&\text{if }v=v_{1,1},\\ -1,&\text{if }v=v_{i,1},\\ 0,&\text{otherwise},\end{cases}\quad\text{and}\ \ c_{r}=\begin{cases}\frac{1}{2}\left(1-\sqrt{\frac{k+3}{k-1}}\right),&\text{if }r=2,\\ -\frac{1}{2}\left(1-\sqrt{\frac{k+3}{k-1}}\right),&\text{if }r=i+1,\\ 0,&\text{otherwise}.\end{cases}\]
Therefore, we obtain a family of \(k-2\) linearly independent eigenvectors associated with the eigenvalue \(2-k+\sqrt{(k+3)(k-1)}\). Hence \(2-k+\sqrt{(k+3)(k-1)}\) is an eigenvalue of multiplicity \(k-2\).
Similarly, we can determine a set of linearly independent eigenvectors \(\mathbf{z}^{i}\) (\(2\leq i\leq k-2\)) associated with an eigenvalue \(2-k-\sqrt{(k+3)(k-1)}\) as follows,
\[\mathbf{z}^{i}=\left[\mathbf{z}_{1}^{i}\quad c_{2}J_{k-1,1}\quad c_{3}J_{k-1,1} \quad\cdots\quad c_{k-1}J_{k-1,1}\right]^{T}.\]
For \(v\in\{v_{0,0},\ v_{1,1},\ v_{2,1},\ v_{3,1},\ldots,v_{k-1,1}\}\), \(2\leq r\leq k-1\)
\[\mathbf{z}_{1}^{i}=(z_{1}^{i})_{v}=\begin{cases}1,&\text{if }v=v_{1,1},\\ 1,&\text{if }v=v_{i,1},\\ -2,&\text{if }v=v_{i+1,1},\\ 0,&\text{otherwise},\end{cases}\quad\text{and }\ c_{r}=\begin{cases}\frac{1}{2}(1+\sqrt{\frac{k+3}{k-1}}),& \text{if }r=2,\\ -(1+\sqrt{\frac{k+3}{k-1}}),&\text{if }r=i+2,\\ 0,&\text{otherwise}\end{cases}\]
and
\[\mathbf{z}_{1}^{k-1}=(z_{1}^{k-1})_{v}=\begin{cases}1,&\text{if }v=v_{1,1},\\ -2,&\text{if }v=v_{2,1},\\ 1,&\text{if }v=v_{k-1,1},\\ 0,&\text{otherwise}.\end{cases}\quad\text{and }\ c_{r}=\begin{cases}\frac{1}{2}(1+\sqrt{\frac{k+3}{k-1}}),& \text{if }r=2,\\ -(1+\sqrt{\frac{k+3}{k-1}}),&\text{if }r=3,\\ \frac{1}{2}(1+\sqrt{\frac{k+3}{k-1}}),&\text{if }r=k-1,\\ 0,&\text{otherwise}.\end{cases}\]
Since the eigenvectors are linearly independent, \(2-k-\sqrt{(k+3)(k-1)}\) is an eigenvalue of multiplicity \(k-2\). The remaining eigenvalues of \(S(S^{k})\) are those of its quotient matrix \(Q\),
\[Q=\begin{bmatrix}0&1-k&(k-1)^{2}\\ -1&2-k&(k-1)(k-3)\\ 1&k-3&(k-2)^{2}\end{bmatrix}.\]
Thus the characteristic equation of \(Q\) is given by,
\[\lambda^{3}-(6-5k+k^{2})\lambda^{2}-(-17+26k-12k^{2}+2k^{3})\lambda-8+17k-11k^ {2}+2k^{3}=0\]
Hence, the theorem follows.
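Theorem 6.3 admits the same kind of numerical check. The sketch below forms the Seidel matrix as \(S=J-I-2A\), which reproduces the block form above, and compares its eigenvalues with the stated Seidel spectrum; the construction of \(A(S^{k})\) repeats the earlier sketch and \(k=5\) is again an arbitrary test value.

```python
import numpy as np

def sunflower_adjacency(k):
    # same construction as in the adjacency check above
    n = k + (k - 1) ** 2
    A = np.zeros((n, n))
    A[:k, :k] = 1 - np.eye(k)
    for p in range(k - 1):
        blk = slice(k + p * (k - 1), k + (p + 1) * (k - 1))
        A[blk, blk] = 1 - np.eye(k - 1)
        A[p + 1, blk] = A[blk, p + 1] = 1
    return A

k = 5
A = sunflower_adjacency(k)
n = A.shape[0]
S = np.ones((n, n)) - np.eye(n) - 2 * A            # Seidel matrix: J - I - 2A
eig = np.sort(np.linalg.eigvalsh(S))
root = np.sqrt((k + 3) * (k - 1))
cubic = np.roots([1, -(6 - 5*k + k**2),
                  17 - 26*k + 12*k**2 - 2*k**3,
                  -8 + 17*k - 11*k**2 + 2*k**3]).real
pred = np.sort(np.concatenate([np.ones((k - 1) * (k - 2)),
                               np.full(k - 2, 2 - k + root),
                               np.full(k - 2, 2 - k - root),
                               cubic]))
print(np.allclose(eig, pred))  # expect True
```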
## 7 Conclusion
In this paper, we determine the relation between the characteristic polynomials of the Seidel and adjacency matrices of a hypergraph. In addition, we obtain the Seidel spectrum and the number of walks of length \(l\) of a \((k,r)\)-regular hypergraph. We also discuss the Seidel spectrum, Seidel energy and main Seidel eigenvalues of the hyperstar. Using the adjacency matrix of the hyperstar, we determine the adjacency and Seidel spectra of the uniform double hyperstar. Moreover, we estimate the adjacency and Seidel spectra of the sunflower hypergraph.
## 8 Declarations
On behalf of all authors, the corresponding author states that there is no conflict of interest.
|
2305.20016 | Detecting and Characterizing Mg II absorption in DESI Survey Validation
Quasar Spectra | We present findings of the detection of Magnesium II (Mg II, {\lambda} =
2796, 2803 {\AA}) absorbers from the early data release of the Dark Energy
Spectroscopic Instrument (DESI). DESI is projected to obtain spectroscopy of
approximately 3 million quasars (QSOs), of which over 99% are anticipated to be
at redshifts greater than z > 0.3, such that DESI would be able to observe an
associated or intervening Mg II absorber illuminated by the background QSO. We
have developed an autonomous supplementary spectral pipeline that detects these
systems through an initial line-fitting process and then confirms the line
properties using a Markov Chain Monte Carlo sampler. Based upon a visual
inspection of the resulting systems, we estimate that this sample has a purity
greater than 99%. We have also investigated the completeness of our sample in
regard to both the signal-to-noise properties of the input spectra and the
rest-frame equivalent width (W0) of the absorber systems. From a parent catalog
containing 83,207 quasars, we detect a total of 23,921 Mg II absorption systems
following a series of quality cuts. Extrapolating from this occurrence rate of
28.8% implies a catalog at the completion of the five-year DESI survey that
will contain over eight hundred thousand Mg II absorbers. The cataloging of
these systems will enable significant further research because they carry
information regarding circumgalactic medium environments, the distribution of
intervening galaxies, and the growth of metallicity across the redshift range
0.3 < z < 2.5. | Lucas Napolitano, Agnesh Pandey, Adam D. Myers, Ting-Wen Lan, Abhijeet Anand, Jessica Aguilar, Steven Ahlen, David M. Alexander, David Brooks, Rebecca Canning, Chiara Circosta, Axel De La Macorra, Peter Doel, Sarah Eftekharzadeh, Victoria A. Fawcett, Andreu Font-Ribera, Juan Garcia-Bellido, Satya Gontcho A Gontcho, L. Le Guillou, Julien Guy, Klaus Honscheid, Stephanie Juneau, T. Kisner, Martin Landriau, Aaron M. Meisner, Ramon Miquel, J. Moustakas, Will J. Percival, J. Xavier Prochaska, Michael Schubnell, Gregory Tarle, B. A. Weaver, Benjamin Weiner, Zhimin Zhou, Hu Zou, Siwei Zou | 2023-05-31T16:45:51Z | http://arxiv.org/abs/2305.20016v3 | # Detecting and Characterizing Mg ii absorption in DESI Survey Validation Quasar Spectra
###### Abstract
We present findings on the detection of Magnesium II (Mg ii, \(\lambda\) = 2796, 2803A) absorbers in data from the Early Data Release (EDR) of the Dark Energy Spectroscopic Instrument (DESI). DESI is projected to obtain spectroscopy of approximately 3 million quasars (QSOs), of which over 99% are anticipated to be at redshifts greater than z \(>\) 0.3, such that DESI would be able to observe an associated or intervening Mg ii absorber illuminated by the background QSO. We have developed an autonomous supplementary spectral pipeline that detects such systems through an initial line-fitting process and then confirms line properties using a Markov Chain Monte Carlo (MCMC) sampler. Based upon both a visual inspection and the reanalysis of coadded observations, we estimate this sample to have a completeness of 82.6% and purity of 99.1%. We determine our completeness by re-searching
for detected Mg ii absorbers in coadded data with fewer observations (and therefore lower signal-to-noise). From a parent catalog containing 83,207 quasars, we detect a total of 23,921 Mg ii absorption systems following a series of quality cuts. Extrapolating from this occurrence rate of 28.8% implies a catalog at the completion of the five-year DESI survey that contains over eight hundred thousand Mg ii absorbers. The cataloging of these systems will enable significant further research as they carry information regarding circumgalactic medium (CGM) environments, the distribution of intervening galaxies, and the growth of metallicity across the redshift range \(0.3\leq z<2.5\).
Catalogs (205), Sky surveys (1464), Cosmology (343), Large-scale structure of the universe (902), AGN host galaxies (2017), Galaxies (573), Galaxy distances (590), Astronomy data analysis (1858), Computational astronomy (293), Quasars (1319), Metal line absorbers (1032), Intergalactic medium (813), Galaxy evolution (594), Quasar absorption line spectroscopy (1317)
## 1 Introduction
In the years since the discovery of the first quasars (e.g. Matthews and Sandage, 1963; Schmidt, 1963), they have become important cosmological tracers, helping to map the underlying mass distribution and history of structure formation across cosmic time (e.g. Croom et al., 2005; Springel et al., 2005; Shen et al., 2007; Ross et al., 2009; Shen et al., 2009; White et al., 2012; Eftekharzadeh et al., 2015; Zarrouk et al., 2018; Neveux et al., 2020). Because of their cosmological significance, the number of observed quasars has increased at an extremely rapid rate. The Sloan Digital Sky Survey (SDSS) has been responsible for the bulk of these observations, having observed more than 750,000 of the approximately one million known quasars (e.g. Schneider et al., 2010; Paris et al., 2017; Lyke et al., 2020).
The Dark Energy Spectroscopic Instrument (DESI) began its main survey of the sky in May 2021. The DESI survey represents a significant improvement over the SDSS in terms of both raw data observed as well as data quality (DESI Collaboration et al., 2016). DESI will be operating on a 4-meter telescope compared to the 2.5-meter SDSS telescope, which constitutes a roughly 250% increase in light collecting area (DESI Collaboration et al., 2016). DESI is also \(\sim\)160% more efficient at passing light from its telescope to its spectrographs (DESI Collaboration et al., 2016). Additionally, the DESI spectrographs have a higher resolution than SDSS allowing \(\sim 1.8\times\) as much light to be collected in each spectrum. Over a five-year mission, DESI will observe approximately 3 million quasars -- a sample several times larger than any existing catalog (Chaussidon et al., 2022).
DESI uses a combination of three optical bands (_g,r,z_) as well as _WISE W1_ and _W2_ band infrared photometry to select quasars based upon their infrared excess. The main quasar selection, detailed in Chaussidon et al. (2022), results in a selection of more than 200 deg\({}^{-2}\) quasars in the magnitude range \(16.5<r<23\). The input quasar catalog used in this study, which was informed by visual inspections of quasar spectra (Alexander et al., 2023) and is further detailed in SS2.2, has a purity greater than 99% and a median redshift of \(z=1.72\), with 68% of quasars having redshifts between \(1.07<z<2.46\) (Chaussidon et al., 2022).
Many quasars have been observed to have metal absorption systems at redshifts distinct from their emission. It was first proposed by Wagoner et al. (1967) and Bahcall and Spitzer (1969) that these absorption systems may be caused by gas excitation in the extended halos, or circumgalactic medium (CGM), of intervening galaxies (see Tumlinson et al., 2017 for an overarching review of the study of the CGM). The host galaxies of these absorption systems are frequently too dim to be otherwise observed (e.g. Frank et al., 2012; Corlies and Schiminovich, 2016; Corlies et al., 2020; Wijers and Schaye, 2022). As such, these metal line absorbers are an unbiased tool with which to examine the evolution of galaxies and the circumgalactic medium.
Pertinent studies include the physical properties and kinematics of outflowing gas (e.g. Prochaska et al., 2004; Nestor et al., 2011; Bordoloi et al., 2011; Bouche et al., 2012; Kacprzak et al., 2012; Lan and Mo, 2018), as well as the covering fractions and relative metallicities of the absorbing material (e.g. Steidel et al., 1994; Aracil et al., 2004; Chen et al., 2010; Lan, 2020). Additionally, absorbers can themselves be used as mass tracers; where previous studies such as Perez-Rafols et al. (2015) have used the cross-correlation of absorbers, DESI may provide a sample large enough to investigate the auto-correlation of metal line absorbers. This could be of particular interest at redshifts between \(1.5<z<2.2\), where the only readily accessible tracers for Baryon Acoustic Oscillation measurements are quasars, which suffer from large redshift uncertainties, whereas the redshifts of metal line absorbers can be determined with high precision.
Although a range of absorption species are commonly found in DESI spectra, including Fe ii, Al iii, C iv, and Si iv, the analysis in this paper will focus on Mg ii, as it produces a distinct doublet shape and is easily detected in quasar spectra at most rest-frame wavelengths between Mg ii emission near 2800 Å and Ly\(\alpha\) emission near 1215 Å. It becomes significantly more challenging to reliably detect metal absorption systems blueward of Ly\(\alpha\) emission, as they will often blend with lines in the Ly\(\alpha\) Forest (e.g. Kim et al., 2007).
As there is such a wealth of information present in Mg II systems, numerous catalogs of Mg II absorbers have been constructed dating back over forty years. These include those based on early, purpose-built surveys with samples of dozens to hundreds of systems (e.g. Lanzetta et al., 1987; Tytler et al., 1987; Sargent et al., 1988; Caulet, 1989; Steidel and Sargent, 1992; Churchill et al., 1999), to high-resolution surveys with large telescopes (e.g. Nielsen et al., 2013; Chen et al., 2017), to catalogs based on the large number of quasars available from the SDSS (e.g. Nestor et al., 2005; York et al., 2006; Prochter et al., 2006; Lundgren et al., 2009; Quider et al., 2011; Zhu and Menard, 2013; Seyffert et al., 2013; Raghunathan et al., 2016; Zou et al., 2021; Anand et al., 2021).
DESI will observe a far higher number of quasars than any previous survey, and will also obtain spectra at a higher resolution than the SDSS. DESI will therefore have the capability to produce catalogs of absorbers that are significantly larger than for any prior campaign. This will, in turn, enable more precise analyses of clustering, of the properties of the CGM (Zou et al., 2023), and of the evolution of metallicity than has ever previously been possible. Our goal in this paper is to begin to produce and characterize such a DESI absorber catalog, with an initial focus on Mg II systems.
This paper will be organized in the following way: In SS2 we will describe our techniques for detecting absorption systems, as well as our methods for determining the purity and completeness of the absorber sample. In SS3 we will present our results, including the redshift distribution of detected systems and their rest-frame equivalent widths. In SS4 we will discuss some possible applications of our absorber catalog -- in particular as a novel check on DESI pipeline redshifts and as a marker that can be used to locate other species in absorption. We present our conclusions in SS5.
## 2 Data and Methods
In this section we will describe the nature of DESI data and the construction of our catalog. We will then discuss how we have estimated the purity and completeness of our sample through the use of both a visual inspection and the reanalysis of individual observations.
### DESI Data Construction
DESI spectra are observed using three spectrographs, commonly referred to as "b", "r", and "z" due to their wavelength coverage, that together span a wavelength region of 3600 Å to 9824 Å. The resolution \(R=\lambda/\Delta\lambda\) of these cameras varies between 2000 and 5500, increasing with wavelength (DESI Collaboration et al., 2016). The flux values associated with a DESI observation are extracted using a linear wavelength grid in 0.8 Å wavelength steps (Guy et al., 2023).
DESI will, over the course of its survey, re-observe targets to improve the quality of its data. In particular, quasars at redshifts greater than 2.1 will be observed multiple times to improve signal in the Lyman-\(\alpha\) Forest ([7]). In order to perform our search for absorbers in the most robust fashion we chose to use spectra that coadd all observations of a given target. We will refer to these spectra as being "healpix coadded" as this is how they are grouped within the DESI spectral reduction file structure (Gorski et al., 2005). We will also make use of the spectra that coadd a subset of observations, i.e. all observations from a single night, in order to determine the completeness of our sample (see SS2.4).
In this paper, we will use data from DESI's Early Data Release (EDR). These data are separated into three stages of survey validation, which we refer to as "sv1", "sv2", and "sv3". These surveys are distinct from each other both in time frame and targeting implementation -- these stages are described in Myers et al. (2023) and DESI Collaboration et al. (2023). Observations of the same target made during different surveys are not coadded, and as such it is possible that a target could be present in multiple surveys. In such cases we consider the results of our absorber search only for the healpix coadded spectrum from the survey which has the highest squared template signal-to-noise (TSNR2; see section 4.14 of Guy et al., 2023 for a full description of this statistic).
TSNR2 is calculated for different target classes (i.e. emission line galaxies, luminous red galaxies, quasars) according to their expected spectral properties and redshift distribution. This results in a more informed statistic that better weights relevant spectral features such as emission lines and the Lyman-\(\alpha\) Forest. Notably, TSNR2 values for different target classes cannot be fairly compared, so when we refer to TSNR2 generically throughout this paper we will be referring to the TSNR2_QSO statistic.
### Pipeline Construction
Our analysis relies on parent quasar catalogs generated via three tools: Redrock (RR) (Bailey et al., 2023)1, a PCA-based template classifier that is part of the main DESI spectroscopic pipeline (Guy et al., 2023); QuasarNet (QN) (Busca and Balland, 2018)2, a neural network based quasar classifier (see also Farr et al., 2020); and an Mg ii-emission-based code designed to identify AGN-like spectra that show both strong galactic emission features and broad Mg ii emission (Chaussidon et al., 2022).
Footnote 1: [https://github.com/desihub/redrock](https://github.com/desihub/redrock)
Footnote 2: [https://github.com/ngbusca/QuasarNET](https://github.com/ngbusca/QuasarNET)
Initial spectral types (QSO or non-QSO for our purposes) as well as initial redshifts are determined by RR. QN and the Mg ii-emission code are then run as afterburners. The outputs of the two afterburners can result either in RR being re-run with adjusted redshift priors, or in the case of the Mg ii-emission code the spectral type being changed to QSO when a broad Mg ii-emission line is detected. Notably, the redshift values are always ultimately determined by RR. A more complete overview of the application of these tools to construct quasar catalogs, as well as the verification of the completeness and purity of this approach can be found in Chaussidon et al. (2022). Note that we consider only those spectra that were initially targeted as quasars, and observed during 'dark' time observations, in the interest of high purity. See Myers et al. (2023) for a discussion of the distinction between DESI's bright and dark time programs.
We search these spectra for absorption doublets by first applying a Gaussian smoothing kernel, as described in the Astropy documentation (Astropy Collaboration et al., 2022). Next we estimate a continuum from the flux values of the spectra using a combination of median filters. Specifically we choose to weight the combination of a nineteen and thirty-nine pixel filter such that the contribution of the narrower, nineteen pixel filter, is strongest at low wavelengths, and decreases across the wavelength space, whereas the opposite is true for the thirty-nine pixel filter which contributes to the estimated continuum value most strongly at high wavelengths.
These pixel values were informed by a preliminary set of Mg ii absorbers detected using more rudimentary methods. The precise values have been chosen to ensure that the two absorption lines of the Mg ii doublet are cleanly separated into individual absorption lines by the estimated continuum. We find that this combination reliably models the broad emission features observed in DESI quasar spectra, but not any narrow absorption features that may be present. The choice to effectively broaden the filter at high wavelengths accounts for the broadening of Mg ii systems due to redshift.
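A minimal sketch of the smoothing and continuum estimation is given below. The Gaussian kernel width and the linear blending ramp between the two median filters are illustrative assumptions; the text fixes only the 19- and 39-pixel widths and the direction in which their weights vary.

```python
import numpy as np
from scipy.ndimage import median_filter
from astropy.convolution import Gaussian1DKernel, convolve

def smooth_and_continuum(flux, kernel_stddev=2.0):
    """Return (smoothed flux, estimated continuum, residual).

    kernel_stddev and the linear weight ramp are illustrative choices;
    the residual is taken as continuum minus smoothed flux so that
    absorption features appear as positive excursions (cf. Eq. 1)."""
    smoothed = convolve(flux, Gaussian1DKernel(kernel_stddev))
    m19 = median_filter(flux, size=19)    # narrow filter, dominant in the blue
    m39 = median_filter(flux, size=39)    # wide filter, dominant in the red
    w = np.linspace(1.0, 0.0, flux.size)  # assumed blue-to-red linear ramp
    continuum = w * m19 + (1.0 - w) * m39
    return smoothed, continuum, continuum - smoothed
```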
An example of both the smoothing process and the estimated continuum can be seen in the central panel of Figure 1. The bottom panel of Figure 1 demonstrates that emission features are not retained in a residual obtained by subtracting the Gaussian-smoothed data from the median-filter-estimated continuum, while any absorption features remain as positive features in the residual. Also evident is the increased residual noise beyond our search limit in the Lyman-\(\alpha\) forest, which makes clear why it is difficult to resolve metal lines in this region.

Figure 1: A visualization of the doublet-detection step of our pipeline. Detected Mg ii systems are shown in the gray outlined boxes and the search limit of our approach is shown by the vertical black line. _Top_: A sample spectrum from DESI that features 7 separate Mg ii absorption systems. Shown are the flux and error spectrum coadded at the boundaries of the three DESI spectrographs. _Middle_: The same spectrum now shown with an applied Gaussian smoothing kernel (blue) and estimated median-filter continuum (green). _Bottom_: The residual obtained by subtracting the Gaussian-smoothed data from the median-filter-estimated continuum. Note that the seven Mg ii absorption systems appear as positive lines.
In order to detect doublets we find every group of consecutive positive residuals and calculate a signal-to-noise ratio as:
\[SNR=\frac{\sum_{p1}^{p2}C-F}{(\sum_{p1}^{p2}\sigma^{2})^{1/2}} \tag{1}\]
where \(C\) and \(F\) are the continuum and flux values respectively, \(\sigma^{2}\) is the variance of the spectrum, and \(p1\) and \(p2\) are the first and last indices of a particular group of consecutive positive residuals. If two absorption lines are found that have SNR values greater than 2.5 and 1.5 respectively, and a rest-frame wavelength separation of 7.1772 \(\pm\) 1.5 Å, where 7.1772 Å is the laboratory separation of Mg ii (e.g. Pickering et al., 1998), this doublet is regarded as a likely candidate.
A similar detection method was used in Raghunathan et al. (2016); however, we find that our approach improves the detection of relatively low-signal absorption systems in high-signal QSO spectra. Note that our rest-frame line separation uncertainty value, 1.5 Å, has been chosen in order to consider as many candidate absorbers as possible without encompassing the rest-frame separation of Si iv, another absorber doublet that is commonly strong in QSO spectra.
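Operationally, the detection step amounts to grouping consecutive positive residual pixels, scoring each group with Eq. 1, and pairing groups whose separation matches the Mg ii doublet. The sketch below assumes an inverse-variance array and, for brevity, pairs only adjacent groups; the function name and bookkeeping are ours, not the production pipeline's.

```python
import numpy as np

MGII_BLUE = 2796.3543  # rest-frame wavelength of the leading line, in Angstroms

def doublet_candidates(wave, flux, continuum, ivar):
    """Return candidate Mg ii redshifts from a residual spectrum (Eq. 1)."""
    resid = continuum - flux                     # absorption -> positive residual
    var = np.where(ivar > 0, 1.0 / ivar, np.inf)
    groups = []                                  # (centroid wavelength, SNR)
    i = 0
    while i < resid.size:
        if resid[i] > 0:
            j = i
            while j < resid.size and resid[j] > 0:
                j += 1
            snr = resid[i:j].sum() / np.sqrt(var[i:j].sum())
            groups.append((np.average(wave[i:j], weights=resid[i:j]), snr))
            i = j
        else:
            i += 1
    candidates = []
    for (w1, s1), (w2, s2) in zip(groups, groups[1:]):
        if s1 > 2.5 and s2 > 1.5:                # per-line SNR thresholds
            z = w1 / MGII_BLUE - 1.0
            if abs((w2 - w1) / (1.0 + z) - 7.1772) < 1.5:
                candidates.append(z)
    return candidates
```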
To further verify these systems, we next perform an MCMC analysis using the emcee software (Foreman-Mackey et al., 2013). The decision to use MCMC to fit the relatively simple model of a doublet absorption line was made in the interest of fully understanding the posterior distributions of our parameters, as well as increasing the likelihood of recovering low-signal absorbers.
In order to ensure that the full signal of the absorber is recovered, we use a different continuum in fitting the systems than the one previously described and used in the initial detection step. The continuum used in detection is designed to ensure that the individual lines of the Mg ii doublet are detected separately; in the case of particularly strong or broad absorbers, this can result in some of the absorption signal being lost in the residual. When fitting the absorber, we instead construct a continuum that ensures the full signal is retained.
We first attempt to calculate an appropriate QSO continuum using the NonnegMFPy tool as implemented in Zhu and Menard (2013) and Anand et al. (2022). NonnegMFPy utilizes nonnegative matrix factorization (NMF, see Lee and Seung, 1999) to determine a basis set of eigen-spectra and through their reconstruction estimate an observed quasar continuum.
In cases where the NMF tool is unable to estimate a continuum, due either to an inability to converge or to the chi-squared value of the estimated continuum being greater than 4 (approximately thirty-two percent of DESI EDR QSOs), we estimate a secondary continuum using a wide, 85-pixel median filter. In cases where this median-filter continuum provides a spectral fit with a lower chi-squared value, we use it instead. The width of this median filter is informed by the previously referenced preliminary sample of detected Mg ii absorbers and ensures that, for even the broadest absorption systems, no signal is lost in estimating the continuum.
For each absorber candidate we consider a region of 80 pixels, or 64 Å, around the detected doublet; this value allows the sampler to explore a region of redshift space which will ultimately be much larger than the redshift uncertainty for a high quality fit, while simultaneously allowing for the detection of multiple Mg ii systems in a single spectrum. We then fit a five-parameter model of the form:
\[F=A_{1}\exp\frac{-[\lambda-C_{1}]^{2}}{2\sigma_{1}^{2}}+A_{2}\exp\frac{-[ \lambda-C_{2}]^{2}}{2\sigma_{2}^{2}} \tag{2}\]
where the two Gaussian line profiles are defined by their center \(C\), width \(\sigma\) and amplitude \(A\). Note that \(C_{1}\) and \(C_{2}\) are both set by the same underlying redshift parameter, i.e. \(C_{1}=(z+1)\times 2796.3543\) Å.
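A direct transcription of Eq. 2, with the shared redshift parameter made explicit, might look as follows; the 2803.5315 Å value is simply 2796.3543 Å plus the quoted 7.1772 Å doublet separation.

```python
import numpy as np

MGII = (2796.3543, 2803.5315)  # rest wavelengths in Angstroms

def doublet_model(wave, z, amp1, amp2, sig1, sig2):
    """Eq. 2: two Gaussian profiles whose centers share one redshift."""
    c1, c2 = (1.0 + z) * MGII[0], (1.0 + z) * MGII[1]
    return (amp1 * np.exp(-0.5 * ((wave - c1) / sig1) ** 2) +
            amp2 * np.exp(-0.5 * ((wave - c2) / sig2) ** 2))
```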
The only prior attached to this model is the redshift range implied by the 80-pixel region around the suspected doublet. Initial values for our parameters are informed by the results of the detection step, with the minimum value in the group of residuals informing the line amplitude and the number of consecutive negative pixels informing the standard deviation.
We use 32 walkers and run the model for 15,000 steps. We then discard the first 1000 steps as a burn-in period and store the remaining 14,000 steps for each candidate MCMC feature. Finally, we select only those models which have high mean acceptance fractions (\(>0.45\)) and estimated integrated autocorrelation times3 that are less than 1 per cent of the chain length, indicating that the majority of proposed steps were accepted and that the model was well fit by the MCMC process.
Footnote 3: See [https://emcee.readthedocs.io/en/stable/tutorials/autocorr/](https://emcee.readthedocs.io/en/stable/tutorials/autocorr/)
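A sketch of the sampling step with emcee follows, using the settings quoted in the text (32 walkers, 15,000 steps, a 1,000-step burn-in, and the acceptance-fraction and autocorrelation-time checks). The flat redshift prior and the Gaussian likelihood on the continuum-subtracted flux are our assumptions, and `doublet_model` refers to the function sketched above.

```python
import numpy as np
import emcee

def fit_doublet(wave, flux_minus_cont, sigma, z_lo, z_hi, p0):
    """Fit Eq. 2 to a continuum-subtracted window around a candidate."""
    def log_prob(theta):
        z = theta[0]
        if not (z_lo < z < z_hi):   # flat prior from the 80-pixel window
            return -np.inf
        model = doublet_model(wave, *theta)  # defined in the sketch above
        return -0.5 * np.sum(((flux_minus_cont - model) / sigma) ** 2)

    nwalkers, nsteps = 32, 15000
    start = p0 + 1e-4 * np.random.randn(nwalkers, len(p0))
    sampler = emcee.EnsembleSampler(nwalkers, len(p0), log_prob)
    sampler.run_mcmc(start, nsteps)
    chain = sampler.get_chain(discard=1000, flat=True)  # drop the burn-in
    accepted = sampler.acceptance_fraction.mean() > 0.45
    tau_ok = np.all(sampler.get_autocorr_time(tol=0) < 0.01 * nsteps)
    return chain, accepted and tau_ok
```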
After running this pipeline on our parent sample of 83,207 DESI QSOs we find a total of 29,797 systems in
18,219 individual healpix coadded spectra that meet our criteria.
In this sample there are a small number of entries with Mg ii absorption redshifts greater than the background quasar redshift, which may initially seem to suggest that the absorber is more distant than the quasar.4 In analyzing the physical interpretation of this scenario, it is standard to work in velocity rather than redshift space, and to define a velocity offset as:
Footnote 4: Note that quasar redshifts are commonly determined using broad emission features, which naturally have a higher uncertainty than the redshifts determined using narrow absorption features.
\[v_{\rm off}=c\frac{z_{\rm MgII}-z_{\rm QSO}}{1+z_{\rm QSO}} \tag{3}\]
Velocity offset values within approximately \(\pm 6000\) km s\({}^{-1}\) are indicative of an associated absorption system, wherein the QSO emission and metal line absorption arise from the same galaxy, or galaxy cluster (Shen & Menard, 2012). However, in systems with larger velocity offset values it must be true that one of the redshifts is poorly determined -- as it is physically impossible for a system that is absorbing the light from a quasar to lie _behind_ that quasar.
From a brief visual inspection of these systems we find a number of true Mg ii systems in quasar spectra with incorrect redshifts. Additionally, we find a number of false Mg ii systems; these are often detected in star or galaxy spectra that have been misidentified as quasars, or spectra with unusual error features. We therefore decide to group those systems with velocity offsets greater than \(5000\) km s\({}^{-1}\) into a separate catalog of physically impossible absorbers for the purpose of diagnosing spectra that have been misclassified as quasars, or assigned an incorrect redshift. We will comment on this separate catalog further in SS4.1. Removing the 374 entries with \(v_{\rm off}>5000\) km s\({}^{-1}\) results in a preliminary sample of 29,423 suspected Mg ii absorbers.
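Eq. 3 and the 5000 km s\({}^{-1}\) screen translate directly into a pair of helper functions; both names below are hypothetical.

```python
C_KMS = 299792.458  # speed of light in km/s

def velocity_offset(z_mgii, z_qso):
    """Eq. 3: velocity of the absorber relative to the QSO."""
    return C_KMS * (z_mgii - z_qso) / (1.0 + z_qso)

def is_physically_impossible(z_mgii, z_qso, cut=5000.0):
    """Flag systems assigned to the separate 'PI' catalog."""
    return velocity_offset(z_mgii, z_qso) > cut
```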
### Visual Inspection
In the interest of both assessing and potentially improving our catalog purity we next conducted a visual inspection of 1000 randomly selected systems that pass the MCMC process, and do not have physically impossible redshifts, as described above. This process involves multiple steps: confirming that the background spectrum is indeed that of a quasar, verifying that two absorption lines have been well fit by the MCMC process, and determining if additional metal lines can be fit at the same redshift. Note that the presence of additional metal lines is considered only in confirming borderline cases where the fit absorption lines are weak.
Visually inspecting 1000 randomly selected Mg ii absorbers, following the steps outlined above, we find 808 which constitute true Mg ii absorption and 192 which do not. This suggests an initial purity of 80.8%. Based upon the statistics of these systems we have developed a series of quality cuts to improve the purity of our detected sample with minimal effect on completeness.
The first cut we perform is to remove systems for which one or both of the fit Gaussians have a positive amplitude. This outcome is not disallowed by the MCMC priors, in order to facilitate a full exploration of the parameter space, but is clearly not indicative of an absorption feature. Note that in such cases the initial line amplitude values were given as negative; however, in the course of fitting, the MCMC process has converged on a positive line solution. There are seventy-seven such systems in the visual inspection set, all of which were identified as false Mg ii systems, and as such we require that all systems have negative amplitudes for both fit line profiles.
Our next cut is intended to remove from the sample systems in which two Gaussians with negative amplitude can be fit at the proper separation of Mg ii, however the amplitudes and/or widths of the absorption lines are not characteristic of Mg ii. In order to determine the appropriate cut, we have plotted those systems that remain after the first two cuts in the space of the similarity of the fit absorption lines. That is to say, we have plotted the relative amplitudes of the two lines against the relative widths (\(\sigma\)) of the two lines. In both cases we consider the statistic of the leading 2796A line divided by the statistic of the 2803A line. The result can be seen in Figure 2.
From the inset panel of this Figure we observe that the distribution of our visual inspection set in this space is tightly clustered around a value of roughly [1.1, 1.1], indicating (empirically) that the 2796 Å line of the Mg ii doublet tends to have a slightly larger amplitude and be slightly wider than the second.
The innate flux ratio of the 2796 to 2803 Å lines is determined by the ratio of their collisional rate coefficients, or equivalently by their quantum degeneracy factors. This ratio for Mg ii is F\({}_{2796}\)/F\({}_{2803}\) = 2, and has been experimentally verified (e.g. Mendoza, 1981; Sigut & Pradhan, 1995). However, as the majority of systems observed here are saturated, the observed ratio of absorption line area approaches 1.

In visually inspecting these systems we observe some true Mg ii absorbers where the amplitude and/or width of the 2803 Å line is greater than that of the 2796 Å line. This should not be possible theoretically; however, the systematic uncertainties inherent to observation can produce this result. With this in mind, we can draw a selection in this parameter space that includes the region of highest density / physical likelihood and allows for slight variation due to observational uncertainties, while still maximizing the purity of the post-cut selection. The selection takes the form of a circle and is described by:
\[\left(\frac{\mathrm{AMP}_{2796}}{\mathrm{AMP}_{2803}}-1.4\right)^{2}+\left( \frac{\sigma_{2796}}{\sigma_{2803}}-1.4\right)^{2}<0.81 \tag{4}\]
where "AMP" is the amplitude of the fit line and \(\sigma\) the width. Note that all points with amplitude and width ratios between 1.0 and 2.0 are included in this selection. After applying a cut to our sample according to the boundaries of this circle, we remove fifty true positives and one-hundred-eight false positives.
After imposing these cuts, the visual inspection sample contains 758 true positive Mg ii systems and 7 false positives, for a nominal 99.1% purity. Presuming Gaussian noise on this measurement, we assign a \(\pm 3.16\%\) error on this purity. Additionally, the removal of fifty total true positive systems suggests an upper limit on our completeness of 93.8%. Applying these cuts to our pre-visual-inspection sample of 29,423 absorber candidates leaves a population of 23,921 absorbers.
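Together, the two cuts reduce to a short predicate applied to the MCMC best-fit parameters: the amplitude-sign test and the circle of Eq. 4.

```python
def passes_quality_cuts(amp1, amp2, sig1, sig2):
    """True if both lines are in absorption and Eq. 4 is satisfied."""
    if amp1 >= 0 or amp2 >= 0:       # both amplitudes must be negative
        return False
    r_amp = amp1 / amp2              # ratio of 2796 to 2803 line amplitudes
    r_sig = sig1 / sig2              # ratio of 2796 to 2803 line widths
    return (r_amp - 1.4) ** 2 + (r_sig - 1.4) ** 2 < 0.81
```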
### Reanalysis of Nightly Coadds
The healpix coadded spectra that we search for absorbers are created by combining multiple individual observations, as described in SS2.1. These individual observations have lower signal-to-noise ratios, which makes recovering absorption features more challenging. This allows for a natural test of the completeness of our approach.
By searching the nightly coadded spectra, generally composed of a few individual observations, for a known Mg ii absorber, we can quantify the performance of our pipeline as a function of absorber redshift and spectral signal-to-noise. We choose to use the nightly coadded spectra rather than individual observations as this spans the region of relevant TSNR2 values more fully. Additionally, in rare cases where a target is observed on only a single night, we do not consider the results of its reanalysis, as the healpix coadded and nightly coadded spectra are then identical.
Figure 3 shows the TSNR2 distribution of healpix coadded quasar spectra. Results are shown for both targets with any number of observations and those targets with at least 4 observations. Note that we have grouped quasars with a TSNR2 value \(>140\) as above this threshold we find that Mg ii detection is not sensitive to the TSNR2 of the background quasar. Additionally, we note that this final bin happens to contain only those entries with at least four observations and accounts for approximately one third of the full healpix coadded sample.
Having determined the population of TSNR2 values, we can now determine the performance of our pipeline in recovering known Mg ii absorbers as a function of TSNR2. In order to do so we consider all 23,921 detected absorbers and recover the spectra of their nightly coadded observations. We then run the doublet-finder portion of our pipeline on these observations, recording whether the known Mg ii doublet can be recovered in these lower-TSNR2 spectra. The results of this search are displayed in Figure 4 -- as can be readily seen, the percentage of recovered absorbers decreases with decreasing TSNR2, as anticipated. The percentage of recovered absorbers is also noticeably worse in the lowest redshift bin; this is because the DESI instrument has lower throughput at the blue end (DESI Collaboration et al., 2022), which leads to noisier spectra.
Figure 2: Visualization of Eqn. 4. Plotted are the ratios between the widths and amplitudes of the two lines of the Mg ii doublet for all systems in the visually inspected set following the removal of any systems with positive amplitudes. True Mg ii systems are shown in blue and false systems are shown in orange. Note that not all points are shown. _Top-Right Inset_: Density plot indicating that the distribution is highly concentrated around the approximate value of [1.1, 1.1].

We can next consider the average completeness per TSNR2 bin, averaging across redshift-space. By multiplying the number of quasars in each bin of TSNR2 (i.e. Figure 3) by this average completeness per bin, we can determine the number of quasars in each bin for which we would expect to be able to recover Mg ii absorbers. Summing these results across TSNR2 bins and normalizing by the total number of quasars, we determine the expected completeness. Doing so for the results shown in Figure 4, we recover an expected completeness of 88.0% for QSOs with any number of observations and 92.8% for QSOs with at least four observations. Adjusting these values for the implied upper limit on completeness from our visual analysis results in SS2.3, we instead find completeness values of 82.6% and 87.0% respectively.
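The weighting described here reduces to a population-weighted mean; in practice the bin counts and per-bin completeness values would be read off Figures 3 and 4. The helper below is a hypothetical sketch of that calculation.

```python
import numpy as np

def expected_completeness(n_qso_per_bin, completeness_per_bin):
    """Population-weighted mean completeness over TSNR2 bins."""
    n = np.asarray(n_qso_per_bin, dtype=float)
    c = np.asarray(completeness_per_bin, dtype=float)
    return float((n * c).sum() / n.sum())
```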
## 3 Results
From an initial sample of 83,207 quasars, we find a total sample of 29,797 probable Mg ii systems. Following the cuts described in SS2.3, we reduce this sample to a total of 23,921 physically possible Mg ii systems in 16,707 unique quasar spectra. This implies that at least one absorber is detected in 20.1% of quasar spectra, and that the overall occurrence rate of absorbers, counting multi-absorber systems, is 28.8%. These results are in reasonable agreement with similar studies using SDSS data, which found at least one absorber in 10-20% of quasar spectra (e.g. York et al., 2006; Raghunathan et al., 2016; Anand et al., 2021). In this section we will consider the statistics of this sample as well as describe the structure of the catalog we generate from them.
Figure 5 displays the distribution of all detected absorption systems in both background quasar and absorber redshift-space. The overlaid contour lines are kernel density estimates and span 10% to 90% of the distribution in steps of 10%. The overlaid black lines represent two natural "boundaries" for Mg ii systems. The lower boundary indicates the associated absorber case, in which the redshift of the quasar and absorption system are similar. As previously discussed this suggests that the absorption is occurring within the same galaxy, or galaxy cluster, that is host to the quasar. The upper boundary indicates the redshift of an absorber that would correspond to the wavelength of a quasar's Lyman-\(\alpha\) emission line. As discussed in SS2 we exclude this region of redshift-space from our search due to contamination by the Lyman-\(\alpha\) Forest.
Figure 5 also presents marginalized histograms of the quasar and absorber redshifts. We can observe that the absorber redshift distribution is peaked between redshifts of 1.3 and 1.5 and declines in both directions from this peak. The precise physical interpretation of this histogram is complicated by redshift selection effects that are not yet well-characterized for the DESI survey, coupled with the true quasar and galaxy redshift distribution functions.
Figure 4: Nightly coadded QSO spectra grouped by Mg ii absorption redshift and TSNR2 value. Bins are colored according to the completeness -- i.e. the number of Mg ii doublets recovered compared to the number of expected absorbers. Within each bin the completeness is quoted as P and the number of expected absorbers is quoted as N. Note that, as in Figure 3, the highest bin groups all spectra with a TSNR2 exceeding 140.
Figure 3: The population of healpix coadded quasar spectra TSNR2 values. The numbers above/below the blue/orange bars give the population size. The right-most bin includes all spectra with TSNR2\(>\)140.
The background quasar redshift distribution peaks in the redshift range \(2.0<z<2.4\). This is in disagreement with the general DESI quasar distribution, which peaks around \(z=1.7\). Here a likely physical interpretation is that the likelihood of passing through an absorbing cloud is smaller for shorter lines of sight, and therefore absorption is more likely to be found in higher-redshift quasars. The decline at \(z\gtrsim 2.4\) is likely due to the overall reduction in the density of the quasar population at higher redshifts.
Figure 6 presents the distribution of measured rest-frame equivalent widths (EWs) for both lines of the Mg ii doublet. The overlaid contour lines follow the same scheme as in Figure 5. We observe that the region of highest density corresponds to absorbers with EWs between \(\sim 0.4\) and 1.0 Å. Additionally, we note a slight skew to the contours, suggesting that the leading, 2796 Å, line of the Mg ii doublet generally has a larger equivalent width value. This result is to be expected given the underlying atomic physics, as discussed in SS2.3.
### Catalog Format and Access
In Appendix A we describe the data columns that we catalog for each detected absorption system. We will now give a brief overview of these columns and their use.
We first retain sufficient information to uniquely identify each analyzed healpix coadded spectrum, specifically the DESI TARGETID (see Myers et al., 2023), Right Ascension, Declination and phase of the DESI survey. We additionally store the Redrock ZWARN bitmask which details possible redshift warnings, as well as various TSNR2 values of each spectrum, and the best quasar redshift for each spectrum (derived from the parent quasar catalogs, as discussed in SS2.2).
We also record the equivalent widths of both lines (which are obviously physically valuable), as well as the central posterior distribution values for all five MCMC fit parameters, described in SS2.2. For the equivalent widths, as well as the fit parameters we also provide lower and upper error bars, as determined from the 16th and 84th percentiles of their posterior distributions.
The created Mg ii absorber catalogs are available online5. We have also retained the full 14,000-step MCMC chains for each detected absorber -- these will be made available upon request.
Footnote 5: The location of the catalog will be provided on publication.
## 4 Discussion
In this section we will consider the applications of our secondary catalog composed of physically impossible absorption systems. We will also examine the possibility of using Mg ii systems to detect other metal lines.
Figure 5: Redshift-space distributions of all detected absorbers. Contours are kernel density estimates of the distribution. Marginalized histograms of each redshift population are also presented. The quasar redshift histogram is plotted alongside the redshift histogram of all DESI QSOs for reference, and both are scaled by density.
Figure 6: Distribution of rest-frame equivalent width values for the leading (2796 Å) and trailing (2803 Å) Mg ii lines. Contours are kernel density estimates of the distribution.
### Physically Impossible Absorbers
As discussed in SS2.2 we have detected a small number of systems with an offset velocity that suggests the absorber is _behind_ the quasar, which is physically impossible. In total there are 374 such systems, which we will refer to as "PI" absorbers. These PI systems comprise roughly 1.3% of our initial pre-quality-cuts sample of 29,797 absorbers. To improve the utility of this PI subset we first apply the same quality cuts as for the main sample, which reduces the PI absorber sample to 108 systems in 84 unique spectra. Inspecting these spectra, we find 34 entries where the Mg ii absorption is clearly real and the QSO redshift poorly determined. Nineteen of these systems are Lyman-\(\alpha\) quasars, and two illustrative spectra are shown in Figure 7. We additionally find two instances of misidentified star spectra, and one instance of a QSO that has been assigned a redshift greater than the value we find in visual inspection; note that in these three cases the Mg ii absorption is not real.
The relatively low number of true PI absorbers that are found demonstrates the extremely high accuracy of the DESI redshift schema. Extrapolating from these results to the full five-year DESI sample we would anticipate finding only 1200 true PI absorbers. Given these numbers it may be worthwhile to occasionally visually inspect these systems and reclassify any Lyman-\(\alpha\) quasars with true PI absorbers such that they can be re-observed to improve the signal of the observation. We leave this consideration to future work.
### Detection of Additional Metal Lines
Once an Mg ii absorption system has been identified at a particular redshift we can search for other common metal lines, such as Fe ii, C iv, and Si iv, knowing precisely where in the spectrum these lines should appear. This enables the detection of these lines at relatively lower signal-to-noise.
A pilot analysis which involved the visual inspection of 1000 randomly selected Mg ii absorbers to search for Fe ii, C iv, and Si iv at the same redshift yielded the results in Table 1. The "Id. Rate" column in Table 1 shows the raw percentage of inspected Mg ii absorption systems in which the additional line could be identified; the "Vis. Rate" column shows the percentage of inspected absorption systems in which the non-Mg ii absorption species would be found at a wavelength \(>4000\) Å, such that it would be readily visible to the DESI instrument.6 Finally, the "Scaled Id. Rate" column scales the Id. Rate by [100/Vis. Rate] to give the percentage of the time the line was identified when expected to be visible. We choose to use 4000 Å as this tends to be the region where the noise in DESI spectra reaches a consistent level (being noisier at lower wavelengths).

Footnote 6: I.e. as Si iv has a rest wavelength of \(\sim\)1394 Å, the Vis. Rate would be equal to the fraction of absorption systems at \(z\gtrsim 1.87\).
These results imply that (when visible) Fe ii is identifiable in 91.1% of systems and C iv and Si iv are similarly identifiable in 72.8% and 66.4% of systems, respectively. These results are promising and suggest that an algorithmic approach could reliably characterize additional absorption features when seeded with a redshift derived from a certain absorption doublet such as Mg ii.
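The scaling in Table 1 can be reproduced directly from the quoted raw and visibility rates; the snippet below simply restates the table's numbers.

```python
# (Id. Rate, Vis. Rate) in percent, from Table 1
rates = {"Fe II": (83.2, 91.3), "C IV": (27.9, 38.3), "Si IV": (16.2, 24.4)}
for line, (id_rate, vis_rate) in rates.items():
    print(f"{line}: {100.0 * id_rate / vis_rate:.1f}%")
# Fe II: 91.1%, C IV: 72.8%, Si IV: 66.4%
```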
## 5 Conclusion
\begin{table}
\begin{tabular}{c|c|c|c} \hline Line & Id. Rate & Vis. Rate & Scaled Id. Rate \\ \hline \hline Fe ii & 83.2\% & 91.3\% & 91.1\% \\ C iv & 27.9\% & 38.3\% & 72.8\% \\ Si iv & 16.2\% & 24.4\% & 66.4\% \\ \end{tabular}
\end{table}
Table 1: Additional Absorption Line VI Results
Figure 7: Two example spectra showing genuine Mg ii absorption systems that appear physically impossible due to poor pipeline redshifts. The detected Mg ii absorption is indicated by the grey outlined area. The associated error spectrum is shown in orange.
In this paper we have presented the methods by which we detect and verify a sample of Mg ii absorption systems in the data collected during the Survey Validation phase of the DESI survey. We model these absorption systems by first identifying possible systems in smoothed residuals and then characterizing them using MCMC. In total we have characterized 23,921 absorption systems in 16,707 unique quasar spectra. The parent quasar catalogs utilized in this study contain 83,207 entries, implying that 20.1% of DESI quasars will contain an identifiable Mg ii absorber. The total number of expected identifiable absorbers will then be equal to 28.8% of the total number of observed quasars (accounting for spectra with multiple absorbers). Assuming that DESI ultimately obtains spectra for 3 million quasars, our pilot study implies that DESI will eventually compile a sample of over 800,000 Mg ii absorption systems across \(\sim\)560,000 quasar spectra -- by far the largest such sample ever constructed.
The statistics of this catalog -- a 99.1% purity and 82.6% completeness -- have been verified through the visual inspection of a subset of absorbers as well as the reanalysis of lower signal-to-noise spectra of objects for which absorption systems have been detected. In future catalog releases we will aim to increase the completeness of this sample, either by reducing the doublet signal-to-noise threshold or by introducing an additional detection step that can recover Mg ii absorbers that currently escape detection due to high noise or unusual features. This goal will of course require the careful balancing of completeness and purity -- for this first catalog release we have chosen to favor a catalog with high purity.
Additionally, we have made the choice at this time to group absorbers that appear to have physically impossible redshifts -- i.e. those that would suggest the absorption system to be farther from the observer than the quasar -- into a separate catalog. Such systems account for roughly 1.3% of detected absorbers. From visual inspection of such systems we find that after applying the purity cuts described in SS2.3 around 40% of these systems are true Mg ii absorbers with incorrect background quasar redshifts. We anticipate exploring the possibility of using these systems to improve DESI quasar redshifts.
We detect Mg ii absorbers in the redshift range \(0.3\lesssim z\lesssim 2.5\) with a peak in the distribution of absorbers between \(z\sim 1.3\) and \(z\sim 1.5\). The exact interpretation of the redshift distribution of possible absorbers is difficult to disentangle from various selection effects. The background quasars which enable the observations of these systems are found at \(0.4\lesssim z\lesssim 5.8\) and, as can be seen in Figure 5, are generally at higher redshifts than the full DESI quasar population.
The physical properties of the absorption systems catalogued here, such as chemical abundances, ionization temperatures, and physical densities, can be determined by further analysis. In order to do so we plan to automate the detection and characterization of additional metal lines. As noted in SS4.2, we identify at least one additional metal line in \(>91.1\%\) of Mg ii absorbers. The equivalent widths of the Mg ii absorption systems discussed in this paper are generally similar between the two lines of the Mg ii doublet and can be found at levels well below 1 Å, suggesting that even weak absorbers can be readily detected.
Already the sample of absorbers collected here is sufficiently large to facilitate a variety of studies, including: the nature of the CGM environments from which these absorption systems arise; the clustering of the underlying dark matter traced by the three-dimensional locations of the absorbers; and the use of these Mg ii systems to find additional species in absorption.
LN and ADM were supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics, under Award Number DE-SC0019022. LN was also partially supported by Wyoming NASA Space Grant Consortium award #80NSSC20M0113. AP was supported by the University of Wyoming Science Initiative Wyoming Research Scholars Program. TWL was supported by the Ministry of Science and Technology (MOST 111-2112-M-002-015-MY3), the Ministry of Education, Taiwan (MOE Yushan Young Scholar grant NTU-110VV007), National Taiwan University research grants (NTU-CC-111L894806, NTU-111L7318).
We thank Guangtun Zhu for sharing the NMF eigenspectra of SDSS quasars.
This research is supported by the Director, Office of Science, Office of High Energy Physics of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231, and by the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility under the same contract; additional support for DESI is provided by the U.S. National Science Foundation, Division of Astronomical Sciences under Contract No. AST-0950945 to the NSF's National Optical-Infrared Astronomy Research Laboratory; the Science and Technologies Facilities Council of the United Kingdom; the Gordon and Betty Moore Foundation; the Heising-Simons Foundation; the French Alternative Energies and Atomic Energy Commission (CEA); the National Council of Science and Technology of Mexico (CONACYT); the Ministry of Science and Innovation of Spain (MICINN), and by the DESI Member Institutions: [https://www.desi.lbl.gov/collaborating-institutions](https://www.desi.lbl.gov/collaborating-institutions).
The authors are honored to be permitted to conduct scientific research on Iolkam Du'ag (Kitt Peak), a mountain with particular significance to the Tohono O'odham Nation.
|
2309.16671 | Demystifying CLIP Data | Contrastive Language-Image Pre-training (CLIP) is an approach that has
advanced research and applications in computer vision, fueling modern
recognition systems and generative models. We believe that the main ingredient
to the success of CLIP is its data and not the model architecture or
pre-training objective. However, CLIP only provides very limited information
about its data and how it has been collected, leading to works that aim to
reproduce CLIP's data by filtering with its model parameters. In this work, we
intend to reveal CLIP's data curation approach and in our pursuit of making it
open to the community introduce Metadata-Curated Language-Image Pre-training
(MetaCLIP). MetaCLIP takes a raw data pool and metadata (derived from CLIP's
concepts) and yields a balanced subset over the metadata distribution. Our
experimental study rigorously isolates the model and training settings,
concentrating solely on data. MetaCLIP applied to CommonCrawl with 400M
image-text data pairs outperforms CLIP's data on multiple standard benchmarks.
In zero-shot ImageNet classification, MetaCLIP achieves 70.8% accuracy,
surpassing CLIP's 68.3% on ViT-B models. Scaling to 1B data, while maintaining
the same training budget, attains 72.4%. Our observations hold across various
model sizes, exemplified by ViT-H achieving 80.5%, without any
bells-and-whistles. Curation code and training data distribution on metadata is
made available at https://github.com/facebookresearch/MetaCLIP. | Hu Xu, Saining Xie, Xiaoqing Ellen Tan, Po-Yao Huang, Russell Howes, Vasu Sharma, Shang-Wen Li, Gargi Ghosh, Luke Zettlemoyer, Christoph Feichtenhofer | 2023-09-28T17:59:56Z | http://arxiv.org/abs/2309.16671v4 | # Demystifying CLIP Data
###### Abstract
Contrastive Language-Image Pre-training (CLIP) is an approach that has advanced research and applications in computer vision, fueling modern recognition systems and generative models. We believe that the main ingredient to the success of CLIP is its _data_ and _not the model_ architecture or pre-training objective. However, CLIP only provides very limited information about its data and how it has been collected, leading to works that aim to reproduce CLIP's data by filtering with its model parameters. In this work, we intend to reveal CLIP's data curation approach and in our pursuit of making it open to the community introduce Metadata-Curated Language-Image Pre-training (MetaCLIP). MetaCLIP takes a raw data pool and metadata (derived from CLIP's concepts) and yields a balanced subset over the metadata distribution. Our experimental study rigorously isolates the model and training settings, concentrating solely on data. MetaCLIP applied to CommonCrawl with 400M image-text data pairs outperforms CLIP's data on multiple standard benchmarks. In zero-shot ImageNet classification, MetaCLIP achieves 70.8% accuracy, surpassing CLIP's 68.3% on ViT-B models. Scaling to 1B data, while maintaining the same training budget, attains **72.4%**. Our observations hold across various model sizes, exemplified by ViT-H achieving **80.5%**, without any bells-and-whistles. Curation code and training data distribution on metadata is made available at [https://github.com/facebookresearch/MetaCLIP](https://github.com/facebookresearch/MetaCLIP).
## 1 Introduction
Deep learning has revolutionized the field of artificial intelligence, and pre-trained models have played a pivotal role in democratizing access to cutting-edge AI capabilities. However, the training data used to create these models is often concealed from the public eye, shrouded in secrecy.
The increasing availability of pre-trained models for public use contrasts sharply with the lack of transparency regarding their training data. Further, proprietary concerns, such as copyright issues, often limit access to the original data sources. Consequently, the need to explore novel approaches for curating high-quality training data that can be shared openly arises.
In the vision-language domain, the dominant model and learning approach is Contrastive Language-Image Pre-training (CLIP) (Radford et al., 2021), a simple technique to learn from image-text pairs. We believe the secret to the dominance of CLIP models is its high-quality WIT400M _dataset_, which is curated from the web. Despite its popularity, the specifics of CLIP's curation process have remained a mystery, captivating the research community for years.
Follow-up works (Schuhmann et al., 2022, 2021) have attempted to replicate CLIP's data, but with a notable difference in their curation method. While CLIP generates data based on its unknown data source and curation methodology, these approaches remove noise by applying the CLIP model as a hard blackbox filter which in turn is a form of distilling WIT400M information captured in CLIP.
The advantages of CLIP's curation are apparent. First, it starts _from scratch_, avoiding the introduction of biases through filters. Second, CLIP's curation process _balances_ the data distribution over metadata, maximizing signal preservation while mitigating, rather than removing, noise in the data1. Such distribution lays the groundwork for task-agnostic data, a crucial part of foundation models.
Footnote 1: For example, a filter on digits can remove noise from date or id strings but removes signal for tasks that involve OCR (e.g., MNIST), and a filter removing text with fewer than 5 characters can remove the signal "dog".
In this paper, we attempt to _reveal_ CLIP's method around training _data curation_. We present an empirical study on data curation, with frozen model architecture and training schedule. We focus solely on the impact of training _data_, excluding other factors that could confound the results. We make several observations for good data quality and present a simple algorithm to make CLIP's curation more transparent. Consequently, we shed light on both the curation process and the resulting training data _distribution_. Our algorithm enables easy adaptation to different data pools, allowing parties to fully own their data pipeline without relying on blackbox filters from external providers.
Our algorithm takes a raw data pool \(\mathcal{D}\) and metadata \(\mathcal{M}\) (derived from CLIP's queries or visual concepts) and yields a balanced subset \(\mathcal{D}^{*}\) over \(\mathcal{M}\): \(\mathcal{D}^{*}\gets f(\mathcal{D};\mathcal{M})\). Our approach, named Metadata-Curated Language-Image Pre-training (MetaCLIP), marks a significant step towards making the curation process more transparent and accessible.
MetaCLIP applied to CommonCrawl (CC) with 400M data points outperforms CLIP on multiple standard benchmarks. In terms of zero-shot ImageNet classification, using ViT (Dosovitskiy et al., 2020) models of various sizes. Our MetaCLIP achieves 70.8% vs CLIP's 68.3% on ViT-B and 76.2% vs 75.5% on ViT-L. Scaling to 2.5B data, with the _same_ training budget and similar distribution boosts this to unprecedented accuracy of 79.2% for ViT-L and 80.5% for ViT-H in the vanilla training setting (not using any external data, models, or longer training).
In Fig. 1, we show the impact of metadata curation on ImageNet validation accuracy plotted over training steps. As baselines, we train on the raw CommonCrawl distribution (Raw, \(\sim\)1.1B pairs, 54.1% accuracy) and on its English-only subset obtained by applying Language IDentification (LID) (Raw English, 400M pairs, 57.4%). Using metadata to curate the training set (MetaCLIP 400M w/o bal, 60.8%) performs significantly better than these baselines, and balancing increases accuracy significantly further (MetaCLIP, 65.5%), outperforming similar datasets: WIT400M from CLIP (63.4%) and LAION-400M (60.0%).
## 2 Related Work
The training data of CLIP differs significantly from a traditional supervised dataset (Gadre et al., 2023) in various aspects. Firstly, it involves large-scale training with mixed-quality image-text pairs rather than categorized images with human annotated labels, as commonly seen in classification datasets. Secondly, CLIP's pre-training is the initial stage of training, assuming no access to previously trained models.
Data Pruning on Established Datasets. Current research on data algorithms primarily revolves around _data pruning_ techniques applied to well-established datasets using pre-trained models (Sorscher et al., 2022; Abbas et al., 2023). These approaches, such as coreset selection techniques (Har-Peled & Mazumdar, 2004; Feldman et al., 2011; Bachem et al., 2015; Mirzasoleiman et al., 2020; Toneva et al., 2018), aim to select a subset of data that yields similar performance to training on the entire dataset.
Figure 1: ViT-B/32 on ImageNet zero-shot classification with fixed training steps (12.8B seen pairs and training/validation data has been de-duplicated). Raw: raw CommonCrawl (CC) distribution; Raw English: English only CC; MetaCLIP w/o bal.: curated (sub-string matched) data pool from CC; MetaCLIP: curated _and balanced_ metadata distribution. Metadata curation boosts performance significantly and balancing is equally important. Our MetaCLIP data significantly outperforms CLIPβs WIT400M and LAION data.
However, this post-hoc data pruning approach has limited utility, as the computational resources saved have already been expended during the initial training of the model.
Handling Noisy Internet Data. Addressing noisy data from the Internet is a significant challenge, and existing approaches often heavily rely on human-designed filter systems. Classical methods involve dataset cleaning and outlier removal (Jiang et al., 2001; Yu et al., 2002) to discard samples that may introduce undesirable biases to models.
Replicating CLIP's Training Data. Recent efforts, such as LAION (Schuhmann et al., 2021; 2022) and the concurrent work DataComp (Gadre et al., 2023), attempt to replicate CLIP's training data. However, they adopt fundamentally different strategies for several reasons. First, the data used in these approaches is filtered post-hoc by vanilla CLIP as a _teacher_ model. Second, the curation process in these methods relies on a labor-intensive pipeline of filters, making it challenging to comprehend the resulting data distribution from the raw Internet (refer to the unknown biases of using the CLIP filter in (Schuhmann et al., 2022)). Third, the goal is to match the quantity of CLIP's target data size rather than the data distribution itself, which may lead to an underestimation of the data pool size needed to obtain sufficient quality data. Consequently, performance at the 400M scale is sub-optimal, with LAION-400M only achieving 72.77% ImageNet accuracy on ViT-L/14, whereas vanilla CLIP obtains 75.5%.
Importance of Understanding CLIP's Data Curation. The observations made in these studies underscore the critical importance of understanding how OpenAI CLIP curates its data in the first place. A comprehensive understanding of the curation process can shed light on the factors that contribute to its success, allowing researchers to devise more effective and efficient algorithms for future vision-language pre-training endeavors.
## 3 MetaCLIP
The original paper (Radford et al., 2021) only provides limited details about how CLIP curates its data. Since important design choices for a direct reproduction are missing, we will clarify our choices in this section. Our goal is to uncover CLIP's data curation process, which involves preserving signal in the data while minimizing noise. In this section, we will explain the principles we have adopted to achieve this, which may differ from CLIP's as these are not known publicly.
CLIP's WIT400M is curated with an information retrieval method, quoting (Radford et al., 2021):
To address this, we constructed a new dataset of 400 million (image, text) pairs collected from a variety of publicly available sources on the Internet. To attempt to cover as broad a set of visual concepts as possible, we _search_ for (image, text) pairs as part of the construction process whose text includes one of a set of _500,000 queries_. We approximately class balance the results by including _up to 20,000 (image, text) pairs per query_.
We rigorously adhere to this description and provide detailed insights into the construction process of CLIP's metadata (in §3.1)2, sub-string matching (in §3.2), inverted indexing (in §3.3), as well as query and balancing (in §3.4).
Footnote 2: We generalize the term queries (used by CLIP) as _entries_ in _metadata_ because metadata describe training data, and our algorithm does not require search on an inverted index yet has similar effects.
### Metadata construction: \(\mathcal{M}=\{\textit{entry}\}\)
We start by re-building CLIP's 500,000-query metadata, citing Radford et al. (2021):
The base query list is all words occurring at least 100 times in the _English version of Wikipedia_. This is augmented with _bi-grams_ with high pointwise mutual information as well as the names of all _Wikipedia articles_ above a certain search volume. Finally all _WordNet synsets_ not already in the query list are added.
The metadata ('queries' or 'entries') consists of four components: (1) all synsets of WordNet, (2) uni-grams from the English version of Wikipedia occurring at least 100 times, (3) bi-grams with high pointwise mutual information, and (4) titles of Wikipedia articles above a certain search volume. We rebuild these components from WordNet and Wikipedia and summarize the statistics in Table 1. We estimate the thresholds for components (3) and (4), shown in the 3rd column of Table 1, by first choosing a pointwise mutual information threshold of 30 that meets the budget of 100k entries for bi-grams and then filling the rest of the entries with Wikipedia titles.
Footnote 3: Note that we cannot find Wikipedia's search volume for titles of Wikipedia (4). Instead, we use volumes of Pageviews on Wiki articles. We randomly selected 26 days' Pageviews from Apr. 2018 to Sep. 2022.
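As a rough illustration of this construction, the sketch below assembles the four components; it assumes NLTK's WordNet corpus, while `wiki_unigram_counts`, `wiki_bigram_pmi`, and `wiki_title_views` are hypothetical pre-computed Wikipedia statistics, with thresholds taken from Table 1.

```
from nltk.corpus import wordnet as wn  # assumes the WordNet corpus is downloaded

def build_metadata(wiki_unigram_counts, wiki_bigram_pmi, wiki_title_views):
    entries = set()
    # (1) all WordNet synsets (lemma names, underscores -> spaces)
    for synset in wn.all_synsets():
        for lemma in synset.lemma_names():
            entries.add(lemma.replace("_", " "))
    # (2) Wikipedia uni-grams occurring at least 100 times
    entries.update(w for w, c in wiki_unigram_counts.items() if c >= 100)
    # (3) bi-grams with pointwise mutual information >= 30 (estimated threshold)
    entries.update(b for b, pmi in wiki_bigram_pmi.items() if pmi >= 30)
    # (4) Wikipedia titles above the estimated view-frequency threshold
    entries.update(t for t, v in wiki_title_views.items() if v >= 70)
    return sorted(entries)
```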
### Sub-string Matching: _text_\(\rightarrow\)_entry_
After constructing the metadata, CLIP's curation aligns a pool of image-text pairs with metadata entries through sub-string matching. This process identifies texts that contain any of the metadata entries, effectively associating unstructured texts with structured metadata entries. The sub-string matching step retains only high-quality matching texts, automatically filtering out various types of noise that a typical filter system would have to consider on a case-by-case basis.
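A naive sketch of this step is shown below; a production pipeline would use an Aho-Corasick automaton over the 500k entries rather than a linear scan, and the function name is illustrative rather than CLIP's.

```
# Naive sub-string matching of one text against the metadata entries;
# texts with no matches are dropped from the pool.
def matched_entry_ids(text, metadata):
    return [i for i, entry in enumerate(metadata) if entry in text]

# kept = [(img, txt) for img, txt in pool if matched_entry_ids(txt, metadata)]
```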
Such alignment is referred to as sub-string matching in Radford et al. (2021).
| Source | # of Entries | Desc. of Threshold | Threshold |
| --- | --- | --- | --- |
| WordNet synsets | 86,654 | N/A | [ALL] (follow CLIP) |
| Wiki uni-gram | 251,465 | Count | 100 (follow CLIP) |
| Wiki bi-gram | 100,646 | Pointwise Mutual Info. (PMI) | 30 (estimated) |
| Wiki titles | 61,235 | View Frequency | 70 (estimated) |

Table 1: Composition of MetaCLIP Metadata.
### Inverted Indexing: _entry \(\rightarrow\) text_
Following sub-string matching, CLIP builds an inverted index of the data pool. All texts associated with each metadata entry are aggregated into lists, creating a mapping from each entry to the corresponding texts, _entry \(\rightarrow\) text_.
As an analysis, we count the number of matches for each entry and summarize that in Table 2. The counts exhibit a long-tailed distribution. Out of the 500k entries, **114k** entries have _no_ matches. This signifies the importance of knowing the training data distribution since it is very likely the training data does not have certain visual concepts. We observed that only 16k entries had counts higher than 20k, accounting for only **3.2%** (16k/500k) of the entries, but their counts made up **94.5%** (5.35B/5.6B) of the total counts of all entries.
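A minimal sketch of this entry → text mapping is given below; the naive inner loop stands in for an efficient matcher, and only the per-entry counts are needed for the analysis in Table 2.

```
from collections import defaultdict

# Sketch of the inverted index entry -> texts described above.
def build_inverted_index(pool, metadata):
    index = defaultdict(list)
    for image, text in pool:
        for entry_id, entry in enumerate(metadata):
            if entry in text:                    # sub-string match
                index[entry_id].append(text)
    entry_count = {eid: len(ts) for eid, ts in index.items()}
    return index, entry_count
```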
Top Entries. We show the top entries of the matching in Table 3. Interestingly, many of these are stopwords, which don't carry specific meaning but can enhance the overall text quality (e.g., by generating grammatically correct sentences rather than just keyword lists). It's important to note that although sub-string matching aims to select only high-quality texts, there are instances where common entries may still include irrelevant texts. For instance, the entry "photo" could match with the popular but unhelpful term "untitled photo". These noise-related issues can be addressed in the subsequent stage of processing.
### Query and Balancing with \(t\leq\)20k
The key secret behind OpenAI CLIP's curation is to balance the counts of matched entries. For each metadata entry, the associated list of texts (or image-text pairs) is sub-sampled, ensuring that the resulting data distribution is more balanced. This step aims to mitigate noise and diversify the distribution of data points, making the data more task-agnostic as foundation data for pre-training.
The magic number \(t=20\)k is a threshold used to limit the number of texts/pairs for each entry. Entries with fewer than \(t\) pairs (tail entries) retain all associated pairs, while entries with more than \(t\) pairs (head entries) are sub-sampled to \(t\) pairs. The selection is based on the density of information in texts; texts with more matched entries have a higher chance of being curated (recall that the average is 3.5 matches per text).
To study the effect of the magic number \(t=20\)k, we plot the cumulative sum of counts for entries sorted by counts from tail to head in Fig. 2. Interestingly, the value of \(t=20\)k seemingly represents the transition from tail to head entries, when the head entries start exhibiting an _exponential growth rate_. By applying a max count of \(t\), the growth rate of total counts (i.e., the scale of resulting data points) is reduced to _linear_. This significantly flattens (and balances) the training data distribution. We further study the optimality of \(t=20\)k for the 400M data scale in our experiments.
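The curves in Fig. 2 can be sketched in a few lines of NumPy; this is an illustration of the computation, not the authors' plotting code.

```
import numpy as np

# Sketch of the cumulative-count curves in Fig. 2: sorting entries from
# tail to head and capping per-entry counts at t flattens the head's
# exponential growth into linear growth.
def cumulative_counts(entry_counts, t=None):
    counts = np.sort(np.asarray(entry_counts))   # tail -> head
    if t is not None:
        counts = np.minimum(counts, t)           # cap head entries at t
    return np.cumsum(counts)

# unbalanced = cumulative_counts(counts)             # t = infinity
# balanced   = cumulative_counts(counts, t=20_000)   # t = 20k
```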
In summary, balancing yields three interesting outcomes:
| Metadata Subset | # of Entries | # of Counts |
| --- | --- | --- |
| Full | 500K | 5.6B |
| Counts \(=0\) | 114K | 0 |
| Counts \(>20000\) | 16K | 5.35B |

Table 2: Summary of counts for entries.
| Entry | Counts | Entry | Counts | Entry | Counts | Entry | Counts |
| --- | --- | --- | --- | --- | --- | --- | --- |
| of | 120M | in | 107M | and | 100M | for | 89M |
| the | 87M | The | 67M | with | 67M | to | 61M |
| photo | 54M | a | 50M | image | 48M | 1 | 47M |
| on | 45M | by | 43M | 2 | 43M | Image | 39M |
| at | 38M | Black | 33M | 3 | 30M | A | 29M |

Table 3: Top-20 entries with counts.
(i) It reduces dominance and noise from head entries, like common web terms. E.g., out of 400M pairs, only \(20\)k texts containing "photo" are kept (while there are 54M "photo" instances in the pool).
(ii) It diversifies the data distribution and balances tail/head entries, leading to a more task-agnostic foundation.
(iii) Sampling for each entry ensures that data points with more matched entries or _denser_ information are prioritized for curation.
Discussion. CLIP employs a pure NLP-based approach, requiring no access to ML models and minimizing explicit/implicit priors from humans. The metadata plays a central role in mitigating noise and preserving signal in the data distribution. The balancing step effectively flattens the data distribution, diversifying the data and making it more suitable as foundation data for pre-training tasks. We analyze the effects of balancing in Appendix A.3.
### A simple Algorithm for Curation
This section presents an algorithm that formalizes the curation process described earlier. The algorithm aims to improve scalability and reduce space complexity for operations across data points, such as inverted indexing and sub-sampling. Instead of building inverted indexes, the algorithm only maintains total counts for each entry.
We assume that CLIP curation constructs an inverted index that maps entries to documents (image-text pairs) to enable efficient _search_ for each entry ("we search for (image-text) pairs" in Radford et al. (2021)). In contrast, our algorithm approaches the balancing process through independent sampling. This avoids the need to build an inverted index that could potentially store hundreds of millions of concrete pairs for popular entries, thereby improving efficiency and scalability.
Our algorithm takes three inputs: metadata \(\mathcal{M}\), a data pool \(\mathcal{D}\), and a hyper-parameter \(t\). It aims to find a subset \(\mathcal{D}^{*}\) with a balanced distribution over \(\mathcal{M}\), denoted as \(\mathcal{D}^{*}\gets f(\mathcal{D};\mathcal{M},t)\). The algorithm consists of two parts, each corresponding to a specific stage of the curation process.
We provide the Python pseudo-code in Algorithm 1.
Figure 2: Cumulative sum of counts on entries from _tail to head_ on a data pool with 1.6B image-text pairs (5.6B match counts). (1) raw/unbalanced cumulative counts, \(t=\infty\); (2) balanced cumulative counts after applying \(t=20\)k. The limit \(t\) defines the transition of tail/head entries.
```
# D: raw image-text pairs;
# M: metadata;
# t: max matches per entry in metadata;
# D_star: curated image-text pairs;
D_star = []

# Part 1: sub-string matching: store entry indexes in
# text.matched_entry_ids and output counts per entry in entry_count.
entry_count = substr_matching(D, M)

# Part 2: balancing via independent sampling
entry_count[entry_count < t] = t
entry_prob = t / entry_count
for image, text in D:
    for entry_id in text.matched_entry_ids:
        if random.random() < entry_prob[entry_id]:
            D_star.append((image, text))
            break
```
**Algorithm 1:** Pseudo-code of the Curation Algorithm in Python style (see Sec. A.7 for samples).
Part 1: Entry Counts from Sub-string Matching. This corresponds to Sec. 3.2. The substr_matching function outputs the total counts of matches per entry, entry_count, represented as a NumPy array indexed by entry_id. Each text is associated with matched_entry_ids that contains a list of matched entries.
Part 2: Balancing via Independent Sampling. This part corresponds to Sec. 3.3 and Sec. 3.4 and focuses on balancing counts on entries. Instead of building an expensive inverted index with associated lists of texts for each entry, we sample each data point independently.
We first compute the probability of sampling each entry, entry_prob, where tail entries (entry_count < \(t\)) have a probability equal to 1, and head entries have a probability less than 1. We iterate through all image-text pairs and sample/curate each pair. When an image-text pair has a matched entry sampled/selected, we include that pair in \(\mathcal{D}^{*}\).
This procedure is equivalent to CLIP's curation because, if an image-text pair has one or more matched entries, the chance of that pair being selected is determined by the per-entry sampling probability \(t\)/entry_count[entry_id]. As long as one entry selects that pair, it will be kept in \(\mathcal{D}^{*}\); for example, a pair matching two head entries with sampling probabilities 0.1 and 0.2 is kept with probability \(1-0.9\times 0.8=0.28\). Our independent sampling approach allows us to scale balancing for each data point independently and reduces the global operation to counting the total matches for each entry. We demonstrate case studies in experiments on (1) scaling curation in a data pipeline and (2) online balancing in the data loader.
## 4 Experiments
Data Pools. We collect two pools of data:
Pool 1 contains 1.6 billion image-text pairs with a total of 5.6 billion counts of matches. This pool was used to estimate a target of **400M** image-text pairs, collected from 15 snapshots of CommonCrawl (CC) from January 2021 to January 2023.
Pool 2 aims to scale curation in our data pipeline. We parsed all 90 CC snapshots from 2013 to April 2023, using our algorithm (see §A.2 for details on the curation pipeline) to curate from a pool of 10.7B matched image-text pairs that originally come from a large set of URL-text pairs, which have undergone de-duplication, English Language IDentification (LID), and sub-string matching. However, we only perform (expensive) image downloading, storing, and transferring for data points that are distribution-calibrated and selected by our algorithm.
For balancing we consider 2 scenarios on this data: (i) \(t=170k\), which results in **2.5B** image-text pairs; this configuration has tail counts amounting to 6% of the total counts, the _same tail/head ratio_ that the 400M Pool 1 data has, produced by applying \(t=20k\) on the 1.6B Pool 1 data. (ii) The \(t=20k\) threshold applied to Pool 2, which results in **1B** image-text pairs and, compared to the 400M set from Pool 1, only increases tail metadata matches (head counts are capped at \(20k\)).
Training Setup. We strictly follow the CLIP training setup, using V100 32GB GPUs and an equivalent global batch size of 32,768. For ViT-B/32 and ViT-B/16, we use 64 GPUs with a per-GPU batch size of 512, and for ViT-L/14 we use 128 GPUs with a per-GPU batch size of 256. It takes 4 days to train ViT-B/32 and a month to train ViT-L/14. We use 256 A100 80GB GPUs to train the ViT-H/14 model for 1 week. All experiments train for the same number of iterations, corresponding to 12.8B seen image-text pairs during training (32 epochs for 400M). We pre-process with face blurring.
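For reference, the stated setups can be collected into a small config; the ViT-H/14 per-GPU batch size below is an assumption (only the GPU count is given above), while all other values are as stated.

```
# Training configurations as stated above; global batch size is 32,768
# (gpus x per_gpu_batch) and every run sees 12.8B image-text pairs.
TRAIN_CONFIGS = {
    "ViT-B/32": {"gpus": 64,  "per_gpu_batch": 512},
    "ViT-B/16": {"gpus": 64,  "per_gpu_batch": 512},
    "ViT-L/14": {"gpus": 128, "per_gpu_batch": 256},
    "ViT-H/14": {"gpus": 256, "per_gpu_batch": 128},  # assumed per-GPU split
}
GLOBAL_BATCH = 32768
SEEN_PAIRS = 12_800_000_000                 # 32 epochs over 400M pairs
TRAIN_STEPS = SEEN_PAIRS // GLOBAL_BATCH    # 390,625 optimizer steps
```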
### Results
Zero-shot Image Classification. We follow the standard evaluation benchmark and ensure all prompts and class names are the same as those used by CLIP (Radford et al., 2021). We also re-evaluate OpenAI/OpenCLIP's checkpoints to avoid differences caused by benchmark data copies. The results are shown in Table 4.
In Table 4, we observe that MetaCLIP outperforms OpenAI CLIP on ImageNet and average accuracy across 26 tasks, for 3 model scales. With 400 million training data points on ViT-B/32, MetaCLIP outperforms CLIP by +2.1% on ImageNet and by +1.6% on average. On ViT-B/16, MetaCLIP outperforms CLIP by +2.5% on ImageNet and by +1.5% on average. On ViT-L/14, MetaCLIP outperforms CLIP by +0.7% on ImageNet and by +1.4% on average across the 26 tasks.
We next turn to Pool 2, which is a larger set of image-text pairs, and study the effect of scaling data. In Table 5, we scale data to 1B and 2.5B and observe a large gain over 400M, with similar performance at both scales. Note that the number of training iterations (and therefore compute) is the same for all rows. The main difference between 1B and 2.5B is the threshold \(t\): the 1B set is more balanced, adding data points (compared to the 400M set) only to _tail_ entries (up to \(t=20k\)), while the 2.5B set adds data points (up to \(t=170k\)) to all entries, _head and tail_. The extra data in the tail entries (1B set) seems to benefit downstream accuracy for tasks on specific data such as CUB fine-grained bird classification, Flowers, KITTI, and PCAM, while the larger 2.5B set with more head entries improves a broader range of datasets, each by a smaller amount. The overall average accuracies are similar for 1B and 2.5B (e.g., 70.2% vs. 69.8% for the ViT-L model size). On ImageNet, the 2.5B training data achieves 67.6% on ViT-B/32, breaking the previously believed saturation of B/32 models (Cherti et al., 2022), 79.2% on ViT-L/14, and 80.5% on ViT-H/14.
Table 4: Zero-shot classification: ImageNet accuracy and average accuracy across 26 tasks for OpenAI CLIP and MetaCLIP (400M) at the ViT-B/32, ViT-B/16, and ViT-L/14 scales.
We plot the cumulative sum of counts for entries sorted by counts from tail to head in Fig. 3 for all these cases, similar to Fig. 2 for Pool 1 (with the Pool 1 configuration as dashed lines). The plot shows that the 2.5B data is still relatively long-tailed, while the 1B data is more balanced, explaining its better performance on specific data such as the bird and flower types observed above.
### Ablation Study
We show ablations for MetaCLIP for the 400M scale and ViT-B/32 in Table 6. We first ablate different balancing thresholds \(t\). We observe that the choice of \(t=20k\) by CLIP yields the best performance for ImageNet and averaged accuracy and \(t=15k\) or \(t=35k\) are slightly worse.
To understand the key effect of balancing, we use the whole matched pool (1.6B image-text pairs) to train CLIP. Surprisingly, training on \(4\times\)_more data_ (on head entries) _significantly hurts the accuracy_ on ImageNet (61.9 vs 65.5) and averaged accuracy across 26 tasks (56.6 vs 58.2).
Balancing can also be applied online in the data loader with head entries down-sampled leading to slightly better performance (58.5 vs 58.2); see appendix for details. This is useful if head data has already been collected and one wants to train on a different distribution. The better accuracy for online balancing is explained by the larger diversity in head data.
## 5 Conclusion
In this paper, we attempt to reveal CLIP's data curation. Our MetaCLIP builds upon metadata for curation and balancing of raw data sourced from the web. Curating with metadata and balancing are essential for good data quality, significantly outperforming the use of raw data. Our experiments show that MetaCLIP performs well for different scales sourced from CommonCrawl data and outperforms CLIP's proprietary data source, without reliance on any external model. We make our pipeline for generating the data publicly available.
#### Acknowledgments
We thank Zeyuan Allen-Zhu, and Chunting Zhou for the insightful discussion and Brighid Meredith for suggestions on scaling the pipeline.
## Appendix A Appendix
### Additional Results
**Curation from DataComp-12.8B.** The concurrent work Gadre et al. (2023) released a collection of 12.8B image-text pairs from CommonCrawl from 2014-2022. We further investigate whether we can apply our algorithm to its 12.8B unfiltered pool. Although the unfiltered pool seemingly offers an opportunity to apply our algorithm to a publicly available source, our initial studies show that implicit biases may still be present in this pool. For example, we notice that all image URLs are collected as strings starting with http, which excludes relative URLs that could be frequently used by quality websites (with potentially good image-text pairs). We curate from DataComp's 12.8B unfiltered pool with \(t\)=60k, which yields the same 6% share of tail counts as \(t\)=20k does for 400M from our 1.6B pool.
When using 1B image-text pairs curated from DataComp's pool, we notice a quality drop during training compared to data curated from our pools; see Fig. 4. Our smaller 400M set is slightly better than using DataComp-1B, and our larger sets (1B, 2.5B) are significantly better.
In Table 7, we show our 400M data vs our curated DataComp-1B data at various model scales, where the same observation holds, suggesting our raw data pool is more effective.
DataComp Benchmark. We also evaluate MetaCLIP on the benchmark used by Gadre et al. (2023), which contains 38 tasks including variants of ImageNet, retrieval, VTAB, etc. For simplicity, we average the scores over each category.
Note that the prompts and class names used by Gadre et al. (2023) could differ from those used by OpenAI CLIP.
From Table 8, we can see that MetaCLIP outperforms CLIP and OpenCLIP across various model sizes. First, for the same data scale (400M), MetaCLIP outperforms OpenCLIP, which is better than CLIP on this benchmark, by +1.4% for ViT-B/16 and +2.5% for ViT-L/14, when comparing average accuracy across the 38 tasks. Second, for increasing our MetaCLIP data size to 1B we see a significant gain, especially for the larger model, from 62.2% to 65.0% average accuracy. Using our larger dataset with 2.5B and more head entries leads to a further gain to 65.5%.
### Details on Efficient Curation
Curation in Data Pipeline. Our curation algorithm does not require access to images, making it suitable for integration into a pipeline to reduce the scale of data points after parsing and before image downloading. We designed the algorithm to be modular, allowing different parts to be placed at different stages in the pipeline, as shown in Figure 5.
Specifically, sub-string matching can be placed immediately after HTML parsing to reduce data points to English-only matched pairs (a reduction of \(\sim\)50%). Balancing can be applied before image downloading to further reduce data points by \(\sim\)77%. Together this removes \(\sim\)90% of the raw data points (a keep rate of \(\sim\)0.1 \(\approx\) 0.5 \(\times\) 0.23), allowing us to curate the whole CommonCrawl since 2013, with 300B+ URL-text pairs, _without_ storing and transferring the roughly 10\(\times\) larger set of all data points.
Curation in Data Loading. We applied the balancing/sampling part of the algorithm to the data loader to adjust the data distribution on-the-fly, as sketched below. Although data points for tail entries are always sampled in Algorithm 1, the diverse pairs from the head entries are sub-sampled from a larger pool while maintaining a similar distribution as offline curation. This diversification of pairs matching the head entries improved performance, as shown in Table 6.
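A minimal sketch of such an online filter, assuming each text carries the matched_entry_ids field from Algorithm 1 and that entry_prob has been pre-computed; this follows the same per-entry sampling rule but is not the authors' loader code.

```
import random

# On-the-fly balancing in a data loader: tail entries always pass,
# head entries are down-sampled with probability t / entry_count.
def balanced_stream(example_stream, entry_prob):
    for image, text in example_stream:
        if any(random.random() < entry_prob[eid]
               for eid in text.matched_entry_ids):
            yield image, text
```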
| Model | Avg. | IN | IN Dist. Shift | VTAB | Avg. Retrieval |
| --- | --- | --- | --- | --- | --- |
| **ViT-B/32** | | | | | |
| CLIP (400M) | 51.5 | 63.4 | 48.2 | 50.5 | 48.0 |
| OpenCLIP (407M) | 52.7 | 62.9 | 48.5 | 53.0 | 50.7 |
| MetaCLIP (400M) | 53.5 | 65.5 | 50.4 | 54.1 | 50.6 |
| MetaCLIP (1B) | 54.2 | 67.3 | 51.9 | 53.6 | 51.1 |
| MetaCLIP (2.5B) | 55.4 | 67.6 | 52.3 | 55.3 | 52.6 |
| **ViT-B/16** | | | | | |
| CLIP (400M) | 55.5 | 68.3 | 54.1 | 54.4 | 50.2 |
| OpenCLIP (407M) | 56.1 | 67.0 | 52.6 | 54.9 | 53.9 |
| MetaCLIP (400M) | 57.5 | 70.8 | 55.5 | 56.7 | 53.9 |
| MetaCLIP (1B) | 58.4 | 72.4 | 57.8 | 56.3 | 54.3 |
| MetaCLIP (2.5B) | 60.0 | 72.1 | 57.7 | 59.0 | 54.0 |
| **ViT-L/14** | | | | | |
| CLIP (400M) | 61.4 | 75.5 | 61.6 | 59.5 | 53.6 |
| OpenCLIP (407M) | 59.7 | 72.7 | 57.3 | 58.6 | 55.9 |
| MetaCLIP (400M) | 62.2 | 76.2 | 61.3 | 59.8 | 57.3 |
| MetaCLIP (1B) | 65.0 | 79.0 | 64.5 | 62.5 | 58.3 |
| MetaCLIP (2.5B) | 65.6 | 79.2 | 64.6 | 64.1 | 60.1 |
| **ViT-H/14** | | | | | |
| MetaCLIP (2.5B) | 66.5 | 80.5 | 66.1 | 64.6 | 60.4 |

Table 8: Zero-shot classification and retrieval on the tasks from (Gadre et al., 2023).
### Human Study on the Effects of Curation
In this section, we study the impact of MetaCLIP curation on data distribution using human evaluation. We approach this exploration from three distinct angles: noise reduction, alignment of visual content, and task-agnostic attributes. For the first two aspects, we undertake a human study on the data quality implications of the balancing technique (outlined in Part 2 of Algorithm 1). This evaluation encompasses three dimensions: image-only, text-only, and image-text alignment. We collect an evaluation set of 100 random image-text pairs for balanced and unbalanced data, respectively, and ask annotators to score the image, text, and pair quality metrics separately, on a scale of 1 to 5.
Annotation Guidelines. Annotators follow guidelines to assess both images and texts, evaluating informativeness (how well information is conveyed) and aesthetics. For images, aesthetics considers visual elements like composition, color, lighting, balance, contrast, texture, and subject matter. For texts, aesthetics gauges factors like delimiters, sentence structure, capitalization, prefixes/suffixes, recognized words, generic words, and overall text quality. The alignment metric for image-text pairs measures the relevance between the two modalities, assessing how well the text describes the image content. Ratings are averaged across annotators for each dimension.
We show the study results in Table 9 and discuss the different criteria next.
Noise Mitigation in Image and Text. As shown in Table 9, a significant quality improvement in all three evaluation dimensions is observed after applying balancing. MetaCLIP curation has no specific hard filters such as removing shorter text, removing dates, etc. However, curation by sub-string matching and balancing has a different filtering effect. For example, a sub-string itself can never curate a date-only text. Further, balancing allows signal and noise to co-exist when they are difficult to separate with human-designed filters. For example, if one entry such as "image" or "photo" is capped to \(t=20k\), it can only contribute 0.005% of 400M data.
Visual Content Alignment. Although MetaCLIP curation does not directly involve images, it has a positive effect on aligning visual content by controlling the quality and distribution of text. First, sub-string matching increases the chance of having (visual) entities mentioned in the text, thereby improving the likelihood of finding corresponding visual content. Second, balancing favors long-tailed entries that could have more diverse visual content than a head entry (such as the text "1"). In Table 9, we observe a significant improvement in pair quality from unbalanced to balanced data.
Figure 5: Case study: Curation implementation in our data pipeline.
| Evaluation Dimension | Rating for Balanced Data | Rating for Unbalanced Data | P-value |
| --- | --- | --- | --- |
| Image | 4.60 [4.50, 4.70] | 4.36 [4.23, 4.48] | <0.001 |
| Text | 4.67 [4.56, 4.78] | 4.06 [3.82, 4.30] | <0.001 |
| Alignment | 4.41 [4.23, 4.59] | 3.72 [3.46, 3.99] | <0.001 |

Table 9: Average human rating on the effect of balancing on data quality, with confidence intervals shown in brackets. Higher ratings are better; balanced data is rated as higher quality.
### Training Setup of OpenAI CLIP vs OpenCLIP
Our work strictly follows CLIP's setup for a controlled comparison focusing on data curation and quality. We notice differences in the training setup of OpenCLIP5 and list those known to us. OpenCLIP varies the setup from CLIP (e.g., global batch size, learning schedule, etc.). Here we only list the differences for LAION-400M, which is closer to the CLIP setup. We note that DataComp differs even more, e.g., by curating images close to ImageNet training data, using a large batch size of 90k that is almost \(3\times\) larger than CLIP's, and using the CLIP model to filter data.
Footnote 5: [https://github.com/mlfoundations/open_clip](https://github.com/mlfoundations/open_clip)
### Benchmark Deduplication
Our pools are deduplicated from the benchmark/ImageNet data using a 64-bit PCA hash, derived from a similarity search model's feature embeddings with PCA reduction to 64 dimensions and sign quantization. DataComp-12.8B is already deduplicated.
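A sketch of such a hash is shown below, assuming a PCA basis already fit on the similarity model's embeddings (`pca_components` and `pca_mean` are hypothetical inputs; the embedding model and PCA fitting are not specified here).

```
import numpy as np

# 64-bit PCA hash: project an embedding onto 64 principal components and
# keep only the signs, packed into a single 64-bit integer.
def pca_hash64(embedding, pca_components, pca_mean):
    z = pca_components @ (embedding - pca_mean)        # (64,) projection
    bits = (z > 0).astype(np.uint64)                   # sign quantization
    return int((bits << np.arange(64, dtype=np.uint64)).sum())

# Pairs whose hashes collide are treated as duplicates and removed.
```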
### Negative Results Learned from Ablating CLIP Curation
We briefly describe a few ideas close to CLIP curation that did not look promising in our initial attempts and were abandoned:
1. **Self-curated Metadata**. We initially attempted to build metadata directly from the text in raw image-text pairs (i.e., using terms appearing in text above a certain threshold of counts). We rank entries by count and keep the top 500,000. Metadata built this way appeared worse. We notice that although the top frequent entries are similar to CLIP's metadata, the long-tailed part is very different. For example, entering the 500,000 budget requires at least 130 counts; in contrast, our metadata has 114K entries that have no matches at all. This approach results in worse-quality metadata including low-quality spelling/writing (instead of high-quality entries from WordNet or Wikipedia). Further, the effect of balancing saturates earlier for such data (at a larger \(t\), verified by CLIP training) since low-quality entries are also heavily concentrated in the long tail.
2. **Cased WordNet**. We also notice many cased words are missing from the metadata (e.g., WordNet is in lowercase). After adding cased WordNet entries to the metadata, we notice a performance drop on ImageNet. The reason could be that class names are more likely to appear in lowercase, and matching uppercase entries may reduce the written quality of the curated texts.
3. **Stopwords/Useless Entries Removal**. We further study whether removing stopwords and useless words such as "photo" and "image" is beneficial. This led to almost no difference, since balancing already reduces the effect of useless entries (each entry contributes on the order of 0.0002% (1/500k) of the total data points). To keep the solution simple, we do not add more artificial filters.
### Qualitative Data Examples
In Table 11, we illustrate data before/after sub-string matching and balancing. We also highlight class labels from ImageNet in the table. We mark a matched entry with a bigger font size indicating higher probability of sampling that entry. Intuitively, sub-string matching removes low quality text and balancing favors longer text with long-tail entities to improve data quality. In Table 12, we show more examples of matched text that include ImageNet tail entries.
| Hyperparameter | OpenAI CLIP / MetaCLIP | OpenCLIP (LAION-400M) | DataComp |
| --- | --- | --- | --- |
| Activation Function | QuickGELU | GELU | GELU |
| Seen Pairs | 12.8B (400M \(\times\) 32 epochs) | 13B (407M \(\times\) 32 epochs) | 12.8B |
| Batch Size | 32768 | 32768 (B/32), 33792 (B/16), 38400 (L/14) | 90112 (L/14) |
| Learning Rate | 5.0e-4 (B/32, B/16), 4.0e-4 (L/14) | 5.0e-4 (B/32) | 1e-3 (L/14) |
| Warm-up | 2k | 2k (B/32) | 10k (L/14) |

Table 10: Hyperparameters of OpenAI CLIP vs OpenCLIP on LAION-400M and DataComp-1B.
Table 11: Examples categorized by whether they pass sub-string matching and balancing, ranging from filename-like strings (e.g., "control_14ct", "product-img") to descriptive captions (e.g., "How to build a stone patio on your own"). Words in violet are metadata entries, with font size indicating the probability of being sampled, from 13pt (probability close to 0) to 22pt (probability 1); ImageNet labels in head entries are cyan.
Table 12: Examples passing both sub-string matching and balancing that contain ImageNet tail classes (e.g., "Staffordshire Bull Terrier", "trombone", "water buffalo"). Words in violet are metadata entries, with font size indicating the probability of being sampled, from 13pt (probability close to 0) to 22pt (probability 1); ImageNet labels in tail entries are cyan. |
2302.14314 | Adapter Incremental Continual Learning of Efficient Audio Spectrogram
Transformers | Continual learning involves training neural networks incrementally for new
tasks while retaining the knowledge of previous tasks. However, efficiently
fine-tuning the model for sequential tasks with minimal computational resources
remains a challenge. In this paper, we propose Task Incremental Continual
Learning (TI-CL) of audio classifiers with both parameter-efficient and
compute-efficient Audio Spectrogram Transformers (AST). To reduce the trainable
parameters without performance degradation for TI-CL, we compare several
Parameter Efficient Transfer (PET) methods and propose AST with Convolutional
Adapters for TI-CL, which has less than 5% of trainable parameters of the fully
fine-tuned counterparts. To reduce the computational complexity, we introduce a
novel Frequency-Time factorized Attention (FTA) method that replaces the
traditional self-attention in transformers for audio spectrograms. FTA achieves
competitive performance with only a factor of the computations required by
Global Self-Attention (GSA). Finally, we formulate our method for TI-CL, called
Adapter Incremental Continual Learning (AI-CL), as a combination of the
"parameter-efficient" Convolutional Adapter and the "compute-efficient" FTA.
Experiments on ESC-50, SpeechCommandsV2 (SCv2), and Audio-Visual Event (AVE)
benchmarks show that our proposed method prevents catastrophic forgetting in
TI-CL while maintaining a lower computational budget. | Nithish Muthuchamy Selvaraj, Xiaobao Guo, Adams Kong, Bingquan Shen, Alex Kot | 2023-02-28T05:11:40Z | http://arxiv.org/abs/2302.14314v2 | # Adapter Incremental Continual Learning of Efficient Audio Spectrogram Transformers
###### Abstract
Continual learning involves training neural networks incrementally for new tasks while retaining the knowledge of previous tasks. However, efficiently fine-tuning the model for sequential tasks with minimal computational resources remains a challenge. In this paper, we propose Task Incremental Continual Learning (TI-CL) of audio classifiers with both parameter-efficient and compute-efficient Audio Spectrogram Transformers (AST). To reduce the trainable parameters without performance degradation for TI-CL, we compare several Parameter Efficient Transfer (PET) methods and propose AST with Convolutional Adapters for TI-CL, which has less than 5% of trainable parameters of the fully fine-tuned counterparts. To reduce the computational complexity, we introduce a novel Frequency-Time factorized Attention (FTA) method that replaces the traditional self-attention in transformers for audio spectrograms. FTA achieves competitive performance with only a factor of the computations required by Global Self-Attention (GSA). Finally, we formulate our method for TI-CL, called Adapter Incremental Continual Learning (AI-CL), as a combination of the "parameter-efficient" Convolutional Adapter and the "compute-efficient" FTA. Experiments on ESC-50, SpeechCommandsV2 (SCv2), and Audio-Visual Event (AVE) benchmarks show that our proposed method prevents catastrophic forgetting in TI-CL while maintaining a lower computational budget.
Nithish Muthuchamy Selvaraj*\({}^{1}\), Xiaobao Guo*\({}^{1}\), Adams Kong\({}^{1}\), Bingquan Shen \({}^{2}\), Alex Kot \({}^{1}\)\({}^{1}\)Nanyang Technological University, Singapore
\({}^{2}\)DSO National Laboratories, Singapore
Footnote *: These authors contributed equally to this work
**Index Terms**: Continual Learning, Audio Spectrogram Transformer, Adapter, Self-Attention
## 1 Introduction
Continual learning of new knowledge and skill acquisition are desirable traits for intelligent machines. However, in deep learning, neural networks may forget previous knowledge when their weights are optimized for new tasks, leading to "catastrophic forgetting". Many works have been proposed to address this issue by constraining the weights of neural nets [1, 2] or using data (pseudo-data) of previous tasks [3]. A simple way to mitigate this issue is to assign task-specific sub-networks, where only the sub-network is optimized for new tasks and the other parameters are task-independent and can be shared across tasks. This approach is particularly effective for Task Incremental Continual Learning (TI-CL), which requires a task ID to route the data to the corresponding sub-network. As the model is incrementally trained on new tasks, its size grows sub-linearly.
This paper explores TI-CL of audio classifiers with Audio Spectrogram Transformers (AST) [4], which achieved state-of-the-art results in several audio benchmarks [5, 6, 7]. However, there are two main issues with AST that must be addressed for sequential training: parameter inefficiency and computational inefficiency.
**Parameter Inefficiency.** In TI-CL, the use of pre-trained transformer-based models like AST can lead to parameter inefficiency due to a large number of trainable parameters in full-finetuning for sequential tasks. This can cause overfitting, especially when the sequential tasks have limited data.
**Computational Inefficiency.** The transformer's self-attention mechanism has quadratic computational complexity. Hence, a large number of tokens extracted from larger spectrograms (from long duration audio) exponentially increases the number of computations. However, audio spectrograms cannot be resized since their characteristics are determined by the audio duration and the number of frequency bins. Resizing audio spectrograms can lead to a loss of critical information and adversely affect their quality. Therefore, transformer-based AST shows significant computational inefficiency when processing long-duration audio.
Therefore, we propose a TI-CL method based on AST and address the issues of parameter and computational efficiency. We leverage PET methods to improve the parameter efficiency of AST. Our study evaluates the efficacy of various PET
Figure 1: Adapter Incremental Continual Learning of Audio Spectrogram Transformers.
methods for AST on ESC-50 [5] and SpeechCommandsV2 [6] benchmarks and proposes Convolutional Adapters to address parameter inefficiency. Note that the performance of PET methods for AST audio classifiers has not been studied before. The adapters perform as well as fully fine-tuned models in high-resource settings and even outperform them in low-resource settings with \(<\)5% of the trainable parameters.
Next, we propose Frequency-Time factorized Attention (FTA) to address computational inefficiency in self-attention for long-duration audio spectrograms. Unlike traditional self-attention, FTA enables an arbitrary token to attend only to the frequency and temporal tokens that share the same position index in either axis, thereby leveraging the orthogonal nature of frequency and time in spectrograms (see Fig. 2). This factorization greatly reduces complexity and improves computational efficiency. To achieve both parameter and computational efficiency, we combine Convolutional Adapter and FTA for TI-CL of audio classification.
The main contributions of this paper can be summarized as follows.
* We provide an empirical study on the performance of various PET methods for AST.
* We propose TI-CL of audio classifiers with parameter-efficient AST, using Convolutional Adapters.
* We introduce a novel Frequency-Time factorized Attention (FTA) for compute-efficient AST.
* Through comprehensive experiments we demonstrate the advantages of the proposed approach for TI-CL of audio classifiers.
## 2 Related work
### Continual Learning for Audio
To prevent catastrophic forgetting in continual learning, various methods have been proposed. For example, GIM [8] incrementally adds new modules to capture drifts in input distribution, DFWF [9] uses a knowledge distillation loss to preserve memory from the original model, and static memory networks [10] introduce static memory to reduce memory usage and model complexity. Few-shot CL [11] enables fast and interactive model updates in a few-shot learning framework to expand the audio classifier to recognize novel classes, while CTR [12] addresses both catastrophic forgetting and knowledge transfer issues with a pair of continual learning plugin modules.
### Parameter Efficient Transfer
Many recent works have proposed efficient transfer learning and fine-tuning techniques for downstream tasks, such as Adapter for NLP [13] and similar methods like LoRA [14], AdaptFormer [15], and ConvPass [16]. These methods achieve efficient fine-tuning by inserting small trainable bottleneck modules at different locations inside a transformer encoder while freezing other parameters during training. Simple implementations typically involve a down-projection followed by an up-projection. Other methods tune specific parameters in the network, such as BitFit [17], which adapts the model for different tasks by tuning the bias terms of the transformer layers, LayerNorm Tune [18], which tunes the affine transformation parameters in the encoder normalization layers, and Prompt Tuning [19], which optimizes a set of learnable latent tokens that are prepended to the input sequence at every encoder layer for transfer learning.
## 3 Methodology
### Continual Learning (CL) and AST audio classifier
The objective of continual learning is to sequentially train a parameterized model \(f_{\mathbf{\theta}}\) over a set of \(n\) tasks \(D\in\{D_{1},D_{2},...,D_{n}\}\). Each task is defined by \(D_{i}\in\{X_{i},Y_{i}\},i\in[1,n]\), where \(X\) is the set of input samples and \(Y\) is the set of corresponding labels. The parameterized function \(f_{\mathbf{\theta}}:x\xrightarrow{}y\) maps the input \(x\in X\) to the corresponding label \(y\in Y\) and the goal of CL is to train \(f_{\mathbf{\theta}}\) such that it can correctly predict the label \(y\) for an unseen arbitrary input \(x\) sampled across \(D\).
If \(D\) is an audio classification task, then \(f_{\mathbf{\theta}}\) is a pre-trained AST model with total weights \(\mathbf{\theta}\), \(x\in X\) is a spectrogram image, and \(y\in Y\) is the corresponding audio class label. \(f_{\mathbf{\theta}}\) extracts tokens \(\mathbf{Z}\in\mathbb{R}^{(MT+1)\times d}\) from \(x\), where \(M\) and \(T\) denote the numbers of tokens along the frequency and time axes, \(d\) is the embedding dimension, and \(1\) denotes the class token. These tokens are processed by a series of 12 transformer encoders with Multi-Head Self-Attention (MHSA), Multi-Layer Perceptron (MLP), and Layer Normalization (LN) sublayers, which can be formulated as
\[\begin{split}\mathbf{Z^{\prime}_{l}}&=MHSA(LN_{1}(\mathbf{Z_{l-1}}))+\mathbf{Z_{l-1}},\\ \mathbf{Z_{l}}&=MLP(LN_{2}(\mathbf{Z^{\prime}_{l}}))+\mathbf{Z^{\prime}_{l}},\end{split} \tag{1}\]
where \(l\) denotes the layer number and \(\mathbf{Z_{l}}\) denotes the tokens extracted from layer \(l\).
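For concreteness, a minimal PyTorch sketch of the pre-norm encoder layer in Eq. (1) is given below; the layer sizes follow the ViT/B-16 configuration used later (\(d=768\), 12 heads), while details such as dropout are omitted and may differ from the actual AST implementation.

```python
import torch
import torch.nn as nn

class EncoderLayer(nn.Module):
    """Pre-norm transformer encoder block of Eq. (1)."""
    def __init__(self, d: int = 768, n_heads: int = 12, mlp_ratio: int = 4):
        super().__init__()
        self.ln1 = nn.LayerNorm(d)
        self.mhsa = nn.MultiheadAttention(d, n_heads, batch_first=True)
        self.ln2 = nn.LayerNorm(d)
        self.mlp = nn.Sequential(
            nn.Linear(d, mlp_ratio * d), nn.GELU(), nn.Linear(mlp_ratio * d, d)
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # Z'_l = MHSA(LN_1(Z_{l-1})) + Z_{l-1}
        h = self.ln1(z)
        z = self.mhsa(h, h, h)[0] + z
        # Z_l = MLP(LN_2(Z'_l)) + Z'_l
        return self.mlp(self.ln2(z)) + z

tokens = torch.randn(2, 109, 768)   # a batch of (MT+1)=109 tokens (M=12, T=9)
out = EncoderLayer()(tokens)        # same shape: (2, 109, 768)
```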
### Adapter Incremental Continual Learning of AST
Task Incremental Continual Learning is one of the three scenarios for CL [20]; it assumes that the tasks \(D_{i}\) are disjoint and that the task ID \(i\) is known both during training and inference. Fully fine-tuning \(f_{\mathbf{\theta}}\) on the sequential tasks by optimizing \(\mathbf{\theta}\) may not be efficient and may lead to overfitting. A parameter-incremental approach to TI-CL instead trains a parameterized network with multiple task-specific sub-modules, denoted as \(f_{\mathbf{\theta}+\delta\mathbf{\theta}}\), where \(\mathbf{\theta}\) is the shared task-independent parameter, \(\delta\mathbf{\theta}\in\{\mathbf{\theta_{1}},\mathbf{\theta_{2}},...,\mathbf{\theta_{n}}\}\) are the task-specific parameters and \(\mathbf{\theta}\gg\delta\mathbf{\theta}\).
We propose an adapter incremental method for TI-CL called Adapter Incremental Continual Learning (AI-CL), where a Convolutional Adapter (CA) is incrementally added and trained for each task while keeping the shared \(\mathbf{\theta}\) frozen. We denote the weights of task-specific CA as \(\delta\mathbf{\theta_{i}}\) for every new task \(D_{i}\). CA has a bottleneck structure, which consists of a down-projection followed by an up-projection with an additional 2D convolution layer in between. CA processes any input tokens \(\mathbf{z}\) as,
\[CA(\mathbf{z})=\mathbf{W_{up}}(Conv2D^{*}(\mathbf{W^{*}_{down}}(\mathbf{z}))), \tag{2}\]
where \(\mathbf{W_{down}}\in\mathbb{R}^{d\times d^{\prime}}\), \(\mathbf{W_{up}}\in\mathbb{R}^{d^{\prime}\times d}\), \(d^{\prime}\ll d\) and \(*\) denotes the non-linear GELU activation. CA runs parallel to
Figure 2: Frequency-Time factorized Attention for a (yellow) token along the frequency and time axis.
both MHSA and MLP layers, which can be represented as,
\[\begin{split}\mathbf{Z^{\prime}_{l}}&=MHSA(LN_{1}(\mathbf{Z_{l-1 }}))+\mathbf{Z_{l-1}}+CA_{1}(LN_{1}(\mathbf{Z_{l-1}})),\\ \mathbf{Z_{l}}&=MLP(LN_{2}(\mathbf{Z^{\prime}_{l}}))+\mathbf{Z^{ \prime}_{l}}+CA_{2}(LN_{2}(\mathbf{Z^{\prime}_{l}})).\end{split} \tag{3}\]
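A minimal PyTorch sketch of Eq. (2) follows. Reshaping the patch tokens onto the \((M,T)\) grid for the 2D convolution and routing the class token around the convolution are our assumptions here, as are the kernel size (3) and the bottleneck width; the actual implementation may differ.

```python
import torch
import torch.nn as nn

class ConvAdapter(nn.Module):
    """Bottleneck adapter of Eq. (2): down-projection, 2D conv, up-projection.
    Patch tokens are reshaped onto the (M, T) grid for the convolution; the
    [CLS] token bypasses the conv. Both choices are assumptions, not the
    confirmed details of the released code."""
    def __init__(self, d: int = 768, d_bottleneck: int = 48):
        super().__init__()
        self.down = nn.Linear(d, d_bottleneck)
        self.conv = nn.Conv2d(d_bottleneck, d_bottleneck, 3, padding=1)
        self.up = nn.Linear(d_bottleneck, d)
        self.act = nn.GELU()

    def forward(self, z: torch.Tensor, M: int, T: int) -> torch.Tensor:
        h = self.act(self.down(z))                        # (B, MT+1, d')
        cls, patches = h[:, :1], h[:, 1:]
        B, _, dp = patches.shape
        grid = patches.transpose(1, 2).reshape(B, dp, M, T)
        patches = self.act(self.conv(grid)).reshape(B, dp, M * T).transpose(1, 2)
        return self.up(torch.cat([cls, patches], dim=1))  # (B, MT+1, d)

z = torch.randn(2, 12 * 9 + 1, 768)     # [CLS] + 12x9 patch tokens
delta = ConvAdapter()(z, M=12, T=9)     # added to the residual stream, Eq. (3)
```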
The proposed AI-CL method using CA is parameter efficient since only the CA weights \(\delta\mathbf{\theta_{i}}\) are trainable, and storing these weights occupies little space. The backbone weights \(\mathbf{\theta}\) are frozen and shared across tasks, both during training and inference. During inference, when a test audio spectrogram \(x\) is passed along with the task ID \(i\), the AST model routes the tokens \(\mathbf{Z}\) to the CA with parameters \(\delta\mathbf{\theta_{i}}\) and the corresponding classifier. The AST model with multiple task-specific CAs is illustrated in Fig. 1, and a sketch of the routing logic is given below.
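The container below illustrates this routing step in PyTorch. The names (`adapters`, `heads`) and the `backbone(spec, adapter=...)` interface are hypothetical stand-ins, not the API of the released code; the point is only that the backbone is frozen while one (adapter, classifier) pair per task is trained and selected by the task ID.

```python
import torch.nn as nn

class AdapterIncrementalAST(nn.Module):
    """Hypothetical container for AI-CL: one frozen backbone plus one
    (adapter, classifier) pair per task, selected by the task ID."""
    def __init__(self, backbone: nn.Module):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False          # shared weights stay frozen
        self.adapters = nn.ModuleDict()      # task_id -> Convolutional Adapters
        self.heads = nn.ModuleDict()         # task_id -> linear classifier

    def add_task(self, task_id: str, adapter: nn.Module,
                 n_classes: int, d: int = 768):
        self.adapters[task_id] = adapter     # only these weights are trained
        self.heads[task_id] = nn.Linear(d, n_classes)

    def forward(self, spec, task_id: str):
        # Assumed interface: the backbone applies the selected adapter in
        # parallel to its MHSA/MLP sublayers, as in Eq. (3).
        feats = self.backbone(spec, adapter=self.adapters[task_id])
        return self.heads[task_id](feats)
```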
### Frequency-Time factorized Attention (FTA)
While the AI-CL approach is parameter-efficient, the use of self-attention in AST results in a quadratic increase in computations (_i.e._, the number of floating point operations or FLOPs) for larger spectrograms. To address this issue, prior alternatives to self-attention either limit self-attention to a local window [21] or factorize self-attention along two orthogonal axes [22], but these methods were developed for images and videos.
Inspired by the factorization approach [22], we propose Frequency-Time factorized Attention (FTA) in the AI-CL method as shown in Figure 2. It factorizes self-attention across the frequency and time axes of a spectrogram, by masking out the undesired tokens. This approach makes AST more computationally efficient, with attention along the frequency (vertical) axis learning the distribution of various frequency components at a given time interval, and attention along the time (horizontal) axis learning how a frequency component evolves over time. The only exception is the \([CLS]\) token, which attends to all the tokens (including itself) since it must summarize the semantic information in a spectrogram. For tokens \(\mathbf{Z}\in\mathbb{R}^{(MT+1)\times d}\), the computation complexity \(\mathcal{O}\) of Global Self-Attention (GSA) and FTA can be calculated as follows,
\[\begin{split}\mathcal{O}_{GSA}&=(MT+1)^{2}*d,\\ \mathcal{O}_{FTA}&=(MT(M+T+1)+1)*d,\end{split} \tag{4}\]
where \((M+T)\ll MT\). Thus, when \(M\) and \(T\) grow, FTA requires far fewer computations than GSA. Empirically, we show that the proposed Frequency-Time factorized Attention (FTA) achieves competitive performance to global self-attention with only a fraction of the computations.
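The counting behind Eq. (4) can be checked mechanically. The sketch below builds a boolean FTA attention mask (same frequency row, same time column, plus the \([CLS]\) row and column; letting patch tokens attend to \([CLS]\) is our reading of the count in Eq. (4)) and verifies the \(\mathcal{O}_{FTA}/d\) entries of Table 1.

```python
import numpy as np

def fta_mask(M: int, T: int) -> np.ndarray:
    """Boolean attention mask over (MT+1) tokens; index 0 is [CLS]."""
    n = M * T + 1
    mask = np.zeros((n, n), dtype=bool)
    mask[0, :] = True                     # [CLS] attends to every token
    mask[:, 0] = True                     # every token attends to [CLS]
    rows = np.arange(M * T) // T          # frequency index of each patch
    cols = np.arange(M * T) % T           # time index of each patch
    same = (rows[:, None] == rows[None, :]) | (cols[:, None] == cols[None, :])
    mask[1:, 1:] = same
    return mask

for M, T in [(12, 9), (12, 49), (12, 100)]:    # SCv2 / ESC-50 / AVE, Table 1
    n_pairs = fta_mask(M, T).sum()
    assert n_pairs == M * T * (M + T + 1) + 1  # O_FTA/d of Eq. (4)
    print(M, T, n_pairs, (M * T + 1) ** 2)     # FTA vs GSA pair counts
```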
## 4 Results
### Experimental Setup
**Datasets.** The datasets used for PET evaluation and TI-CL experiments are:
* ESC-50 [5], which contains 2,000 5-second audio recordings organized into 50 classes for environmental sound classification. The standard 5-fold cross-validation is used unless otherwise specified.
* Speech Commands V2 (SCv2) [6], which includes 105k 1-second recordings of 35 speech classes for speech recognition. The standard training and test set split is used with 84,843 and 11,005 samples respectively.
* AVE [23], an event localization dataset of 4,143 samples covering 28 events with a duration of 10 seconds (long duration). Only the audio modality is used, and the original train-test split for audio classification is followed.
**Model.** Our system is built upon the AST model, an ImageNet pre-trained ViT/B-16 model with 12 transformer encoders. We process audio input by converting the waveform into a log mel spectrogram with 128 Mel bins, a 25ms Hamming window, and a hop length of 10ms, without any data augmentation. Tokens are extracted using a convolutional feature extractor with a kernel size of 16, a stride of 10, and a dimensionality of 768, with position embeddings added via bilinear interpolation. The model is trained using Adam optimizer with a learning rate of 3e-4 and cross-entropy loss, with batch sizes of 128/32/12 for the SCv2/ESC-50/AVE datasets. We train the model for 5/20/15 epochs on the respective datasets.
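As a rough illustration of this front end, the sketch below computes a log mel spectrogram with torchaudio. The 16 kHz sampling rate and the `n_fft` choice are assumptions not stated above, and the actual AST preprocessing may use a different backend; for a 1 s clip this yields the \([128,101]\) shape listed in Table 1.

```python
import torch
import torchaudio

# 128 Mel bins, 25 ms Hamming window, 10 ms hop; 16 kHz mono is assumed.
SAMPLE_RATE = 16000
mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=SAMPLE_RATE,
    n_fft=512,                              # >= win_length; illustrative
    win_length=int(0.025 * SAMPLE_RATE),    # 400 samples = 25 ms
    hop_length=int(0.010 * SAMPLE_RATE),    # 160 samples = 10 ms
    n_mels=128,
    window_fn=torch.hamming_window,
)

waveform = torch.randn(1, SAMPLE_RATE)      # stand-in for a 1 s clip
log_mel = torch.log(mel(waveform) + 1e-6)   # log compression
print(log_mel.shape)                        # torch.Size([1, 128, 101])
```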
### Evaluation of PET methods
While several PET methods have been proposed for NLP and Vision tasks, their effectiveness in audio classification remains largely unexplored. In this study, we evaluated several PET methods on the ESC-50 and SCv2 datasets, and found that AdaptFormer [15] and ConvPass [16] achieved the highest performance (see Table 2). The Linear method in Table 2 simply adds a trainable linear classification layer. Notably, ConvPass achieved comparable performance to full fine-tuning on SCv2 (with 2.7k samples per class), and even outperformed it on ESC-50 (with only 40 samples per class) while using less than 5% of trainable parameters. The evaluation provides compelling evidence for the effectiveness of a parameter-efficient strategy. Therefore, we adopted the Convolutional Adapter for further investigation in TI-CL.
### Adapter Incremental Continual Learning of AST
**Formulation.** The TI-CL setup consists of three tasks: SCv2, ESC-50, and AVE, which are performed in a sequential order.
Figure 3: Performance of the AST model in TI-CL setup for three training modes.
In each task of the TI-CL, only the corresponding dataset is available for training, and the datasets from previous tasks are no longer available. Only the test data of previous tasks are used to evaluate the model performance after training on the current task.
**Training Modes.** To demonstrate the proposed approach's advantages, we trained the AST model in three different modes, following the sequential training order. These modes are:
* Model Sequential: The same AST model is trained repeatedly on new tasks.
* Model Incremental: For every new task, a new AST model is trained independently.
* Adapter Incremental: The proposed approach described in Section 3, where new adapter modules are added to the frozen backbone with FTA for new tasks.
The first two modes rely on GSA, and the ESC-50 task is evaluated with a single fold.
**Performance vs Parameter-Efficiency.** Figure 3 displays the performance of the AST model for three training modes. In the Model Sequential setting, catastrophic forgetting occurred, where the model weights optimized for a new task forgot the knowledge gained from previous tasks, leading to a significant performance drop. However, the Model Incremental setting trained the models independently for each task, thereby resolving this issue. The proposed Adapter Incremental method also addressed the catastrophic forgetting issue by training independent task-specific adapter modules and showed competitive performance on all three tasks. However, the Model Sequential and Model Incremental settings were less efficient than the Adapter Incremental method in terms of total model parameters and trainable parameters, as illustrated in Table 3. Note that the total number of parameters is that required for inference upon the completion of sequential training on three tasks. The Model Incremental setting had a large number of total parameters, and both Model Sequential and Model Incremental settings required nearly 25 times more trainable parameters than Adapter Incremental. Overall, the proposed Adapter Incremental method for TI-CL combined the best of performance and parameter efficiency, delivering stable performance and minimizing the number of trainable parameters. Also, the Adapter Incremental setting has a substantially lower storage cost because only the adapter weights need to be saved, unlike the other two settings, which store the weights of the entire model(s).
### Impact of FTA
We conducted a study to compare the computational efficiency and performance of our proposed FTA with Global Self-Attention (GSA) on three datasets, each with varying maximum audio durations. In Table 1, we summarize the details of our comparison. Our results showed that FTA required significantly fewer computations than GSA, especially with longer audio durations (the larger spectrograms). To further evaluate the performance of FTA and GSA, we implemented both methods using the Convolutional Adapter model and measured their audio classification accuracies on the three datasets. The results are presented in Table 4. We found that FTA performed competitively with GSA in terms of accuracy, but with only a fraction of the computational resources required by self-attention. Overall, our study demonstrates that FTA is a promising approach for audio classification tasks, as it achieves comparable accuracy to GSA while using significantly fewer computational resources.
## 5 Conclusions
In this work, we proposed a new method called Adapter Incremental Continual Learning (AI-CL) for audio classification in the context of Task Incremental Continual Learning (TI-CL) of AST audio classifiers. AI-CL improved parameter efficiency with the introduction of Convolutional Adapters for AST. To enhance compute efficiency for longer audio streams, we proposed a new method called Frequency-Time factorized Attention. Our experiments have shown that AI-CL is both parameter-efficient and compute-efficient. AI-CL enables continual learning with minimal resources, which can be scaled effectively for a large number of tasks.
## 6 Acknowledgements
This work was carried out at the Rapid-Rich Object Search (ROSE) Lab, Nanyang Technological University, Singapore. The research is supported by the DSO National Laboratories, under the project agreement No. DSOCL21238.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline Dataset & Duration & Spectrogram Shape & Freq (M Tokens) & Time (T Tokens) & \(\mathcal{O}_{GSA}/d\) & \(\mathcal{O}_{FTA}/d\) & \(k\) \\ \hline SCv2 & 1s & [128,101] & 12 & 9 & 11881 & 2377 & 0.2 \\ ESC-50 & 5s & [128,501] & 12 & 49 & 346921 & 36457 & 0.105 \\ AVE & 10s & [128,1006] & 12 & 100 & 1442401 & 135601 & 0.094 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Computational efficiency of the proposed FTA. \(k\) denotes the factor of GSA computations required by FTA.
\begin{table}
\begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{**Params (Million)**} & \multicolumn{2}{c}{Accuracy (\%)} \\ \cline{3-4} & & **ESC-50** & **SCv2** \\ \hline Linear & 0.26 & 71.05 & 81.44 \\ LayerNorm Tune [18] & 0.27 & 72.75 & 89.2 \\ BitFit [17] & 0.32 & 72 & 87.91 \\ AdaptFormer [15] & 1.43 & 83 & 92.3 \\ Prompt Tuning [19] & 2.17 & 78.85 & 91.64 \\ LoRA [14] & 2.6 & 79.05 & 92.14 \\ Houlsby [13] & 2.62 & 69.75 & 90.83 \\ \hline ConvPass [16] & **3.5** & **83.3** & **93.42** \\ \hline Full Fine Tuning & **86.33** & **82.3** & **94.58** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Evaluation of PET methods for AST.
\begin{table}
\begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{Accuracy (\%)} \\ \cline{2-4} & SCv2 & ESC-50 & AVE (Audio) \\ \hline GSA & 93.57 & 85.25 & 69.1 \\ FTA & 92.81 & 83 & 66.42 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Performance of FTA vs GSA on three tasks.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Method & Trainable Params & Total Params & Storage \\ \hline Model Seq. & 86.5M & 86.62M & 348MB \\ Model Inc. & 86.5M & 259.63M & 1.02GB \\ Adapter Inc. & 3.5M & 96.6M & 47MB \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of parameter and storage cost for three training modes in TI-CL setup. |
2305.19697 | Interaction-induced Liouvillian skin effect in a fermionic chain with a
two-body loss | Despite recent intensive research on topological aspects of open quantum
systems, effects of strong interactions have not been sufficiently explored. In
this paper, we demonstrate that complex-valued interactions induce the
Liouvillian skin effect by analyzing a one-dimensional correlated model with
two-body loss. We show that, in the presence of complex-valued interactions,
eigenmodes and eigenvalues of the Liouvillian strongly depend on boundary
conditions. Specifically, we find that complex-valued interactions induce
localization of eigenmodes of the Liouvillian around the right edge under open
boundary conditions. To characterize the Liouvllian skin effect, we define the
topological invariant by using the Liouvillian superoperator. Then, we
numerically confirm that the topological invariant captures the Liouvillian
skin effect. Furthermore, the presence of the localization of eigenmodes
results in the unique dynamics observed only under open boundary conditions:
particle accumulation at the right edge in transient dynamics. Our result paves
the way to realize topological phenomena in open quantum systems induced by
strong interactions. | Shu Hamanaka, Kazuki Yamamoto, Tsuneya Yoshida | 2023-05-31T09:43:23Z | http://arxiv.org/abs/2305.19697v2 | # Interaction-induced Liouvillian skin effect in a fermionic chain with two-body loss
###### Abstract
Despite recent intensive research on topological aspects of open quantum systems, effects of strong interactions have not been sufficiently explored. In this paper, we demonstrate that interactions induce the Liouvillian skin effect by analyzing a one-dimensional correlated model with two-body loss. We show that, in the presence of interactions, eigenmodes and eigenvalues of the Liouvillian strongly depend on boundary conditions. Specifically, we find that interactions induce localization of eigenmodes of the Liouvillian around the right edge under open boundary conditions. To characterize the Liouvillian skin effect, we define the topological invariant by using the Liouvillian superoperator. Then, we numerically confirm that the topological invariant captures the Liouvillian skin effect. Furthermore, the presence of the localization of eigenmodes results in the unique dynamics observed only under open boundary conditions: particle accumulation at the right edge in transient dynamics. Our result paves the way to realize topological phenomena in open quantum systems induced by strong interactions.
## I Introduction
In the past decade, a lot of theoretical and experimental studies have uncovered topological aspects of condensed matter systems [1; 2; 3; 4; 5; 6; 7; 8; 9; 10]. In particular, it has been elucidated that strong correlations alter topological phases and lead to novel phenomena. For example, it has turned out that interactions change the \(\mathbb{Z}\)-classification to the \(\mathbb{Z}_{8}\)-classification for one-dimensional topological superconductors [11; 12], and another study has shown that strong correlations generate topological Mott phases [13]. Moreover, in Ref. [14], the interaction-enabled topological insulator has been proposed, which has no counterpart in noninteracting systems.
On the other hand, non-Hermitian physics has attracted broad interest in classical and open quantum systems [15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33]. One of the most remarkable phenomena induced by non-Hermiticity is the non-Hermitian skin effect, which is characterized by the extreme sensitivity of eigenvalues and eigenstates to boundary conditions [34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44]. The non-Hermitian skin effect has been experimentally observed in ultracold \({}^{87}\)Rb atoms [45] as well as electric circuits [46], quantum walks [47], and mechanical metamaterials [48]. In noninteracting systems, theoretical studies have shown that the non-Hermitian skin effect is caused by the nontrivial point-gap topology, which is intrinsic to non-Hermitian systems [35; 36; 37]. Furthermore, the non-Hermitian skin effect has been extended to open quantum systems following the Lindblad master equation [39; 49; 50; 51; 52; 53]. Especially, the Liouvillian skin effect manifests as the extreme dependence of eigenvalues and eigenmodes of the Liouvillian on boundary conditions. In particular, the eigenmode localized near the edge is referred to as the skin mode. It has been pointed out that the Liouvillian skin effect has a striking influence on the relaxation processes. Specifically, it has been reported that the maximal relaxation time to the steady state can diverge while maintaining the Liouvillian gap finite [50; 53].
In addition to the above progress of the non-Hermitian topological band theory, it has become possible to implement dissipative correlated systems in ultracold atoms [54; 55; 56; 57; 58; 59; 60; 61; 62]. This development has opened up a new direction in studies of novel phases characterized by spontaneous symmetry breaking and critical phenomena in open quantum systems. Previous studies have revealed that particle losses induce unique phenomena. In particular, two-body loss brings about unusual behavior, e.g., the sign reversal of magnetic correlations [54; 63]. Moreover, a lot of theoretical studies have been conducted on a variety of quantum many-body phenomena with atom losses [64; 65; 66; 67; 68; 69; 70], such as unconventional superfluid phase transitions in a dissipative BCS model [71; 72] and anomalous dissipation-induced renormalization-group flows in a non-Hermitian Kondo lattice [73].
In view of the pivotal role of interactions in enriching topological phases in Hermitian systems and inducing unique phenomena in open quantum systems, one may naturally expect the presence of novel phenomena induced by the interplay between strong interactions and non-Hermitian topology. So far, the effects of interactions on non-Hermitian topological phases have been studied in several works [74; 75; 76; 77; 78; 79; 80; 81; 82; 83; 84; 85; 86; 87; 88; 89; 90; 91; 92]. However, previous studies have mainly focused on the effective Hamiltonian, which captures the time evolution of a single trajectory between successive quantum jumps [93]. Thus, it seems that the effects of interactions on the topological properties of the Liouvillian remain unclear [94; 95; 96; 97; 98]. More specifically, whether many-body interactions can induce the Liouvillian skin effect has not been addressed.
In this work, we demonstrate that interactions can induce the Liouvillian skin effect in one-dimensional open quantum systems. Specifically, we analyze the correlated
fermionic systems with two-body loss. We show that owing to strong interactions, eigenmodes and eigenvalues of the Liouvillian become extremely sensitive to boundary conditions. In particular, eigenmodes of the Liouvillian exhibit localization near the edge. To characterize the Liouvillian skin effect, we introduce the topological invariant defined by the Liouvillian superoperator. Then, we numerically reveal that the above topological invariant characterizes the Liouvillian skin effect. Moreover, the Liouvillian skin effect significantly affects the dynamics. In particular, in transient dynamics, particles accumulate near the right edge under open boundary conditions (OBC).
The rest of this paper is organized as follows. In Sec. II, we first introduce the dissipative one-dimensional correlated model. We then briefly explain the methods to analyze the Lindblad equation via the vectorization of the density matrix. Section III provides the definition of the topological invariant and the right-state particle density, which measures the degree of localization of eigenmodes in many-body systems. Then, in Sec. IV, a numerical demonstration of the interaction-induced Liouvillian skin effect is conducted. We give the conclusions in Sec. V. In Appendix A, we discuss the relation between the symmetry of the Liouvillian and the topological number. In Appendix B, we compute the topological number analytically and give the characterization of the Liouvillian skin effect reported in Ref. [50]. We numerically show the absence of the Liouvillian skin effect in noninteracting systems in Appendix C. Appendix D is devoted to the sensitivity of eigenvalues of the Liouvillian to boundary conditions. We provide the derivation of an alternative method for calculating the topological number in Appendix E. In Appendix F, we demonstrate that the Liouvillian skin effect survives for other configurations of down-spins.
## II Model and method
### Falicov-Kimball model with two-body loss
We consider the two-orbital Falicov-Kimball model [99]
\[H=\sum_{\langle ij\rangle\alpha\beta}h_{i\alpha j\beta}c^{\dagger}_{i\alpha \uparrow}c_{j\beta\uparrow}+U\sum_{j}n_{jb\uparrow}n_{jb\downarrow}, \tag{1}\]
where \(c^{\dagger}_{j\alpha\sigma}(c_{j\alpha\sigma})\) is a fermionic creation (annihilation) operator at site \(j=1,\cdots,L\) in orbital \(\alpha=a,b\) with spin \(\sigma=\uparrow,\downarrow\). \(h_{i\alpha j\beta}\) is the hopping matrix element between site \(i\) in orbital \(\alpha\) and site \(j\) in orbital \(\beta\) in the spin-up sector. \(U\) denotes the strength of interactions. The summation \(\langle ij\rangle\) in the first term runs over all pairs of nearest-neighbor sites \(i\) and \(j\). By applying the Fourier transformation to the first term in Eq. (1), the Bloch Hamiltonian \(h_{\alpha\beta}(k)\) in the orbital space reads
\[h(k)=b_{2}(k)\sigma_{2}+b_{3}(k)\sigma_{3}, \tag{2}\]
with
\[b_{2} =2t_{h}-0.5t_{h}\sin k, \tag{3a}\] \[b_{3} =2t_{h}\cos k. \tag{3b}\]
Here, \(\sigma_{j}\) (\(j=1,2,3\)) denote the Pauli matrices in the orbital space. The Hamiltonian given in Eq. (1) is obtained from the two-orbital Hubbard model by turning off the hopping of fermions in the down-spin states. It is worth noting that the above model breaks the inversion symmetry, which is essential for the emergence of the interaction-induced Liouvillian skin effect with local dissipation (see Appendix A).
When dissipation is introduced into this model, under the Markov approximation, the dynamics is described by the Lindblad equation [100; 101]
\[\frac{d\rho}{dt}=\mathscr{L}(\rho)=-i[H,\rho]+\sum_{j}\biggl{[}L_ {j}\rho L_{j}^{\dagger}-\frac{1}{2}\{L_{j}^{\dagger}L_{j},\rho\}\biggr{]}. \tag{4}\]
Here, \(\mathscr{L}\) denotes the Liouvillian, which is the superoperator acting on the density matrix \(\rho\), the operator \(H\) is the Hamiltonian, and the Lindblad operator \(L_{j}\) characterizes the effect of dissipation. The Lindblad operator is given by the on-site two-body loss
\[L_{j}=\sqrt{2\gamma}c_{jb\uparrow}c_{jb\downarrow}. \tag{5}\]
We decompose the Liouvillian \(\mathscr{L}(\rho)\) as
\[\mathscr{L}(\rho)=\mathscr{L}_{0}(\rho)+\mathscr{L}_{\rm J}(\rho), \tag{6}\]
where we have introduced
\[\mathscr{L}_{0}(\rho)=-i(H_{\text{eff}}\rho-\rho H_{\text{eff}}^{\dagger}) \tag{7}\]
and
\[\mathscr{L}_{\rm J}(\rho)=\sum_{j}L_{j}\rho L_{j}^{\dagger}. \tag{8}\]
Here, the non-Hermitian Hamiltonian given by
\[H_{\text{eff}} =H-\frac{i}{2}\sum_{j}L_{j}^{\dagger}L_{j}\] \[=\sum_{\langle ij\rangle\alpha\beta}h_{i\alpha j\beta}c^{\dagger }_{i\alpha\uparrow}c_{j\beta\uparrow}+(U-i\gamma)\sum_{j}n_{jb\uparrow}n_{jb\downarrow} \tag{9}\]
describes the dynamics of the single quantum trajectory between the quantum jumps [93]. In the following, we demonstrate that the complex-valued interaction \(U-i\gamma\) plays a crucial role in inducing the Liouvillian skin effect.
### Vectorization of the density matrix
In this subsection, we rewrite the Liouvillian superoperator \(\mathscr{L}\) as an operator \(\mathcal{L}\) acting on the doubled Hilbert
space by vectorizing the density matrix. Following the procedure of Refs. [102; 103; 104; 105], we identify the density matrix \(\rho\) as a vector \(|\rho\rangle\rangle\) in the doubled Hilbert space \(\mathcal{H}\otimes\mathcal{H}\) through the mapping
\[\rho=\sum_{ij}\rho_{ij}|i\rangle\langle j|\mapsto|\rho\rangle\rangle=\sum_{ij} \rho_{ij}|i\rangle\otimes|j\rangle. \tag{10}\]
We note that the first (second) space of the doubled Hilbert space \(\mathcal{H}\otimes\mathcal{H}\) is referred to as the ket (bra) space [104]. When the density matrix \(\rho\) is given by the vectorized form \(|\rho\rangle\rangle\), the Liouvillian superoperator \(\mathscr{L}\) is written as the operator \(\mathcal{L}\) that acts on the doubled Hilbert space
\[\mathcal{L}=\mathcal{L}_{0}+\mathcal{L}_{\rm J}. \tag{11}\]
Here, we define
\[\mathcal{L}_{0}=-i\Big{(}H_{\rm eff}\otimes I-I\otimes H_{\rm eff}^{*}\Big{)} \tag{12}\]
and
\[\mathcal{L}_{\rm J}=\sum_{j}L_{j}\otimes L_{j}^{*}, \tag{13}\]
where \(I\) is the identity operator acting on the ket or bra space [106]. Thus, the Liouvillian superoperator \(\mathscr{L}\) is mapped to the non-Hermitian operator \(\mathcal{L}\) acting on the doubled Hilbert space. After the vectorization of the density matrix, the \(n\)-th eigenmode \(|\rho_{\rm R}^{(n)}\rangle\rangle\) and the \(n\)-th eigenvalue \(\Lambda_{n}\) are obtained by solving the eigenvalue equation
\[\mathcal{L}|\rho_{\rm R}^{(n)}\rangle\rangle=\Lambda_{n}|\rho_{\rm R}^{(n)} \rangle\rangle, \tag{14}\]
for \(n=1,\cdots,\dim\,\mathcal{L}\). As demonstrated in Sec. IV, eigenmodes and eigenvalues of the Liouvillian exhibit a strong dependence on boundary conditions.
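As a concrete illustration of Eqs. (11)-(14), the following numpy sketch builds the matrix \(\mathcal{L}\) for arbitrary small matrices \(H_{\rm eff}\) and \(L_{j}\); the function names are ours. The row-major (C-order) flattening of \(\rho\) matches the convention of Eq. (10), under which \(A\rho B\mapsto(A\otimes B^{T})|\rho\rangle\rangle\).

```python
import numpy as np

def liouvillian(H_eff: np.ndarray, jump_ops) -> np.ndarray:
    """Matrix of Eqs. (11)-(13) in the doubled Hilbert space."""
    I = np.eye(H_eff.shape[0])
    L = -1j * (np.kron(H_eff, I) - np.kron(I, H_eff.conj()))
    for Lj in jump_ops:
        L += np.kron(Lj, Lj.conj())
    return L

# Consistency check on a random 3-level system: the superoperator action on
# rho agrees with the matrix acting on the row-major vec(rho).
rng = np.random.default_rng(0)
H = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))   # plays H_eff
Lj = rng.normal(size=(3, 3))
rho = rng.normal(size=(3, 3))
lhs = liouvillian(H, [Lj]) @ rho.flatten()                   # C-order vec
rhs = (-1j * (H @ rho - rho @ H.conj().T) + Lj @ rho @ Lj.conj().T).flatten()
assert np.allclose(lhs, rhs)
eigvals, eigmodes = np.linalg.eig(liouvillian(H, [Lj]))      # Eq. (14)
```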
### Two-body loss process
Let us consider the two-body loss process as illustrated in Fig. 1. First, we consider an initial state where only one fermion is in an up-spin state with down-spin configurations \(\{n_{\downarrow}\}\). As the commutation relation
\[[\mathcal{L}_{0},N_{\uparrow}\otimes N_{\uparrow}]=[\mathcal{L}_{0},n_{jb \downarrow}\otimes n_{jb\downarrow}]=0 \tag{15}\]
indicates that the density matrix \(|\rho\rangle\rangle\) is labeled by the total number of fermions in up-spin states \(N_{\uparrow}\) and down-spin configurations \(\{n_{\downarrow}\}\), the density matrix for \(N_{\uparrow}=1\) with down-spin configurations \(\{n_{\downarrow}\}\) is spanned by the basis
\[|(N_{\uparrow}=1)\rangle\rangle=c_{j_{1}\alpha\uparrow}^{\dagger}\otimes c_{j_{2}\beta\uparrow}^{\dagger}\ |\{n_{\downarrow}\}\rangle\otimes|\{n_{\downarrow}\}\rangle \tag{16}\]
for \(j_{1},j_{2}=1,\cdots,L,\ \alpha,\beta=a,b\) in the absence of the jump operator \(\mathcal{L}_{\rm J}\). Then, owing to the jump operator \(\mathcal{L}_{\rm J}\) that describes the two-body loss process, a fermion in an up-spin state and one in a down-spin state form a pair and are scattered out into the environment. For example, when the total number of down spins in the initial state is \(N_{\downarrow}=4\), the two-body loss process changes the down-spin configuration into one of the following four (see Fig. 1)
\[\{n_{\downarrow}\}=\{1,1,1,1\}\] \[\rightarrow \{n_{\downarrow}\}^{\prime}=\{0,1,1,1\},\{1,0,1,1\},\{1,1,0,1\}, \{1,1,1,0\}. \tag{17}\]
Here, \(\{n_{\downarrow}\}^{\prime}\) denotes the down-spin configurations after the two-body loss process. Then, the density matrix for \(N_{\uparrow}=0\) state is spanned by the basis
\[|(N_{\uparrow}=0)\rangle\rangle=|\{n_{\downarrow}\}^{\prime} \rangle\otimes|\{n_{\downarrow}\}^{\prime}\rangle. \tag{18}\]
We construct the basis set \(\{|i\rangle\otimes|j\rangle\}\), which spans the doubled Hilbert space \(\mathcal{H}\otimes\mathcal{H}\) given in Eq. (10), by identifying the basis set \(\{|i\rangle\otimes|j\rangle\}\) with the basis sets combining \(\{|(N_{\uparrow}=1)\rangle\rangle\}\) and \(\{|(N_{\uparrow}=0)\rangle\rangle\}\):
\[\Big{\{}|i\rangle\otimes|j\rangle\Big{\}}=\Big{\{}|(N_{\uparrow}=1)\rangle \rangle,|(N_{\uparrow}=0)\rangle\rangle\Big{\}}. \tag{19}\]
The matrix representation of the Liouvillian with respect to these bases \(\{|i\rangle\otimes|j\rangle\}\) takes the form
\[\mathcal{L}=\left(\begin{array}{c|c}\mathcal{L}_{0}^{(N_{\uparrow}=1)}&0\\ \hline\mathcal{L}_{\rm J}&\mathcal{L}_{0}^{(N_{\uparrow}=0)}\end{array}\right). \tag{20}\]
Here, \(\mathcal{L}_{0}^{(N_{\uparrow}=1)}\) [\(\mathcal{L}_{0}^{(N_{\uparrow}=0)}\)] denotes the matrix representation of \(\mathcal{L}_{0}\) for \(N_{\uparrow}=1\) [\(N_{\uparrow}=0\)] sector. It is known that the block triangular structure is a general property of the Liouvillian for particle losses [107]. This structure simplifies the calculation of the winding number of the Liouvillian as shown in Sec. IV.
## III Topological invariant and Skin mode of the Liouvillian
### Topological invariant
In this subsection, we first present the topological number defined by the Liouvillian superoperator, and then
Figure 1: Schematic illustration of the two-body loss process for an initial state with \(N_{\uparrow}=1\). Due to the jump operator of the two-body loss \(\mathcal{L}_{\rm J}\) given in Eq. (13), a fermion with an up-spin and that with a down-spin form pairs and are scattered into environments. Down-spin configurations are changed from \(\{n_{\downarrow}\}=\{1,1,1,1\}\) to \(\{n_{\downarrow}\}^{\prime}=\{1,0,1,1\}\) after the two-body loss process.
discuss the relation between the topological number and the Liouvillian skin effect.
First, we introduce the following topological invariant by using the Liouvillian superoperator
\[\nu(\Lambda_{\rm ref})=\oint_{0}^{2\pi}\frac{d\theta}{2\pi i}\frac{d}{d\theta} \log\det\bigl{[}\mathcal{L}(\theta)-\Lambda_{\rm ref}\bigr{]}, \tag{21}\]
where we have imposed the twisted boundary condition only on the ket space. Here, \(\Lambda_{\rm ref}\in\mathbb{C}\) denotes the reference point and \(\mathcal{L}(\theta)\) is defined as
\[i\mathcal{L}(\theta)=H_{\rm eff}(\theta)\otimes I-I\otimes H_{\rm eff}^{*}+i \sum_{j}L_{j}(\theta)\otimes L_{j}^{*}, \tag{22}\]
where operators \(H_{\rm eff}(\theta)\) and \(L_{j}(\theta)\) are defined by multiplying \(e^{\pm i\theta}\) to the hopping term at the boundary, e.g., \(c_{1\alpha\sigma}^{\dagger}c_{L\alpha^{\prime}\sigma^{\prime}}\) is replaced by \(c_{1\alpha\sigma}^{\dagger}c_{L\alpha^{\prime}\sigma^{\prime}}e^{i\theta}\). More precisely, in the case of the Falicov-Kimball model with the two-body loss given in Eq. (9), \(H_{\rm eff}(\theta)\) is written by
\[H_{\rm eff}(\theta)=H_{\rm eff}^{\rm bulk}+H_{\rm eff}^{\rm edge}(\theta) \tag{23}\]
where \(H_{\rm eff}^{\rm bulk}\) is the Hamiltonian in the bulk, which is independent of \(\theta\) and is written down as
\[H_{\rm eff}^{\rm bulk}=\sum_{\langle ij\rangle^{\prime}\alpha\beta}h_{i\alpha j \beta}c_{i\alpha\uparrow}^{\dagger}c_{j\beta\uparrow}+U\sum_{j=1}^{L}n_{jb \uparrow}n_{jb\downarrow}. \tag{24}\]
The summation \(\langle ij\rangle^{\prime}\) runs over all pairs of nearest-neighbor sites \(i\) and \(j\), excluding the hopping at the boundary between site \(1\) and site \(L\). The boundary term of the Hamiltonian \(H_{\rm eff}^{\rm edge}(\theta)\) is given by
\[H_{\rm eff}^{\rm edge}(\theta)=\sum_{\alpha\beta}(h_{1\alpha L\beta}c_{1\alpha \uparrow}^{\dagger}c_{L\beta\uparrow}e^{i\theta}+{\rm h.c.}). \tag{25}\]
Here, \(h_{1\alpha L\beta}\) is the hopping matrix element between site \(1\) and site \(L\). Since we consider the on-site dissipator given in Eq. (5), the Lindblad operator is independent of \(\theta\), i.e., \(L_{j}(\theta)=L_{j}\). Because of the relation \(\mathcal{L}(\theta)=\mathcal{L}(\theta+2\pi)\), the winding number \(\nu(\Lambda_{\rm ref})\) given in Eq. (21) is quantized. Hereafter, when the winding number given in Eq. (21) takes a nonzero value, we say that the point-gap topology of the Liouvillian is nontrivial.
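Numerically, \(\nu(\Lambda_{\rm ref})\) can be evaluated by accumulating the phase of \(\det[\mathcal{L}(\theta)-\Lambda_{\rm ref}]\) along a discretized \(\theta\) loop. A minimal sketch (assuming a user-supplied function that returns the matrix under the twisted boundary condition) is:

```python
import numpy as np

def winding_number(mat_of_theta, lam_ref: complex, n_theta: int = 400) -> int:
    """Evaluate Eq. (21): accumulate the phase of det[M(theta) - lam_ref]
    over one theta loop; slogdet avoids determinant overflow."""
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta)
    phases = []
    for th in thetas:
        A = mat_of_theta(th)
        sign, _ = np.linalg.slogdet(A - lam_ref * np.eye(A.shape[0]))
        phases.append(np.angle(sign))         # sign = e^{i arg det}
    d = np.diff(phases)
    d = (d + np.pi) % (2.0 * np.pi) - np.pi   # wrap increments to (-pi, pi]
    return int(np.round(d.sum() / (2.0 * np.pi)))
```

The routine is generic: passing the matrix of \(\mathcal{L}(\theta)\) gives \(\nu(\Lambda_{\rm ref})\), while passing \(H_{\rm eff}(\theta)\) gives the Hamiltonian winding \(w(E_{\rm ref})\) used below.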
Second, we discuss the relation between the topological number defined in Eq. (21) and the Liouvillian skin effect. Even in the single-particle system, the topological characterization of the Liouvillian skin effect has not been accomplished so far. Importantly, in the single-particle system, the topological invariant \(\nu(\Lambda_{\rm ref})\) defined in Eq. (21) gives the characterization of the Liouvillian skin effect, provided that the Lindblad operator is given by the asymmetric hopping (see Appendix B for details). In Appendix B, we compute the topological invariant \(\nu(\Lambda_{\rm ref})\) defined in Eq. (21) analytically and discuss the validity of the characterization of the Liouvillian skin effect in the single-particle system. Significantly, the topological invariant \(\nu(\Lambda_{\rm ref})\) can be computed even in many-body systems. In the following section, we numerically calculate the topological invariant \(\nu(\Lambda_{\rm ref})\) and observe the nontrivial value of the topological invariant \(\nu(\Lambda_{\rm ref})\) corresponding to the Liouvillian skin effect in many-body systems.
### Skin mode of the Liouvillian
In this subsection, we first introduce the right-state particle density of the \(n\)-th eigenmode of the Liouvillian superoperator \(\mathcal{L}\) as \(\Delta_{j\alpha\sigma}^{(n)}\), which measures the degree of localization of eigenmodes of the Liouvillian superoperator in many-body systems. Then, we show that in the single-particle system, the right-state particle density reduces to the diagonal element of the right eigenmode, which is used as the characterization of the Liouvillian skin effect in single-particle systems in Ref. [50]. Finally, we show that when the right eigenmode is written by the right eigenstate of the effective Hamiltonian, the right-state particle density gives the particle density, which is used as the characterization of the non-Hermitian skin effect in many-body systems in Refs. [78; 79].
First, we define the following right-state particle density of the \(n\)-th eigenmode of the Liouvillian superoperator \(\mathcal{L}\) to quantify the degree of localization of the eigenmode of the Liouvillian in many-body systems:
\[\Delta_{l}^{(n)}=\langle\langle J|c_{l}^{\dagger}c_{l}\otimes I|\rho_{\rm R}^ {(n)}\rangle\rangle=\langle\langle J|I\otimes c_{l}^{\dagger}c_{l}|\rho_{\rm R }^{(n)}\rangle\rangle \tag{26}\]
with \(l\) denoting the set of \(j\), \(\alpha\) and \(\sigma\), i.e., \(l=j\alpha\sigma\). Here, \(|J\rangle\rangle\) is the vectorized identity operator defined by \(|J\rangle\rangle=\sum_{j}|j\rangle\otimes|j\rangle\), and \(|\rho_{\rm R}^{(n)}\rangle\rangle\) is the \(n\)-th right eigenmode of the Liouvillian \(\mathcal{L}\) that satisfies the eigenvalue equation given in Eq. (14). We note that the right-state particle density is not identical to the ordinary particle density, which is observable and takes real values. Specifically, the right-state particle density is complex-valued, which is introduced to measure the degree of localization of eigenmodes in many-body systems.
Next, we show that the right-state particle density defined in Eq. (26) reduces to the diagonal element of the right eigenmode in the single-particle system. We note that the right-state particle density of the \(n\)-th eigenmode given in Eq. (26) is expressed as
\[\Delta_{l}^{(n)}={\rm Tr}[c_{l}^{\dagger}c_{l}\rho_{\rm R}^{(n)}], \tag{27}\]
where we have used the following relation
\[\langle\langle J|A\otimes I|\rho_{\rm R}^{(n)}\rangle\rangle =\sum_{jkl}\langle j|\otimes\langle j|\ \Big{(}A|k\rangle\otimes|l\rangle\Big{)}\rho_{\rm R,kl}^{(n)}\] \[=\sum_{jkl}\delta_{jl}\ \langle j|A|k\rangle\rho_{\rm R,kl}^{(n)}\] \[={\rm Tr}[A\rho_{\rm R}^{(n)}]. \tag{28}\]
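Under the row-major vectorization of Eq. (10), the identity used in Eq. (28) can be checked directly: \(|J\rangle\rangle\) is the flattened identity matrix, and \(\langle\langle J|A\otimes I|\rho\rangle\rangle={\rm Tr}[A\rho]\). A quick numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
rho = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

J = np.eye(n).flatten()                      # |J>> = sum_j |j> (x) |j>
vec_rho = rho.flatten()                      # row-major vec, as in Eq. (10)
lhs = J.conj() @ (np.kron(A, np.eye(n)) @ vec_rho)
assert np.allclose(lhs, np.trace(A @ rho))   # Eq. (28)
```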
Now, we show that the definition given in Eq. (26) reduces to the diagonal element of the right eigenmode of
the Liouvillian in the single-particle system. We take the single-particle basis \(|S\rangle\), which is generated by acting with the creation operator on the vacuum \(|\text{vac}\rangle\) as \(|S\rangle=c_{S}^{\dagger}|\text{vac}\rangle\), with \(S\) denoting the set of \(j\), \(\alpha\) and \(\sigma\), i.e., \(S=j\alpha\sigma\). Then the \(n\)-th right eigenmode \(\rho_{\text{R}}^{(n)}\) is expanded by using the single-particle states \(|S\rangle\) as
\[\rho_{\text{R}}^{(n)}=\sum_{ST}\rho_{\text{R},ST}^{(n)}|S\rangle\langle T|. \tag{29}\]
Here, \(\rho_{\text{R},ST}^{(n)}\in\mathbb{C}\) is an expansion coefficient. By substituting Eq. (29) into Eq. (27), we obtain
\[\Delta_{S}^{(n)}=\rho_{\text{R},SS}^{(n)}. \tag{30}\]
Thus the right-state particle density defined in Eq. (27) is a generalization of the diagonal element of the right eigenmode, which measures the degree of localization of the eigenmode of the Liouvillian in the single-particle system.
Finally, we show that when the right eigenmode \(\rho_{\text{R}}^{(n)}\) is written by the right eigenstate of the effective Hamiltonian \(H_{\text{eff}}\) as \(|\varphi_{\text{R}}^{(n)}\rangle\), the right-state particle density reduces to the particle density defined by \(n_{l}=\langle\varphi_{\text{R}}^{(n)}|c_{l}^{\dagger}c_{l}|\varphi_{\text{R} }^{(n)}\rangle\), which is used as the characterization of the skin mode in non-Hermitian many-body system in Refs [78; 79]. We take the right and left eigenstate of the effective Hamiltonian as \(|\varphi_{\text{R}}^{(n)}\rangle\) and \(\langle\varphi_{\text{L}}^{(n)}|\) that satisfy the eigenvalue equations
\[H_{\text{eff}}|\varphi_{\text{R}}^{(n)}\rangle=E_{n}|\varphi_{\text{R}}^{(n)}\rangle \tag{31}\]
and
\[\langle\varphi_{\text{L}}^{(n)}|H_{\text{eff}}=E_{n}\langle\varphi_{\text{L} }^{(n)}|, \tag{32}\]
respectively. When we take the right eigenmode \(\rho_{\text{R}}^{(n)}\) as
\[\rho_{\text{R}}^{(n)}=|\varphi_{\text{R}}^{(n)}\rangle\langle\varphi_{\text{ R}}^{(n)}|, \tag{33}\]
the right-state particle density given in Eq. (27) becomes
\[\Delta_{l}^{(n)} =\text{Tr}[\rho_{\text{R}}^{(n)}c_{l}^{\dagger}c_{l}]\] \[=\sum_{m}\langle\varphi_{\text{L}}^{(m)}|\varphi_{\text{R}}^{(n) }\rangle\langle\varphi_{\text{R}}^{(n)}|c_{l}^{\dagger}c_{l}|\varphi_{\text{R }}^{(m)}\rangle\] \[=\sum_{m}\delta_{mn}\langle\varphi_{\text{R}}^{(n)}|c_{l}^{ \dagger}c_{l}|\varphi_{\text{R}}^{(m)}\rangle=n_{l}, \tag{34}\]
where we have used the biorthogonal relation in the third equality [15]. Therefore the right-state particle density is the generalization of the particle density \(n_{l}\), which measures the degree of localization of the eigenstate in non-Hermitian many-body systems. In the following, we demonstrate that the right-state particle density \(\Delta_{l}^{(n)}\) exhibits localization near the edge. From Eqs. (30) and (34), we consider the right-state particle density to be the proper quantity for measuring the degree of localization of eigenmodes of the Liouvillian in many-body systems.
## IV Numerical results
In this section, we demonstrate that interactions can induce the Liouvillian skin effect by analyzing the Falicov-Kimball model introduced in Sec. II. First, in the noninteracting case (\(U-i\gamma=0\)), we show that the Liouvillian skin effect is absent. Then, it is demonstrated that the complex-valued interaction \(U-i\gamma\) induces the Liouvillian skin effect. In the following discussion, we set \(t_{h}=1\) as an energy unit.
### Noninteracting case
First, we see that the Liouvillian skin effect is not observed in the noninteracting case (\(U=\gamma=0\)). In this case, the Liouvillian given in Eq. (11) becomes
\[\mathcal{L}_{\text{free}}=-i\left(H\otimes I-I\otimes H^{T}\right). \tag{35}\]
Because \(\mathcal{L}_{\text{free}}\) is skew-Hermitian, i.e., \(\mathcal{L}_{\text{free}}^{\dagger}=-\mathcal{L}_{\text{free}}\), its eigenvalues are purely imaginary or zero. In other words, all eigenvalues of the Liouvillian lie on the imaginary axis regardless of boundary conditions. As a result, the point-gap topology of the Liouvillian is always trivial because the winding number always vanishes. Correspondingly, the eigenvalues and the eigenmodes of the Liouvillian are not sensitive to the boundary conditions (for more details, see Appendix C). Therefore, the Liouvillian skin effect is absent for the noninteracting system.
### Interacting case
Next, we demonstrate that the interaction \(U-i\gamma\) makes the point-gap topology nontrivial and induces the Liouvillian skin effect. Figures 2(a) and (b) display the \(\theta\) dependence of \(\text{det}[\mathcal{L}(\theta)-\Lambda_{\text{ref}}]\). We see that the winding number takes \(\nu=3\) (\(\nu=1\)) for \(\Lambda_{\text{ref}}=-0.5-0.8i\) (\(\Lambda_{\text{ref}}=-0.3-0.2i\)). Now, we analyze the emergence of skin modes by comparing the results under OBC with those under periodic boundary conditions (PBC). Figure 2(c) [(d)] displays \(D_{j}^{(n)}=\sum_{\alpha}|\Delta_{j\alpha\uparrow}^{(n)}|\) for OBC [PBC]. We note that the right-state particle density of the \(n\)-th eigenmode \(\Delta_{j\alpha\uparrow}^{(n)}\) defined in Eq. (26) takes a complex value. Figure 2(c) indicates that the eigenmodes are localized at the right edge under OBC. In contrast, such a localization cannot be observed under PBC. These results demonstrate the emergence of skin modes of the Liouvillian. We also note that the sensitivity of the eigenvalues to boundary conditions is also observed although it is smeared for small \(L\) (for more details, see Appendix D). With the above results (see Fig. 2), we conclude that interactions induce the Liouvillian skin effect though the system is subject to homogeneous two-body losses.
Here, we comment on the relation between the winding number of the Liouvillian and that of the effective
non-Hermitian Hamiltonian. As derived in Appendix E, we obtain the following relation for the winding number defined in Eq. (21)
\[\nu(\Lambda_{\rm ref})=\sum_{j}w(E_{\rm ref}=E_{j}^{*}+i\Lambda_{\rm ref}), \tag{36}\]
where \(w(E_{\rm ref})\) is the winding number of the non-Hermitian Hamiltonian
\[w(E_{\rm ref})=\oint_{0}^{2\pi}\frac{d\theta}{2\pi i}\frac{d}{d\theta}\log \det[H_{\rm eff}(\theta)-E_{\rm ref}]. \tag{37}\]
Here, \(E_{j}\) denotes an eigenvalue of the non-Hermitian Hamiltonian \(H_{\rm eff}\) defined in Eq. (9). Equation (36) indicates that the winding number of the Liouvillian \(\nu(\Lambda_{\rm ref})\) can be computed from the winding number of the effective non-Hermitian Hamiltonian \(w(E_{\rm ref})\) with this model. Here, we compute the winding number \(\nu(\Lambda_{\rm ref})\) by making use of Eq. (36). First, we take the complex conjugate of the eigenvalue of the non-Hermitian Hamiltonian \(E_{j}^{*}\) [see Fig. 3(a)]. Then, we shift \(E_{j}^{*}\) by \(i\Lambda_{\rm ref}\) and obtain \(E_{\rm ref}=E_{j}^{*}+i\Lambda_{\rm ref}\). Because Eq. (36) indicates that the summation of \(w\) for all possible \(E_{\rm ref}=E_{j}^{*}+i\Lambda_{\rm ref}\) results in the winding number of the Liouvillian, we obtain \(\nu(\Lambda_{\rm ref})=3\) [\(\nu(\Lambda_{\rm ref})=1\)] for \(\Lambda_{\rm ref}=-0.5-0.8i\) (\(\Lambda_{\rm ref}=-0.3-0.2i\)) [see Figs. 3(b) and (c)]. We note that the winding number of the effective Hamiltonian takes one when \(E_{\rm ref}\) is in the shaded region in Fig. 3(a). These results of the winding number \(\nu(\Lambda_{\rm ref})\) are consistent with the results obtained by the direct computation of \(\nu(\Lambda_{\rm ref})\) [see Figs. 2(a) and (b)].
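A sketch of this shortcut, reusing the `winding_number` helper from the earlier code block (here applied to \(H_{\rm eff}(\theta)\) rather than \(\mathcal{L}(\theta)\)), is given below; taking the \(E_{j}\) from the untwisted \(H_{\rm eff}\) reflects the fact that only the ket space carries the twist in Eq. (22).

```python
import numpy as np

def nu_via_heff(H_eff_of_theta, lam_ref: complex) -> int:
    """Evaluate Eq. (36): sum the Hamiltonian windings w of Eq. (37) over
    the shifted reference points E_ref = E_j^* + i*lam_ref."""
    E = np.linalg.eigvals(H_eff_of_theta(0.0))  # eigenvalues of untwisted H_eff
    return sum(
        winding_number(H_eff_of_theta, np.conj(Ej) + 1j * lam_ref) for Ej in E
    )
```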
### Dynamical properties
In this subsection, we show that the Liouvillian skin effect significantly affects the dynamics of the particle density. We assume that the total number of particles in the up-spin states equals one in the initial state, i.e.,
\[\langle\Psi(t=0)|N_{\uparrow}|\Psi(t=0)\rangle=1, \tag{38}\]
where the wavefunction in the initial state \(|\Psi(t=0)\rangle\) reads
\[|\Psi(t=0)\rangle=\frac{1}{\sqrt{L}}\sum_{j=1}^{L}c_{j\alpha\uparrow}^{ \dagger}|\{n_{\downarrow}\}\rangle. \tag{39}\]
Here, we have assumed that the particle in orbital \(a\) is uniformly distributed in the initial state. The expectation value of the particle density in the up-spin state at time \(t\) is given by
\[\langle n_{j\uparrow}(t)\rangle =\sum_{\alpha}\mathrm{Tr}\left[n_{j\alpha\uparrow}\rho(t)\right]\] \[=\sum_{\alpha}\langle\langle J|c_{j\alpha\uparrow}^{\dagger}c_{j \alpha\uparrow}\otimes I|\rho(t)\rangle\rangle. \tag{40}\]
Figure 3: Schematic figure that describes the relationship between the winding number \(\nu(\Lambda_{\rm ref})\) and \(w(E_{\rm ref})\) given in Eq. (36). (a) Eigenvalues of the effective Hamiltonian \(H_{\rm eff}\) are indicated by blue dots. \(E^{*}\) is the complex conjugation of the eigenvalue \(E\) (gray dots). When \(E_{\rm ref}\) is located inside the blue region, the winding number of the Hamiltonian equals one, i.e., \(w(E_{\rm ref})=1\). (b),(c) Schematic figure of the origin of the non-trivial winding number. The winding number \(\nu(\Lambda_{\rm ref})\) equals the number of dots in the blue region indicated by the yellow star. The number of the yellow stars in panel (b) [(c)] corresponds to the winding number \(\nu(\Lambda_{\rm ref})\) in Fig. 2(a) [(b)]. The constant shift \(i\Lambda_{\rm ref}\) in Eq. (36) is set to be \(i\Lambda_{\rm ref}=0.8-0.5i\) and \(i\Lambda_{\rm ref}=0.2-0.3i\) for panels (b) and (c), respectively. The parameters are set to be \(L=6,\;U=0.5,\;\gamma=1.0\). The configuration of fermions in the down-spin states is set to be \(\{n_{\downarrow}\}=\{1,\cdots,1\}\).
Then, the time evolution of the density matrix reads
\[|\rho(t)\rangle\rangle=e^{\mathcal{L}t}|\rho(t=0)\rangle\rangle. \tag{41}\]
By using the wavefunction in the initial state \(|\Psi(t=0)\rangle\), the density matrix at time \(t=0\) is given by
\[|\rho(t=0)\rangle\rangle=|\Psi(t=0)\rangle\otimes|\Psi(t=0)\rangle. \tag{42}\]
Here, \(|\Psi(t=0)\rangle\otimes|\Psi(t=0)\rangle\) is defined by the following mapping
\[|\Psi(t=0)\rangle\langle\Psi(t=0)|=\sum_{ij}\Psi_{ij}(t=0)|i \rangle\langle j|\] \[\mapsto |\Psi(t=0)\rangle\otimes|\Psi(t=0)\rangle=\sum_{ij}\Psi_{ij}(t=0 )|i\rangle\otimes|j\rangle \tag{43}\]
where \(\Psi_{ij}(t=0)\) is the matrix element of \(|\Psi(t=0)\rangle\langle\Psi(t=0)|\) and \(|i\rangle\otimes|j\rangle\) is the element in the basis set given in Eq. (19). Now, we numerically calculate the expectation value of the particle density given in Eq. (40) considering Eqs. (41) and (42). We note that, since the initial state only has one fermion in an up-spin state, the dynamics is computed only from \(H_{\rm eff}\)[108]. Figure 4 displays the time dependence of the expectation value \(\langle n_{j\uparrow}(t)\rangle\). Under OBC, we see that the particle is accumulated near the right boundary as shown in Fig. 4 (a). In contrast, under PBC, we find that the particle density decreases uniformly due to the dissipation as shown in Fig. 4 (b).
The above significant dependence of \(\langle n_{j\uparrow}\rangle\) on boundary conditions can be understood in terms of the right-state particle density of the \(n\)-th eigenmode of the Liouvillian \(\Delta_{j\alpha\uparrow}^{(n)}\). First, we expand the initial density matrix as \(|\rho(0)\rangle\rangle=\sum_{n}a_{n}|\rho_{\rm R}^{(n)}\rangle\rangle\) by using the eigenmodes of the Liouvillian. Then, by combining the eigenvalue equation of the Liouvillian given in Eq. (14) and the time evolution of the density matrix given in Eq. (41), we obtain the particle density at time \(t\) as
\[\langle n_{j\uparrow}(t)\rangle=\sum_{n,\alpha}e^{\Lambda_{n}t}a_{n}\Delta_{j \alpha\uparrow}^{(n)}. \tag{44}\]
Thus the particle accumulation quantitatively originates from the anomalous localization of \(\Delta_{j\alpha\uparrow}^{(n)}\) (see Fig. 2).
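The transient dynamics of Eqs. (40)-(41) can be reproduced directly from the matrix representation of \(\mathcal{L}\). A minimal sketch follows, assuming the vectorized Liouvillian `L_matrix`, the vectorized initial state `rho0`, and the number-operator matrices `n_ops[j]` (for \(\sum_{\alpha}n_{j\alpha\uparrow}\) in the ket space) have already been built in the occupation basis; all names are ours.

```python
import numpy as np
from scipy.linalg import expm

def density_profile(L_matrix, rho0, n_ops, times):
    """<n_j(t)> = <<J| n_j (x) I |rho(t)>> with |rho(t)>> = e^{L t}|rho(0)>>,
    following Eqs. (40) and (41) under the row-major vec convention."""
    dim = int(np.sqrt(rho0.size))
    J = np.eye(dim).flatten()                   # <<J|
    I = np.eye(dim)
    profiles = np.zeros((len(times), len(n_ops)))
    for it, t in enumerate(times):
        rho_t = expm(L_matrix * t) @ rho0       # Eq. (41)
        for j, nj in enumerate(n_ops):
            profiles[it, j] = np.real(J.conj() @ (np.kron(nj, I) @ rho_t))
    return profiles
```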
## V Conclusions
In this paper, we have demonstrated that interactions induce the Liouvillian skin effect in the one-dimensional correlated system with two-body loss. Specifically, by introducing the winding number constructed from the Liouvillian superoperator, we have elucidated that interactions make the point-gap topology nontrivial. Moreover, we have seen that eigenvalues and eigenmodes of the Liouvillian exhibit extreme sensitivity to boundary conditions. As a result, we have observed the particle accumulation around the right edge in transient dynamics only under OBC, which is attributed to the emergence of the skin mode.
As two-body losses have already been introduced in ytterbium atoms by using photoassociation techniques [54; 56], our results can be tested in ultracold atoms. The method to realize the Falicov-Kimball model is provided in Ref. [109], by introducing two species of atoms like \({}^{40}\)K and \({}^{6}\)Li, where mobile and immobile atoms are coupled via an on-site interaction. We expect that the interaction-induced Liouvillian skin effect in our model can be observed in ultracold atoms.
Recently, classifications of the Liouvillian superoperator have been actively conducted [94; 97]. When the Hamiltonian preserves the inversion symmetry, the winding number defined in Eq. (21), which characterizes the nontrivial topological phase of the Liouvillian, becomes trivial. Last but not least, the detailed relation between symmetry and the topological number in other Liouvillians deserves further study, but we leave it for future work.
###### Acknowledgements.
The authors are grateful to Naomichi Hatano, Hosho Katsura, Shuta Nakajima, Hironobu Yoshida, and Shin Kaneshring for valuable discussions. S.H. particularly acknowledges Masaya Nakagawa for fruitful discussions. S.H. was supported by WISE Program, MEXT. K.Y. was supported by JSPS KAKENHI Grant-in-Aid for JSPS fellows Grant No. JP20J21318. This work was supported by JSPS KAKENHI Grants No. JP22H05247 and No. JP21K13850.
Figure 4: (a) [(b)] Time evolution of the particle density under OBC [PBC]. The parameters are set to be \(L=20,\ U=0.1,\ \gamma=0.1\). The configuration of fermions in the down-spin states is set to be \(\{n_{\downarrow}\}=\{1,\cdots,1\}\). The particle is uniformly distributed in orbital \(a\) in the initial state. Only under OBC, the anomalous localization of the particle density is observed.
## Appendix A Symmetry constraint on the winding number
In this appendix, we discuss the relation between the symmetry of the Liouvillian and the winding number \(\nu(\Lambda_{\rm ref})\). As we will see below, breaking the inversion symmetry of the Hamiltonian is essential for the existence of the nonzero topological number. The winding number \(\nu(\Lambda_{\rm ref})\) given in Eq. (21) becomes trivial when the Liouvillian superoperator satisfies
\[\mathcal{U}\mathcal{L}(-\theta)\mathcal{U}^{\dagger}=\mathcal{L}(\theta). \tag{24}\]
Here, \(\mathcal{U}\) is the unitary operator (\(\mathcal{U}\mathcal{U}^{\dagger}=\mathcal{U}^{\dagger}\mathcal{U}=1\)). We note that Eq. (24) leads to \(\nu(\Lambda_{\rm ref})=-\nu(\Lambda_{\rm ref})\). This relation means that the point-gap topology of the Liouvillian is trivial, i.e., \(\nu(\Lambda_{\rm ref})=0\).
In the case of particle loss, this triviality [\(\nu(\Lambda_{\rm ref})=0\)] originates from the symmetry of the Hamiltonian. Since the eigenvalue of the Liouvillian is determined only by the effective Hamiltonian \(H_{\rm eff}\)[107], the winding number given in Eq. (21) reduces to
\[\nu(\Lambda_{\rm ref})=\oint_{0}^{2\pi}\frac{d\theta}{2\pi i}\frac{d}{d\theta }\log\det\bigl{[}\mathcal{L}_{0}(\theta)-\Lambda_{\rm ref}\bigr{]}. \tag{25}\]
If the effective Hamiltonian \(H_{\rm eff}\) satisfies the following relation
\[UH_{\rm eff}(-\theta)U^{\dagger}=H_{\rm eff}(\theta), \tag{26}\]
we can construct the unitary operator \(\mathcal{U}\) as
\[\mathcal{U}=U\otimes V, \tag{27}\]
where \(UU^{\dagger}=U^{\dagger}U=1\) and we have defined \(V=U^{*}\). Due to the relation \(\mathcal{U}\mathcal{L}_{0}(-\theta)\mathcal{U}^{\dagger}=\mathcal{L}_{0}(\theta)\), we find that the winding number Eq. (25) becomes zero.
Now, we discuss whether the Liouvillian given in Eq. (11) satisfies the condition given in Eq. (24). The effective Hamiltonian under twisted boundary conditions in real space is \(H_{\rm eff}(\theta)=H_{\rm eff}^{\rm bulk}+H_{\rm eff}^{\rm edge}(\theta)\) where \(H_{\rm eff}^{\rm bulk}\) is written as
\[H_{\rm eff}^{\rm bulk}=H_{1}+H_{2}+H_{3}+H_{4}. \tag{28}\]
Here, we have defined
\[H_{1}=-2it_{h}\sum_{j=1}^{L}(c_{ja\uparrow}^{\dagger}c_{jb \uparrow}-{\rm h.c.}),\] \[H_{2}=-0.25t_{h}\sum_{j=1}^{L-1}(c_{j+1b\uparrow}^{\dagger}c_{j a\uparrow}-c_{j+1a\uparrow}^{\dagger}c_{jb\uparrow}+{\rm h.c.}),\] \[H_{3}=t_{h}\sum_{j=1}^{L-1}(c_{j+1a\uparrow}^{\dagger}c_{ja \uparrow}-c_{j+1b\uparrow}^{\dagger}c_{jb\uparrow}+{\rm h.c.}),\] \[H_{4}=(U-i\gamma)\sum_{j=1}^{L}n_{jb\uparrow}n_{jb\downarrow}, \tag{29}\]
and
\[H_{\rm eff}^{\rm edge}(\theta)= -0.25t_{h}(e^{i\theta}c_{1b\uparrow}^{\dagger}c_{La\uparrow}-e^{ i\theta}c_{1a\uparrow}^{\dagger}c_{Lb\uparrow}+{\rm h.c.})\] \[+t_{h}(e^{i\theta}c_{1a\uparrow}^{\dagger}c_{La\uparrow}-e^{i \theta}c_{1b\uparrow}^{\dagger}c_{Lb\uparrow}+{\rm h.c.}). \tag{30}\]
The first term \(H_{1}\) of the bulk Hamiltonian \(H_{\rm eff}^{\rm bulk}\) [i.e., the first term of the Bloch Hamiltonian in Eq. (3a) denoted by \(2t_{h}\)] violates the condition Eq. (24). In the absence of the first term, the effective Hamiltonian preserves the inversion symmetry defined by
\[PH_{\rm eff}(-\theta)P^{\dagger}=H_{\rm eff}(\theta), \tag{31}\]
where the inversion operator \(P\) acts on the annihilation operator as \(Pc_{j\alpha\sigma}P^{\dagger}=c_{L-(j-1)\alpha\sigma}\) and satisfies \(P^{\dagger}P=PP^{\dagger}=\mathbf{1}\). Then, we see that the inversion symmetry given in Eq. (31) is nothing but the condition of the triviality of the winding number given in Eq. (26). Therefore, the Liouvillian superoperator \(\mathcal{L}(\theta)\) given in Eq. (22) satisfies Eq. (24) in the absence of \(H_{1}\). The presence of \(H_{1}\) breaks the condition Eq. (31), which leads to the violation of the condition Eq. (24). Hence, the nonzero winding number originates from the fact that the Hamiltonian breaks the inversion symmetry.
## Appendix B Topological characterization of the Liouvillian skin effect reported in Ref. [50]
In this appendix, we show that the winding number \(\nu(\Lambda_{\rm ref})\) defined in Eq. (21) characterizes the Liouvillian skin effect reported in Ref. [50], which implies the validity of employing \(\nu(\Lambda_{\rm ref})\) for characterizing the interaction-induced Liouvillian skin effect. We consider the bosonic systems and assume that the Lindblad operators are given by
\[L_{j,l}=\sqrt{t_{l}}b_{j}^{\dagger}b_{j+1},\] \[L_{j,r}=\sqrt{t_{r}}b_{j+1}^{\dagger}b_{j}, \tag{32}\]
which describe the stochastic hopping to the nearest neighbor sites. Following the discussion in Ref. [50], we assume that the Hamiltonian of the systems is zero, i.e. \(H=0\). The Liouvillian superoperator becomes
\[\mathcal{L}^{H=0} =\sum_{j,\alpha}\Biggl{[}L_{j,\alpha}\otimes L_{j,\alpha}^{*}-\frac {1}{2}\left(L_{j,\alpha}^{\dagger}L_{j,\alpha}\otimes I+I\otimes L_{j,\alpha}^{T }L_{j,\alpha}^{*}\right)\Biggr{]}\] \[=\sum_{j=1}^{L}\Biggl{[}t_{r}b_{j+1}^{\dagger}b_{j}\otimes b_{j+1 }^{\dagger}b_{j}+t_{l}b_{j}^{\dagger}b_{j+1}\otimes b_{j}^{\dagger}b_{j+1}- \frac{t_{r}+t_{l}}{2}\left(b_{j}^{\dagger}b_{j}\otimes I+I\otimes b_{j}^{ \dagger}b_{j}\right)\Biggr{]}. \tag{10}\]
When we impose twisted boundary conditions only on the ket space, the Liouvillian superoperator is expressed as
\[\mathcal{L}^{H=0}(\theta)=\mathcal{L}^{H=0}_{\text{bulk}}+t_{r}e^{i\theta}b_{1 }^{\dagger}b_{L}\otimes b_{1}^{\dagger}b_{L}+t_{l}e^{-i\theta}b_{L}^{\dagger} b_{1}\otimes b_{L}^{\dagger}b_{1}, \tag{11}\]
where we have introduced the bulk term of the Liouvillian \(\mathcal{L}^{H=0}_{\text{bulk}}\) as
\[\mathcal{L}^{H=0}_{\text{bulk}}= \sum_{j=1}^{L-1}\Biggl{[}t_{r}b_{j+1}^{\dagger}b_{j}\otimes b_{j+ 1}^{\dagger}b_{j}+t_{l}b_{j}^{\dagger}b_{j+1}\otimes b_{j}^{\dagger}b_{j+1} \Biggr{]}\] \[- \sum_{j=1}^{L}\Biggl{[}\frac{t_{r}+t_{l}}{2}\left(b_{j}^{\dagger} b_{j}\otimes I+I\otimes b_{j}^{\dagger}b_{j}\right)\Biggr{]}. \tag{12}\]
In the following discussion, we focus on the single-particle diagonal subspace spanned by the basis \(\{|i\rangle\otimes|i\rangle\}_{i=1,\cdots,L}\). The matrix representation of the Liouvillian with respect to this basis is given by
\[\mathcal{L}^{H=0}(\theta)=\begin{pmatrix}-(t_{l}+t_{r})&t_{l}&&t_{r}e^{i\theta}\\ t_{r}&\ddots&\ddots&\\ &\ddots&\ddots&t_{l}\\ t_{l}e^{-i\theta}&&t_{r}&-(t_{l}+t_{r})\end{pmatrix}. \tag{13}\]
In this subspace, the action of the Liouvillian \(\mathcal{L}^{H=0}(\theta)\) is identical to that of the following Hamiltonian in the single-particle system
\[H_{\text{HN}}(\theta)= -\sum_{j=1}^{L}\left(t_{l}+t_{r}\right)c_{j}^{\dagger}c_{j}+\sum_{j=1}^{L-1}\left(t_{l}c_{j}^{\dagger}c_{j+1}+t_{r}c_{j+1}^{\dagger}c_{j}\right)+t_{l}c_{L}^{\dagger}c_{1}e^{-i\theta}+t_{r}c_{1}^{\dagger}c_{L}e^{i\theta}. \tag{14}\]
We note that the matrix representation of the Hamiltonian \(H_{\text{HN}}(\theta)\) with respect to the basis \(\{|i\rangle\}_{i=1,\cdots,L}\) gives Eq. (13). The Hamiltonian given in Eq. (14) is nothing but the Hatano-Nelson model [110; 111; 112] under twisted boundary conditions.
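For concreteness, the matrix in Eq. (13) is easy to construct and diagonalize numerically. The following minimal sketch (our own illustration, not part of the original analysis; numpy assumed, parameter values arbitrary) builds the twisted-boundary Hatano-Nelson matrix:

```python
import numpy as np

def hatano_nelson(L, t_l, t_r, theta):
    """Matrix representation of Eq. (13): Hatano-Nelson form with twisted boundary."""
    M = -(t_l + t_r) * np.eye(L, dtype=complex)
    for j in range(L - 1):
        M[j, j + 1] = t_l                       # upper diagonal
        M[j + 1, j] = t_r                       # lower diagonal
    M[0, L - 1] = t_r * np.exp(1j * theta)      # twisted boundary terms
    M[L - 1, 0] = t_l * np.exp(-1j * theta)
    return M

# Under PBC (theta = 0) the spectrum traces a closed loop in the complex plane;
# deleting the two corner terms (OBC) collapses it onto a real segment.
spec_pbc = np.linalg.eigvals(hatano_nelson(50, t_l=0.3, t_r=1.0, theta=0.0))
```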
Now, we calculate the winding number \(\nu(\Lambda_{\text{ref}})\) defined in Eq. (21) for the Liouvillian \(\mathcal{L}^{H=0}(\theta)\) in this subspace. First, we recover the translational invariance of the Hamiltonian by using the gauge transformation \(c_{j}\to c_{j}e^{-i\frac{j}{L}\theta}\) as
\[H_{\text{HN}}(\theta)=\sum_{j=1}^{L}\Bigl[-(t_{l}+t_{r})c_{j}^{\dagger}c_{j}+t_{l}e^{-i\frac{\theta}{L}}c_{j}^{\dagger}c_{j+1}+t_{r}e^{i\frac{\theta}{L}}c_{j+1}^{\dagger}c_{j}\Bigr]. \tag{15}\]
Then, we diagonalize Eq. (15) as
\[H_{\text{HN}}(\theta)=\sum_{k}h_{\text{HN}}\left(k+\frac{\theta}{L}\right)c_{ k}^{\dagger}c_{k} \tag{16}\]
with
\[h_{\text{HN}}\left(k\right)=t_{r}e^{ik}+t_{l}e^{-ik}-(t_{l}+t_{r}). \tag{17}\]
In the translationally invariant single-particle system, the many-body topological invariant of non-Hermitian systems reduces to the following topological invariant defined in momentum space [113; 78]
\[W(\Lambda_{\text{ref}})=\oint_{0}^{2\pi}\frac{dk}{2\pi i}\frac{d}{dk}\log \det[h_{\text{HN}}(k)-\Lambda_{\text{ref}}]. \tag{18}\]
Finally, in a similar way to Eq. (18), we can compute the winding number \(\nu(\Lambda_{\text{ref}})\) defined in Eq. (21) for \(\mathcal{L}^{H=0}(\theta)\) given in Eq. (13) as
\[\nu(\Lambda_{\text{ref}})=\text{sgn}\left(t_{r}-t_{l}\right), \tag{19}\]
where we set \(\Lambda_{\rm ref}\) inside the region enclosed by the PBC spectrum. As shown in Ref. [36], the nonzero winding number given in Eq. (19) induces the skin effect for quadratic systems. Consequently, the right-state particle density of the Liouvillian given in Eq. (26) exhibits the skin effect. Therefore, the Liouvillian skin effect demonstrated in Ref. [50] is characterized by the winding number defined in Eq. (21). This fact supports the use of \(\nu(\Lambda_{\rm ref})\) as a characterization of the interaction-induced Liouvillian skin effect as shown in the main text. It should be noted that even in the presence of the Hamiltonian \(H\) considered in Ref. [50], we numerically confirm that the winding number takes a nonzero value.
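As a numerical sanity check of Eq. (19), the winding number in Eq. (18) can be evaluated by accumulating the phase of \(h_{\rm HN}(k)-\Lambda_{\rm ref}\) across the Brillouin zone. A minimal sketch (our own; numpy assumed, with \(\Lambda_{\rm ref}\) placed at the center of the PBC loop):

```python
import numpy as np

def winding_number(t_l, t_r, lam_ref, n=4001):
    """Winding of h_HN(k) - lam_ref around zero along the Brillouin zone, Eq. (18)."""
    k = np.linspace(0.0, 2.0 * np.pi, n)
    f = t_r * np.exp(1j * k) + t_l * np.exp(-1j * k) - (t_l + t_r) - lam_ref
    dphi = np.angle(f[1:] / f[:-1])        # branch-safe phase increments in (-pi, pi]
    return int(np.rint(dphi.sum() / (2.0 * np.pi)))

lam_center = -(0.3 + 1.0)                  # center of the PBC ellipse for t_l=0.3, t_r=1.0
print(winding_number(0.3, 1.0, lam_center))   # prints 1 = sgn(t_r - t_l), cf. Eq. (19)
```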
## Appendix C Absence of the Liouvillian skin effect in noninteracting systems
Here we numerically show that the Liouvillian skin effect is absent when the systems do not have interactions
\((U-i\gamma=0)\). Figure 5(a) shows that the eigenmodes of the Liouvillian do not exhibit the skin effect for the noninteracting case. As mentioned in Sec. IV, all eigenvalues of the Liouvillian lie on the imaginary axis and are insensitive to boundary conditions [see Figs. 5(b) and 5(c)]. Thus, the Liouvillian skin effect does not occur in the noninteracting system.
## Appendix D Sensitivity of eigenvalues of the Liouvillian to boundary conditions
In this appendix, we show that eigenvalues of the Liouvillian exhibit sensitivity to boundary conditions. Under OBC, as shown in Fig. 6(a), the eigenvalues form a line-like structure, in contrast to the case of PBC shown in Fig. 6(b). Such sensitivity is a signal of the Liouvillian skin effect. We note that since the steady state is \(N_{\downarrow}\)-fold degenerate regardless of boundary conditions, the eigenvalues corresponding to the steady state do not exhibit the Liouvillian skin effect.
## Appendix E Derivation of the relation between winding numbers given in Eq. (36)
In this appendix, we derive the relation between the winding number \(\nu(\Lambda_{\rm ref})\) defined by the Liouvillian superoperator and \(w(E_{\rm ref})\) defined by the Hamiltonian given in Eq. (36). First, we recall that in the case of particle losses, the Liouvillian takes the following block triangular structure
\[\mathcal{L}=\left(\begin{array}{c|c}\mathcal{L}_{0}^{(N_{\uparrow}=1)}(\theta)&\\ \hline\mathcal{L}_{J}(\theta)&\mathcal{L}_{0}^{(N_{\uparrow}=0)}(\theta)\end{array}\right). \tag{10}\]
We note that, since the Lindblad operator given in Eq. (5) has no hopping term between site 1 and site \(L\), the jump term is independent of \(\theta\), i.e., \(\mathcal{L}_{J}(\theta)=\mathcal{L}_{J}\). For a block triangular matrix, the following relation holds:
\[\det\left(\begin{array}{c|c}A&\\ \hline B&C\end{array}\right)=\det A\det C. \tag{11}\]
Since \(\mathcal{L}_{0}^{(N_{\uparrow}=0)}(\theta)\) is independent of \(\theta\), we obtain
\[d_{\theta}\log\det[\mathcal{L}(\theta)-\Lambda_{\rm ref}]\] \[= d_{\theta}\log\det\Bigl{[}\mathcal{L}_{0}^{(N_{\uparrow}=1)}( \theta)-\Lambda_{\rm ref}\Bigr{]}\] \[= d_{\theta}\log\det\mathcal{M}(\theta) \tag{12}\]
for \(\Lambda_{\rm ref}\neq 0\), where
\[\mathcal{M}(\theta)=H_{\rm eff}(\theta)\otimes I-I\otimes H_{\rm eff}^{*}-i \Lambda_{\rm ref}I\otimes I. \tag{13}\]
Then, we introduce \(\mathcal{N}(\theta)\) to diagonalize \(\mathcal{M}(\theta)\) defined by
\[\mathcal{N}(\theta)=S(\theta)\otimes T, \tag{14}\]
where \(T=S^{*}(\theta=0)\) and \(S(\theta)\) is the operator that diagonalizes \(H_{\rm eff}(\theta)\) as
\[S^{-1}(\theta)H_{\rm eff}(\theta)S(\theta)={\rm diag}\Bigl{(}E_{1}(\theta), \cdots,E_{2L}(\theta)\Bigr{)}. \tag{15}\]
A straightforward calculation results in
\[\log\det\Bigl[\mathcal{N}^{-1}\mathcal{M}(\theta)\,\mathcal{N}\Bigr]=\sum_{i=1}^{2L}\sum_{j=1}^{2L}\log\Bigl[E_{j}(\theta)-(E_{i}^{*}+i\Lambda_{\rm ref})\Bigr]. \tag{16}\]
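The diagonalization behind this identity is easy to verify numerically: the spectrum of \(\mathcal{M}(\theta)\) consists precisely of the pairwise combinations \(E_{j}-(E_{i}^{*}+i\Lambda_{\rm ref})\). A small check (our own sketch; a random matrix stands in for \(H_{\rm eff}(\theta)\), and the comparison is up to eigenvalue ordering):

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))  # stand-in for H_eff(theta)
lam = 0.7 - 0.2j                                            # stand-in for Lambda_ref
I = np.eye(4)

# M = H (x) I - I (x) H* - i*lam (I (x) I), as defined above
M = np.kron(H, I) - np.kron(I, H.conj()) - 1j * lam * np.kron(I, I)

E = np.linalg.eigvals(H)
expected = (E[:, None] - (E.conj()[None, :] + 1j * lam)).ravel()  # E_j - (E_i* + i lam)
print(np.allclose(np.sort_complex(np.linalg.eigvals(M)),
                  np.sort_complex(expected)))                     # prints True
```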
Finally, we obtain the relation between \(\nu(\Lambda_{\rm ref})\) and \(w(E_{\rm ref})\) as
\[\oint_{0}^{2\pi}\frac{d\theta}{2\pi i}\frac{d}{d\theta}\log\det[ \mathcal{L}(\theta)-\Lambda_{\rm ref}I\otimes I\ ]\] \[=\sum_{j}w(E_{\rm ref}=E_{j}^{*}+i\Lambda_{\rm ref}), \tag{17}\]
which is nothing but Eq. (36) in the main text.
Figure 5: (a) The right-state particle density of the Liouvillian under OBC. (b) [(c)] Eigenvalues of the Liouvillian under OBC [PBC] for the noninteracting case. The parameters are set to be \(L=10\), \(U=\gamma=0.0\). The configuration of fermions in the down-spin states is set to be \(\{n_{\downarrow}\}=\{1,\cdots,1\}\).
Figure 6: (a)[(b)] Eigenvalues of the Liouvillian under OBC [PBC]. The parameters are set to be \(L=14\), \(U=0.1\), and \(\gamma=0.2\). The configuration of fermions in the down-spin states is set to be \(\{n_{\downarrow}\}=\{1,\cdots,1\}\).
## Appendix F Results for other configurations of down-spins in the initial state
In the main text, the configuration of fermions in the down-spin states is set to be \(\{n_{\downarrow}\}=\{1,\cdots,1\}\) in the initial state. In this appendix, we show that the Liouvillian skin effect survives for other configurations of down-spins in the initial state. We set the configuration of fermions in the down-spin states to be \(\{n_{\downarrow}\}=\{1,1,1,1,0,1\}\) and numerically calculate the winding number given in Eq. (21). Figure 7(a) shows that the winding number takes the value three. Moreover, eigenmodes of the Liouvillian exhibit the skin effect under OBC, as shown in Fig. 7(b). We observe a dependence of the eigenvalues on boundary conditions similar to that presented in Appendix D. Therefore, the Liouvillian skin effect survives for other configurations of down-spins in the initial state.
|
2309.17209 | Robots That Can See: Leveraging Human Pose for Trajectory Prediction | Anticipating the motion of all humans in dynamic environments such as homes
and offices is critical to enable safe and effective robot navigation. Such
spaces remain challenging as humans do not follow strict rules of motion and
there are often multiple occluded entry points such as corners and doors that
create opportunities for sudden encounters. In this work, we present a
Transformer based architecture to predict human future trajectories in
human-centric environments from input features including human positions, head
orientations, and 3D skeletal keypoints from onboard in-the-wild sensory
information. The resulting model captures the inherent uncertainty for future
human trajectory prediction and achieves state-of-the-art performance on common
prediction benchmarks and a human tracking dataset captured from a mobile robot
adapted for the prediction task. Furthermore, we identify new agents with
limited historical data as a major contributor to error and demonstrate the
complementary nature of 3D skeletal poses in reducing prediction error in such
challenging scenarios. | Tim Salzmann, Lewis Chiang, Markus Ryll, Dorsa Sadigh, Carolina Parada, Alex Bewley | 2023-09-29T13:02:56Z | http://arxiv.org/abs/2309.17209v1 | # Robots That Can See: Leveraging Human Pose for Trajectory Prediction
###### Abstract
Anticipating the motion of all humans in dynamic environments such as homes and offices is critical to enable safe and effective robot navigation. Such spaces remain challenging as humans do not follow strict rules of motion and there are often multiple occluded entry points such as corners and doors that create opportunities for sudden encounters. In this work, we present a Transformer based architecture to predict human future trajectories in human-centric environments from input features including human positions, head orientations, and 3D skeletal keypoints from onboard in-the-wild sensory information. The resulting model captures the inherent uncertainty for future human trajectory prediction and achieves state-of-the-art performance on common prediction benchmarks and a human tracking dataset captured from a mobile robot adapted for the prediction task. Furthermore, we identify new agents with limited historical data as a major contributor to error and demonstrate the complementary nature of 3D skeletal poses in reducing prediction error in such challenging scenarios.
**Project page: [https://human-scene-transformer.github.io/](https://human-scene-transformer.github.io/)**
Autonomous Vehicle Navigation; Deep Learning Methods; Human-Aware Motion Planning
## I Introduction
The presence of robots sharing the environment with humans has led to a need for effective methods to understand the human's intention in order to avoid collisions and ensure smooth interactions between humans and robots. A series of steps is required for a robot to successfully navigate around humans in dynamic scenes, namely perception, prediction, and planning. Perception is responsible for detecting the presence of humans and extracting other features of the environment around the robot. Prediction aims to model how humans will move in the future, and planning selects future actions towards a goal while avoiding collisions. In this work we focus on the prediction step, where the inputs include perceived visual features and the outputs are predicted trajectory distributions for motion planning.
While the trajectory prediction topic has been extensively studied in the context of autonomous driving [1, 2, 3, 4], predicting human trajectories in other environments where service robots could have a profound impact, such as offices, homes, hospitals, and elderly care facilities, has received less attention. In contrast to street scenes, the main dynamic agents in service robotics environments are humans carrying out a wide range of tasks that require diverse motions in unstructured environments, whereas driving imposes significant structure such as staying within lanes or following traffic rules. Additionally, the spatial environment is generally smaller with a higher degree of perceptual obstructions (e.g., blind corners or internal walls), resulting in closer proximity upon first observation due to occluded entry points. Our goal is to enable more natural, safe, smooth, and predictable navigation by anticipating where humans will be moving in the near future using the robot's onboard sensors. Several existing methods try to generalize vehicle trajectory prediction ideas to human trajectory prediction by representing humans as 2D bounding boxes [5, 6, 7, 8, 9]. While bounding boxes can be sufficient for predicting vehicle trajectories in outdoor environments [1, 3, 4, 10], reducing human agents to bounding boxes neglects the plethora of perceptual information present in a human-centric scene (a scene with one or multiple humans sharing limited space with the robot, such as a robot navigating the hallway of a busy office as in Fig. 1). In such settings, humans tend to take advantage of information beyond each other's position: there is substantial information about people's intent when _they turn their head_ or _look at where to walk next_. Humans predict and anticipate each other's intentions through vision-based features: pose, gaze, and gestures. These visual cues are also apparent to a human viewing a static image absent of temporal cues. Similarly, we will demonstrate that navigating in close proximity to humans, as in Fig. 1, allows a robot to build a more detailed model of the human beyond simple bounding boxes, leading to more accurate prediction of human trajectories. Specifically, we posit the research question: _"Can information from human visual features lead to improved prediction accuracy?"_
Fig. 1: A service robot navigating a busy office space. To do so it anticipates human motion using human-position and visual features such as head orientation or 3D skeletal keypoints.
We present the Human Scene Transformer (HST), which leverages different feature streams: the historic positions of each human, and vision-based features such as skeletal keypoints (the joints of the human skeleton, see Figure 1) or head orientation when available. We specifically focus on demonstrating the usefulness of noisy in-the-wild human skeletal information from a 3D human pose estimator.
As such our contribution is threefold:
* To the best of our knowledge, we are the first to adapt the trajectory prediction task to the domain of human-centric navigation and demonstrate that 3D vision-based features improve prediction performance in a service robot context notwithstanding imperfect in-the-wild data (without a dedicated motion capture system). Specifically, in the regime of limited historical data, which is particularly under-explored but a common scenario in indoor robot navigation, our key idea is to use 3D vision features as complementary predictive cues.
* We present a prediction architecture which flexibly processes and includes detailed vision-based human features such as skeletal keypoints and head orientation. To target crowded human-centric environments, we define HST within a system of components from the fields of Computer Vision, Machine Learning, and Autonomous Driving to make use of real-world sensor data instead of relying on ground truth labels. We demonstrate HST's capability to consistently model interactions, which is critical in human-centric environments.
* We highlight a gap in prior work by showing the limitations of existing datasets for human trajectory prediction in indoor navigation. We propose an adaptation of existing datasets that can enable this new way of future trajectory prediction. Using this adaptation, we demonstrate the feasibility of our approach in human-centric environments. Simultaneously, we display state-of-the-art performance on a common outdoor pedestrian prediction dataset.
## II Related Work
Predicting the future trajectory of humans is a challenging task where prior work has considered various motion models, scene context, and social interaction. We will revisit three research fields which influence our targeted domain of service robots and the use of vision-based human skeletal (human pose) features. We will first outline current research in _Trajectory Prediction_, where the center position of an agent (which can be a human but also a human driven vehicle) is forecasted over time. Subsequently, we will introduce work in the field of _Human Pose Prediction_, which uses a skeletal representation of the human and predicts this pose over time. Finally, we outline approaches in _Pose Estimation_, where the estimated human pose is directly used in trajectory prediction in application domains outside of human-centric navigation.
**Trajectory Prediction.** From the early work of Pellegrini _et al_. [5] on short-term future locations of humans for next-frame data association to the recent longer-term multi-second prediction methods that are often used in autonomous driving [1, 2, 3, 4, 11], trajectory prediction research has played an important role in improving downstream robotic tasks. Prior approaches consider scene context [4], motion dynamics [3, 4], and the interaction between agents [3]. Salzmann _et al_. [4] combine historic agent positions with scene and dynamics constraints to make informed predictions. In an autonomous driving context, extracting additional visual information about the human actor (driver, cyclist, pedestrian) is challenging due to occlusion (a driver is only partially visible in the car) or distance. Therefore, the information representing each agent is reduced to a position estimate per observation timestep [1, 2, 3, 4]. Unlike the large prediction range required for self-driving [12], we focus on service-robot environments where people are generally close enough to the robot to obtain a richer visual representation of the human. As such, our work can benefit from recent approaches that fuse LiDAR information, adapting the human pose estimation problem to a robotic sensor suite [13]. While prior works in trajectory prediction rely on Generative Adversarial Networks (GANs) [8, 14] or Conditional Variational Autoencoders (CVAEs) [4, 7, 10, 15, 16], this work follows the recent trend towards Transformer architectures, as they naturally lend themselves to set-to-set prediction problems such as multi-agent trajectory prediction and are invariant to a varying number of agents. Specifically, we leverage the fundamental idea of a Transformer-based prediction framework [1, 2, 3] inspired by Ngiam _et al_. [3]. Their Transformer architecture is used for vehicle trajectory prediction in autonomous driving applications and captures joint interactions between vehicles.
**Pose Prediction.** Another related area is human pose forecasting in 3D [17, 18, 19, 20, 21]. Corona _et al_. use scene context in the form of an influence graph to refine a future trajectory of 3D human poses for a single subject in a motion capture environment [17]. For multi-person pose prediction, [22] extends DeTR [23] to predict multi-person 2D poses from a single image. Towards social robot navigation, Narayanan _et al_. use a sequence of human poses, also known as gait, to classify a person's emotional state for setting appropriate proximal distance constraints [24]. However, these approaches commonly consider a single human's motion and rely on ground truth pose information from a motion capture system, while we target multi-human, in-the-wild scenarios that are not limited to spaces with a motion capture system.
**Pose Estimation in Trajectory Prediction.** There have been prior efforts to combine pose estimation with trajectory prediction, i.e., informing forecasted trajectories by incorporating historic pose information. However, these works are either limited to prediction in 2D image space [11, 25, 26] or operate on motion capture datasets which do not exhibit diverse positional movement of the human [17, 27]. Yagi _et al_. [25] showed that augmenting a convolutional auto-encoder style model with scale and pose encoders reduces prediction error compared to position only; however, their approach is applied to first-person video using 2D pose detection and limits the prediction to the 2D image space. Similarly, Chen _et al_. [26] use a convolutional and recurrent architecture to segment an image into heterogeneous traffic objects and body parts, before using a Transformer decoder to attend to feature maps and extract objects, leading to improvements in 2D image space prediction. When predicting multiple poses, Adeli _et al_. [28] use a form of graph attention to capture dependencies between interacting agents, but this work is, again, limited to 2D first-person videos. Other works have explored attention mechanisms between multiple human features in the image view [11]. However, for robotic navigation it is desired
to obtain predictions for agents across multiple sensors and ideally in a 3D or bird's-eye metric space. In this work, we follow these requirements by solely relying on onboard sensor information of a robotic platform and predict in the metric global frame rather than in image space.
## III Human Scene Transformer
Our proposed method, the Human Scene Transformer (HST), follows the concept of masked sequence-to-sequence prediction using an architecture with Transformer blocks (see Figure 3, top right). This approach has shown promising vehicle prediction results in the autonomous driving domain [3]. HST introduces multiple important ideas extending the general Transformer architecture which make it suitable for trajectory prediction in human-centric environments. These include the utilization of vision-based human features (Section III-A), a feature attention mechanism to merge multiple potentially incomplete features (Section III-B, Input Embedding), an improved attention mechanism facilitating a more complete information flow (Section III-B, Full Self-Attention), and a self-alignment layer which elegantly solves the problem of discriminating between multiple masked agent timesteps while keeping permutation equivariance (Section III-B, Agent Self-Alignment). Notably, since the architecture is attention-based end-to-end, the model is agnostic to the number of humans per frame. This means the model can dynamically handle a varying number of humans across timesteps during inference. The maximum number of jointly (single forward pass) predicted humans is only limited by available memory.
### _Model Inputs: Incorporating Vision-based Features_
The robot's observations for the last \(H+1\) timesteps are processed as agent features and scene context (Figure 3, blue box). The scene context can be an occupancy grid or a raw point cloud at the current timestep, containing information common to all nearby agents (e.g., static obstacles). Agent features include the centroid position as well as the vision-based features, i.e., skeletal keypoints and head orientation, for each agent.
To extract these vision-based features from the raw data, image patches for all agents are first obtained by projecting their detected 3D bounding boxes into the 360 degree image using the extrinsic and intrinsic camera calibrations (see Figure 2-a). To extract skeletal keypoints from these patches, one could choose from a plethora of off-the-shelf skeletal keypoint extractors for images [29, 30, 31, 32, 33]. However, these extractors commonly output keypoints in a 2D image coordinate frame. To produce 3D keypoints, we follow the work of Grishchenko _et al_. [34] to estimate 3D keypoints from images using a pre-trained model: as existing datasets commonly only include 2D keypoint annotations, the 3D labels required for supervised pre-training are generated by fitting a parametric human shape model to the available 2D keypoints, solving the following optimization problem:
\[\operatorname*{argmin}_{\mathbf{k}}\left(\|r(\mathbf{k})-\hat{\mathbf{k}}_{2} \|_{2}+\lambda p(\mathbf{k})\right), \tag{1}\]
where \(\mathbf{k}\) are the 3D skeletal keypoints, \(\hat{\mathbf{k}}_{2}\) is the 2D keypoint label, and \(r:\mathbb{R}^{33\times 3}\rightarrow\mathbb{R}^{33\times 2}\) is the re-projection function projecting 3D points into the 2D image space using the camera calibrations. Many representations (from \(<20\)[35] to \(>500\)[36] keypoints) are present in the literature; we settled for a 33-keypoint skeleton representation [34, 37] as it is a minimal representation capturing _both_ head information and limb articulation. The learned prior distribution over human pose configurations \(p(\mathbf{k})\) penalizes infeasible poses, which can arise in optimization because the 3D-to-2D projection problem is underdetermined: multiple 3D poses can result in equivalent 2D projections. An infeasible pose is a configuration of human joints which is physically impossible for a human to achieve (e.g., the head rotated by a full 180\({}^{\circ}\)). The prior distribution is learned by fitting a variational autoencoder to a dataset of feasible 3D human poses. The 3D keypoints in the camera space are transformed to the global coordinate frame using the extrinsic camera calibration. Given the skeletal keypoints \(\mathbf{k}\) from optimizing Equation (1), we can easily extract the head orientation as the vector from the point in-between the ears to the point in-between the eyes. Note that vision-based agent features may not be available for all agents at all timesteps. This can be due to the agent being too far away to reliably extract keypoints (e.g., the image patch being too small). We will show in Section III-B (Input Embedding) how we can leverage inherent properties of our HST Transformer architecture to deal with these situations.
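To make the head-orientation extraction concrete, the sketch below (our own illustration) computes the yaw angle from the 33 estimated keypoints. The landmark indices are an assumption based on the MediaPipe-style 33-landmark convention of [34, 37], and projecting onto the ground plane is our simplification:

```python
import numpy as np

# Assumed MediaPipe-style landmark indices (left/right eye and ear).
LEFT_EYE, RIGHT_EYE, LEFT_EAR, RIGHT_EAR = 2, 5, 7, 8

def head_orientation(keypoints):
    """Scalar head yaw from 3D keypoints of shape (33, 3) in the global frame.

    Uses the vector from the midpoint between the ears to the midpoint
    between the eyes, projected onto the x-y ground plane (z assumed up).
    """
    eyes = 0.5 * (keypoints[LEFT_EYE] + keypoints[RIGHT_EYE])
    ears = 0.5 * (keypoints[LEFT_EAR] + keypoints[RIGHT_EAR])
    d = eyes - ears
    return np.arctan2(d[1], d[0])   # heading angle, matching the d=1 input feature
```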
### _Model Architecture_
Figure 3 outlines the HST architecture. We will explain the individual components in this section by first introducing the Transformer layer as core concept and subsequently following the data flow depicted in Figure 3.
**Transformer Layer.** The primary building block of the model's architecture is the Transformer layer (shown in Figure 3, top right), which itself is composed of a Multi-Head Attention layer [38] and multiple dense and normalization layers. The Transformer layer receives three tensors as input: Query (Q), Key (K), and Value (V). However, a single tensor may be used for more than one of these inputs. Consequently, we define a self-attention (SA) operation as a Transformer layer where the inputs Q, K, and V are the same tensor: the tensor attends to itself and conveys its information along one or more dimensions. Similarly, we define cross-attention (XA) as a Transformer layer where the Q input is distinct from the K/V inputs. Intuitively, the query attends to additional information from a different tensor as a means of merging multiple streams of information. For a comprehensive explanation of the attention mechanism and its inputs, we refer the reader to Vaswani _et al_. [38].
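One common realization of such a layer is sketched below (our own illustrative PyTorch code, not the authors' implementation; the exact arrangement of dense and normalization layers in HST may differ). Passing only Q yields self-attention (SA); passing a distinct K (and optionally V) yields cross-attention (XA):

```python
import torch
import torch.nn as nn

class TransformerLayer(nn.Module):
    """Sketch of the block in Fig. 3 (top right): multi-head attention and a
    feed-forward network, each with a residual connection and layer norm."""
    def __init__(self, h, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(h, heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(h, 4 * h), nn.ReLU(), nn.Linear(4 * h, h))
        self.norm1, self.norm2 = nn.LayerNorm(h), nn.LayerNorm(h)

    def forward(self, q, k=None, v=None):
        k = q if k is None else k       # self-attention (SA) when only Q is given,
        v = k if v is None else v       # cross-attention (XA) otherwise
        x = self.norm1(q + self.attn(q, k, v)[0])
        return self.norm2(x + self.ff(x))
```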
Fig. 2: **Process of estimating three-dimensional vision-based features of the human. (Left) Image of the human cropped based on the bounding box detection. Lighting conditions can be sub-optimal in a human-centric environment. (Middle) Inferred three-dimensional pose from the trained pose estimator. (Right) Head orientation post-processed from the pose keypoints.**
**Input Embedding.** The input agent features (blue) are tensors of shape \([N,T,d]\), where \(d=2\) for the x-y centroid position, \(d=99\) for the x-y-z positions of 33 skeletal keypoints, and \(d=1\) for the head orientation. These tensors contain information of all \(N\) nearby agents for all \(H+1\) historic and current input timesteps. If an agent's feature is not observed at specific timesteps, we mask those timesteps with \(0\). As depicted in Figure 4, we also mask all future \(F\) timesteps for all agents by setting their feature value to \(0\), thus making only historical and current information available to the model. This masking approach is a well-known technique in missing-data problems such as future prediction using Transformer-based architectures [1, 3, 38]. Masking exploits the inductive bias inherent in the prediction problem, which allows the model to fill in the missing information using the available context in the vicinity of the gaps. As such, our approach tolerates missing keypoints in frames due to bad lighting or other influences, as the Transformer effectively "fills in" the missing information.
The agent features are encoded independently and are combined by a learned attention query. This masked attention mechanism offers scalability to systems that have a large number of features with limited availability. The combined agent features result in a latent tensor for each agent and timestep of shape \([N,T,h]\) where \(T=H+1+F\) and \(h\) is the size of the token dimension. For simplicity we will refer to such tensors of latent representations throughout the network as _agent-timestep-tokens_.
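A sketch of this masked feature combination (our own illustration, reusing the `TransformerLayer` above; tensor shapes follow the text, everything else is an assumption):

```python
import torch
import torch.nn as nn

class FeatureCombiner(nn.Module):
    """Encode each agent feature separately, then merge the encodings with a
    learned attention query (the 'learned query' in Fig. 3)."""
    def __init__(self, feature_dims, h):   # e.g. feature_dims = (2, 99, 1)
        super().__init__()
        self.encoders = nn.ModuleList([nn.Linear(d, h) for d in feature_dims])
        self.query = nn.Parameter(torch.randn(1, 1, h))
        self.xa = TransformerLayer(h)      # from the sketch above

    def forward(self, features):
        # features: list of [N, T, d_f] tensors; unobserved entries are zero-masked.
        tokens = torch.stack([enc(f) for enc, f in zip(self.encoders, features)], dim=2)
        N, T, F, h = tokens.shape
        tokens = tokens.reshape(N * T, F, h)               # attend over the F features
        out = self.xa(self.query.expand(N * T, 1, h), tokens)
        return out.reshape(N, T, h)                        # agent-timestep-tokens
```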
**Full Self-Attention Via Agent Self-Alignment.** In contrast to previous methods [3], which alternately attend to the agent and time dimensions separately via factorized attention, we propose a _full self-attention_ (FSA) operation where each agent-timestep-token attends to all agent-timestep-tokens along _both_ the agent and time dimensions. This provides a more direct path of information flow. For example, in social interactions, a change in action, such as an adjustment in walking direction, does not have an immediate influence on other humans in proximity but rather influences their future. Following this illustration, an agent at a given timestep in our Transformer architecture should be able to attend not just to other agents at the current timestep (factorized attention) but to _all agents_ at _all timesteps_ (full self-attention).
Naively using full self-attention can lead to sub-optimal outcomes. Since all future agent-timestep-tokens are masked out, two agents with the same masked future agent-timestep-tokens will also have the same input (Query) representation to the Transformer layer (Figure 3, top right). This prevents the model from associating future timesteps of an agent with the agent's history, since all future agents' timesteps "look" the same (masked). The problem could be addressed by enforcing an innate order on perceived agents, where all agents are enumerated. This, however, would eliminate the permutation-invariant set-to-set prediction capabilities [39], one of the core strengths of Transformers: an agent's future would be predicted differently based on its enumeration embedding even with the same historic features.
Instead, we solve this problem and achieve full self-attention via a simple approach that we refer to as _agent self-alignment_. The agent-timestep-tokens resulting from the feature combination are cross-attended with a learned query tensor _only_ in the time dimension. This query, a weight matrix jointly optimized with all other network weights during training, learns to propagate available historic information for each agent to future timesteps, enabling the model to align future masked timesteps of an agent with historic ones during full self-attention without an explicit enumeration embedding. After this process, which is visualized in the dark green box in Figure 3, the previously masked future agent-timestep-tokens hold information from the respective agent's history, differentiating them from one another. As such, the agent self-alignment mechanism preserves agents' permutation equivariance and enables full self-attention without restricting information flow along matching timesteps [3] or utilizing special attention matrices which explicitly separate agents [1]. The output agent-timestep-tokens of the agent self-alignment then pass through \(K\) transformer layers with full self-attention across agent and time dimensions before cross-attending to the encoded scene features.
Fig. 4: Structure and masking of embedded tensors. Future agent-timestep-tokens are masked and subsequently filled by the Transformer structure, iteratively with updated latent representations and finally with position distribution information on the output level.
Fig. 3: **Overview of the HST architecture.** From the robot's sensors we extract the scene context, the historic position tracks of each agent, and vision-based skeletal keypoints/head orientation when feasible. All features are encoded individually before the agent features are combined via cross-attention (XA) using a learned query tensor. The resulting agent-timestep-tokens are passed to our Agent Self-alignment layer, which enables the use of subsequent full self-attention (FSA) layers. Embedded scene context is attended to via cross-attention (XA). After multimodality induction and further FSA layers, the model outputs the parameters of a Normal distribution for each agent at each prediction timestep. We can represent the full output structure as a Gaussian Mixture Model (formula in bottom right) over all possible futures, where the mixture coefficients \(w\) come from the Multimodality Induction. Both cross-attention (XA) and full self-attention layers use the Transformer layer (top right) with different input configurations for Query (Q), Key (K), and Value (V).
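The agent self-alignment and the subsequent full self-attention can be sketched as follows (our own illustrative code, continuing the sketches above; the real implementation may differ):

```python
import torch
import torch.nn as nn

class AgentSelfAlignment(nn.Module):
    """A learned per-timestep query cross-attends to each agent's own time axis,
    so masked future tokens inherit that agent's history before full self-attention."""
    def __init__(self, h, T):
        super().__init__()
        self.query = nn.Parameter(torch.randn(1, T, h))
        self.xa = TransformerLayer(h)      # from the sketch above

    def forward(self, tokens):             # tokens: [N, T, h]
        N, T, h = tokens.shape
        return self.xa(self.query.expand(N, T, h), tokens)

def full_self_attention(layer, tokens):
    """Flatten agents and time into one axis so every agent-timestep-token can
    attend to all others (FSA), instead of factorized agent/time attention."""
    N, T, h = tokens.shape
    return layer(tokens.reshape(1, N * T, h)).reshape(N, T, h)
```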
**Multimodality Induction.** Our architecture can predict multiple consistent futures (modes) for a scene. To do so, the Multimodality Induction module repeats the agent-timestep-tokens by the number of future modes (\(M\)), resulting in a tensor of shape \([N,T,M,h]\). To discriminate between modes, this tensor is combined with a learned _mode-identifier_ tensor of shape \([1,1,M,h]\). Each future's logit probability \(w_{m},\,m\in\{1,\dots,M\}\), is also inferred here by having the _mode-identifier_ attend to the repeated input, resulting in a tensor of shape \([1,1,M,h]\) which is subsequently reduced by an MLP to output \(w_{m}\) with shape \([1,1,M,1]\).
**Prediction Head.** The agent-timestep-tokens updated with the learned mode-identifier go through \(L\) Transformer layers, again with full self-attention, before predicting per mode parameters \(\mu,\sigma\) using a dense layer as _prediction head_.
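A sketch of the multimodality induction and the prediction head (our own illustration; combining tokens with the mode identifier by addition and the axis-aligned Gaussian parameterization are our assumptions):

```python
import torch
import torch.nn as nn

class MultimodalityInduction(nn.Module):
    """Replicate agent-timestep-tokens per future mode, tag each copy with a
    learned mode identifier, and infer one logit w_m per mode."""
    def __init__(self, h, num_modes):
        super().__init__()
        self.mode_id = nn.Parameter(torch.randn(1, 1, num_modes, h))
        self.xa = TransformerLayer(h)      # from the sketch above
        self.to_logit = nn.Linear(h, 1)

    def forward(self, tokens):             # tokens: [N, T, h]
        N, T, h = tokens.shape
        M = self.mode_id.shape[2]
        modal = tokens[:, :, None, :] + self.mode_id        # [N, T, M, h]
        # Mode logits: the identifiers attend to all agent-timestep-tokens.
        q = self.mode_id.reshape(1, M, h)
        w = self.to_logit(self.xa(q, tokens.reshape(1, N * T, h)))  # [1, M, 1]
        return modal, w.reshape(M)

# Prediction head: e.g. nn.Linear(h, 4) mapping each token in [N, T, M, h] to
# (mu_x, mu_y, log sigma_x, log sigma_y) of an axis-aligned 2D Gaussian.
```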
### _Producing Multimodal Trajectory Distributions_
Combining \(\mu\) and \(\sigma\) with the mode likelihoods \(w_{m}\) coming from the Multimodality Induction, the Human Scene Transformer models the distribution of the \(i\)-th agent's centroid position at each timestep \(t\) with a 2D Gaussian Mixture Model (GMM):
\[P_{\theta}^{i}(\mathbf{x}_{t}|O(t),...,O(t-H))=\sum_{m=1}^{M}w_{m}\mathcal{N} (\mathbf{x};\sigma_{m,i,t},\mu_{m,i,t}), \tag{2}\]
where \(m\) indexes the future modes. Here, we represent the position of an agent at a specific timestep by a GMM with mixture weights \(w\) equal to the probability distribution over future modes.
We adopt a joint future loss function, that is, the cumulative negative log-likelihood of the Gaussian mode (\(m^{*}\)) with the smallest mean negative log-likelihood:
\[\mathcal{L}_{\text{minNLL}}=\sum_{i,t}-\text{log}(\mathcal{N}(\mathbf{x}_{t, i}^{*};\sigma_{m^{*},i,t},\mu_{m^{*},i,t})), \tag{3}\]
where
\[m^{*}=\text{argmin}_{m}(\sum_{i,t}-\text{log}(\mathcal{N}(\mathbf{x}_{t,i}^{* };\sigma_{m,i,t},\mu_{m,i,t}))), \tag{4}\]
and \(\mathbf{x}_{i,t}^{*}\) is the ground truth agent position. The resulting prediction represents \(M\) possible future realizations of all agents at once in a consistent manner, where the mode mixture weights \(w\) are shared by all agents in the scene. The most likely future mode during inference is given as
\[m^{+}=\text{argmax}_{m}(w_{m}). \tag{5}\]
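In code, Eqs. (3)-(5) amount to selecting the mode with the lowest summed negative log-likelihood for the loss and the mode with the highest \(w_{m}\) at inference. A minimal sketch (our own, assuming axis-aligned 2D Gaussians):

```python
import torch

def min_nll_loss(mu, sigma, w_logits, gt):
    """Eqs. (3)-(5). mu, sigma: [M, N, T, 2]; w_logits: [M]; gt: [N, T, 2]."""
    nll = -torch.distributions.Normal(mu, sigma).log_prob(gt).sum(-1)  # [M, N, T]
    per_mode = nll.sum(dim=(1, 2))       # Eq. (4): sum over agents i and timesteps t
    m_star = per_mode.argmin()           # best mode, used for the training loss
    m_plus = w_logits.argmax()           # Eq. (5): most likely mode at inference
    return per_mode[m_star], m_star, m_plus
```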
## IV Experiments
We structure our experiments to support our contributions: First, we qualitatively and quantitatively demonstrate how our architecture provides accurate predictions for the human-centric service robot domain. We especially demonstrate how HST can leverage and model interactions between humans consistently over multiple possible futures. Further, we show that our approach is cross-domain compatible with unconstrained outdoor pedestrian prediction, where data from a surveillance camera is used to predict pedestrians in different outdoor squares. Finally, we demonstrate how HST can leverage vision-based features in human-centric environments to improve prediction accuracy, specifically in short history situations where prediction errors are high.
**Datasets.** To effectively investigate the performance of HST in human-centric environments and the possible benefits that a detailed 3D skeletal representation of the human body can have, a dataset should (I) include a _diverse_ range of indoor and outdoor environments, (II) capture humans' movement from a _robot_ viewpoint, (III) in a natural _unscripted_ environment, (IV) provide labels for the position and skeletal keypoints (\(\mathbf{k}\)) of all agents at all timesteps, and (V) be sufficiently _large_ to prevent over-fitting. The evaluation of different datasets for human trajectory prediction in Table I takes into account the satisfaction of these requirements. Many existing datasets are collected from a single top-down camera in a limited number of environments, such as the ETH [5] and UCY [6] pedestrian datasets. Others are specific to the autonomous driving domain [40, 41, 42, 43]. While none of these datasets provide labels for skeleton keypoints, other datasets such as H3.6.M [44], AMASS [45], and 3DPW [46], which are collected using a motion capture system or wearable IMU devices [46], do offer such labels. However, these datasets are limited to artificial environments and often feature stationary or scripted motions. Finally, while all these datasets provide labels for position, these labels are often hand-labeled ground truth that does not represent the noisy input data a robot would experience in the real world during inference.
One dataset which is recorded in diverse human-centric environments using sensors (2 x 16 channel LiDAR, 5 x stereo RGB cameras) on a mobile robotic platform is the JackRabbot Dataset and Benchmark (JRDB) [47]. However, JRDB was created as a detection and tracking dataset rather than a prediction dataset. To make the data suitable for a prediction task, we first extract the robot's ego-motion from the raw sensor data to account for it. Tracks are generated for both the train and test splits using the JRMOT [48] detector and tracker. The ground truth labeled bounding boxes on the train set were disregarded, as they were filtered during the labeling process to the point where their smoothness eases the prediction task. We were able to increase the number of human tracks for training by associating the JRMOT detections to ground truth track labels via Hungarian matching, while on the test split we solely use JRMOT predictions.
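For a single frame, this Hungarian association of detections to labeled tracks could look like the following sketch (our own illustration of the idea, not the exact pipeline; scipy assumed, and the gating threshold is an arbitrary choice):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_detections(dets, gt, max_dist=1.0):
    """Match detected centroids (dets: [K, 2]) to labeled tracks (gt: [J, 2])
    by minimum total x-y distance; pairs farther than max_dist stay unmatched."""
    cost = np.linalg.norm(dets[:, None, :] - gt[None, :, :], axis=-1)  # [K, J]
    rows, cols = linear_sum_assignment(cost)
    keep = cost[rows, cols] < max_dist
    return list(zip(rows[keep], cols[keep]))   # (detection, ground-truth) index pairs
```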
Due to factors such as distance, lighting, and occlusion, the pre-trained 3D pose estimator model (Section III) is not guaranteed to produce keypoints for all agents at all timesteps. We observed human keypoint information in \(\sim 50\%\) of all timesteps for all agents within a distance of up to 7 meters from
\begin{table}
\begin{tabular}{l|c c c c c} \hline \hline & I Diverse & II Robot & III Unscripted & IV \(\mathbf{k}\) & V Large \\ \hline ETH [5] \& UCY [6] & β & β & β & (β) & β \\ AD [40, 41, 42, 43] & β & β & β & (β) & β \\ Motion Capture [44, 45] & β & β & β & (β) & (β) \\
3DPW [46] & β & β & β & β & β \\ \hline Adapted JRDB & β & β & β & (β) & β \\ \hline \hline \end{tabular}
\end{table} TABLE I: A summary of different prediction datasets indicating which of our desiderata are addressed by each dataset. Parentheses indicate that a desideratum is only partially fulfilled.
the robot. The available data is split using \(50\%\) of each scene as training data, \(20\%\) as validation data and \(30\%\) as test data. We sub-sample the dataset from \(15\mathrm{Hz}\) to \(3\mathrm{Hz}\) keeping all of the intermediate samples and therefore increasing the number of datapoints by a factor of five.
In addition, we evaluate our model on the ETH [5] and UCY [6] datasets. These are standard benchmarks for pedestrian trajectory prediction and enable a fair comparison of our architecture against other methods.
**Metrics.** Consistent with prior work [1, 4, 7, 8], prediction quality is evaluated using the following metrics (a computation sketch follows the list):
1. Minimum Average Displacement Error (_minADE_): Minimum Mean \(l_{2}\) distance between the ground truth and all \(M\) future mode trajectories.
2. Minimum Final Displacement Error (_minFDE_): Minimum \(l_{2}\) distance between the final positions of the ground truth and all \(M\) future mode trajectories.
3. Maximum Likelihood Average Displacement Error (_MLADE_): Mean \(l_{2}\) distance between the ground truth and the most likely mode trajectory.
4. Negative Log Likelihood (_NLL_) of the ground truth under the full parametric output distribution.
Lower is better for all metrics.
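A computation sketch for the displacement metrics above (our own, shown for a single agent with numpy; _NLL_ is instead evaluated directly under the GMM of Eq. (2)):

```python
import numpy as np

def displacement_metrics(pred, w_logits, gt):
    """minADE, minFDE, and MLADE. pred: [M, T, 2] mode trajectories for one
    agent; w_logits: [M] mode logits; gt: [T, 2] ground truth future."""
    err = np.linalg.norm(pred - gt[None], axis=-1)   # [M, T] l2 distances
    min_ade = err.mean(-1).min()                     # best mode, averaged over time
    min_fde = err[:, -1].min()                       # best mode, final timestep
    ml_ade = err[np.argmax(w_logits)].mean()         # most likely mode
    return min_ade, min_fde, ml_ade
```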
**Baselines.** We re-implement the autonomous driving Scene Transformer architecture [3], matching its number of Transformer layers to our architecture. We further compare to the trajectory prediction architectures Trajectron++ [4], AgentFormer [1], SoPhie [8], and PECNet [7].
**Evaluation Protocol.** For the JRDB prediction dataset we report _minADE_, _MLADE_, and _NLL_ for 128k scene snippets from the test split including partially occluded agents. We use up to \(2\,\mathrm{s}\) of history as input and predict the next \(4\,\mathrm{s}\) future of the scene. Note that if the number of agents exceeds \(N_{\text{max}}=16\), we randomly select one agent and only consider and predict it and the \(N_{\text{max}}-1\) closest agents. On the ETH and UCY dataset we follow the standard procedure to train in a leave-one-out fashion and evaluate _minADE_ and _minFDE_ on \(20\) trajectories over a prediction horizon of 12 timesteps (\(4.8\mathrm{s}\)) using 8 historic timesteps as input. We match the evaluation protocol of AgentFormer [1] by setting the number of modes to \(M=20\).
### _Trajectory Prediction in Human-centric Environments_
In Table II and Figure 5 we show quantitative and qualitative results of HST's predictions in the human-centric environment. We show that in crowded human-centric environments modeling the interaction between humans yields large benefits for the prediction accuracy of each individual. To show this, we compare against a version of our model which is trained to predict a single human at a time, ignoring interactions with other agents. Subsequently, adding our full self-attention via the self-alignment mechanism additionally increases the model's ability to capture interactions across time, leading to improvements across all metrics. The capability to account for interactions between humans is qualitatively demonstrated in Figure 5, where we show multiple predicted futures for a scene of two interacting humans. The two humans approach each other head on. The possible interactions to avoid collisions are modeled _consistently_ within each future.
### _Vision-based Features_
In this section, we consider the adversarial setting where the robot encounters a human unexpectedly, i.e., the robot observes a new human with few historical observations. Prediction architectures solely relying on historic position information struggle in scenarios where no or only a limited amount of history of the human position is available to the model. Specifically, at the first instance of human detection, the experimental error is 200% higher compared to full historic information over \(2\,\mathrm{s}\). Given the specifics of our targeted human-centric environment, where we are mostly interested in humans close to the robot, we are likely able to extract vision-based features for the human in addition to the position. Specifically, we revisit our research question: _"Can information from human visual features lead to improved prediction accuracy?"_
Before answering this question quantitatively, we show a clarifying visual example in Figure 6, where a human has just entered the scene through a door and is first detected. When solely relying on historic position information, the most likely prediction by the model is a stationary agent. However, when we employ the pre-trained skeleton keypoints estimator to provide pose keypoints as additional input to our model, the
\begin{table}
\begin{tabular}{l|c c|c c c} \hline \hline
Model Configuration & Full Self-Attention & Interaction & minADE & MLADE & NLL \\ \hline
Scene Transformer [3] & & & \(0.53\) & \(0.86\) & \(0.25\) \\ \hline
HST & ✗ & ✗ & \(0.57\) & \(0.93\) & \(0.89\) \\
HST & ✗ & ✓ & \(0.50\) & \(0.84\) & \(-0.02\) \\
HST & ✓ & ✓ & \(\mathbf{0.48}\) & \(\mathbf{0.80}\) & \(\mathbf{-0.13}\) \\ \hline \hline
\end{tabular}
\end{table} TABLE II: **Comparison against Scene Transformer on the JRDB prediction dataset.** HST outperforms the original Scene Transformer on all metrics. The ablation shows that interaction attention to other agents improves performance, by comparison to a model predicting a single human at a time. We also show the positive impact of Full Self-Attention.
Fig. 5: **Consistently modeled interactions in different predicted futures for a single scene in the x-y plane.** Two humans approach each other head on. (a) History (solid) and ground truth future (dashed) of both humans. (b) Two of the \(M\) predicted futures (dots, increasing transparency with time) of the scene by HST. Within each mode the influence and reaction of both agents is consistent and reasonable. The humans' futures are predicted without collisions, giving each other space to navigate within the specific predicted future mode of the scene. (c) Two predicted futures of a crowded scene.
model correctly recognizes whether the human is in a walking motion and how the human is oriented, accurately predicting the most likely future trajectory.
Quantitatively, when keypoints are available on the first detection, we observe a substantial prediction improvement of up to 11% (Figure 7). When additional timesteps with position information are available, the improvement from using keypoints versus not using them averages between 5% and 10%. The relative improvement generally increases with the number of timesteps with keypoints in the history and decreases with the amount of historic position information.
Finally, we want to provide an outlook on how much vision-based features can improve prediction performance if they are available for all agents at all timesteps. We therefore enforce feature parity between position and skeletal keypoint features by disregarding position information without associated keypoints (Table III). We find that a relative improvement of around 10% is achievable using our in-the-wild vision-based features. The baseline is naturally worse than in Table II, as we partially disregard historic position information.
### _Pedestrian Dataset_
In addition to showing HST's capabilities in a robotics-specific environment, we further validate our architecture against a range of state-of-the-art prediction methods. For this we use ETH/UCY, which is widely used in the trajectory prediction community while also being, in our view, the dataset closest to the human-centric environments that we would like to explore: on the ETH and UCY datasets, we either improve upon current state-of-the-art methods or are on par with them on 4 out of the 5 scenes.
## V Discussion
Simply representing a human as its spatial position, as commonly done in autonomous driving environments, does deliver a baseline prediction performance in human-centric service robot environments. However, it suffers in challenging settings where the history of a human is limited. Specifically in these situations we demonstrate how the Human Scene Transformer can leverage vision-based features to improve prediction accuracy (Figure 6). Beyond scenarios in which the robot and humans encounter each other at blind corners, general improvement trends using in-the-wild skeletal pose detections were also observed with longer observation histories, as shown in Figure 7. Another intuitive assumption, which we can support quantitatively in Table III, is that the full skeletal keypoints (the full human pose) hold additional information over just the head orientation (where the human is looking). This is expected, as the head orientation can give away a possible direction for the trajectory, while the full keypoints can be more informative about the speed at which the human is moving, e.g., running or slowly walking.
Figure 5 and Table II demonstrate HST's capability to model consistent interactions between agents and to use these influences to improve the overall prediction substantially. This is especially useful for the crowded spaces a service robot navigates and opens opportunities for modeling not just human-to-human interactions but also robot-human interactions.
**Limitations.** Exploring this new domain of human-centric environments, we recognize that numerous limitations exist: We showed that feature parity between position and keypoint features leads to relative improvements and expect the performance to be similarly correlated with the quality of the detected 3D keypoints. We therefore think that a 3D skeletal keypoint estimator specifically designed for robotic applications, increasing both the number of successful detections and the quality of each detected skeleton, would improve performance for the prediction task as well as for other robot tasks
\begin{table}
\begin{tabular}{l|c c|c c|c c|c c} \hline \hline & \multicolumn{2}{c}{_minADE_} & \multicolumn{2}{c}{_NLL_} & \multicolumn{2}{c}{_minADE_ @ 2s} & \multicolumn{2}{c}{_minADE_ @ 4s} \\ \hline Baseline & \(0.56\) & \(0\%\) & \(1.04\) & \(0\%\) & \(0.46\) & \(0\%\) & \(0.87\) & \(0\%\) \\ Head Orient. & \(0.53\) & \(-5\%\) & \(0.89\) & \(-14\%\) & \(0.42\) & \(-9\%\) & \(0.79\) & \(-9\%\) \\ Keypoints & \(\mathbf{0.51}\) & \(-9\%\) & \(\mathbf{0.85}\) & \(-18\%\) & \(\mathbf{0.40}\) & \(-13\%\) & \(\mathbf{0.77}\) & \(-11\%\) \\ \hline \hline \end{tabular}
\end{table} TABLE III: Vision-based features relatively improve prediction across all metrics on a prediction horizon of \(4\,\mathrm{s}\). To create feature parity between position and keypoints features we ignore all position history without detected skeletal keypoints during training and evaluation.
Fig. 6: A visualization of the predicted trajectory distributions for a new human agent entering the scene through the door on the right, as viewed in (a). For _both_ (b) and (c) the HST model does not have any historic information and only has access to the current frame. The plots of future trajectory distributions in (b) and (c) show the effect of using and not using skeletal keypoints (respectively) as input in that single frame. Without pose keypoints the HST model predicts the agent to most likely be stationary while, with keypoints as input, it can reason that the human is moving and correctly anticipates the direction. The blue dot is the detected human at the initial frame, orange dots are our most likely mode predictions with the corresponding distribution shown with blue shading, and green dots are the ground truth human future (the trajectory actually executed by the human).
Fig. 7: Impact of vision-based features conditioned on the number of consecutive non-occluded input timesteps.
in close human contact (e.g. handover). Such an estimator could make use of the full sensor suite of a robotic platform, fusing camera and LiDAR information. We are happy to see first works [13] in this direction and hope that our findings encourage the research community to pursue this path within the domain of human-centric environments.
In Section III we presented a way to include scene context via a point cloud but were unable to see any predictive benefit from this information. We attribute this to three factors: the small size of the adapted JRDB dataset, the limited number of locations (29) in which the data was recorded, and the raw points lacking semantic labels (e.g., a door is indistinguishable from a wall). These factors highlight the potential for further work towards better representations of point clouds.
**Conclusion.** This work introduced the task of human trajectory prediction into the domain of human-centric service robots. We demonstrated that the proximity of robot and humans in such environments can be leveraged to improve prediction performance by explicitly incorporating vision-based human features. We showed HST achieves strong prediction improvements using in-the-wild 3D pose representations for the critical situation of agents being first detected in close proximity to the robot. A quantitative ablation using paired position and vision-based features highlighted the influence of different visual features, with skeletal keypoints providing the highest gains across all metrics. Our Transformer-based architecture is flexible in accommodating different feature inputs while also achieving state-of-the-art results on a common pedestrian prediction dataset without visual features outside the domain of human-centric service robot environments. We think that the unstructured and uncompressed nature of environment point clouds fits nicely with the permutation invariance property of Transformer architectures and are therefore excited to further explore this direction in the future. We hope that our findings will inspire further research in the human-centric domain and in developing improved methods for generating accurate 3D vision-based human representations for service robotics applications.
|